Creating a Spacewalk / Cobbler Kickstart Tree for CentOS

December 10th, 2013

There are a number of ways to create a new kickstart tree, from mounting the full ISO directly to pulling out just the bits you need; however, you may find this script a little easier.

The assumption is that your kickstart trees live in e.g. /var/distro-trees and that this script is saved as /var/distro-trees/make-tree.sh:

#!/bin/bash
set -e

release="6.5";
repoarch="x86_64";
tree="centos-${release}-${repoarch}";

mkdir -p ${tree};

(
  cd $tree;
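  # Mirror only the directories a kickstart tree needs. -nH drops the
  # hostname directory and --cut-dirs=4 strips the leading
  # centos/<release>/os/<arch>/ components from the saved paths.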
  for dir in images isolinux repodata;
  do
    wget -e robots=off -r -nH --cut-dirs=4 --no-parent --reject "index.html*" http://mirrors.kernel.org/centos/${release}/os/${repoarch}/${dir}/;
  done;
)

echo "Created KS Tree ${tree}";

exit 0;
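
Then, assuming the layout above, usage is simply:

cd /var/distro-trees
chmod +x make-tree.sh
./make-tree.sh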

Thanks go to Gh0sty in #spacewalk on FreeNode for this one.

Categories: CentOS, Linux, Spacewalk

Logstash v1.1 -> v1.2 :: JSON Event Layout Format Change

November 20th, 2013

This has been briefly covered by my previous post, but it is such an important and misunderstood topic that I need this post as an RTFM link.

Logstash uses JSON to manage data internally. This is the main reason it is useful to ship logs to it in JSON format: it then has less work to do, and there is less room for mistakes in converting content.

The layout of the logstash format has changed as of v1.2.0 (currently at v1.2.2) from V0 to V1, which makes things look broken in Kibana if you've not accounted for it. In the most basic sense, the change is that the "@" (at) prefix has been removed from just about every field except "@timestamp" (e.g. "@message" -> "message", "@tags" -> "tags"), a new "@version" field has been introduced, and the @fields group has been deprecated entirely, moving all of its sub-fields up one level.

So, if you had events like this:

{
    "@timestamp": "1970-01-01T00:00:00.000Z",
    "@message": "<log message>",
    "@source_host": "<host>",
    "@tags": ["fudge","donkey","macarena"],
    "@fields": {
        "timestamp": 0000000000000,
        "level": "<level>",
        "file": "<file>",
        "exception": {
            "exception_class": "<e_class>",
            "exception_message": "<e_msg>",
            "stacktrace": "<trace>"
        }
    }
}

You should now have events like this:

{
    "@timestamp": "1970-01-01T00:00:00.000Z",
    "@version": "1",
    "message": "<log message>",
    "source_host": "<host>",
    "tags": ["fudge","donkey","macarena"],
    "timestamp": 0000000000000,
    "level": "<level>",
    "file": "<file>",
    "exception": {
        "exception_class": "<e_class>",
        "exception_message": "<e_msg>",
        "stacktrace": "<trace>"
    }
}

In practice this doesn't affect anything logstash does internally with its own inputs, as these all apply the change themselves and you just see the new layout turning up in Elasticsearch/Kibana. However, if you are shipping JSON directly to your indexer, you will need to make sure that you are sending the new format (V1), not the old format (V0).
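
If your events pass through anything scriptable on the way, the rename itself is mechanical. As a rough sketch of the V0-to-V1 mapping as a jq one-liner (purely an illustration; none of the shippers below work this way, and v0-event.json is a hypothetical file holding one V0 event):

jq '{"@timestamp": .["@timestamp"], "@version": "1",
     message: .["@message"], source_host: .["@source_host"],
     tags: .["@tags"]} + .["@fields"]' v0-event.json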

For example:

  • If you are using log4j-jsonevent-layout, you need to upgrade to v1.5, which supports "JSONEventLayoutV0" and "JSONEventLayoutV1" as separate options.
  • If you are using Beaver, you are (currently) waiting on the next release to provide the new format, as the most recent release does not (I am not sure about the status of git master, but it might work).
  • If you have Apache configured to output a CustomLog with a custom JSON event layout, you will need to modify your LogFormat as below.

Apache CustomLog

Per Untergeek's blog posts (Old & New), this is how you would have done it before:

LogFormat "{ \
            \"@vips\":[\"vip.example.com\",\"customer.example.net\"], \
            \"@source\":\"file://host.example.com//usr/local/apache2/logs/access_log\", \
            \"@source_host\": \"host.example.com\", \
            \"@source_path\": \"/usr/local/apache2/logs/access_log\", \
            \"@tags\":[\"Application\",\"Customer\"], \
            \"@message\": \"%h %l %u %t \\\"%r\\\" %>s %b\", \
            \"@fields\": { \
                \"timestamp\": \"%{%Y-%m-%dT%H:%M:%S%z}t\", \
                \"clientip\": \"%a\", \
                \"duration\": %D, \
                \"status\": %>s, \
                \"request\": \"%U%q\", \
                \"urlpath\": \"%U\", \
                \"urlquery\": \"%q\", \
                \"method\": \"%m\", \
                \"bytes\": %B \
                }  \
           }" ls_apache_json

And this is how you would do it now:

LogFormat "{ \
            \"@timestamp\": \"%{%Y-%m-%dT%H:%M:%S%z}t\", \
            \"@version\": \"1\", \
            \"vips\":[\"vip.example.com\"], \
            \"tags\":[\"apache\"], \
            \"message\": \"%h %l %u %t \\\"%r\\\" %>s %b\", \
            \"clientip\": \"%a\", \
            \"duration\": %D, \
            \"status\": %>s, \
            \"request\": \"%U%q\", \
            \"urlpath\": \"%U\", \
            \"urlquery\": \"%q\", \
            \"bytes\": %B, \
            \"method\": \"%m\", \
            \"referer\": \"%{Referer}i\", \
            \"useragent\": \"%{User-agent}i\" \
           }" ls_apache_json

Backward Compatibility

If you are using a logstash indexer of version 1.2.0 or newer, but you need to support log formats or shippers still using the V0 JSON format, you can use the oldlogstashjson codec on a logstash input to convert the V0 data into V1 as it is indexed. This can be useful to support a progressive upgrade where many systems have to be changed. I do not recommend using it for a fresh installation, as you should use the correct format wherever possible.
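
A minimal sketch of such an input (the tcp input and port number are purely illustrative):

input {
  tcp {
    port => 5544
    codec => oldlogstashjson
  }
}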

Indexer Configuration

The indexer configuration no longer uses the “format” configuration item to configure for JSON input, but the “codec” item instead. Generally “codec” will be set to “json” (V1), “plain” (Plain Text) or “oldlogstashjson” (V0).

When logstash is configured to use the "json" codec and the parser is given invalid or otherwise unparsable JSON, it will automatically fall back to the plain codec for that event, with a default character set of UTF-8. The fallback charset can be specified if necessary by using the "charset" modifier in the codec config of an input, like this (the tcp input and port are used purely for illustration):

input {
  tcp {
    port => 5545
    codec => json { charset => "ASCII-8BIT" }
  }
}

This wonderful little feature allows you to send plain text and JSON to the same input port if you need to. It's not perfect, and efficiency will be sub-optimal for plain input, but it can pull you out of the fire in situations like Apache not permitting you to use a JSON format AND a plain format simultaneously for its ErrorLog, if you must have a plain text format stored on the server.

Categories: Apache, Logstash

Slackware64 14.1 :: Nvidia Proprietary Driver Fail

November 11th, 2013

Issue #2 with Slackware64-14.1: The latest Nvidia blob doesn't install properly, and neither do some older ones. The main issue with the latest 331.20 driver is that it provides libEGL for 32-bit but not 64-bit, and then gives you whinges such as these:

unable to determine the architecture of the files ‘/usr/X11R6/lib/libEGL.la’, which has an architecture specific conflict
unable to determine the architecture of the files ‘/usr/X11R6/lib64/libEGL.la’, which has an architecture specific conflict
unable to determine the architecture of the files ‘/usr/lib/libEGL.la’ which has an architecture specific conflict
unable to determine the architecture of the files ‘/usr/lib64/libEGL.la’ which has an architecture specific conflict

If you are having any Nvidia blob troubles using Slackware64-14.1, you should be fine if you use 325.15 until Nvidia get their act together:
http://uk.download.nvidia.com/XFree86/Linux-x86_64/325.15/NVIDIA-Linux-x86_64-325.15.run

(By clicking the link, you are confirming that you have read and agree to be bound by the License For Customer Use of NVIDIA Software for use of the driver.)

EDIT: 20140225 Thanks to bmarley for a link to this LinuxQuestions post:

I had the same problem when compiling webkitgtk3, the nvidia driver seems to remove this file, but it backs it up. Use

grep libEGL.la /var/lib/nvidia/log

and copy the file (/var/lib/nvidia/102 in my case) to /usr/lib{,64}/libEGL.la.
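
In practice that looks something like this (the backup number comes from the quoted post and will almost certainly differ on your system; check the grep output for yours):

# Find where the nvidia installer backed up libEGL.la
grep libEGL.la /var/lib/nvidia/log
# Restore the backup it names (102 in the quoted case)
cp /var/lib/nvidia/102 /usr/lib/libEGL.la
cp /var/lib/nvidia/102 /usr/lib64/libEGL.la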

Slackware 14.1 :: New initrd + LUKS Keyboard Issues

November 11th, 2013

Hey hey. So Slackware{,64}-14.1 is out. Woo!

Ok, celebration over; time to work on the minor issues that arise post-release.

Issue #1: You upgrade your LUKS-encrypted system with a USB keyboard to 14.1 and *BOOM* goes your keyboard input at boot time. But that's not possible; you were careful(!) You made sure to update your initrd and re-ran lilo. You even double-checked that you were using the right {e,x,u,o}hci-hcd controller module, but it still doesn't work. You scream and pull your hair out because re-plugging the keyboard makes it show up in the log buffer, so it MUST work, but still nada.

Yep. Been there.

The problem is simply that there's a new module to worry about: hid_generic. Your keyboard's USB drivers are loaded properly, but without hid_generic there is no longer enough code in the initrd to bind it as an input device. So just add hid_generic to your initrd, re-run lilo and relax:

This assumes you have your root and swap partitions as logical volumes inside a single volume group within a LUKS-encrypted partition.

#!/bin/bash

# Set parameters
KERNEL_VERSION=$(uname -r) # e.g. 3.10.17 for Slackware 14.1
ROOT_FS='ext4' # Your root filesystem
LVM_PART='/dev/sda2' # The partition containing your LUKS/LVM setup
VOLGROUP='myvg' # The LVM VG containing your encrypted / and swap partitions
KEYMAP='uk' # Your console keyboard map, eg. uk for en_GB

# Make the initrd
mkinitrd -c -k "${KERNEL_VERSION}" -m "${ROOT_FS}":jbd2:mbcache:ehci-pci:ehci-hcd:xhci-hcd:uhci-hcd:ohci-hcd:usbhid:hid_generic:hid -f "${ROOT_FS}" -r /dev/"${VOLGROUP}"/root -C "${LVM_PART}" -L -l "${KEYMAP}" -h /dev/"${VOLGROUP}"/swap -o /boot/initrd.gz

# Re-run lilo
lilo -v
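
If you want to double-check that hid_generic actually made it into the image before you reboot, something like this should do (assuming the default output path; the initrd is a gzipped cpio archive):

# List the initrd contents and look for the HID modules
zcat /boot/initrd.gz | cpio -it 2>/dev/null | grep -i hid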

ALSA Pre-Amp Volume Boost :: The SIMPLE Explanation

October 23rd, 2013

Too many guides have too little information on how to do this, often giving you an ALSA config snippet that doesn't work for you, with little explanation as to what does what. Hopefully this one will be a little better.

Bear in mind the flow:
Applications -> ALSA Default input -> whatever your configured default is -> Speakers

And therefore in this case:
Applications -> ALSA Default input -> Softvol plug-in -> dmix plug-in -> Sound card -> Speakers

While I use this on a Slackware64-14.0 desktop machine with a Logitech USB headset as card 2 and my default, this should apply to pretty much any ALSA user.

As root, create/modify /etc/asound.conf:

# Set your DEFAULT device to the softvol plug-in
# NOT to a hardware card device
#
# The "!" means completely override the previous default
# Not just changing/adding to it.
pcm.!default {
  type plug
  slave.pcm "softvol"
}

# Configure softvol
pcm.softvol {
  type softvol

  # Send softvol's output to dmix
  slave {
    pcm "dmix"
    # If you wanted to you could send the output to a card directly
    # But in most cases it's better to send it to dmix and let
    # dmix handle where to send it. You can add a whole extra section
    # to configure dmix and where it sends output, but I'm
    # not covering that here.

    ## Use Card 0 Device 0 instead of dmix
    # pcm "hw:0,0"
    ## Use Card 2 Device 0 instead of dmix
    # pcm "hw:2,0"
  }

  # Add a control slider in your mixer interfaces
  # e.g. KMix and alsamixer
  control {
    name "Pre-Amp"
    card <CardNumberYouWantControlToShowOn>
    # e.g. card 0 or card 2
  }

  # Minimum dB when slider is at 0%
  min_dB -5.0

  # Maximum dB when slider is at 100%
  max_dB 40.0

  # How many levels the slider should go through
  # i.e. how granular do you want your control to be
  resolution 12
}
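
To check it has worked (card 2 here, matching the headset example above; note that softvol only creates its control the first time the device is opened):

# Opening the default device once creates the "Pre-Amp" control
speaker-test -t wav -l 1
# It should then appear in the mixer
amixer -c 2 scontrols | grep -i 'pre-amp'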

Hopefully that makes things a lot clearer, and you can stop reading guides that tell you to specify a hardware card as your default device and then try to overload it with softvol, getting you into a horrific mess.

To apply this properly... reboot. You don't have to, but results may be unexpected.

Don't forget, if you're using KDE, to set Phonon to use the device called "default" as its default output for stuff.

Logstash :: tomcat6 & log4j on CentOS 6

October 17th, 2013

Background

I’m beginning to venture into things like logstash and puppet at work to try to sanitise the administration and logging of an ever-growing number of servers and applications.

Getting logstash going is simple if you know how, but non-trivial in a managed environment such as ours using Spacewalk.

Getting rsyslogd to feed into logstash is also simple enough; having said that, one of the next things on my list is dumping the standard "syslog" input for the tcp+grok version from the logstash cookbook, due to the limitations of the syslog input.

The one that will screw you into the ground is getting tomcat-6.0.24 to push all its logging into logstash. The main reason is that the version of tomcat6 currently available on CentOS 6 (6.0.24) is hard-coded to use java.util.logging for its core logging, and the format is horrific: the datestamp doesn't even come until the second line of the entry. Tomcat does now provide a OneLineFormatter to try to make it a little more sane, but that's not available until tomcat-6.0.32, leaving those on standard CentOS 6 set-ups SOL.

UPDATE 02/12/2013: As of CentOS 6.5 you should no longer require the first step of replacing the hard-coded JULI JAR files (tomcat-juli.jar and tomcat-juli-adapters.jar) with fresh JCL copies. This has now been done upstream: http://rhn.redhat.com/errata/RHBA-2013-1721.html

But it IS solvable.

  1. Configure tomcat6 to use log4j instead of java.util.logging (Prior to CentOS 6.5)
  2. Configure log4j to output in a logstash-friendly JSON format
  3. Use your favourite shipper to ship the output to logstash

Note: throughout this I try to use the full, true paths to files rather than symlink locations, to save confusion.

Getting the Job Done

Configure tomcat6 to use log4j

tomcat6 uses JULI as a kind of wrapper to write out via java.util.logging. Not content with using the Jakarta Commons Logging (JCL) version, it ships with its own version that is hard-coded to the defaults. In order to change the defaults, you have to replace JULI with the JCL version.

rm /etc/tomcat6/logging.properties

# These two lines only if using CentOS 6 prior to v6.5
wget -O /usr/share/tomcat6/bin/tomcat-juli.jar "http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.24/bin/extras/tomcat-juli.jar"
wget -O /usr/share/java/tomcat6/tomcat-juli-adapters.jar "http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.24/bin/extras/tomcat-juli-adapters.jar"

# Carry on
wget -O /usr/share/java/log4j-1.2.17.jar "http://archive.apache.org/dist/logging/log4j/1.2.17/log4j-1.2.17.jar"
ln -snf /usr/share/java/log4j{-1.2.17,}.jar

If you wanted to just stop there and have the default logging replaced by log4j without JSON output, create /usr/share/java/tomcat6/log4j.properties (taken from http://tomcat.apache.org/tomcat-6.0-doc/logging.html#Using_Log4j):

log4j.rootLogger=INFO, CATALINA

# Define all the appenders
log4j.appender.CATALINA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.CATALINA.File=${catalina.base}/logs/catalina.
log4j.appender.CATALINA.Append=true
log4j.appender.CATALINA.Encoding=UTF-8
# Roll-over the log once per day
log4j.appender.CATALINA.DatePattern='.'yyyy-MM-dd'.log'
log4j.appender.CATALINA.layout = org.apache.log4j.PatternLayout
log4j.appender.CATALINA.layout.ConversionPattern = %d [%t] %-5p %c- %m%n

log4j.appender.LOCALHOST=org.apache.log4j.DailyRollingFileAppender
log4j.appender.LOCALHOST.File=${catalina.base}/logs/localhost.
log4j.appender.LOCALHOST.Append=true
log4j.appender.LOCALHOST.Encoding=UTF-8
log4j.appender.LOCALHOST.DatePattern='.'yyyy-MM-dd'.log'
log4j.appender.LOCALHOST.layout = org.apache.log4j.PatternLayout
log4j.appender.LOCALHOST.layout.ConversionPattern = %d [%t] %-5p %c- %m%n

log4j.appender.MANAGER=org.apache.log4j.DailyRollingFileAppender
log4j.appender.MANAGER.File=${catalina.base}/logs/manager.
log4j.appender.MANAGER.Append=true
log4j.appender.MANAGER.Encoding=UTF-8
log4j.appender.MANAGER.DatePattern='.'yyyy-MM-dd'.log'
log4j.appender.MANAGER.layout = org.apache.log4j.PatternLayout
log4j.appender.MANAGER.layout.ConversionPattern = %d [%t] %-5p %c- %m%n

log4j.appender.HOST-MANAGER=org.apache.log4j.DailyRollingFileAppender
log4j.appender.HOST-MANAGER.File=${catalina.base}/logs/host-manager.
log4j.appender.HOST-MANAGER.Append=true
log4j.appender.HOST-MANAGER.Encoding=UTF-8
log4j.appender.HOST-MANAGER.DatePattern='.'yyyy-MM-dd'.log'
log4j.appender.HOST-MANAGER.layout = org.apache.log4j.PatternLayout
log4j.appender.HOST-MANAGER.layout.ConversionPattern = %d [%t] %-5p %c- %m%n

log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Encoding=UTF-8
log4j.appender.CONSOLE.layout = org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern = %d [%t] %-5p %c- %m%n

# Configure which loggers log to which appenders
log4j.logger.org.apache.catalina.core.ContainerBase.[Catalina].[localhost]=INFO, LOCALHOST
log4j.logger.org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager]= INFO, MANAGER
log4j.logger.org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager]= INFO, HOST-MANAGER

Configure logging for JSON output

Get log4j-jsonevent-layout

Not as trivial a step as you would think.

If you're using logstash 1.1.x, then it is just a case of getting the jar and the dependency jars. However, I'm going to assume you're a sane person and are using the latest logstash (currently 1.2.1).

Due to the change of format, you need a more recent log4j-jsonevent-layout than the 1.4 release, because of the new JSONEventLayoutV1 class. Don't let the naming confuse you: the plain JSONEventLayout class is effectively JSONEventLayoutV0, the old format, and JSONEventLayoutV1 is the newer one that you need (and it isn't in v1.4).

At the same time, having looked at the fork tree, for the moment I have decided to use Spredzy’s master instead of the original logstash version, hoping they will merge later on.

This does of course mean you're going to need to build the 1.5-SNAPSHOT yourself with maven or Jenkins... oh wait, no you're not! ;-)

wget -O /usr/share/java/tomcat6/jsonevent-layout-1.5-SNAPSHOT.jar "http://blog.tpa.me.uk/wp-content/uploads/2013/10/jsonevent-layout-1.5-SNAPSHOT.jar"

Get the other three dependencies

log4j-jsonevent-layout depends upon: commons-lang-2.4, json-smart-1.1.1 & junit-4.8.1:

cd /usr/share/java/tomcat6
wget "http://repo1.maven.org/maven2/commons-lang/commons-lang/2.4/commons-lang-2.4.jar"
wget "http://repo1.maven.org/maven2/net/minidev/json-smart/1.1.1/json-smart-1.1.1.jar"
wget "http://repo1.maven.org/maven2/junit/junit/4.8.1/junit-4.8.1.jar"

Configure an appropriate log4j.properties

Create or modify (from earlier) /usr/share/java/tomcat6/log4j.properties to reference the JSONEventLayoutV1 class.

log4j.rootLogger=INFO, CATALINA

log4j.appender.CATALINA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.CATALINA.File=${catalina.base}/logs/catalina
log4j.appender.CATALINA.Append=true
log4j.appender.CATALINA.Encoding=UTF-8

log4j.appender.CATALINA.DatePattern='.'yyyy-MM-dd'.log'
log4j.appender.CATALINA.layout = net.logstash.log4j.JSONEventLayoutV1

log4j.appender.LOCALHOST=org.apache.log4j.DailyRollingFileAppender
log4j.appender.LOCALHOST.File=${catalina.base}/logs/localhost
log4j.appender.LOCALHOST.Append=true
log4j.appender.LOCALHOST.Encoding=UTF-8
log4j.appender.LOCALHOST.DatePattern='.'yyyy-MM-dd'.log'
log4j.appender.LOCALHOST.layout = org.apache.log4j.PatternLayout
log4j.appender.LOCALHOST.layout.ConversionPattern = %d [%t] %-5p %c- %m%n

log4j.logger.org.apache.catalina.core.ContainerBase.[Catalina].[localhost]=INFO, LOCALHOST

And after a tomcat restart you should find /var/log/tomcat6/catalina happily spitting out JSON-format log entries. Keep an eye on catalina.out, though, for anything strangely still passing through java.util.logging.
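
For example, with the stock CentOS init script:

service tomcat6 restart
tail -f /var/log/tomcat6/catalina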

Configure your shipper

In my case, I’m using logstash as the shipper, so my /etc/logstash/conf.d/shipper.conf looks something like this:

input {
  file {
    type => "tomcat"
    path => [ "/var/log/tomcat6/catalina" ]
    exclude => [ "*.gz" ]
    codec => json
    sincedb_path => ["/opt/logstash/.sincedb_catalina"]
  }
}

output {
  redis {
    host => "<redis_host>"
    data_type => "list"
    key => "logstash"
    codec => json
  }
}
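
For completeness, the matching input on the indexer side would be something like this (a sketch; it simply mirrors the shipper's output stanza):

input {
  redis {
    host => "<redis_host>"
    data_type => "list"
    key => "logstash"
    codec => json
  }
}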

Works for me, your results may vary and all that malarkey. Have fun!

Zabbix :: Guest Account when using LDAP Auth

August 2nd, 2013

Simple one to start with.

So you’re using Zabbix and you have given the no-login guest account permission to read the status of your network – then you move to LDAP authentication and *blam* goes your guest access.

It's not clearly explained anywhere, but fall-back auth for this does exist; it's just only per-group, not per-user.

Head to Configuration -> Users -> Guests and change “Frontend access” for the guests group to Internal.

That’s it; “Login as guest” should be back and you should smile :-)

Categories: Zabbix

I’m Not Dead

August 2nd, 2013

I would like to take the opportunity to apologise for disappearing off the face of the earth – my life has not given me the time and opportunities I’ve needed to keep making contributions to this blog.

Hopefully, that can now change. Having had my job change dramatically, and then no longer having one, I am now back up and running as a proper SysAdmin again; and I couldn’t be happier about it.

So, fingers crossed I will have more and more useful things to say soon :-D

Categories: Miscellaneous

lamp-backup :: Backing up LAMP Server Content

February 28th, 2012

It’s been a while since I posted anything useful, but hopefully this will make up for it.

A friend came to me asking for help configuring the backup of his remote LAMP server and I duly obliged, but ended up getting a little carried away. Some hours later I emerged with this bash script to do the job properly.

Called from cron, each time the script runs:

  • It creates/updates a local rsync-copy of a remote directory.
  • It runs a mysqldump to a file, named by date-stamp.
  • Any archives or log files older than the user-specified expiry time (days) are deleted.
  • All three operations are carefully logged to appropriate “.log” and “.err” files.
  • The mysqldump, the rsync-copy and all the logs are run through tar.
  • The tarball is run through a user-specified compression program (default: xz).
  • The final archive is named by date-stamp.
  • A summary file is created including highlighted errors and execution times for each step.
  • The summary.log is (optionally) e-mailed to the backup administrator.
  • The script cleans up after itself, deleting any superfluous log and mysqldump files.

In the end you have a configurable period of snapshotted-history for your LAMP server, all in small archive files, named by their date and with logs for every step of every backup and permanently-deleted file.

I recommend the script is called by cron like this (assuming midnight backup):

0 0 * * * /path/to/lamp-backup.sh 1>/path/to/lamp-backup.log 2>/path/to/lamp-backup.err

Which will also give you activity and error logs for the script itself.

If you wish to run the script manually from the command line to test it, you can set the output to verbose like this:

$ VERBOSE=true ./lamp-backup.sh



Ensure that lamp-backup.conf lives in the same directory as lamp-backup.sh and is readable and carefully configured.

DO NOT RUN THIS SCRIPT AS ROOT UNLESS YOU REALLY NEED TO AND HAVE TESTED YOUR CONFIGURATION AS A NORMAL USER.



You can get a copy of v1.0 (stable) of this script and associated config file from my subversion repository:

https://svn.tpa.me.uk/lamp-backup/tags/1.0

I will be working on it more soon, abstracting further, adding command line parameters and adding additional features that seem useful.

You can track the current version via the repository trunk.

Categories: Apache, Linux Tags: , , , , ,

Rosetta Code

June 1st, 2011

Stumbled across the Rosetta Code site this morning. I can't remember when I last discovered a site so unbelievably useful, especially for people like me who occasionally dip into languages and then don't use them again for a long time.

http://rosettacode.org/

Rosetta Code is a programming chrestomathy site. The idea is to present solutions to the same task in as many different languages as possible, to demonstrate how languages are similar and different, and to aid a person with a grounding in one approach to a problem in learning another. Rosetta Code currently has 504 tasks, and is aware of 384 languages, though we do not (and cannot) have solutions to every task in every language.

A very good example from the site:

http://rosettacode.org/wiki/Reverse_a_string