I'll repeat - a RAM disk is NOT what you want. Tom, I realize you know this; I just want to make sure everyone else gets the message.
Tom - Postgres fsync()s its datafiles at the end of each transaction (standalone individual statements each run in their own implicit transaction). fsync() doesn't return until the data has actually migrated to the disk drive.
Now - Linux and Postgres have no control over what happens inside the disk drive itself. You can have blocks sitting in the drive's onboard cache that haven't made it to the actual platter. So all you know for sure after an fsync is that your fate is in your disk drive's hands.
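If the drive cache worries you and you're on IDE, one partial mitigation is to turn off the on-drive write cache with hdparm - the device name below is just an example, and whether the drive actually honors the setting is up to the drive:

    # disable the on-drive write cache so a completed write is on the platter
    hdparm -W 0 /dev/hda

You'll pay for it in write throughput, of course.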
As caches get bigger, the problem gets bigger. Sigh. Expensive RAID systems include battery backup to ensure blocks reported to the OS as written will really get written, assuming the RAID subsystem itself doesn't break in a nasty way.
The above problem isn't restricted to Postgres, of course...
And we (Janine Sisk and I - I'm doing a site for one of her clients using ACS Classic) just saw Oracle corrupt itself a couple of days ago on a perfectly healthy machine - how about that for inspiring confidence? SQL*Plus returned "unexpected EOF" error messages on a simple "update" statement (and I do mean simple!). That's the equivalent of the PG "backend closed unexpectedly" error which old-time users like Lamar Owen are so familiar with (it's become much harder to crash PG in recent releases).
OK, I'm rambling well off-topic here. Check out the recently improved OpenACS docs to see an example of running with a large shared buffer pool and giving Postgres permission to use considerable RAM for sorts before spilling to disk.
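Something along these lines - the numbers and data directory path are just examples; -B counts shared buffers (8KB each by default) and -S, passed through to each backend, is per-sort memory in KB:

    # 8192 buffers x 8KB = 64MB shared buffer pool; 16MB per sort before spilling to disk
    postmaster -B 8192 -o "-S 16384" -D /usr/local/pgsql/data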
On my 256MB server, I've got 64MB dedicated to my shared buffer pool. Increasing the maximum shmem segment size no longer requires a kernel recompile (as of RH 6.1 or 6.2, i.e. the 2.2.* kernels - you can set it at runtime). However, Postgres uses the kernel's default values for the shared memory addresses, and those defaults limit shmem use to 16MB - changing them does require a kernel recompile.
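For the runtime part, something like this should work on a 2.2 kernel (128MB here is just an example value):

    # raise the kernel's max shared memory segment size to 128MB
    echo 134217728 > /proc/sys/kernel/shmmax

Stick it in /etc/rc.d/rc.local or similar if you want it to survive a reboot.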
Oh, well...