Forum OpenACS Q&A: Re: Performing queries on a cached query result?
Details at http://www.equi4.com/metakit.html
Btw, SQLite can also be used either on-disk or in-memory now, and there's been discussion on the SQLite list lately about concurrency improvements. It sounds like they have a workable design for an MVCC model with table-level locking for writers, but it's not yet clear whether it will ever be implemented.
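To illustrate the on-disk/in-memory point, here's a minimal sketch using Python's stdlib sqlite3 bindings (not Tcl, just for the example): the only difference between the two modes is the connect string, the SQL API is identical.

```python
import sqlite3

# ":memory:" gives a private in-memory database; passing a
# filename instead gives the usual on-disk database. Everything
# else -- DDL, DML, queries -- works the same either way.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("alice",), ("bob",)])
rows = conn.execute("SELECT id, name FROM users ORDER BY id").fetchall()
# rows -> [(1, 'alice'), (2, 'bob')]
conn.close()
```

Swapping `":memory:"` for a path like `"app.db"` is all it takes to persist the same schema to disk.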
See also the Ratcl thread, about Jean-Claude's new prototype package for relational algebra in Tcl (eventually other languages as well). It might ultimately be quite useful as a simple in-memory RDBMS, probably with read/write mutex locking per table/view.
With any sort of in-memory database, I figure table-level locking is probably good enough, even if it's strictly pessimistic mutex locking rather than the preferable MVCC model, where locks block only other writers, never readers.
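The pessimistic model is easy to sketch. Here's a hypothetical per-table read/write lock in Python (the RWLock class is mine, not from any library): note that a writer blocks readers too, which is exactly what MVCC would avoid.

```python
import threading

class RWLock:
    """Pessimistic read/write lock for one table: many readers OR
    one writer. Unlike MVCC, a writer blocks readers as well as
    other writers. (Simple sketch: no writer-priority, so writers
    can starve under a steady stream of readers.)"""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_read(self):
        with self._cond:
            while self._writer:          # readers wait out any writer
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:       # last reader wakes waiting writers
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            while self._writer or self._readers:   # wait for everyone
                self._cond.wait()
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```

One such lock per table (or view) is the whole concurrency story for a simple in-memory RDBMS of the kind discussed above.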
Note that table locking is probably always less scalable than simple Nsv key/value pairs, which you can always split up between more and more mutex buckets. And of course nsv/tsv is simple and just the right thing in many cases - but when it's not, having a real in-memory RDBMS would be very, very handy.
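The bucket-splitting idea can be sketched as follows. This is a hypothetical StripedStore class in Python, not actual nsv code; the assumption is that keys hash reasonably evenly, so unrelated keys rarely contend on the same mutex.

```python
import threading
import zlib

class StripedStore:
    """Key/value store split across N independently locked buckets,
    in the spirit of nsv arrays spread over multiple mutex buckets:
    contention on one key never blocks access to keys that hash to
    a different bucket, and adding buckets spreads the load further."""
    def __init__(self, n_buckets=16):
        self._buckets = [({}, threading.Lock()) for _ in range(n_buckets)]

    def _bucket(self, key):
        # Stable hash so the same key always lands in the same bucket.
        return self._buckets[zlib.crc32(key.encode()) % len(self._buckets)]

    def set(self, key, value):
        data, lock = self._bucket(key)
        with lock:
            data[key] = value

    def get(self, key, default=None):
        data, lock = self._bucket(key)
        with lock:
            return data.get(key, default)
```

That per-bucket independence is what a single table-wide lock can't give you - the trade-off being that you only get key/value semantics, not queries.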
A long time ago, someone also pointed out Konstantin Knizhnik's FastDB and GigaBASE, which do have a sort of MVCC with one database-wide lock for writers, but IMNSHO are hardly "relational" at all, so I have trouble even imagining when or why I would ever want to use them. (No joins, unions, or foreign key references at all; instead, each row is a C++ object, and that object is allowed to have references - including dangling references - to other objects in other tables, so you are free to query one table, then try to do more queries chasing all the references. Yuck!)
I mentioned the Scheme SLIB relational database above, way back when. If I remember correctly, it is strictly one reader/writer only, so it is unlikely to be useful, even as an example, for the kind of concurrent web-oriented applications we tend to talk about here.