Forum OpenACS Q&A: Response to Yet another PostgreSQL x MySQL Question

Posted by Todd Gillespie on
Normally, PostgreSQL is an order of magnitude slower than MySQL. See section 12.7, "Using your own benchmarks." This is due largely to the fact that they have only transaction-safe tables and that their transaction system is not as sophisticated as Berkeley DB's. In MySQL you can decide per table whether you want the table to be fast or take the speed penalty of making it transaction-safe.
Statements like this are becoming more frequent now that MySQL comes linked with Sleepycat's Berkeley DB. Can anybody comment on Berkeley DB? Is the above statement true? I thought you could tell PostgreSQL whether you wanted the super-safe mode or a less-safe mode. Am I wrong? And what about the "order of magnitude slower" claim?
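
For anyone who hasn't seen it, the per-table choice the manual is talking about is just a table attribute in MySQL - something like this (the table and column names are made up; TYPE=BDB asks for a transaction-safe Berkeley DB table, TYPE=MyISAM for the fast, non-transactional default):

    CREATE TABLE accounts (
        id      INT NOT NULL PRIMARY KEY,
        balance DECIMAL(10,2)
    ) TYPE=BDB;       # transaction-safe, takes the speed penalty

    CREATE TABLE hits (
        page  VARCHAR(200),
        stamp TIMESTAMP
    ) TYPE=MyISAM;    # fast, no transactions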
You _can_ tell PostgreSQL to change the transaction isolation level, but the less-strict mode only lets in-progress transactions read changes that other transactions have already committed - it's nowhere near a complete breakdown of isolation. And I've never noticed a speed difference between the two (apart from the time I spend figuring out which level a given action needs).
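
If you want to see what that looks like, here's a minimal sketch of switching between the two levels at the psql prompt - SERIALIZABLE is the stricter of the two, READ COMMITTED the default:

    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    -- every query in here sees one consistent snapshot of the data
    COMMIT;

    BEGIN;
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    -- each statement sees whatever other transactions have committed so far
    COMMIT;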

As for the whole 'order of magnitude slower b/c of safe tables' argument, I can quote someone on slashdot who responded to the latest IIS-is-faster-than-Apache news bit with the rebuttal: "You can't serve pages at any speed when your webserver's crashed." Not a precise analogy, but I hope you get the point....

As for Berkeley DB, I really can't say much beyond that Perl is linked against it on some systems for use in tied-DBM objects (for those who don't know Perl, the mechanism I'm describing lets you treat an on-disk database file as a normal hash table). I never thought of them as being all that amazing - they worked fine, but I never saw any claims of concurrency control or rollback logging.