To add more fuel to the fire, the time required for the PG optimizer to create a plan is not linear in the number of tables being joined, but rather grows with the number of ways those tables can be joined (i.e. via hash, merge, or nested loop joins using sequential or index scans, in differing orders using differing keys, and so on).
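As a rough back-of-the-envelope illustration of why the search space blows up (a sketch, not PostgreSQL's actual cost model — the function names and the "3 join methods" constant are just illustrative assumptions):

```python
from math import factorial

def left_deep_join_orders(n):
    """Number of left-deep join orders for n tables: n!."""
    return factorial(n)

def plan_count_lower_bound(n, methods=3):
    """Very rough lower bound on the plan space: every join order
    times a choice of join method (hash, merge, nested loop) at each
    of the n - 1 joins. Ignores scan types, bushy plans, and join
    keys, so the real space is larger still."""
    return factorial(n) * methods ** (n - 1)

for n in (2, 4, 8):
    print(n, plan_count_lower_bound(n))
```

Even this crude count goes from 6 plans for two tables to tens of millions for eight, which is why planning time can dominate for short queries over many tables.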
The parser itself should be fast ...
Experiments run by the person who started work on caching query plans showed about a 30% improvement on simple queries. On more complex queries, parsing/optimization overhead is going to depend a lot on the actual query. If you join a bunch of tables but only return a few rows, creating the plan will take more time than executing it, and caching the plan will be a big win. On the other hand, if you're doing a Cartesian join of ten huge tables, the optimizer will have few choices and run quickly, while the output will be so huge you'll probably run out of swap space before it completes :)
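The basic idea behind plan caching can be sketched in a few lines: parse and optimize once per distinct query text, then reuse that plan on later executions. This is a toy model, not the actual PostgreSQL implementation (the function names and the string stand-in for a plan are made up for illustration):

```python
# Toy plan cache keyed by query text. The expensive parse/optimize
# step runs once per distinct SQL string; repeats hit the cache.
plan_cache = {}

def parse_and_optimize(sql):
    """Stand-in for the expensive parse + plan step."""
    return ("PLAN-FOR", sql)

def get_plan(sql):
    if sql not in plan_cache:
        plan_cache[sql] = parse_and_optimize(sql)
    return plan_cache[sql]

plan = get_plan("SELECT * FROM users WHERE id = $1")
again = get_plan("SELECT * FROM users WHERE id = $1")
# Same object both times: the second call skipped planning entirely.
```

Note this only pays off when the same query text recurs, which is exactly why bind variables (below) matter: they keep the text constant across executions that differ only in their parameter values.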
As far as Oracle goes, I'm sure everyone here is aware of the fact that [Open]ACS 4 uses bind variables throughout. AFAIK though, no one has done any measuring on an active site to see how much memory Oracle should be allowed to use for caching in order to see the benefits without eating too much into your shared buffer space used to cache database blocks.
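For anyone unfamiliar with the term, a bind variable just means the parameter value is passed separately from the SQL text, so every execution presents the server with the identical statement and a single cached plan can be reused. A minimal sketch using Python's sqlite3 module as a stand-in (the table and data are invented for the example; Oracle's actual client APIs differ, but the principle is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

# The '?' placeholder is the bind variable: the SQL text never
# changes no matter which id we look up, so a server that caches
# plans by statement text sees one statement, not thousands.
row = conn.execute("SELECT name FROM users WHERE id = ?", (2,)).fetchone()
print(row[0])  # bob
```

The anti-pattern is interpolating the value into the string ("... WHERE id = 2"), which makes every lookup a distinct statement and defeats the shared plan cache (and, as a bonus, invites SQL injection).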