Jeff, those numbers are really interesting, as we've not
stress-tested our toolkit in that manner thus far (though there are
several very busy sites running ACS/Oracle, so there are some non-PG
data points).
Stress testing and automated regression testing for the OpenACS 4
platform are under development today, but until we generate some
numbers of our own using simulated users, your numbers should serve
to show folks what's possible with a Postgres-based platform.
Thanks ...
Did you guys gather any data on how large your tables grew between
VACUUMs, and how long your VACUUMs took on average?
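If you still have the setup around, one rough way to collect that (a
sketch, assuming the stock 7.x system catalogs and the default 8KB
block size) is to sample relpages from pg_class, which VACUUM/ANALYZE
keeps up to date, and to capture the per-table page and tuple counts
that VACUUM VERBOSE prints:

  -- Approximate on-disk size per table; relpages is the page count
  -- recorded at the last VACUUM/ANALYZE, so sample it before and
  -- after a run to see how much each table grew.
  SELECT relname, relpages, relpages * 8 AS approx_kb
    FROM pg_class
   WHERE relkind = 'r'
   ORDER BY relpages DESC;

  -- VACUUM VERBOSE reports pages and tuples per table, which you can
  -- diff between runs (and time externally for average duration).
  VACUUM VERBOSE;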
It would also be interesting to load-test under PG 7.2 in order to
see how well the new "lazy VACUUM" works vs. "VACUUM FULL". The lazy
version won't remove dead trailing blocks from the data files: it
frees space for reuse but doesn't compact (which is one reason
why it only needs to lock a page at a time and therefore has much
less impact on system concurrency). Thus one would expect that
data files will be a bit larger on an active site that exclusively
uses "lazy VACUUM", but AFAIK no one has solid data on how well or
poorly the new strategy will work. There's been some testing, but
nothing on the scale you're talking about (and I'm not talking about
"within the OpenACS community" here, but rather within the PG
community at large).
At least I've not seen any large-scale loading combined with
systematic "lazy VACUUMs" discussed on the PG hackers list.