Regarding testing on more RAM ...
I did monitor things to make sure my test results weren't being skewed by disk I/O. I'm in no way offended that Deds raised the issue, but the datasets I'm talking about are small enough that it isn't a factor.
After all, when I was a kid 384MB was unimaginable for a warehouse of magnetic tapes :)
Good question, though. Results were CPU-bound, not I/O-bound. I made sure of that before collating them.
But ... various architectures, and implementations of architectures, do exhibit different execution profiles. My laptop has a P500 Celeron with 128KB of L2 cache. Straight Coppermine PIIIs should perform better in the high-calculation case, and AMD Athlons from the Thunderbird onward better still. P4s? I don't know; I've never used one.
Still ... I don't think architectural differences will change the story on the gross scale. "A lot better" is "a lot better" even if one CPU or the other defines "a lot" differently. Obviously the experimental queries are doing much, much less work.
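To make "much less work" concrete, here's roughly what a membership test looks like against the flattened map: a single indexed probe instead of a recursive walk of the group hierarchy. The column and parameter names here are my own illustration, not necessarily the exact OpenACS schema:

    -- hypothetical flattened membership check: one index lookup
    select 1
      from party_member_map
     where party_id  = :group_id    -- the group being tested against
       and member_id = :user_id;    -- the user in question

Both Oracle and PostgreSQL can satisfy this with a single index scan, which is where the big win comes from.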
It also indirectly raises the issue of maintaining an expanded "party_member_map" table and its indices. There are only two columns in the table, so under Oracle the cost should be fairly minimal. PostgreSQL, however, has a fairly steep per-row penalty that makes the storage impact more severe than under Oracle. And of course for both RDBMSs there is the need to store the index.
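For reference, the structure under discussion would look something like this (a sketch only; the actual OpenACS definition may differ):

    -- two integer columns: 8 bytes of data per row
    create table party_member_map (
        party_id  integer not null,
        member_id integer not null
    );

    -- the index both RDBMSs must maintain alongside the table
    create index party_member_map_idx
        on party_member_map (party_id, member_id);

PostgreSQL adds a per-tuple header of a few dozen bytes on top of those 8 bytes of data, which is why its storage hit is proportionally worse than Oracle's.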
I think we all accept that the most intense and busiest sites built with OpenACS may require Oracle. What we mostly care about is that the 99% of sites that don't fall into that "busier than hell" category can run with Postgres. I think my tests show that both can be made to scale much better than they do today, even though the Oracle numbers are considerably better.
And of course if any of you want to experiment and improve upon either version of the function, and can beat me - hurray! Kick my ass! The faster the better, that's all that can be said!