Forum OpenACS Q&A: Response to What expectations/settings should I have for Postgres?

There have been some in-statement memory-leak issues surrounding functions, i.e. memory wasn't freed until the statement (rather than the function call) was complete. I believe that many, if not all, of these were fixed for PG 7.1, so as always I recommend you try PG 7.1.

If it still runs out of memory, pass it along as a bug: I suspect the concatenation, as suggested, is not freeing memory until statement end, which in this case is too late.
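
For context, the concatenation approach I'm picturing would be something along these lines. This is only my guess at the query under discussion, with the column names borrowed from the query further down, and the casts to text are mine:

-- hypothetical reconstruction, not the original query
select sid::text || membid::text || date::text || time::text as combined,
       count(*)
from logs
group by combined
having count(*) > 1

Each concatenated key built here is a candidate for the per-statement allocation that isn't released until the whole statement finishes.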

As far as count goes, no, PG can't just use the unique index. This is a weakness in the PG indexing model. PG indexes don't carry transaction data, so each row in the table has to be "poked at" to see if it is still live for the current transaction. Adding transaction data to the indexes is a double-edged sword: the indexes themselves become larger, updating them becomes slower, and it becomes a lot harder for users to write new index types (unless the PG framework were expanded to accommodate both transaction-aware and transaction-stupid indexes).
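
You can see this for yourself by running EXPLAIN on a bare count(*). The exact plan text varies by version, but even with a unique index available the planner still walks the heap. The index name and log_id column here are hypothetical:

create unique index logs_pk_idx on logs (log_id)  -- hypothetical unique index

explain select count(*) from logs
-- typical result: an Aggregate over a sequential scan, not an index scan
--   Aggregate
--     ->  Seq Scan on logs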

This is, I suspect, why MySQL shines in their favorite benchmark, "count(*)". Traditional MySQL doesn't have transactions, so of course they never had to deal with the problem or its implications! They may simply count the index hits.

Why can't you just do this?

select count(*), sid, membid, date, time
from logs
group by sid, membid, date, time
having count(*) > 1

Isn't the concatenation already unique after the GROUP BY?