Forum OpenACS Q&A: Response to optimizing query with aggregates

Posted by Don Baccus on
It's not a bad plan - note that it is using the btree index to grab the right rows from your battles table.  You might time the sum and count separately; my guess is that the count portion is noticeably faster, but it would be interesting to see whether that's true.
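For example - splitting the two aggregates apart, and guessing both at your column names (user_id, score) and that your query uses correlated subselects - you could time something like these two separately:

    -- count portion only
    SELECT p.party_id,
           (SELECT count(*) FROM battles b WHERE b.user_id = p.party_id) AS battle_count
    FROM parties p;

    -- sum portion only
    SELECT p.party_id,
           (SELECT sum(b.score) FROM battles b WHERE b.user_id = p.party_id) AS total_score
    FROM parties p;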

Why are you joining on parties instead of users?  It won't make much difference unless or until you start using groups, etc., but I assume you're tracking results for individual users, so you should join against that table instead.
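That is, something along these lines (same guessed column names as above):

    SELECT u.user_id,
           (SELECT count(*) FROM battles b WHERE b.user_id = u.user_id) AS battle_count,
           (SELECT sum(b.score) FROM battles b WHERE b.user_id = u.user_id) AS total_score
    FROM users u;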

Since you're computing this for every user who participates in at least one battle, obviously it's not going to scale very well no matter what.  Presuming that all your users participate in battles, there will be one row returned for each of your users, meaning that the time spent will be linearly proportional to the number of users.

So even if the per-user computation only took 0.01 seconds, 500 users would eat up the same five seconds now taken by 50 users.

One possible workaround is to use "limit" and "offset" to paginate your output, limiting the number of users displayed per page and, incidentally, also placing a ceiling on the number of rows computed per query, so the page display time doesn't depend on the number of users in the system.
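As a sketch, sticking with the guessed names from above, the third page at 25 users per page would look something like:

    SELECT u.user_id,
           (SELECT count(*) FROM battles b WHERE b.user_id = u.user_id) AS battle_count,
           (SELECT sum(b.score) FROM battles b WHERE b.user_id = u.user_id) AS total_score
    FROM users u
    ORDER BY u.user_id
    LIMIT 25 OFFSET 50;

You need the ORDER BY so the pages come out in a stable order as you step through the offsets.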

Are you sure the GROUP BY plan is the same?  I would expect a single index scan on battles with a sort rather than the two index scans.  It may or may not be faster, but it's worth timing, at least.
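I mean something roughly like this - run it through EXPLAIN and compare the plan and timing against what you have now:

    EXPLAIN
    SELECT b.user_id,
           count(*)     AS battle_count,
           sum(b.score) AS total_score
    FROM battles b
    GROUP BY b.user_id;

If you need users with no battles as well you'd have to outer join this against users, but for timing purposes the bare GROUP BY is enough.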

60K rows of battles - are you sure your shared buffer space is large enough to contain the tables during the query?  If not, even though your system probably has enough RAM for the OS filesystem cache to avoid disk I/O, you'll chew up a fair amount of time shuffling data between the PG and filesystem caches.
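You can check what you're running with, and roughly how big the table is, straight from psql:

    SHOW shared_buffers;

    -- size of battles (and any indexes named battles*) in 8KB pages,
    -- as of the last VACUUM/ANALYZE
    SELECT relname, relpages
    FROM pg_class
    WHERE relname LIKE 'battles%';

If the numbers say the table plus its indexes don't fit, bump shared_buffers in postgresql.conf (each buffer is 8KB on a stock build) and restart.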

Robert's answer's probably the right one, though ... by using a trigger to maintain a summary table you're distributing the cost of keeping track of this over each update to the battles table, accepting a bunch of inexpensive operations in order to avoid one big one.
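A rough sketch of the trigger approach - all of the table, column, and function names here are made up, it only handles inserts, and you'd want matching logic for UPDATE and DELETE on battles (plus plpgsql installed in the database):

    CREATE TABLE user_battle_stats (
        user_id      integer PRIMARY KEY,
        battle_count integer NOT NULL DEFAULT 0,
        total_score  integer NOT NULL DEFAULT 0
    );

    CREATE FUNCTION update_battle_stats() RETURNS trigger AS '
    BEGIN
        -- bump the running totals for this user
        UPDATE user_battle_stats
           SET battle_count = battle_count + 1,
               total_score  = total_score + NEW.score
         WHERE user_id = NEW.user_id;
        -- first battle for this user: create the summary row
        IF NOT FOUND THEN
            INSERT INTO user_battle_stats (user_id, battle_count, total_score)
            VALUES (NEW.user_id, 1, NEW.score);
        END IF;
        RETURN NEW;
    END;
    ' LANGUAGE 'plpgsql';

    CREATE TRIGGER battles_stats_trig
        AFTER INSERT ON battles
        FOR EACH ROW
        EXECUTE PROCEDURE update_battle_stats();

Then the per-user numbers are just a straight select from user_battle_stats, which stays cheap no matter how many battles pile up.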