Forum OpenACS Q&A: Re: Tracking expensive DB queries

Posted by Dave Bauer on
Malte,

Since db_multirow or db_foreach does an ns_db getrow for every row, tracking individual database operations is not scalable in a production environment.

I am not sure why you'd want to rewrite all the developer-support database code into the XOTcl request monitor. It does not really matter where you put the code, but you definitely can't send a signal to a Postgres process from the nsd user. That seems like a really bad idea.

I'd suggest a scheduled proc that checks for long-running queries by looking in pg_stat_activity and, if it finds one, sends an email or other message to an admin who can take the appropriate action.
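The check the scheduled proc would run might look something like the sketch below. Note that the pg_stat_activity columns vary by PostgreSQL version (modern versions expose query, state, and query_start; older ones used current_query), and the five-minute threshold is just an arbitrary example, not a recommendation:

```sql
-- Sketch: find queries that have been running longer than 5 minutes.
-- Assumes a modern pg_stat_activity layout (query/state columns).
SELECT pid,
       usename,
       now() - query_start AS runtime,
       query
  FROM pg_stat_activity
 WHERE state = 'active'
   AND now() - query_start > interval '5 minutes'
 ORDER BY runtime DESC;
```

The scheduled proc would run this periodically and mail the result set to an admin rather than trying to kill anything itself.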

In general, don't write queries that take that long, although I ran into a similar problem with dynamic filters, where a user can build a bad query if your filter code is not smart enough. I fixed that problem by not joining on the CR views, but instead using the type-specific table and joining explicitly on cr_items and cr_revisions as necessary. This worked really well; one query went from minutes to under a second.
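To illustrate the rewrite with a made-up content type: suppose "note" is a type whose attribute table is notes, keyed by revision_id. Instead of selecting from the generated CR view, you join the type table to cr_items and cr_revisions yourself (the notes table and its columns here are hypothetical; cr_items.live_revision and cr_revisions.revision_id are the standard CR columns):

```sql
-- Explicit joins on the live revision, instead of going through
-- the auto-generated CR view for the type:
SELECT i.item_id,
       r.title,
       n.body
  FROM cr_items     i
  JOIN cr_revisions r ON r.revision_id = i.live_revision
  JOIN notes        n ON n.revision_id = r.revision_id
 WHERE i.content_type = 'note';
```

This gives the planner a straightforward join path over the base tables, which is what made the difference in my case.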