Forum OpenACS Q&A: Re: How to go about debugging a Postgres performance problem?

One possibility is that the statistics sample is too small, so when the vacuum analyze step runs, sometimes PG happens to get accurate stats and sometimes it does not.

See this:

http://www.varlena.com/varlena/GeneralBits/89.php
(second subject)

As I understand it, Postgres does not actually grind through all the values in a table when it updates the statistics; instead, it samples some of the values and works from that statistical sample.  Different runs of analyze can therefore produce different statistics, and therefore different performance.  In the case of the dev server vs. the production server, my guess would be that a difference in the way the two databases are used is causing PG to calculate the statistics differently.
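If you want to see what sampling target is actually in effect, something along these lines should work (attstattarget of -1 just means "use the server-wide default"):

  -- server-wide sampling target
  SHOW default_statistics_target;

  -- per-column overrides for the table in question
  SELECT attname, attstattarget
  FROM pg_attribute
  WHERE attrelid = 'TABLE'::regclass
    AND attnum > 0;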

Probably what you should do is increase the statistics target, as mentioned in the article above, for the affected tables, e.g. "ALTER TABLE TABLE ALTER COLUMN COLUMN SET STATISTICS 100;" where TABLE and COLUMN are the table and column affected.  A higher target means slower analyze runs, but more accurate statistics.
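Note that the new target only takes effect the next time the table is analyzed, so the full sequence would be something like this (mytable and mycolumn are just placeholder names here):

  ALTER TABLE mytable ALTER COLUMN mycolumn SET STATISTICS 100;
  ANALYZE mytable;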

Also, just for fun, run "SELECT attname, n_distinct, most_common_vals FROM pg_stats WHERE tablename = 'TABLE';" on both the dev and production servers and see what the most common values are... this may give you an insight into "why" it is happening.
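pg_stats also exposes the frequencies and histogram bounds the planner works from, so if you want a bit more detail, something like this should run on both boxes:

  SELECT attname, null_frac, n_distinct,
         most_common_vals, most_common_freqs, histogram_bounds
  FROM pg_stats
  WHERE tablename = 'TABLE'
  ORDER BY attname;

Comparing EXPLAIN ANALYZE output for the slow query on the two servers should then tell you whether the row estimates (and hence the plan) actually differ.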