Forum OpenACS Development: Re: Strange problem with util_memoize flush
I had the same thoughts. The thing is that I've already reviewed the PostgreSQL configuration many times and couldn't find any bottleneck. The machine has plenty of resources, PostgreSQL is well tuned, and we never run out of resources.
I've configured max_connections to 200, but it never gets past about 30 concurrent connections. I've double-checked the warm-standby config, shared_buffers, kernel shared memory, everything I know of, and PostgreSQL seems to be fine.
It looks like a cache problem to me because it only happens when the util_memoize cache fills up. In any other situation the queries take the usual amount of time.
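For context, util_memoize caches the result of a Tcl script in an ns_cache-backed cache; a minimal sketch using the standard OpenACS API (the memoized script here is made up for illustration):

```tcl
# Illustrative use of the standard OpenACS util_memoize API; the script
# being memoized is invented for this example.
set n [util_memoize {db_string count_users "select count(*) from users"} 600]

# Later, force re-computation by flushing the cached entry:
util_memoize_flush {db_string count_users "select count(*) from users"}
```

When the cache is full, inserting a new entry evicts older ones, so previously memoized scripts have to be re-executed against the database.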
In the PostgreSQL log I keep seeing the same timeout message:
2011-01-14 21:08:43 BRST LOG: não pôde receber dados do cliente: Tempo esgotado para conexão
(in English: "could not receive data from client: connection timed out")
It's a timeout message: the server stopped receiving data from the client and the connection timed out.
This problem is proving very difficult to track down.
How does AOLserver implement connection pooling? Are the connections persistent? Is there a separate pool for the cache-refresh queries?
I'm asking because there may be some connection reset in the network infrastructure based on connection time. Maybe that's the problem (I'm running out of ideas).
Gustaf, thank you very much for your time and your answers. They're being very helpful.
Do you have MaxOpen set for pool2? If so, remove your settings.
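For reference, database pools are defined in the AOLserver config file; a hypothetical fragment (section and parameter names per the standard nsdb module, values purely illustrative):

```tcl
# Hypothetical AOLserver config fragment; values are illustrative only.
ns_section "ns/db/pool/pool2"
ns_param driver      postgres
ns_param datasource  "localhost::mydb"
ns_param connections 5
# ns_param maxopen ...  ;# seconds a handle may stay open before being closed;
#                        # leave unset so handles are not forcibly recycled
# ns_param maxidle ...  ;# seconds an idle handle is kept; leave unset for the default
```

A nonzero MaxOpen forces the driver to close and reopen handles after that many seconds, which can interact badly with long-lived pooled connections.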
Concerning your questions:
ns_cache is completely agnostic about the database. The worker threads (e.g., connection threads) request db-handles (references to db-connections) via gethandle; in most cases they request a pool1 handle, and sometimes a pool2 handle (for subqueries). The handles are released either explicitly (which happens seldom) or automatically when the request is finished and the worker thread is done.
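The handle lifecycle described above can be sketched with the standard ns_db API (the pool name and query are illustrative):

```tcl
# Illustrative sketch of the db-handle lifecycle in AOLserver Tcl.
set db [ns_db gethandle pool1]    ;# blocks until a handle from pool1 is available

# Run a query on the handle (query is an example, not from the thread):
set count [ns_db 1row $db "select count(*) from users"]

ns_db releasehandle $db           ;# explicit release; normally this happens
                                  ;# automatically when the connection thread
                                  ;# finishes serving the request
```

The point is that handles, not raw connections, are what scripts hold; the underlying database connections stay open in the pool across requests unless something (like MaxOpen) forces them closed.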