Forum OpenACS Q&A: Response to Memory usage - what is normal?

Posted by Don Baccus
How are you calculating that?  Are you including shared memory?
<p>
In the case of Postgres, there are three main consumers of memory:
<ul>
<li>The code - this is shared between the backend processes
<li>The shared buffer cache - again, shared between backends
<li>Temp storage - used for building hash tables for hash joins, for sorting, for holding query results, etc.
</ul>
The total memory used is the size of the code, plus the size of the shared buffer cache, plus the sum of the temp storage currently in use across all the backends.
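<p>
Stated as a rough formula (just restating the above):
<pre>
total ~= code image (counted once)
       + shared buffer cache (counted once)
       + sum of per-backend temp storage
</pre>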
<p>
In essence, for scaling up you want to decide two things: how much of your machine to allocate to the Postgres shared buffer cache (max 16 MB without rebuilding your kernel) and how much space to allow Postgres for sorts, etc. before it spills to disk (I'd make this value fairly large).  The maximum memory used will then be the size of the cache, plus the size of the code image, plus the sum of the private space you allow each backend for sorts, etc., plus stack storage for each process (not much).
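<p>
To make that concrete, here's roughly how you'd set those two knobs when starting the postmaster (flag spellings per the Postgres docs of this vintage, so check your version; the numbers are only illustrative):
<pre>
# -B counts shared disk buffers of 8 kB each, so 8192 buffers = 64 MB.
# -o passes options through to each backend; the backend's -S option is
# the memory (in kB) an internal sort or hash may use before spilling to disk.
postmaster -i -D /usr/local/pgsql/data -B 8192 -o '-S 16384'
</pre>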
<p>
If you just blindly read the numbers from top without figuring out which are shared between processes, you'll think a lot more memory's being used than really is.
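<p>
For instance (made-up numbers): ten backends each showing around 70 MB in top, with the 64 MB shared cache mapped into every one of them, naively sum to 700 MB, when the real footprint is more like 64 + 10 x 6 = 124 MB.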
<p>
On the dual P450 I'm setting up as my production machine, I have 256 MB of RAM.  I intend to devote 64 MB to the shared buffer cache and to let each backend use up to 8 or 16 MB for sorting, etc. (I forget now just how much I configured it for).  In reality, the chance that more than one or two backends will be doing large sort/merge or hash joins at any one point in time is pretty low, so I imagine I could set that figure larger.  On a straight ACS site, joins will be smaller on average than in my bird species distribution database.
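<p>
Plugging in those numbers as an illustration: with the 64 MB cache, a code image of a few MB, and, say, 20 backends of which only two are doing 16 MB sorts at once, that's 64 + 4 + 2 x 16, or about 100 MB in practice, versus 64 + 4 + 20 x 16 = 388 MB if every backend sorted simultaneously.  Which is why the worst case matters less than it looks.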
<p>
AOLserver's less configurable AFAIK, but its code footprint is only a few megabytes.  It does its own caching, etc., too, of course.