Forum .LRN Q&A: Re: forums admin portlet title upgrade: run out of memory!

After the meeting, it's clear that I must wait for the CVS merge in order to get the new version numbering of the forums-portlet package. It should be 2.5.0d1, so I should write the upgrade-2.5.0d1-2.5.0d2.sql file. The content of that file could be as simple as...
update portal_element_map set pretty_name = '#forums.pretty_name#'
where name = 'dotlrn_forums_admin_portlet' and pretty_name != '#forums.pretty_name#';

I was trying to simulate the upgrade on a test server, but I got that ugly error...

Failed to install Forums Portlet, version 2.5.0d2. The following error was generated:

couldn't fork child process: not enough memory

---
I have read other forum posts about this kind of message, but I have no idea how to solve it.

The error message simply says that the machine ran out of memory while it was trying to start an external program. As a first measure, you should probably provide more swap space. What is the normal memory footprint of your AOLserver? How much physical memory does the machine have? What else runs on this machine?
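To answer those questions, a quick look at the machine itself helps. A minimal sketch using standard Linux tools (the process name nsd assumes a default AOLserver install):

```shell
# How much RAM and swap does the machine have, and how much is free?
free -m

# Which swap devices/files are active? (empty below the header if none)
cat /proc/swaps

# Memory footprint of each AOLserver process: resident (RSS) and
# virtual (VSZ) size in KB
ps -o pid,rss,vsz,cmd -C nsd || echo "no nsd process running"
```

If `free -m` shows 0 free swap, the fork failure above is exactly what you would expect: fork needs to reserve address space for the child, and with no swap left the reservation fails.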

Certainly, you should try to reduce the memory footprint of your installation by
- not installing packages which are not used
- not running too many scheduled repeating procs in their own threads
- rebooting the server from time to time
- checking the parameters of your config.tcl script (e.g. a high number of minthreads)

-gustaf neumann

The problem was the swap memory: 0KB free. Thanks!

The machine runs 5 AOLserver 4.0.10 instances. These are the official test servers. They are supposed to be used for launching automated tests, but we usually use them for testing scripts or Tcl code manually as well. Anyway, the installed packages are available for testing, and the server is not supposed to get very heavy traffic.

The current machine only has 1264MB of RAM and 611MB of swap. Our sysadmin has just increased the swap by another 1GB. He has also set up a daily reboot.

About config.tcl, I think it's the default one... it says...

   ns_param   maxthreads         10
   ns_param   minthreads         5

Thanks a lot.

PS: I've seen that when AOLserver is restarted, after loading all the OpenACS + .LRN stuff, it uses about 150MB of virtual memory; just going to the APM and running the upgrade script makes it grow to 300MB.
After a few more clicks, the virtual memory used by nsd grows to 611MB, which I think is a lot of growth for a few clicks in a short period of time.

1 GB of swap is not really much, but at least the machine will become slower (i.e. start swapping) before it crashes :). Extending swap should help in the short term. Is the machine also running the database? Anyhow, 1 GB of real memory is not much for running 5 fully configured OpenACS/dotlrn instances.

Concerning the parameters: I would recommend commenting out the minthreads line and reducing maxthreads to 5. Note that every thread gets a full copy of the blueprint (containing the Tcl code from all -procs.tcl files). Every thread has its own Tcl interpreter; once you load dotlrn, the threads are far from lightweight.
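In config.tcl terms, that suggestion would look roughly like this (section name as in a standard AOLserver 4.0 config; substitute your own `${server}`):

```tcl
ns_section ns/server/${server}
    # ns_param minthreads  5   ;# commented out: let idle conn threads exit
    ns_param   maxthreads  5   ;# was 10; each conn thread carries a full blueprint copy
```

The trade-off: fewer threads means less concurrency under load, but on a memory-starved box that is usually the better deal.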

Concerning the memory growth: when the server starts, only the bare minimum of threads is created. After some time the first scheduled procs are run (note that every repeating scheduled proc running in its own thread will use its own thread containing the full blueprint). Reducing the number of such scheduled procedures will help to reduce the memory consumption as well.
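In OpenACS terms, the difference is whether a repeating proc is scheduled with the `-thread` flag of `ad_schedule_proc` (the proc name below is hypothetical):

```tcl
# Runs every 300 seconds in its own dedicated thread: each run pays
# for a full blueprint copy in that thread.
ad_schedule_proc -thread t 300 my_cleanup_proc

# Runs every 300 seconds inside the shared scheduler thread: no extra
# per-proc thread, so no extra blueprint copy.
ad_schedule_proc 300 my_cleanup_proc
```

Packages sometimes use `-thread t` defensively so a slow proc cannot block the scheduler; auditing which procs really need it is one way to cut the number of heavyweight threads.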

So when the server starts, you might have only a few threads running (e.g. 4: main, driver, sched, and 1 conn thread); when the first requests come in (e.g. for a page with some images), you are likely to have, with your configuration, 10 connection threads running (we are now at 13); and if you run the full set of scheduled procs from all dotlrn packages, you might have 10 or even more scheduled threads (we are now at 23 threads). So it is not unlikely to use 6 times the memory after some time, even if one takes into account that zippy and Tcl tend to over-allocate memory for speed reasons.
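As a back-of-the-envelope sketch of that growth (the per-thread sizes are assumptions chosen to roughly match the figures reported above, not measurements):

```python
# Rough model: a shared base plus one blueprint copy per thread.
# base_mb and blueprint_mb are illustrative guesses only.
def estimated_memory_mb(threads, base_mb=50, blueprint_mb=25):
    return base_mb + threads * blueprint_mb

startup = estimated_memory_mb(4)    # main, driver, sched, 1 conn thread
busy    = estimated_memory_mb(23)   # 10 conn threads + 10+ scheduled threads
print(startup, busy)                # → 150 625
```

With these made-up numbers the model lands close to the 150 MB at startup and 611 MB later that were observed, which is only meant to show that near-linear growth with the thread count is plausible.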

Hope this explains the underlying mechanisms a little.
-gustaf neumann