Forum OpenACS Q&A: Re: Tracking Memory Usage of nsd

Posted by Dave Bauer on
I found a daily procedure that was loading a 256 MB XML file into a Tcl variable and parsing it.

I disabled that as it was a legacy function no longer needed.

I will have to rewrite it as a plain Tcl cron job, since it does not really use any OpenACS or AOLserver functions that can't be replaced by regular Tcl.

So that explains the "unable to alloc 512 MB" notice at least. Process size is down to around 500 MB and has been stable for 24 hours so far.

Posted by Gustaf Neumann on
... parsing a 256 MB file ...

Note that when one has a Tcl obj of size X and appends to it, Tcl tends to double the allocated size, to avoid further copy operations when one appends to that string again. This strategy is quite runtime-efficient when more appends happen later, but also quite wasteful when X is e.g. 1 GB. A similar situation can arise, depending on the Tcl version and operation, when Tcl converts the string into UCS-2, where every character is represented by two bytes. Also, when parsing the XML file, the parser needs space for its own data structures and for copies of the string chunks.
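The doubling strategy can be illustrated with a minimal sketch (in Python for brevity; `GrowBuffer` is a hypothetical stand-in, not Tcl's actual implementation). It shows why appending to a 256 MB string can trigger a 512 MB allocation:

```python
# Hypothetical sketch of grow-by-doubling buffer allocation,
# the strategy Tcl uses for growing string objects.
class GrowBuffer:
    def __init__(self):
        self.capacity = 16   # bytes currently allocated
        self.length = 0      # bytes actually used

    def append(self, nbytes):
        self.length += nbytes
        # Double the allocation until the data fits, so that a long
        # run of appends needs only O(log n) reallocations.
        while self.length > self.capacity:
            self.capacity *= 2

buf = GrowBuffer()
buf.append(256 * 1024 * 1024)            # load a 256 MB file
buf.append(1)                            # one more byte appended ...
print(buf.capacity // (1024 * 1024))     # ... forces a 512 MB allocation
```

The amortized cost per append is constant, but at the moment of the last doubling the process holds both the old and the new buffer, which matches the "unable to alloc 512 MB" notice seen above.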

So, altogether I would not be surprised if the parsing used 1 GB of memory. The bad thing about the zippy allocator is that it does not return memory to the operating system, so the nsd process grows through such operations but does not shrink later. Using a different allocator can help here, but that requires Tcl modifications.

Another option is to use ns_proxy [1], which runs as a separate process. When that process terminates, all of its memory is returned to the OS. It might be less effort for you to move the parsing into an ns_proxy than into a separate cron job.
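A sketch of the ns_proxy approach, using the get/eval/release commands from the manual linked below (the pool name "parser" and the file path are placeholders; the pool must be configured in the server first, and this only runs inside NaviServer/AOLserver):

```tcl
# Run the XML parsing inside an ns_proxy worker process, so the memory
# it allocates never bloats the main nsd process and is freed when the
# worker exits.  "parser" is an assumed pool name for illustration.
set handle [ns_proxy get parser]
ns_proxy eval $handle {
    # This script runs in the slave process.
    set f [open /tmp/big.xml]
    set xml [read $f]
    close $f
    # ... parse $xml and return only the (small) result ...
}
ns_proxy release $handle
```

Only the result of the eval'd script crosses back to nsd, so the large intermediate string and parser structures stay in the disposable worker.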

-gustaf neumann

PS: Similar bulky memory growth can happen in the search filters, e.g. when huge PDF files or the like are indexed.

[1] http://naviserver.sourceforge.net/n/nsproxy/files/doc/mann/ns_proxy.html