This is a warning for OpenACS programmers to avoid Tcl's "open" with command pipelines when possible. When moving from bare metal to virtualized environments, we experienced situations where the server was suddenly freezing for several seconds. This might not be a big issue for some applications, but when the server receives 1000+ requests per second, such freezes lead to substantial queuing.
The problem turned out to be a common Tcl idiom: opening a command pipeline with the Tcl "open" command [1]. The following demo page uses a pipe open with a simple external program ("cat").
set t [clock milliseconds]
set f [open "|cat" w]; puts $f "hi"; close $f
ns_return 200 text/plain "pipe open took [expr {[clock milliseconds] - $t}]ms"
This page takes a few milliseconds on bare-metal servers. However, when the memory footprint of NaviServer gets large (e.g., 60GB, when running 100+ threads with a huge blueprint and large caches) and the server runs in a virtualized environment, we experienced that the same page was taking 6s or more. A quick test showed that a fork in the virtualized environment is twice as slow (but your mileage may vary due to different virtualization environments, etc.).
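To get a rough feel for this cost on a given machine, one can repeat the measurement idiom from the demo page with Tcl's built-in "time" command; this is only a sketch, and the absolute numbers are highly machine- and environment-dependent:

```tcl
# Rough micro-benchmark: average cost of a pipe open/close,
# which includes the fork(). Compare the result in a plain
# tclsh against the same code running inside a large server
# process; the fork cost grows with the process footprint.
set us [time {
    set f [open "|cat" w]
    puts $f "hi"
    close $f
} 10]
puts "average pipe open/close: $us"
```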
The problem is that Tcl performs a fork() operation to spawn the pipe, and during the fork, everything in this process stops: every thread of the process, every write operation (logs, network), etc. In this situation you will also see long mutex lock durations, since the unlock only happens after the fork() has finished.
While the fork() operation of "exec" is largely avoided via nsproxy (see also [2]), the fork() inside Tcl's "open" is not covered. So, to avoid the problem, avoid constructs like
set f [open "|$cmd ..." w]; ....; close $f
and instead write the data to a temporary file, pass the file to the command, "exec" it, and delete the temporary file afterwards.
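A minimal sketch of this alternative, assuming Tcl 8.6's "file tempfile" (which opens a temporary file and stores its name in the given variable); "cat" again stands in for the real command:

```tcl
# Write the data to a temporary file instead of a pipe.
set f [file tempfile tmpname]
puts $f "hi"
close $f
# "exec" still forks, but in NaviServer with nsproxy configured
# the fork happens in a small helper process, not in the large
# server process.
exec cat $tmpname
# Clean up the temporary file when done.
file delete $tmpname
```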
Hope this helps somebody.
-g
[1] https://www.tcl.tk/man/tcl8.6/TclCmd/open.html
[2] https://openacs.org/xowiki/out-of-memory-exec