Forum OpenACS Development: Re: Pipe-open Considered Harmful
I wanted to add a related note that touches on the same subject without being exactly identical: we've seen significant slowdowns on systems when using Tcl to pipe data to a network-connected file system (e.g. NFS or EFS), along the lines of:
set fd [open /nfs/file w]
http::geturl $url -channel $fd ...
I'm assuming this comes down to the buffering settings on the file handle and the configuration of the network filesystem, but in general it is better to stream to block storage instead.
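If the write path has to stay on NFS, one knob worth trying is the channel's buffering. This is only a sketch of the idea (the path is illustrative, and whether it helps depends on the filesystem): a larger buffer makes Tcl flush in fewer, bigger writes to the network filesystem.

```tcl
package require http

# Sketch: open the target file on the network filesystem and enlarge
# the channel buffer so Tcl flushes in larger chunks.
set fd [open /nfs/file w]
fconfigure $fd -translation binary -buffering full -buffersize 1048576
http::geturl $url -channel $fd
close $fd
```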
AFAIK, Tcl's http::geturl does not open a pipe, so this is something different. Looking at the implementation of http::geturl, I would assume it is not the case that the full server with all its threads stops, but rather that a single request takes a long time. Tcl's http::geturl should be avoided inside connection threads anyway, since it uses its own event management, and it will hard-crash Tcl (segmentation violation) when more than 1024 file descriptors are open (which can easily happen on busy servers).
One should use the builtin ns_http instead. Running HTTP requests against external sources in connection threads is dangerous (potentially vulnerable to slow-read or slow-write attacks) unless the runtime of the request is limited. Blocking connection requests can furthermore lead to running out of connections. To address this, ns_http has the -donecallback option, which allows HTTP requests to run in a background thread.
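A minimal sketch of what that can look like (the handler proc and URL are made up for illustration; check the ns_http documentation of your NaviServer version for the exact option spelling and the arguments passed to the callback):

```tcl
# Hypothetical completion handler; the actual callback arguments
# depend on the NaviServer version.
proc my_done_handler {args} {
    ns_log notice "ns_http request finished: $args"
}

# Queue the request: NaviServer runs it in a task thread and invokes
# the callback when it has finished, so the connection thread does
# not block on the remote server.
ns_http queue -donecallback my_done_handler https://example.com/resource
```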
If you really want to write to a slow NFS drive, and this takes long or has a large performance variance, then you should consider doing this asynchronously or in a background job.
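One way to push the slow write out of the connection thread is NaviServer's ns_job facility. A sketch under stated assumptions: the queue name and the proc are made up, the queue must be created once (e.g. at server start), and the -outputfile option for spooling the response body to a file should be checked against your ns_http docs.

```tcl
# One-time setup, e.g. at server start: a job queue with 2 worker threads.
ns_job create slowio 2

# Hypothetical helper: fetch the URL and spool the body to the NFS path.
proc write_to_nfs {url path} {
    # -outputfile writes the response body to a file instead of memory;
    # verify the option name in your NaviServer version.
    ns_http run -outputfile $path $url
}

# In the connection thread: enqueue the job and return immediately.
ns_job queue -detached slowio [list write_to_nfs $url /nfs/file]
```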
all the best -g