Given the scale of what you are trying to do, here is what I would suggest:
1. First, use a 'trace' filter. Trace filters run after the connection has closed, they always run, and at that point you have the most information about what happened: you can record information during the request and handle it all at the end. Because they run after the response has been sent, trace filters don't slow down serving the request.
2. Dump your collected information to a log file of some kind. You may be able to piggyback on the error log or the access log, or create a new file to accept the output.
3. Unfortunately, step 2 requires that there be some text format for the HDF data. I would bet there is one.
4. There are utilities that can split the error log out into multiple files (multilog, for example), so using the error log facility is a good choice.
5. Periodically process the output files with whatever tools are available for your HDF client/server.
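To make step 1 concrete, here is a minimal sketch of what such a trace filter might look like in AOLserver Tcl. The proc name `hdf_trace` and the `HDFTRACE` log marker are made up for illustration; check your AOLserver version's docs for the exact filter proc signature before using this.

```tcl
# Sketch of a trace filter (runs after the connection closes).
# It writes one marked line per request to the error log, which a
# tool like multilog can later split out into its own file.
proc hdf_trace {why} {
    # ns_conn still has the request details at trace time; record
    # whatever you need for the HDF side.  "HDFTRACE" is just an
    # arbitrary marker to make these lines easy to filter later.
    ns_log notice [format "HDFTRACE %s %s %s" \
        [ns_conn method] [ns_conn url] [ns_conn peeraddr]]
    return filter_ok
}

# Register it as a trace filter for all GET requests; trace filters
# always run, even when the request handler errored out.
ns_register_filter trace GET /* hdf_trace
```

The filter itself stays cheap: it only appends a line of text, and all the real HDF work happens later in the offline processing pass.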
Using a text file as a buffer, you can start immediately instead of writing a module specific to AOLserver or Tcl, and you can test each part separately. You also get a transaction log, and you don't have to worry about your HDF software erroring out, or about switching software later.
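The offline processing pass (step 5) can then be an ordinary standalone Tcl script, runnable and testable outside the server. A rough sketch, assuming the `HDFTRACE method url peer` record layout used above and a hypothetical file name `trace.log`:

```tcl
# Sketch of the offline pass: replay the buffered trace records and
# hand each one to the HDF tooling.  Adjust the file name and the
# regexp to match whatever your filter actually wrote.
set in [open "trace.log" r]
while {[gets $in line] >= 0} {
    if {[regexp {HDFTRACE (\S+) (\S+) (\S+)} $line -> method url peer]} {
        # Replace this puts with the real call into your HDF
        # client/server tools.
        puts "would record: $method $url from $peer"
    }
}
close $in
```

Because this script only reads the text buffer, you can rerun it after a crash or after changing HDF software, which is exactly the transaction-log benefit described above.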