
Forum OpenACS Q&A: Re: Large Error Logs

9: Re: Large Error Logs (response to 1)
Posted by Randy O'Meara on
Just thought I'd throw in another issue, in case people have not considered all of the potential problems with log size and space usage...

I believe Roberto mentioned (a year or so ago) that a total solution must limit the *total* space used by logfiles. I immediately implemented a solution, suggested by Roberto, and will never have to address the space issue again.

Time-based log maintenance will never guarantee an absolute maximum limit on individual logfile size. At best, it can guess how large a file may become by *assuming* a particular rate of growth. But if the actual rate substantially exceeds the assumed one, the same problems (file size, total storage space) reappear. How can that happen? Slashdot is one answer.

If I'm running a production (as in commercial) site, I'm going to protect against this problem; I do not wish to be embarrassed, at the very least, by a crashed AOLserver or (worse) a full filesystem just when the public flocks in at incredible rates.

A true solution to this problem must be based on (duh) file size, not time. It would take into account maximum individual file size and number of files; you can then establish an upper bound on the space used and *know* that it will not be exceeded. I don't think a scheduled proc is the right *final* answer here unless it runs often enough to be certain that the file cannot grow past its bound. The only way I can think of to be sure is to pick a write rate that *cannot* be exceeded by a particular collection of hardware and software, i.e. the maximum speed at which the disk can be written. The scheduled proc would then run at the calculated frequency, roll the log only when its size is sufficiently large, and keep only the latest n archives.
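A minimal sketch of the rolling step such a scheduled check would perform, assuming hypothetical paths and illustrative size/count bounds (none of these names come from OpenACS itself):

```shell
#!/bin/sh
# Sketch of size-based rolling: when the log exceeds a byte bound,
# shift archives .1 .. .(n-1) up by one and move the live log to .1,
# so at most n files ever exist. Paths and bounds are illustrative only.
roll_if_needed() {
    log=$1 max=$2 n=$3
    [ -f "$log" ] || return 0
    size=$(wc -c < "$log")
    [ "$size" -gt "$max" ] || return 0
    i=$n
    while [ "$i" -gt 1 ]; do
        prev=$((i - 1))
        # shift older archives toward the end; the oldest falls off
        [ -f "$log.$prev" ] && mv "$log.$prev" "$log.$i"
        i=$prev
    done
    mv "$log" "$log.1"
}
```

Note that the rename alone isn't enough for a live server log: the writing process still holds the open file descriptor after the `mv`, so in practice the roll has to make the server reopen its log (AOLserver's `ns_logroll` command exists for exactly this).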

'multilog', a component of DJB's daemontools, can be configured to meet the above requirements. It takes little effort to install and configure, and it solves this problem...once and for all.
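For reference, a daemontools run script along these lines might look as follows. The log directory path is illustrative; `t`, `s`, and `n` are real multilog script actions:

```shell
#!/bin/sh
# Hypothetical run script for a daemontools log service supervising AOLserver.
#   t        - prepend a TAI64N timestamp to each line
#   s1048576 - cap each log file at ~1 MB
#   n10      - keep at most 10 files, bounding total space at roughly 10 MB
exec multilog t s1048576 n10 /var/log/aolserver
```

With those two numbers you get exactly the guarantee described above: a hard upper bound on total log space, regardless of how fast the server writes.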

10: Re: Large Error Logs (response to 9)
Posted by Andrew Piskorski on
Randy, you are basically correct, and using Mayoff's dqd_log or the like to roll logs based on total disk space used is the way to go if you really need or want that.

But that's unnecessary for probably 95% of OpenACS users, and setting it up, however easy, is still significantly more complicated: at a minimum you also need to install one extra AOLserver module and DJB's multilog. It would be nice to explain how to do so in the docs, but the out-of-the-box configuration OpenACS provides should just use the simplest time-based log rolling possible, without any extra external dependencies or complications.
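As a sketch of what that simplest time-based setup could be, assuming stock AOLserver with no extra modules (the hour/minute values are arbitrary examples):

```tcl
# Roll the server error log once a day at midnight, purely time-based.
# ns_logroll renames the current log and reopens a fresh one;
# ns_schedule_daily runs the given script at the stated hour and minute.
ns_schedule_daily -thread 0 0 ns_logroll
```

That one line, dropped into the server's startup Tcl, covers the common case with no external tools at all; the multilog approach above is only needed when a hard space bound truly matters.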