Forum OpenACS Q&A: Large Error Logs

Posted by Koroush Iranpour on
I have tried to follow the threads about large error logs and how to address the issue. I've tried the tips, except the one where CVS is mentioned, but haven't obtained any significant results.

Where can I get more information on how to deal with and solve this problem? I believe the fix would be embedded in the AOLserver source code, as I can't see anything in the OpenACS code.

Any guidance would be appreciated

Thanks

2: Re: Large Error Logs (response to 1)
Posted by Jade Rubick on
See: https://openacs.org/bugtracker/openacs/bug?bug%5fnumber=594

Let us know what you try, and if it doesn't work, we'll help out 😊

3: Re: Large Error Logs (response to 2)
Posted by Joel Aufrecht on
I'd really like to fix this. It will get fixed when any one of the following conditions is satisfied (this is generally true for most bugs):
  • Somebody posts a complete description of the fix, including all necessary changes to all files (i.e., a patch or equivalent), and this fix is obviously backwards-compatible. Reading the three forum threads and two bugs, the best fix seems to be Andrew's, detailed in two parts, first at https://openacs.org/forums/message-view?message_id=72941 and then at https://openacs.org/forums/message-view?message_id=103601. I didn't put it in then because there wasn't a firm assertion of exactly where all the bits should go.
  • I find spare time to take the many different, useful comments, implement them on a test server, and make sure that nothing's missing.
  • Another committer fixes it.
4: Re: Large Error Logs (response to 1)
Posted by Mark Aufflick on
There have also been discussions about making logging more granular, so the logs don't have to get so darn big all the time!

I haven't been able to find those threads by searching the forums, but they are there somewhere.

5: Re: Large Error Logs (response to 4)
Posted by Jeff Davis on
I changed a lot of the more verbose and generally less useful notice messages to debug (or removed them entirely).

If you run a server with debug off, your logs now grow quite slowly.
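For reference, the knob in question lives in the server config file. A sketch of the relevant fragment, assuming the stock OpenACS config.tcl layout (the `$homedir`/`$server` variables and exact parameter names may differ between AOLserver versions, so check yours):

```tcl
# ns/parameters section of config.tcl (sketch; verify parameter names
# against your AOLserver version)
ns_section ns/parameters
    ns_param serverlog ${homedir}/log/${server}-error.log
    ns_param debug false   ;# with debug off, Debug: messages are suppressed
```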

6: Re: Large Error Logs (response to 5)
Posted by Bruce Spear on
Here's a little solution, in Perl, written by my buddy Manfred, that might help you.

1. Create a logrotate script that archives error.log daily on a seven-day rotation and notifies you via email.
2. Have cron run it daily.

############################################################################
# logrotate.pl
############################################################################

# Rotate the logfiles

use File::Copy "mv";

$LOGFILE="error.log";
$LOGDIR="/usr/services/service0/log/";

($day, $mon, $year, $wday) = (localtime) [3,4,5,6];
($shh, $smm, $ssec) = (localtime) [2,1,0];
$mon += 1;       # localtime months run 0-11
$year += 1900;   # localtime years count from 1900

# Zero-padded timestamp for the temp report file, so names sort correctly
$datum = sprintf "%04d%02d%02d-%02d%02d%02d", $year, $mon, $day, $shh, $smm, $ssec;
$src = "$LOGDIR$LOGFILE";
$dst = "$LOGDIR$LOGFILE.$wday";

$MAILF = "/tmp/logrotate.$datum";
open (DEBUG, ">$MAILF") or die "$0: Can't create $MAILF $!\n";
print DEBUG "$0 Report to Bruce:\n";

print DEBUG "Operation starts: $shh:$smm:$ssec\n";
print DEBUG "mv $src $dst\n";
mv ($src, $dst) || die "$0: mv ($src, $dst) failed\nreason: $!\n";

# 'svc -h' sends SIGHUP via daemontools, making AOLserver reopen its log file
$cmd = "/usr/local/bin/svc -h /usr/services/service0/etc/daemontools/";
@res = `$cmd`;
print DEBUG "Sending HUP signal to service.\n$cmd\n@res\n";
print DEBUG "success.\n";
close DEBUG;

fini();

sub fini {
        # Mail the report to the admin, then remove the temp file
        system ("/usr/bin/Mail -s \"[log] $day.$mon.$year\" boylston\@zedat.fu-berlin.de <$MAILF");
        unlink ($MAILF);
        exit 0;
}
############################################################################
# nsd.cron.daily
############################################################################
#!/bin/sh
/bin/su -c /usr/services/tools/logrotate.pl - service0

7: Re: Large Error Logs (response to 1)
Posted by Andrew Piskorski on
While sending a HUP signal to AOLserver, as the Perl script above does, works just fine, there are substantially better and simpler ways to do this. See the bug link Jade gave above for further info.
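For the curious, one of the simpler routes is to let AOLserver roll its own log from inside the server, with no HUP or external script at all. This is a sketch of that general approach, not Andrew's exact patch; the `maxbackup` parameter name is from memory, so verify it against your AOLserver's documentation:

```tcl
# Sketch: roll the error log nightly from inside the server.
# ns_logroll renames the current log and reopens a fresh one,
# keeping up to 'maxbackup' old copies (ns/parameters).
ns_section ns/parameters
    ns_param maxbackup 7

# Somewhere in server-startup Tcl:
ns_schedule_daily -thread 0 0 ns_logroll
```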
8: Re: Large Error Logs (response to 7)
Posted by Bruce Spear on
Andrew: Right you are. Those prior posts offer far more elegant solutions; I simply hadn't dug deeply enough in the OpenACS forums to find them. Thanks! Bruce
9: Re: Large Error Logs (response to 1)
Posted by Randy O'Meara on
Just thought I'd throw in another issue to consider, in case people have not thought through all of the potential problems with log size and space usage...

I believe Roberto mentioned (a year or so ago) that a total solution must limit the *total* space used by logfiles. I immediately implemented a solution, suggested by Roberto, and will never have to address the space issue again.

Time-based log maintenance will never guarantee an absolute maximum on individual logfile size. At best, it can attempt to guess how large a file may become by *assuming* a particular rate of growth. But if the rate increases substantially beyond the assumed rate, you may find that the same problems (file size, total storage space) reappear. How can that happen? Slashdot is one answer.

If I'm running a production (as in commercial) site, I don't want to be embarrassed, when the public flocks in at incredible rates, by a crashed AOLserver or (worse) a full filesystem. So I'm going to protect against this problem.

A true solution to this problem must be based on (duh) file size, not time. It would take into account maximum individual file size and number of files; you can then establish an upper bound on the space used and *know* that it will not be exceeded. I don't think a scheduled proc is the right *final* answer here unless it runs often enough to be certain that the file cannot grow past its bound. The only way I can think of to be sure is to pick a rate that *cannot* be exceeded given a particular collection of hardware and software, i.e., how fast the disk can be written. The scheduled proc would then run at the calculated frequency and roll the log only when its size is sufficiently large, keeping only the latest n archives.
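The size-based policy described above can be sketched in a few lines of shell. This is a hypothetical helper, not part of OpenACS; the function name and parameters are made up for illustration, and cron (or a scheduled proc) would need to call it often enough that the log cannot outgrow the cap between runs:

```shell
#!/bin/sh
# roll_if_big LOGFILE MAXBYTES KEEP
# If LOGFILE exceeds MAXBYTES, shift archives LOGFILE.1 .. LOGFILE.KEEP
# up by one (discarding the oldest) and move LOGFILE to LOGFILE.1.
# Returns 0 if a roll happened, 1 otherwise.
roll_if_big() {
    log=$1; maxbytes=$2; keep=$3
    size=$(wc -c < "$log" 2>/dev/null || echo 0)
    [ "$size" -le "$maxbytes" ] && return 1
    i=$keep
    while [ "$i" -gt 1 ]; do
        prev=$((i - 1))
        [ -f "$log.$prev" ] && mv "$log.$prev" "$log.$i"
        i=$prev
    done
    mv "$log" "$log.1"
    return 0
}
```

After the roll you would still signal the server to reopen its log (e.g. the `svc -h` HUP trick from the Perl script earlier in this thread). Total space is then bounded by roughly MAXBYTES * (KEEP + 1).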

'multilog', a component of DJB's daemontools, can be configured to meet the above requirements. It takes little effort to install and configure, and it solves this problem once and for all.
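For concreteness, a daemontools log/run script along these lines would do it (a sketch; the size, count, and log directory are illustrative, not recommendations). multilog's s and n actions cap the size of each log file and the number of files kept, which bounds total space at roughly s * n bytes:

```shell
#!/bin/sh
# daemontools log/run sketch: cap each file at ~1 MB, keep at most 10 files,
# timestamp each line (t). Assumes multilog is installed; path is an example.
exec multilog t s1048576 n10 /usr/services/service0/log/multilog
```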

10: Re: Large Error Logs (response to 9)
Posted by Andrew Piskorski on
Randy, you are basically correct, and using Mayoff's dqd_log or the like to roll logs based on total disk space used is the way to go if you really need or want that.

But that's unnecessary for probably 95% of OpenACS users, and setting it up, regardless of however easy, is still significantly more complicated - minimally you need to also install one extra AOLserver module and DJB's multilog. So it would be nice to explain how to do so in the docs, but the out of the box configuration OpenACS provides should just use the simplest, time-based log rolling configuration possible, without any extra external dependencies or complications.