Just thought I'd throw in another issue to consider. In case people have not considered all of the potential problems with log size/space usage...
I believe Roberto mentioned (a year or so ago) that a total solution must limit the *total* space used by logfiles. I immediately implemented the solution he suggested, and I will never have to address the space issue again.
Time-based log maintenance will never guarantee an absolute maximum limit on individual logfile size. At best, it can attempt to guess how large a file may become by *assuming* a particular rate of growth. But if the actual rate increases substantially beyond the assumed rate, you may find that the same problems (runaway file size, exhausted storage space) reappear. How can that happen? Slashdot is one answer.
If I'm running a production (as in commercial) site, and I wish not to be embarrassed (at the very least) by a crashed AOLserver or, worse, a full filesystem just when the public flocks in at incredible rates, I'm going to protect against this problem.
A true solution to this problem must be based on (duh) file size, not time. A true solution would take into account maximum individual file size and number of files. You can then establish an upper bound on the space used and *know* that it will not be exceeded. I don't think a scheduled proc is the right *final* answer here unless it runs often enough to be certain that the file cannot grow past its bound. The only way I can think of to be sure is to pick a rate that *cannot* be exceeded given a particular combination of hardware and software, i.e. how fast the disk can be written. The scheduled proc would then run at the calculated frequency and roll the log only when its size is sufficiently large. The proc would keep only the latest n archives.
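To make the idea concrete, here is a minimal sketch (in Python, just for illustration; the names and the one-megabyte figure are mine, not anything from AOLserver) of the kind of check such a scheduled proc would perform: roll the file once it reaches its size bound, shift older archives up, and keep only the latest n.

```python
import os

def roll_if_needed(logfile, max_bytes, keep):
    """Roll logfile once it reaches max_bytes, keeping at most
    `keep` archives named logfile.1 (newest) .. logfile.<keep> (oldest).

    Total space stays bounded by roughly (keep + 1) * max_bytes, plus
    whatever the live file can grow between runs -- which is exactly why
    the scheduling interval must be chosen against the worst-case
    write rate of the hardware."""
    if not os.path.exists(logfile) or os.path.getsize(logfile) < max_bytes:
        return False
    # Drop the oldest archive so the shift below cannot exceed `keep`.
    oldest = "%s.%d" % (logfile, keep)
    if os.path.exists(oldest):
        os.remove(oldest)
    # Shift logfile.1 -> logfile.2, logfile.2 -> logfile.3, etc.
    for i in range(keep - 1, 0, -1):
        src = "%s.%d" % (logfile, i)
        if os.path.exists(src):
            os.rename(src, "%s.%d" % (logfile, i + 1))
    # The live file becomes the newest archive.
    os.rename(logfile, logfile + ".1")
    return True
```

In a real AOLserver deployment this logic would live in a Tcl scheduled proc rather than a standalone script, but the size test, the archive shift, and the fixed archive count are the whole of it.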
'multilog', a component of DJB's daemontools, can be configured to meet the above requirements. It takes (little) effort to install and configure, and it solves this problem...once and for all.
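For reference, a daemontools log/run script along these lines gives the hard bound described above (the 1 MB size and the count of 10 are my example numbers, not a recommendation):

```shell
#!/bin/sh
# log/run script for a daemontools-supervised service: supervise feeds
# the service's stdout into this script's stdin.
#   t        - prefix each line with a TAI64N timestamp
#   s1048576 - roll 'current' once it reaches 1 MB
#   n10      - keep at most 10 log files in ./main
# Worst case, ./main holds about 10 MB of logs -- a bound multilog
# enforces by size, not by guessing a growth rate.
exec multilog t s1048576 n10 ./main
```

Because multilog sits on the write path itself, it never has to assume a growth rate the way a periodically scheduled proc must.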