Forum OpenACS Development: Re: whole page caching using memcached

Posted by Gustaf Neumann on
For typical OpenACS applications, whole-page caching has many problems and limitations. Keep in mind that a page depends not only on the "application objects" (tracking the implications of operations, especially on aggregating functions, is hard) but also on the user state (user_id, cookies, permissions). For personalized content (e.g. dotlrn, where essentially every user sees different content for the same URL) you have to construct quite complex cache keys containing the query and the user context, which will be some work to get right, especially if you try to re-implement this in nginx. I doubt that the hit rate in such a cache would be high. Caching page fragments is another story, but that won't be of use in nginx, and doing it via ns_cache will probably be much faster on AOLserver/NaviServer.
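Just to illustrate the key-construction problem, a minimal sketch of what a whole-page cache lookup would have to encode. This is not code from any package: the cache name, the TTL-free setup and the helper render_page are made up, and the ns_cache commands assume the AOLserver nscache module (or its NaviServer equivalent) is available.

# Hypothetical whole-page cache: the key must encode everything the
# rendered page depends on, not just the URL.
ns_cache create page_cache -size [expr {10 * 1024 * 1024}]

proc deliver_cached_page {} {
    set url     [ns_conn url]
    set query   [ns_conn query]
    set user_id [ad_conn user_id]   ;# user state: permissions, personalization

    # One entry per (user, url, query) combination; with personalized
    # content most keys will only ever be requested by a single user.
    set key "$user_id|$url|$query"

    set html [ns_cache eval page_cache $key {
        render_page $url $query   ;# hypothetical: render the full page on a miss
    }]
    ns_return 200 text/html $html
}

With per-user keys like this, the cache mostly stores pages that nobody asks for a second time, which is why I doubt the hit rate.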

We did some experiments with memcached/pgmemcache and AOLserver a few years ago on our learn system, with mixed results. pgmemcache was needed in our case, since often only our stored procedures knew which objects are affected by a change. For our applications the performance gain was very small (often nonexistent) compared with the classical approach of storing in the db + ns_cache (full-page caching was not applicable). My recommendation is to identify the bottlenecks of your application and improve there.
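For reference, the classical db + ns_cache pattern looks roughly like this (a sketch only; the cache name, key, table and query are made up, db_string is the standard OpenACS database API):

proc item_title_cached { item_id } {
    # db_string runs the query only on a cache miss; the result stays
    # in the server-local item_cache until it is flushed explicitly,
    # e.g. via "ns_cache flush item_cache title-$item_id" on update.
    return [ns_cache eval item_cache title-$item_id {
        db_string get_title "select title from my_items where item_id = :item_id"
    }]
}

The point is that here the write path in Tcl knows exactly which key to flush; in our setup the change often happened inside a stored procedure, so only pgmemcache could do the invalidation, and even then the net gain was small.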

If you are not discouraged from page caching, you might find xowiki an easy animal for experimenting with it. xowiki builds a string from every query, which is sent back to the user via a "delivery method" (normally ns_return, see method reply_to_user). It is easy to use alternative delivery methods for specialized packages. The most important part is in xowiki/www/index.vuh.
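Just to sketch the idea (this is not the actual xowiki code; the proc name and cache are made up, and how the delivery method is hooked in is package-specific, see reply_to_user and index.vuh): an alternative delivery method could drop the rendered string into a cache before handing it to the client.

# Hypothetical alternative delivery method: remember the rendered page
# under a precomputed cache key, then deliver it as usual via ns_return.
proc my_cached_delivery { status content_type html cache_key } {
    ns_cache set page_cache $cache_key $html
    ns_return $status $content_type $html
}

A later request with the same key could then be answered from page_cache without re-rendering, subject to all the invalidation caveats above.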

Posted by Tom Jackson on

Hey, this isn't necessarily an answer to your question, but I was looking at the new AOLserver 4.5 sources yesterday. There is a new filter point called prequeue. Example:

proc ::prequeue {} {
    ns_log Notice "running ::prequeue"
    return filter_ok
}

ns_register_filter prequeue GET /* ::prequeue

Note that you don't define the proc with the usual 'why' arg. At the moment I don't know whether the prequeue filters run in their own thread, or whether the driver thread waits for the prequeue procs to finish, but it looks like you can prevent the connection from running any further. It looks like there is a callback, but anyway, it could be an inexpensive way to return cached pages without going through any of the request-processor code.
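Something along these lines, purely speculative since I haven't checked whether a prequeue filter is actually allowed to write to the connection or to short-circuit with filter_return (the cache name and key are made up, ns_cache get is from the nscache module):

proc ::prequeue_cache {} {
    set key "[ns_conn url]?[ns_conn query]"
    # ns_cache get returns 1 and fills the variable on a cache hit.
    if {[ns_cache get page_cache $key html]} {
        ns_return 200 text/html $html
        return filter_return   ;# stop any further processing of this connection
    }
    return filter_ok
}

ns_register_filter prequeue GET /* ::prequeue_cache

If filter_return really does abort the connection at this stage, a cache hit would never touch the request processor at all.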