For typical OpenACS applications, whole-page caching has many problems and limitations. Keep in mind that a page depends not only on the "application objects" (tracking the implications of operations, especially on aggregating functions, is hard) but also on the user state (user_id, cookies, permissions). For personalized content (e.g. dotlrn, where essentially every user sees different content for one URL) you have to construct quite complex keys containing the query and the user context, which takes some work to get right, especially if you try to re-implement this in nginx. I doubt that the cache hit rate would be high. Caching page fragments is another story, but that won't be of use in nginx, and doing it via ns_cache will probably be much faster on aolserver/naviserver; see the sketch below.
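To make the fragment-caching idea concrete, here is a minimal Tcl sketch in the style of an OpenACS page. It assumes an ns_cache cache named "fragment_cache" created at server startup and a hypothetical helper render_expensive_fragment; the names are illustrative, not from an actual package. The point is that the key has to capture everything the fragment depends on (URL, query, user context):

    # At startup (e.g. in a *-init.tcl file): create the cache.
    ns_cache create fragment_cache -size [expr {2 * 1024 * 1024}]

    # In the page: build a key from URL, query, and user context.
    set user_id [ad_conn user_id]
    set key "[ad_conn url]?[ad_conn query];user=$user_id"

    # ns_cache eval runs the body only on a cache miss and stores
    # its result under the given key for later requests.
    set fragment [ns_cache eval fragment_cache $key {
        render_expensive_fragment $user_id
    }]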
We did some experiments with memcached/pgmemcache and aolserver a few years ago on our learn-system, with mixed results. pgmemcache was needed in our case, since often only our stored procedures knew which objects were affected by a change. For our applications the performance gain was very small (often nonexistent) compared with the classical approach of storing in the db + ns_cache (full-page caching was not applicable). My recommendation is to identify the bottlenecks of your application and to improve those.
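For reference, the classical db + ns_cache approach mentioned above looks roughly like this; a minimal sketch with hypothetical table, cache, and proc names (the OpenACS db API calls and the ns_cache commands are real):

    # Read path: the query runs only on a cache miss.
    proc note_title { note_id } {
        return [ns_cache eval main_cache note_title-$note_id {
            db_string get_title {
                select title from notes where note_id = :note_id
            }
        }]
    }

    # Write path: update the db, then invalidate the cached value.
    proc note_title_update { note_id title } {
        db_dml update_title {
            update notes set title = :title where note_id = :note_id
        }
        ns_cache flush main_cache note_title-$note_id
    }

The pain point with pgmemcache was exactly the flush step: when only a stored procedure knows which objects changed, the invalidation has to be triggered from inside the database rather than from Tcl code like this.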
If you are not discouraged from page caching, you might find xowiki an easy animal for experimenting with it. xowiki builds from every query a string, which is sent back to the user via a "delivery method" (normally ns_return; see the method reply_to_user). It is easy to use alternative delivery methods for specialized packages. The most important part is in xowiki/www/index.vuh.
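As a starting point for such an experiment, one could plug in a delivery method that consults a whole-page cache. This is a hypothetical sketch under the assumptions above (proc and cache names are illustrative, and the exact arguments xowiki hands to its delivery method may differ), not xowiki's actual API:

    # Try the cache first, e.g. near the top of index.vuh.
    set key "[ad_conn url]?[ad_conn query];user=[ad_conn user_id]"
    if {[ns_cache get page_cache $key cached]} {
        lassign $cached status mime_type content
        ns_return $status $mime_type $content
        return
    }

    # A delivery method that fills the cache before answering;
    # configure xowiki to call this instead of plain ns_return.
    proc caching_delivery { status mime_type content } {
        set key "[ad_conn url]?[ad_conn query];user=[ad_conn user_id]"
        ns_cache set page_cache $key [list $status $mime_type $content]
        ns_return $status $mime_type $content
    }

Note that the key again includes the user context, so for heavily personalized content (the dotlrn case above) the hit rate of such a whole-page cache is exactly what I would doubt.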