The first chapter of "The Data Webhouse Toolkit" by Kimball and Merz addresses some of these issues.
Essentially, caching is your friend, and layers of caching your posse, especially if you believe that you will have "12,000 external users, all making requests at the exact same time." Kimball suggests intermediate servers that serve static pages from a file system rather than a database; these pages hold precomputed results for the most popular queries. The file system can hold entire pages, or just pieces of content that are then assembled into other pages. Scheduled processes are responsible for rerunning the queries and updating the file-system-based servers.
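Here is a minimal sketch of that scheduled-regeneration idea, in Python purely for illustration (in AOLserver you would do the same thing with a scheduled Tcl proc); the table, column names, file paths, and render_page helper are all invented:

    # Sketch: periodically rerun the most popular queries and write the
    # results out as static pages on the file system. All names here are
    # hypothetical; adapt to your own database layer and page layout.
    import os
    import sqlite3
    import time

    PAGE_DIR = "/web/static"      # where the front-end servers read pages from
    REFRESH_SECONDS = 300         # rerun the queries every 5 minutes

    def render_page(rows):
        # Turn the query results into a complete HTML page.
        items = "".join("<li>%s</li>" % subject for (subject,) in rows)
        return "<html><body><ul>%s</ul></body></html>" % items

    def refresh_popular_pages(db_path="bboard.db"):
        conn = sqlite3.connect(db_path)
        rows = conn.execute(
            "SELECT subject FROM bboard_topics ORDER BY n_replies DESC LIMIT 25"
        ).fetchall()
        conn.close()
        # Write to a temp file and rename, so the web server never sees
        # a half-written page.
        tmp = os.path.join(PAGE_DIR, "popular-topics.html.tmp")
        with open(tmp, "w") as f:
            f.write(render_page(rows))
        os.replace(tmp, os.path.join(PAGE_DIR, "popular-topics.html"))

    if __name__ == "__main__":
        while True:
            refresh_popular_pages()
            time.sleep(REFRESH_SECONDS)

The point is that the expensive queries run on a schedule (or when the data changes), not once per request; the front-end servers just read files.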
Consider the bboard module: an entire bboard page could be precomputed once and changed only when a new posting arrives. Even pages that contain connection-specific query parameters such as the user ID can be precomputed and then served out through a simple templating engine that replaces user_id markers with the actual value.
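To make that templating idea concrete, here is a tiny Python sketch (the %%USER_ID%% marker syntax and the file name are made up; the real substitution mechanism would be whatever your templating system provides):

    # Sketch: serve a precomputed bboard page, filling in the per-request
    # user_id for a marker left in the cached file. Marker syntax is invented.
    def serve_cached_page(path, user_id):
        with open(path) as f:
            page = f.read()
        # The expensive query results are already baked into the file; only
        # the connection-specific bits are substituted at request time.
        return page.replace("%%USER_ID%%", str(user_id))

    # Example: html = serve_cached_page("/web/static/bboard-index.html", 37)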
12,000 db requests per second could translate to anywhere from, say, 1,000 hits per second to 12,000 hits per second, depending on how many queries each page issues. You may be able to handle 1,000 hits per second with one AOLserver on one machine, but you will most likely need to plan for more than one. aD has some support for clustering servers and coordinating caches, but you may have to fill in the gaps. (I am pretty sure the 28,000 hits per second figure is against several machines.)
And the AOLserver author has written about the AOL Digital City architecture at: http://aolserver.com/docs/tcl2k/html/index.htm.