Thank you for sharing. One fly in the ointment is that the "12,000 external users, all making requests at the exact same time" are requesting 12,000 separate and unique "sets of numbers" from the database. Also, the specific values served to them will be continually changing.
Unfortunately, it's neither 12,000 people requesting the same thing nor 12,000 people requesting different things that rarely change and thus could easily live in a "static" cache rather than being updated continually. Since the 12,000 items will be constantly changing, I'm not sure how much a cache would buy you compared with what it would cost to maintain a constantly changing replicated dataset in cache, given the dynamic nature of thousands of "sets of numbers" **constantly** changing during the day.
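To make that trade-off concrete, here is a back-of-envelope sketch (all numbers are made-up assumptions for illustration): if each user has a distinct cache key and the underlying values are invalidated every few seconds, each key costs one database refresh per change interval, and the cache only pays off when users re-request faster than the data changes.

```python
def expected_db_queries_per_sec(users, req_per_user_per_sec, change_interval_sec):
    """Estimate cache effectiveness when every user requests a distinct key.

    Assumes one distinct cache key per user, with each key invalidated
    (and refreshed from the database) once per change_interval_sec.
    Returns (total requests/sec, db misses/sec, cache hit rate).
    """
    total_req = users * req_per_user_per_sec
    misses = users / change_interval_sec   # one refresh per key per interval
    hit_rate = max(0.0, 1 - misses / total_req)
    return total_req, misses, hit_rate

# Illustrative only: 12,000 users, 1 request/sec each, data changing every 5 sec
total, misses, hit_rate = expected_db_queries_per_sec(12_000, 1.0, 5.0)
```

Under those assumed numbers the cache still absorbs most of the load (2,400 db queries/sec instead of 12,000), but as the change interval shrinks toward the request interval, the hit rate collapses toward zero, which is exactly the worry above.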
Besides my own interest, I think this is an important topic to cover from a community-wide perspective, since many folks and their clients will at least consider their sites' ability to scale massively for future needs -- and handle the load.
Sharing on the topics of both the hardware and software architecture needed for this level of 12,000 db-backed requests per second would be most appreciated. I've heard of a single AOLserver machine being able to dish out 1,000 requests per second. For 12,000 -- how many of what kind of server/setup? BTW, this would, of course, run in a data center where bandwidth would not be a limiting factor.
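Just as a starting point for that sizing question, a naive sketch using the 1,000 requests/sec-per-box figure mentioned above (the headroom factor is my own assumption, to cover load-balancing overhead, failover, and traffic spikes):

```python
import math

def servers_needed(target_rps, per_server_rps, headroom=1.5):
    """Minimum front-end boxes to sustain target_rps, with spare capacity.

    headroom=1.5 (an assumed figure) leaves 50% slack for load-balancer
    overhead, single-box failures, and bursts above the average rate.
    """
    return math.ceil(target_rps * headroom / per_server_rps)

servers_needed(12_000, 1_000)  # 18 boxes with 50% headroom, 12 with none
```

Of course that says nothing about the database tier behind those boxes, which is presumably the harder part of the question.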
Thanks bunches to one and all for sharing your input on this!