Forum OpenACS Q&A: How would one handle 12,000 db-backed requests per second?

I've read the server architecture articles on Arsdigita.com.  What I'm wondering is what kind of architecture one would have to have in place to handle 12,000 db-backed requests per second (say, for example, with a three-table query for each request processed).

Let's say, for instance, that you have a web service used by 12,000 external users, all making requests at the exact same time -- several times throughout the day.

I've seen descriptions of setups for sites up to photo.net in size/use, but not for sites this big that I recall.  I know Philip's articles on AOLserver talk about AOLserver being able to handle 28,000 requests per second -- but the 12,000 db-backed requests per second I'm asking about would be against a single database image.  I'm not sure if that's the same thing his 28,000 hits per second figure is referring to or not.

Would one simply replicate the servers one has and put load-balancer servers in front, or what?  Does anyone know of an Oracle installation that handles a load this big? PostgreSQL?

Please share your input on this.  I would like to champion an open source solution (no profit for me -- just want to champion open source) to a company that is going to be deploying just such an application in the next year or so, if possible.

Is the kind of open source solution Arsdigita provides customers (I'm referring to the server architecture here, not the ACS) able to scale to this level?  It would not be a collaborative community; rather, an application/service along the lines of electronic banking/commerce.

Thanks,

Louis

The first chapter of "The Data Webhouse Toolkit" by Kimball and Mertz addresses some of these issues.

Essentially, caching is your friend, and layers of caching your posse.  Especially if you believe that you will have "12,000 external users, all making requests at the exact same time."  Kimball suggests intermediate servers that serve static pages from a file system rather than a database, holding precomputed results for the most popular queries.  The file system can hold entire pages, or just pieces of content that are then fitted into other pages.  You also have scheduled processes responsible for rerunning the queries and updating the file-system-based servers.
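
As a rough illustration of that scheduled-refresh idea, here is a minimal Python sketch (not ACS or AOLserver code; the cache directory, fragment names, and the query/render callables are all hypothetical) of a job that reruns the expensive queries on a timer and atomically swaps the precomputed results into files the web tier can serve directly:

    # Minimal sketch of "precompute popular results, serve them as static files."
    # CACHE_DIR, the query callables, and the refresh interval are assumptions.
    import time
    from pathlib import Path

    CACHE_DIR = Path("/web/cache/fragments")    # static files served by the web tier

    def refresh_fragment(name, run_query, render):
        """Rerun one expensive query and atomically replace its cached HTML."""
        rows = run_query()                       # e.g. the three-table join runs here, once
        html = render(rows)
        CACHE_DIR.mkdir(parents=True, exist_ok=True)
        tmp = CACHE_DIR / (name + ".tmp")
        tmp.write_text(html)
        tmp.replace(CACHE_DIR / (name + ".html"))    # atomic: readers never see a partial file

    def refresh_loop(fragments, interval=60.0):
        """fragments maps name -> (run_query, render); run from cron or a scheduler."""
        while True:
            for name, (run_query, render) in fragments.items():
                refresh_fragment(name, run_query, render)
            time.sleep(interval)

The point is that the database sees one query per fragment per interval, no matter how many users request the page.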

Consider the bboard module.  An entire bboard page could be precomputed once and regenerated only when a new posting arrives.  Even pages that depend on typical, generic, connection-specific query parameters such as user id can be precomputed and then served out with a simple templating engine that replaces user_id markers with the actual user_id.
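
A toy version of that substitution step, with a made-up @user_id@ marker (the marker syntax and page content are purely illustrative, not how the bboard module actually works):

    # Sketch: the expensive page was rendered when the last posting arrived;
    # per request we only fill in the connection-specific marker.
    PRECOMPUTED_BBOARD_PAGE = """<html><body>
    <p>Welcome back, user @user_id@.</p>
    <ul><!-- precomputed list of postings --></ul>
    </body></html>"""

    def serve_bboard(user_id, template=PRECOMPUTED_BBOARD_PAGE):
        return template.replace("@user_id@", str(user_id))   # no database hit

    print(serve_bboard(42))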

12,000 db requests per second: that could be anywhere from, say, as little as 1,000 hits per second (if each hit issues a dozen queries) to as much as 12,000 hits per second (one query per hit).  You may be able to do 1,000 hits per second with one AOLserver on one machine, but you will most likely need to consider the possibility of requiring more than one.  aD has some support for clustering servers and coordinating caches, but you may have to fill in the gaps.  (I am pretty sure the 28,000 hits per second figure is against several machines.)

And the AOLserver author has written about the AOL Digital City architecture at: http://aolserver.com/docs/tcl2k/html/index.htm.

Philip was referring to all of the combined AOL sites that use AOLserver when he mentioned the 28,000 number.  He was not referring to one site by itself.

12,000 requests/second is pretty large. The big question is: how often does the data get updated? Is it mostly reads? If so, optimization is fairly easy, and caching things as Jerry described is going to be the way to go for sure.

If you're doing serious data updating as part of those requests, the caching strategy is going to be significantly less useful. You are talking about more than 43,000,000 requests per hour, which is significantly more than almost any web site, AFAIK. So you are pushing the envelope....

Hi Jerry,

Thank you for sharing.  One fly in the ointment is that the "12,000 external users, all making requests at the exact same time" are requesting 12,000 separate and unique "sets of numbers" from the database.  Also, the specific values served up to them will be continually changing.

Unfortunately, it's neither 12,000 people requesting the same thing nor 12,000 people requesting different things that rarely change and thus could easily reside in a "static" cache rather than being updated in cache continually.  Since the 12,000 items will be constantly changing, I'm not sure how much a cache would buy you compared to what it would cost to maintain a constantly changing replicated database in cache; I'm just not sure, due to the dynamic nature of thousands of "sets of numbers" **constantly** changing during the day.

Besides my own interest, I think this is an important topic to cover from a community-wide perspective, since many folks and their clients will at least consider factors involving their sites' ability to massively scale for future needs -- and handle the load.

Sharing on the topics of both the hardware and software architecture for this level of 12,000 db-backed requests per second capability is most appreciated.  I've heard about a single AOLserver machine being able to dish out 1,000 requests per second.  For 12,000 -- how many of what kind of server/setup?  BTW, this would, of course, run in a data center where bandwidth would not be a limiting factor.

Thanks bunches to one and all for sharing your input on this!

Louis

Hi Ben,

We must have been posting at the same time -- I was replying to Jerry.

Alas, the database will be UPDATED continually, not just inquired upon.

Thanks for sharing.

It is pushing the envelope compared to the size of most sites, yes.  I hope to champion an open source solution to the decision makers I have access to...if I can get an understanding of what an open source architecture solution would involve/require.

Thanks again,

Louis

Ben,

To clarify, it won't be 12,000 requests per second **every second**.  But it will be 12,000 requests per second several times throughout the day.  12,000 db-backed requests per second is a peak load number, not an average load number.  Heck, I guess this even goes beyond being "slashdotted"  :o)

Take care,

Louis

I don't know of a database installation (even Oracle) that can handle that # of requests on a single instance. I'd be happy to hear about someone who has seen this, but that kind of peak load is bigger than anything I've even heard of.

There is at least one immediate potential solution which, by the way, doesn't apply specifically to open source. You can partition your data. If there's no "collaboration," you may be able to segment your user base into multiple systems, literally running separate database instances, and routing everything through a single first layer so everyone can start from the same point.
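
A minimal sketch of that routing layer, assuming a simple hash on the user id (the shard list and hashing scheme are illustrative assumptions, not a recommendation):

    # Sketch: the first tier deterministically maps each user to one of several
    # independent database instances.  Connection strings are made up.
    import hashlib

    SHARDS = [
        "db0.internal:5432",
        "db1.internal:5432",
        "db2.internal:5432",
        "db3.internal:5432",
    ]

    def shard_for_user(user_id):
        """Deterministically map a user to one database instance."""
        h = int(hashlib.sha1(user_id.encode()).hexdigest(), 16)
        return SHARDS[h % len(SHARDS)]

    # Every front-end box can run the same function, so the "single first layer"
    # only needs the user id to send the request to the right partition.
    assert shard_for_user("acct-10017") == shard_for_user("acct-10017")

Cross-partition work (such as a global summary) then has to be done as a separate fan-out or batch job.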

Thanks Ben,

What you mentioned is the only thing I've been able to think of so far as well.  I haven't gone down that road yet because dividing the database would make things more complicated.  The data will also require periodic (at least hourly) "global summary inquiries" -- and maybe updates across all user accounts as well.

I was wondering if it were possible to do this against one database instance.  I just don't know the upper limits of the software components.  I'm not aware of any definitive source that spells out what the upper capabilities of the software components happen to be.  I suppose a lot depends on the specifics of implementation.  But I don't know of any specific implementation at this scale that's available with reliable and proven benchmarks.

I **very much** appreciate your input.  Please share whatever comes to mind.  Basically, think of a central system with the same type of data for each user.  It's just that each user's "set of numbers" is constantly changing, and they all need to access and/or update it at the same time at several points throughout the day.

Thanks again,

Louis

Is there something timing-wise that will ensure that all 12,000 users hit the database in the same second? Just spreading that load over 12 seconds drastically reduces it to 1,000 requests per second. If you had 12,000 human users who are not making requests 100% of the time, you would be hard pressed to ever get them all to make a request simultaneously. Automated users are a different story, but you still might be able to schedule them around each other.
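
To put numbers on that: if automated clients add a little random jitter before firing, the spike flattens out on its own.  A toy sketch (the 12-second window is just the figure from this post):

    # Sketch: spreading a synchronized spike over a short window.
    import random

    USERS = 12_000
    WINDOW_SECONDS = 12            # spread the "same instant" over 12 seconds

    def jittered_start(scheduled_time):
        """Each client delays its request by a random offset inside the window."""
        return scheduled_time + random.uniform(0, WINDOW_SECONDS)

    print(USERS / WINDOW_SECONDS, "requests/second on average")   # 1000.0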

I would guess that if one AOLserver machine can handle 1,000 requests per second, then 12 AOLserver machines should handle 12,000 requests per second. That says nothing about the database server, though.
12,000 hits in a second.  You say each query involves three tables, presumably a join?  How many rows are going to be returned?

We're talking about 83 MICROSECONDS per query, dude.  On my Celeron 500 laptop PG executes relatively simple queries returning one or a few rows in the ten-to-forty millisecond range.  Oracle does the same.

That's two orders of magnitude too slow for what you want to do.  Even a 16-way Sun monster's going to leave you about an order of magnitude short.
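
A back-of-the-envelope version of that argument (the latency figure is just the rough number quoted above, not a measurement):

    # 12,000 queries/second leaves about 83 microseconds of budget per query.
    target_qps = 12_000
    print(1.0 / target_qps * 1e6, "microseconds per query")              # ~83

    # With ~20 ms per simple join, Little's law says you would need on the
    # order of 240 query executions in flight at once to sustain that rate:
    typical_query_latency = 0.020
    print(target_qps * typical_query_latency, "concurrent executions")   # 240.0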

And frequent updating will hose you.  Updating requires disk writes because the REDO log must be physically, not logically, written.  Then every few minutes the RDBMS will checkpoint to disk, causing a filesystem synch or (in the case of Oracle and raw devices) buffers flushed physically to the disk.

At the risk of offending you, something scaled to this extent requires an expert, someone who already knows the answers or can tell you if it is even possible with any combination of general-purpose RDBMS software and hardware.  If it is possible, a novice is unlikely to get it right.

And in this context I'd say that everyone who's posted here (not just you) is a rank novice.  This is way off the map of our experience.

I'm going to tell you something that I don't know absolutely for certain, but which I think would be a fairly safe bet if I could find someone to take me up on it:

I bet that there are no general-purpose RDBMS installations out there processing transactions at this pace.  Banks have their own transaction processing systems, SABRE (used by airlines) is a custom-built database system, etc., etc.  I think you're up in the realm of custom-built solutions.

I know there's at least one SQL-speaking but non-RDBMS system out there designed to handle massive hit rates of financial queries for the stock market.  I forget the name; it's designed for data mining (i.e., querying, not updating or inserting new data).  They build a massive data structure in RAM that essentially partitions the data both by row and by column.  The approach is very specialized and not appropriate for general RDBMS activity.  You don't do updates in this context, at least not frequent ones with ACID protection, etc.

This is the kind of solution space you're talking about if you're really serious about servicing that many queries in a second.

How much does it cost to build the infrastructure for eBay, Yahoo, Google, AOL, or Nasdaq? You are talking about that type of traffic, and their architectures typically evolved over time and continue to evolve today. Most of these sites have many specialized features to help optimize them for their particular needs. An open source solution often contains many features that are useful to many but may not be the *best choice* implementation for some. That doesn't mean that bits and pieces of open source solutions couldn't be used in the overall infrastructure.


There are many issues that would need a little more clarification to get better feedback.

  • Is this a web application / does it need to be?
  • Are the users of the system required to login? Are the sessions monitored, secured, and timed out?
  • Do the users have different areas of access, or is it the same across the board with no permission checking?
  • Will there be images / ads?
  • What sort of accounting is needed at what granularity?


The scope and architecture need to be designed and each piece optimized where possible. If you can separate out the majority of the base functionality that is shared across the whole application, optimize that, then move on to the application itself. You may want to denormalize and partition your data. The data modeling itself is probably the biggest factor limiting your application's underlying scalability.
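
As one hedged illustration of the denormalization point, here is a small self-contained sketch (SQLite, with invented table and column names) that keeps a precomputed per-account summary row alongside the transaction detail, so summary reads touch one row instead of aggregating detail on every request:

    # Sketch: denormalized summary maintained in the same transaction as detail.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE txn (account_id INTEGER, amount NUMERIC);
        CREATE TABLE account_summary (account_id INTEGER PRIMARY KEY, balance NUMERIC);
    """)

    def post_transaction(account_id, amount):
        # A little extra write cost buys much cheaper summary reads.
        with db:
            db.execute("INSERT INTO txn VALUES (?, ?)", (account_id, amount))
            db.execute("""
                INSERT INTO account_summary VALUES (?, ?)
                ON CONFLICT(account_id) DO UPDATE SET balance = balance + excluded.balance
            """, (account_id, amount))

    post_transaction(10017, 250.00)
    print(db.execute("SELECT balance FROM account_summary WHERE account_id = 10017").fetchone())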


BTW, what exactly does "champion" mean? Potentially fund an architecture that would then be released into the Open Source community? Digital City's core API that sits on top of AOLserver is probably the most scalable toolkit that works well with the most scalable web server, but that toolkit is not open source.

As an aside, I have been developing a site for the last 16 months that recreates a lot of the ACS functionality but without ever having to touch the db/fs. Amdahl's law dictates that the scalability of any system is limited by its bottlenecks. Determine these bottlenecks and eliminate/optimize as much as possible. I consider any time the server process has to hit the db/fs a bottleneck. There may be times when this is not realistic, but a large portion of a site's functionality may be handled inside the server process, residing in memory. You do give up certain things by moving this functionality from the db to memory, but hopefully AOLserver is reaching a point where crashes are rare.
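
A generic sketch of that in-process approach (plain Python with a thread lock, not AOLserver's actual API): a small TTL cache so that most requests are answered from memory and only cache misses ever reach the db/fs bottleneck.

    # Sketch: in-memory, thread-safe cache with a time-to-live.
    import threading, time

    class InProcessCache:
        def __init__(self, ttl_seconds=30.0):
            self._data = {}                     # key -> (expires_at, value)
            self._lock = threading.Lock()
            self._ttl = ttl_seconds

        def get_or_compute(self, key, compute):
            now = time.monotonic()
            with self._lock:
                hit = self._data.get(key)
                if hit and hit[0] > now:
                    return hit[1]               # served from memory, no db/fs hit
            value = compute()                   # only misses pay the db cost
            with self._lock:
                self._data[key] = (now + self._ttl, value)
            return value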

I have been working on some articles that show how AOLserver's API can function in this manner and hope to have more time to finish them up and post. Remember, though, that any site requiring your type of performance will require a lot of customization.

Best Regards,
Carl Garland

If you read the tcl2k presentation, you'll see the kinds of things that have to happen to a system as they increase in traffic.

Compare the naive implementation of the Movies site with the one that they ended up with in 3.0. Very different birds, and for very good reasons. Congratulations to them for finding a scalable solution.

Last year, I led the team that runs www.ftd.com, a very high-volume flower retailer. For long periods of time, we sustain traffic levels of more than 1 order per second. Think about all the hits that go into completing an order (~100x more). It is a hugely expensive undertaking to build a system that can handle such traffic.

Solutions go to two extremes: extremely expensive Sun hardware (e.g. Enterprise 4500 servers with 12 CPUs) or lots of extremely cheap hardware and a few big scary boxes to handle the databases. We took the 2nd route, and it has worked well. But it takes 30 web servers and a few peripheral machines for monitoring and backup to handle such a load.

Take a look at what google does: http://slashdot.org/articles/00/05/31/1242237.shtml.

You might want to check out this thread if you haven't already done so...
Hi Don,

Thank you very much for sharing.  Please never worry about offending me -- I'm very easy going, hard to offend, and someone who enjoys learning and considering other points of view.  Moreover, you are among the handful of people here whose posts I most enjoy reading and have benefited from.  Thank you **very** much!

I appreciate what you said about novice vs. expert knowledge bases.  Basically, without revealing what I'm not allowed to, this is a current mainframe-based system that's being considered for webification or rewriting for the web.  It's in the consideration, scoping and planning stage.  I have input into the process.  I also may do some of the development work on it.

So far, all the consideration is on commercial products -- lots of application server this and middleware that.  These folks aren't familiar with open source at all.  I would like to get an open source solution considered for all or part of the project **IF** it's up to the task and scale of what needs to be done.  This is what I was referring to as hoping to champion open source.  Since I have no experience pushing the envelope with the open source software that many here are experienced with, I thought I would ask for input here.  I figured some here might have experience with very large systems and would be willing to share their insights and opinions.

I often wonder just how big a task open source is currently up to -- what it can and can't handle -- so I know when it is and isn't the appropriate tool for whatever project I'm giving input on.  My background is mostly very large mission-critical mainframe systems.

Again, in the kind of large companies I've always worked for, open source often isn't even on the radar.  I like to at least raise awareness and possibly see it used successfully.

Thank you again, as always, for sharing.  I almost always learn from your posts!

Louis

That's a good question, Louis, about what kind of systems open source software is powerful enough to handle. However, I think that since most open source developers build their systems on the systems that they have lying around (except in a PHew isolated cases where the principal objective is to start a cult -- but those cases are PHew and PHar between), the solution for what you are looking for may be difficult to recognize in this space. It seems that the open source OSes are proven architectures on massive systems (I'm thinking specifically of Yahoo and Hotmail), but it seems like your requirements might even max those guys out.

I imagine that if someone made a couple or three *real* mainframes available to the OSS community, there would be someone willing to build appropriate software for the system or make sure available software can scale. Maybe there are some people doing that right now. But it seems that endeavors like IBM's $1 billion commitment to OSS might initiate something like this, although I imagine it's doubtful any of that money will go to competing products like PG.

As an aside, and in response to someone championing OSS, I'm also on another mailing list for non-profits interested in OSS. Recently, someone posted an article where IBM tested DB2 on Windows, Linux and other OSes. Linux came out on top. The test was conducted on some fairly serious hardware: 8- and 16-way servers. For some reason this guy thought that non-profit tech managers would be interested.

If a non-profit ever comes to me, says it has a spare 16 way server and would I be interested in building a system for them, I think my answer would be, "Sure, let's meet. I'm scheduled to get back from my Space Shuttle mission at noon on Friday, but I can just have the pilot drop me off at your offices..."

Hi Carl,

Here are the answers to your questions:

1)  The system is currently mainframe based.  There is a desire to move it to the web.

2)  All users **are** required to login.  Security is of utmost importance since we are talking about large amounts of money.  All sessions are secured.  I don't think one could say they are "monitored", but they will time out after a set period of time.

3)  The users will have different points of access -- probably about 20,000 at least -- from all over the United States, but not the world.

4)  Except for maybe a small logo or something similar, there will be no images or ads.  This will basically be a system to transact normal daily financial business with existing customers -- not a shopping or "for sale/buy here" type of site with engaging graphics.  The users would be mostly clerical folks using the system to do their jobs -- but now doing so via a web browser over the internet instead of a dedicated machine with custom software running through a secure private link.

5)  Accounting is needed at the global/summary level, the customer level, and down to the individual transaction detail level.

6)  By "hoping to champion open source", I mean that I strive to see open source solutions at least considered and hopefully implemented when they make technical sense for part or all of a project.  As I mentioned in my reply to Don above, in the large Midwestern companies where I typically work, open source isn't even on the radar.  I would expect that if open source solutions were implemented, then enhancements would indeed make it back into the community.  I try and do what I can.  However, without knowing what the upper limits of what open source software is capable of -- take Postgres or AOLserver for instance -- it's not possible to present a solid proposal for consideration for the scale of projects I've been involved with the last few years.

Thank you very much,

Louis

OK, a simple question: what happens to the existing mainframe installation if you throw 12,000 queries similar to those in question at it?  Does it smile and say "heh, I didn't even have to work up a sweat to handle THAT trivial task!" or does it leave broken computer parts strewn over the floor as it dies?

In other words, by studying the existing mainframe installation and the throughput it can sustain you can start getting a handle on what would be required to service the workload you anticipate for the web.

If I were in your shoes, I'd do everything I could to understand where the bottlenecks are in the current installation.  Since the db is likely to be your most difficult bottleneck to solve, a deep understanding of how the current db performs can only help.

When moving to the web, the actual webserver portion can be solved simply by throwing more and more boxes behind a load balancer, each talking to the database server (or server cluster if you're forced to go that route).

As far as how scalable Open Source software is, it depends on the software.  We know, for instance, that AOLserver and Apache scale well enough to serve the busiest sites in the world - proof by example, because they *do* serve some of the busiest sites in the world.

Ditto Linux.

PostgreSQL hasn't been tested in that space.  To be honest, though, I'm not certain that Oracle has been tested in that space IN A WEB ENVIRONMENT.  As Ben said, 12,000 db hits a second is far and above any load placed on a db server by any web site we're aware of.

In the more general sense, several companies - IBM, Intel, etc. - have invested in the Open Source Development Lab (OSDL), specifically meant to be available for testing the scalability of Open Source solutions on massive hardware typically not available to Open Source projects (or to most software development companies in the closed-source world, for that matter - we're talking BIG IRON).  As it happens, it's only a few miles from my house, and I had the pleasure of talking to their first technical hire about three weeks ago - he just happened to wander into my local coffee shop, saw that I was running Linux on my laptop, and struck up a conversation.

They're really just getting up and running, but in the next several months to a year we should be able to get firm answers on some of these questions.  I know that Postgres is high on the list...

Posted by Vadim Nasardinov:

Louis Gabriel wrote:

What I'm wondering is what kind of architecture would one have to have in place to handle 12,000 db backed request per second (say, for example with a three table query for each request processed).

What I'm wondering is how you came up with this number. Sites whose load peaks at 12,000 DB-backed requests per second would have to have tens of millions of users. A few things you said lead me to believe that you don't really anticipate that kind of load. For instance, you said,

Let's say, for instance, that you have a web service used by 12,000 external users

...

To clarify, it won't be 12,000 requests per second **every second**. But it will be 12,000 requests per second several times throughout the day. 12,000 db-backed requests per second is a peak load number, not an average load number

...

The users will have different points of access -- probably about 20,000 at least -- from all over the United States, but not the world.

It sounds like you are going to have 12,000 users total -- not millions of users handling "large amounts of money." In that case, there is no way in hell your site is ever going to receive 12,000 concurrent requests per second. It's like saying that if Verizon has 10,000,000 subscribers in New York City, it must have the capacity to handle 10,000,000 concurrent phone calls. Guess what. Verizon doesn't have that capacity.

If your estimate of 12,000 db-backed requests per second is based on the historic data from an existing installation, that's one thing. Otherwise you'd have to know about things like queueing theory, M/M/K/K models, and crap like that to be able to estimate the likelihood of getting more than n db-backed requests per second.
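
For what it's worth, here is a toy calculation along those lines. The per-user rate below is purely an assumption for illustration: if each of 12,000 clerks independently averages one request per minute during the busy period, arrivals in any given second are roughly Poisson with mean 200, and seconds with even 400 arrivals are vanishingly rare.

    # Toy arrival-rate estimate; the per-user rate is an assumption, not data.
    import math

    users = 12_000
    per_user_rate = 1.0 / 60.0              # requests per user per second (assumed)
    lam = users * per_user_rate             # expected arrivals per second = 200

    def poisson_tail(lam, n, terms=2000):
        """P(X >= n) for a Poisson(lam) arrival count, summed in log space."""
        log_pmf = -lam + n * math.log(lam) - math.lgamma(n + 1)
        total = 0.0
        for k in range(n, n + terms):
            total += math.exp(log_pmf)
            log_pmf += math.log(lam) - math.log(k + 1)
        return total

    print("mean load:", lam, "req/s")
    print("P(a given second sees >= 400 requests):", poisson_tail(lam, 400))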

In other words, please clarify where the number comes from.

I've refrained from questioning Louis' assumption about the load actually being that high, but ... Vadim's right.  You really need to defend that number, and more importantly, if your management is the source of the number, they really should be asked to defend it.
Hello everybody,

The 12,000 requests per second peak load number is a realistic peak load the system would have to be capable of handling if it is to be migrated off the mainframe onto the web.  It comes from the "powers that be".  I don't have access to their raw data.  I have to take them at their word.  They know how big their current system is.  They do have experts in their three data centers to feed them their numbers.

Due to confidentiality policies at work, I can't tell you what the system is.  I have said it's a nationwide financial system; that's all I can say without risking violating the rules at work.  If you think about it, there are lots of systems that do have this level of transaction processing.  Think banking, financial, trading and credit card systems.

As I stated earlier, I simply wanted to know if anyone had experience or thoughts on an internet system that big -- and if anyone knew for sure or not if an open source solution could handle the job in a production environment.

I thank everyone who replied.  It sounds like, at the current level of open source software maturity, there are no firm votes of confidence out there that an open source solution can handle the kind of load this system would be under.  That's fine.  Great to know.  I simply won't work on creating an open source proposal for consideration at this time.  Maybe with the next project; who knows.  It might be much smaller.  Maybe the world will just have to wait for Postgres version 15.2  :o)  Or maybe some magic will come from IBM's $1 billion and open source lab efforts.  Exciting times ahead, I guess.

If my questions or posts have irritated anyone on this matter, that was not my intent.  My questions were very sincere and honestly made, in the hope of learning enough specifics about what open source is capable of at the large-scale end to be able to get open source at least considered and hopefully adopted where I work.

The folks on this board are a really super group of people I have learned, and continue to learn, a lot from.  I am and will continue to be grateful for all that you share here.  But since I can't share the specs of the proposed system openly with you all, I guess it's best if I don't ask anything further on this issue.  Maybe I shouldn't have asked in the first place.  I'm sorry if anyone here feels that's the case.

I'm a mainframe guy used to working on very large systems handling very large volumes of data and transactions.  It's only been the last year or two I've been moving towards distributed computing and the web from the mainframe's "glass house".

I mostly take for granted just how much data and transaction volume goes through the mainframe systems I've worked on.  One thing about the mainframe space: it is a very mature, reliable, robust and secure environment at the Fortune 500 data center level.  It's very infrequent that environmental (i.e. non-application) software causes any production problems in any system I've ever worked on.  Optimization and tuning have, for the most part, been done and put in place long ago.  And the hoops you have to jump through to touch it are horrendous!  New projects, like this one, allow for new approaches and solutions.  Unless someone can present a compelling case, the tried and true is basically replicated, since it's less risky.

Learning to do it individually for *nix is certainly a difference.  Heck, it seems one cannot take anything for granted.  How you all have managed to wear practically every hat in the data center *so well* is a tribute to all here.

As is typical here, you all have given the help and info requested to help one of your fellow community members.  I thank you.  And I won't trouble you any more on this issue.  I just hope that as I develop expertise and move from newbie to more knowledgeable community member, I can contribute and help others as much as many of you here have.

Thanks again, one and all,

Louis

You can't really help people unless you've played with sites doing 12,000 db-backed requests per second -- but sure, try.  Some people do try.

This thread has a sister thread on www.arsdigita.com/asj under this link: http://www.arsdigita.com/bboard/q-and-a-fetch-msg?msg%5fid=000dh9&topic%5fid=web%2fdb&topic=

Keywords for future searchability: Caching, Server Replication, Domain and Data Partitioning, Scaling Oracle (Book), Oracle, Oracle Tuning, Oracle Benchmarks, SQL Server, Extended Procedures, Stored Procedures, Triggers, Dynamically updated caching, dependency tree, client-side caching, server-side caching, Cisco Local Director, IBM Austin Research, CS Berkeley SEDA Project, C10K, asynchronous I/O, non-blocking I/O, low context-switching cost, Olympic Website '96 '98 '00, state-less programming, clusters, Linux kernel, BSD kernel, Flash webserver, Squid proxy server, Zeus webserver, util_memoize.

(hopefully) Helpful links: http://www.elementkjournals.com/sql/0002/sql0021.htm.

Just an interesting side note. Assuming most of those massively many "hits" are SQL queries (and frequently one hit uses the same identical SQL query as the next)... you may find it interesting that most websites end up doing little work on tuning the database and spending much more time on the caching server, the web server, the data and domain partitioning engineering, clustering and networks. Basically, little is changed on the database server itself. Just figure out a way not to force it to do something it CAN'T. One of the things people figure out is that you still shouldn't ask an Oracle box to handle thousands of practically identical queries for no apparent good reason.
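
One common trick for exactly that situation is to collapse identical concurrent queries so only one of them actually reaches the database. A hedged sketch (generic Python, not util_memoize itself; error handling omitted):

    # Sketch: "single-flight" coalescing of identical queries.
    import threading

    class SingleFlight:
        def __init__(self):
            self._lock = threading.Lock()
            self._in_flight = {}                # sql text -> {"event", "result"}

        def do(self, sql, run_query):
            with self._lock:
                entry = self._in_flight.get(sql)
                leader = entry is None
                if leader:
                    entry = {"event": threading.Event(), "result": None}
                    self._in_flight[sql] = entry
            if leader:
                try:
                    entry["result"] = run_query(sql)   # only one execution hits the db
                finally:
                    entry["event"].set()
                    with self._lock:
                        self._in_flight.pop(sql, None)
            else:
                entry["event"].wait()                  # everyone else reuses the result
            return entry["result"]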

It's interesting that Don mentioned SABRE, because in some ways SABRE does on-the-spot reservation and availability queries for goodness knows how many car rentals, hotels, and airlines. Each query is likely to be unique (although there will be hot spots -- or opportunities for tuning -- like people caring most about next week's flight availability, less so next year's)... and each transaction does count. However, it seems like that sort of database would benefit from a smart (less work building and maintaining) data partitioning plan (to achieve the high transactions per second) -- one clearly neither Oracle nor another major RDBMS vendor could provide -- why else would they build their own database engine (I don't know if they use Oracle or built their own engine)? Has anyone at SABRE (or APOLLO) written about the engineering problems they were trying to solve? It'd be an interesting read. Travelocity.com is a SABRE company, I believe; is it merely a website that talks to the same RDBMS engine, or does it talk to a commercial database like Oracle? I am very interested in hearing something from a SABRE expert!

Louis I doubt you have worked on mainframes much, if at all.
Hi Li-fan Chen,

Thank you for your help and the information you shared.

I've worked for the last twelve years on mainframe systems as a programmer/analyst for large companies.  Mostly very large mission critical systems.

Thanks & take care!

Louis

Hi again, Li-fan Chen,

I guess I can share where I've worked these past twelve years as a mainframer.

Started on May 1, 1989 at McDonnell Douglas, a defense contractor.  Luckily, I was in the last group of new hires they put through full training prior to the company falling on hard financial times.  240 hours of classroom instruction the first year, 60 hours minimum per year after that, assigned mentors, hand-picked assignments to help me grow as a programmer/analyst.  Couldn't have asked for a better way to get started as a mainframe P/A in 1989, IMO.

After 4 years and one week, I quit McDonnell Douglas to go to work as a contractor for Edward D. Jones, a stock brokerage and investment company.  Now I think it's known as Edward Jones -- without the D.  From there, I switched to a better-paying contracting gig at Blue Cross/Blue Shield, an insurance company.

In October of 1994, I went to work for the Federal Reserve Bank of Saint Louis as a contractor.  On March 6, 1995, I hired on at the Federal Reserve as a full time employee where I've worked these past six plus years as a senior programmer analyst.  I work in the group called Treasury Relations and System Support.

Don't know why you doubted that I've worked with mainframes much, but I thought I would share where I've worked with them the last twelve years, since you were nice and helpful enough to post and share as you have.  Except for about a year and a half working on a client/server project where the server was a mainframe, it's been pretty standard mainframe work: COBOL, IMS DB & DC, DB2, and IDMS at the TSO/ISPF CLI.  I've never worked developing personal computer apps (except for client modules the COOL:Gen code generator spit out on the above client/server project) or UNIX apps.  This has all been in the Saint Louis, MO area.

Thanks again for sharing your input,

Take care,

Louis

I don't like interfering in this kind of discussion, but I believe that we don't need accusations like "Louis I doubt you have worked on mainframes much, if at all." on these bboards.

Let's stick to real help postings.

Don't misunderstand me, Li-fan Chen -- I liked most of your postings, but a few are just a little bit aggressive.
"If you think about it, there are lots of systems that do have this level of transaction processing. Think banking,
    financial, trading and credit card systems"

I, at least, wasn't questioning this.  What I was questioning was whether or not they were doing so with generalized RDBMS systems, as opposed to specialized systems.

The original banking transaction systems were non-SQL, non-RDBMS specialized transaction systems (that's where most of our theory on transaction processing comes from, too, i.e. two-phase commits and the like).

I had thought, but am not sure, that credit card systems were built atop the existing transaction processing infrastructure that already exists in the banking system.

I already mentioned SABRE.

If you know of any RDBMS (say Oracle) systems sustaining that level of throughput on three-table joins, I'm sure many of us here would be interested in hearing how that has been achieved.  It's got to take a lot of work in terms of setting up the system architecture, hardware and software.