Forum OpenACS Development: What are the performance goals of the OpenACS core?

I started to reply to the "Gratuitous use of acs_objects?" thread but decided I was getting too far off topic. The thing I like most about ACS is the common object model for all data, but there appears to be some belief that it isn't scalable to large sites. It seems to me we need a common performance goal in order to judge this.

My site will only do about 1 page every 10 seconds on a good day. I use objects for everything and rely heavily on the permission system, and I don't think I'll have any problem meeting that goal. I do everything I've read not to do: I build large tree hierarchies of objects, I build pages that display data from 50 different objects, and I permission-check large result sets. I've had to do some tweaking to meet the goal, but it has not been too bad, and I think I could hit 1 page per second if I had to. My site is very data intensive, and I probably average 25 database queries per page. I have a Sun X1 (500 MHz, 1 GB memory) app server and an old Dell 2450 (dual 750 MHz, 2 GB RAM) database server. I try to return a page in less than 2 seconds, but I think I'm closer to 3.
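Spelling out what those numbers imply for each query (plain arithmetic on the figures above, nothing measured):

```python
# Back-of-the-envelope query budget from the figures above
# (purely illustrative arithmetic, not a measurement).
target_page_seconds = 2.0     # the per-page target
queries_per_page = 25         # rough average per page

# If the database did all the work, each query could take at most:
budget_ms = target_page_seconds / queries_per_page * 1000
print(f"~{budget_ms:.0f} ms per query")   # ~80 ms, less once Tcl/ADP time is subtracted
```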

I realize benchmarks can be very misleading, but what kind of traffic are people seeing on their sites, and what kind of hardware are they running? Perhaps more importantly, what are your performance goals?

I think the current design could do 10 reasonable pages per second on commonly available hardware. I think the limiting factor is the permission system. If you could optionally add permissions to objects I don’t think the object model would play much of a role in the performance of the site. The next issue would be how many object sequence numbers there are: with Oracle I think you could create objects till the sun burns out, but with a 32-bit number you could have a problem.

I think the current design could do 10 reasonable pages per second on commonly available hardware. I think the limiting factor is the permission system. If you could optionally add permissions to objects I don’t think the object model would play much of a role in the performance of the site.

Yes, I think this is true. It depends on what one considers "commonly available hardware," but if you're willing to think in terms of a modern dual-processor Pentium or Athlon with a GB or two of RAM, I think such a goal is achievable.

The next issue would be how many object sequence numbers there are: with Oracle I think you could create objects till the sun burns out, but with a 32-bit number you could have a problem.

I just ran a quick calculation and creating ten objects a second would require a bit under 5000 days to overflow a 32-bit sequence. By then I expect to be running on 64-bit hardware :) A site creating ten objects a second would be an extraordinarily busy site, of course, so in practice overflow won't be a problem in the near term.
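For anyone who wants to check the arithmetic, here is the calculation spelled out (assuming an unsigned 32-bit counter; a signed one overflows in roughly half the time):

```python
# How long a 32-bit sequence lasts at ten new objects per second.
SECONDS_PER_DAY = 24 * 60 * 60            # 86,400

days_to_overflow = 2**32 / 10 / SECONDS_PER_DAY
years = days_to_overflow / 365.25
print(f"~{days_to_overflow:,.0f} days (~{years:.1f} years)")  # ~4,970 days, ~13.6 years
```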

PostgreSQL uses 32-bit transaction IDs. Standalone statements run in an "invisible" transaction in order to support atomicity, so this number doesn't just increase when you explicitly wrap statements in BEGIN/END. Wraparound used to kill PG and was only solved in PG 7.1, IIRC. My point is that the PG group just started seeing bug reports related to the wraparound issue about a year ago ... it took a very long time to overflow in the real world.
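For what it's worth, on a recent PostgreSQL you can watch how quickly transaction IDs are being consumed. A minimal sketch, assuming psycopg2 and a server new enough to expose datfrozenxid (the connection string is a placeholder, and this machinery postdates the 7.1-era servers discussed above):

```python
# Minimal sketch: report transaction-ID age per database, i.e. how close
# each one is getting to the 2**31 wraparound horizon.
import psycopg2

conn = psycopg2.connect("dbname=openacs")          # placeholder DSN
try:
    with conn.cursor() as cur:
        cur.execute("SELECT datname, age(datfrozenxid) FROM pg_database")
        for datname, xid_age in cur.fetchall():
            print(f"{datname}: {xid_age:,} XIDs consumed since last freeze")
finally:
    conn.close()
```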