Forum OpenACS Q&A: "Goldilocks" Server size/type to max out a home DSL line

Hi,

What do you think would be appropriate server hardware today for
someone running an OpenACS server/service from their home DSL line?

I'm wondering what would be an appropriate server PC that's big enough
to max out whatever requests a DSL line at 384 Kbps, 768 Kbps, or 1.5 Mbps could
handle -- yet not be so big (i.e. expensive) that it would be a waste
of money.
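
For a rough ceiling, the arithmetic is simple. Here's a back-of-envelope sketch in Tcl (the 30 KB average page size is purely my assumption -- plug in your own):

    # Rough ceiling on page views/sec a given line speed allows.
    # The 30 KB average page size is an assumption -- adjust to taste.
    proc pages_per_sec {line_kbps page_kbytes} {
        expr {double($line_kbps) / ($page_kbytes * 8)}
    }

    foreach speed {384 768 1500} {
        puts [format "%4d Kbps line: ~%.1f pages/sec" $speed [pages_per_sec $speed 30]]
    }

Even the 1.5 Mbps line tops out around six such pages a second sustained, which is already a pretty busy site.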

The new $220 Tyan Tiger S2460, the $500 Tyan Thunder K7's little
brother, looks nice.  Dual AMD 1.2 GHz CPUs with 1 Gig DDR PC2100 (4
DIMM slots) on that board would be sweet, I would think.  I wonder how
much overkill it might be.

Maybe a single CPU platform would be more than adequate?  Abit's KG7
has four DIMM slots also; I don't know if that board would be
appropriate for a server.

In general the AMD CPUs seem to offer a lot of bang for the buck
compared to Pentium IIIs right now -- or to equivalently performing
P4s, judging from the benchmarks I've read.

Memory is certainly reasonable.  How much memory is enough -- and how
much is likely too much?

Also, are IDE drives (IBM 60GXP) a possibility -- or is SCSI a
necessity?  Any suggestions on good controller cards -- or on RAID?

I know the answer as always is "it depends", but please make whatever
assumptions you want on average page size, DB calls per request or
whatever and do share.

Thank you very much!

Louis

I can't resist ... "it depends!"

If you're going to be serving up MP3s and not much else, maxing out your DSL line is going to be easy -- assuming anyone visits your site.

If you're going to be serving up plain, mostly-text, community-oriented OpenACS pages then you'll have a very hard time attracting enough visitors to max out your DSL line, at least if you buy one of the higher-speed options you've listed.

Perhaps a better way of thinking about the question is to set a budget
number, then to start thinking about trade-offs that get you a decent combination of performance and reliability within that budget.  RAID 1
(mirroring) is nice.  SCSI is nice.  RAM > 1/2 GB is probably excessive, at least until proven otherwise.  Dual CPU machines are nice but probably overkill for your situation.  Hot swap is nice, fairly expensive, and not really necessary unless your under-the-kitchen-table DSL server is running a very mission-critical service, in which case it should be located in a datacenter with more disincentives to casual burglary anyway.

With prices the way they are today, a nice motherboard with integrated SCSI and NIC, 1/2 gig of DDR, a pair of fast SCSI platters mirrored by software RAID, and a pair of decent Athlon MP CPUs are going to set you back a couple of thousand bucks.

And you'll probably be able to play Quake on it simultaneously without your website visitors ever noticing ...

Working backwards, a single-processor machine with the cheaper non-MP Athlons and 256MB RAM will save you a few hundred bucks.

Of course, an alternative is to take the "dumpster diver" approach and
to see just how much traffic you can support on a system that costs you less than, say, $100 ... in some ways that would be more fun.
Frankly, any PC manufactured 3 years ago (e.g., a 400MHz Pentium II w/128MB RAM) could easily support a home-based Web server on a DSL line.

For a short while, I ran a couple of static-page Web servers on my SDSL (symmetric DSL: 784 Kbps downstream, 784 Kbps upstream) that were averaging a couple hundred distinct visitors, tens of thousands of page views, and approximately half a gigabyte of throughput a day. There were days when it maxed out at double those figures.

Granted, I was running sites with static pages, but average page sizes were generally larger than any ACS-served page. CPU activity was insignificant (so much so that I ran a SETI@home instance with the leftover CPU cycles). I'm personally a big SCSI fan, but any IDE drive could have served up the pages.
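
For perspective, half a gigabyte a day is a surprisingly small average load for a 784 Kbps line; a quick conversion (figures from the paragraph above):

    # Convert daily transfer volume to average line utilization.
    # 0.5 GB/day and the 784 Kbps SDSL line are the figures quoted above.
    set gbytes_per_day 0.5
    set line_kbps      784

    set avg_kbps [expr {$gbytes_per_day * 1024 * 1024 * 8 / 86400.0}]
    puts [format "average: ~%.0f Kbps (~%.0f%% of the line)" \
        $avg_kbps [expr {100.0 * $avg_kbps / $line_kbps}]]

That's roughly 49 Kbps -- about 6% of the pipe -- so it was the peaks, not the averages, that briefly saturated it.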

My system: 733MHz PIII on an Asus CUSL2 motherboard (133MHz front-side bus), 384MB RAM, an Ultra160 root drive and some Ultra2 LVD drives. I have no doubt that I could have served up the sites with my previous computer, a 400MHz PII on an Asus P2B-S (100MHz front-side bus), 256MB RAM, and some Ultra2 SE drives (the PC I built three years ago).

While I admit I haven't run Classic ACS or OpenACS recently, I ran them on my old 400MHz PII system (and other similarly spec'ed PCs), and I believe your planned system is total overkill. Buy all the memory you want; it's so incredibly cheap nowadays.

Goodness gracious.  Dual fast SCSI drives, 1/2 GB of RAM - nuts!

One of my clients has a full OpenACS install (most features aren't turned on, though that doesn't affect the request-processor checks on each page) along with some custom db-backed stuff I wrote.  They didn't want to spring for the full server option, so I just charge them a few bucks to hang off www.zill.net.

The system runs Apache/PHP for a few small sites as well.

When I fired off a web-crawler on the site to pull down all the pages, the CPU was idle 90% of the time, and things stayed zippy.

When one of the Apache sites on the same server (dev.designharbor.org) got zapped by Slashdot, CPU load stayed low even then, and the OpenACS site kept cranking.  The slowness was due to bandwidth, not CPU.

What are the specs on this monster system?  An AMD K6-2 350MHz w/256MB RAM and IDE drives.

You can get a sub-$300 box from TigerDirect.com or a zillion other places that will handle anything you can send over the DSL line.

I have run OpenACS/mod_aolserver on a Sparc 10 with a 50MHz CPU -- it was a little slower, especially on the parts of the site that were a little clunky or brute-force to begin with, but it was actually good enough for development use.  The Sparc was manufactured in 1993.  If I had tuned it some more (e.g., memoization), it would have been fast enough to put into production.
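
(ACS has a util_memoize helper for exactly this kind of tuning; the sketch below is just the generic idea in plain Tcl, not the ACS API:)

    # Generic memoization sketch: run an expensive script once, then
    # serve later calls from a cache keyed on the script text.
    array set ::memo_cache {}

    proc memoize {script} {
        if {![info exists ::memo_cache($script)]} {
            set ::memo_cache($script) [uplevel 1 $script]
        }
        return $::memo_cache($script)
    }

    # e.g. memoize {some_expensive_db_query}  -- hypothetical call; the
    # first invocation does the work, the rest are cache hits.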

OpenACS 4 may well require more, but then again may not, if some of the 3.x cruftage has been removed.  It's still too early to tell.

My only concession is really the memory. For my system, half a gig of PC-133 would be US$60 from memman.com ($70 at my local PC components store). If you're running a home-based Web server, there's a good chance that you're firing up X and using the box as your workstation as well. Mozilla 0.9.x eats up tons of memory, and maybe you'd like to listen to MP3s with XMMS while you're hacking Tcl.

xref to put things into perspective.

Vadim,

The thread you linked to was a request for info on behalf of where I work -- a very large bank.

This thread is on behalf of myself -- with what I'm thinking of doing at home.  That's why I'm only asking about what will fill up a DSL line.  The thread you linked to refers to a nationwide corporate system that's still on the drawing board at work.  This thread and that one are apples and oranges.

FYI,

Louis

I am currently running OpenACS 3.2.5 on RedHat 7.1 using the RPM installation set from Jonathan Marsden (great job in the latest version). My hardware is a 1GHz Athlon with three 256MB DIMMs and a 30GB Western Digital HD. I have a 10/100Mb 3Com NIC on a Roadrunner cable modem behind a 10/100 Linksys 4-port BEFSR41 router.

I run the following virtual websites:

http://www.gilprice.com
http://www.opedworld.com
http://www.fltstd.com
http://www.pcs-sc.com

The above sites are served with Apache and use the phpWebLog PHP package; one uses phpNuke. All four tie into a MySQL (I know 😊 ...) DB. On port 8000 (http://www.gilprice.com:8000/) I run OpenACS. I'm migrating everything over to OpenACS and should be OpenACS-pure in a few weeks...
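
(For anyone curious how the two servers coexist: Apache keeps port 80 and AOLserver listens on 8000, which is a one-parameter change in AOLserver's nsd.tcl. The exact section path varies with AOLserver version, so treat this as a sketch:)

    # nsd.tcl sketch: put AOLserver's nssock driver on port 8000 so it
    # can coexist with Apache on port 80.  The ${server} variable and
    # section path are assumptions -- check your AOLserver version.
    ns_section "ns/server/${server}/module/nssock"
    ns_param   port     8000
    ns_param   hostname www.gilprice.com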

I occasionally run an X session for editing files or grabbing a file for download with Mozilla; other than that I don't use the machine for anything else. This configuration has been quite satisfactory for my needs so far... I have had no complaints from anyone, but I must admit there isn't any "dynamite" content yet. I DO have about 30 to 40 unique connections a day and have noticed no slowdown or stressing of the system. In fact, when monitoring CPU usage in an X session, my CPU shows about 6 to 20% utilization and memory peaks at 30%...

I have about $750 in it: $300 (CPU + motherboard), $160 (RAM), $210 (case + power supply + 2 extra cooling fans + floppy + CD-ROM + LS-120), and $80 (hard drive).

I don't know if this helps, but hit my sites and see how they are for speed if you desire...

One of the things I keep idly thinking about, but never actually try, is building a box with 4+ IDE drives and RAID 0/1. (Then I would run Debian Linux and ReiserFS on it, and plug it into a decent UPS, but none of that matters for this discussion.)

A Seagate 40 GB IDE drive costs $100. An IBM 36 GB SCSI drive is $400.

Now, I'm sure the SCSI is a much better drive, but 4x better? Am I really better off spending $400 on a single 36 GB SCSI drive, rather than buying four IDE drives, running RAID 0 or 0/1, and getting 80 GB of storage, all for my same $400? Somehow I doubt it. But I don't really know. People just say "SCSI is better", and I've never seen numbers showing me just how much better, which is what I'd need to understand the cost/benefit ratios of the different options.
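
The cost side, at least, is easy to put numbers on (prices as quoted above; RAID 0/1 -- a stripe of mirrors -- halves raw capacity, while RAID 0 keeps all of it):

    # Cost per usable GB for the options quoted above.
    proc cost_per_gb {total_cost usable_gb} {
        expr {double($total_cost) / $usable_gb}
    }

    puts [format "1 x 36GB SCSI:          \$%.2f/GB" [cost_per_gb 400 36]]
    puts [format "4 x 40GB IDE, RAID 0:   \$%.2f/GB" [cost_per_gb 400 160]]
    puts [format "4 x 40GB IDE, RAID 0/1: \$%.2f/GB" [cost_per_gb 400 80]]

So the IDE array is roughly 2x-4x cheaper per usable gigabyte; the open question is whether the SCSI drive's speed and reliability are worth that premium.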

Also, there are motherboards with integrated "hardware" RAID 0/1 on them, which typically seem to use a HighPoint HPT370 IDE controller chip. But they all seem to require "drivers" to use the RAID, which makes me confused as to just how "hardware" the RAID really is. Then there are the lower-cost IDE RAID PCI cards, which often seem to use the same HighPoint controller chip, but two of them instead of one. And finally there's always the option of just using software RAID. However, in my idle web browsing I've not stumbled across any good information by which to objectively evaluate these different options.

So if anybody knows about this stuff, I'm all ears. Post away. :)

Well, for starters, if you really have a useful 80GB database (2 x 2 40GB drives in RAID 0/1), will cost matter?

I can always find SCSI drives for about $200 at Fry's Electronics.  Last time I looked they had 9 GB 10K RPM IBM SCSI II's or U160s (I forget exactly).  My own server has modest 4.5 GB IBM SCSI II drives in it that I picked up for $180 just as they were being phased out for the 9 GB drives.

Integrated SCSI on a motherboard sets you back $150 or so.  So two SCSI platters in a RAID 1 configuration and on-board SCSI is doable for $600 or so.

IDE is fine for one or two disks (on separate IDE channels), if you get a brand/model that works reliably in UDMA mode.  Promise IDE controllers now work with Linux, though I'm not sure the code's in commercial distributions yet (you may have to find the appropriate patches/drivers yourself).  This starts looking attractive, because it's easy and relatively cheap to add a Promise IDE controller, pick up two more channels, and then have four platters each on its own channel (assuming your box's CD-ROM is off most of the time, and in any case that disk will be a slave, not a master).

> Promise IDE controllers now work with Linux though I'm not sure 
> the code's in commercial distributions yet (you may have to find 
> the appropriate patches/drivers yourself). 

RedHat Linux 7.1 includes the Promise drivers. I recently upgraded my system (PIII 500MHz, 384MB mem, 20GB IDE RAID1) from RH 6.2 to 7.1, and I was able to use my Promise UDMA IDE controller -- RH 7.1 found it without drama.

Between the RedHat upgrade and using the UDMA controller, the requests per second on my OpenACS system jumped from ~6.5 RPS to as high as ~30 RPS for static pages, and ~12 RPS for dynamic pages w/ ~2 DB queries.
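
(For anyone who wants to reproduce those numbers: ab from Apache is the usual tool, but a quick-and-dirty probe in plain Tcl works too. The URL below is a placeholder -- point it at your own server.)

    # Quick-and-dirty requests-per-second probe in plain Tcl: time n
    # sequential fetches and report the average rate.
    package require http

    set url "http://localhost:8000/"  ;# placeholder -- your server here
    set n   100                       ;# number of sequential fetches

    set usec [lindex [time {
        set tok [http::geturl $url]
        http::cleanup $tok
    } $n] 0]

    puts [format "~%.1f requests/sec" [expr {1e6 / $usec}]]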

See https://openacs.org/bboard/q-and-a-fetch-msg.tcl?msg_id=0002TO&topic_id=OpenACS&topic=11...