Forum OpenACS Q&A: Debian stable or testing? apt-get or yum? etc.
So far for my Debian systems, I've gotten away with just using Debian Stable plus selected adulterations of the "Ah, screw the package system for now, just install the source from the tarball like I'd do on Solaris" sort. But in the long run that is neither acceptable nor desirable to me. E.g., trivially, even just running the latest OpenACS means the Debian Stable PostgreSQL package is much older than what I want.
Until Yum came out, I wouldn't have even considered choosing an rpm-based distribution over Debian. But the rpm-based distributions seem to keep their "stable" packages more recent, which is attractive, so if Yum now gives me everything apt-get did...
Also, every time I've ever installed any Linux (or Windows, for that matter), I've always ended up making lots of manual configuration changes to fix things the way I want, and I'm just plain tired of that.
I'm thinking about maintenance and turnover in a home network with at least a half dozen or so machines (file server and DNS proxy, MP3 machine in the living room, laptops, some other random server, misc. desktops scattered around, etc.). Manually fixing all the same things over and over again in almost-but-not-quite the same way for each individual machine - that is to say, traditional system administration - is something I've grown to despise. Fooling with just two or so Linux boxes at any one time has been enough to convince me of that.
Therefore, over and above just getting software installed onto machines in a relatively sane fashion, ideally, I also want some form of version-controlled, programmatic multi-host configuration control. Robert G. Brown touched on something sounding rather like that recently, for what I believe is his home network, complete with automatic installs via PXE ethernet booting. (His editorial moan here is also apropos.)
Which is better, yum or apt-get? And, perhaps much more importantly, what other tools need to be in the mix for a complete "install, configure, and control all my N arbitrary Linux boxes" solution? Biased towards simplicity please, where we may if necessary assume that N << 100, probably more like 5 to 10 or 20 max. (I neither expect nor particularly want to ever apply my new-won knowledge to maintaining bazillions of machines at some huge company.)
I think Yum might not handle source packages yet (bad...), but other than that does it have any downsides compared to apt-get? What I've read about it so far (granted, all written by Yum authors or users) suggests that Yum is cleaner and simpler than apt-get.
Which is better, Debian or one of the zillion rpm based distributions? And I don't mean religiously, I mean in terms of which will make my life easier in scenarios like the above - vague as that may be.
All good advice, war stories, thoughts, comments, etc. appreciated. ;)
"When Microsoft stops supporting Windows NT in 2004, it will leave some 2 million users without new security patches, and will require most of these users to develop a strategy to migrate quickly from the discontinued software. IBM is helping its business customers to move to Linux now."
I would love to know what independent advice IBM customer service can give you, and whether they can convince you/us of a good choice and strategy.
Many other people share this need. So they've created repositories of packages backported to Debian stable releases. Some examples are http://www.backports.org/ and http://www.apt-get.org/ (this one being more a collection of URLs to third-parties' backports).
Many Debian packagers also make backports of their packages to stable, because they use it too. Often they are made available in apt-gettable repositories in http://people.debian.org/
Recently I installed OpenACS 5 on a stable machine and had no trouble. Oliver Elphick (PostgreSQL's package maintainer) makes backports available at http://people.debian.org/~elphick/
Regarding Yum, Debian is so much more than the packaging system. RPMs still have no way to be interactively configured (like with debconf). With the notable exception of Mandrake, no other RPM-based distribution uses the menu system. I installed apt4rpm and it leaves much to be desired. And Debian has so many more packages readily available. This is especially relevant for a desktop-type machine, but also for a server. The other day I couldn't find IceWM RPMs for a server, for example.
As for your maintenance issue, I have faced the same. I think I am going to move towards a thin client approach. I'll have a home server doing everything, and all my other machines (the kitchen's, my wife's, my son's, everything except my laptop) will be thin clients. That way I only have to manage one machine.
And since I'm researching OpenMosix for my master's, I will put all the machines on it, so processes are migrated and I can use some of the processing power of those machines. The OpenMosix web site has documentation on how to make OpenMosix work with LTSP.
I don't want interactive configurability, I think it's evil. Yes, I see why package install-time configurability is very desirable and useful, so rpm not having anything like that is very bad. But it should not be interactive (except optionally), it should be scriptable. I should be able to make an API call saying, "what are the configuration questions I need to answer?", then save the answers in a script and automate everything.
Regarding Debian being "so much more than the packaging system", yes, I'm at least peripherally aware of that. Comments I've read elsewhere suggest that the real value-added in packaging software (rpm or deb) is all the hard work that goes into (or should go into) keeping a very large set of packages organized, consistent, rational, and sane - avoiding dependency loops, etc. And here Debian would seem to still be substantially ahead of any other distribution, likely because they've had the pre-requisite enabling tools like apt-get for a much longer time, and thus recognized and started working on the hard stuff much earlier.
I plan to mostly have fattish clients, not thin. (Exceptions might be certain special purposes, like if I ever had a small-ish diskless Beowulf cluster.) Which is to say, each client has its own disk and own installed software, but any data that needs to be backed up should be on the central file server.
Probably I'll end up with separate home directories on each machine, even though (depending on how installs and configuration updates are handled) that might be in some conflict with the above. So far I don't really know much about LDAP, NIS, etc., and I do want many (though not necessarily all) machines in my home network to still be able to function just fine even if the file server is down. If it's feasible to combine the best of both worlds - all user logins and home directories centrally controlled and consistent, but with the ability to temporarily (ideally, even permanently) revert to operating "detached" from the central server - I'd appreciate pointers on how to do it.
I didn't say debconf was not scriptable. It is. You can configure it to show you only the uber-critical things it absolutely must ask, and even then there are reasonable defaults. Your debconf answers are saved by default.
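For anyone wanting the scripted route: debconf answers can be dumped on one machine and preloaded on the next, so installs never stop to ask. A sketch, with made-up example values (debconf-get-selections and debconf-set-selections come from the debconf-utils package):

```
# debconf selections fragment (package and values are just examples):
postfix postfix/mailname string mail.example.com
postfix postfix/main_mailer_type select Internet Site

# Dump the answers a configured machine already gave:
#   debconf-get-selections > answers.txt
# Load them on a fresh machine, then install without prompts:
#   debconf-set-selections < answers.txt
#   DEBIAN_FRONTEND=noninteractive apt-get -y install postfix
```

That gives you exactly the "save the answers in a script and automate everything" workflow asked for above.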
Well, what you say regarding Debian is true, but the Debian policies (IMHO) are what make the difference. Take the menu system, for example. It's not enough for it merely to exist: every package of a graphical application must be required to register itself with the menu system, so that you get a consistent menu of applications regardless of which desktop environment/window manager you choose to use.
Red Hat and Mandrake have adopted Debian's menu and alternatives systems, but their packages don't use it very much (yet). There are only a few alternatives options on a Red Hat 9 install I did yesterday.
With fat clients and keeping your central file system replicated to client machines, you might want to look at an rsync solution. There is probably something on freshmeat for doing that.
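A minimal sketch of the rsync approach, with invented hostnames and paths, pulling the server's canonical copy down to each fat client nightly:

```
# crontab fragment on each client (the host "fileserver" and the
# /srv/shared paths are hypothetical):
0 3 * * * rsync -az --delete fileserver:/srv/shared/ /srv/shared/
```

Note that --delete makes the client an exact mirror, so anything created locally under that tree disappears on the next run; keep local work elsewhere or sync it back separately.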
After adding lines like these to /etc/apt/sources.list:

deb http://www.backports.org/debian/ woody smartmontools
deb http://www.backports.org/debian woody mozilla

if no version of the backported package exists at all in Stable, as is the case for e.g. smartmontools, then the package does not show up when I do "dpkg -l packagename", but if I install it with "apt-get install packagename" it installs just fine.

On the other hand, if the same package does exist already in Stable, as is the case with Mozilla, then nothing I do with apt-get and friends will install or upgrade the package!

What am I missing here? How do I upgrade to backported packages like Mozilla 1.6?
Oh yes, you did an "apt-get update" after changing /etc/apt/sources.list, right?
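In case it helps: the usual culprit when an already-in-Stable package refuses to upgrade is apt's default priorities, under which the plain woody version wins. A sketch of two ways around it (the Mozilla version string below is made up; check "apt-cache policy" for the real one):

```
# See which versions apt knows about and where they come from:
#   apt-get update
#   apt-cache policy mozilla
# Option 1: ask for the backported version explicitly, e.g.:
#   apt-get install mozilla=2:1.6-1
# Option 2: pin the backports.org repository above Stable,
# in /etc/apt/preferences:
Package: *
Pin: origin www.backports.org
Pin-Priority: 600
```

With the pin in place, 600 outranks the default priority of 500 that the Stable archive gets, so a plain "apt-get upgrade" will prefer the backports.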
I'm afraid, though, of leaving my SuSE install: the fear is of not being able to install Debian and losing SuSE. In that case I would be left with an unusable computer, a dead box and no connection to the internet (no advice, no help). I have just one computer, and I'm on my own.
Also, I could not enable the floppy/CD/CD-writer on my computer (Linux is not as easy as they say), so there is no way for me to back up any data (apart from sending a very few files to Yahoo Suitcase).
Joel --> be happy with unstable.
Roberto --> stable + backports.
Please Joel and Roberto, could you suggest which debian version will be my doom? Which will I try and install?
I've tried to install OpenACS for a long time, but it was far too difficult for me. Now that 5 is ready I will try again, in the hope that the instructions are really step-by-step, maybe with examples, and in plain English (not computer jargon).
First I will (try to) install debian. Which one, stable or unstable?
P.S. The only debian experience that made me happy is knoppix. I've read of some knoppix/OACS thing. Would that be interesting to a basic user (me)? Or is it just a demo disk, nothing usable?
Thanks for your help.
Since I've just spent most of two days trying to use debian to upgrade kde from 3.1 to 3.2, my advice today is to not use a computer.
Never had a problem; actually the quality of its software (i.e. drivers that just work) contributed to Debian becoming the platform of choice for me.
I also find it hard to agree with misconceptions like, 'Debian = hard to install'. I am still a beginner in many respects, but hard to install Debian isn't. I usually go through the defaults and then apt-get install whatever I need.
Regarding your network card, I found that sometimes it will not install the module if I try to configure it manually at install time, yet the card gets identified automatically with no need for any further module install. If that's not the case, you could also press F3 and select bf24 when you boot your install disk/CD; this will give a greater choice of modules/drivers.
I am afraid that I will *have to* install debian.
Between stable and unstable, which one do most of you suggest, keeping in mind that I would eventually try and install OACS, and that I'm looking for the easiest choice, not the most performant one?
Network cards not working is a common problem. That usually goes away when you install a newer kernel, which is usually the first thing I do. Of course, that's annoying, because if you're not connected to the network...
There are a lot of alternative ways of installing debian, though, so sometimes people who think Debian is hard to install just haven't tried one of the easy ways..
Thanks again to Joel for the Knoppix advice: that was the very base for me. I wouldn't have dared to make any move without that lifesaver cd. I then tried and burned many different disks from the debian site (and others): they all went to the rubbish bin after some weeks and many efforts. Thanks to Jade: "There are a lot of alternative ways of installing debian, though, so sometimes people who think Debian is hard to install just haven't tried one of the easy ways..." I looked for something easy and found it at the end: Libranet.
You just start Knoppix, download Libranet 2.7 and a few minutes later debian is installed.
They say it is 100% debian woody: I do not know how to check myself, so I trust them. Now it is time to try and see whether there is a way to install OpenACS.
I recently took a look at the documentation: very complicated, convoluted, lots of different software, far from easy. I will try anyway.
If somebody knows of an easy way to install OpenACS, please let me know.
Libranet sounds interesting, but there don't seem to be any docs available on the web, and it's not really clear what you end up with once it's installed. I.e., can you easily update from the official Debian package repositories, or are you locked into Libranet somehow?
There's more helpful Debian install info in the Jan. 2004 Best Configuration in Debian for Openacs Development thread.
"It's not really clear what you end up with once you're installed." They say you get 100% Debian woody plus an "adminmenu" tool which lets you easily do things that expert people do by hand. I tried it, and it seems true: skilled technicians will laugh at me, but I was happy to find such a thing inside my PC. "Can you easily update from the official Debian package repositories, or are you locked into Libranet somehow?" Try reading www.osnews.com/story.php?news_id=5061: it was the only manual I followed, and it was enough for me to install Debian.
They say that by using some apt-get command you can easily do many things: update software packages, switch to sarge and sid, and other things.
Personally, I didn't understand the difference between woody/sarge/sid yet. The people at the debian.org site think they explained it clearly. Anyway, I did not understand. So, for the moment I do not care, and will be content with what I have.
Your timing is very good - check out the other post where Galileo University has released a debian package for .LRN:
I think I will follow the documentation step by step and try installing OpenACS. There are many black holes, though; the first one is: the Debian documentation strongly suggests installing software with that apt-get command, while the OpenACS manual often says something different (e.g. "Debian users, especially Debian stable users, should install PostgreSQL from source as detailed below...").
I am a bit puzzled.
The packages handle quite a bit of the initial configuration, do some security tightening, etc. For Debian stable, packages are available from people.debian.org/~elphick/ as posted earlier on this thread.
I have installed OpenACS on Debian woody. I grabbed the PostgreSQL woody packages, grabbed the tcl8.4 package from backports.org (IIRC) and re-compiled the AOLserver package from unstable.
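In case anyone wants to repeat that recompile step: it only takes a deb-src line for unstable plus a couple of commands, roughly as follows (I'm not certain of the exact package name; it may be aolserver or aolserver4):

```
# /etc/apt/sources.list: a source-only line for unstable
deb-src http://ftp.debian.org/debian unstable main

# then, on the woody box:
#   apt-get update
#   apt-get build-dep aolserver4
#   apt-get -b source aolserver4
#   dpkg -i aolserver4_*.deb
```

The catch is that build-dep may demand newer build tools than woody carries, in which case the backport gets more involved.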
It'd be nice if the AOLserver package maintainer made woody backports of those packages on his people.debian.org area. Maybe he already does.
I run Debian unstable on three different machines (one of them is a server) and only had one breakage in a desktop machine in the last year. The breakage was with GNOME, and was easy to solve.
In the one server I moved to unstable, we were running Woody + backports, but it became too painful. For a while we had a chroot with unstable, just so we could have some sid packages. So we (the group handling the machine) just bit the bullet and went to "careful unstable", meaning we watch the mailing lists and are careful before doing an upgrade.
Woody is the current release of the stable branch. It is very, very stable, tested on all 11 architectures Debian supports. There are policies in place that make sure only certain updates make it into revisions of the stable distribution (such as security updates, critical bug fixes, etc.).
I think Woody is up to r3 now (third revision). A new version of a certain application package does not just make it into new revisions, unless it fixes some critical bug. This is because a new version may also introduce incompatibilities with other packages that could cause a ripple effect of not-well-tested updates.
Testing is what will become the next stable distribution once it's released. Currently it's called "Sarge" (all releases are named after Toy Story characters, for historical reasons). Sarge has a completely new installer, among other things. Once Sarge has gone through severe testing on all the architectures Debian supports, and all "release-critical" bugs have been resolved, it will be released as stable.
Unstable ("sid", the evil boy who tied Buzz Lightyear to a rocket) is where new packages (or new versions) are uploaded to. Once they've been tested, they make it to the testing distribution. Sometimes things break in sid, but not often. Usually they're fixed in a couple hours.
Experimental is like the wild west. A place for the strong of heart. Sort of another dimension. But it's kinda cool.
So basically, if you have a desktop machine, then you probably want to stick with unstable. If you have a server, you want stable. If you're a Debian developer, you want unstable (and hopefully you have a stable machine around so you can backport packages) and testing.
I've done some Debian installs through gnoppix and Morphix as well (Knoppix-based distributions; Knoppix in turn is based on Debian). These live-CD distributions do the hardware detection and much of the desktop configuration (e.g. X) for you. Morphix is modular and there are several flavors available. Knoppix/gnoppix is not modular, and the installer is less flexible (i.e. it wants a strict pre-defined partitioning).
So if you want a quick desktop machine, I'd say go with Morphix.
Hope that clears things up.
If I were to run a Debian unstable desktop, immediately after each "apt-get upgrade" I would want to tag all current packages, so that if anything later broke I would have an automatic record of which package versions I was running when, to help me figure out what went wrong. Is there some easy way to do that?
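Nothing stock does exactly that, as far as I know, but a tiny script run right after each upgrade gets you most of the way (the snapshot directory name here is my own invention):

```shell
#!/bin/sh
# Record a timestamped snapshot of every installed package and version,
# so a later breakage can be diffed against the last known-good state.
snapdir="$HOME/pkg-snapshots"
mkdir -p "$snapdir"
stamp=$(date +%Y%m%d-%H%M%S)
# --get-selections is terse and diff-friendly; dpkg -l keeps the versions
dpkg --get-selections > "$snapdir/selections-$stamp" 2>/dev/null
dpkg -l > "$snapdir/dpkg-l-$stamp" 2>/dev/null
echo "snapshot written to $snapdir"
```

When something breaks, diff the two newest files in the directory to see exactly which packages moved; checking the whole directory into CVS would give the version-controlled history asked about at the top of this thread.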