Forum OpenACS Q&A: Re: Debian stable or testing? apt-get or yum? etc.

Posted by Andrew Piskorski on
Roberto, excellent, that's much of the info I was looking for. So do you recommend Debian Stable plus backports, rather than running Debian Testing?

I don't want interactive configurability; I think it's evil. Yes, I see why package install-time configurability is very desirable and useful, so rpm not having anything like it is very bad. But it should not be interactive (except optionally), it should be scriptable. I should be able to make an API call asking, "what are the configuration questions I need to answer?", then save the answers in a script and automate everything.
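
Something like this sketch is what I'm after - and it sounds like debconf's tools can do roughly this (debconf-get-selections comes from the debconf-utils package, if I understand correctly):

    # on an already-configured machine: dump debconf's question/answer database
    debconf-get-selections > answers.txt

    # on a fresh machine: preseed the saved answers, then install
    debconf-set-selections < answers.txt
    apt-get install some-package    # uses the saved answers, no prompting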

Regarding Debian being "so much more than the packaging system", yes, I'm at least peripherally aware of that. Comments I've read elsewhere suggest that the real value added in packaging software (rpm or deb) is all the hard work that goes into (or should go into) keeping a very large set of packages organized, consistent, rational, and sane - avoiding dependency loops, etc. And here Debian would seem to still be substantially ahead of any other distribution, likely because they've had the prerequisite enabling tools like apt-get for a much longer time, and thus recognized and started working on the hard stuff much earlier.

I plan to mostly have fattish clients, not thin. (Exceptions might be certain special purposes, like if I ever had a small-ish diskless Beowulf cluster.) Which is to say, each client has its own disk and own installed software, but any data that needs to be backed up should be on the central file server.

Probably I'll end up with separate home directories on each machine, even though (depending on how installs and configuration updates are handled) that might conflict somewhat with the above. So far I don't really know much about LDAP, NIS, etc., and I do want many (though not necessarily all) machines in my home network to still function just fine even if the file server is down. If it's feasible to combine the best of both worlds - all user logins and home directories centrally controlled and consistent, while retaining the ability to temporarily (ideally, even permanently) operate "detached" from the central server - I'd appreciate pointers on how to do it.
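
From what I've read so far, one piece of the puzzle may be NSS lookup ordering: keep fallback accounts in the local /etc/passwd and list "files" ahead of the directory service in /etc/nsswitch.conf, so those users can still log in when the server is unreachable. A sketch, assuming the libnss-ldap package:

    # /etc/nsswitch.conf - consult local files first, then LDAP
    passwd: files ldap
    group:  files ldap
    shadow: files ldap

Home directories would still need local copies (or some caching scheme) to survive an outage, of course.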

Posted by Roberto Mello on
Definitely stable + backports. Testing is your worst option, because you don't get all the updated software of unstable, and security updates must first be backported to stable, then work their way through unstable -> testing.
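
In practice that's just an extra line in sources.list plus an apt pin so backports never override stable unless you ask for them; the archive URL and suite name below are placeholders for whichever backports repository you use:

    # /etc/apt/sources.list
    deb http://http.us.debian.org/debian stable main
    deb http://backports.example.org/debian stable-backports main

    # /etc/apt/preferences - stable wins unless you request the backport
    Package: *
    Pin: release a=stable-backports
    Pin-Priority: 200

Then something like "apt-get -t stable-backports install somepackage" pulls in an individual backport on demand.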

I didn't say debconf was not scriptable. It is. You can configure it to ask only the uber-critical things it absolutely must ask, and even then there are reasonable defaults. Your debconf answers are saved by default.
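
Concretely (a sketch; debconf/priority is the standard question that controls this):

    # ask only critical-priority questions from now on
    echo "debconf debconf/priority select critical" | debconf-set-selections

    # or suppress prompts entirely for a single install
    DEBIAN_FRONTEND=noninteractive apt-get -y install some-package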

Well, what you say regarding Debian is true, but the Debian policies (IMHO) are what make the difference. Take the menu system, for example. It's not enough for the infrastructure to exist; Debian policy requires every package shipping a graphical application to register itself with the menu system, so you get a consistent menu of applications regardless of which desktop environment/window manager you choose to use.
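
Registration is just a small file the package ships under /usr/lib/menu, roughly like this (the section name and paths here are illustrative):

    ?package(xterm): needs="X11" section="Apps/System" title="XTerm" command="xterm"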

Red Hat and Mandrake have adopted Debian's menu and alternatives systems, but their packages don't use them very much (yet). There were only a few alternatives set up on a Red Hat 9 install I did yesterday.
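
You can poke at the alternatives system directly to see what a distribution actually registers; "editor" is one of the standard alternative names:

    # show which program currently provides 'editor', and the candidates
    update-alternatives --display editor

    # interactively switch among the installed candidates
    update-alternatives --config editor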

For fat clients with the central file system replicated out to the client machines, you might want to look at an rsync-based solution. There is probably something on freshmeat for doing that.
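
At its simplest, that can just be a cron job on each client; the host and paths below are placeholders:

    # pull the server's home tree onto this client, removing stale files
    rsync -av --delete -e ssh fileserver:/export/home/ /home/

(Watch the trailing slashes - they tell rsync to copy the directory's contents rather than the directory itself.)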

-Roberto