Linux reliability

Last Updated on September 30, 2010 by Dave Farquhar

Linux reliability. Steve Mahaffey brought up a good point yesterday while I was off on a consulting gig, where I learned one of the secrets of the universe. But since it’ll bore a lot of people to tears, I’ll save that for the end.
I’ve found that text-based apps and servers in Linux are extremely reliable. As David Huff’s tagline reads, “Linux: Because reboots are for upgrades.” If you’re running a server, that’s pretty much true. Unless you have to upgrade the kernel or install hardware that requires you to open the case, you can go for months or years without rebooting it.

The problem with Linux workstations is that up until very recently, the GUI apps people want to run the most have been in beta. The developers made no bones about their quality, but companies like Red Hat and Mandrake and SuSE have been shipping development versions of these apps anyway. On one hand, I don’t blame them. People want programs that will do what they’re used to doing in Windows. They want word processors that look like Word and mail clients that look like Outlook, and if they’re good enough–that is, they don’t crash much more than their Windows equivalents and they provide nearly as much functionality, or, in some cases, one or two things MS didn’t think of–they’ll put up with it. Because, let’s face it, for 50 bucks (or for nothing if you just download it off the ‘net) you’re getting something that’s capable of doing the job of Microsoft packages that would set you back at least $1,000. Even if you just use it for e-mail and Web access, you come out ahead.

The bigger bone I have to pick with Red Hat and Mandrake and, to some extent, even SuSE is where they put experimental code. I don’t mind experimental desktop apps–I’ve been running Galeon since around version 0.8 or so. But when you start using bleeding-edge versions of really low-level stuff like the C compiler and system libraries just to try to eke out some more performance, that really bothers me. There are better ways to improve performance than using experimental compilers. Not turning on every possible daemon (server) is a good start.
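To give one quick example (this assumes a Red Hat-style system with the stock chkconfig and service tools; other distros use different commands), trimming the daemons that start at boot takes about a minute:

chkconfig --list        # see which services start in each runlevel
chkconfig lpd off       # keep the printer daemon from starting at boot
service lpd stop        # and shut it down right now

Fewer daemons means more memory left for real work and fewer things listening on the network that can break or be broken into.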

Compile beta-quality apps with a compiler that’s beta quality itself, and throw in every other bleeding-edge feature you can think of, and you’ll end up with a system that has the potential to rival Windows’ instability. Absolutely.

That’s one reason I like Debian. Debian releases seem to take as long as the Linux kernel does, and that’s frustrating, but reassuring. You can install the current stable Debian package, then add one or more of the more desirable apps from either the testing or unstable tree (despite the name, Debian unstable’s stability seems comparable to Mandrake) and have the best of all worlds. And when a .01 release of something comes out (which it always seems to do, and quickly) it’s two commands to upgrade to it.
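To illustrate (galeon here is just a stand-in for whatever package you want, and it assumes you’ve added the testing or unstable tree to /etc/apt/sources.list), that upgrade looks something like this:

apt-get update                        # refresh the package lists
apt-get -t unstable install galeon    # pull the newer build from the tree you track

Apt sorts out the dependencies and leaves the rest of the system on stable.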

It’ll be interesting to see how Lycoris (formerly Redmond Linux) pans out. Lycoris appears to take a more conservative approach, at least with the number of apps they install. If that conservatism extends to the versions of those packages they install, it’ll go a long way towards extending server Linux’s reliability to the desktop.

Debian is intimidating. I find it less intimidating than Slackware, but it does zero handholding during installation. So generally I recommend someone start with SuSE or Mandrake or Red Hat, get comfortable with how things are laid out, get familiar with PC hardware if they aren’t already, and then, once feeling brave, tackle Debian. Debian is hard to install, but its quality is pristine and it’s exceptionally easy to maintain. Debian developers try to justify the difficulty of installing it by saying no one ever has to install it twice on the same PC, and they’re right about that second part. Eventually I expect they’ll take the installer from another distro that’s based on Debian to make it easier, but it won’t be in Debian 3.0 and it may not make it into 3.1 either.

The secret of consulting. My employer sent me off on a consulting gig yesterday. The main reason for it, I suspect, is my training as a journalist. It means I can ask questions, keep track of the answers, and make a PowerPoint presentation that looks decent.

Consultants get a bad rap because they’re notorious for not knowing anything. You pay lots of money to have someone who knows nothing about you and potentially nothing about your problem come in and ask questions, then come back later and give you a dog-and-pony show featuring sugar-coated versions of your answers and little else.

I won’t say who my client is, nor will I say who my employer is. What I will say is that my partner in this endeavor knows a whole lot more about the subject matter than I do. I’ll also say that the two of us are good researchers and can learn very quickly. Our regular job titles attest to that. We both have liberal arts degrees but we primarily work as systems administrators. We didn’t learn this stuff in school.

Up until Monday, I knew nothing about our client. Absolutely nothing. Up until yesterday afternoon, I knew nothing meaningful about the client. I knew its name and what its logo looked like, the name of one person who worked there, and I had a vague notion what they wanted to know.

I think that was an advantage. We both asked a lot of questions. I wrote down the answers quickly, along with whatever other information I could glean. We left three hours later. I had six pages of typewritten notes and enough documents from them to fill a standard manila file folder. We knew what they didn’t want, and we knew they were willing to throw money at the problem.

There’s such a thing as knowing too much. One of the solutions they’re considering is overkill. The other is underkill. The difference in price between them is about 3 times our consulting fee. It took me another hour’s worth of research to find something that will give them the bare minimum of what they need for about $500 worth of additional equipment on top of the low-ball figure. When you’re talking about a high-ball figure in excess of $40,000, that’s nothing. I found another approach that basically combines the two; it will double the cost of the low-ball figure, but still save them enough to more than justify our fee.

I don’t know their internal politics or their priorities on the nice-to-have features. My job isn’t to tell them what to buy. Nor is it my job to give them my opinion on what they should buy. My job is to give them their options, based on the bare, basic facts. Whatever they buy, my feelings won’t be hurt, and there’s every possibility I’ll never see them again. They’ll make a better-informed decision than they would have if they’d never met me, and that’s the important thing to all involved.

I never thought I’d be able to justify a role as a high-priced expert on nothing relevant. But in this case at least, being an expert on absolutely nothing relevant is probably the best thing I could have brought to the table.

And since we haven’t done a whole lot of this kind of consulting before, I’ll get to establish some precedents and blaze a trail for future projects. That’s cool.

That other thing. There’s a lot of talk about the current scandal in Roman Catholicism. It’s not a new scandal; it’s been a dirty little — and not very well-kept — secret for years. There’s more to the issue than we’re reading in the papers. I’ll talk about that tomorrow. I come neither to defend nor condemn the Roman Catholic church. Its problems aren’t unique to Catholicism and they’re not unique to Christianity either. Just ask my former Scoutmaster, whose filthy deeds earned him some hefty jail time a decade and a half ago.

Stay tuned.


3 thoughts on “Linux reliability”

  • March 22, 2002 at 5:18 am

    About all I know about system libraries and compilers is that I try to grab a lot of the C and Perl stuff in the initial install. Hopefully this avoids some of the "dependency hell" that one runs into when trying to configure/make/install applications. However, if what I grab is unstable junk, that’s a very bad thing. I had thought that modern 32-bit OSes were supposed to be able to keep rogue apps from transgressing into their memory space and crashing the whole computer, but maybe that’s not just up to the kernel, but also to some of the other lower-level parts.

    I’ve been thinking about trying to grab a cheap PC at a pawn shop or something and play with Debian. When you talk about hardware familiarity, are we talking really ugly stuff like manually assigning IRQs, or just things like knowing the make of your video card and whether it’s PCI or AGP? What books do you like in terms of a good overview of Linux "under the hood"? Shell commands I have reference books for, and I know how to use Gnome and KDE and run applications. But things like which scripts do what at boot (I know it differs by distro), what’s really going on under the hood, and which major system files control what are things that I’m still pretty vague on. Even stuff like finding that it’s trivial to move an app to another directory, and that it might even still run, was new. The trick seems to be learning which scripts may reference the app and have to be changed. A concise and readable coverage of important things like that for system setup and maintenance would be nice to have.

    As for consulting, sounds interesting and possibly quite fun. However, I’d guess that it might, at the lower levels, be like my experience in public accounting was: all about billable hours. At higher levels, probably mostly about selling.

  • March 22, 2002 at 11:00 am

    To install Debian you need to know the make of your video card, yes (PCI vs AGP vs VESA vs ISA is irrelevant, fortunately) or, more specifically, which X server supports your particular card. Knowing your chipset and searching XFree86.org usually does the trick.

    You also need to know which kernel module supports your network card, in most cases. For that, a web search will usually suffice nicely. It’s possible to make Linux networking plug and play (if you’re using PCI NICs) but none of the commercial distros are doing it, to my knowledge.
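    If it helps, here’s roughly how I go about identifying the hardware (the grep strings work because of how lspci labels the devices, and 8139too is just an example module for the common Realtek cards):

    lspci | grep -i vga         # identify the video chipset, then match it to an X server at XFree86.org
    lspci | grep -i ethernet    # identify the network chipset
    modprobe 8139too            # load the matching driver module for the NIC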

    I haven’t seen a good mid-level reference for Linux (it’s all either from the user perspective or the developer perspective). I’ve picked up bits here and there. Hmm…. Now you’ve got me thinking.

    And yes, you’re right, low-level consulting is all about billable hours, and high-level consulting is all about convincing the other firm that you’re well-equipped to solve their problem and selling them on your people.

  • March 22, 2002 at 12:49 pm

    Steve – a good place for you to look would be the FHS (Filesystem Hierarchy Standard).

    http://www.pathname.com/fhs/

    This only tells you what goes where and why, but it’s a good start for the kind of information you’re wanting. Just remember:

    /usr/local = Program Files
    /usr/bin and /bin = Windows
    /opt = (for programs that need their own *large* directory hierarchy)
    /home = My Documents

    If you’re really snazzy, you can do what I do:

    http://www.linuxfromscratch.org/

    Install LFS as per the book and when you add new programs install them all under /opt. That way, if you ever want your system to be totally back to its original just-built state you can:

    rm -rf /opt

    You’ll still have a script or two in /etc (like your registry – only using .ini files instead of a database, by the way).
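    To give a rough idea (foo-1.0 is just a placeholder name), a typical source build kept under /opt goes like this:

    ./configure --prefix=/opt/foo-1.0    # confine the whole package to its own directory
    make
    make install

    Everything the package installs lands under /opt/foo-1.0, so getting rid of it later is a single rm away.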

    There are all kinds of places for you to learn about bootscripts. The lfs-dev mailing list, believe it or not, is a really good one. 🙂

    Yes, this is blatant LFS advocacy, but it teaches you a lot that you wouldn’t have otherwise learned from just using a regular distribution. You are, after all, compiling your own distribution from scratch.
