Linux is unrelated to extremism

The NSA’s spying on Linux Journal readers is precisely what’s wrong with NSA spying. Why? It paints with an overly broad brush.

Eric Raymond’s views on many things are on the fringes of what’s considered mainstream, but he’s not the kind of person who blows up buildings to try to get his point across.

And here’s the other problem. Does Eric Raymond even represent the typical Linux Journal reader? Odds are a sizable percentage of Linux Journal readers are system administrators making $50,000-ish a year, or aspiring system administrators who want to make $50,000-ish a year, who see knowing Linux as a means to that end.

It’s no different from targeting Popular Mechanics readers because someone could use information it publishes in ways you don’t agree with.

Linux sites I read regularly

I had a conversation with someone yesterday, and sometime during the course of it, the person said, “I could know more about Linux than so-and-so within four weeks if I had time.” To which I replied he’d probably know more within a week. Then he cited the whole time thing.
That’s a problem for everybody, but there are Web sites that’ll help you get sharp and stay sharp, without spending an inordinate amount of time at Google. Here are my favorites.

LinuxToday. (Daily) I’ve been reading this site on a regular basis for more than three years. At one point it became a really obnoxious advocacy/attack site, but that’s calmed down and it’s a lot more professional now. If a Linux story appears somewhere, a link to it will show up here at some point during the day.

NewsForge. (Daily) Not as prolific as LinuxToday; occasionally gets the story sooner, but also produces much more of its own content.

Slashdot. (Daily) Not strictly a Unix or Linux site, but when they run a Unix or Linux story, it’s worth digging through the comments. High noise-to-signal ratio, but you’ll find buried gems you’re not likely to find elsewhere.

Linux Weekly News. (Weekly) A weekly summary of the biggest Linux headlines. If you can only afford the time to read one Linux news source, make a habit of reading this one every Thursday. You’ll find the week’s biggest headlines, plus any new developments in Linux distributions, applications, and the kernel, all boiled down and organized in one place.

LinuxWorld and LinuxPlanet. (Weekly) Two online Linux magazines, well-written, informative and useful. New content is posted irregularly; you won’t miss anything if you just visit each once a week.

Linux Gazette. (Monthly) Another online Linux magazine. It was much “thicker” (more articles) a couple of years ago, but it’s still worth visiting once a month. In 1997, it was just about all we had and I don’t know if we’d have the others if it hadn’t been for LG.

Sys Admin, Linux Journal, and Linux Magazine. (Monthly) Online versions of print magazines. Not exactly beginner stuff; these have useful content for professional admins and developers. The latter two have a little bit of stuff for end users. Read them and learn what you can from them. If you’re looking for a tax write-off for yourself, or you have a couple hundred bucks left in your annual budget to burn, subscribe to these and think about buying their CD archives to get all the back issues.

Freshmeat. (Whenever) Every time a new open-source project is released, you’ll find the details here. Search here when you’re looking for something. Give it a cursory glance once in a while; you’ll find stuff you weren’t necessarily looking for but then wonder how you ever lived without it.

There are other sites, of course, but these are the sites that have stood the test of time (for me at least).

News analysis

Short takes. Yesterday was a newsworthy day in technology, and I’m sure there’s going to be a ton of misinformation about it emanating from both coasts, so we might as well set the record straight.
Poor quality control drives IBM from the hard drive business! Yeah, whatever. IBM makes one questionable model (and many GXP failures sounded more like power supply failures than hard drive failures), and suddenly everything they’ve ever made is crap. Guess what? Seven years ago you couldn’t give me a Seagate drive, because the drives they were making back then were so slow and unreliable. Maxtors were worse, and my boss at the time, who has a very long memory, nearly disciplined me a couple of years ago for specifying a Maxtor drive in an upgrade. But he’s a reasonable man and saw that the drive held up and performed well. Western Digital has been so hit-and-miss that I still don’t want to buy any of their drives, though their drives started to look better after they licensed some technology from… Old Big Black and Blue.

And the truth about GXPs: Regardless of how true the quality control allegations are, the drives themselves are the most innovative and advanced IDE devices ever commercially marketed. The platters are made using different materials and processes than conventional discs, which was supposed to make them more reliable. Expect that technology to come of age in a generation or two. The drives even include SCSI-like command queueing (the newest version of Linux’s hdparm allows you to turn this feature on; I have no idea if Windows switches it on by default). The successor to the 60GXP is going to be worth a second and a third look.
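If you want to see whether command queueing is actually on, hdparm can tell you. Here’s a minimal sketch, assuming hdparm supports the -Q flag (which reports the command queue depth on drives that offer the feature) and that the device is something like /dev/hda; run it as root and adjust the device name to taste.

```python
#!/usr/bin/env python3
"""Query a drive's command queue depth via hdparm.

Assumptions: hdparm on this system supports the -Q flag (queue
depth), the device name looks like /dev/hda, and the script runs as
root. This is an illustrative sketch, not a polished tool.
"""
import subprocess
import sys


def queue_depth(device):
    """Return the queue depth hdparm reports for device, or None."""
    result = subprocess.run(["hdparm", "-Q", device],
                            capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stderr.strip(), file=sys.stderr)
        return None
    # hdparm prints a line like " queue_depth  = 32"
    for line in result.stdout.splitlines():
        if "queue_depth" in line:
            return int(line.split("=")[1].strip())
    return None


if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/hda"
    print(f"{dev}: command queue depth = {queue_depth(dev)}")
```

A depth greater than 1 generally means the drive is queueing commands; a depth of 1, or no report at all, means the feature is off or unsupported.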

Wanna know what’s really going on? Hard drives aren’t very profitable. IBM has a history of spinning off questionable divisions to see if they can survive as smaller, more independent entities. The most famous recent example of this is Lexmark. That’s what’s going on here. IBM and Hitachi spin off and merge their storage divisions, and each company takes a stake in it. If the company mops up the floor with the competition, IBM and Hitachi make lots of money. If the company continues to bleed cash, IBM and Hitachi get nice tax write-offs. Either way, the shareholders are happy.

A number of years ago, IBM was a large producer of memory chips as well. In fact, you can open up a Mac manufactured in the mid-1990s, and chances are you’ll find an IBM-manufactured PowerPC CPU, one or more IBM-manufactured DIMMs, and an IBM SCSI hard drive. Making memory had its ups and downs, and during one of the many downturns in the 90s, IBM got out of the business. There was a time when Intel and AMD were in that business too (I have some old AMD DRAM chips on an expansion card somewhere, and I’ve seen Intel DRAMs but I don’t know if I’ve ever owned any).

This news is a little bit surprising, but hardly shocking. IBM’s making tons of money selling software and services, they’re not making money selling hard drives, and they’ve got a new CEO and nervous investors. This is a way for them to hedge their bets.

And now you can expect them to get more aggressive about marketing their technologies to other drive manufacturers as well. Seagate, Maxtor, Western Digital, Fujitsu and Samsung have just changed from competitors into potential customers. Expect disk performance to increase and prices to continue to decrease as a result.

How to gauge hard drive reliability. This isn’t exactly news but it seems very relevant. Professional writers don’t see a lot of drives. They can recommend based on their own experience, but their recent experience is going to be limited to a few dozen drives. Message boards are very hit and miss. You have no way of knowing whether it’s a book author hiding behind that handle or a clueless 12-year-old kid. Find an experienced technician who’s still practicing as a technician (I’m not a very good example; at this stage of my career I no longer deal with large numbers of desktop systems–I deal with a handful of servers and my own desktop machine and that’s it) and ask what hard drives they’ve seen fail. When I was doing desktop support regularly, I could tell you almost the exact number of drives I’d seen fail in the past year, and I could tell you the brands. I’d prefer to talk to someone who fixes computers for a large company rather than a computer store tech (since his employer is in the business of selling things, he’s under pressure to recommend what’s in stock), but I’ll still trust a computer store tech over some anonymous user on Usenet or a message board, as well as over a published author. Myself included.

AMD withdraws from the consumer market! AMD mentioned in a conference call yesterday that it plans to discontinue the Duron processor line this year. It makes sense. Fab 25 in Austin is being re-tooled to make flash memory, leaving the Duron without a home. But beyond that, AMD’s new 64-bit Hammer chip is going to hit the market later this year. So they can sell a slightly crippled K7 core as their low-end chip, or they can make their high-end K7 core into the low-end chip and sell the Hammer as a high-end chip. The latter strategy makes more sense. Clock for clock, the Athlon is still a better chip than the P4. Hammer scales better and performs better. So AMD can pit the Athlon against the Celeron and give P4 performance at a Celeron price, and the Hammer against the P4, which will give P4 clock rates and deliver better performance for 32-bit apps, along with a 64-bit future. There’s not much room in that strategy for the Duron. AMD would rather cede the $35 CPU business to VIA.

Look for the Hammer to gain widespread use in the Linux server market, especially among smaller companies. The Athlon already has an audience there (in spite of some pundits calling AMD-based systems “toys,” you see far more ads for AMD-based servers in Linux Journal than you see for Intel boxes), but the Hammer will become the poor man’s Alpha.

Linux Performance Tuning

I found a very superficial Linux Journal article on performance tuning linked from LinuxToday this week. I read the article because I’m a performance junkie and I hoped I might find something I hadn’t heard before.
The article recommended a kernel recompile, which many people don’t consider critical anymore. It’s still something I do, especially on laptops, since a kernel tuned to a machine’s particular hardware boots up faster, often much faster. The memory you save by compiling your own kernel isn’t huge (it mattered much more back when a typical computer had 8 MB of RAM), but since Linux’s memory management is good, I like to give it as much to work with as possible. Plus, I’m of the belief that a simple system is a more secure system. The probability of a remote root exploit through the parallel port driver is so low as to be laughable, but when my boss’ boss’ boss walks into my cube and asks me if I’ve closed all possible doors that are practical to close, I want to be able to look him in the eye and say yes.
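If you’re wondering where to start trimming, the list of modules the running kernel has actually loaded makes a decent checklist. Here’s a quick sketch of my own (not something from the article) that just reads /proc/modules, the same data lsmod shows:

```python
#!/usr/bin/env python3
"""List loaded kernel modules, largest first.

Reads /proc/modules (the same information lsmod prints) as a rough
checklist of which drivers a machine really uses before you pare
down a custom kernel configuration. My own sketch, not taken from
the Linux Journal article discussed above.
"""


def loaded_modules(path="/proc/modules"):
    """Yield (name, size_in_bytes, use_count) for each loaded module."""
    with open(path) as modules:
        for line in modules:
            fields = line.split()
            # The first three fields are name, size, and use count;
            # the count can be "-" on kernels without module unloading.
            use_count = int(fields[2]) if fields[2].isdigit() else 0
            yield fields[0], int(fields[1]), use_count


if __name__ == "__main__":
    for name, size, used in sorted(loaded_modules(), key=lambda m: -m[1]):
        print(f"{name:<24} {size:>8} bytes  use count {used}")
```

Anything that never shows up here, or in dmesg, is a candidate for leaving out of the next kernel build.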

The same goes for virtual consoles. If a system runs X most of the time, it doesn’t need more than about three consoles. A server needs at most three consoles, since the only time the sysadmin will be sitting at the keyboard is likely to be during setup. The memory savings aren’t always substantial, depending on which getty the system runs. But since Linux manages available memory well, why not give it everything you can to work with?
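On a sysvinit system of that vintage, the consoles are just the getty lines in /etc/inittab: comment out the extras and run telinit q so init rereads the file. Here’s a small sketch to count how many your machine spawns, assuming the classic inittab format (a box that doesn’t use inittab won’t have anything for it to read):

```python
#!/usr/bin/env python3
"""Count the virtual consoles a sysvinit system spawns.

Assumes the classic /etc/inittab format, where each console is a
line such as "3:2345:respawn:/sbin/getty 38400 tty3". Trimming
consoles means commenting out the extra lines and running
"telinit q" so init rereads the file.
"""


def getty_entries(path="/etc/inittab"):
    """Return the active (uncommented) getty lines from inittab."""
    entries = []
    with open(path) as inittab:
        for line in inittab:
            line = line.strip()
            if line and not line.startswith("#") and "getty" in line:
                entries.append(line)
    return entries


if __name__ == "__main__":
    consoles = getty_entries()
    print(f"{len(consoles)} virtual console(s) configured:")
    for entry in consoles:
        print(" ", entry)
```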

The best advice the article gave was to look at alternative window managers besides the ubiquitous KDE and Gnome. I’ve found that the best thing I’ve ever done from a performance standpoint was to switch to IceWM. KDE and Gnome binaries will still run as long as the libraries are present. But since KDE and Gnome seem to suffer from the same feature bloat that has turned Windows XP and Mac OS X into slow pigs, using another window manager speeds things along nicely, even on high-powered machines.
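If you want numbers rather than a feeling, it’s easy enough to add up the resident memory of the desktop’s processes. A rough sketch; the process names below are only examples, so substitute whatever your window manager and its helpers are actually called:

```python
#!/usr/bin/env python3
"""Rough resident-memory footprint of a desktop's processes.

Walks /proc and sums VmRSS for every process whose name matches one
of the given patterns. The process names below are just examples;
substitute whatever your window manager and its helpers are called.
"""
import os


def resident_kb(patterns):
    """Total VmRSS (kB) of processes whose name contains any pattern."""
    total = 0
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/status") as status_file:
                lines = status_file.read().splitlines()
        except OSError:
            continue  # the process exited while we were looking
        name = lines[0].split(":", 1)[1].strip()
        if not any(pattern in name for pattern in patterns):
            continue
        for line in lines:
            if line.startswith("VmRSS:"):
                total += int(line.split()[1])  # value is reported in kB
    return total


if __name__ == "__main__":
    # Example process-name patterns; adjust for your own desktop.
    for label, names in [("IceWM", ["icewm"]),
                         ("KDE", ["kdeinit", "kwin", "kdesktop"])]:
        print(f"{label}: roughly {resident_kb(names)} kB resident")
```

Summing RSS double-counts shared libraries, so treat the totals as rough ceilings rather than exact figures.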

I take issue with one piece of advice in the article. Partitioning, when done well, reduces fragmentation, improves reliability, and allows you to tune each filesystem for its specific needs. For example, if you have a separate partition for /usr or /bin, which hold executable files, large block sizes (the equivalent of cluster sizes in Windows) will improve performance. But for /home, you’ll want small block sizes for efficiency.
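To put rough numbers on that tradeoff, here’s a little slack-space arithmetic. The file counts and sizes are invented purely for illustration; on ext2 you pick the block size (1024, 2048, or 4096 bytes) at creation time with mke2fs -b, and you can’t change it afterward without rebuilding the filesystem.

```python
#!/usr/bin/env python3
"""Illustrate the block-size tradeoff with slack-space arithmetic.

Every file wastes whatever is left of its final block, so a
filesystem full of small files wastes far more space with
4096-byte blocks than with 1024-byte blocks. The file counts and
sizes below are invented purely for illustration.
"""


def slack_bytes(file_sizes, block_size):
    """Bytes lost to partially filled final blocks."""
    waste = 0
    for size in file_sizes:
        remainder = size % block_size
        if remainder:
            waste += block_size - remainder
    return waste


if __name__ == "__main__":
    small_files = [700] * 50_000       # e.g. a /home full of mail and dotfiles
    large_files = [1_500_000] * 2_000  # e.g. a /usr full of binaries and libraries
    for block in (1024, 4096):
        print(f"{block}-byte blocks: "
              f"small files waste {slack_bytes(small_files, block) // 1024} kB, "
              f"large files waste {slack_bytes(large_files, block) // 1024} kB")
```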

The problem is that kernel I/O is done sequentially. If a task requires reading from /usr, then /home, then back to /usr, the disk will move around a lot. A SCSI disk will reorder the requests and execute them in optimal order, but an IDE disk will not. So partitioning IDE disks can actually slow things down. Generally, with an IDE disk, I’ll make the first partition a small /boot partition so I’m guaranteed not to have BIOS issues with booting. This partition can be as small as 5 megs since it only has to hold a kernel and configuration files; I usually make it 20 so it can hold several kernels. I can pay for 20 megs of disk space these days with the change under my couch cushions. Next, I’ll make a swap partition. Size varies; Linus Torvalds himself uses a gig. For people who don’t spend the bulk of their time in software development, 256-512 megs should be plenty. Then I make one big root partition out of the rest.
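To make the scheme concrete, here’s a trivial sketch that prints that layout for a given disk. The 40 GB disk size and the /dev/hda partition names are just examples; the partition sizes are the ones from the paragraph above.

```python
#!/usr/bin/env python3
"""Print the simple IDE partition scheme described above.

Sizes follow the text: a roughly 20 MB /boot, a 512 MB swap (the
high end of the 256-512 MB suggestion), and one big root partition
for the rest. The 40 GB disk and /dev/hda names are just examples.
"""


def layout(disk_mb, boot_mb=20, swap_mb=512):
    """Return (mount point, size in MB) tuples for the scheme."""
    return [("/boot", boot_mb),
            ("swap", swap_mb),
            ("/", disk_mb - boot_mb - swap_mb)]


if __name__ == "__main__":
    disk_mb = 40 * 1024  # an example 40 GB drive
    for number, (mount, size) in enumerate(layout(disk_mb), start=1):
        print(f"/dev/hda{number}  {mount:<6} {size:>7} MB")
```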

With a multi-drive system, /home should be on a separate disk from the rest. That way, if a drive fails, you’ve halved your recovery time: you’ll only have to either reinstall the OS on a replacement drive or restore your data from backup onto a replacement drive. Ideally, swap should be on a separate disk from the binaries (it can be on the same disk as /home unless you deal with huge data files). The reason should be obvious: if the system is going to use swap, it will probably be while it’s loading binaries.

Still, I’m very glad I read this article. Buried in the comments for this article, I found a gem of a link I’ve never seen referenced anywhere else before: Linux Performance Tuning. This site attempts to gather all the important information about tuning Linux to specific tasks. The pros know a lot of this stuff, but this is the first time I’ve seen this much information gathered in one place. If you build Linux servers, bookmark that page. You’ll find yourself referring back to it frequently. Contributors to the site include kernel hackers Rik van Riel and Dave Jones.