I found a very superficial Linux Journal article on performance tuning linked from LinuxToday this week. I read the article because I’m a performance junkie and I hoped to maybe find something I hadn’t heard before.
The article recommended a kernel recompile, which many people don't consider critical anymore. It's still something I do, especially on laptops, since a kernel tuned to a machine's particular hardware boots up faster, often much faster. The memory you save by compiling your own kernel isn't huge, and it mattered much more back when a typical computer had 8 MB of RAM, but since Linux's memory management is good, I like to give it as much to work with as possible. Plus, I'm of the belief that a simple system is a more secure system. The probability of a remote root exploit through the parallel port driver is so low as to be laughable, but when my boss's boss's boss walks into my cube and asks me if I've closed all possible doors that are practical to close, I want to be able to look him in the eye and say yes.
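A trimmed-down kernel is mostly a matter of saying no in make menuconfig. As a sketch, here's what a few lines of the resulting .config might look like; the option names below are illustrative examples from 2.4-era kernels and vary between versions and machines:

```
# Drivers for hardware this machine doesn't have, switched off:
# CONFIG_PARPORT is not set
# CONFIG_ISDN is not set
# CONFIG_IRDA is not set

# The one network driver the machine actually needs, built in:
CONFIG_NET_ETHERNET=y
CONFIG_EEPRO100=y
```

Everything you leave out is a driver that never loads, never takes memory, and never has to be answered for.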
The same goes for virtual consoles. If a system runs X most of the time, it doesn't need more than about three consoles. A server needs at most three as well, since the only time the sysadmin is likely to be sitting at its keyboard is during setup. The memory savings aren't always substantial, depending on what version of getty the system runs. But since Linux manages available memory well, why not give it everything you can to work with?
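On most distributions of this era, the consoles are spawned from /etc/inittab, and commenting out the extra lines is all it takes. A sketch, assuming mingetty; your distribution may use agetty or another getty variant, with different arguments:

```
# /etc/inittab fragment: keep three consoles, comment out the rest
1:2345:respawn:/sbin/mingetty tty1
2:2345:respawn:/sbin/mingetty tty2
3:2345:respawn:/sbin/mingetty tty3
#4:2345:respawn:/sbin/mingetty tty4
#5:2345:respawn:/sbin/mingetty tty5
#6:2345:respawn:/sbin/mingetty tty6
```

Run telinit q afterward so init rereads the file.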
The best advice the article gave was to look at alternative window managers besides the ubiquitous KDE and Gnome. I've found the best thing I've ever done from a performance standpoint was to switch to IceWM. KDE and Gnome binaries will still run as long as the libraries are present. But since KDE and Gnome seem to suffer from the same feature bloat that has turned Windows XP and Mac OS X into slow pigs, using another window manager speeds things along nicely, even on high-powered machines.
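Switching is painless. The simplest route is a one-line ~/.xinitrc (or ~/.xsession, depending on how you start X); this sketch assumes IceWM is installed and on your PATH:

```
# ~/.xinitrc: start IceWM instead of a full desktop environment
exec icewm
```

KDE and Gnome apps launched from IceWM still pull in their libraries, but you skip the cost of running the full desktop underneath them.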
I take issue with one piece of advice in the article. Partitioning, when done well, reduces fragmentation, improves reliability, and allows you to tune each filesystem for its specific needs. For example, on a separate partition for /usr or /bin, which hold executable files, large block sizes (the equivalent of cluster sizes in Windows) improve performance. But for /home, you'll want small block sizes, so the many small files there don't waste space.
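Block size is fixed when you create the filesystem, not after, so this is a decision to make at mke2fs time. A sketch of the commands, with example device names:

```shell
# Block size is chosen with -b at filesystem creation time.
# /dev/hda5 and /dev/hda6 are example device names.
mke2fs -b 4096 /dev/hda5   # /usr: mostly large executables and libraries
mke2fs -b 1024 /dev/hda6   # /home: many small files, less wasted slack
```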
The problem is that kernel I/O is done sequentially. If a task requires reading from /usr, then /home, then back to /usr, the disk will move around a lot. A SCSI disk will reorder the requests and execute them in optimal order, but an IDE disk will not. So partitioning IDE disks can actually slow things down. Generally, with an IDE disk, I'll make the first partition a small /boot partition so I'm guaranteed not to have BIOS issues with booting. This partition can be as small as 5 megs, since it only has to hold a kernel and configuration files; I usually make it 20 so I can hold several kernels. I can pay for 20 megs of disk space these days with the change under my couch cushions. Next, I'll make a swap partition. Size varies; Linus Torvalds himself uses a gig, but for people who don't spend the bulk of their time in software development, 256-512 megs should be plenty. Then I make one big root partition out of the rest.
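So on a single-disk IDE box, the layout I end up with looks something like this (device names and sizes are illustrative):

```
/dev/hda1    20 MB         /boot   # room for several kernels
/dev/hda2   256 MB         swap
/dev/hda3   rest of disk   /
```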
With a multi-drive system, /home should be on a separate disk from the rest. That way, if a drive fails, you've halved your recovery time: you'll only have to either reinstall the OS on a replacement drive or restore your data from backups onto one, not both. Ideally, swap should also be on a separate disk from the binaries (it can share a disk with /home unless you deal with huge data files). The reason should be obvious: if the system is going to use swap, it will probably be while it's loading binaries.
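Put together, the /etc/fstab for a two-drive box might look like this sketch (device names and filesystem types are examples):

```
# System on the first disk; swap and /home on the second, so swap
# activity doesn't compete with loading binaries.
/dev/hda1   /boot   ext2   defaults   1 2
/dev/hda2   /       ext2   defaults   1 1
/dev/hdc1   swap    swap   defaults   0 0
/dev/hdc2   /home   ext2   defaults   1 2
```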
Still, I'm very glad I read this article. Buried in the comments, I found a gem of a link I'd never seen referenced anywhere else before: Linux Performance Tuning. This site attempts to gather all the important information about tuning Linux to specific tasks. The pros know a lot of this stuff, but this is the first time I've seen this much information gathered in one place. If you build Linux servers, bookmark that page. You'll find yourself referring back to it frequently. Contributors to the site include kernel hackers Rik van Riel and Dave Jones.