More on tiny but potentially modern Linux distributions

I found a couple of interesting things on Freshmeat today.
First, there’s a Linux-bootfloppy-from-scratch hint, in the spirit of Linux From Scratch, but using uClibc and Busybox in place of the full-sized standard GNU userspace. This is great for low-memory, low-horsepower machines like 386s and 486s.
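
If you’re curious what the userspace half of that hint boils down to, it’s mostly building BusyBox (plus a libc like uClibc) statically and copying the result onto the floppy. A rough sketch with a recent BusyBox tree; the exact make targets and paths have shifted over the years, so treat the version number and install path below as placeholders:

tar xjf busybox-x.y.z.tar.bz2
cd busybox-x.y.z
make defconfig                            # or menuconfig, to pick applets and enable the static-build option
make
make CONFIG_PREFIX=/mnt/floppy install    # older trees used PREFIX= instead; installs busybox plus symlinks like /bin/sh and /sbin/init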

I would think it would provide a basis for building small Linux distributions using other tools as well.

What other tools? Well, there’s skarnet.org, which provides bunches of small tools. The memory usage on skarnet’s web server, not counting the kernel, is 2.8 megs.

Skarnet’s work builds on that of Fefe, who provides dietlibc (yet another tiny libc) and a large number of small userspace tools. (These tools provide most of the basis for DietLinux, which I haven’t been able to figure out how to install, sadly. Some weekend I’ll sign up for the mailing list and give it another go.)

And then there’s always asmutils, which is a set of tools written in pure x86 assembly language and doesn’t use a libc at all, and the e3 text editor, a 12K beauty that can use the keybindings for almost every popular editor, including two editors that incite people into religious wars.

These toolkits largely duplicate one another but not completely, so they could be complementary.

If you want to get really sick, you can try matching this kind of stuff up with Linux-Lite v1.00, which is a set of patches to the Linux 1.0.9 kernel dating back to 1998 or so to make it recognize things like ELF binaries. And there was another update in 2002 that lists fixes for the GCC 2.7.2 compiler in its changelog. I don’t know how these two projects were related, if at all, besides their common ancestry.

Or you could try using a 1.2 kernel. Of course compiling those kernels with a modern compiler could also be an issue. I’m intrigued by the possibility of a kernel that could itself use less than a meg, but I don’t know if I want to experiment that much.

And I’m trying to figure out my fascination with this stuff. Maybe it’s because I don’t like to see old equipment go to waste.

A text-mode download manager for Linux/Unix

Way back when, I used to use a program in Windows called Gozilla to speed up my downloads. The problem was that Gozilla was invasive and carried a spyware payload. Competing programs emerged, but it seemed like the biggest added feature was always more spyware. So I gave up on HTTP download accelerators.


DietLinux — a Linux that boots in under 10 seconds

The tinkerer in me just couldn’t stay away. I saw a reference on Linux Weekly News to DietLinux and had to look at it.
DietLinux is an example of a Linux distribution that can’t properly be called GNU/Linux, because the majority of its userspace didn’t come from the GNU project. GNU’s libc–the main API for Unixish systems, and I’ll call Linux a Unix just to hack off SCO–is replaced with an alternative, trimmed-down libc called dietlibc. It’s not feature-complete, but it’s tiny.

Those of you who programmed casually in the 1980s and 1990s probably remember a day when you could write a fairly sophisticated program in a few kilobytes. Under modern operating systems, a program that does nothing more than print “Hello, world!” can take up 32K or more. Using dietlibc instead of GNU’s libc shrinks that program back down to a couple of kilobytes.
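
dietlibc ships a wrapper called diet that points your compiler at its headers and libraries, so seeing the difference for yourself takes about a minute. A quick sketch, assuming gcc and dietlibc are installed; the exact sizes vary by version, but the gap is dramatic:

cat > hello.c << 'EOF'
#include <stdio.h>
int main(void) {
    puts("Hello, world!");
    return 0;
}
EOF
gcc -Os -static -o hello-glibc hello.c      # statically linked against glibc
diet gcc -Os -static -o hello-diet hello.c  # same source, linked against dietlibc
ls -l hello-glibc hello-diet                # compare the sizes of the two binaries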

The majority of DietLinux’s userspace comes from Felix von Leitner, the author of dietlibc. Von Leitner reimplemented init–the program that bootstraps a Unix system once the kernel is loaded–and getty, which is the program that handles text-based logins. These unglamorous programs can eat up a fair chunk of memory, and since Unix systems typically go for long periods of time without being rebooted, it’s a bit of a waste unless you need certain features provided by the more traditional init and getty programs. He also wrote replacements for several standard utilities.
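
If you’re curious how much the stock versions cost you on a given box, procps will tell you. A quick check (the process names vary by distribution, so adjust the list; RSS is resident memory in kilobytes):

ps -C init,getty,agetty,mingetty -o pid,rss,vsz,args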

Obviously, not every program in the world designed for glibc will compile and run under dietlibc, so DietLinux won’t ever be a complete general-purpose distribution. But for network infrastructure glue-type servers providing services like firewalling, DNS and DHCP (all of which already function), it would be perfect.

I don’t know what the future plans for DietLinux are. The asmutils provide an impressive number of userspace and server utilities, written in assembly language with very low overhead, and would appear to be a nice complement to DietLinux’s infrastructure. Their use would limit DietLinux to x86, however. And the text editor e3 is tiny, full-featured, and emulates keybindings for vi, emacs, WordStar, and Pico, so it’s friendly to pretty much any command-line jockey regardless of heritage and takes little space.

It’s also not a newbie distribution. Installation requires a fair bit of skill and pretty much requires an existing Linux system to bootstrap it.

But it’s definitely something I want to keep an eye on. I’m highly tempted to put it on one of my 486s. I just wish I had more time to mess around with it.

Creating images of floppy diskettes with Linux or DOS

If you want to archive floppies–a good idea, since a floppy disk can sit unused for months and go bad between the time you make it and the time you really need it, and since it’s hard to shuffle through a collection of hundreds of disks to find the one you need–Linux is an ideal environment for it. To create a disk image, use the following command:
dd if=/dev/fd0 of=filename bs=18k

The if parameter tells it the input device or file (the floppy drive, in this case), and the of parameter tells it the output device or filename. The bs parameter is block size. Most people use a block size of 512, since that’s the size of a disk sector, but it’s slightly faster to read a full cylinder at a time; on a 1.44 MB disk, that’s 18 sectors of 512 bytes on each of the two sides, or 18K. The speed increase is only slight, but I thought you might like to know. Floppies are already slow enough as it is. I’ve also heard allegations that reading and writing a whole cylinder at once is more reliable, but I can’t substantiate those claims.
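
One thing I’d add: before you file the disk away, it doesn’t hurt to verify that the image matches what’s actually on the disk. A quick sketch:

dd if=/dev/fd0 bs=18k | cmp - filename    # re-read the disk and compare it byte-for-byte against the image
md5sum filename                           # record a checksum so you can spot bit rot in the image later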

To write out a disk image, simply reverse the if and of parameters:

dd if=filename of=/dev/fd0 bs=18k

Disk images in this format are portable; any Unix can rewrite them to disk, as can the DOS/Windows utility rawrite, which you’ll find on virtually every Linux installation CD. Most other popular disk-imaging programs for DOS and Windows can handle this file format as well.
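
And since the image is just a raw sector dump, Linux can mount it directly through the loopback device, which is handy when you only need one file off an archived disk. Assuming a FAT-formatted floppy and an empty mount point:

mkdir -p /mnt/floppyimage
mount -o loop,ro -t vfat filename /mnt/floppyimage   # mount the image read-only via the loop device
ls /mnt/floppyimage                                  # browse the archived files
umount /mnt/floppyimage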

If you want an equivalent DOS/Windows command-line program to create dd/rawrite-compatible disk images, check out fimage. You can even make those images self-extracting executables with sfx144, if you wish.

SCO stoops to RIAA tactics

SCO is now threatening legal action against corporations that use Linux, claiming it infringes on SCO’s intellectual property, even though SCO hasn’t revealed the allegedly infringing code yet. Since SCO sells a Linux distribution of its own, I guess they need to start by suing themselves.

If I had my own Linux distribution

I found an interesting editorial called If I had my own Linux Distro. He’s got some good ideas but I wish he’d known what he was talking about on some others.
He says it should be based on FreeBSD because it boots faster than Linux. I thought everyone knew that Unix boot time has very little to do with the kernel? A kernel will boot more slowly if it’s trying to detect too much hardware, but the big factor in boot time is init, not the kernel. BSD’s init is much faster than SysV-style init. Linux distros that use BSD-style inits (Slackware and, optionally, Debian, and, as far as I understand, Gentoo) boot much faster than systems that use a traditional System V-style init. I recently converted a Debian box to use runit, and the decrease in boot time and the increase in available memory at boot were noticeable. Unfortunately, now the system doesn’t shut down properly. But it proves the concept.
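
For the curious, part of what makes runit quick and small is its layout: every service is just a directory containing an executable run script, and runit supervises whatever that script execs. A minimal sketch of a getty service; the scan directory varies (Debian’s runit packages use /etc/service, other setups use /var/service), and tty3 here is just an example:

#!/bin/sh
# /etc/sv/getty-tty3/run: a supervised getty on virtual console 3
exec /sbin/getty 38400 tty3

Make the script executable and symlink the directory into the scan directory (for example, ln -s /etc/sv/getty-tty3 /etc/service/), and runsvdir picks it up and keeps it running.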

He talks about installing every possible library to eliminate dependency problems. Better idea: Scrap RPM and use apt (like Debian and its derivatives) or a ports-style system like Gentoo. The only time I’ve seen dependency issues crop up in Debian was on a system that had an out of date glibc installed, in which case you solve the issue by either keeping the distribution up to date, or updating glibc prior to installing the package that fails. These problems are exceedingly rare, by the way. In systems like Gentoo, they don’t happen because the installation script downloads and compiles everything necessary.

Debian’s and Gentoo’s solution is far more elegant than his proposal: Installing everything possible isn’t going to solve your issue when glibc is the problem. Blindly replacing glibc was a problem in the past. The problems that caused it are hopefully solved now, but they’re beyond the control of any single distribution, and given the choice between having a new install stomp on glibc and break something old or getting an error message, I’ll take the error message. Especially since I can clear the issue with an apt-get install libc6. (Then when an old application breaks, it’s my fault, not the operating system’s.)
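
For the record, the fix is only a couple of commands on a Debian-style system, where libc6 is the package that actually carries glibc; the last package name below is just a stand-in for whatever was failing:

apt-get update                  # refresh the package lists
apt-get install libc6           # bring glibc up to date first
apt-get install some-package    # then retry the package that was complaining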

In all fairness, dependency issues crop up in Windows all the time: When people talk about DLL Hell, they’re talking about dependency problems. It’s a different name for the same problem. On Macintoshes, the equivalent problem was extension conflicts. For some reason, people hold Linux to a standard they don’t hold Windows and Macs to. People complain, but when was the last time you heard someone say Windows or Mac OS wasn’t ready for the desktop, or the server room, or the enterprise, or your widowed great aunt?

He also talks about not worrying about bloat. I take issue with that. When it’s possible to make a graphical Linux distribution that fits on a handful of floppies, there’s no reason not to make a system smooth and fast. That means you do a lot of things. Compile for an advanced architecture and use the -O3 option. Use an advanced compiler like GCC 3.2 or Intel’s ICC 7.0 while you’re at it. Prelink the binaries. Use a fast-booting init and a high-performance system logger. Mount filesystems with the highest-performing options by default. Partition off /var and /tmp so those directories don’t fragment the rest of your filesystem. Linux can outperform other operating systems on like hardware, so it should.
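
A couple of those items amount to nothing more than sensible defaults in /etc/fstab. A rough sketch; the devices, filesystem type, and partition sizes obviously depend on the machine:

# /etc/fstab fragment: separate /var and /tmp, and skip access-time updates for speed
/dev/hda2   /      ext3   defaults,noatime   0 1
/dev/hda3   /var   ext3   defaults,noatime   0 2
/dev/hda4   /tmp   ext3   defaults,noatime   0 2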

But when you do those things, then it necessarily follows that people are going to want to run your distribution on marginal hardware, and you can’t count on marginal hardware having a 20-gig hard drive. It’s possible to give people the basic utilities, XFree86, a reasonably slick window manager or environment, and the apps everyone wants (word processing, e-mail, personal finance, a web browser, instant messaging, a media player, a graphics viewer, a few card games, and–I’ll say it–file sharing) in a few hundred megabytes. So why not give it to them?

I guess all of this brings up the nicest thing about Linux. All the source code to anything desirable and all the tools are out there, so a person with vision can take them and build the ultimate distribution.

Yes, the idea is tempting.

Linux gets more attractive on the Xbox

There’s been another milestone in getting Linux running on Microsoft’s Xbox game console. It’s now possible to get it going: bridge a couple of solder points on the motherboard to enable flashing the unit’s BIOS, then use the James Bond 007 game and a save game that exploits a buffer overflow. With a few more tricks, you can unlock the hard drive, put it in a Linux PC, install Linux, then move the drive back to the Xbox and turn it into a cheap Linux box.


What needs to happen for Linux to make it on the desktop

I saw an editorial at Freshmeat that argued that there’s actually too much software for Linux. And you know what? It has a point.
I’m sure some people will be taken aback by that. The number of titles that run under Windows must be well into six digits, and it’s hard to walk into a computer store and buy Linux software.

But I agree with his argument, or at least most of it. Back in my Amiga days, the first thing people used to ask me was, “What, do you not like software?” Then I asked why they felt the need to have their choice of 10 different word processors, especially when they’d just pirate Microsoft Word or WordPerfect anyway. (Let’s face it: One reason large numbers of people chose PCs in the early 90s over superior architectures was that they could pirate software from work. Not everyone. Maybe not even the majority. But a lot.) I argued that one competent software title in each category I needed was all I wanted or needed. And for the most part, the Amiga had that, and the software was usually cheaper than the Mac or PC equivalent.

Linux is the new Amiga. Mozilla is a far better Web browser than IE, and OpenOffice provides most of the functionality of Microsoft Office XP–more functionality than most people use, in fact. It doesn’t always load the most complex MS Office documents correctly, but it does a much better job of opening slightly corrupt documents, and most people don’t create very complex documents anyway. But let’s face it: Its biggest problem is that it takes an eternity to load no matter how fast your computer is. If it would load faster, people would be very happy with it.

But there is nothing that provides an equivalent to a simple database like Access or FileMaker. I know, they’re toys, and MySQL is far more powerful. But end users like dumb, brain-dead databases with clicky GUI interfaces on them that they can migrate to once they realize a spreadsheet isn’t intended to do what they’re trying to do with it. Everyone’s first database is Excel. Then someday they realize Excel wasn’t intended to do what they’re using it for. But you don’t instantly dive into Oracle. You need something in between, and Linux doesn’t really have anything for that niche.

People are constantly asking me about a WYSIWYG HTML editor for Linux as well. I stumbled across one. Its name is GINF. Yes, another stupid recursive-acronym name. GINF stands for “GINF is not Frontpage.” How helpful. What’s wrong with a descriptive name like Webpage-edit?

More importantly, what was the first non-game application that caught your fancy? For most people I know, it was Print Shop, or one of the many knockoffs of Print Shop. People love to give and receive greeting cards, and when they can pick their own fonts and graphics and write their own messages, they love it even more. Not having to drive to the store and fork over $3.95 is just a bonus. Most IT professionals have no use for Print Shop, but Linux’s lack of alternatives in that department is hurting it.

Take a computer with a CPU on the brink of obsolescence, a so-so video chipset, 128 megs of RAM and the smallest hard drive on the market, preload Linux on it along with a fast word processor that works (AbiWord, or OpenOffice Writer, except it’s not fast), a nice e-mail client/PIM (Evolution), a nice Web browser (Mozilla), a Print Shop equivalent (bzzzt!), and a couple of card games (check Freshmeat), and you’d have a computer for the masses.

The masses do not need 385 text editors. Sysadmin types will war over vi and emacs until the end of time; one or two simple text-mode editors as alternatives, plus one or two equivalents of Notepad for X, will suffice.

Linux’s day will eventually arrive regardless, if only because Microsoft is learning what every monopolist eventually learns: Predatory pricing stops working once you corner the market. Then you have to raise prices or find new markets. Eventually you run out of worthwhile markets. So in order to sustain growth, you have to raise prices. Microsoft is running out of markets, so it’s going to have to raise prices. Then it will be vulnerable again, just like Apple and CP/M were vulnerable to Microsoft because their offerings cost more than Microsoft was willing to charge. And, as Microsoft showed Netscape, you can’t undercut free.

But that day will arrive sooner if it doesn’t take a week to figure out the name of the Linux equivalent of Notepad because there are 385 icons that vaguely resemble a notepad and most of them have meaningless names.

The Abit BP6 and modern Linux distributions

Mail from Dave T.: I bumped into a place that is selling a used, functional Abit BP6 and a 400MHz Celeron to go with it. I already got another 400 MHz Celeron so it would be perfect. I always wanted to try out SMP but so far I haven't thought it was worth it. Now I can buy this combo and make my dream come true 🙂
I looked for reviews on the board but most of them were from 1999 and early 2000, when Linux was using kernel 2.2 and there also seemed to be problems with bios on the BP6 causing stability issues. None of the reviews were recent.

Being a long time reader I remembered you talking about owning a BP6 and a quick search confirmed that you were running a dual 500MHz BP6. Do you still have it? If I buy the board then I'll be running Linux of course so I was wondering if you do that as well? How well does it work? Stability? I know that processors in a dual configuration should have identical stepping. If the two are not the same stepping, do you think it will pose a problem? What power supply rating would you recommend for 2x400MHz Celerons?

Thanks,

/Dave T.

The Abit BP6, for those who are unfamiliar with it, was a popular board among enthusiasts back at the turn of the millennium, because it was the first really cheap and easy SMP board. Prior to the BP6, to run dual Celerons you had to resort to some trickery, either soldering on slocket-type adapters or, later, playing with jumpers on them. The BP6 just allowed you to buy a pair of cheap Socket 370 Celerons and drop them in. A lot of people bought Celeron-366s and overclocked them to 550 MHz with this board.

It’s been forever since I’ve mentioned my BP6 because I’ve never found it newsworthy. My main Linux workstation runs on an Abit BP6 with dual Celeron-500s (originally a pair of 366s, which I upgraded a couple of years ago). I bought the board in late 1999 or early 2000 and it’s still my second-fastest PC.

I run Debian Unstable on it, running updates every month or two, so I’m running bleeding-edge everything on it most of the time. The kernel is either at 2.4.19 or 2.4.20. I’ve been running 2.4-series kernels on the BP6 pretty much since the 2.4 series came out, although I’ve changed distributions several times since then. The board has an Intel 440BX chipset, which used to be common as dirt, so I expect even 2.6 kernels and beyond won’t have problems with it.

I haven’t updated the BIOS on my BP6 in years, if ever. I’ve found the system to be stable–the only problems I’ve ever had could easily be attributed to memory leaks. Things would get goofy, I’d run top, and I’d find XFree86 had several hundred megs of memory allocated to it. I’d kill X, and then the system would be fine. So the rare problems I have probably aren’t the board’s fault, but rather the fault of bleeding-edge software. I was confident enough in the system’s stability that this Web site ran on that system for several weeks and I never had problems.
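
For what it’s worth, the diagnosis takes about ten seconds with procps, and the cure is just restarting X; the process at the top of the list was usually the X server:

ps aux --sort=-rss | head -5   # list the processes using the most resident memory
# logging out of X, or hitting Ctrl-Alt-Backspace at the console to zap the server, frees the leaked memory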

CPUs are supposed to be identical stepping. I’ve seen dual-CPU machines with different steppings work together without having any problems that I could directly attribute to the mismatch. It’s not a great idea and I wouldn’t run my enterprise on a mismatched system–although one of my clients does–but for hobbyist use at home at a bargain price, why not?

As far as power supplies, I ran my BP6 with dual 500s on a 235W box in an emergency. It’s had a 300W box in it for most of its life, so I’d go with a 300W unit, or a 350W unit if you want to overengineer the box a little bit.

Performance-wise, I find it adequate, but I run IceWM on it, and my primary browser is Galeon. Evolution runs fine on it. Some of the more resource-intensive desktop environments might pose a bit of a problem.

As far as upgradability, if you don’t overclock, the fastest Celerons you can use are Celeron-533s. If you want to do dual processing, you’re limited to the Mendocino-core Celerons. Celerons faster than 366 MHz didn’t overclock well; the limit of the Mendocino core seems to have been around 550 MHz or so.

Adapters to allow newer Celerons to work on the board ought to let you go higher (I haven’t tried it) but the newer Celerons have their SMP capability removed. So theoretically this board tops out at a 1.2 GHz Celeron with an adapter, but that pretty much defeats the purpose of getting a BP6. That’s also probably why they’re cheap when you can find them; the kinds of people who bought these boards in the first place aren’t going to be too happy with two CPUs in the 500 MHz range these days.

But I’m pretty happy with mine. I’ll run it until it dies, and that’ll probably be a while.

How to determine which device drivers to use in Linux

A good question came up in the comments of one of my past entries: When you can’t get a device to work, how do you determine which kernel module (Unix-speak that roughly translates to “device driver”) to use to get the hardware working?
Linux has a virtual file at /proc/pci that lists every PCI device it finds in the system. So you can just more /proc/pci to page through a system inventory and find out what video card, NIC, motherboard chipset, IDE and/or SCSI controller, and other devices are in the system.
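
Here’s a rough sketch of the whole routine on a 2.4-era system. The module name below is just an example (3c59x drives 3Com 905-series NICs); substitute whatever matches your hardware:

more /proc/pci    # page through the PCI inventory and note the vendor and device names
lspci -v          # the same information, nicely decoded, if the pciutils package is installed
modprobe 3c59x    # load a candidate module once you've identified the hardware
dmesg | tail      # check whether the driver actually claimed the device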

If you’re in the process of installing when you need this information (highly likely), use ALT-F2 to get to a text console–or CTRL-ALT-F2 to get to a text console from a GUI installer–and issue the commands.

To get back to your installer, hit ALT-F1 to get back to a text-based installer, or CTRL-ALT-F7 to get back to a GUI installer. If CTRL-ALT-F7 doesn’t get you back to the GUI, try the other CTRL-ALT-function combinations.