A tiny Linux server distribution? Maybe?


OK, so we’ve been talking about NAS boxes at work. NAS (Network Attached Storage) is a simple server appliance. Plug this thing into the network and you’ve got an instant file server.
Problem is, they’re not that much less expensive than a file server, if at all.

Now, file serving isn’t a particularly CPU-intensive task. Put some decent-speed disks in a box with a simple CPU and some memory, running an embedded operating system, and you’ve got a NAS box, right? Sounds like a perfect job for Linux, right? And you can stuff a minimal Linux into 8 megs of disk space and save the overwhelming majority of your disk space for real work, right?

Well, I asked Charlie if I was completely crazy or not. He didn’t seem to think I was completely nuts. He did ask if I checked to see if anyone’s compiled Samba against uClibc, the alternative libc I was talking about using. I know one person has gotten Samba 2.2.8 to compile against a recent uClibc.
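
For what it’s worth, the build itself should mostly be a matter of pointing Samba’s configure script at the uClibc compiler wrapper. Here’s a rough sketch; the wrapper name (i386-uclibc-gcc) and the install prefix are assumptions that depend on how the toolchain was set up:

    # Rough sketch: building Samba 2.2.8 against uClibc. The compiler
    # wrapper name and prefix are assumptions; adjust for your toolchain.
    cd samba-2.2.8/source
    CC=i386-uclibc-gcc ./configure --prefix=/usr/local/samba
    make
    make install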

And I even found a project that downloads and compiles uClibc, TinyLogin and BusyBox, essentially giving you a complete Linux environment in 600K of disk space, not counting the kernel. And it boots very quickly, even off a floppy. The only problem is that its tools are set up for the ancient Minix filesystem.
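
The trick that makes an environment that small possible is BusyBox’s all-in-one design: one static binary provides the standard utilities, and each command name is just a symlink back to it. A minimal sketch of assembling such a root filesystem, with illustrative paths and an illustrative applet list:

    # Minimal sketch of a BusyBox-style root filesystem; the target
    # directory and the set of applets are illustrative assumptions.
    mkdir -p /mnt/rootfs/bin /mnt/rootfs/etc /mnt/rootfs/proc
    cp busybox /mnt/rootfs/bin/
    for tool in sh init ls cp mv mount umount; do
        ln -s busybox /mnt/rootfs/bin/$tool   # each name invokes the matching applet
    done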

Charlie didn’t think running the enterprise on the Minix filesystem was one of my brighter ideas. Maybe I should be glad he didn’t tell me exactly what he was thinking.

Well, getting the system up and running with JFS or XFS probably won’t be much of a problem. Those filesystems are enterprise-class if anything ever was.
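
Assuming the kernel has XFS support compiled in, formatting and mounting a data disk is a one-liner each; the device name and mount point here are illustrative:

    # Put XFS on a data disk and mount it; /dev/hdb1 and /data are
    # placeholder names.
    mkfs.xfs /dev/hdb1
    mount -t xfs /dev/hdb1 /data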

I had difficulty getting Samba to compile, though. I forget the exact error message I was getting.

I may have to opt for the uClibc-based Linux from Scratch, since it’s being actively maintained. That’ll be a bit more work.

I suspect it’s possible to get this combination of tools to work together, though. I can’t imagine Quantum is running its Snap servers on Red Hat. I’m sure they’re using uClibc and other embedded tools in conjunction with Samba.

The question is how much more time I want to put into it. If indeed I ever get more time to put into it. The surprising thing to me is that nobody else has built and released this.


7 thoughts on “A tiny Linux server distribution? Maybe?”

  • July 17, 2003 at 6:43 am

    I know that Dell uses Windows 2000 for their NAS servers. What a terrible choice of operating system for this type of product…

    /DT

  • July 17, 2003 at 8:24 am

    So does Iomega. The overriding concern with NAS is that it needs to be capable and cheap. A lot of companies are basing them on Windows and using SCSI, and by the time you pay for all that, you could have had a full-blown server.

    I need space for backups and less-critical files. Remember fast, reliable, cheap: pick two? Any modern disk is going to be orders of magnitude faster than my tape drives. I need reliable and cheap. The Iomega units have P4 processors and a half-gig of RAM. Why? The cheapest 32-bit processor on the market right now, whatever it is, can do this job. It doesn’t even have to be x86 if the price is right. So why are people paying for a $200 OS and a $140 CPU so they can put four $100 hard drives on the network?

    I happened across one company whose NAS solution is actually a set of IDE hot-swap cages and a floppy containing a custom OS, probably a Linux, BSD, or QNX derivative running Samba. You provide an off-the-shelf PC, and this disk makes it a NAS.

    The software portion ought to be free. I’m not in love with the idea of putting mission-critical stuff on a beat-up PC. But with entry-level servers selling for under $500, it should be very easy to undercut the price of most of the NAS boxes on the market without compromising on quality.

  • July 17, 2003 at 9:55 am

    Dave, if you are going to store the “enterprise’s” files on a box, you can afford to buy something fairly current, which means it will have enough RAM and enough disk space that you don’t need to mess around with a tiny Linux installation.

    Also, consider that NAS boxes are usually higher quality than a typical PC. We bought four of the Maxtor 4400 (?) NAS boxes about a year ago because they were cheap: $2K for 640GB of drives with a three-year warranty.

    It runs RAID 5, so you have reliability (the boot partition is mirrored rather than part of the array). It takes “snapshots” of the drives so you can access older versions of files. There are no client licensing issues, even though it runs Win2K. It has two 10/100 Ethernet ports and a gigabit port. It’s a 2U rack mount, I believe, with 384MB of RAM and a Pentium at 600MHz or so.

    It’s headless, so you use Windows Terminal Server to access it. It has a SCSI connector to hook up an external SCSI tape drive for backup.

    At the time we couldn’t have built anything equivalent for the hardware costs alone. And anything we could have built wouldn’t have been as reliable.

  • July 17, 2003 at 10:24 pm

    Gary, for real storage we use real servers, as in Compaq ProLiants. RAID 5 (or sometimes mirroring, especially on the OS drive), Ultra160 or Ultra320 SCSI, 10K RPM drives, the works. For a place to dump backups before they go off to tape, maybe we can justify 3 grand for a box that serves up data off four $100 IDE hard drives, and maybe we can’t. There’s not much harm in trying to figure out whether we can just do it ourselves. And maybe somebody else can benefit. I’m doing the search mostly on my own time, off the clock.

    We laid off five very good people last year. Most of our other departments laid off a whole lot more than that. Speeding up our backups is something we can’t afford not to do, but if we can do it without spending three grand per server room then we ought to think about it. That’s three grand we can spend on other things we desperately need, like more gigabit NICs and switches.

    And, yes, if I put 480 gigs of storage on the LAN (assume 300 gigs after RAID), I can afford 100 megs for a Debian install. But I don’t want to end up 90 megs short of something fitting one morning. Not when a 10-meg install, ideally loading off a bootable CD and running out of a ramdisk, could have done the job.
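
    For the curious, the CD-plus-ramdisk arrangement is just a bootloader entry that loads a compressed root image into RAM. Here’s a sketch of an isolinux.cfg, with illustrative file names and an assumed 16MB ramdisk:

        # Sketch of an isolinux.cfg that boots a kernel and unpacks a
        # compressed root filesystem into a ramdisk; names are placeholders.
        DEFAULT linux
        LABEL linux
          KERNEL vmlinuz
          APPEND initrd=rootfs.gz root=/dev/ram0 ramdisk_size=16384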

    That’s just how I am. That’s what drove me to write Optimizing Windows. Like I said, it’s either crazy or genius. And I probably don’t want to know which. Or maybe it’s a mixture of both.

  • July 18, 2003 at 9:41 am

    Dave, I just re-read your original post and you didn’t mention backups anywhere. Indeed, you said “Charlie didn’t think running the enterprise on the Minix filesystem…”, which implies data more “valuable” than backup copies.

    OTOH, what’s more valuable than backups if you lose the data center? I’d hate to have to explain how the backups yesterday didn’t happen because the server hiccuped and then the data center burned last night. But I did save $1K on building the server!

    Yes, I realize hiccups can happen with any server. And I’m not necessarily saying that you should not use Linux, just that quality hardware is almost always a good idea if the company is making any profits.

    I improved the quality of our backups greatly just by paying attention to what I’m doing. We have four servers with DLT backups and one with DAT. The guy doing network/PC admin before me (when I was doing more programming) was careless, and I’d say there was a “miss” at least once a week on one of the servers. The only errors I’ve had in the last year have been from the failure of a tape or a tape drive!

  • July 22, 2003 at 5:52 pm

    The problem is that we’ve got people who want full backups every night. But that takes more hours than we have available: the tape drives we have just aren’t fast enough, and even if newer drives are, they won’t be next year. So what a lot of people do is back up to something hosting cheap IDE disks at night, closing the backup window down to something acceptable, then dump that backup to tape during the day. The tapes go offsite immediately, so you’re safe if the datacenter burns. The disk backups are retained for a few days to cover short-term restores.
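
    The daytime disk-to-tape stage can be as simple as a cron job. This is only a sketch; the schedule, the tape device (/dev/st0), and the backup path are all assumptions:

        # Sketch: weekday-morning cron entry that dumps last night's disk
        # backups to tape, then ejects the tape for offsite pickup.
        # /dev/st0 and /backups/nightly are placeholder names.
        0 9 * * 1-5  tar -cf /dev/st0 /backups/nightly && mt -f /dev/st0 offline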

    It’s hard for me to justify spending $3,200 for a 480-gig NAS box when Dell’s entry-level PowerEdge server sells for $400 (including gigabit Ethernet and an 80-gig disk), 200-gig IDE drives sell for under $200, and the server has two bays open.

    And of course you don’t use the Minix filesystem, with its 2-gig partition limit, for something like this. I was thinking more along the lines of XFS, or possibly ReiserFS, and probably software RAID. But aside from a good, fast, reliable filesystem, the needs aren’t much: just enough Samba to create one share and put it on the network for the backup software to see.
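
    As a sketch of how little that takes: mirror a pair of the cheap IDE disks with Linux software RAID, put XFS on top, and export one share. The device names, mount point, and share name are assumptions:

        # Sketch: software RAID 1 across two IDE disks, with XFS on top.
        # /dev/hdc1, /dev/hdd1, and /backups are placeholder names.
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdc1 /dev/hdd1
        mkfs.xfs /dev/md0
        mount /dev/md0 /backups

    And a minimal smb.conf to put that one share on the network:

        # Minimal smb.conf sketch: one writable share for the backup
        # software to see. "security = share" is a Samba 2.2-era setting.
        [global]
            workgroup = WORKGROUP
            security = share
        [backups]
            path = /backups
            writeable = yes
            guest ok = yes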

    And then the tape stage of the backup can happen during the day, when there are plenty of eyeballs available to troubleshoot any failures that do occur, and they do occur, especially when you have a farm of 40 servers.
