I could ping my router, but not the rest of my network

I don’t know what happened, but my Ubuntu Linux server crashed hard the other night. And when I brought it back, the rest of the network couldn’t see it. I could ping my gateway (router), and the server was pulling an IP address over DHCP, and the rest of the world had connectivity to it, but I couldn’t ping anything else on the network. And my Windows machines couldn’t connect to it.
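For what it's worth, when a box can reach its gateway but nothing else on the LAN, my first few checks look something like this. This is a general triage sketch, not the fix from this incident; the interface name and addresses are placeholders:

```shell
# lan_check: quick triage when a host can ping the gateway but nothing else.
# eth0 and 192.168.1.50 are placeholders -- substitute your own interface
# and a known-good host on the same subnet.
lan_check() {
    ip addr show eth0       # is the address *and netmask* what you expect?
    ip route                # is there a route covering the rest of the LAN?
    arp -n                  # are neighbors' MAC addresses resolving at all?
    ping -c 1 192.168.1.50  # can you reach a peer, not just the gateway?
}
```

A wrong netmask is a classic way to get exactly this symptom: the gateway answers, but hosts that should be on-link get routed instead.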


How to mount a USB drive in Turnkey Linux

I like Turnkey Linux, which is a collection of pre-built server appliances based on Ubuntu. When you need a server fifteen minutes from now, it’s about the only way you can make it happen.

But as far as I can tell, it doesn’t mount USB drives automatically. That’s fine; these servers are designed to have the minimum necessary for their stated purpose in life and nothing more. Here’s how I mount a USB drive to use for making backups.
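A minimal sketch of the mount procedure, assuming the drive shows up as /dev/sdb1 (it may not on your system; check with lsblk or fdisk -l first):

```shell
# mount_usb: mount a USB drive for backups, then unmount when done.
# /dev/sdb1 is an assumption -- run lsblk or fdisk -l and use what you see.
mount_usb() {
    mkdir -p /mnt/usb
    mount /dev/sdb1 /mnt/usb   # add -t vfat if the filesystem isn't autodetected
    # ...copy backups here...
    umount /mnt/usb            # always unmount before unplugging
}
```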


Nginx, a leaner, meaner web server

Ars Technica posted a nice writeup on Nginx, a cut-down webserver that does less than Apache does, but does the few things it does much faster. That’s nothing particularly new, as smaller and faster webservers have existed for as long as I can remember.

What makes Nginx different is that it can work with PHP. And therefore, it can run WordPress.


Fixing my b0rken WordPress installation

A little over a week ago, WordPress started acting weird. First, it just got dog slow. Then my site stats page started freezing until I scrolled down and then back up again. Then I started seeing a WordPress.com logon screen on my site stats page. I had to look that account up. Thank goodness for Gmail. Then my Akismet spam filter quit working. Then my stats page stopped working entirely.

I lived with it for a couple of days. I figured maybe WordPress and Akismet had changed something. Or maybe my Linux distribution had. And maybe some update messed things up, and some other update would come along and fix it. No such luck.

How I changed servers midstream

When upgrading this site, I replaced the underlying hardware as well. The old server was just a dead end in too many regards to be worth upgrading in place, and besides, being able to run new and old side by side for a time is helpful.

This type of maneuver is routine work for a professional sysadmin. But it’s been at least two years since I’ve done anything similar, and at least five years since I did it with Linux.

When I built the new machine, I gave it a unique IP address. Turnkey Linux makes getting an operational LAMP stack trivial, and depending on what you want to run on that stack, you may even be able to get that installed for you too.

Unfortunately for me, the Geeklog migration tool doesn’t seem to work with WordPress 3.0.1. So I had to get WordPress running on my old hardware in order to migrate. I chose WordPress 2.0.11 because the 2.0 branch appeared to be the current branch when Justdave wrote his migration tool, and 2.0.11 ran without complaint on the dated versions of PHP and MySQL that were on my old server.

After importing the content, I used mysqldump to export my databases. Specifically:

mysqldump --opt -u [mysql username] -p [database name, probably wordpress] > wordpress.sql

I should have gzipped the file, but I didn’t.

gzip wordpress.sql

I then connected to the old server via FTP and transferred the file. Use your favorite file transfer method; I happened to have FTP set up for my internal network.

Uncompress the file if you compressed it:

gunzip wordpress.sql.gz

Then restore the file:

mysql -u [mysql username] -p [database name] < wordpress.sql

Or, if the database already exists, as in my case, the same command works; the tables in the dump simply replace the existing ones:

mysql -u [mysql username] -p [database name] < wordpress.sql

(Despite its name, mysqlimport isn’t the right tool here; it’s a front end for LOAD DATA INFILE and expects tab-delimited data files, not SQL dumps.)

Then I connected to the webserver via my web browser. WordPress 3.0.1 saw the WordPress 2.0.11 database and informed me that it needed to be upgraded. So I let it do its thing, and a few minutes later, I had a functioning WordPress site with 10 years’ worth of legacy entries.

I messed around with it for a while. Finally, I decided to go live. And at this point, I should have physically moved the new server into its permanent home. I didn’t do that, so now when I decide to move the server, I’m going to have some downtime.

To flip the IP addresses, you need to know where your Linux box stores its IP address. Debian and Ubuntu both store it in /etc/network/interfaces. As far as I can tell, Red Hat and derivatives like CentOS store it in /etc/sysconfig/network-scripts/ifcfg-eth0, but I haven’t used Red Hat or a derivative in a long time, perhaps 2003.
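For reference, a static-address stanza in /etc/network/interfaces looks something like this. The addresses here are placeholders; keep whatever values your file already has and change only the address line:

```
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
```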

If worse comes to worst, try something like this to determine where it’s stored:

grep -r [ip address] /etc/

I edited the appropriate file on both boxes, changing the IP address while leaving all of the other parameters unchanged.

I then issued the command ifdown eth0 on both machines.

On my new production server, I then issued the command ifup eth0. Depending on the Linux distribution, it might also be necessary to re-issue a default route command. I didn’t have to do that.
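Strung together, the flip on each box amounts to this. It's a sketch, not a script I ran; eth0 and the gateway address are assumptions:

```shell
# flip_ip: re-address an interface the Debian/Ubuntu way.
# eth0 and 192.168.1.1 are placeholders for your interface and gateway.
flip_ip() {
    ifdown eth0
    # ...edit /etc/network/interfaces here, changing only the address line...
    ifup eth0
    # Some distributions also need the default route restored by hand:
    # route add default gw 192.168.1.1
}
```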

Depending on how much Linux/Unix cred you have at stake, you could just do it the Windows way and reboot the box. Or both of them.

Once I was satisfied everything was working, I powered down the old server and celebrated.

Weekly roundup: 6 Oct 2010

I used to do a weekly roundup every so often, just doing short takes on stuff that interested me as I found it. I haven’t done that in years; I thought I’d give it a whirl again. I don’t know how often I’ll do it, but it was fun.

Ars Technica says Intel’s neutral stance on Atom in servers is a mistake. Absolutely. A dual-core Atom gives plenty of power for infrastructure servers like Active Directory DCs, print servers, and other similar roles. Atoms could even handle many web server tasks.

Xeons are appropriate for database servers and application servers, but throwing them at everything is severe overkill. A lot of server tasks are more disk-bound or network-bound than CPU-bound.

I worked for several years in a datacenter facility that was, physically, at half capacity. But they didn’t have enough power or cooling capacity to add much more to it.

The only way anything can be added there is to take something away first. Right-sizing servers is the only way to fix that. If they yanked a Xeon, they could replace it with several Atom-based servers and get a net gain in functionality per square foot and per BTU.

Virtualization, a la VMware, is an option, but it isn’t necessarily a drop-in replacement for right-sizing.

Or, of course, Intel can sit back and wait for ARM to come in and save the day. ARM provides even more functionality per watt. And even though ARM doesn’t run Windows, it does run Linux, and Samba has reached the point where it can stand in for an Active Directory domain controller.

Is there a market out there for a domain controller that fits in a package the size of a CD/DVD drive and consumes less than 20 watts? I’m sure there is. And if Intel doesn’t want to deliver it, ARM and its partners can.

There may be some resistance to ARM, since some decision makers are nervous about things they haven’t heard of, but it should be possible to overcome that. Maybe you haven’t heard of ARM, but guess what? Do you have a smartphone? It has an ARM CPU in it. That PDA you carried before you had a smartphone? It had an ARM CPU in it. It’s entirely possible that your consumer-grade network switch at home has one in it too. Not your router, though. That’s probably MIPS-based. (MIPS is another one of those scary RISC CPU architectures.)

Put a solid operating system on an ARM CPU, and it can run with anything. I have ARM devices that only reboot when the power goes out. If it weren’t for tornado and thunderstorm season causing the power to hiccup, those devices could run for years without a reboot or power-down.

And speaking of ARM, I have seen the future.

Pogoplug is an ARM-based appliance for sharing files. You plug it in, plug USB drives into it, and share files on your home network and the Internet with it. At least, that’s how it’s marketed. But you can hack it into a general purpose Linux box.

Inside, there’s a 1.2 GHz ARM CPU, 256 MB of RAM, and another 256 MB of flash memory. Not a supercomputer, but that’s enough power to be useful. And it’s tiny, silent, and sips power. You can plug it in, stash it somewhere, and it’ll never remind you that it’s there.
I’ve actually considered picking up a Pogoplug or two (they go on sale for $45 occasionally, and the slightly less powerful Seagate Dockstar is available for about $30 when you can find them) to run this web site on. Considering how surprisingly well WordPress runs on a 450 MHz Pentium II with 128 MB of RAM (don’t ask me how I know), I think a Pogoplug could handle the workload.

What stops me? I can build an Atom-based PC for less than $150, depending on what I put in it, and run Turnkey Linux on it. Under a worst-case scenario, Turnkey Linux installs in 15 minutes, and it doesn’t take me any longer than that to drop a motherboard and hard drive into a case. So I can knock together an Atom-based webserver in 30 minutes, which is a lot less time than it would take me to get the LAMP stack running on an ARM system.

But if I had more time than money, I’d be all over this.

A device similar to this with an operating LAMP stack on it ready to go is probably too much to ask for. A ready-to-go image running the LAMP stack, similar in form to the DD-WRT or Tomato packages that people use to soup up their routers, might not be. I think it’s a good idea but it isn’t something I have time to head up.

I don’t think I’ve mentioned Turnkey Linux before. I’ve played with it a little, and I’m dead serious that it installs in 15 minutes or less. Installing off a USB flash drive, it might very well install in five.

And it’ll run pretty happily on any PC manufactured this century. More recent is better, of course, but the base requirements are so modest they aren’t worth mentioning.

I’ve built dozens of Linux servers, but this is fantastic. Spend a few minutes downloading an image, copying it onto installation media, and chances are the installation process will take less time than all of that does.

It’s based on Ubuntu LTS, and comes in literally 38 flavors, with more to come after the next refresh is done.

They haven’t built their collection based on the current version of Ubuntu LTS yet because they’ve been distracted with building a backup service. But that’s OK. Ubuntu 8.04.3 still has a little life left in it, and you can either do a distribution upgrade after the initial install, or build a new appliance when the new version comes out and move the data over.

And if Ubuntu isn’t your thing, or you really want 10.04 and you want it now, or worse yet, Linux isn’t your thing, there’s always Bitnami (bitnami.org).

Linux appliances took a little while to get here, but they’re here now, and they work.

Working for Canonical doesn’t make you pro-Free Software?

Stuart Langridge works for Canonical. Canonical produces Ubuntu, a popular Linux distribution. Apparently, this means he favors proprietary software in some people’s minds.

Yes, this is the same Ubuntu Linux you can download freely. You can make copies of it and sell them, legally. You can modify it, if you have the ability and inclination. Just setting the record straight.

Canonical does what it has to do to get Linux working well on your computer. And it succeeds rather nicely. If a computer can run Windows XP or newer, it can run Ubuntu, and installing Ubuntu will be easier than installing Windows in many cases. The computer this website runs on was built on a variant of Ubuntu, and it literally took longer to burn the CD than it took to run the installation. It blew my mind.

This is a case of software being like religion.

I am Lutheran. Almost militantly so, to the annoyance of some people who know me. I break from the traditional Lutheran camp in two regards: favoring music in the service that was written during my lifetime, and not being uptight enough about doctrine. I take the concept of grace alone, faith alone very seriously, and to an outsider, that plus the Lutheran definition of grace–God’s riches at Christ’s expense–is enough to make you Lutheran. That’s good enough for me. Some vocal Lutherans expect you to be able to recite precisely what makes John Calvin a heretic. I neither know nor care about that. I read the Bible, in its entirety, and concluded that Calvin puts certain responsibilities on you, a human being, that Luther puts on God. Since I believe that God is more reliable than me, I concluded that the Lutheran view is safer. I believe that ought to be enough.

The big question is whether I care if I’m Lutheran enough for some people. And the answer is no, I do not. I just ignore the rants about heresy that I see on Facebook, or better yet, stay off Facebook for long stretches at a time, and go about my business.

I guess that’s easier said than done in the Free Software community. There are a lot more witch hunters in that group. I suppose the people who can’t write working code try to make up for it by concentrating on ideology, or something like that. I do know it’s a whole lot easier to crusade for ideology than to write code.

The silent majority of people just want a system that works. They don’t want to hunt down drivers and compile them, or spend hours editing configuration files. I can’t tell you how many e-mail messages I received over the years from people who tried the most popular Linux distribution of the time, ran into difficulty, and gave up. (It’s one reason my e-mail address isn’t on this site anywhere anymore.) Even if the problem was something I could answer relatively easily, they just gave up and installed Windows instead. In their minds, if Dave Farquhar knows how to make that work, then whoever made that particular Linux distribution ought to make it work automatically. And they have a point.

So if Ubuntu installs a driver or some other low-level code that isn’t completely Richard Stallman-approved, the majority of people really don’t care. They’re happy it works. If their freedoms are infringed upon, they don’t know it.

I’ve said before that I could re-train my mother to use Linux. In fact, she could probably get all of her work done in Linux and emacs, and I’m sure John the Baptist Richard Stallman would be absolutely thrilled. But it would take her several years to learn the nuances of emacs, and some of her job duties would take much longer. Perhaps she wouldn’t mind occasionally spending hours to do something that can be accomplished in minutes using a more specialized, albeit proprietary, tool. In the end, when she’s a master of emacs, I’ll be able to tell her that she’s free. And she’ll tell me, “It wasn’t worth it.” Or, if she’s feeling a little more reasonable, she’ll throw something at me.

It’s easier said than done. But perhaps when the witch hunters come knocking, it would help to ask them if they had anything better to do?

After all, Langridge could be a total sell-out like me. In my job, I’ve recommended Linux-based solutions when appropriate, but I spend the overwhelming majority of my time supporting things that run on Windows. Perhaps they would prefer he do that.

But I wouldn’t. I really like the work Canonical is doing.

Why I still like Debian

Say what you will about Debian–the development process is slow and plodding, the distribution is always trailing-edge, and Debian is always the last to get everything–but installing it today reminded me why I still like it.

I need a temporary holding place where I can experiment. I want to move my genealogy page to a new piece of software, and I want to migrate this blog to WordPress.

The only spare computer I have right now that works reliably is an ancient P2-266. I don’t know how that ended up being the case, but I’ll work with it.

The system has 192 MB of RAM. I have a pile of DIMMs, but it doesn’t like most of them. So 192 it is.

Ubuntu’s installer won’t load on this system. It tries and tries, but after several hours, the only result is a graphical screen with a heron on it and a mouse pointer.

Debian just loads in text mode and doesn’t complain. It asks a few questions along the way, and it’s slower than the last few installs I’ve done, but it’s steady.

I’m confident I could get it to work on my 486 too, if I had the need or inclination (I don’t). I’ll save the 486 for the day I want to set up a DOS box for some old-school gaming. Probably in another 10 years.

Using video memory as a ramdisk in Linux

An old idea hit me again recently: why can’t you use the memory that’s sitting unused on your video card (unless you’re playing Doom) as a ramdisk? It turns out you can, just not if you’re using Windows. Some Linux people have been doing it for two years (see hedera.linuxnews.pl/_news/2002/09/03/_long/1445.html).

Where’d I get this loony idea? Commodore, that’s where. It was fairly common practice to use the video RAM dedicated to the C-128’s 80-column display for other purposes when you weren’t using it. As convoluted as PC video memory is, it had nothing on the C-128, where the 80-column video chip was a netherworld accessible only via a handful of chip registers. Using the memory for anything else was slow and painful, but it was still a lot faster than Commodore’s floppy drives.

So along comes someone on Slashdot, asking about using idle video memory as swap space. I really like the idea in principle: the memory isn’t doing anything, and RAM is at least an order of magnitude faster than disk, so even slow memory is going to give better performance.

The principle goes like this: you use the Linux MTD module and point it at the video card’s memory in the PCI address space. The memory is now a block device, which you can format and put a filesystem on. Format it ext2 (who needs journaling on a ramdisk?), and you’ve got a ramdisk. Format it as swap, and you’ve got swap space.
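A sketch of how that setup might look using the slram MTD driver. I haven't run this; the base address and size are placeholders you'd read from lspci -v for your own card, and the mount point must already exist:

```shell
# vram_disk: turn idle video RAM into an MTD block device, then use it.
# 0xd8000000 and +8M are placeholders -- get the real base address and size
# of the card's memory aperture from lspci -v before trying this.
vram_disk() {
    modprobe slram map=VRAM,0xd8000000,+8M   # map the region as an MTD device
    modprobe mtdblock                        # expose it as /dev/mtdblock0

    mke2fs /dev/mtdblock0                    # ext2: no journal needed on a ramdisk
    mount /dev/mtdblock0 /mnt/vram

    # Or, for swap space instead of a ramdisk:
    # mkswap /dev/mtdblock0 && swapon /dev/mtdblock0
}
```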

The downside? Reads and writes don’t happen at the same speed over AGP. Since swap space needs to be fast in both directions, this is a problem. It could work a lot better with older PCI video cards, but those, of course, are a lot less likely to have a useful amount of memory on them. It would also work a lot better on newer PCIe video cards, but of course if your system is new enough to have a PCIe card, it’s also likely to have huge amounts of system RAM.

The other downside is that CPU usage tends to really jump while accessing the video RAM.

If you happen to have a system that has fast access to its video RAM, there’s no reason not to try using it as swap space. On some systems it seems to work really well. On others it seems to work really poorly.

If it’s too slow for swap space, try it as a ramdisk. Point your browser cache at it, or mount it as /tmp. It’s going to have lower latency than disk, guaranteed. The only question is the throughput. But if it’s handling large numbers of small files, latency matters more than throughput.

And if you’re concerned about the quality of the memory chips on a video card being lower than the quality of the chips used on the motherboard, a concern some people on Slashdot expressed, using that memory as a ramdisk is safer than using it as swap. If there’s slight corruption in the memory, the filesystem will report an error. Personally, I’m not sure I buy that argument, since GPUs tend to be even more demanding on memory than CPUs are, and the consequences of using second-rate memory on a video card could be worse than just some stray blips on the screen. But if you’re a worrywart, using it for something less important than swap means you’re not risking a system crash by doing it.

If you’re the type who likes to tinker, this could be a way to get some performance at no cost other than your time. Of course, if you like to tinker and enjoy this kind of stuff anyway, your time is essentially free.

And if you want to get really crazy, RAID your new ramdisk with a small partition on your hard drive to make it permanent. But that seems a little too out there even for me.

Need a cheap NAS? Grab this floppy and an old Pentium and you’ve got it

I wanted to build a small-as-possible Linux for the purpose of creating a lightweight NAS a few years back. I even downloaded the uclibc development tools and started compiling for the purpose of doing it. Then I got distracted.

I guess it doesn’t matter. I think NASLite had beaten me to the punch anyway.

Here’s how it works. You download the appropriate floppy for the network type (SMB for Windows networks, NFS for Unix) and network card you have. You find an old PC. As long as it has PCI slots, it’ll work. Drop in the NIC if there isn’t one there already, and then drop in as many as four IDE hard drives. (The disk will reformat the drives if there’s anything on them, so make sure they’re new or scratch drives beforehand.) If the BIOS doesn’t support the drives because they’re too big, disable them in the BIOS. Don’t worry; Linux controls the drives directly, so you don’t need the BIOS. Boot off the floppy, it joins the network, and you’ve got a bunch of disk space for the cost of the drives and possibly the NIC.

Nice, huh?

This isn’t suitable for use in most corporate environments since it creates wide-open storage (it might work well as a big file dump, so long as people realize there’s no security there, but I’ve learned the hard way that users tend not to listen, or at least not remember, when they’re told such things). For home networks it’s fine, unless you’ve got wireless, in which case anyone who can get into your wireless network would also be able to get to your NAS.

Even then, it’s useful if what you want is a central repository for programs like Irfanview and Mozilla Firefox that you install on all your PCs and want to keep handy.

At any rate, if you’re creative and careful and have a Linux box and know how to use the dd command (or have a fairly up-to-date copy of WinImage) to copy a 1.72-meg disk image to a floppy, this is a useful tool for you.
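For the record, writing an oversized 1.72-meg image from Linux looks something like this. The image filename is a placeholder, and /dev/fd0u1722 is the Linux device node for the 1722 KB format; plain /dev/fd0 only holds 1440 KB:

```shell
# write_floppy: copy a 1.72 MB boot image to floppy with dd.
# naslite.img is a placeholder name for whatever image you downloaded.
write_floppy() {
    dd if=naslite.img of=/dev/fd0u1722 bs=1024
}
```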
