Possibly the first Apache worm

I just found this article describing a worm that attempts to infect vulnerable Apache servers running on FreeBSD.
This doesn’t have much effect on Linux or other Unix variants (beyond probably crashing lots of Apache processes, which the machine may or may not recover from gracefully), but chances are this is just a harbinger of things to come.

You should upgrade to Apache 1.3.26 or Apache 2.0.39 immediately to avoid any problems, especially if you use FreeBSD. I’ve been running version 1.3.26 on Debian here for about a week without any issues, as I’ve come to expect from Apache.

Will ZDNet ever get a clue about Linux?

The next time ZDNet runs a story about Linux and you start feeling the urge to click on the link and read it, I’ve got a piece of advice for you.
Lie down until it goes away.

If you have a clue about Linux, the story will just make you mad. If you’re trying to learn about Linux, ZDNet will fill you up with enough misinformation to confuse you for weeks.
Read more

A DOS-style editor for Linux

I keep seeing “someday someone will write a DOS edit clone for Linux”-type longings in Linux publications. These are pointless, because someone already did, years ago.

And no, its name isn’t vi or emacs. It’s a true blue (it really is blue) DOS-like editor that uses a lot of the same keystrokes as the Microsoft tool we all learned to tolerate, if not love, in the early ’90s. Hey, it wasn’t very powerful or fast, I know, but it was easy to learn and a whole lot better than edlin.

This one’s called SETedit, it’s from Argentina, and it’s just as easy to use but a whole lot more powerful. It’s also been ported to Win32, if you want to run it on more than just Linux.
Read more

Disguising a Linux box for the big, bad world

I had to put a Linux server out all alone in the big, bad world today. Before I turned it loose, I did a few things to give it a fighting chance out there.
The biggest thing I did was make the machine volunteer as little information as possible. Here’s how.
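For instance, Apache 1.3 can be told to stop advertising its version number and module list in its headers and error pages. These two httpd.conf directives are real; whether they were part of the original checklist is my assumption:

```apache
# In httpd.conf: send a bare "Server: Apache" header with no version
# or module list, and drop the version signature from error pages.
ServerTokens Prod
ServerSignature Off
```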
Read more

Getting out of a sticky BIND

Setting up DNS on Linux was never supposed to be the easiest thing in the world. But it wasn’t supposed to be this hard, either.
I installed Debian (since it’s nice and lean and mean) and BIND 9.2.1, then dutifully entered the named.conf file and the zone files. I checked their syntax with the included tools (named-checkconf and named-checkzone), and everything checked out fine. But my Windows PCs wouldn’t resolve against it.
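For reference, the sort of minimal zone file BIND 9.2 expects looks like this; every name, address, and timer here is made up for illustration:

```
$TTL 86400
@       IN      SOA     ns1.example.com. hostmaster.example.com. (
                        2002070801      ; serial (YYYYMMDDNN)
                        28800           ; refresh
                        7200            ; retry
                        604800          ; expire
                        86400 )         ; minimum TTL
@       IN      NS      ns1.example.com.
ns1     IN      A       192.168.1.2
www     IN      A       192.168.1.3
```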
Read more

A useful Linux app for your CD-R

Quit wasting space on your CD rack with CDs that are only 3/4 full!
C’mon. You know you’ve done it. You’ve got 1.9 gigs worth of stuff to burn to CD. You know it should fit on three CDs. Half an hour later, you’re tired of trying to figure out how to make it fit and you just burn 500 megs’ worth on four CDs.
Read more

Wiping a disk securely

Sometimes in the course of work, it’s necessary to securely wipe a disk. A drive containing confidential information may require replacement. Assuming you caught the problem before the drive died for good, you can wipe it before sending it back to the manufacturer.
Programs to securely wipe a drive cost money. Sometimes big money. Fortunately, it’s easy to do it with Linux.
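Here’s a sketch of the idea, demonstrated on a scratch image file standing in for the drive. On real hardware you’d point these commands at the device itself (something like /dev/sdX), so triple-check the name first; the wipe is irreversible.

```shell
# Build a 1 MB image file to play the role of the drive, and plant
# some "confidential" data on it. (On a real drive, skip this part
# and substitute the device name for disk.img below.)
dd if=/dev/zero of=disk.img bs=1024 count=1024 2>/dev/null
printf 'confidential data' | dd of=disk.img conv=notrunc 2>/dev/null

# Overwrite with one pass of random data, then a final pass of zeros.
# GNU shred does both in one shot; dd from /dev/urandom works too.
shred -n 1 -z disk.img

# A quick sanity check that the plaintext really is gone:
grep -q 'confidential data' disk.img || echo wiped
```

Run against a scratch file like this, it’s harmless; pointed at a real device, it destroys everything on it, which is the point.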
Read more

Optimizing Web graphics

Gatermann told me about a piece of freeware he found on one of my favorite sites, tinyapps.org, called JPG Cleaner. It strips out the thumbnails and other metadata that editing programs and digital cameras put in your graphics, none of which your Web browser needs in order to render them. Sometimes it saves you 20K, and sometimes it saves you 16 bytes. Still, it’s worth doing, because more often than not it saves you something halfway significant.
That’s great, but I don’t want to be tied to Windows, so I went looking for a similar Linux program. There isn’t much. All I was able to find was a command-line program, written in 1996, called jpegoptim. I downloaded the source, but didn’t have the headers to compile it. I went digging and found that someone built an RPM for it back in 1997, but Red Hat never officially adopted it. I guess it’s just too special-purpose. The RPM is still floating around; I found it on a Japanese site. If that ever goes away, just do a Google search for jpegoptim-1.1-0.i386.rpm.

It’s just a 12K binary, so there’s nothing to installing it. If you run SuSE, TurboLinux, Mandrake, or Caldera, the RPM will install just fine for you. I run Debian, so I used the utility alien to convert the RPM to a Debian package–no problem.

Jpegoptim actually goes a step further than JPG Cleaner. Aside from discarding all that metadata in the header, its main claim is that it optimizes the Huffman tables that make up the image data itself, reducing the image’s size without affecting its quality at all. The difference varies; I ran it on several megabytes’ worth of graphics and found that on images that still had all those headers, it frequently shaved 20-35K from their size. On images that didn’t have all the extra baggage (including some that I’d optimized with JPG Cleaner), it reduced the file size by another 1.5-3 percent. That’s not a huge amount, but on a 3K image, that’s still 45-90 bytes. On a Web page that has lots of small images, those bytes add up. Your modem-based users will notice.

And Jpegoptim will also let you do the standard JPEG optimization, where you set the file quality to a numeric value between 1 and 100; the higher the number, the truer it stays to the original. Some image editors don’t let you adjust the quality in a very fine-grained manner. I’ve found that a quality level of 70 is almost always perfectly acceptable.

So, to try to get something for nothing, change into an image directory and type this:

jpegoptim -t *

And the program will see what it can save you. Don’t worry if you get a negative number; if the “optimized” file ends up actually being bigger, it’ll discard the results.

To lower the quality and potentially save even more, do this:

jpegoptim -m70 -t *

And once again, it’ll tell you what it saves you. (The program always optimizes the Huffman tables, so there’s no need to do multiple steps.) Be sure to eyeball the results if you play with quality, and back up the originals.

Commercial programs that claim to do what these programs do cost anywhere from $50 to $100. Given that jpegoptim is free, that’s criminal. The program may be obscure, but go get it and take advantage of it.

Also, don’t forget the general rule of file formats. GIF is the most backward-compatible, but it’s encumbered by patents and it’s limited to 256-color images. It’s good for line drawings and cartoons, because it’s a lossless format (it only compresses the data, it doesn’t change it).

PNG is the successor to GIF, sporting better compression and support for 24-bit color. Like GIF, it’s lossless, so it’s good for line drawings, cartoons, and photographs that require every detail to be preserved. Unfortunately, not all browsers support PNG.

JPEG has the best compression, because it’s lossy. That means it looks for details that it can discard to make the image compress better. The problem with this is that when you edit JPEGs, especially if you convert them between formats, you’ll run into generation loss. Since JPEG is lossy, line drawings and cartoons generally look really bad in JPEG format. Photographs, which usually have a lot of subtle detail, survive JPEG’s onslaught much better. The advantage of JPEG is the file sizes are much smaller. But you should always examine a JPEG before putting it on the Web; blindly compressing your pictures with high compression settings can lead to hideous results. There’s not much point in squeezing an image down to 1.5K when the result is something no one wants to look at.

A stupid BIND trick

My head’s still swimming from my crash course in BIND. I knew enough BIND to be dangerous–I’ve known how to set up a caching nameserver for years, and even stumbling through creating a master server for someone with a fixed IP address who wanted to host a domain wasn’t beyond me. Creating BIND servers for an enterprise isn’t too big of a deal, but creating one right can be.
After reading a lot, I set to the task.

Here’s a hint: If you’re migrating your servers from another OS to some Unixish OS and BIND, you can avoid re-keying all those zone files. (We’ve got more than 60 of the blasted things; our external server alone is 404K worth of configuration files. I didn’t bother to check the internal files.) Set your new server to be a slave server to your current server. Be sure to comment out your allow-update line; BIND 9 will complain if you mention slave servers and updates in the same breath. Now restart BIND (/etc/init.d/bind9 restart in Debian 3.0; the command may be /etc/init.d/named restart or /etc/init.d/bind restart in other distros) and wait. In my case, the files started appearing within seconds, and within a couple of minutes, my server had downloaded all of them. Reset your server to master status, then find a few people to change their TCP/IP configuration to use it. Give it a day or two, and when you’re convinced that all is well, turn off DNS on the old server and put the new server in production.
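The slave stanza in named.conf on the new box looks something like this; the zone name and the old server’s address are placeholders, and you’d repeat one stanza per zone:

```
zone "example.com" {
        type slave;
        masters { 192.168.1.5; };       // the old DNS server
        file "db.example.com";          // where BIND writes the zone it pulls
        // allow-update { none; };      // leave commented out on a slave
};
```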

Yes, my Linux box was perfectly capable of pulling DNS records from an NT-based DNS server. This is good. If you’re running DNS on NT currently, I wholeheartedly recommend you migrate away from it. Don’t waste clock cycles and network bandwidth on an expensive NT server. Grab a server-grade machine that’s too old to be a useful NT server and load Linux or some BSD variant on it. I know a company that ran BIND on some old 25 MHz DEC VAX workstations for years. That’s a bit too low-end to be comfortable, but if you’ve got server-grade 486-66s kicking around in a dusty corner somewhere, they’ll be adequate. A Pentium-133 will treat you a little bit better. A good rule of thumb: If the machine ever ran NT Server with any competence at all (even if it was in 1996), it’s got enough oomph to run BIND.

The nice thing about machines like that is that you usually have more than one of them and it doesn’t cost you anything to keep a hot spare. If one fails, unplug it and boot up the spare. Yes, DNS is mission-critical, but by definition it’s also redundant.

I’m shocked that there isn’t a single-floppy Linux distro that’s basically just Linux and BIND. Here’s a challenge for some sicko: Make a mini-distro incorporating BIND and Linux 1.0.9 so the minimum requirements will be a 386sx/16 with 2 megs of RAM and an NE2000 NIC.

I believe there are other slick BIND tricks, but I think I’ll wait and see if they work before I go touting a bunch of stuff that might not work.

Linux sites I read regularly

I had a conversation with someone yesterday, and sometime during the course of it, the person said, “I could know more about Linux than so-and-so within four weeks if I had time.” To which I replied he’d probably know more within a week. Then he cited the whole time thing.
That’s a problem for everybody, but there are Web sites that’ll help you get sharp and stay sharp, without spending an inordinate amount of time at Google. Here are my favorites.

LinuxToday. (Daily) I’ve been reading this site on a regular basis for more than three years. At one point it became a really obnoxious advocacy/attack site, but that’s calmed down and it’s a lot more professional now. If a Linux story appears somewhere, a link to it will show up here at some point during the day.

NewsForge. (Daily) Not as prolific as LinuxToday; occasionally gets the story sooner, but also produces much more of its own content.

Slashdot. (Daily) Not strictly a Unix or Linux site, but when they run a Unix or Linux story, it’s worth digging through the comments. High noise-to-signal ratio, but you’ll find buried gems you’re not likely to find elsewhere.

Linux Weekly News. (Weekly) A weekly summary of the biggest Linux headlines. If you can only afford the time to read one Linux news source, make a habit of reading this one every Thursday. You’ll find the week’s biggest headlines, plus any new developments in Linux distributions, applications, and the kernel, all boiled down and organized in one place.

LinuxWorld and LinuxPlanet. (Weekly) Two online Linux magazines, well-written, informative and useful. New content is posted irregularly; you won’t miss anything if you just visit each once a week.

Linux Gazette. (Monthly) Another online Linux magazine. It was much “thicker” (more articles) a couple of years ago, but it’s still worth visiting once a month. In 1997, it was just about all we had and I don’t know if we’d have the others if it hadn’t been for LG.

Sys Admin, Linux Journal, and Linux Magazine. (Monthly) Online versions of print magazines. Not exactly beginner stuff; these have useful content for professional admins and developers. The latter two have a little bit of stuff for end users. Read them and learn what you can from them. If you’re looking for a tax writeoff for yourself, or you have a couple hundred bucks left in your annual budget to burn, subscribe to these and think about buying their CD archives to get all the back issues.

Freshmeat. (Whenever) Whenever a new open-source project is released, you’ll find the details here. Search here when you’re looking for something. Give it a cursory glance once in a while; you’ll find stuff you weren’t necessarily looking for but then wonder how you ever lived without it.

There are other sites, of course, but these are the sites that stood the test of time (for me at least).