If you didn’t compile it yourself, it’s not really yours.

I’m on my Linux From Scratch kick again. Unfortunately, compiling a complete workstation from scratch takes a really long time (the systems that benefit the most from it, namely low-end P2s, need close to a day to compile everything if you want X, KDE and GNOME and some common apps) and requires you to type a lot of awkward commands that are easy to mess up. The upside: Messages like, “I did my first LFS on a Pentium II 18 months ago and it was by far the best workstation I’ve ever had,” are common on LFS discussion boards.
So what to do…? If you want to learn a lot about how Linux works, you type all the commands manually and let the system build itself, and if you’re away while the system’s waiting for the next set of commands, well, the system just sits there waiting for you. In a couple of days or a week you’ll know Linux inside and out, and you’ll have the best workstation or server you ever had.

If, on the other hand, you’re more interested in having the best workstation or server farm you ever had and less interested in knowing Linux inside and out (you can always go back and do it later if you’re really interested–CPUs and disks aren’t getting any slower, after all), you use a script.

What script? Well, RALFS, for one. Just install Mandrake 8 or another 2.4-based distribution, preferably just the minimum plus all the compilers plus a text editor you’re comfortable with, then download the sources from www.linuxfromscratch.org, then download RALFS, edit its configuration files, get into text mode to save system resources, and let RALFS rip.

RALFS looks ideal for servers, since the ideal server needs just a kernel, the standard utilities that make Unix Unix, plus just a handful of server apps such as Apache, Samba, Squid, or BIND. So RALFS should build in a couple of hours for servers. And since a server should ideally waste as few CPU cycles and disk accesses as possible, RALFS lets you stretch a box to its limits.

I think I need a new mail server…

A remote administration Unix trick

OK, here’s the situation. I had a Linux box running Squid, chugging away, saving us lots of bandwidth and speeding things up and making everything wonderful, but we wanted numbers to prove it, and we liked being able to just check up on it periodically. Minimalist that I am, though, I never installed Telnet or SSH on it. And besides, I haven’t found an SSH client for Windows I really like, and Telnet is horribly insecure.
Sure, I could just walk up to it and log in and look around. But the server was several city blocks away from my base of operations. For a while it was a good excuse to go for a walk and talk to girls, but there weren’t always girls around to talk to, and, well, sometimes I needed to check up on the server while I was in the middle of something else.

So here’s what I did. I used CGI scripts for the commands I wanted. Take this, for example:

#!/bin/sh
echo 'Content-type: text/html'
echo ''
echo '<pre>'
ps waux
echo ''
cat /proc/meminfo
echo '</pre>'

Then I dropped those files into my cgi-bin directory and chmodded them to 755. From then on, I could check on my server by typing http://192.168.1.50/cgi-bin/ps.cgi into a Web browser. Boom, the server would tell me what processes were running, how much memory was in use, and even more cool, how much memory was used by programs and how much was used for caching.

Here’s how it works. The first two echo statements hand Apache and your Web browser a minimal HTTP header (the blank line is required; it marks the end of the headers) so they’ll process the output of these commands. The next line opens a pre tag, which tells the browser the output is preformatted text, so don’t mess with it. That isn’t necessary for all commands, but for commands like ps that output multicolumn stuff, it’s essential. Next, you can type whatever Unix commands you want. Their output will be directed to the Web browser. I echoed a blank line just so the memory usage wouldn’t butt up against the process info. The last line closes the pre tag and cleans up.

I wrote up scripts for all the commands I frequently used, so that way when my boss wanted to know how Squiddy was doing, I could tell him. For that matter, he could check it himself.
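A hypothetical companion script along the same lines–call it disk.cgi–could report disk usage and load average the same way. (I’ve wrapped the body in a function here purely so it’s easy to exercise; as a real CGI you’d just put the body in cgi-bin/disk.cgi and chmod it 755.)

```shell
#!/bin/sh
# Hypothetical companion to ps.cgi: disk usage plus load average.
# The function wrapper is only for convenience; the body alone
# works as a CGI script.
disk_cgi() {
    echo 'Content-type: text/html'
    echo ''
    echo '<pre>'
    df -h
    echo ''
    cat /proc/loadavg
    echo '</pre>'
}

disk_cgi
```

Drop it in cgi-bin, point a browser at it, and you get a disk and load report from anywhere on the network.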

But if I knew there were going to be girls around, I went ahead and made an excuse to walk that direction anyway. Some things are more important than remote administration, right?

Linkfest.

I felt downright awful yesterday, but it’s my own fault. I remember now why I don’t take vitamins with breakfast. Very bad things happen.
So I’m whupped, and I’m not going to post anything original today. Just some stuff I’ve found lately and haven’t gotten around to posting anywhere.

But first, something to keep in the back of your mind: If The Good News Players, a drama troupe from the Concordia University system, is ever visiting a Lutheran church near you, be sure to go check it out. They are amazing. I put myself together enough to catch them at my church last night and I didn’t regret it in the least. They tell Bible stories in the form of mini-musicals; they’re easy to understand, professional, and just plain funny.

Linux OCR. This is huge. It’s not quite production-quality yet, but then again, neither is the cheap OCR software shipped with most cheap scanners. Check it out at claraocr.org.

It would seem to me that this is the missing link for a lot of small offices to dump Windows. Linux has always been a good network OS, providing fileshares, mail and Web services. Put Zope on your Web server and you can update your company’s site without needing anything like FrontPage. WordPerfect for Linux is available, and secretaries generally love WordPerfect, as do lawyers. ClaraOCR provides an OCR package. SANE enables a large number of scanners. GIMP is available for graphics work. And we’re close to getting a good e-mail client. And the whole shebang costs less than Windows Me.

Linux VMs, without VMware. This is just plain cool. If, for security reasons, you want one service per server, but you don’t have the budget or space for 47 servers in your server room, you can use the User-Mode Linux kernel. (The load on most Linux servers is awfully light anyway, assuming recent hardware.) This Linux Magazine article describes the process. I could see this being killer for firewalls. On one machine, create several firewalls, each using a slightly different distribution and ruleset, and route them around. “Screw you, l337 h4x0r5! You are in a maze of twisty passages, all alike!”

And a tip. I find things by typing dir /s [whatever I’m looking for] from a DOS prompt. I’m old-fashioned that way. There’s no equivalent syntax for Unix’s ls command. But Unix provides find. Here’s how you use it:

find [subdirectory] -name [filename]

So if I log in as root and my Web browser goes nuts and saves a file somewhere it shouldn’t have and I can’t find it, I can use:

find / -name "obnoxious_iso_image_I'd_rather_not_download_again.iso"

Or if I put a file somewhere in my Web hierarchy and lose it:

find /var/www -name dave.jpg

Windows XP activation cracked. Here’s good news, courtesy of David Huff:

Seems that the staff of Germany’s Tecchannel has demonstrated that WinXP’s product activation scheme is full of (gaping) holes:

WinXP product activation cracked: totally, horribly, fatally
Windows Product Activation compromised (English version)

Failures (of the modern man).

I love Joy Division references. For those of you reading this on the front page who can’t see the title (I’m sure I can fix that but I’m lazy and it’s late), I titled this “Failures (of the modern man),” which was the title of an early Joy Division song. I don’t remember what it was about. It was just a really cool title.
I saw another ghost today. Not literally. A ghost from the past. Someone I knew a long time ago, someone I hadn’t seen in eight years. I know I looked vaguely familiar to him because we made eye contact and he gave me the I-know-you-from-somewhere-but-I-don’t-know-where-so-I-won’t-say-anything look. I gave him the very similar I’m-pretty-sure-I-know-who-you-are-but-I’m-not-saying-anything-just-in-case-I’m-wrong look.

I met him in 1991. I’d just turned 16 and this was my first job, at a place called Rax, a now-defunct fast-food joint whose specialty was roast beef sandwiches. I was ambitious and worked hard. I had several reasons for working: It was something to do. I liked having date and weekend money. (Not that I got many dates–so it was mostly weekend money.) It was another place to meet people outside of school.

A lot of people looked down on me because I was working in a restaurant, and a fast-food restaurant at that, and it made me mad sometimes. I got the job easily and I didn’t want to make a lateral move, so it made sense to stay there. At the time, it was virtually impossible for a 16-year-old male to get a job outside of food service because until you turn 18, you can’t be prosecuted if you steal stuff. At a fast-food joint you can steal little stuff but they can fire you for doing it, and that’s usually enough deterrent. And I was ambitious. This was my job until I turned 18 and could get something else. Then when I turned 18 it was hard to get something else because not many places were hiring in 1992. We were in a recession. So I stayed until I left for college. No shame in any of that. I worked hard, I did the job well, I was good at it, and I did move up. My next job was in retail, hawking consumer electronics for two summers and two Thanksgiving and Christmas breaks. My next job after that was as a part-time computer tech, which grew into a full-time network administrator job.

I’ve been a young professional for just over four years now, and I can look back over the four years, see good reviews, a lot of work accomplished, and a steadily increasing salary, presumably a reflection of how my employers have valued my work. My car’s a 2000, I wear a tie most days, and I command respect. I guess I turned out OK.

Back to this guy I saw yesterday. He worked part-time. He was the guy who walked around the mess hall “dining room” and cleaned off the tables. He swept the carpet. He washed the trays. He got people refills. He was a good guy, a nice guy, personable. He didn’t strike me as dumb either. I remember him being reasonably articulate. He’s not of old European stock; he’s at most a second-generation American and more likely he’s an immigrant, but he had a very light accent.

I don’t know how well respected he was. One night I came in, and the general manager asked me to fill out the night’s lineup. The positions were usually pretty obvious. You had to be 18 to run the slicer, so you’d put the 18-year-old there. One of the people working until close took the salad bar, and the other closer took the drive-thru. Of the two people left, one took the dining room and one took the front register. Most of us had our specialties. I was good on the register, fast at making change, so I was usually on the drive-thru or up front. So I filled out the lineup, handed it to her, and asked if it was OK.

“No, put [this guy] on front cash,” she said, then laughed. “No, it’s great. Post it.”

I don’t know when I last saw him. The store closed in 1993. I left a little before that to go to college, but I remember the store’s last day. I don’t know if I came in just to say goodbye, or if I came in that day to get my last paycheck. I know I didn’t see him that day. The store fell on tough times near the end, because the company was struggling big-time, and just about all of us knew it. More often than not, we went without someone in the dining room. The salad bar person or the front cashier would pop out there and clean up the dining room when things were slow, which was often. He probably didn’t make more than $4.50 an hour, but if the store could save 9 bucks by not having him there from 5-7, the pressure was there for them to do it. He may well have sought employment elsewhere long before the store closed.

I saw him today. I was bad today. I rarely eat fast food anymore, because it’s terribly unhealthy, but today I had a Jack in the Box craving, so I went there. And there, working the dining room, was a dead ringer for the guy I was talking about. This guy had a beard, and he looked older, but it’s been eight years, so of course he looks older. The name on his badge was a diminutive form of the name he used when I met him, and it’s not a terribly common name. If I were a betting man, I’d eagerly wager a hundred bucks it’s the same guy.

And I felt bad. I’m 26 now. I said something derogatory about yuppies a couple of weeks ago, and one of my coworkers said, “But you are one.” And I guess he’s right. I wear a tie. I drive a 2000. I can afford a ritzy apartment. (I prefer to bank the money instead.) I’ve done OK.

And here’s a former coworker, in all likelihood in his 60s now, still doing the very same job he was doing when I met him more than 10 years ago. I confess I don’t know what minimum wage is these days because it’s been eight years since minimum wage affected how much money I made. And I never made minimum. My starting pay was $4.50 an hour, when minimum was $4.35.

While all fast-food jobs, outside of management, are considered unskilled labor, cashiers are generally paid better than the people who clean dining rooms. I doubt he made much more than minimum 10 years ago, and I doubt he makes much more than minimum now. Meanwhile, the only way minimum wage affects me is by raising the price of things like soft drinks and milkshakes.

And I’m wondering, where did it go wrong? Maybe he likes working fast food. I don’t know. But he didn’t look particularly happy, so I doubt it.

You can learn a lot in 10 years. I’ve been slacking. I wanted to know Unix by now. I also wanted to be able to read Greek and Hebrew by now. I can build and administer simple Linux servers now, but the only non-English language I know is Spanish, and I sure don’t know much of that. I can find the bathroom and I can ask for three of my favorite foods, I know a good way to get funny looks is to say, “Lavo mis manos con sopa de pollo” (“I wash my hands with chicken soup”), and when one of my coworkers curses in Spanish, I know she’s cursing but I usually don’t know what she’s saying.

But I have learned a lot.

Why hasn’t he? Where were his opportunities? Did he choose not to better himself, or has the door been slammed in his face? I know it’s not the government’s responsibility to see to it he betters himself, or necessarily even to give him opportunities, but isn’t it his neighbors’ responsibility? What have they been doing? What should they be doing? What should I be doing?

Those are tough questions I don’t have an answer for.

What can I say about Tuesday…?

Photography. Tom sent me links to the pictures he took on the roof of Gentry’s Landing a couple of weeks ago. He’s got a shot of downtown, the dome, and the warehouse district, flanked by I-70 on the west and the Mississippi River on the east.
I’m tired. I spent yesterday fighting Mac OS X for a couple of hours. It still feels like beta software. I installed it on a new dual-processor G4/533 with 384 MB RAM, and it took four installation attempts to get one that worked right. Two attempts just flat-out failed, and the installation said so. A third attempt appeared successful, but it felt like Windows 95 on a 16-MHz 386SX with 4 megs of RAM. We’re talking a boot time measured in minutes here. The final attempt was successful and it booted in a reasonable time frame–not as fast as Windows 2000 on similar hardware and nowhere near the 22 seconds I can make Win9x boot in, but faster, I think, than OS 9.1 would boot on the same hardware–and the software ran, but it was sluggish. All the eye candy certainly wasn’t helping. Scrolling around was really fast, but window-resizing was really clunky, and the zooming windows and the menus that literally did drop down from somewhere really got on my nerves.

All told, I’m pretty sure my dual Celeron-500 running Linux would feel faster. Well, I know it’d be faster because I’d put a minimalist GUI on it and I’d run a lot of text apps. But I suspect even if I used a hog of a user interface like Enlightenment, it would still fare reasonably well in comparison.

I will grant that the onscreen display is gorgeous. I’m not talking the eye candy and transparency effects, I’m talking the fonts. They’re all exceptionally crisp, like you’d expect on paper. Windows, even with font smoothing, can’t match it. I haven’t seen Linux with font smoothing. But Linux’s font handling up until recently was hideous.

It’s promising, but definitely not ready for prime time. There are few enough native apps for it that it probably doesn’t matter much anyway.

Admittedly, I had low expectations. About a year ago, someone said something to me about OS X, half in jest, and I muttered back, “If anyone can ruin Unix, it’s Apple.” Well, “ruin” is an awfully harsh word, because it does work, but I suspect a lot of people won’t have the patience to stick with it long enough to get it working, and they may not be willing to take the extreme measures I ultimately took, which was to completely reformat the drive to give it a totally clean slate to work from.

OS X may prove yet to be worth the wait, but anyone who thinks the long wait is over is smoking crack.

Frankly, I don’t know why they didn’t just compile NeXTStep on PowerPC, slap in a Mac OS classic emulation layer, leave the user interface alone (what they have now is an odd hybrid of the NeXT and Mac interfaces that just feels really weird, even to someone like me who’s spent a fair amount of time using both), and release it three years ago.

But there are a lot of things I don’t know.

I spent the rest of the day fighting Linux boot disks. I wanted the Linux equivalent of a DOS boot disk with Ghost on it. Creating one from scratch proved almost impossible for me, so I opted instead to modify an existing one. The disks provided at partimage.org were adequate except they lacked sfdisk for dumping and recreating partition tables. (See Friday if you don’t have the foggiest idea what I’m talking about right about now, funk soul brother.)

I dumped the root filesystem to the HD by booting off the two-disk set, mounting the hard drive (mount -t ext2 /dev/hda1 /mnt) and copying each directory (cp -a [directory name] [destination]). Then I made modifications. But nothing would fit, until I discovered the -a switch. The vanilla cp command had been expanding out all the symlinks, bloating the filesystem to a wretched 10 megs. It should have been closer to 4 uncompressed, 1.4 megs compressed.

Finally I got what I needed in there and copied it to a ramdisk in preparation for dumping it to a floppy. (You’ve gotta compress it first and make sure it’ll fit.) I think the command was:

dd if=/dev/ram0 bs=1k | gzip -v9 > [temporary file]

The size was 1.41 MB. Excellent. Dump it to floppy:

dd if=[same temporary file from before] of=/dev/fd0 bs=1k

And that’s why my mind feels fried right now. Hours of keeping weird commands like that straight will do it to you. I understand the principles, but the important thing is getting the specifics right.

E-scape from the Hotel California…

Escaping Microsoft’s Hotel California. For lack of any other available alternative, I started using Outlook Express for mail about 18 months ago. It’s a decent mail client, does most of what I want–I don’t want much–and doesn’t do too terribly many things I don’t want it to. But it’s Microsoft. It runs on Windows. Its file formats are proprietary. It forces me to read my mail with the same workstation all the time. Migration makes me leave the mail behind. Most of it I want to leave behind, but do I want to sort it? NO! OK then. What to do?
Make an IMAP-enabled mail server out of a deprecated old PC and move all that mail over to it, that’s what. I tried to do this with TurboLinux but none of my mail clients wanted to talk to it. Since all of the books I have talk about Red Hat, I went with it, and it worked.

Here’s what I did. Install basic Red Hat. Include sendmail, procmail, fetchmail, imap. I pulled out all the XFree86 stuff. GUIs are for workstations. Command lines are for servers (and for workstations where you expect to get any work done quickly). Actually, I also pulled out just about everything else it would allow. A secure installation is a minimalist installation. After installation, edit /etc/inetd.conf. Uncomment the imap line, save and exit. (I like pico, but you can do it with vi if that’s all you’ve got–find the line, delete the comment character with x, then save and exit by hitting ZZ.) Bounce inetd with /etc/rc.d/init.d/inet restart. Create a user account with adduser [name] ; passwd [name].
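For reference, the line in question looks something like this once the leading # is gone. The exact paths vary by distribution, so treat this as a sketch, not gospel:

```
# /etc/inetd.conf -- the imap line, uncommented:
imap    stream  tcp     nowait  root    /usr/sbin/tcpd  imapd
```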

Connect to your new IMAP server. For now, just use your ISP’s existing mail server for outgoing mail; use your IMAP server for incoming. Your username and password are the name/password you just created. After a brief delay, you should see your empty inbox, and you can start dragging stuff to it.

It went great for me. I created a new IMAP folder, opened one of OE’s folders, dragged all the contents over to the IMAP folder, and bingo! They moved. Read status and date were preserved too. (I’ve seen IMAP servers that wouldn’t do that.) I switched to another PC that had OE loaded and connected to my new mail server via IMAP and read some messages. Fantabulous.

Theoretically, I can go to my DSL router and forward port 143 to my mail server and read my mail from the outside.

Now, if you want to actually use your mail server to send mail, that gets trickier–you’ve gotta configure sendmail for that. The out-of-box setup is too secure to just use. Open /etc/mail/access and add your LAN to it, like so:

172.16.5 RELAY

Sendmail actually reads the hashed database built from that file, so rebuild it afterward with makemap hash /etc/mail/access < /etc/mail/access, then restart sendmail.

Of greater interest is the fetchmail/procmail combo. You can use fetchmail to automatically go grab mail from the 47 mail accounts you have, then use procmail to sort it and filter out some spam.

To configure fetchmail, create the file /root/.fetchmailrc and chmod it to 0600. Here’s a very basic configuration:

#.fetchmailrc
poll mailserver.myisp.com
with protocol pop3
username myname password mypassword is my_name_on_my_linux_box

And finally, what’s the point of running your own mail server if you don’t spam filter it? There are lots of ways to go about it. I’m experimenting with this method. It uses procmail, which is called by sendmail, which is called by fetchmail. See how all this works?
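Procmail recipes live in ~/.procmailrc. As a taste, here’s a minimal sketch–the folder names are my own invention, and real spam filtering gets far more elaborate than this:

```
# Minimal ~/.procmailrc sketch (adjust paths to your own layout)
MAILDIR=$HOME/mail
LOGFILE=$MAILDIR/procmail.log

# Crude subject filter: anything hawking viagra lands in the spam folder.
:0:
* ^Subject:.*viagra
spam
```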

If you want to get really smooth, you can even block mail before you download it with a program called Mailfilter. You probably don’t want to get as fancy with Mailfilter as people do with procmail, but you can use Mailfilter to search for certain key words or phrases like (checking my spam folder) viagra, mortgage, “fire your boss,” “lose weight” and delete them before you waste time and bandwidth downloading them. I’ve read estimates that spam traffic costs ISPs an average of $3 per month per user. Mailfilter won’t save your ISP very much, since the mail’s already been routed through its network and is just on its very last leg of the trip, but it’ll save them a little, and it’ll save you some bandwidth and time, so it’s probably worth it.

So if you’re looking to leave Outlook and/or Outlook Express all behind, or at least give yourself the option to use a different client, here’s the way out. It’s not too terribly difficult. And you gain an awful lot in the process: mail in a standardized, open format; redundancy; ease and versatility of backup (just schedule a cron job that tars it up and does stuff with it); the ability to very, very quickly search all of your mail with the Unix grep command (just log in, type grep -r [search string] * | more, and find what you’re looking for instantly) and far, far better mail filtering options.
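That cron job can be a one-liner. A sketch, assuming your mail lives in /home/dave/mail and you want a nightly tarball (the paths are made up; adjust to your own layout):

```
# crontab entry: tar up the mail directory every night at 2:30 a.m.
# (% is special in crontab's command field, hence the backslash)
30 2 * * * tar czf /home/dave/mail-backup-$(date +\%u).tar.gz /home/dave/mail
```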

And it’s infinitely cheaper (and more secure) than Exchange.

Finding an open-source alternative to Ghost

Finding an open-source alternative to Ghost. Have I mentioned lately just how pathetic a software company Symantec is? Norton Utilities is adequate, don’t get me wrong. But I don’t think I’d put Norton AntiVirus on any computer that I wanted to work right. I’d give you my opinion of McAfee’s product, but that’s a violation of the license agreement, so I’ll give you my opinion of the company instead. They’d rather spend their time and money and energy keeping you from talking about their products than they would making them worth buying.
So, anyway. Since Symantec is making my life difficult, why do we keep rewarding them by buying Ghost licenses over and over again?

Knowing that the Unix command dd if=/dev/hda of=[filename] makes a bit-for-bit copy of a hard drive, I sought to utilize the Linux kernel and dd as an alternative. Pipe it through bzip2 and it’d be great, right?

Uh, no. I imaged a 1.6-gig HD that had about 400 MB in use. About an hour later, I had a 900 MB disk image. This is bad. Very bad. Ghost would have given me a 250-300 MB image in 15 minutes.

But then I stumbled across PartImage, which does an intelligent, files-only disk image like Ghost does. It’s fast, it’s small, it works. NTFS support is experimental, but as long as you defragment your drive before you try to make an image, it seems to do fine.

However, it doesn’t do a full disk clone like Ghost does. Not yet, at least. Not on its own, at least. But this is Unix. Where there’s a will, there are 47 ways.

First, dump your partition table: sfdisk -d /dev/hda > table

Next, get your MBR: dd if=/dev/hda of=mbr bs=512 count=1

Yes, Eagle Eye, dd does grab your partition table–it’s in that first 512-byte sector. But restoring the table with dd will only get your primary partition(s). It won’t get your extended partitions, so that’s why sfdisk is necessary.

Now that we’ve got that detail out of the way, you can use PartImage to create images of all your disk partitions. It’s menu driven like Ghost. It’s text mode and not graphics-mode, so it’s not as pretty, but it’s also a fraction of the size.

Got your files made? Great. Now, to make the clone, you reverse it.

Write out the MBR: dd if=mbr of=/dev/hda bs=512 count=1

Re-create your partition layout: sfdisk /dev/hda < table

Then restore your partitions, one at a time, using PartImage either in interactive mode or with command-line switches.

It's a lot to remember, so the best bet would be to dump the images plus these two small files to a CD, make a Linux boot floppy containing dd, sfdisk, and partimage, and write a shell script that does it all. Then you can think about getting fancy and making a bootable CD that holds all of it and restores a system lickety-split.
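Here's a sketch of what that restore script might look like. The image filenames and partimage's restore syntax are assumptions–check them against your own setup–and the run function just prints each step so you can eyeball the sequence before letting it loose on a real disk (drop the echo to run it for real):

```shell
#!/bin/sh
# Restore-script sketch. DISK, the filenames, and the partimage
# arguments are assumptions; verify before trusting a real machine to it.
DISK=/dev/hda

# Print each step instead of running it; remove the echo to go live.
run() {
    echo "+ $*"
}

run "dd if=mbr of=$DISK bs=512 count=1"            # put the MBR back
run "sfdisk $DISK < table"                         # recreate the partition layout
run "partimage restore ${DISK}1 hda1.partimg.000"  # restore the first partition
```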

A lot of trouble? Ugh. Yeah. Worth it? Probably. Ghost licenses aren't cheap, and PartImage has the potential to be a whole lot quicker, since it's built on a better foundation. Today's PCs are extremely powerful, and DOS has been underutilizing PCs' power since the introduction of the PC/AT in 1984. Linux will very happily scale up to whatever amount of memory and CPU power your PC has under the hood, making compression and decompression go faster. And if you do a little tweaking with hdparm before creating and before restoring (again, a good job for a shell script), you'll get far better disk throughput than DOS could ever give you. On these P3-866s, I found PartImage was a good 20-60 MB/minute faster than Ghost.

So this is not only faster, it also frees you from the difficulty of keeping track of Ghost licenses, which is a hidden administrative expense. With Linux and PartImage and the associated tools, you're free to use them as you like. The only question anyone will ask is, "How'd you do that?"

That's not to say I have any objection to paying for a good product, but when you can't even buy a site license to escape the paperwork, it gets ridiculous. I suspect some companies just count their PCs and buy that many Ghost licenses once a year in order to be rid of the administrative overhead.

So I think it's more than worth it to figure out how to effectively do this job with open-source tools.

Of course I've left some questions. How do you make Linux boot floppies? How do you make Linux CDs? The PartImage site has images of bootdisks and boot CDs, but they don't have everything you need. Notably, sfdisk is missing from those images. And obviously you'd have to write your shell scripts and add those yourself.

I'll let you know when I figure it out. I'm pretty darn close.

Minesweeper is murder.


Minesweeper is murder. An activist group is asserting that the Windows game Minesweeper is disrespectful of victims of land mines and should be removed and replaced with a game about flowers. I have no idea if these guys are serious or not. I never liked the game anyway and just always wanted an excuse to say “Minesweeper is murder.” So now I’ve said it three times. I’m happy.


One way to defeat spammers

Ever since Brightmail closed up their free filtering service, I’ve been thinking a lot more about spam because I’ve been getting a lot more. I know where these losers are getting my e-mail address. It’s right here on my Web page. But I need to post that so people can contact me. Fortunately, I found a trick. Look at this:
dfarq@swbell.net

That’s just an e-mail link, right? It works just like any other, right? Well, here’s the HTML code for that:

mailto&#58;dfarq&#64;swbell&#46;net

See what I did? I obscured the @ sign with an ASCII code (64), along with the dot (46) and a couple of other characters like the colon (58). Most automated e-mail address harvesters don’t decode the HTML, so their search routines, which look for things like @ signs and dot-somethings, will blow right past that.

So if you run a site, obscure your e-mail address. If you don’t remember your ASCII codes, hopefully you’ve still got QBasic on one of your machines. In QBasic, the command PRINT ASC(“A”) will give you the ASCII code for the letter A. Substitute any character you like. Or just remember that capital A is 65, B is 66, and so on, and lowercase a starts at 97.
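If you’d rather not look anything up, here’s a little shell sketch that does the conversion for you. The function name is my own invention; it leans on od to get the decimal codes:

```shell
#!/bin/sh
# Hypothetical helper: print an e-mail address with every character
# replaced by its decimal HTML entity (&#NN;), so address harvesters
# grepping for @ signs come up empty.
obscure() {
    printf '%s' "$1" | od -An -tu1 | tr -s ' ' '\n' | sed '/^$/d' |
    while read code; do
        printf '&#%s;' "$code"
    done
    echo
}

obscure 'a@b.c'    # prints &#97;&#64;&#98;&#46;&#99;
```

Paste the output into your page’s mailto link and the browser renders it normally while the scrapers see gibberish.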

When a Web site asks you for an e-mail address, you can see if it’ll let you obscure parts of it. Unfortunately, my forums flag illegal characters, but I may be able to modify that. Some Web sites aren’t that smart.

Obviously this trick won’t work in e-mail, unless you always send your mail in HTML format, which I (along with about half the world) really wish you wouldn’t–it’s annoying. And even if you obscure the mail you send, if I copy and paste your mail to my site, it’ll go up there unobscured. So this advice is mostly for webmasters.

Anyway… On to other things.

We’ve moved, if you haven’t noticed. These pages should be at least a little bit faster. The forums will be several times faster. And the forums are goofy. I haven’t figured out exactly why, but posts are missing and user files are acting up. If you’re having problems (Steve DeLassus just told me he can’t post because it tells him his .dat file can’t be accessed), go ahead and re-register. If you want your post count raised to its previous level, just let me know. I can change that. (Hmm, I wonder if Gatermann would notice if I set his post count to a negative number…?) I’d have preferred to move everything intact, of course.

Anyway. Go play in the forums. See what breaks. If I don’t know it’s broke, I sure can’t fix it. (I may not be able to if I do know, but hey, I can give it my best shot.)

Update: It’s 5:45 in the p.m., and you’re watching… Wait. That’s something else. The forums seem to be working properly now. Lack of uniformity between Linux distributions bites me again… It wasn’t the location of the files YaBB was objecting to, nor was it permissions. It was ownership. Under Mandrake, Apache runs as a user named “apache” and thus files created by CGI scripts like YaBB are owned by “apache.” Under TurboLinux, Apache runs as user “nobody,” and thus files created by CGIs are owned by “nobody.” And when you just tar up your Web site and move it to a new box like I did, those files remain owned by their old owners. Since Linux assumes you know what you’re doing, it happily handed those files over to a nonexistent user. So when YaBB came knocking, Unix security kicked in and said, “Hey, nobody, you don’t own these files,” hence those error 103s everyone was getting. (The fix, naturally, was a recursive chown to the new box’s Apache user.)

How to get mod_gzip working on your Linux/Apache server

My research yesterday found that Mandrake, in an effort to get an edge on performance, used a bunch of controversial Apache patches that originated at SGI. The enhancements didn’t work on very many Unixes (presumably they were tested on Linux and Irix) and were rejected by the Apache group. SGI has since axed the project, and it appears that only performance-oriented Mandrake is using them.
I don’t have any problem with that, of course, except that Mod_Gzip seems to be incompatible with these patches. And Mod_Gzip has a lot of appeal to people like me–what it does is intercept Apache requests, check for HTTP 1.1 compliance, then compress content for sending to browsers that can handle compressed data (which includes just about every browser made since 1999). Gzip generally compresses HTML by about 80 percent–down to roughly a fifth of its original size–so suddenly a DSL line has several times its old effective bandwidth for HTML.
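You can see the ratio for yourself by gzipping a chunk of HTML at the command line. This sketch uses a synthetic, repetitive page, so it compresses even better than typical HTML; real pages generally land somewhere around that 80 percent mark:

```shell
# Generate roughly 10 KB of table-heavy HTML, then compare
# raw size against gzipped size.
for i in $(seq 1 200); do
  echo '<tr><td class="entry">row data goes here</td></tr>'
done > /tmp/sample.html
gzip -c /tmp/sample.html > /tmp/sample.html.gz
wc -c < /tmp/sample.html      # raw size in bytes
wc -c < /tmp/sample.html.gz   # compressed size: a small fraction of the above
```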

Well, trying to make all of this work by recompiling Apache had no appeal to me (I didn’t install any compilers on my server), so I went looking through my pile-o’-CDs for something less exotic. But I couldn’t find a recent non-Mandrake distro, other than TurboLinux 6.0.2. So I dropped it in, and now I remember why I like Turbo. It’s a no-frills server-oriented distro. Want to make an old machine with a smallish drive into a firewall? The firewall installation goes in 98 megs. (Yes, there are single-floppy firewalls, but TurboLinux will be more versatile if your hardware is up to its requirements.)

So I installed Apache and all the other webserver components, along with mtools and Samba for convenience (I’m behind a firewall so only Apache is exposed to the world). Total footprint: 300 megs. So I’ve got tons of room to grow on my $50 20-gig HD.

Even better, I tested Apache with the command lynx http://127.0.0.1 and I saw the Apache demo page, so I knew it was working. Very nice. Installation time: 10 minutes. Then I tarred up my site, transferred it over via HTTP, untarred it, made a couple of changes to the Apache configuration file, and was up and going, sort of.
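The move itself is just a couple of tar commands. Here’s a sketch against scratch directories (on real boxes you’d point these at your document root, say /home/httpd/html, and copy the tarball between machines however you like; those paths are assumptions):

```shell
# Old box: bundle the site. -C keeps the archive paths relative,
# so it unpacks cleanly anywhere.
mkdir -p /tmp/oldroot /tmp/newroot
echo '<html>hello</html>' > /tmp/oldroot/index.html
tar czf /tmp/site.tar.gz -C /tmp/oldroot .

# New box: unpack into the new document root.
tar xzf /tmp/site.tar.gz -C /tmp/newroot
```

Just remember that ownership travels with the tarball, which is exactly how the “nobody” mess above happened.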

I still like Mandrake for workstations, but I think Turbo is going to get the nod the next few times I need to make Linux servers. I can much more quickly and easily tailor Turbo to my precise requirements.

Now, speaking of Mod_Gzip… My biggest complaint about Linux is the “you figure it out” attitude of a lot of the documentation out there, and Mod_Gzip may be the worst I’ve ever seen. The program includes no documentation. If you dig on the Web site, you find this.

Sounds easy, right? Well, except that’s not all you have to do. Dig around some more, and you find the directives to turn on Mod_Gzip:

# [ mod_gzip sample configuration ]
mod_gzip_on Yes
mod_gzip_item_include file .htm$
mod_gzip_item_include file .html$
mod_gzip_item_include mime text/.*
mod_gzip_item_include mime httpd/unix-directory
mod_gzip_dechunk yes
mod_gzip_temp_dir /tmp
mod_gzip_keep_workfiles No
# [End of mod_gzip sample config]

Then, according to the documentation, you restart Apache. When you do, Apache bombs out with a nice, pleasant error message–“What’s this mod_gzip_on business? I don’t know what that means!” Now your server’s down for the count.

After a few hours of messing around, I figured out you’ve gotta add another line, at the end of the AddModule section of httpd.conf:

AddModule mod_gzip.c
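Before restarting, it’s cheap insurance to grep httpd.conf and confirm both mod_gzip lines made it in. A sketch against a scratch file (the LoadModule line follows the usual Apache 1.3 convention, but treat the module path as an assumption; on a real box, grep your actual httpd.conf instead):

```shell
# The two lines mod_gzip needs, written to a scratch file for this demo.
cat > /tmp/httpd.conf.demo <<'EOF'
LoadModule gzip_module modules/mod_gzip.so
AddModule mod_gzip.c
EOF
# Both lines present? The count should be 2.
grep -c 'mod_gzip' /tmp/httpd.conf.demo
```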

After adding that line, I restarted Apache, and it didn’t complain. But I still didn’t know if Mod_Gzip was actually doing anything because the status URLs didn’t work. Finally I added the directive mod_gzip_keep_workfiles yes to httpd.conf and watched the contents of /tmp while I accessed the page. Well, now something was dumping files there. The timestamps matched entries in /var/log/httpd/access_log, so I at least had circumstantial evidence that Mod_Gzip was running.
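Watching /tmp is circumstantial, as I said. A more direct check is to send the server a request that advertises gzip support and look for a Content-Encoding: gzip response header. This builds the raw request; firing it at the server with nc is the part I can’t demonstrate here (and strictly speaking HTTP wants CRLF line endings, though Apache tolerates bare LF):

```shell
# A minimal HTTP/1.1 request telling the server we accept gzip.
cat > /tmp/req.txt <<'EOF'
GET / HTTP/1.1
Host: 127.0.0.1
Accept-Encoding: gzip
Connection: close

EOF
# On the server itself:
# nc 127.0.0.1 80 < /tmp/req.txt | head -20
# If mod_gzip is working, the headers include "Content-Encoding: gzip".
```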
