Setting up Freesco for port forwarding

It’s a little late, but here’s how Gatermann and I got a Web server running behind a Freesco-based router. Freesco, despite the name, is a micro-distribution of Linux (based on the 2.0.x kernel) that offers firewalling, NAT, caching DNS, port forwarding, a lightweight Web server, and print services on a single floppy. Requirements are minimal; it’d run on a 386 with 8 megs of RAM, a floppy drive, and a pair of NE2000 NICs. For performance and ease of setup, I recommend a P75 (or faster, but a P75’s overkill; the main reason to use it is to get PCI) with a pair of PCI NICs and 8 megs.
What NICs do I recommend? Avoid the new Netgear FA311. The older FA310 worked fabulously, but Freesco doesn’t provide a module for the FA311’s NatSemi chipset, at least not yet. (The source code for a module is available at scyld.com and it’s compatible with the 2.0.x kernel, but compiling a kernel module isn’t a trivial operation for most of Freesco’s audience.) I’d probably go with a Realtek 8139-based card like a D-Link DFE-530TX+, a recent 3Com PCI card, or a PCI NE2000 clone. There’s a modules archive you can download that supports most other common NICs. A pair of D-Links, a P75 board, a floppy drive, and the Freesco disk ought to give you nearly plug-and-play operation.

Enough of that. Here are the answers to the questions Freesco asked, in order.

Boot off the floppy. When it asks what you want to make with it, select Ethernet router. The hostname doesn’t matter. Accept the default for the domain name, unless you’ve registered a domain for your LAN.

Don’t detect modems. Select two network cards. If you’re using PCI cards, answer 0 to the next four questions (IRQ and I/O address for each card). If you’re using ISA cards, enter the addresses and IRQs the cards use. DHCP? Depends on your ISP.

The first card’s name is eth0. (This is the card for your cable/DSL modem.) Don’t use DHCP logging. Don’t update DNS by DHCP. The second card is eth1. Give it an IP address (10.x.x.x is fine, and it’s Freesco’s default; normally I use the 192.168.1.x network and put my router on 192.168.1.1). The network mask will almost always be 255.255.255.0. I don’t configure for DHCP, so I don’t give it an IP range. If you want one, tell it the range of addresses you want to reserve. The fewer the better, for memory purposes, especially if you’ve only got 8 MB of RAM in the box.

Caching DNS? Answer S (secure). Don’t log.

Enable DHCP? Depends. If you don’t want to configure your LAN manually, DHCP is nice. If your LAN is already configured, DHCP is probably more trouble than it’s worth.

Public HTTP server. Answer Y. The default is S. Port 80. (You might be able to get away with answering N here and save a little memory. DO NOT answer S; you’ll never forward port 80 if you do.)

Time server via HTTP? No.

Print server. No.

Telnet server. No.

Screensaver/spindown? 5 min is fine.

Swap file: 0 if you have 8 MB or more. I suppose you could run Freesco on some tiny machines if you put in a small hard drive and enabled the swap file, but as cheap as a P75 with a pair of 4-meg SIMMs is these days, I wouldn’t bother.

Extra modules/programs? No.

Log: take defaults.

Host gateway: depends on your ISP. Check one of your other PCs and use it.

Primary/secondary DNS. Use your ISP’s. Proxy, probably none. Check your ISP.

Export services? YES. This is the magic forwarding formula.

Now, assuming your web server is on 10.42.42.3, you’d use this line in config:

t,80,10.42.42.3/80

If you want to export other services, like, say, IMAP on port 143, add additional lines, subbing in the appropriate port and IP address. (HTTP is port 80.)
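For example, to forward IMAP to the same box (sticking with the example 10.42.42.3 address from above; if your mail server lives elsewhere, use its address instead), you’d add a line like this:

t,143,10.42.42.3/143

One line per forwarded service, each following the same port,address/port pattern.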

Pick a root password and web admin password, save configuration and reboot. You’re up and going.

Now, to configure your Windows boxes to get their Internet connection through your lovely what-was-old-is-new-again Freesco router, just open your TCP/IP settings, give it an IP address on the same subnet as your Freesco router if it doesn’t already have one, and set your gateway and DNS to the address you gave your Freesco router.

Voila. Configure your system’s BIOS for keyboardless operation if it has such an option, then take the keyboard and monitor away, write-protect the floppy and make a backup of it just in case (or burn it to a bootable CD if the machine can boot off CD and you have an old drive to put in it), stick the box in a corner somewhere, and forget about it. If you have a power failure, it’ll reboot and happily start itself up again. As for stability, I find Freesco, in combination with decent hardware, more stable than the hardware routers that are popular these days. Since it has a caching DNS, it’ll usually give you better performance too. And since you can probably build one with parts you have lying around, it’s cheaper.

How to get mod_gzip working on your Linux/Apache server

My research yesterday found that Mandrake, in an effort to get an edge on performance, used a bunch of controversial Apache patches that originated at SGI. The enhancements didn’t work on very many Unixes (presumably they were tested on Linux and Irix) and were rejected by the Apache group. SGI has since axed the project, and it appears that only performance-oriented Mandrake is using them.
I don’t have any problem with that, of course, except that Mod_Gzip seems to be incompatible with these patches. And Mod_Gzip has a lot of appeal to people like me: it intercepts Apache requests, checks for HTTP 1.1 compliance, then compresses content for browsers that can handle compressed data (which includes just about every browser made since 1999). Gzip generally compresses HTML data by about 80 percent, so suddenly a DSL line has a whole lot more effective bandwidth: three times as much, at least.

Well, trying to make all of this work by recompiling Apache had no appeal to me (I didn’t install any compilers on my server), so I went looking through my pile-o’-CDs for something less exotic. But I couldn’t find a recent non-Mandrake distro, other than TurboLinux 6.0.2. So I dropped it in, and now I remember why I like Turbo. It’s a no-frills server-oriented distro. Want to make an old machine with a smallish drive into a firewall? The firewall installation goes in 98 megs. (Yes, there are single-floppy firewalls, but TurboLinux will be more versatile if you can meet its requirements.)

So I installed Apache and all the other webserver components, along with mtools and Samba for convenience (I’m behind a firewall so only Apache is exposed to the world). Total footprint: 300 megs. So I’ve got tons of room to grow on my $50 20-gig HD.

Even better, I tested Apache with the command lynx http://127.0.0.1 and I saw the Apache demo page, so I knew it was working. Very nice. Installation time: 10 minutes. Then I tarred up my site, transferred it over via HTTP, untarred it, made a couple of changes to the Apache configuration file, and was up and going, sort of.
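For what it’s worth, moving a site that way boils down to a couple of commands (the filename and DocumentRoot path are just examples; GNU tar strips the leading slash when it creates the archive, so extracting with -C / drops everything back where it came from):

tar -cf site.tar /var/www/html
tar -xf site.tar -C /

Run the first on the old box, copy site.tar over however you like, and run the second on the new one.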

I still like Mandrake for workstations, but I think Turbo is going to get the nod the next few times I need to make Linux servers. I can much more quickly and easily tailor Turbo to my precise requirements.

Now, speaking of Mod_Gzip… My biggest complaint about Linux is the “you figure it out” attitude of a lot of the documentation out there, and Mod_Gzip may be the worst offender I’ve ever seen. The program includes no documentation. If you dig around on the Web site, you find this.

Sounds easy, right? Well, except that’s not all you have to do. Dig around some more, and you find the directives to turn on Mod_Gzip:

# [ mod_gzip sample configuration ]
mod_gzip_on Yes
mod_gzip_item_include file .htm$
mod_gzip_item_include file .html$
mod_gzip_item_include mime text/.*
mod_gzip_item_include mime httpd/unix-directory
mod_gzip_dechunk yes
mod_gzip_temp_dir /tmp
mod_gzip_keep_workfiles No
# [End of mod_gzip sample config]

Then, according to the documentation, you restart Apache. When you do, Apache bombs out with a nice, pleasant error message–“What’s this mod_gzip_on business? I don’t know what that means!” Now your server’s down for the count.

After a few hours of messing around, I figured out you’ve gotta add another line, at the end of the AddModule section of httpd.conf:

AddModule mod_gzip.c

After adding that line, I restarted Apache, and it didn’t complain. But I still didn’t know if Mod_Gzip was actually doing anything because the status URLs didn’t work. Finally I added the directive mod_gzip_keep_workfiles yes to httpd.conf and watched the contents of /tmp while I accessed the page. Well, now something was dumping files there. The timestamps matched entries in /var/log/httpd/access_log, so I at least had circumstantial evidence that Mod_Gzip was running.
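A less roundabout way to check (assuming you have wget handy; any client that sends an Accept-Encoding: gzip header would do) is to request a page and watch the response headers:

wget -S --header="Accept-Encoding: gzip" -O /dev/null http://127.0.0.1/

If Mod_Gzip is doing its job, you’ll see a Content-Encoding: gzip line in the headers.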


How I set up Greymatter for Weblogging

First things first: I’m sure everyone’s asking how much hardware you need. I’m using a Pentium-120 with 64 megs of RAM, and it’s plenty fast most of the time. It takes a little while to regenerate all the templates, but other than that it’s mostly sitting idle. Any Pentium-class machine should be plenty. I’d be hesitant about using a 486 because the templates will take an awfully long time to rebuild. Remember, Greymatter’s written in Perl, and Perl’s an interpreted language. Interpreters are slow for the same reason emulators are slow: the translation happens in real time.
But Greymatter offers advantages. You can control your destiny. You have total control over your site–it’s running on your Linux box. And you’re free from FrontPage’s tyranny. Did I hear cheers? Most importantly for me, I set the clock. I can set the clock ahead a couple of hours, make my post at 10 p.m., and it’ll be dated the next day. That can only mean… The return of the infamous Farquhar Time Machine. I can start sleeping in again! Or go to work earlier… Hey, I can start sleeping in again!

Anyway, I had the Pentium-120 already configured with Mandrake 7.2, but I discovered Mandrake 7.2 in high security mode doesn’t seem to allow Web traffic from the outside world. So I installed Mandrake 7.2 again in low-security mode. I used a server installation. The only things I really cared about were Apache and Perl, but I didn’t feel like de-selecting everything. Both will be in there by default. I think Perl’s part of the Development group during installation. I’m not sure what group Apache is in. I don’t recommend running XFree86 on your server. Those memory resources are better used for server purposes. Oh, and one last thing: Don’t use DHCP. Give your Web server a local, static IP address.

Once I was up and running, Apache wasn’t running by default, so I dinked around with a cp /etc/rc.d/init.d/httpd /etc/rc.d/rc3.d/S45httpd so that Apache would start on boot. Then I started Apache by executing /etc/rc.d/rc3.d/S45httpd start. Of course there are plenty of other ways to accomplish the same thing. It was close to midnight and I just wanted the thing open to the world at that point.
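If you’d rather not shuffle the runlevel links by hand, the more conventional route on Red Hat-derived distros like Mandrake (assuming the chkconfig utility is installed, which it normally is) is:

chkconfig httpd on
/etc/rc.d/init.d/httpd start

Either way, Apache comes up at boot from then on.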

Then I pointed my Web browser at the server’s address, and my embryonic Weblog came up.

It won’t happen that way for you, because I already had Greymatter installed and configured before I did all that. In other words, I did things bass-ackwards. You should do it differently. Get Apache working right first. It’s less frustrating that way.

With Apache installed and running, point a Web browser at it. You should see some kind of Apache welcome screen–it’ll vary based on your Linux distro, but it’ll basically be some kind of show-off screen. You see it? Great. You don’t? Get Apache working. How? I dunno. Make sure it’s running, first of all. Type the command pidof httpd. You should get a couple of numbers. Maybe a lot of numbers. If all you get is a blank line, then Apache’s not running. If it’s running but not responding, you’ve probably got a problem with the configuration file. The default configuration file for Apache, unlike the default configuration of a lot of programs, does work reasonably well. The defaults will certainly do for a Weblog. Start with the default config, get it working, then get fancy later.
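For reference, healthy output looks something like this (the PIDs themselves are made up and will differ on every box):

pidof httpd
10243 10242 10241 10240 10239

A blank line means Apache isn’t running; on Mandrake, /etc/rc.d/init.d/httpd start brings it up so you can test again.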

Working? Great. Open up port 80 on your DSL router and point it to your server’s address. Don’t expose any other ports. This improves security immensely. Now go to www.grc.com and run Shields Up!, then Probe My Ports. Port 80 should be open. If it’s not, either your Linux box is too secure (I wish I could offer some advice there but I don’t know much about un-securing a Linux box) or your router’s not forwarding the port right.

By default, in Mandrake at least, Apache puts its HTML files in /var/www. So, first, clear out /var/www/html. Next, I put all of the Greymatter files in /var/www/cgi-bin. Then I created directories named archives in both /var/www/cgi-bin and /var/www/html. The documentation is pretty good about which files need permissions of 755, which need 777 (yuck!), and which need more restrictive settings like 644 or 666.
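Just to illustrate the sort of thing the docs walk you through (treat this as a sketch; the authoritative file list is in the Greymatter documentation):

chmod 755 /var/www/cgi-bin/gm*.cgi
chmod 777 /var/www/cgi-bin/archives /var/www/html/archives

The CGI scripts have to be executable by Apache, and the archives directories have to be writable by it, which is where the 777 comes from.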

As an aside, the archives directory being chmodded to 777 makes me nervous. That means that if I install Greymatter to a server that shares space with someone else, the entire world can see that directory. They can’t manipulate anything inside there as long as the files inside have more restrictive permissions, but I always cringe every time I see anything with 777 permissions. I knew people in college who’d just chmod everything to 777 because then it meant everything just worked all the time. Unfortunately, anyone who had telnet access to the machine could then go into that directory and change anything. I’m not as concerned about that, since I don’t share this PC with anyone. But 777 still doesn’t give me warm fuzzies. Unix ain’t Christianity. In Unix, 666 is ok (but 644 is much better), and 777 is a hacker’s delight, and therefore, pure evil.

After you chmod all your files, assuming your server is at 192.168.1.2, go to http://192.168.1.2/cgi-bin/gm.cgi. Greymatter should pop up. Go to the configuration screen and run down the line:

Local log: /var/www/html
Local entries: /var/www/html/archives
Local CGI: /var/www/cgi-bin
Website log path: /
Website entries path: /archives
Website CGI path: /cgi-bin

Set the other stuff the way you want it. Hit Save Configuration, then immediately run Diagnostics and Repair. This will ensure that all the files are where they need to be and the permissions are set correctly. If it can’t find something, do what you have to do to satisfy it.

Now you’re ready to start editing templates and adding entries. You’ll need to exercise your HTML skills for that, or rip off someone’s templates. I didn’t look too hard, but I’m sure there are people out there offering Greymatter templates. If you have to, use an HTML generator to draw what you want, then take the code and put it in the template. I know HTML, so I coded mine by hand. That’s why they’re still sparse. The basic layout is there; I need to flesh it out. And I haven’t entered every template yet myself.

Now, for backups and stats… Backups are easy. I use the command tar -c /var/www >/home/dave/backup.tar. It only takes a second. You can compress the tar file and throw it on a floppy with the mcopy command. Or if Samba’s also configured and running, back up to a network-accessible directory and pull the file over to another machine.
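If you want the backup on a floppy, it’s still only a few commands (paths and filenames are examples; mcopy comes from the mtools package):

tar -c /var/www > /home/dave/backup.tar
gzip -9 /home/dave/backup.tar
mcopy /home/dave/backup.tar.gz a:

A gzipped tarball of a mostly-text site is usually small enough to fit.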

For stats, I use LiveWebStats, but I don’t like it. Any Apache log analyzer will work.

There’s one other issue with Greymatter. It sends passwords in plaintext, and thus they’ll show up in your logs. So don’t make your stats public, at least not your referrers. If you’ll have remote editors, you need to consider that vulnerability: an editor’s password can potentially be intercepted.

Setting up Greymatter is a lot of work, but it’s a one-shot deal. You make your design, then it’s content-driven. Change your design, and it applies to the whole site. Nice. And when you publish, you only publish your new stuff.

But overall, I like Greymatter an awful lot.

A free memory tester and a Linux tip

I lost my notes for today somehow, and I’ve been home a grand total of 14 hours the past 48 hours (I think), so you’ll have to excuse this quickie.

Free memory tester. I found this over the weekend:

www.memtest86.com

It’s a memory test disk. Self-booting, about 74K in memory, builds from DOS, Windows, or Linux (and possibly others too). I use and recommend RAM Stress Test, by Ultra-X Inc. (www.uxd.com), but this seems nearly as good and it’s free. If you’ve got frequent bluescreens, download this and try it on your PC. A lot of problems are caused by bad memory, and the power-on memory test usually won’t find it. Neither will most DOS-based memory utilities.

MemTest is still no substitute for buying brand-name memory, though I’d never let commodity memory sit on the same table with my hardware without testing it first. About 1 in 1,000 brand-name sticks are bad, as opposed to about 1 in 12 commodity sticks, in my extensive experience. One of the first things I do when faced with an unstable system is test the memory overnight, just in case.

Linux (and Unix) tip of the day. If you vaguely remember a command but can’t completely recall it, type the part you remember, then hit tab. A list of possibilities will appear. Hopefully the command you’re looking for is among them.

And if any of the possibilities sound interesting, type man followed by the command’s name. The online documentation will come up and explain its usage.
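A quick illustration, assuming bash and a fairly ordinary install (what you actually see depends on what’s on the box): typing mkf and hitting Tab twice might list mkfifo, mkfs, mkfs.ext2, and mkfs.minix. Then

man mkfs

brings up the documentation for the one that looks right.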

Don’t let anyone fool you. You never master this OS. You just learn how to find what you need to get a job done quickly. And hopefully you develop a long memory.

Outta here. And if you’ve mailed me over the last couple of days, my apologies. I’ll get back to you tonight after work.

Playing with Squid

Mandrake Squid. To turn a Mandrake server install into a Squid server, here’s all you have to do. Issue the command squid -NCd1 to build the cache directory structure. Then issue the command mv /etc/rc.d/rc3.d/K25squid /etc/rc.d/rc3.d/S25squid so that Squid runs at startup (assuming your server’s set to run in text mode, as servers should be; why waste all that memory and those CPU cycles keeping a GUI running when the resources could be dedicated to server tasks?). If you boot into GUI mode automatically (maybe you want to run Squid on your workstation), add the command mv /etc/rc.d/rc5.d/K25squid /etc/rc.d/rc5.d/S25squid to the mix.

Now to start Squid, you can do one of two things. You can reboot, which is the Windows way of doing things, or you can just start the daemon, which is the Unix way of doing things. I like the Unix way. Run Squid’s startup script manually by issuing the command /etc/rc.d/rc3.d/S25squid restart. (There are other ways to do it too of course but I like this way.)
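Pulling the whole Mandrake recipe together, it comes down to something like this (the rc3.d path assumes a text-mode server; add the rc5.d rename too if the box boots into a GUI):

squid -NCd1
mv /etc/rc.d/rc3.d/K25squid /etc/rc.d/rc3.d/S25squid
/etc/rc.d/rc3.d/S25squid restart

The first line builds the cache directories, the second makes Squid come up at boot, and the third starts it right now without rebooting.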

Single-floppy Squid. And just in case you haven’t seen everything yet, you can get a single-floppy FreeBSD-based Squid server. Head over to www.ryuchi.org/~ilovefd/1fdsquid/1fdsquidus.shtml for the goods. It uses the system’s hard drive for storage. You want a semi-powerful CPU (a Pentium-133 is sufficient for a small workgroup) and a fair bit of memory (I’m thinking 64 megs is the minimum). That’s less power than you need for a Windows workstation these days, but considering you can do a light-duty Unix-based fileserver with a 33 MHz 486, it’s a comparatively powerful machine.

Open source and innovation

Innovation. And of course I can’t let this slip by. Microsoft is trying to say that open source stifles innovation. Steve DeLassus and I have been talking about this (he was the one who originally pointed it out to me), and I think he and I are in agreement that open source by nature isn’t inherently innovative. It may improve on another idea or add features, but most open source projects (and certainly the most successful ones) are clones of proprietary software. Then again, so was a lot of Microsoft software, starting out. Pot, meet Kettle. Kettle, meet Pot.

But although the programs themselves aren’t always innovative, I think the open source atmosphere can stimulate innovation. Huh? Bear with me. Open source gets you in closer contact with computer internals than a Microsoft or Apple OS generally will. That gets you thinking more about what’s possible and what’s not–the idea of what’s possible starts to have more to do with the hardware than it does with what people have tried before. That stimulates creativity, which in turn stimulates innovation.

Need an example? A calculator company called Busicom accidentally invented the personal computer. I’ve heard several versions of the story, but the gist of it was, Busicom wanted to create a programmable calculator. In the process of creating this device, they commissioned the Intel 4004 CPU, the first chip of its kind. There are conflicting accounts as to whether the resulting product even used the Intel 4004, but that’s immaterial–this calculator’s other innovation was its inclusion of a tape drive.

Intel bought back the rights and marketed the 4004 on its own and became a success story, of course. Meanwhile, people started using their Busicom calculators as inexpensive computers–the built-in tape drive worked as well for data storage as it did for program storage. This was in 1970-1971, several years before the Altair and other kit computers.

Four years later, Busicom was out of business but the revolution was under way, all because some people–both engineers at Intel and end-users who bought the calculators–looked beyond the device’s intended use and saw something more.

Open source software frequently forces you to do the same thing, or it at least encourages it. This fuels innovation, and thus should be encouraged, if anything.

Last week’s flood. No, I haven’t answered all the mail about it. I’m going to give it another day before I deal with it, because dealing with a ton of mail is frankly harder than just writing content from scratch. I don’t mind occasionally, but I’d rather wait until a discussion reaches critical mass, you know?

One reader wrote in asking why foreigners care about U.S. gun laws. I don’t really have an answer to that question. I find it very interesting that no American has yet voiced any strong objections to anything I said–I even had a lifelong liberal Democrat write in, and while she stayed to my left, she advocated enforcement of the laws we already have on the books, rather than an outright ban. She’d force more safety classes, but I don’t have any real objections to that notion.

An interesting upgrade approach. The Register reported on a new upgrade board, about to be released by Hypertec, that plugs into any PC with an available ISA slot and upgrades the CPU, video, and sound subsystems. I’m assuming it also replaces the memory subsystem, since pulling system memory through the ISA bus would be pitifully slow.

The solution will be more expensive than a motherboard swap, but for a corporation that has a wide variety of obsolescent PCs, it might be a good solution. First, it’s cheaper than outright replacement. Second, it creates common ground where there was none: two upgraded systems would presumably be able to use the same Ghost/DriveImage/Linux DD image, lowering administrative costs and, consequently, TCO. Third, corporations are frequently more willing to upgrade, rather than replace, existing systems even when it doesn’t make economic sense to do so (that’s corporate management for you).

Depending on the chipset it uses and the expected timeframe, I may be inclined to recommend these for the company I work for. We’ve got anywhere from 30-100 systems that aren’t capable of running Office 2000 for whatever reason. Some of them are just old Micron Client Pros, others are Micron Millenias that were configured by idiots (a local clone shop that we used to contract with way back when–I’ve never seen anyone configure NT in a more nonsensical manner), others are clones built by idiots, and others are well-built clones that just happen to be far too old to upgrade economically.

Many of these machines can be upgraded–the Microns are all ATX, so an Intel motherboard and a low-end CPU would be acceptable. Most of the others are ATs and Socket 7-based. An upgrade CPU would likely work, but it would be pricey, compatibility is always a dicey issue, and most businesses are still stuck in the Intel-only mindset. (Better not tell them Macintoshes don’t use Intel CPUs–wait… Someone PLEASE tell them Macs don’t use Intel CPUs! Yeah, I’ll be an Intel lackey in exchange for never having to troubleshoot an extension conflict on a Mac again. But that’s another story.) They all need memory upgrades, and buying SIMMs in this day and age is a sucker bet. The average price of the upgrades would be $550, but we’d have a hodgepodge of systems. If we can get common ground and two years of useful life for $700 from Hypertec, upper management would probably approve it.

Early experiments in building gateways

Gateways. I worked with Gatermann last night after I got back from church (three Macs and an NT server died yesterday–I needed it last night) on trying to get his Linux gateway running under FloppyFW. We were finally able to get it working with dual NICs, able to ping both inside and outside his LAN (I finally found an old Pentium-75 board that didn’t have compatibility issues). But we weren’t able to actually get his Web browsers working.

I suspect something about the IP masquerading configuration just isn’t right, but it’s been so long since I wrote one of those by hand (and even then I was really just copycatting an existing configuration) that, since I have working Linux boxes at home, I finally just gave up, downloaded the shell script version of Coyote Linux, and ran it. It’s not foolproof, because you have to know what kernel module your Ethernet cards use (make it easy on yourself: get a pair of Netgear 10/100 cards, which use the Tulip module). That’s a two-edged sword: it makes Coyote a little harder to configure, but it means it’ll work with a much wider variety of cards. If Linux supports a card, so does Coyote, whereas a lot of the other single-floppy distributions just support the three most common types (NE2000, 3Com 3c509, and DEC Tulip). So an old DEC Etherworks3 card will work just fine with Coyote, while getting it to work with some of the others can be a challenge.
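If you’re not sure which module your cards want, one low-tech way to find out (assuming you have another Linux box, or a distribution CD you can boot, with the same card installed) is to look at what that system loaded:

lsmod
dmesg | grep -i eth

lsmod lists the loaded driver modules, and the kernel’s boot messages usually name the driver right next to eth0 and eth1. Whatever shows up there (tulip, ne2k-pci, 3c59x, and so on) is what you tell Coyote to use.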

I’m disappointed that Coyote doesn’t include the option to act as a caching DNS, because you can fit caching DNS on the disk, and it’s based on the Linux Router Project, for which a BIND tarball is certainly available. I’ll have to figure out how to add BIND in and document that, because there’s nothing cooler than a caching nameserver.

I was messing around briefly with PicoBSD, a microdistribution of FreeBSD, but the configuration is just different enough that I wasn’t comfortable with it. FreeBSD would be ideal for applications like this, though, because its networking is slightly faster than Linux’s. But either Linux or FreeBSD will outperform Windows ICS by a wide margin, and the system requirements are far lower: a 386, 8 megs of RAM, a floppy drive, and two NICs. Can’t beat that.

Rarely used trivia department: Using Linux to create disk images. To create an image of a floppy under Unix, use this command: dd if=/dev/fd0 of=filename.img bs=10k. There’s no reason why this command couldn’t also be used to clone other disks, making a single-floppy Linux or FreeBSD distribution an alternative to DriveImage or Ghost, so long as the disks you’re cloning have the same geometry.

Test this before you rely on it, but the command to clone disk-to-disk should be dd if=/dev/hda of=/dev/hdb, the command to clone disk-to-image should be dd if=/dev/hda of=filename.img, and image-to-disk should be dd if=filename.img of=/dev/hda.
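A common variation (not part of the tip above, but worth knowing): pipe dd through gzip so the image takes up less space, which matters more for hard drives than for floppies:

dd if=/dev/hda bs=64k | gzip -c > hda.img.gz
gzip -dc hda.img.gz | dd of=/dev/hda bs=64k

The same geometry caveat applies; dd copies sectors blindly, so the target disk needs to match the source.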

And yesterday. While the computers (and I’ll use that phrase loosely when referring to those Macs) were going down all around me at work, the mail was pouring in. Needless to say, some people agree and others don’t. We’ll revisit it tomorrow. I’ve gotta go to work.

Amiga influence on Linux

Amiga lives! (Well, sort of). When it comes to GUIs, I’m a minimalist. Call me spoiled; the first GUI I used was on a 7.16-MHz machine with a meg of RAM, and it was fast. Sure, it wasn’t long before software bloat set in and I had to add another meg, and then another, but at a time when Windows 3.1 was running like crap on 4 megs and only decently on 8, I had 6 megs on my Amiga and didn’t really know what to do with all of it. So I left 3 megs available to the system, ran a 3-meg ramdisk, and all was well with the world. Until Commodore’s raw dead fish marketing caught up with it and pulled it and the company under.
Under Linux, KDE and GNOME look good, but they run slower than Windows on my PCs. And I like the idea of my P120 being a usable box. I can do that under Linux, but not with KDE as my window manager. There’s IceWM, which is nice and lean, and there’s xFCE, which resembles HP’s implementation of CDE (and also resembles OS/2, bringing back fond memories for me–why is it everything I like is marketed as raw dead fish?), and now, two years after its release, I’ve discovered AmiWM.

AmiWM (http://www.lysator.liu.se/~marcus/amiwm.html) is a clone of the Amiga Workbench, the Amiga’s minimalist GUI. It’s small and fast and reminds me of the good old days when computers were computers, and didn’t try to be CD players, dishwashers, toaster ovens, televisions, and the like. For an aging PC (or for a new one that you want to run as quickly as possible–hey, you must be mildly interested in that, seeing as you’re reading my site and that’s my specialty), this one’s hard to beat.

BIOS tweaking leads to successful Linux install

Thursday, 6/15/00
Power supplies and Linux installs… I swapped out a power supply last night for Steve DeLassus (there’s something mildly amusing about an electrical engineer asking a journalist for help with a power supply issue), and I installed Mandrake 7 on one of my PCs so I could get ready to mess with Apache. It kept dying during install, so I reset the BIOS to defaults, after which it worked fine. Probably memory timing sensitivity, but I didn’t feel like messing with it. Linux is much more sensitive to such things than Windows, which may explain some people’s installation difficulties (I think nothing of messing with my BIOS settings until I get it right, but some people understandably never think to check those). Loading BIOS defaults, or, better yet, safe defaults if available, may tame the beast.

Apache… I’m not going to say I can change the world, but some of the things you can do with Apache are totally out of sight. I can’t wait until I can type well enough again to really start experimenting. I’m no pioneer in doing these things, but if I start explaining how to do them, then I will be. If you think I’m looking at this to be one of the big selling points of the next book, you’re dead on.

Until next week…

Killing a process in Unix

My Linux gateway likes to fall off the Internet occasionally. I think it’s Southwestern Bell’s fault, because it always seems to happen right after it tries to renew its DHCP lease. Rebooting fixes the problem, but I wanted a cleaner way.
Here it is. Do a tail /var/log/messages to get the PID for pumpd. [Or, better, use the command pidof [program name] –DF, 5/25/02] Do a kill -9 [PID] to eliminate the problem process. (This process tends to keep the network from restarting.) Then, do a /etc/rc.d/rc3.d/S10network restart to stop and restart the network. [Better: use /etc/init.d/network restart, which is runlevel independent and works on more than just Red Hat-derived distros. –DF, 5/25/02] Try pinging out just to make sure the Internet’s working again, and bingo. Back in business.
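Strung together, the whole fix looks something like this (pumpd is the DHCP client this particular box runs; substitute whatever yours uses, and put the real PID in place of the bracketed placeholder):

pidof pumpd
kill -9 [PID]
/etc/init.d/network restart
ping -c 3 www.yahoo.com

The ping target is just an example; anything outside your own LAN will do.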

I don’t know that this is the best or most elegant way of doing it, but it works and it’s much faster than waiting for that old 486 clunker to do a warm boot.