Hey hey! Sorcerer Linux works

I’m writing this from my fast and lovely new Linux workstation, compiled from scratch using Sorcerer. I’m a psycho, I know, starting a compile in the morning before leaving for work and letting it run all day, just in hopes of having a slightly faster computer. But it is faster. Compiling XFree86 and KDE sure does take a while though. I let KDE run while I was at work; I got home and found it had compiled successfully, so I fired up the Konqueror Web browser, hoping to see the fastest Web browser in history. It was quick, but didn’t render GIFs. A little hunting turned up why: the build script hadn’t passed QT the -gif option at configure time. I don’t know the legality of a private individual in the United States compiling QT with the option to decode GIFs. Don’t you just love software patents?
If you’re willing to risk being a criminal, or you’re into civil disobedience, or you’ve forked the bucks over to Unisys for the right to decode GIFs, and you’re wanting to give Sorcerer a try, edit /var/lib/sorcery/grimoire/graphics/qt-x11/BUILD and add the option -gif to the ./configure line.
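If you’d rather script that edit than open the file by hand, something along these lines should work. A hedged sketch: sed’s -i option is a GNU-ism, the pattern assumes “./configure ” appears literally in the BUILD file, and I demonstrate it here on a scratch copy rather than the real grimoire path.

```shell
# Demonstrate the edit on a scratch stand-in for the BUILD file; on a real
# Sorcerer box you'd run the same sed against
# /var/lib/sorcery/grimoire/graphics/qt-x11/BUILD.
BUILD=/tmp/qt-build-demo
printf './configure -thread -prefix /usr\n' > $BUILD    # stand-in for the real file
sed -i.bak 's|\./configure |./configure -gif |' $BUILD  # .bak keeps a backup copy
cat $BUILD    # prints: ./configure -gif -thread -prefix /usr
```

Check the result with grep before casting the spell; if the configure invocation is split across lines in your grimoire version, the pattern won’t match and you’re back to editing by hand.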

How do I like it? An awful lot. I may never go back to a standard distribution again. Seriously. And, frankly, the Linux apps are good enough to do just about everything I want or need to do. I need to decide on a mail client, but there are several to choose from.

My general take on Linux hasn’t changed much. Yeah, it takes a long time to learn. A lot of it doesn’t seem intuitive until you’ve been using it for 10 years. But how many Windows tools have you been using for 10 years? Not many because it changes so fast. So I can keep on learning a bunch of underpowered stuff, or I can learn a bunch of really powerful stuff that I can more or less count on still being the same 10, 15, 20 years from now. I think I like that option. (That’s not to say I’m going to become a vi proponent; I can stumble around in vi now, but it’s obvious to me that vi first looked easy because yeah, anything’s easier than a line editor, and commands and features got bolted on later, and the result was fast and powerful but clumsy.)

Getting back into business…

My mail’s working again. My mail server problems seem to be mostly solved. It was indeed a hardware problem–with my Linksys router. My mail server couldn’t talk to the outside world, and my Windows boxes couldn’t talk to (couldn’t even ping) the mail server. But my Web server could. But since my Web server is a Web server, it doesn’t have a mail client on it. Oh well. So I pulled the plug on the Linksys router, called it a few names, then plugged it back in. Soon I had a flood of mail, telling me all about how I can make $5K a month online, get high legally, drive my Web counter ballistic, get out of debt… And a really weird one: I love you and I don’t want you to die! I had to check that one. Weight-loss spam. Hmm. I guess that spammer doesn’t know that if I lost 40 pounds, I probably would die…
You know, I wonder if maybe I liked my mail server better when it didn’t work. Nah. There was some legit stuff buried in it, and I’m slowly replying to it all.

The funeral was yesterday. Since I wasn’t quite the only one who had trouble figuring out when to sit and when to stand, I take it I wasn’t the only Protestant there. It was a very nice service.

And there’s this, courtesy of Dan. He sent me the first installment in a series about using Linux as a thin client. Well, technically, I suppose the machines he’s describing are fat clients, since they do have some local storage. No matter. Dan asked if I’ve made this point before. I think I have. I know I started to make it in my second book, The Linux Book You’ll Never Read, but it was cancelled before I started on the research to tell how to implement it.

So here’s the story. You get yourself a big, honkin’ server. Go ahead and go all out. I’m talking dual CPUs, I’m talking 60K RPM Ultra1280 SCSI drives (OK, you can settle for 15K RPM Ultra320 SCSI, since that’s all they make), I’m talking a gig or two of RAM if you’ve got the slots–build a powerhouse.

Then you go round up the dinkiest, sorriest bunch of PCs you can find. Well, actually, since video performance is fairly important, the ideal system would be a P100 with 24 MB RAM, a fairly nice PCI video card, a smallish hard drive, and a network card. The most important component is the video card, far and away. The fat clients connect to your network and run applications off that honkin’ server. The apps run on the server and display on the fat client. Data is stored on the applications server.

Yes, you’ll want a good sysadmin to keep that honkin’ applications server happy. But desktop support virtually ceases to exist. When you have problems with your PC, someone comes, swaps out the unit, and you get back to work. You’re supposed to have one desktop support guy for every 25 end users (in reality most places have one for every 75). That’s 40,000 smackers plus benefits annually for an army of people whose job it is to make sure NT keeps running right. These people are expensive, hard to find, and if they’re any good, even harder to keep.

Move to fat clients, and you can probably replace desktop support with one desktop support guy (to play Dr. Frankenstein on the dead systems and support the remaining few who can’t get by with a fat client) and a kick-butt sysadmin.

Sorcerer: An easier way to get Linux your way

I’ve talked about Linux From Scratch before, and I like how it gives you just what you want, compiled how you want, by your system, for your system, but it doesn’t actually give you a very useful system in the end.
Sure, you’ve got a text-based system with all the standard Unix utilities, and it boots like greased lightning, but there’s still a fair bit of configuration you have to do afterward. And the attitude of the committee that wrote it seems to be that if the documentation to do something exists elsewhere, it shouldn’t be repeated there. Speaking as a published author, I don’t agree with that absolute. Sure, a table listing DOS commands and their Unix equivalents is out of place in that kind of book, because that’s non-essential for getting a working system. But the two paragraphs required to tell you how to get your network card configured aren’t a big deal. Just do it!

I could spend way too much time ragging on the project, and it wouldn’t accomplish anything productive. Linux From Scratch is a fabulous way to learn a lot about the inner workings of a Linux system, and it’s an opportunity few, if any, other operating systems give you. And I guess since it makes you work so hard and look in other places for information, you learn more.

But if your main goal is a lean, mean system built the way you want it, rather than education, and you’re willing to give up a little control, there’s another way: Sorcerer Linux.

For Sorcerer, you download an ISO image that contains the essentials like a kernel, file utilities, a C compiler, and necessary libraries, all compiled for i586. This gives a good balance of compatibility and performance. When you install it, it compiles a kernel for your system, then it copies everything else to the drive.

The heart of Sorcerer is a set of shell scripts that automatically downloads current versions of software, checks dependencies, and compiles and installs them for you. It’s not as convenient or as polished as RPM, but it’s usable and the benefits, of course, are tremendous. You get the newest, most secure, most stable (and, usually, fastest) versions of the software you need, compiled for your particular architecture rather than the lowest common denominator.

I had some trouble installing Sorcerer at first. I found that after compiling the kernel, I had to answer Yes to the question, “Edit /etc/lilo.conf?” and make a change. The default /boot parameter didn’t work for my system. I had to change it from /devices/discs/disc0/part7 to /devices/discs/disc0/disc.
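For reference, here’s roughly what the relevant line of /etc/lilo.conf looks like before and after the edit (devfs-style paths, as Sorcerer uses; the rest of the file stays however the installer wrote it):

```
# before -- wouldn't boot on my hardware:
# boot=/devices/discs/disc0/part7
# after:
boot=/devices/discs/disc0/disc
```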

To avoid having to recompile the kernel over and over to get to that menu option that let me edit LILO’s parameters, here’s what I did:

chroot /mnt/root                   # switch into the newly installed system
mount -t devfs /devices /devices   # give the chroot its devfs device tree
nano /etc/lilo.conf                # fix the boot= line as described above
lilo -f                            # rewrite the boot loader with the new settings
exit                               # leave the chroot

Sorcerer doesn’t currently have spells (sorcerers cast spells, therefore, Sorcerer packages are called spells, get it?) for every package under the sun, but most of the essentials are covered. I’ll have to write spells for a few of my faves and contribute them.
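A spell, under the hood, is mostly a directory of short shell fragments: the BUILD file I edited above, plus a file of variables describing the package and where to fetch it. Roughly like this (a hedged sketch from memory; the field names may differ between grimoire versions, and every value here is illustrative):

```shell
# Illustrative description file for a hypothetical "hello" spell.
SPELL=hello
VERSION=2.1.1
SOURCE=$SPELL-$VERSION.tar.gz             # tarball name derived from the above
SOURCE_URL=ftp://ftp.gnu.org/gnu/hello/$SOURCE
WEB_SITE=http://www.gnu.org/software/hello/
SHORT="GNU greeting program"
```

The point is that writing a spell is closer to filling out a form than to packaging an RPM, which is why contributing a few is a realistic weekend project.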

Disappointment… Plus Linux vs. The World

It was looking like I’d get to call a l337 h4x0r on the carpet and lay some smackdown at work, but unfortunately I had a prior commitment. Too many things to do, not enough Daves to go around. It’s the story of my life.
And I see Infoworld’s Bob Lewis is recommending companies do more than give Linux a long, hard look–he’s saying they should consider it on the desktop.

He’s got a point. Let’s face it. None of the contenders get it right. So-called “classic” Mac OS isn’t a modern OS–it has no protected memory, no pre-emptive multitasking, and only limited threading support. It’s got all the disadvantages of Windows 3.1 save being built atop the crumbling foundation of MS-DOS. I could run Windows 3.1 for an afternoon without a crash. I can run Windows 95 for a week or two. I can usually coax about 3-4 days out of Mac OS. Mac users sometimes seem to define “crash” differently, so I’ll define what I mean here. By a crash, I mean an application dying with a Type 1, Type 2, or Type 10 error. Or the system freezing and not letting you do anything. Or a program quitting unexpectedly.

But I digress. Mac OS X has usability problems, it’s slow, and it has compatibility problems. It has promise, but it’s been thrust into duty that it’s not necessarily ready for. Like System 7 of the early ’90s, it’s a radical change from the past, and it’s going to take time to get it ready for general use. Since compilers and debuggers are much faster now, I don’t think it’ll take as long necessarily, but I don’t expect Mac OS X’s day to arrive this year. Developers also have to jump on the bandwagon, which hasn’t happened.

Windows XP… It’s slow, it’s way too cutesy, and only time will tell if it will actually succeed at displacing both 9x and NT/2000. With Product Activation being an upgrader’s nightmare, Microsoft may shoot themselves in the foot with it. Even if XP is twice as good as people say it’s going to be, a lot of people are going to stay away from it. Users don’t like Microsoft policing what they do with their computers, and that’s the perception that Product Activation gives. So what if it’s quick and easy? We don’t like picking up the phone and explaining ourselves.

Linux… It hasn’t lived up to its hype. But when I’ve got business users who insist on using Microsoft Works because they find Office too complicated, I have a hard time buying the argument that Linux can’t make it in the business environment without Office. Besides, you can run Office on Linux with Win4Lin or VMWare. But alternatives exist. WordPerfect Office gets the job done on both platforms–and I know law offices are starting to consider the move. All a lawyer or a lawyer’s secretary needs to be happy, typically, is a familiar word processor, a Web browser, and a mail client. The accountant needs a spreadsheet, and maybe another financial package. Linux has at least as many Web browsers as Windows does, and plenty of capable mail clients; WP Office includes Quattro Pro, which is good enough that I’ve got a group of users who absolutely refuse to migrate away from it. I don’t know if I could run a business on GnuCash. But I’m not an accountant. The increased stability and decreased cost make Linux a compelling choice for a law firm, though. And in the businesses I count as clients, anywhere from 75-90% of the users could get their jobs done in Linux just as productively. Yes, the initial setup would be more work than Windows’ initial setup, but the same system cloning tricks will work, mitigating that. So even if it takes 12 hours to build a Linux image as opposed to 6 hours to build a Windows image, the decreased cost and decreased maintenance will pay for it.

I think Linux is going to get there. As far as Linux looking and acting like Windows, I’ve moved enough users between platforms that I don’t buy the common argument that that’s necessary. Most users save their documents wherever the program defaults to. Linux defaults to your home directory, which can be local or on a server somewhere. The user doesn’t know or care. Most users I support call someone for help when it comes time to save something on a floppy (or do anything remotely complicated, for that matter), then they write down the steps required and robotically repeat them. When they change platforms, they complain about having to learn something new, then they open up their notebook, write down new steps, and rip out the old page they’ve been blindly following for months or years and they follow that new process.

It amuses me that most of the problems I have with Linux are with recent distributions that try to layer Microsoft-like Plug and Play onto it. Linux, unlike Windows, is pretty tolerant of major changes. I can install TurboLinux 6.0 on a 386SX, then take out the hard drive and put it in a Pentium 4 and it’ll boot. I’ll have to reconfigure XFree86 to take full advantage of the new architecture, but that’s no more difficult than changing a video driver in Windows–and that’s been true since about 1997, with the advent of Xconfigurator. Linux needs to look out for changes to sound cards and video cards, and, sometimes, network cards. The Linux kernel can handle changes to just about anything else without a hiccup. Once Red Hat and Mandrake realize that, they’ll be able to develop a Plug and Play that puts Windows to shame.

The biggest thing that Linux lacks is applications, and they’re coming. I’m not worried about Linux’s future.

If you didn’t compile it yourself, it’s not really yours.

I’m on my Linux From Scratch kick again. Unfortunately, compiling a complete workstation from scratch takes a really long time (the systems that benefit the most from it, namely low-end P2s, need close to a day to compile everything if you want X, KDE and GNOME and some common apps) and requires you to type a lot of awkward commands that are easy to mess up. The upside: Messages like, “I did my first LFS on a Pentium II 18 months ago and it was by far the best workstation I’ve ever had,” are common on LFS discussion boards.
So what to do…? If you want to learn a lot about how Linux works, you type all the commands manually and let the system build itself, and if you’re away while the system’s waiting for the next set of commands, well, the system just sits there waiting for you. In a couple of days or a week you’ll literally know Linux inside and out, and you’ll have the best workstation or server you ever had.

If, on the other hand, you’re more interested in having the best workstation or server farm you ever had and less interested in knowing Linux inside and out (you can always go back and do it later if you’re really interested–CPUs and disks aren’t getting any slower, after all), you use a script.

What script? Well, RALFS, for one. Just install Mandrake 8 or another 2.4-based distribution, preferably just the minimum plus all the compilers plus a text editor you’re comfortable with, then download the sources from www.linuxfromscratch.org, then download RALFS, edit its configuration files, get into text mode to save system resources, and let RALFS rip.

RALFS looks ideal for servers, since the ideal server needs just a kernel, the standard utilities that make Unix Unix, plus just a handful of server apps such as Apache, Samba, Squid, or BIND. So RALFS should build in a couple of hours for servers. And since a server should ideally waste as few CPU cycles and disk accesses as possible, RALFS lets you stretch a box to its limits.

I think I need a new mail server…

Optimizing Linux. Part 1 of who-knows-what

Optimizing Linux. I found this link yesterday. Its main thrust is troubleshooting nVidia 3D acceleration, but it also provides some generally useful tweakage. For example:
cat /proc/interrupts

Tells you what cards are using what interrupts.

lspci -v

Tells you what PCI cards you have and what latencies they’re using.

setpci -v -s [id from lspci] latency_timer=##

Changes the latency of a card. A higher latency value means higher bandwidth, and vice-versa: here, “latency” really means how long the device gets to hog the bus–once it grabs the bus, it’s less likely to let go of it. I issued this command on my Web server to give my network card free rein (this is more important on local fileservers, obviously–my DSL connection is more than slow enough to keep my Ethernet card from being overwhelmed):

setpci -v -s 00:0f.0 latency_timer=ff

Add that command to /etc/rc.d/rc.local if you want it to stick.

Linux will let you tweak the living daylights out of it.

And yes, there’s a ton more. Check this out: Optimizing and Securing Red Hat Linux 6.1 and 6.2. I just turned off last-access attribute updating on my Web server to improve performance with the command chattr -R +A /var/www. That’s a trick I’ve been using on NT boxes for a long time.

Baseball. I’m frustrated. The Royals let the Twins trade promising lefty Mark Redmon to the Tigers for Todd Jones. Why didn’t the Royals dangle Roberto Hernandez in the Twins’ face? Hernandez would have fetched Redmon and a borderline prospect, saved some salary, and, let’s face it, we’re in last place with Hernandez, so what happens if we deal him? It’s not like we can sink any further.

Meanwhile, the hot rumor is that Rey Sanchez will be traded to the Dodgers for Alex Cora, a young, slick-fielding shortstop who can’t hit. Waitaminute. We just traded half the franchise away for Neifi Perez, an enthusiastic, youngish shortstop who can’t hit outside of Coors Field and is overrated defensively and makes 3 and a half mil a year. What’s up with that?

Moral dilemma: Since the Royals don’t seem to care about their present or their future at the moment, is rooting for Oakland (featuring ex-Royals Jermaine Dye and Johnny Damon and Jeremy Giambi) and Boston (featuring ex-Royals Jose Offerman and Chris Stynes and Hippolito Pichardo and the last link to that glorious 1985 season, Bret Saberhagen) to make the playoffs like cheating on your wife?

Back in the swing of things

Here are some odds and ends, since I’ve gone nearly a week without talking computers.
Intro to Linux. I found this last week. It’s a 50-page PDF file that serves as a nice Linux primer, from the experts at IBM. It’s a must-read for a Windows guru who wants to learn some Linux.

Linux from Scratch. Dustin mentioned Linux From Scratch last week. The idea is you download the source to an already-installed Linux box, then compile everything yourself. Why? Stability, security, and speed.

Security. You’ve got fresh, updated code, compiled yourself, with no extras. If you didn’t compile it, it’s not there. Less software means fewer holes for l337 h4x0r5 (“leet hackers,” or, more properly, script kiddies, or, even more properly, wankers who really need to get a life because they have nothing better to do than try to mess around with my 486s–Steve DeLassus asked me “what the #$%@ is an el-three-three-seven-aitch-four…” last week) to exploit.

Stability. Well, you get that anyway when you liberate your system from Microsoft’s grubby imperialistic mitts, but it makes sense that if you run software built by your system, for your system, it ought to run better. Besides, if you’ve got a borderline CPU or memory module or disk controller and try to compile all that code with aggressive compiler settings, you’ll expose the problems right away instead of later.

Speed. You’re running software built for your system, by your system. Not Mandrake’s PCs. Not Red Hat’s PCs. Yours. You want software optimized for your 486SX? You want software optimized for a P4? You won’t get either anywhere else. And recent GCC compilers with aggressive settings can sometimes (not always) outperform hand-built assembly. It’s hard to know what settings Mandrake or Red Hat or those Debian weirdos used.
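Since we’re talking compiler settings, here’s the sort of thing people mean. A hedged sketch: the flag values are examples only, and the right -march depends on your CPU and your GCC version, so check gcc’s documentation before adopting any of them.

```shell
# Example optimization flags exported before a source build; the values are
# illustrative, not a recommendation for any particular machine.
export CFLAGS='-O3 -march=i686 -fomit-frame-pointer'
export CXXFLAGS="$CFLAGS"     # most source packages honor both variables
echo "building with $CFLAGS"  # a real build would run ./configure && make here
```

Distributions have to pick one safe setting for everyone; exporting your own before the build is exactly the knob LFS and Sorcerer hand you.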

I really want to replace my junky Linksys router with a PC running LFS and firewalling software. The Linksys router seems to be fine for Web surfing, but if you want to get beyond serfdom and serve up some content from your home LAN, my Linksys router’s even more finicky and problematic than Linksys’ NICs, which is saying something. It’ll just decide one day it doesn’t want to forward port 80 anymore.

Firewalling. And speaking of that, Dan Seto detailed ways to make a Linux box not even respond to a ping last week. It’s awfully hard for a l337 w4nk3r to find you if he can’t even ping you.

A story. My sister told me this one. She’s a behavioral/autism consultant, and one of her kids likes to belch for attention. He’ll let out an urp, and if you don’t respond, he’ll get closer and closer to you, letting out bigger and bigger belches until you acknowledge it. Di hasn’t managed to break that behavioral habit yet. She was telling her boss, a New Zealander, about this kid (he’s 3).

“Hmm,” he said. “Must be Australian.”

An update. I heard some howls of protest about a cryptic post I made last week. Yes, that was a girl I was talking to in the church parking lot until well past 11 the other night. Yes, we met at church. I’ve known her maybe six months. Yes, she’s nice. Yes, she’s cute. No, I haven’t asked her where she went to high school. Remember, I’m not a native St. Louisan… (And if you clicked on that link, be sure to also check out the driving tips.)

No, I’m not really interested in saying much more about her. Not now.

A remote administration Unix trick

OK, here’s the situation. I had a Linux box running Squid, chugging away, saving us lots of bandwidth and speeding things up and making everything wonderful, but we wanted numbers to prove it, and we liked being able to just check up on it periodically. Minimalist that I am, though, I never installed Telnet or SSH on it. And besides, I haven’t found an SSH client for Windows I really like, and Telnet is horribly insecure.
Sure, I could just walk up to it and log in and look around. But the server was several city blocks away from my base of operations. For a while it was a good excuse to go for a walk and talk to girls, but there weren’t always girls around to talk to, and, well, sometimes I needed to check up on the server while I was in the middle of something else.

So here’s what I did. I used CGI scripts for the commands I wanted. Take this, for example:

#!/bin/sh
# ps.cgi: report running processes and memory usage over the Web
echo 'Content-type: text/html'
echo ''            # blank line ends the HTTP header
echo '<pre>'       # preserve ps's column layout
ps waux
echo ''            # separate the process list from the memory info
cat /proc/meminfo
echo '</pre>'

Then I dropped those files into my cgi-bin directory and chmodded them to 755. From then on, I could check on my server by typing http://192.168.1.50/cgi-bin/ps.cgi into a Web browser. Boom, the server would tell me what processes were running, how much memory was in use, and even more cool, how much memory was used by programs and how much was used for caching.

Here’s how it works. The first two lines fake out Apache and your Web browser, essentially just giving them a header so they’ll process the output of these commands. The next line tells it it’s pre-formatted text, so don’t mess with it. This isn’t necessary for all commands, but for commands like ps that output multicolumn stuff, it’s essential. Next, you can type whatever Unix commands you want. Their output will be directed to the Web browser. I echoed a blank line just so the memory usage wouldn’t butt up against the process info. The last line just cleans up.
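Once the pattern clicks, cranking out more of these takes about a minute apiece. For instance, a hypothetical df.cgi that reports disk space–same recipe, different command (df’s -h flag, for human-readable sizes, is a GNU-ism):

```shell
#!/bin/sh
# df.cgi: report mounted filesystems and free space over the Web.
echo 'Content-type: text/html'
echo ''            # blank line ends the HTTP header
echo '<pre>'       # keep df's columns lined up
df -h              # -h prints human-readable sizes (GNU df)
echo '</pre>'
```

Drop it in cgi-bin, chmod it to 755, and it behaves exactly like the ps.cgi example above.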

I wrote up scripts for all the commands I frequently used, so that way when my boss wanted to know how Squiddy was doing, I could tell him. For that matter, he could check it himself.

But if I knew there were going to be girls around, I went ahead and made an excuse to walk that direction anyway. Some things are more important than remote administration, right?

Linkfest.

I felt downright awful yesterday, but it’s my own fault. I remember now why I don’t take vitamins with breakfast. Very bad things happen.
So I’m whupped, and I’m not going to post anything original today. Just some stuff I’ve found lately and haven’t gotten around to posting anywhere.

But first, something to keep in the back of your mind: If The Good News Players, a drama troupe from the Concordia University system, is ever visiting a Lutheran church near you, be sure to go check it out. They are amazing. I put myself together enough to catch them at my church last night and I didn’t regret it in the least. They tell Bible stories in the form of mini-musicals; they’re easy to understand, professional, and just plain funny.

Linux OCR. This is huge. It’s not quite production-quality yet, but then again, neither is the cheap OCR software shipped with most cheap scanners. Check it out at claraocr.org.

It would seem to me that this is the missing link for a lot of small offices to dump Windows. Linux has always been a good network OS, providing fileshares, mail and Web services. Put Zope on your Web server and you can update your company’s site without needing anything like FrontPage. WordPerfect for Linux is available, and secretaries generally love WordPerfect, as do lawyers. ClaraOCR provides an OCR package. SANE enables a large number of scanners. GIMP is available for graphics work. And we’re close to getting a good e-mail client. And the whole shebang costs less than Windows Me.

Linux VMs, without VMware. This is just plain cool. If, for security reasons, you want one service per server, but you don’t have the budget or space for 47 servers in your server room, you can use the User-Mode Linux kernel. (The load on most Linux servers is awfully light anyway, assuming recent hardware.) This Linux Magazine article describes the process. I could see this being killer for firewalls. On one machine, create several firewalls, each using a slightly different distribution and ruleset, and route them around. “Screw you, l337 h4x0r5! You are in a maze of twisty passages, all alike!”

And a tip. I find things by typing dir /s [whatever I’m looking for] from a DOS prompt. I’m old-fashioned that way. There’s no equivalent syntax for Unix’s ls command. But Unix provides find. Here’s how you use it:

find [subdirectory] -name [filename]

So if I log in as root and my Web browser goes nuts and saves a file somewhere it shouldn’t have and I can’t find it, I can use:

find / -name "obnoxious_iso_image_I'd_rather_not_download_again.iso"

Or if I put a file somewhere in my Web hierarchy and lose it:

find /var/www -name dave.jpg
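One more find habit worth picking up: unlike dir /s, find won’t expand wildcards for you unless you quote them, so the shell passes the pattern through intact. A quick demonstration in a scratch directory:

```shell
# Set up a scratch tree, then search it by pattern. The single quotes around
# '*.jpg' matter: without them the shell expands the wildcard before find runs.
mkdir -p /tmp/finddemo/images
touch /tmp/finddemo/images/dave.jpg /tmp/finddemo/notes.txt
find /tmp/finddemo -name '*.jpg'    # prints /tmp/finddemo/images/dave.jpg
```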

Windows XP activation cracked. Here’s good news, courtesy of David Huff:

Seems that the staff of Germany’s Tecchannel has demonstrated that WinXP’s product activation scheme is full of (gaping) holes:

WinXP product activation cracked: totally, horribly, fatally and
Windows Product Activation compromised (English version)

Building a Squid server

I’ve talked about Squid before. Squid is a caching Web proxy, designed to improve network speed and conserve bandwidth by caching Web content locally. How much it helps you depends on how you use the Web in that particular environment, but it’s usually worthwhile, seeing as the software is either free or costs next to nothing (it comes with most Linux distributions) and it doesn’t take much hardware to run it. Don’t use your Pentium-75, but you can deploy a standard desktop PC as a Squid server and it’ll work fabulously, unless you’ve got thousands of PCs hitting it. For a thousand bucks, you can seriously reduce your traffic and chances are it’ll pay for itself fairly quickly.
And ironically, Squid integrates beautifully with Internet Explorer 5.0 and newer.

Here’s what you do. Build up a minimal Linux server. For this, I prefer TurboLinux 6.01–it’s more lightweight than the current version, and you can still get patches for it that keep it from being h4x0r h34v3n. Pick the minimum base install, then add Squid and Apache. Yes, you need Apache. We’ll talk about that in a minute. I don’t like to have anything else on a Squid box, because Squid tends to be a big memory, CPU, and disk hog. Keep your computing resources as free as possible to accommodate Squid. (For that reason it would probably be better under a 2.4 kernel using ReiserFS-formatted partitions, but I didn’t have time to test that.)

Once Squid is installed, modify /etc/squid/squid.conf. You’ll find a pair of http_access lines that read “allow localhost” and “deny all.” That allows Squid to work only for the local machine, which isn’t what we want. Assuming you’re behind a firewall (you should be, and if you’re not, I’ll help you make a really big banner that says, “Welcome, l337 h4x0r5!”), change the “deny all” line to read “allow all.”
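If “allow all” makes you nervous even behind a firewall, a middle ground is to allow just your own subnet. The relevant chunk of squid.conf would look roughly like this (the acl and http_access directives are standard Squid; the subnet is an example, so substitute your own):

```
# /etc/squid/squid.conf fragment: permit the local subnet only (example addresses)
acl localnet src 192.168.10.0/255.255.255.0
http_access allow localhost
http_access allow localnet
http_access deny all
```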

Next, make sure Apache and Squid are running. Go to /etc/rc.d/rc3.d and make sure there are scripts present that start Apache (httpd) and Squid. If there aren’t, go to /etc/rc.d/init.d and make copies of the Apache and Squid scripts. Give them a name that starts with S and a number, e.g. S50httpd.
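The SysV convention is actually to use symlinks rather than copies, which keeps one canonical script in init.d. Here’s a sketch that’s safe to dry-run because it stages everything under /tmp; on the real box, you’d work in /etc/rc.d and skip the touch lines.

```shell
# Stage a fake /etc/rc.d under /tmp so this can be run harmlessly; on a real
# TurboLinux box, RC would be /etc/rc.d and the init.d scripts already exist.
RC=/tmp/rc-demo
mkdir -p $RC/init.d $RC/rc3.d
touch $RC/init.d/httpd $RC/init.d/squid      # stand-ins for the real scripts
ln -sf $RC/init.d/httpd $RC/rc3.d/S50httpd   # S + number controls start order
ln -sf $RC/init.d/squid $RC/rc3.d/S90squid   # start Squid after Apache
ls $RC/rc3.d                                 # lists S50httpd and S90squid
```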

Next, let Squid build and configure the directories and logs it needs with the command squid -NCd1. Breaking that down: -N keeps Squid in the foreground instead of daemonizing, -C tells it not to trap fatal signals, and -d1 sends level-1 debug output to the terminal so you can watch what it’s doing. (The documented command for creating the cache directories themselves is squid -z, if -NCd1 doesn’t do the trick for you.)

Now, go to your DNS and add an entry called wpad.yourdomainname. How you do this depends on the DNS you use. Someone else handles those duties at my job, so I just had him do it. Point that to your squid server.

Now, in /home/httpd/html (assuming TurboLinux–use the default Apache directory if you’re using a different distro), create two files, called proxy.pac and wpad.dat. They should both contain the following JavaScript code:

function FindProxyForURL(url, host)
{
return "PROXY 192.168.10.50:3128";
}

Substitute your Squid server’s IP address for 192.168.10.50.
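If you want to get fancier, the PAC format supports a handful of helper functions, so you can send intranet traffic direct and only proxy the outside world. A sketch (isPlainHostName and isInNet are standard PAC helpers; the subnet and proxy address are examples to substitute):

```javascript
// proxy.pac / wpad.dat variant: local hosts go direct, everything else
// goes through Squid. Substitute your own subnet and Squid address.
function FindProxyForURL(url, host) {
    if (isPlainHostName(host) ||                          // e.g. "intranet"
        isInNet(host, "192.168.10.0", "255.255.255.0"))   // local subnet
        return "DIRECT";
    return "PROXY 192.168.10.50:3128";
}
```

Keeping local traffic out of the cache spares Squid some pointless work and keeps intranet apps working even when the proxy box is down.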

What’s this do? Well, when IE is set to autodetect your proxy settings, it goes looking for http://wpad.yourdomainname/wpad.dat, which tells it where to find the proxy server. You could use any Web server you wanted; I just use the Squid server on the theory that if the Squid server is for whatever reason unavailable, a Web server running on the same machine is the most likely to also be unavailable, so IE won’t find it and won’t use a proxy, giving you a degree of failover.

The cool thing is, this combination of Apache and Squid works well, and can be quickly implemented with almost no work since Internet Explorer by default goes looking for a proxy and most people don’t uncheck that checkbox in the control panel.

We did this to reduce traffic on a T1 line for a short period of time (it saves us from needing to get multiple T1s) and so far we’re very impressed with the results. I recommend you try it.