Cleaning and optimizing the Windows registry

Cleaning the Windows registry is a popular and controversial topic. Many pundits tell you never to do it. When I wrote a book about Windows back in 1999, I dedicated most of one chapter to the topic. But today the pundits have a point. Most registry cleaning utilities do much more harm than good. I don’t recommend you clean your registry, per se, but I do recommend you maintain it.

I don’t want to dismiss the concept completely out of hand. There’s a difference between a bad idea and a bad implementation. Registry cleaning and maintenance is a victim of bad implementation. But that doesn’t mean it was a bad idea. So let’s talk about how to get the benefit while minimizing the drawbacks.

Read more

Common AmigaDOS commands

The Amiga had a command line, or CLI. It was a rather powerful CLI, especially for its time. But there are a number of differences between AmigaDOS and other operating systems you may be familiar with. These are the common AmigaDOS commands and their equivalents from other operating systems like DOS, Windows, Unix or Linux.

I’ve never seen a primer that cross-references Amiga commands with their Windows and Unix counterparts, so I wrote one. Because AmigaDOS is sometimes like Windows and sometimes like Unix, I think it might help you understand your Amiga better. And maybe, just maybe, you’ll learn something you didn’t know about Windows or Unix too.

Read more

Curing random errors when installing Office 2013

I got lots of random errors when I installed Office 2013, including error code 112-4 and error code 0-4, and some other install errors mostly ending in 4 that aren’t documented on Microsoft’s web site. Although undocumented, these errors are fixable. Read more

SSD write endurance (aka longevity) vindicated

I found this chart earlier this week regarding SSD write endurance. Basically, it plots out how long an SSD would last if you set out to deliberately destroy it by writing to it continuously.

You could expect a mainstream 128-GB drive to last 4.7 years under those conditions, which is longer than a platter hard drive would last if subjected to the same kind of abuse. Other studies have similar results.

Read more

How to move your temporary files to a ramdisk

Moving the rest of your temporary files to a ramdisk provides a number of performance benefits. Program installations proceed noticeably faster, and fewer files written to your system disk means less fragmentation, less maintenance for an SSD, and, most likely, longer SSD life.

Read more

Using video memory as a ramdisk in Linux

An old idea hit me again recently: Why can’t you use the memory that’s sitting unused on your video card (unless you’re playing Doom) as a ramdisk? It turns out you can, just not if you’re using Windows. Some Linux people have been doing it for two years (see http://hedera.linuxnews.pl/_news/2002/09/03/_long/1445.html).

Where’d I get this loony idea? Commodore, that’s where. It was fairly common practice to use the video RAM dedicated to the C-128’s 80-column display for other purposes when you weren’t using it. As convoluted as PC video memory is, it had nothing on the C-128, where the 80-column video chip was a netherworld accessible only via a handful of chip registers. Using the memory for anything else was slow and painful, but it was still a lot faster than Commodore’s floppy drives.

So along comes someone on Slashdot, asking about using idle video memory as swap space. I really like the idea on principle: The memory isn’t doing anything, and RAM is at least an order of magnitude faster than disk, so even slow memory is going to give better performance.

The principle goes like this: You use the Linux MTD module and point it at the video card’s memory in the PCI address space. The memory is now a block device, which you can format and put a filesystem on. Format it ext2 (who needs journaling on a ramdisk?) and you’ve got a ramdisk. Format it as swap, and you’ve got swap space.
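If you want to see roughly what that looks like in practice, here’s a sketch using the MTD phram driver. The address and size are hypothetical; you’d pull your card’s real ones out of lspci, and I’m assuming a kernel with MTD support built or available as modules:

modprobe phram phram=vram,0xe8000000,8Mi
modprobe mtdblock
mke2fs /dev/mtdblock0
mount /dev/mtdblock0 /mnt/vram

For swap space, substitute mkswap /dev/mtdblock0 and swapon /dev/mtdblock0 for the last two lines.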

The downside? With AGP, reads and writes don’t happen at the same speed. Since swap needs to be fast in both directions, this is a problem. It could work a lot better with older PCI video cards, but those of course are a lot less likely to have a useful amount of memory on them. It would also work a lot better on newer PCIe video cards, but of course if your system is new enough to have a PCIe card, it’s also likely to have huge amounts of system RAM.

The other downside is that CPU usage tends to really jump while accessing the video RAM.

If you happen to have a system that has fast access to its video RAM, there’s no reason not to try using it as swap space. On some systems it seems to work really well. On others it seems to work really poorly.

If it’s too slow for swap space, try it as a ramdisk. Point your browser cache at it, or mount it as /tmp. It’s going to have lower latency than disk, guaranteed. The only question is the throughput. But if it’s handling large numbers of small files, latency matters more than throughput.

And if you’re concerned about the quality of the memory chips on a video card being lower than the quality of the chips used on the motherboard, a concern some people on Slashdot expressed, using that memory as a ramdisk is safer than using it as swap. If there’s slight corruption in the memory, the filesystem will report an error. Personally I’m not sure I buy that argument, since GPUs tend to be even more demanding on memory than CPUs are, and the consequences of using second-rate memory on a video card could be worse than just some stray blips on the screen. But if you’re a worrywart, using it for something less important than swap means you’re not risking a system crash by doing it.

If you’re the type who likes to tinker, this could be a way to get some performance at no cost other than your time. Of course if you like to tinker and enjoy this kind of stuff anyway, your time is essentially free.

And if you want to get really crazy, RAID your new ramdisk with a small partition on your hard drive to make it permanent. But that seems a little too out there even for me.
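But since I mentioned it: mdadm’s write-mostly flag is probably how you’d do it, marking the disk half of the mirror write-mostly so reads come from video RAM. A sketch, with made-up device names, assuming a kernel and mdadm recent enough to support the flag:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/mtdblock0 --write-mostly /dev/hda5
mke2fs /dev/md0
mount /dev/md0 /mnt/fastdisk

Keep in mind the RAM half evaporates at power-off, so the array would come up degraded at every boot and you’d have to re-add the ramdisk.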

Optimizing dynamic Linux webservers

Linux + Apache + MySQL + PHP (LAMP) provides an outstanding foundation for building a web server, for, essentially, the value of your time. And the advantages over static pages are fairly obvious: Just look at this web site. Users can log in and post comments without me doing anything, and content on any page can change programmatically. In my site’s case, links to my most popular pages appear on the front page, and as their popularity changes, the links change.

The downside? Remember the days when people bragged about how their 66 MHz 486 was a perfectly good web server? Kiss those goodbye. For that matter, your old Pentium-120 or even your Pentium II-450 may not be good enough either. Unless you know these secrets…

First, the simple stuff. About a year and a half ago, I talked about programs that optimize HTML by removing extraneous tags and can even give you a leg up on translating to cascading style sheets (CSS). That’s a starting point.

Graphics are another problem. People want lots of them, and digital cameras tend to add some extraneous bloat to them. Edit them in Photoshop or another popular image editor–which you undoubtedly will–and you’ll likely add another layer of bloat to them. I talked about Optimizing web graphics back in May 2002.

But what can you do on the server itself?

First, regardless of what you’re using, you should be running mod_gzip to compress your web server’s output. It works with virtually all modern web browsers, and the browsers that don’t work with it negotiate with the server to get uncompressed output. My 45K front page compresses to 6K, better than a seven-fold savings. Suddenly my 128-kilobit uplink acts like more than half of a T1.

I’ve read in several places that it takes less CPU time to compress content and send it than it does to send uncompressed content. On my P2-450, that definitely seems to be the case.

Unfortunately, mod_gzip is one of the most poorly documented Unix programs I’ve ever seen. I complained about this nearly three years ago, and the situation seems little improved.

A simple apt-get install libapache-mod-gzip in Debian doesn’t do the trick. You have to search /etc/apache/httpd.conf for the line that begins LoadModule gzip_module and uncomment it, then you have to add a few more lines. The lines that enabled mod_gzip on TurboLinux didn’t save me this time: for one thing, they didn’t handle PHP output, and for another, they didn’t seem to do anything at all on my Debian box.

Charlie Sebold to the rescue. He provided the following lines that worked for him on his Debian box, and they also worked for me:

# mod_gzip settings

mod_gzip_on Yes
mod_gzip_can_negotiate Yes
mod_gzip_add_header_count Yes
mod_gzip_minimum_file_size 400
mod_gzip_maximum_file_size 0
mod_gzip_temp_dir /tmp
mod_gzip_keep_workfiles No
mod_gzip_maximum_inmem_size 100000
mod_gzip_dechunk Yes

mod_gzip_item_include handler proxy-server
mod_gzip_item_include handler cgi-script

mod_gzip_item_include mime ^text/.*
mod_gzip_item_include mime ^application/postscript$
mod_gzip_item_include mime ^application/ms.*$
mod_gzip_item_include mime ^application/vnd.*$
mod_gzip_item_exclude mime ^application/x-javascript$
mod_gzip_item_exclude mime ^image/.*$
mod_gzip_item_include mime httpd/unix-directory
mod_gzip_item_include file .htm$
mod_gzip_item_include file .html$
mod_gzip_item_include file .php$
mod_gzip_item_include file .phtml$
mod_gzip_item_exclude file .css$

Gzipping anything below 400 bytes is pointless because of overhead, and gzipping CSS and JavaScript files breaks Netscape 4 part of the time.

Most of the examples I found online didn’t work for me. Charlie said he had to fiddle a long time to come up with those. They may or may not work for you. I hope they do. Of course, there may be room for tweaking, depending on the nature of your site, but if they work, they’re a good starting point.

Second, you can use a PHP accelerator. PHP is an interpreted language, which means that every time a PHP script runs, your server first has to parse and compile the source code, then execute it. The compilation step can take longer than running the script itself. PHP accelerators serve as a just-in-time compiler: they compile the script and hold a copy in memory, so the next time someone accesses the page, the precompiled script runs. The result can sometimes be a tenfold increase in speed.

There are lots of them out there, but I settled on Ion Cube PHP Accelerator (phpa) because installation is a matter of downloading the appropriate pre-compiled binary, dumping it somewhere (I chose /usr/local/lib but you can put it anywhere you want), and adding a line to php.ini (in /etc/php4/apache on my Debian box):

zend_extension="/usr/local/lib/php_accelerator_1.3.3r2.so"

Restart Apache, and suddenly PHP scripts execute up to 10 times faster.

PHPA isn’t open source and it isn’t Free Software. Turck MMCache is, so if you prefer GPL, you can use it.

With mod_gzip and phpa in place and working, my web server’s CPU usage rarely goes above 25 percent. Without them, three simultaneous requests from the outside world could saturate my CPU.

With them, my site still isn’t quite as fast as it was in 2000 when it was just serving up static HTML, but it’s awfully close. And it’s doing a lot more work.


If I had my own Linux distribution

I found an interesting editorial called If I had my own Linux Distro. He’s got some good ideas but I wish he’d known what he was talking about on some others.

He says it should be based on FreeBSD because it boots faster than Linux. I thought everyone knew that Unix boot time has very little to do with the kernel. A kernel will boot more slowly if it’s trying to detect too much hardware, but the big factor in boot time is init, not the kernel. BSD’s init is much faster than SysV-style init. Linux distros that use BSD-style inits (Slackware, optionally Debian, and, as I understand it, Gentoo) boot much faster than systems that use a traditional System V-style init. I recently converted a Debian box to use runit, and the decrease in boot time and increase in available memory at boot was noticeable. Unfortunately the system no longer shuts down properly, but it proves the concept.
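For anyone curious what runit looks like, a service is nothing but a directory containing an executable run script that starts the daemon in the foreground. A minimal sketch for sshd, assuming runit’s conventional /etc/sv layout:

#!/bin/sh
# /etc/sv/sshd/run: runit execs and supervises this script
exec /usr/sbin/sshd -D

As a bonus, runit restarts the daemon if it dies.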

He talks about installing every possible library to eliminate dependency problems. Better idea: scrap RPM and use apt (like Debian and its derivatives) or a ports-style system like Gentoo. The only time I’ve seen dependency issues crop up in Debian was on a system with an out-of-date glibc installed, in which case you solve the issue either by keeping the distribution up to date or by updating glibc before installing the package that fails. These problems are exceedingly rare, by the way. In systems like Gentoo, they don’t happen because the installation script downloads and compiles everything necessary.

Debian’s and Gentoo’s solution is far more elegant than his proposal: Installing everything possible isn’t going to solve your issue when glibc is the problem. Blindly replacing glibc was a problem in the past. The problems that caused that are hopefully solved now, but they’re beyond the control of any single distribution, and given the choice between having a new install stomp on glibc and break something old or an error message, I’ll take the error message. Especially since I can clear the issue with an apt-get install glibc. (Then when an old application breaks, it’s my fault, not the operating system’s.)

In all fairness, dependency issues crop up in Windows all the time: When people talk about DLL Hell, they’re talking about dependency problems. It’s a different name for the same problem. On Macintoshes, the equivalent problem was extensions conflicts. For some reason, people don’t hold Linux to the same standard they hold Windows and Macs to. People complain, but when was the last time you heard someone say Windows or Mac OS wasn’t ready for the desktop, or the server room, or the enterprise, or your widowed great aunt?

He also talks about not worrying about bloat. I take issue with that. When it’s possible to make a graphical Linux distribution that fits on a handful of floppies, there’s no reason not to make a system smooth and fast. That means you do a lot of things. Compile for an advanced architecture and use the -O3 option. Use an advanced compiler like GCC 3.2 or Intel’s ICC 7.0 while you’re at it. Prelink the binaries. Use a fast-booting init and a high-performance system logger. Mount filesystems with the highest-performing options by default. Partition off /var and /tmp so those directories don’t fragment the rest of your filesystem. Linux can outperform other operating systems on like hardware, so it should.
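To make the mount-option and partitioning points concrete, a hypothetical /etc/fstab fragment might look like this; noatime alone saves a disk write on every file read (device names are made up):

/dev/hda2  /     ext3  defaults,noatime  0  1
/dev/hda5  /var  ext3  defaults,noatime  0  2
/dev/hda6  /tmp  ext3  defaults,noatime  0  2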

But when you do those things, then it necessarily follows that people are going to want to run your distribution on marginal hardware, and you can’t count on marginal hardware having a 20-gig hard drive. It’s possible to give people the basic utilities, XFree86, a reasonably slick window manager or environment, and the apps everyone wants (word processing, e-mail, personal finance, a web browser, instant messaging, a media player, a graphics viewer, a few card games, and–I’ll say it–file sharing) in a few hundred megabytes. So why not give it to them?

I guess all of this brings up the nicest thing about Linux. All the source code to anything desirable and all the tools are out there, so a person with vision can take them and build the ultimate distribution.

Yes, the idea is tempting.

Red Hat and Debian fans debate desktop Linux

Mail from longtime reader Steve Mahaffey on the state of desktop Linux. My responses interspersed within:

SM: It’s been a while since I’ve emailed you, though I still read your site almost daily and comment from time to time.

DF: I appreciate that.

SM: Other than our common faith the most important subject that I could comment on might be desktop Linux.

DF: And it’s been a while since I’ve written about either of those. Too long.

SM: In the past I’ve used Mandrake and Suse briefly, and Red Hat 7.2/7.3 more extensively. As a server, Red Hat 7.3, booted to runlevel 3, runs until the power at my West Houston home goes off long enough to outlast my UPS. On the other hand, as a desktop OS, Red Hat 7.3 with KDE or Ximian Gnome would crash one to three times per week, and Ximian Gnome would get corrupted, requiring me to delete various ~/.gnome* config files or files in /tmp to fix it, which most users would not be able to fathom or guess at.

DF: The more advanced desktop environments seem to be pretty sensitive to something or other. I haven’t figured out what exactly. That’s part of the reason why I run IceWM on Debian on my desktop; it’s stable. Running Gnome apps under IceWM on Debian “Unstable” (the experimental, bleeding-edge Debian distro), I’ve been chasing a slow memory leak that eventually consumes all available physical memory and eventually leads to a crash, but it takes a month or two. More on what I think is going on in a minute.

SM: Red Hat 8.0 on my primary workstation, on the other hand, is currently at 43 days uptime. NO CRASHES; once or twice I have restarted the X server, and once I had a problem with the Gnome config files. I know that you use Debian mostly, but Red Hat, Lindows, Mandrake, Lycoris, or the like will be the ones to have a mass impact on the desktop. Seems like Lycoris or Lindows was Debian-based, though.

DF: I know Lindows is based on Debian. I don’t know Lycoris’ origin. You are correct that Debian will have minimal impact on the desktop, at least in the home. Debian doesn’t give a rip about commercial success and it shows.

I saw Red Hat 8 and Mandrake 9 recently and I was impressed at how far they’ve come. I haven’t touched Red Hat since 6.2 or Mandrake since, well, 7.2 probably. They looked stable and fast. And I saw a minimal (no options picked) Mandrake 9 install the other night. It was 144 megs. I remember not long ago trying to do minimal Red Hat and Mandrake installs and they were 300 megs, at least. That’s definitely a step in the right direction.

SM: Anyway, besides much greater stability, I have enough functionality for most of my needs in programs like Open Office, gnucash, Mozilla or Galeon, Evolution or KMail, etc. Some may have other needs, only met via Windows only programs, of course. I have noticed that RH 8.0 seems on occasion to be slow, but not most of the time. The menus are a little funny … easy to add to the KDE menus, but they don’t always seem to work. With Gnome, it’s easier to add a custom panel to add a non-default application, but it does work then.

DF: Linux currently meets most of the needs I observe on the typical user’s desktop. Not necessarily power users, but for the basic users who are interested in typing simple documents like letters and memos, simple spreadsheets (and let’s face it, an awful lot of spreadsheets use very basic math, if any at all), e-mail, Web browsing, chat, and listening to music, Linux provides solutions that are as good as, if not superior to, those that run on Windows.

I also observe how many users don’t know how to add an application to Windows’ Start menu, or desktop, or that quick-launch thing on the taskbar. It may be easier on Windows, but it’s still not easy enough for most people.

Of course, this is coming from someone who keeps at least one shell window open at all times in Linux and launches apps from there because it’s faster and easier for me to type the first few letters of an app and hit tab and then enter than it is to navigate a menu. For people like me, Linux is much, much superior to Windows and always will be.

SM: RH 8.0 did recognize my nVidia card, but did NOT enable OpenGL 3D acceleration. I had to install the nVidia drivers from the nVidia web site to get OpenGL acceleration; apparently Red Hat has decided not to support that at this time. Another oddity is that I have had to turn on the CD sound to play audio CDs by using the KDE mixer. I can’t seem to do it with the Gnome mixer, and I don’t know where to hack a config file or file permissions to fix this.

DF: Given Red Hat’s history with KDE, it’s ironic that some things work better in KDE than Gnome on Red Hat. Nvidia’s decision to only provide binary drivers (not source) hasn’t proven popular with a lot of Linux distributors, which probably has a lot to do with the OpenGL issues. Red Hat isn’t going to go out of its way to make nVidia look good, and might actually go out of its way to make nVidia not look as good as ATI or Matrox or other companies who are willing to provide straight source, taking the chance that users will blame nVidia rather than Red Hat or Linux. (That’s not a particularly safe bet, but it’s not out of character, given past history.)

SM: Other things… Evolution crashes a lot. I’ve given up and started using KMail (for IMAP since I use my own mail server with IMAP). Galeon is good, but it seems that I had some printing issues and I’ve been using Mozilla more. I’ll have to see how the Phoenix browser comes along…it might be the best choice. Flash and Java required a manual install.

DF: Evolution is stable for me in Debian (more stable than Outlook 2000 under Windows 2000) but I’ve heard that complaint. I have to wonder if Evolution might be picky about the libraries it’s linked to and what it’s compiled with and how? Debian is really conservative; Red Hat is much more apt to use C compilers that haven’t proven themselves just yet. It’s great that GCC 3.2 is so much faster, but if that speed is still coming at the price of stability, let’s back off, eh?

I like Galeon but I don’t print Web pages much. Phoenix is turning into a very nice browser. Lately I’ve been using Mozilla nightly builds for the spam filtering in the mail client and no other reason.

SM: All in all, maybe Red Hat 8.0 is still a distro better suited for corporate environments that have IT personnel around to hand-hold, and which need only modest desktop application abilities. But it’s coming quite close to the fabled “Aunt Minnie”-friendly OS that will really give Microsoft fits.

DF: It’ll take time to get mainstream appeal but I believe it will. Linux PCs in Wal-Mart are a very good thing, because they give the platform exposure and feedback. The press hasn’t been too kind to the Linux PCs sold there, but if the criticisms are addressed, things will get better, faster, for all distributions. Windows was nothing but a really bad Mac wanna-be for 10 years, but it ripened because it infiltrated mass-market PCs. The press applauded Microsoft as it washed its dirty laundry in public. Linux won’t get that same treatment, but I’ll take a critical press over a kiss-butt press any day of the week if the goal is product maturity. Windows has been 20 years in the making, but XP still crashes too much.

And as far as Red Hat vs. Debian goes, I may have to give Red Hat another look as a desktop OS soon.

SM: Most of your comments seem to center around Linux and server applications. This is not trivial or unimportant. However, I think that the time for desktop Linux may be getting quite close, and I’d be interested in your comments if you feel so inclined.

DF: My focus has changed in the past year. Two years ago, I did desktop support, and server work in emergencies. About a year ago, I started moving into server support and only did desktop support in emergencies. It’s been a year since I’ve dealt with end users on a regular basis, so I don’t know as much what’s wanted or needed on the desktop anymore and I definitely don’t think about it nearly as much since I’m almost never confronted with it.

I think my thoughts on it are still worth something, since it’s only been a year, but that kind of experience definitely doesn’t age well.

Getting back to the desktop, the apps we need are in place. What they need most now are must-have features that Microsoft won’t supply, or won’t supply quickly. Bayesian spam filtering in Mozilla is a prime example of Open Source beating MS to the punch. A great idea showed up on Slashdot, some early implementations showed up immediately, and within a month or two, it’s in Mozilla’s alpha builds. The public at large will have a usable implementation within a couple of months. And there will be others. I suspect we’ll see lots of examples of it in digital media. I mean, whose design would you rather use, the design of someone concerned only with corporate interests, or the design of a group of users concerned with their fair-use rights and yours and mine?

SM: Anyway, maybe you’ll find my observations to be of interest.

DF: Always.

How to get mod_gzip working on your Linux/Apache server

My research yesterday found that Mandrake, in an effort to get an edge on performance, used a bunch of controversial Apache patches that originated at SGI. The enhancements didn’t work on very many Unixes (presumably they were tested on Linux and Irix) and were rejected by the Apache group. SGI has since axed the project, and it appears that only performance-oriented Mandrake is using them.

I don’t have any problem with that, of course, except that mod_gzip seems to be incompatible with these patches. And mod_gzip has a lot of appeal to people like me. What it does is intercept Apache requests, check for HTTP 1.1 compliance, then compress content for sending to browsers that can handle compressed data (which includes just about every browser made since 1999). Gzip generally compresses HTML by about 80 percent, so suddenly a DSL line has a whole lot more bandwidth: three times as much.

Well, trying to make all of this work by recompiling Apache had no appeal to me (I didn’t install any compilers on my server), so I went looking through my pile-o’-CDs for something less exotic. But I couldn’t find a recent non-Mandrake distro, other than TurboLinux 6.0.2. So I dropped it in, and now I remember why I like Turbo. It’s a no-frills server-oriented distro. Want to make an old machine with a smallish drive into a firewall? The firewall installation goes in 98 megs. (Yes, there are single-floppy firewalls but TurboLinux will be more versatile if you’re up to its requirements.)

So I installed Apache and all the other webserver components, along with mtools and Samba for convenience (I’m behind a firewall so only Apache is exposed to the world). Total footprint: 300 megs. So I’ve got tons of room to grow on my $50 20-gig HD.

Even better, I tested Apache with the command lynx http://127.0.0.1 and I saw the Apache demo page, so I knew it was working. Very nice. Installation time: 10 minutes. Then I tarred up my site, transferred it over via HTTP, untarred it, made a couple of changes to the Apache configuration file, and was up and going, sort of.
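In case you’re wondering, “transferred it over via HTTP” is nothing fancier than this (file and path names made up):

tar czf site.tar.gz /var/www
# put site.tar.gz somewhere the old server can serve it, then on the new box:
wget http://oldserver/site.tar.gz
tar xzf site.tar.gz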

I still like Mandrake for workstations, but I think Turbo is going to get the nod the next few times I need to make Linux servers. I can much more quickly and easily tailor Turbo to my precise requirements.

Now, speaking of mod_gzip… My biggest complaint about Linux is the “you figure it out” attitude of a lot of the documentation out there, and mod_gzip may be the worst I’ve ever seen. The program includes no documentation. If you dig on the web site, you find this.

Sounds easy, right? Well, except that’s not all you have to do. Dig around some more, and you find the directives to turn on mod_gzip:

# [ mod_gzip sample configuration ]

mod_gzip_on Yes

mod_gzip_item_include file .htm$
mod_gzip_item_include file .html$
mod_gzip_item_include mime text/.*
mod_gzip_item_include mime httpd/unix-directory

mod_gzip_dechunk yes

mod_gzip_temp_dir /tmp

mod_gzip_keep_workfiles No

# [End of mod_gzip sample config]

Then, according to the documentation, you restart Apache. When you do, Apache bombs out with a nice, pleasant error message–“What’s this mod_gzip_on business? I don’t know what that means!” Now your server’s down for the count.

After a few hours of messing around, I figured out you’ve gotta add another line, at the end of the AddModule section of httpd.conf:

AddModule mod_gzip.c

After adding that line, I restarted Apache, and it didn’t complain. But I still didn’t know whether mod_gzip was actually doing anything, because the status URLs didn’t work. Finally I added the directive mod_gzip_keep_workfiles yes to httpd.conf and watched the contents of /tmp while I accessed the page. Sure enough, something was dumping files there. The timestamps matched entries in /var/log/httpd/access_log, so I at least had circumstantial evidence that mod_gzip was running.
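If you have wget on a nearby box, there’s a quicker way to get better-than-circumstantial evidence: request a page while claiming gzip support, and watch the response headers for Content-Encoding: gzip.

wget -S --header='Accept-Encoding: gzip' -O /dev/null http://127.0.0.1/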
