An 8086-series microprocessor, the 8088, powered the original IBM PC. Its direct descendants power PCs to this day, and they power modern Macs too. The architecture has always been controversial, especially when Apple moved Mac OS to Intel chips. Why? What are the disadvantages of the 8086 microprocessor?
I recently saw advice to buy a Cisco RV130W instead of buying an Asus router such as an RT-AC66U and souping it up with Asuswrt-Merlin. I can see both sides of the argument but in the end I favor the Asus solution when I consider Asuswrt-Merlin vs Cisco. Here’s why.
Now, if you’re arguing business vs personal use, there’s no contest. In a business setting, buy the Cisco.
My neighbor asked me for advice on setting up wi-fi in his new house. I realized it’s been a while since I’ve written about wi-fi, and it’s never been cheaper or easier to blanket your house and yard with a good signal.
Blanketing your house and yard while remaining secure, though, is still important.
Ars Technica said yesterday that Mozilla needs to make 64-bit Firefox on Windows a high priority. I agree with this completely. With web browsers, you can’t have too much security, and Firefox on Windows is a big target.
I’ve alluded in the past to why it’s a good idea to make a DMZ with two routers, but I’ve never gone into depth about how to do it, or why it’s necessary.
If your ISP gave you a combination modem/switch/access point/router that only supports 100-megabit wired and 54-megabit (802.11g) wireless, and you want to upgrade to gigabit wired and 150-megabit (802.11n) wireless, here’s a great way to make the two devices work together and improve your security in the process.
I’ve resisted the pull to 64 bits, for a variety of reasons. I’ve had other priorities, like lowering debt, fixing up a house, kids in diapers… But eventually the limitations of living with 2003-era technology caught up with me. Last week I broke down and bought an AMD Phenom II 560 and an Asus M4N68T-M v2 motherboard. Entry-level stuff by today’s standards. But wow.
If you can get one, an AMD Phenom II x4 840 is a better choice, but those are getting hard to find. And if you can’t afford a $100 CPU there are bargains at the very low end too: A Sempron 145 costs less than $45, and a dual-core Athlon II x2 250 costs $60. The second core is worth the money.
It’s not enough to know what to look for in a router. I wanted to get some solid advice on wi-fi network security. Who better to give that advice than someone who built an airplane that hacks wi-fi? So I talked to WhiteQueen at http://rabbit-hole.org, the co-builder of a wi-fi hacking airplane that made waves at Defcon.
Hacker stereotypes aside, WhiteQueen was very forthcoming. He’s a white hat, and I found him eager to share what he knows.
An old idea hit me again recently: Why can’t you use the memory that’s sitting unused on your video card (unless you’re playing Doom) as a ramdisk? It turns out you can, just not if you’re using Windows. Some Linux people have been doing it (http://hedera.linuxnews.pl/_news/2002/09/03/_long/1445.html) for two years.

Where’d I get this loony idea? Commodore, that’s where. It was fairly common practice to use the video RAM dedicated to the C-128’s 80-column display for other purposes when you weren’t using it. As convoluted as PC video memory is, it had nothing on the C-128, where the 80-column video chip was a netherworld accessible only via a handful of chip registers. Using the memory for anything else was slow and painful, but it was still a lot faster than Commodore’s floppy drives.
So along comes someone on Slashdot, asking about using idle video memory as swap space. I really like the idea in principle: The memory isn’t doing anything, and RAM is at least an order of magnitude faster than disk, so even slow memory is going to give better performance.
The principle goes like this: You use the Linux MTD module and point it at the video card’s memory in the PCI address space. The memory is now a block device, which you can format and put a filesystem on. Format it ext2 (who needs journaling on a ramdisk?), and you’ve got a ramdisk. Format it as swap, and you’ve got swap space.
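A rough sketch of what that might look like with the MTD slram driver. The aperture address and size here are only examples; you’d find the real values for your card with lspci, and everything below needs root:

```shell
# Find the video card's memory aperture. Look for a large
# "Memory at ..." region on the VGA device; the address used
# below (0xe0000000) is purely an example.
lspci -v

# Map 64 MB of that aperture as an MTD device using the slram
# driver. The map parameter is name,start,end (+len for length).
modprobe slram map=vram,0xe0000000,+0x4000000

# Expose the MTD device as a block device (/dev/mtdblock0).
modprobe mtdblock

# Turn it into swap space.
mkswap /dev/mtdblock0
swapon /dev/mtdblock0
```

Whether this is safe depends entirely on the card and chipset; some setups reportedly lock up if the X server touches that memory while the kernel is using it, so experiment on a machine you can afford to crash.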
The downside? Reads and writes don’t happen at the same speed with AGP. Since swap space needs to be fast in both directions, this is a problem. It could work a lot better with older PCI video cards, but those of course are a lot less likely to have a useful amount of memory on them. It would also work a lot better on newer PCIe video cards, but of course if your system is new enough to have a PCIe card, it’s also likely to have huge amounts of system RAM.
The other downside is that CPU usage tends to jump dramatically while accessing the video RAM.
If you happen to have a system that has fast access to its video RAM, there’s no reason not to try using it as swap space. On some systems it seems to work really well. On others it seems to work really poorly.
If it’s too slow for swap space, try it as a ramdisk. Point your browser cache at it, or mount it as /tmp. It’s going to have lower latency than disk, guaranteed. The only question is the throughput. But if it’s handling large numbers of small files, latency matters more than throughput.
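Assuming you’ve already got the video RAM exposed as an MTD block device as described above, the ramdisk variant might look like this (device and mount point names are just examples):

```shell
# Format the video-RAM block device as ext2. -m 0 reserves no
# blocks for root, since this is scratch space, not a system disk.
mke2fs -m 0 /dev/mtdblock0

# Mount it over /tmp so small temporary files land in video RAM.
mount /dev/mtdblock0 /tmp
chmod 1777 /tmp

# Alternatively, mount it somewhere else and point a browser
# cache directory at it:
# mkdir -p /mnt/vramdisk
# mount /dev/mtdblock0 /mnt/vramdisk
```

Anything on it disappears at power-off, of course, which is exactly what you want for /tmp or a browser cache.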
And if you’re concerned about the quality of the memory chips on a video card being lower than the quality of the chips used on the motherboard, a concern some people on Slashdot expressed, using that memory as a ramdisk is safer than using it as swap space. If there’s slight corruption in the memory, the filesystem will report an error. Personally I’m not sure I buy that argument, since GPUs tend to be even more demanding on memory than CPUs are, and the consequences of using second-rate memory on a video card could be worse than just some stray blips on the screen. But if you’re a worrywart, using it for something less important than swap means you’re not risking a system crash.
If you’re the type who likes to tinker, this could be a way to get some performance at no cost other than your time. Of course if you like to tinker and enjoy this kind of stuff anyway, your time is essentially free.
And if you want to get really crazy, RAID your new ramdisk with a small partition on your hard drive to make it permanent. But that seems a little too out there even for me.
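For the adventurous, the RAID idea could be sketched with mdadm. The disk partition name here is hypothetical, and --write-mostly marks the slow disk half so reads favor the RAM half; on reboot the array would have to resync the RAM side from disk:

```shell
# Mirror the video-RAM block device with a small disk partition
# of the same size, so the contents survive on disk across reboots.
# /dev/hda5 is a placeholder; use a real spare partition.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/mtdblock0 --write-mostly /dev/hda5

# Put a filesystem on the mirror and mount it.
mke2fs /dev/md0
mount /dev/md0 /mnt/fastdisk
```

Every write hits the disk anyway, so this only helps read-heavy workloads, which may be why even the author filed it under "a little too out there."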
This is a response to the eWeek editorial Bring DIY Systems to Work. Nice theory. Unfortunately, lab theory and the real world don’t always mesh.
I like building PCs. I built my first PC in early 1994, back when everything was on a separate card and you had to set interrupts and DMA channels using jumpers and DIP switches and in most cases you had to tell the BIOS exactly what size drive was in it–it wouldn’t detect anything for you. I built my main PC at home myself. I built my secondary and tertiary PCs at home myself too. And my girlfriend’s PC, and my mom’s PC, and my sister’s PC.
Get the idea?

For the first couple of years of my career, I did DIY at work. It made sense then. IBM was selling us one-size-fits-all PCs with a lot more capability than most secretaries needed. This was a state university, our budget was being cut, and in 1998 the PC on most people’s desks was still a 486–sometimes the crippled IBM 486SLC2, which was really just a 386SX with the 486 instruction set added, clock-doubled to 66 MHz, and given a supersized L1 cache, but saddled with a 16-megabyte address space–so we were stuck.
My proposal was that since we couldn’t afford $1,500 PCs for everyone, we build $500 PCs. I’d build $500 PCs by cutting corners where appropriate. In my estimation, there was no reason for a secretary to have a sound card. Or a fancy video card. So I’d skip the sound card, buy the cheapest PCI video card I could find that still had some GUI acceleration, drop in an AMD or Cyrix CPU instead of a Pentium–these computers were for running Word, not Quake–and, since this was business use and we wanted the computers to last as long as possible, I’d splurge on the hard drive, buying the fastest model I could find and stay within budget.
At the time, this made all kinds of sense. Emachines didn’t exist yet, so there was a whole lot of nothin’ in the sub-$500 space. The computers made fantastic productivity boxes but lousy game systems. We wouldn’t have to worry about people loading games illegally on the systems–chances were the games wouldn’t install anyway, and if they would they’d run so poorly no one would bother.
Then we learned the downside the hard way. Support was all us. If a component failed, we had to find a spare to get the system back up and running, then ship the bad part off and convince the vendor it was bad. The vendor might replace it, or might refer us to the manufacturer, who would want to know why we thought it was bad and might want us to run some tests. In a home environment where you have two PCs, this is a minor hassle. In a business environment where you have a few hundred PCs, you’re going to have some failures–even the big boys have them–and it might take half an FTE just to take care of this stuff.
Unfortunately, we didn’t really have the budget to keep much in the way of spare parts. An awful lot of components from the old 486s ended up getting recycled, even though they had no business being recycled. But when you have a dead hard drive and nothing but a 500-meg drive to replace it, what do you do?
I put it in and listened to a lot of complaints.
Overall I don’t regret it. At the time it was the only way to accomplish what we needed to accomplish.
Times changed quickly, however. The $1,000 PC existed in 1998. Late in 1998, eMachines came along and rocked everyone’s world. Those early eMachines were underpowered, but they launched a price war. Today, you can buy a business PC for what it would cost to build one. If you buy in any kind of quantity, you can usually get it cheaper. You get to deal with one vendor instead of five (assuming the best-case scenario of a system consisting of an integrated motherboard, memory, hard drive, case, and monitor). And when a computer built by someone else breaks, some of the blame goes to you and some of it goes to the company that built it. Having had it both ways, I like being able to share the blame.
Additionally, the corners I cut in 1998 can’t really be cut anymore. Integrated motherboards with video, sound, networking, and basically everything you need exist today, and they cost 50 bucks. AMD and Intel sell cheap CPUs and VIA sells really slow and cool-running CPUs. In 1998, AMD, Cyrix, and IDT (WinChip) were all willing to sell cheap CPUs, and a fourth company, Rise, was going to release one as well. (If Rise ever did release its cheap CPU, I never saw one. But the threat was there.)
By building your own, you might be able to save 50 bucks. But you’ll spend more than 50 bucks in labor to put the thing together and burn it in.
As far as the suggestion of using DIY servers, forget it. There is no benefit to upgrading a server the way you would a desktop PC. In anything bigger than a small business, people howl when you take the server down to patch Internet Exploiter. Do you think they’ll let you take it down to replace a motherboard? No way.
Would you want to do it anyway? No. We still have a couple of servers from 1996 running. They’re slow, but they’re still getting the job done.
I agree with the author that for the price of a service contract you can keep a lot of spare parts. But that means you have to have a place to store them. You also really need to verify that they work before you store them, because when a server is down, the replacement part needs to work. I’ve had three hardware failures on servers within the past week. (It sounds like a lot, but when you have 125 of them and can’t remember when the last failure was, it’s not nearly as bad as it sounds.) That service contract is worth the money for it to be someone else’s problem.
Besides, only really cheap servers use desktop components. When you can find server-grade motherboards, they aren’t cheap. You might save 50 bucks by building your own server. But again, you’ll spend 50 bucks in labor.
Even when we were building our own PCs, we still bought our servers from IBM. I remember one horrible weekend when one of the servers failed. Between IBM’s mighty resources and ours, we were able to get it back up and going over the course of a weekend. Without IBM behind us, it might have taken us a week. A very long week.
If we’d ever had any thoughts of building our own servers, they evaporated during that grueling 72-hour time frame.
If building PCs is something you enjoy, great. Make it your hobby. Unless your business only has a dozen or two PCs in it, you don’t have time to be doing it at work too.
I don’t remember how I stumbled across it, but textfiles.com tries to collect documents from the classic days of BBSing, which the curator defines as having ended in 1995. I wouldn’t have thought it that recent. I was still BBSing in the summer of ’94, but by the fall of ’94 I’d discovered the Web, and I thought I was the last one to wake up to it.
I’d learned FTP and Gopher when I went to college in 1993, and I’d been using Usenet via local BBSs for even longer, but as everyone knows now, it was the Web that put the Internet on the map. I think a lot of people think the Web is the Internet.
Anyway, before the Internet, hobbyists would take computers, get a phone line, hook up a modem, and see who called. There were usually discussion boards, file transfers, and at least one online multiplayer game. The really big BBSs ran on 386s with hard drives, but an awful lot of the BBSs I called ran on 8-bit computers and stored their data on floppy drives. I remember one board I called used seven or eight floppy drives to give itself a whopping 6 or 7 megs of online storage. It was called The Future BBS, and the sysops’ real names were Rick and Jim (I don’t remember their handles). It ran on a Commodore 64 or 128 with, ironically, a bunch of drives that dated back to the days of the PET–Commodore had produced some 1-meg drives in the early 80s that would connect to a 64 or 128 if you put an IEEE-488 interface in it. Theirs was a pretty hot setup, and it probably filled a spare bedroom all by itself.
It was a very different time.
Well, most of the boards I called were clearinghouses for pirated software. It was casual copying; I didn’t mess with any of that 0-1 day warez stuff. We were curmudgeons; someone would wax nostalgic about how great Zork was and how they didn’t know what happened to their copy, then someone would upload it. I remember on a couple of occasions sysops would move to St. Louis and complain about how St. Louis was the most rampant center of software piracy they’d ever seen, but I see from the files on textfiles.com that probably wasn’t true.
Besides illegal software, a lot of text files floated around. A lot of it was recipes. Some of them were “anarchy” files–how-to guides to creating mayhem. Having lots of them was a status symbol. Most of the files were 20K in length or so (most 8-bit computers didn’t have enough address space for documents much longer than that once you loaded a word processor into memory), and I knew people who had megabytes of them in an era of 170K floppies.
A lot of the stuff on the site is seedy. Seedier than I remember the boards I called being.
But a lot of the content is just random stuff, and some of it dates itself. (Hey, where else was I going to find out that the 1982 song “Pac-Man Fever” was recorded by Buckner & Garcia? Allmusic.com forgot about that song. If I recall correctly, that’s probably proof that God is merciful, but hey.)
Mostly I find it interesting to see what people were talking about 10 and 20 years ago. Some of the issues of yesterday are pretty much unchanged. Some of them just seem bizarre now. Like rumors of weird objects in Diet Pepsi cans.
Actually that doesn’t sound so bizarre. I’m sure there’s an e-mail forward about those in my inbox right now.