No surprises in the PC Magazine reliability/service survey

It’s that time of year again. Time for PC Magazine’s annual reliability and service survey. I’ve been reading it for almost half my life, and half a lifetime ago, it really meant something.

Today, the subtitle ought to be “What happens when you outsource.” So what does happen when you outsource? All the PCs are basically the same these days. It makes sense. We’re down to three or four suppliers for almost all of the chips on the motherboard, and everyone, including the big vendors, buys motherboards from one of a half dozen or so companies now. Some contract manufacturer in the Far East assembles them and puts some other company’s name on the result.

The good news is that if there’s a secret to building good, reliable PCs, it’s really poorly kept. The basic hardware is much more reliable today than a decade ago. Back when I sold computers at retail, I remember a Compaq sales rep complaining bitterly that Intel’s “Intel Inside” campaign was hurting them by making everyone think all computers were the same inside. At the time they weren’t. Compaq’s engineering and rigorous testing didn’t always produce the fastest PCs, but they were always near the top, and it did produce some really reliable stuff.

Would that same philosophy, applied to today’s technology, yield something better? It’s impossible to know. Compaq PCs are exactly the same as everyone else’s these days. The good news is the hardware is about as problem-free as it was back then. And so is everyone else’s. The only difference is the software the manufacturer loads on them.

You may be surprised, but even the bargain-basement eMachines scored high on reliability ratings. It turns out it’s cheaper to get things right the first time than it is to cut corners on quality and have to accept lots of returns. Their machines were dirt cheap, the company was profitable, and the reliability was good. That’s why Gateway bought them and then turned management of the combined company over to the eMachines management.

Speaking of Gateway, support is lousy almost across the board. People have always complained to me that the support people don’t know what they’re doing. Now it’s hard to know how much the phone techs know, because you can’t understand them.

Someone has got to realize this makes poor business sense and make a change. IBM knows, but IBM doesn’t sell PCs at retail anymore. In the early ’90s, Gateway had tremendous brand loyalty. Their PCs were terrible, but the tech support was friendly and determined. When Dell and others started undercutting Gateway’s prices, they cut costs by decimating their tech support. The result was lousy computers and no help getting the problem fixed. The only thing left to do was to buy eMachines, whose management had walked into a similarly bad situation in 2001 and righted it.

It’s pretty obvious to me that the way to break this logjam of sameness is to offer first-rate technical support. I want to believe that the first company that moves its technical support back to the United States and advertises the fact could even get away with charging a premium price.

In the meantime, you stand to get slightly better support by buying from a retail store rather than over the phone or web, if only because the store will be able to help you with basic questions. The quality of in-store help varies widely, but if you find good help in the store, get that person’s name and ask for him or her if you have to come back. Most people who are really good don’t stay in retail for long–at least one company here in St. Louis scouts the retail stores’ computer help and tries to hire away anyone over the age of 21 or 22 who seems to be any good–but you may get some good help in the meantime. Use the manufacturer’s support as a backup, if the store will let you.

This is still a blog

A year or two ago, I wrote a piece called “This is a blog” in response to an overly full-of-himself author who said that serious professionals don’t blog. It infuriated some people and got me kicked off the daynotes.com web page. I don’t have anything like that to lose this time around, so I don’t approach the topic with the same kind of eagerness–you’re always more eager when you know someone’s going to be offended and throw a temper tantrum. But since everyone and his uncle seems to be writing about John C. Dvorak’s current PC Magazine column, “Co-opting the future,” I might as well weigh in, since it’s the in thing to do, and disagree with the majority knee-jerk reaction, since that isn’t the in thing to do. But I won’t do it to be counterculture. No, I’ll disagree with the majority reaction because the majority reaction is wrong.

Yes, I find it funny that the guy who was recommending novelty domain names as Christmas presents back when a domain name still cost $99 a year is today opposed to blogs. What else is someone going to do with a personalized domain name? I’ll tell you what I’d do if someone gave me the davefarquhar.com domain–I’d run a mail server on it and I’d hang my blog off it. Dvorak would run a mail server off it and post some recipes on it and some pictures of his pets. But my site would be more useful–at least blogging software provides a search engine so you can find the stuff. Isn’t it tacky to tell people to go to Google and type what they’re looking for, followed by site:yourdomainnamehere.com?

But unlike the vigilante masses, I don’t take issue with the majority of what Dvorak says. So he cites a paper that says the majority of blogs get abandoned. The blogosphere goes nuts. Well, I’m sorry, folks, but Dvorak’s right. Go to any public blogging community and start navigating random sites, and you’re going to find a lot of abandonware. It’s like any other hobby. It’s great while the novelty lasts. But eventually the newness fades away. Some people abandon their blogs for a while, for various reasons, then come back. Hey, in September and October I posted about as much as I used to post in a week. It happens. I came back because I love writing. Some people find they don’t love writing. Some people find they love writing but they run out of things to say. It happens. Large numbers of people trying it and deciding they don’t like it doesn’t invalidate it. How many millions of cameras sit in closets, only to be taken out during birthdays and holidays, if then? Does that somehow invalidate photography?

Then Dvorak says the people who stick with blogging are professional writers. Interestingly, the people rebutting Dvorak bring up the blogs written by–guess who?–professional writers. Now I don’t see how that invalidates Dvorak’s point that the longest lasting, most popular blogs tend to be written by people who do it professionally. I think it’s obvious. If you’re going to write professionally, you have to love it. And if you love writing, you’re more likely to blog.

In other news, computer professionals are more likely than others to build their own computers, dogs are more likely to bark than cats, the sky is blue, and if there’s snow on the ground it’s probably cold outside.

The really incendiary statement Dvorak quotes is that the majority of blogs have an audience of about 12 people. Sometimes reality hurts. I remember checking my logs in my early days and being shocked when I had 40 visitors. Then I was shocked when I found out some people looked up to me because I had 40 visitors. I thought I was the only small-market guy.

Eventually, one of three things happens to every small-audience blogger. Some get frustrated and quit. Others toil on in obscurity. Still others one way or another stumble onto something that people like and they grow their audience.

Today, my audience is closer to 12 hundred people. That doesn’t make me a superstar, but it’s not bad. Some people I remember celebrating their first 20-reader day five years ago aren’t doing it anymore. Others are, and they probably get 1,200 people a day too. Or more.

I didn’t like Dvorak’s tone, but Dvorak will be Dvorak. I didn’t like Dvorak’s tone when he wrote about OS/2 either, and I think Dvorak’s personal crusade against the caps lock key is idiotic and annoying. He needs to just download a utility that remaps it to a control key and shut up. Those of us who really know how to type will continue to use it when we need it. So Dvorak doesn’t like blogs either. If I only ever read people who agree with me, I wouldn’t ever read.
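In fairness, the remap really is a five-minute job. Here’s a rough sketch of what such a utility does on NT-class Windows–my illustration, not Dvorak’s actual tool–assuming Python with the standard winreg module, administrator rights, and a reboot afterward. The scancodes are the documented ones for Caps Lock and Left Ctrl.

```python
# Sketch: make Caps Lock behave as Left Ctrl by writing the Scancode Map
# value that Windows 2000/XP reads at boot. Run as administrator, then reboot.
import winreg

scancode_map = bytes([
    0x00, 0x00, 0x00, 0x00,  # header: version (always zero)
    0x00, 0x00, 0x00, 0x00,  # header: flags (always zero)
    0x02, 0x00, 0x00, 0x00,  # entry count: one mapping plus the terminator
    0x1D, 0x00, 0x3A, 0x00,  # send Left Ctrl (0x1D) when Caps Lock (0x3A) is hit
    0x00, 0x00, 0x00, 0x00,  # terminator
])

key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Control\Keyboard Layout",
    0,
    winreg.KEY_SET_VALUE,
)
winreg.SetValueEx(key, "Scancode Map", 0, winreg.REG_BINARY, scancode_map)
winreg.CloseKey(key)
```

Delete the Scancode Map value and reboot, and the key goes back to being Caps Lock. Those of us who want it can keep it.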

The only thing I really disagreed with was Dvorak’s assertion that big media is taking over the blogs. Yes, big media is blogging. But the little guys will always outnumber big media. There’ll always be professional writers who blog on their own time to keep sharp or to experiment. There’ll be part-time pros like me who don’t like big media and don’t like most editors–well, I can name four editors I worked with who I liked–who blog because it’s a way to write and stay in touch with the craft and be true to one’s self. There’ll be up-and-comers who are in high school or college and decide to start blogging as part of the process of finding one’s self. There’ll be people who do it just as a hobby.

And guess what? Google starts out with no assumptions. It treats all links the same. That’s why little guys like me can get 1,200 hits a day.
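If you want to see why, here’s a toy sketch of link-based ranking–mine, not Google’s actual algorithm, which is far more elaborate and a trade secret besides. The point survives the simplification: rank flows along links, and at the start every page and every link counts the same, no matter who published it.

```python
# Toy link-based ranking by power iteration (PageRank-style).
# Every page starts with equal rank, and each page splits its rank
# evenly among the pages it links to. No editorial weighting anywhere.
def rank_pages(links, damping=0.85, rounds=50):
    pages = set(links)
    for targets in links.values():
        pages.update(targets)
    rank = dict.fromkeys(pages, 1.0 / len(pages))
    for _ in range(rounds):
        fresh = dict.fromkeys(pages, (1.0 - damping) / len(pages))
        for page, targets in links.items():
            for target in targets:  # every link passes rank along
                fresh[target] += damping * rank[page] / len(targets)
        rank = fresh
    return rank

# A link from a one-man blog counts like a link from a media giant.
web = {
    "bigmedia.example": ["littleguy.example"],
    "somereader.example": ["littleguy.example"],
    "littleguy.example": [],
}
print(rank_pages(web))
```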

And next week Dvorak will be off on another crusade. There’s about a 50% chance of him being right. I’ve known that since I started reading the guy a decade ago.

My experience with online dating doesn’t match PC Magazine’s

OK, I guess it’s time I come out of hiding and make a confession: I’ve used an online dating service. And, if I found myself single and unattached again, I’d probably do it again.
I don’t know if the stigma around online dating still exists, but the inescapable fact is I’m terribly shy in person, especially with women. But I can write a little and when you read a little bit of what I’ve written, you get to know me pretty well. So the computer allows me to get past that shyness.

I saw the service I used reviewed on PC Magazine’s web site this week. The review was pretty critical. Every other review I’ve seen gushed. And truth be told, in early September I was saying pretty nasty things about it myself. I even told some people to stay away from it. Then it turned things around for me after a month. Maybe two. I can’t remember the time frame anymore.

The service is eharmony.com. There, I got that out of the way. Now let me tell you that if you heard about it from Dr. Dobson–which is where I originally heard about it, second- or third-hand–Dobson was gushing about it. Frankly I don’t care much what Dobson has to say about singlehood. Live 10 years listening to people ask what’s wrong with you because you don’t have a girlfriend, and then I’ll listen to you. I’m not terribly interested in the opinions of this week’s fifty-something who got married in his early 20s on how to cure the disease called singlehood in the early 21st century. (Since when is it a disease, anyway?)

Contrary to what Dobson’s gushing might have you believe, eharmony isn’t a magic bullet. Now don’t get me wrong: It does have potential. When one of my friends called me up all excited about it and described its process, I was willing to humor him. It starts out with a psychological profile. I remember doing a psychological profile using a program called Mind Prober on a Commodore 64 in the late 1980s. It did a pretty good job of profiling me. It got a few details wrong, but I grew into those. Spooky, huh? So if a computer with 64K of memory, 1 megahertz of processing power, and 340K of available secondary storage could profile me, a modern computer could do just fine so long as the profiling algorithm and data are good. So I believe in computer psychological profiling.

Another part of the idea is that you interview thousands of married couples. Happily married couples who’ve been that way for a very long time. That’s a small percentage of people who get married. Take a large sample set, profile them, and you can eventually get an idea of what personality traits are compatible long-term. Nice theory. I buy that. I’ll definitely take it over guesswork.
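To make that concrete, here’s a toy sketch of the idea–mine, not eharmony’s actual algorithm, which they keep to themselves. Profile a sample of happily married couples, learn how far apart they typically sit on each trait, then score new pairs against that yardstick. The traits and the numbers here are invented.

```python
import statistics

# Toy trait vectors: each person scored 1-5 on three made-up traits
# (say introversion, importance of faith, spontaneity).
happy_couples = [
    ((4, 5, 2), (4, 4, 2)),
    ((2, 5, 4), (3, 5, 4)),
    ((5, 3, 3), (5, 3, 2)),
]

# Learn, per trait, how far apart the happy couples typically sit.
tolerance = [
    statistics.mean(abs(a[i] - b[i]) for a, b in happy_couples)
    for i in range(3)
]

def compatibility(a, b):
    # Penalize each trait gap relative to what the sample tolerates;
    # scores closer to zero suggest a more promising pairing.
    return -sum(abs(a[i] - b[i]) / (tolerance[i] + 0.5) for i in range(3))

print(compatibility((4, 5, 2), (4, 4, 3)))  # small gaps: score near zero
print(compatibility((4, 5, 2), (1, 1, 5)))  # big gaps: strongly negative
```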

PC Magazine expressed doubts over the use of science in finding love. Considering the success rate of the traditional methods, I’ll take whatever edge I can get.

Here’s what happened with me.

PC Magazine’s reviewer bemoaned her lack of initial matches. I was the opposite. Christian males seem to be a rarity, or at the very least, highly outnumbered. But I think I’ve gotten ahead of myself.

It started off with a questionnaire. It took PC Magazine’s reviewer 45 minutes to fill it out. I’m pretty sure it took me closer to an hour and a half. It’s important to consider the questions carefully and answer honestly. A lot of the questions were things I hadn’t thought about in a long time, if ever. By the time I was done, I felt like eharmony’s computer probably knew me better than most of my closest friends. It was that exhaustive. Some of the questions are about you, and some of them are about what you’re looking for. Again, it’s important to be honest. And specific. And picky. The important questions for me were about faith. I won’t date someone who doesn’t share that with me, period. It understood that. It went so far as to give me a list of denominations and ask which ones were OK and which ones weren’t. I ticked off all of the evangelical-minded denominations, then I ticked Lutheran, just because it felt weird to leave my own out. Then I un-checked Presbyterian, only because the girl who will always have the title of The Ex-Girlfriend was/is Presbyterian. We all have baggage, and that’s some of mine.

The system immediately found four matches. Over the course of 2-3 months (I don’t remember how long I stayed) it would find close to 20. I started exchanging questions with one of them right away. I don’t remember the exact process right now. I know early on you’d read a superficial profile of the person–excerpts from their interview. You’d learn things like where they’re from and how to make them smile. If you’re both interested in talking, you pick from a list of questions to exchange back and forth. The first set is multiple choice. One question I asked everyone, without fail, was “If you were going out to dinner with a friend, what kind of restaurant would you choose?” And there were four answers, ranging from a fancy restaurant to a greasy spoon. I wanted to weed out the snobs, which was why I asked that one. I think you got a second round, where the questions were still canned, but you got to write out your own answers, limited to a couple of paragraphs. (I usually pushed the limit. Surprise!) I don’t recall if there was a third, but if there was, it was a shorter-still number of questions, permitting a longer answer. When you got through that round, you entered “Open Communication,” which is basically e-mail, with no restrictions.

The first girl I talked to was from Defiance, Missouri, which is about 45 minutes northwest of St. Louis proper. As I recall, she was 30 and she worked in sales. She was really interested at first but got pretty cyclical. We’d talk a couple of times one day, then a week might pass. It didn’t pan out–one day I got the notice she’d chosen to close communication to concentrate on other matches. One nice thing about doing this online–rejection’s a lot easier when it’s not in person.

I can’t remember where the next girl I talked to was from. Across the river in Illinois but I don’t remember the town. She was 24 or 25, and worked in banking. We took off like a rocket. The first time we talked on the phone, we talked for three hours or something obnoxious like that. I had serious hopes for this match, until we met in person. Everything right had come out all at once, and then, everything wrong came out all at once. She found out I’m not as good at communicating in person as in writing. And she found out I can be distant. I had some red flags too. She seemed to want to move a lot faster than I would be able to, and there were personality traits that weren’t necessarily bad, but they just weren’t right for me. And I knew I would never live up to the expectations she had for me. I may be smart and I may be a nice guy, but I am still human. I felt pretty bad after this date. I stopped believing in the approach and took a serious look at what other options I might have.

Then along came the girl from Manchester, Missouri. She was a year older than me. She played guitar. She led Bible studies. She was a math teacher by trade. I was enamored before we even started talking. And it started off great. She answered every question with the response I was looking for. We started talking, and I thought we were going great. Then she got cold feet and started to withdraw. We talked on the phone a few times and it was pleasant, but she seemed to be big into rules and guidelines, whereas I’m more interested in learning the rules to follow and understanding them well enough to know when to break them. (The exception being 10 particular rules you never break, which you can find in Exodus.) We went on one double date. Once again I wasn’t as strong a communicator in person as in writing, and I got the distinct impression she wasn’t very interested in continuing. I came away questioning whether I’d even been myself. I’ve still got her phone number somewhere, but it’s been four or five months. I doubt I’ll ever use it.

Meanwhile, the girl from Troy, Illinois came into the picture. She and the girl from Manchester were contemporaries, but the girl from Manchester got the head start. I’m pretty sure it was the guitar. She was a student, age 21. I was concerned about the age gap. That was the only question mark about her. Her answers to my questions were mostly the second-best answers. The questions she chose to ask me puzzled me a bit–I had trouble figuring out what it was she wanted to know about me. (With all of the other girls, it was plain as day what they were trying to find out.) We stumbled into open communication, talked for a while, and I still couldn’t get over that age thing. Finally she asked me, “I don’t mean to be rude or anything, but where’s this going? Do I ever get to meet you?”

So we met in Belleville, then went to O’Fallon, had dinner, and drove around O’Fallon for a couple of hours, talking. My eharmony subscription was up for renewal in a week or so. I let it lapse.

I won’t go into specifics because our relationship is half her business, and I don’t make it a habit to go putting other people’s business on my blog. For the first two months we dated, she got irritated with me once. I’m pretty sure that’s a world record. Most people are doing well if they only get irritated with me once over a 24-hour period. Lately I screw up once or twice a month. Most couples I know are thrilled with just once or twice a day.

At one point I seriously questioned the relationship, even to the point where if I’d had to make a yes or no decision right then and there I would have ended it. But that’s not unusual and it’s healthy. And I’m used to being on the other end of that every couple of weeks.

The bottom line is, while we surprise each other, most of the surprises are good ones, and the bad surprises generally aren’t huge surprises. For about 25 years, the only women who understood me at all were my mom and my sister. She’s rocketed onto that list, and frankly, they all probably jockey for that #1 spot. Not bad for someone I first met in person in October. I think at this point my biggest complaint about her is that she doesn’t like mushrooms or olives. I’m sure she’s got bigger complaints about me but she keeps coming around anyway, so they can’t be too big.

I’m not going to say that eharmony is the only way to meet someone, and I won’t say it guarantees you’ll meet someone. I know in at least one case I was a girl’s only match, and it couldn’t have felt good when we flopped. It’s not a magic bullet, no matter what anyone says. I had 17 matches at one point and it still took three months to find someone I felt like I should be dating. Roughly a third were interested in me but I wasn’t interested in them, about a third I was interested in but they weren’t interested in me, and about a third had enough interest on both sides that we talked. If you’re looking for a date this week, you won’t find it on eharmony but you might very well find it somewhere else. And eharmony is definitely expensive.

But I was looking for something long-term, and I think I found it.

Like I said earlier, I’d go back. And that says something.

Why I dislike Microsoft

“Windows 2000,” I muttered as one of my computers fired up so my girlfriend could use it. “Must mean something about the number of bugs that’ll be discovered tomorrow.”
She told me she liked Windows and asked me why I hated Microsoft so much.

It’s been a while since I thought about that. She speculated that I was annoyed that Bill Gates is smarter than me. (Which he probably is, but aside from a couple more books in print, it hasn’t gotten him anything I don’t have that I want.) There’s more to it than that.

I’m still annoyed about the foundation Microsoft built its evil empire upon. In the ’70s, Microsoft was a languages company, and they specialized in the language Basic. Microsoft Basic wasn’t the best Basic on the market, but it was the standard. And when IBM decided it wanted to enter the personal computer market, IBM wanted Microsoft Basic because nobody would take them seriously if they didn’t. So they started talking to Microsoft.

IBM also wanted the CP/M operating system. CP/M wasn’t the best operating system either, but it was the standard. IBM was getting ready to negotiate with Gary Kildall, owner of Digital Research and primary author of the OS, and ran into snags. Gates’ account was that Kildall went flying and kept the IBM suits waiting and then refused to work with them. More likely, the free-spirited and rebellious Kildall didn’t want to sign all the NDAs IBM wanted him to sign.

Microsoft was, at the time, a CP/M subcontractor. Microsoft sold a plug-in board for Apple II computers that made them CP/M-compatible. So IBM approached Microsoft about re-selling CP/M. Microsoft couldn’t do it. And that bothered Gates.

But another Microsoft employee had a friend named Tim Paterson. Paterson was an employee of Seattle Computer Products, a company that sold an 8086-based personal computer similar to the one IBM was developing. CP/M was designed for computers based on the earlier 8080 and 8085 CPUs. Paterson, tired of waiting for a version of CP/M for the 8086, cloned it.

So Seattle Computer Products had something IBM wanted, and Microsoft was the only one who knew it. So Microsoft worked out a secret deal. For $50,000, they got Paterson and his operating system, which they then licensed to IBM. Paterson’s operating system became PC DOS 1.0.

Back in the mid-1990s, PC Magazine columnist John C. Dvorak wrote something curious about this operating system. He said he knew of an Easter egg present in CP/M in the late 1970s that caused Kildall’s name and a copyright notice to be printed. Very early versions (presumably before the 1.0 release) of DOS had this same Easter egg. This of course screams copyright violation.

Copyright violation or none, Kildall was enraged the first time he saw DOS 1.0 because it was little more than a second-rate copy of his life’s work. And while Digital Research easily could have taken on Microsoft (it was the bigger company at the time), the company didn’t stand a prayer in court against the mighty IBM. So the three companies made some secret deals. The big winner was Microsoft, who got to keep its (possibly illegal) operating system.

Digital Research eventually released CP/M-86, but since IBM sold CP/M-86 for $240 and DOS for $60, it’s easy to see which one gained marketshare, especially since the two systems weren’t completely compatible. Digital Research even added multiuser and multitasking abilities to it, but they were ignored. In 1988, DR-DOS was released. It was nearly 100% compatible with MS-DOS, faster, less expensive, and had more features. Microsoft strong-armed computer manufacturers into not using it and even put cryptic error messages in Windows to discourage the end users who had purchased DR-DOS as an upgrade from using it. During 1992, DR-DOS lost nearly 90% of its marketshare, declining from $15.5 million in sales in the first quarter to just $1.4 million in the fourth quarter.

Digital Research atrophied away and was eventually bought out by Novell in 1991. Novell, although the larger company, fared no better in the DOS battle. They released Novell DOS 7, based on DR-DOS, in 1993, but it was mostly ignored. Novell pulled it from the market within months. Novell eventually sold the remnants of Digital Research to Caldera Inc., who created a spinoff company with the primary purpose of suing Microsoft for predatory behavior that locked a potential competitor out of the marketplace.

Caldera and Microsoft settled out of court in January 2000. The exact terms were never disclosed.

Interestingly, even though it was its partnership with IBM that protected Microsoft from the wrath of Gary Kildall in 1981, Microsoft didn’t hesitate to backstab IBM when it got the chance. By 1982, clones of IBM’s PC were beginning to appear on the market. Microsoft sold the companies MS-DOS, and even developed a custom version of Basic for them that worked around a ROM compatibility issue. While there was nothing illegal about turning around and selling DOS to its partner’s competitors, it’s certainly nobody’s idea of a thank-you.

Microsoft’s predatory behavior in the 1980s and early ’90s wasn’t limited to DOS. History is littered with other operating systems that tried to take on DOS and Windows and lost: GeoWorks. BeOS. OS/2. GeoWorks was an early GUI programmed in assembly language by a bunch of former videogame programmers. It was lightning fast and multitasked, even on 10 MHz XTs and 286s. It was the most successful of the bunch in getting OEM deals, but you’ve probably never heard of it. OS/2 was a superfast and stable 32-bit operating system that ran DOS and Windows software as well as its own, a lot like Windows NT. By Gates’ own admission it was better than anything Microsoft had in the 1990s. But it never really took off, partly because of IBM’s terrible marketing, but partly because Microsoft’s strong-arm tactics kept even IBM’s PC division from shipping PCs with it much of the time. BeOS was a completely new operating system, written from scratch, that was highly regarded for its speed. It never got off the ground because Microsoft completely locked it out of new computer bundles.

Microsoft used its leverage in operating systems to help it gain ground in applications as well. In the 1980s, the market-leading spreadsheet was Lotus 1-2-3. There was an alleged saying inside Microsoft’s DOS development group: DOS ain’t done ’til Lotus won’t run. Each new DOS revision, from version 3 onward, broke third-party applications. Lotus 1-2-3, although once highly regarded, is a noncontender in today’s marketplace.

Once Windows came into being, things only got worse. Microsoft’s treatment of Netscape was deplorable. For all intents and purposes, Microsoft had a monopoly on operating systems by 1996, and Netscape had a monopoly on Web browsers. Netscape was a commercial product, sold in retail stores for about $40, but most of its distribution came through ISPs, who bought it at a reduced rate and provided it to their subscribers. Students could use it for free. Since the Web was becoming a killer app, Netscape had a booming business. Microsoft saw this as a threat to its Windows franchise, since Netscape ran well not only on Windows, but also on the Mac, OS/2 and on a number of flavors of Unix. So Microsoft started bundling Internet Explorer with Windows and offering it as a free download for those who already had Windows, or had an operating system other than Windows, such as Mac OS. In other industries, this is called tying or dumping, and it’s illegal. Netscape, once the darling of Wall Street, was bought for pennies on the dollar by AOL, and AOL-Time Warner is still trying to figure out what to do with it. Once Microsoft attained a monopoly on Web browsers, innovation in that space stopped. Internet Explorer has gotten a little bit faster and more standards compliant since IE4, but Microsoft hasn’t put any innovation in the browser for five years. Want popup blocking or tabs? You won’t find either in IE. All of the innovation in that space has come in browsers with a tiny piece of the market.

One could argue that consumers now get Web browsers for free, where they didn’t before. Except every new computer came with a Web browser, and most ISPs provided a browser when you signed up. So there were lots of ways to get a Web browser for free in the mid-’90s.

And when it came to the excesses of the dotcom era, Netscape was among the worst. But whether Netscape could have kept up its perks given its business model is irrelevant when a predator comes in and overnight renders unsalable the product that accounts for 90% of your revenue.

Allegations popped up again after Windows 95’s release that Win95 sabotaged competitors’ office software, such as WordPerfect and Lotus 1-2-3. Within a couple of years, Microsoft Office was a virtual monopoly, with Lotus SmartSuite existing almost exclusively as a budget throw-in with new PCs and WordPerfect Office being slightly more common on new PCs and an also-ran in the marketplace. It’s been five years since any compelling new feature has appeared in Microsoft Office. The most glaring example of this is spam filtering. Innovative e-mail clients today have some form of automatic spam filtering, either present or in development. Outlook doesn’t. “Microsoft Innovation” today means cartoon characters telling you how to indent paragraphs.

And the pricing hasn’t really come down either. When office suites first appeared in 1994, they cost around $500. A complete, non-upgrade retail copy of Microsoft Office XP still costs about $500.

Pricing hasn’t come down on Windows either. In the early 90s, the DOS/Windows bundle cost PC manufacturers about $75. Today, Windows XP Home costs PC manufacturers about $100. The justification is that Windows XP Home is more stable and has more features than Windows 3.1. Of course, the Pentium 4 is faster and less buggy than the original Pentium of 1994, but it costs a lot less. Neither chip can touch Windows’ 85% profit margin.

And when Microsoft wasn’t busy sabotaging competitors’ apps, it was raiding its personnel. Microsoft’s only really big rival in the languages business in the ’80s and early ’90s was Borland, a company founded by the flamboyant Philippe Kahn. Gates had a nasty habit of raiding Borland’s staff and picking off their stars. It didn’t go both ways. If a Microsoft employee defected, the employee could expect a lawsuit.

Well, Kahn decided to play the game once. He warmed up to a Microsoft staffer whose talents he believed weren’t being fully utilized. The employee didn’t want to jump ship because Microsoft would sue him. Kahn said fine, let Microsoft sue, and Borland would pay whatever was necessary. So he defected. As expected, Gates was enraged and Microsoft sued.

Soon afterward, Kahn and his new hire were in an airport when a Hare Krishna solicited a donation. Kahn handed him $100 on the spot and told him there was a whole lot more in it for him if he’d deliver a message to Bill Gates: “Philippe just gave us $100 for hot food because he suspects after this lawsuit, your employees are going to need it.”

He delivered the message. Gates wasn’t amused.

It was a bold, brash move. And I think it was pretty darn funny too. But smart? Not really. Borland’s glory days were pretty much over 10 years ago. For every star Borland could lure away, Microsoft could lure away three. Borland’s still in business today, which makes it nearly unique among companies that have taken on Microsoft head-on, but only after several reorganizations and major asset selloffs.

The only notable company that’s taken on Microsoft in the marketplace directly and won has been Intuit, the makers of Quicken. Microsoft even gave away its Quicken competitor, Microsoft Money, for a time, a la Internet Explorer, in an effort to gain market share. When that failed, Microsoft moved to buy Intuit outright. The Justice Department stepped in and the deal collapsed.

The thanks Microsoft has given the world for making it the world’s largest software company has been to sell buggy software and do everything it could to force companies and individuals to buy upgrades every couple of years, even when existing software is adequate for the task. While hardware manufacturers scrape for tiny margins, Microsoft enjoys 85% profit margins on its product. But Microsoft mostly sits on its cash, or uses it to buy companies or products since it has a terrible track record of coming up with ideas on its own. The company has never paid dividends, so it’s not even all that much of a friend to its own investors.

For me, the question isn’t why I dislike Microsoft. The question for me is why Microsoft has any friends left.

Transferring VHS movies to VCD or DVD

Mail from Maurie Reed about VHS home movie transfers to digital formats.
MR: Dave, I’ve read all of your threads on video editing with interest. I’m not claiming to have understood everything but I’m less in the dark than I was before (a 20-watt bulb as compared to a 10?).

DF: Remember, there are people who get 4-year degrees in this stuff. And graduate degrees after that.

MR: My question is: does the Pinnacle DV500 work in conjunction with a regular AGP video card or is it the sole video device in the system?

DF: It works in conjunction with another card. The DV500 does the heavy lifting and then sends its display over to the other card. So if you’ve got a DV500, any video card on the market today will be way more than enough. I used an S3 Savage4 card for a long time, and it was fine.

MR: Maybe better yet, what I’d like to do is take the VHS tapes that we have made of the family over the years and transfer them to DVD. The first reason is to archive them for safety. After that’s done I’d like to edit them for quality, i.e., clean up, lighten, etc.

DF: PC Magazine’s Lance Ulanoff has done some columns on that. His approach, using Sonic MyDVD 4.0 (though Dazzle DVD Complete gets better reviews), is simpler than mine and eliminates the DV500, though you’ll still need some way to get the analog video into your PC. An ATI All-In-Wonder card would be good for that. I know Newegg has the less-expensive All-in-Wonders sometimes but they tend to sell out quickly so you’ll probably have to use their notify feature. Then you can spend the money you’d spend on a DV500 on a DVD writer instead. (I suggest one of the Sony drives that can do DVD+R/+RW and DVD-R/-RW; that way, if one format works better in your DVD player, you’re not stuck.)

Keep in mind that Ulanoff used Firewire to get his video in, but that’s because he used Hi8 as his source, and those tapes will work in a Digital8 camera. If you’re using VHS, you’re limited to using analog inputs.

What you gain in simplicity you lose in power, but that’s not necessarily a bad thing.

MR: Toward this end I’ve been slowly building up a new machine: P4-2.4, Asus P4-533E, 512M PC-2700 RAM, 120G WD HD (SCSI’s not quite in the budget right now although I do have some Adaptec 2940 cards). I’m running an old S3 8M video card in it right now to test components (all from newegg…thanks for the suggestion!) and I have no DVD-ROM drive or DVD burner yet (I do have a LiteOn CDRW). I thought I’d work on the video first. I’m sure at some point down the road we would like to do more video but never anything professional (read – making money at it). It would probably be my wife and daughters working with it anyway as I’m more of an audio person than video.

DF: You’re off to a great start. Add a DVD burner and an All-In-Wonder card (or a similar nVidia card with analog inputs–if your camera or VCR supports S-Video, use that, since its picture quality is noticeably better) and you’re ready to go. You might want to grab a smallish drive to hold your OS and apps so you can dedicate the WD drive just to video. Watch the post-Thanksgiving sales. For VHS-to-DVD transfers, IDE is sufficient.

Since you do have a CD burner, if you want to get started right away, get the All-In-Wonder and the software and start making VCDs, then get the DVD burner later.

As for being an audio person rather than a video person, I come at it from a magazine/newspaper background. I think it’s a shorter step from audio to video than it is from print to video! (And you knowing what it takes to make the video sound good is a very good thing. The audio quality on some of my projects has been positively awful.)

MR: I understand you’re very busy and NOT in the free advice business so I’ll understand if you decline to comment.

Thanks (no matter what the answer) in advance and have a great Thanksgiving!

DF: Thanks for the good questions. You have a great Thanksgiving too.

More perspective on video editing

I read Bill Machrone’s current PC Magazine column on PC non-linear video editing with a bit of bemusement. He talked about the difficulty he and his son have editing video on their PCs, and he concluded with the question: “How do normal people do this stuff?” and the misguided answer: “They buy a Mac.”
You don’t have to do that. In fact, you can do pretty well on a PC if you just play by the same rules the Mac forces you to play by.

Consider this for a minute: With the Mac, you have one motherboard manufacturer. Apple tends to revise its boards once a year, maybe twice. Apple makes, at most, four different boards: one for the G4 tower systems, one for the iMac, one for the iBook, and one for the PowerBook. Frequently different lines will share the same board–the first iMacs were just a PowerBook board in an all-in-one case.

And the Mac (officially) supports two operating systems: the OS 9 series and the OS X series. You keep your OS at the current or next-most-recent level (always wait about a month before you download any OS update from Apple), and you keep your apps at current level, and you minimize compatibility problems. Notice I said minimize. PageMaker 7 has problems exporting PDF documents that I can’t track down yet, and I see from Adobe’s forums that I’m not the only one. So the Mac’s not always the bed of roses Machrone’s making it out to be.

Now consider the PC market for a minute. You’ve got two major CPU architectures, plus also-ran VIA; 4-6 (depending on who you ask) major suppliers of chipsets; at least four big suppliers of video chipsets; and literally dozens of motherboard manufacturers. Oh, you want an operating system with that? For all the FUD about Linux fragmentation, Microsoft’s in no better shape: Even if you only consider currently available offerings, you’ve got Windows 98, ME, NT4, 2000, and two flavors of XP.

So we go and we buy a video capture card and expect to put it in any old PC and expect it to work. Well, it probably ought to work, but let’s consider something. Assuming two CPU architectures, four chipset manufacturers, four video architectures, and twelve motherboard manufacturers, the chances of your PC being functionally identical to any other PC purchased right around the same time are 1 in 384. The comparable Mac odds: 1 in 4. But realistically, if you’re doing video editing, 1 in 1, because to do serious video work you need a desktop unit for its expandability. No Blue Dalmatian browsing for you!
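The arithmetic, if you want to check me:

```python
# Back-of-the-envelope odds that another machine bought the same week
# is functionally identical to yours.
cpu_architectures = 2
chipset_makers = 4
video_architectures = 4
motherboard_makers = 12

pc_variations = (cpu_architectures * chipset_makers
                 * video_architectures * motherboard_makers)
mac_boards = 4  # roughly one board per Apple line

print(f"PC: 1 in {pc_variations}")  # 1 in 384
print(f"Mac: 1 in {mac_boards}")    # 1 in 4
```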

So you can rest assured that if you have a Mac, your vendor tested the equipment with hardware functionally identical to yours. On a PC you just can’t make that assumption, even if you buy a big brand name like Dell.

But you want the best of both worlds, don’t you? You want to play it safe and you want the economy of using inexpensive commodity PC hardware? It’s easy enough to do it. First things first, pick the video editing board you want. Next, visit the manufacturer’s Web site. Pinnacle has a list of motherboards and systems they’ve tested with the DV500, for instance. You can buy one of the Dell models they’ve tested. If you’re a do-it-yourselfer like me, you buy one of the motherboards they’ve tested. If you want to be really safe, buy the same video card, NIC, and SCSI card they tested as well, and plug them into the same slots Pinnacle did. Don’t worry about the drives Pinnacle used; buy the best-available SCSI drive you can afford, or better yet, two of them.

Video capture cards are cranky. You want a configuration the manufacturer tested and figured out how to make work. Otherwise you get the pleasure. Or the un-pleasure, in some cases.

As far as operating systems go, Windows 2000 is the safe choice. XP is too new, so you may not have drivers for everything. 98 and ME will work, but they’re not especially stable. If I can bluescreen Windows 2000 during long editing sessions, I don’t want to think about what I could do to 9x.

And the editing software is a no-brainer. You use what comes with the card. The software that comes with the card should be a prime consideration in getting the card. Sure, maybe an $89 CompUSA special will do what you want. But it won’t come with Premiere 6, that’s for certain. If I were looking for an entry-level card, I’d probably get a Pinnacle DV200. It’s cheap, it’s backed by a company that’ll be around for a while, and it comes with a nice software bundle. If you want to work with a variety of video sources and output to plain old VHS as well as firewire-equipped camcorders, the DV500 is nice, and at $500, it won’t break the bank. In fact, when my church went to go buy some editing equipment, we grabbed a Dell workstation for a DV500, and we got a DV200 to use on another PC in the office. The DV200-equipped system will be fine for proof of concept and a fair bit of editing. The DV500 system will be the heavy lifter, and all the projects will go to that system for eventual output. I expect great things from that setup.

The most difficult part of my last video editing project (which is almost wrapped up now; it’s good enough for use but I’m a perfectionist and we still have almost a week before it’ll be used) was getting the DV500’s video inputs and outputs working. It turned out my problem was a little checkbox in the Pinnacle control panel. I’d ticked the Test Video box to make sure the composite output worked, back when I first set the board up. Then I didn’t uncheck it. When I finally unchecked it, both the video inputs and outputs started working from inside Premiere. I outputted the project directly to VHS so it could be passed around, and then for grins, I put in an old tape and captured video directly from it. It worked. Flawlessly.

One more caveat: Spend some of the money you saved by not buying a Mac on memory. Lots of memory. I’m using 384 MB of RAM, which should be considered minimal. I caught myself going to Crucial’s Web site and pricing out three 512-meg DIMMs. Why three? My board only has three slots. Yes, I’d put two gigs of RAM in my video editing station if I could.

OK, two more caveats: Most people just throw any old CD-ROM drive into a computer and use it to rip audio. You’ll usually get away with that, but if you want high-quality samples off CD to mix into your video production, get a Plextor drive. Their readers are only available in SCSI and they aren’t cheap–a 40X drive will run you close to $100, whereas no-name 52X drives sometimes go for $20-$30–but you’ll get the best possible samples from it. I have my Plextor set to rip at whatever it determines the maximum reliable speed may be. On a badly scratched CD sometimes that turns out to be 1X. But the WAV files it captures are always pristine, even if my audio CD players won’t play the disc anymore.

Mac mice, PC data recovery

A two-button Mac mouse!? Frank McPherson asked what I would think of the multibutton/scroll wheel support in Mac OS X. Third-party multibutton mice have been supported via extensions for several years, but not officially from Ye Olde Apple. So what do I think? About stinkin’ time!

I use 3-button mice on my Windows boxes. The middle button double-clicks. Cuts down on clicks. I like it. On Unix, where the middle button brings up menus, I’d prefer a fourth button for double-clicking. Scroll wheels I don’t care about. The page up/down keys have performed that function just fine for 20 years. But some people like them; no harm done.

Data recovery. One of my users had a disk yesterday that wouldn’t read. Scandisk wouldn’t fix it. Norton Utilities 2000 wouldn’t fix it. I called in Norton Utilities 8. Its disktool.exe includes an option to revive a disk, essentially by doing a low-level format in place (presumably it reads the data, formats the cylinder, then writes the data back). That did the trick wonderfully. Run Disktool, then run NDD, then copy the contents to a fresh disk immediately.
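If you’re curious what “reviving” amounts to, here’s the flavor of it in a few lines of Python–a crude sketch of the rewrite-in-place idea, not Norton’s code. The real disktool low-level formats each track before putting the data back, which this sketch doesn’t attempt, and I’m assuming a raw floppy device, which Linux calls /dev/fd0.

```python
# Read every sector off the floppy and write it straight back,
# refreshing the recorded magnetic signal in place.
SECTOR_SIZE = 512

with open("/dev/fd0", "r+b", buffering=0) as disk:
    offset = 0
    while True:
        disk.seek(offset)
        sector = disk.read(SECTOR_SIZE)
        if not sector:
            break              # ran off the end of the disk
        disk.seek(offset)
        disk.write(sector)     # same bytes, freshly written
        offset += SECTOR_SIZE
```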

So, if you ever run across an old DOS version of the Norton Utilities (version 7 or 8 certainly; earlier versions may be useful too), keep them! It’s something you’ll maybe need once a year. But when you need them, you need them badly. (Or someone you support does, since those in the know never rely on floppies for long-term data storage.) Recent versions of Norton Utilities for Win32 don’t include all of the old command-line utilities.

Hey, who was the genius who decided it was a good idea to cut, copy and paste files from the desktop? One of the nicest people in the world slipped up today copying a file. She hit cut instead of copy, then when she went to paste the file to the destination, she got an error message. Bye-bye file. Cut/copy-paste works fine for small files, but this was a 30-meg PowerPoint presentation. My colleague who supports her department couldn’t get the file back. I ride in on my white horse, Norton Utilities 4.0 for Windows in hand, and run Unerase off the CD. I get the file back, or so it appears. The undeleted copy won’t open. On a hunch, I hit paste. Another copy comes up. PowerPoint chokes on it too.

I tried everything. I ran PC Magazine’s Unfrag on it, which sometimes fixes problematic Office documents. No dice. I downloaded a PowerPoint recovery program. The document crashed the program. Thanks guys. Robyn never did you any harm. Now she’s out a presentation. Not that Microsoft cares, seeing as they already have the money.

I walked away wondering what would have happened if Amiga had won…

And there’s more to life than computers. There’s songwriting. After services tonight, the music director, John Scheusner, walks up and points at me. “Don’t go anywhere.” His girlfriend, Jennifer, in earshot, asks what we’re plotting. “I’m gonna play Dave the song that he wrote. You’re more than welcome to join us.”

Actually, it’s the song John and I wrote. I wrote some lyrics. John rearranged them a little (the way I wrote it, the song was too fast–imagine that, something too fast from someone used to writing punk rock) and wrote music.

I wrote the song hearing it sung like The Cars (along the lines of “Magic,” if you’re familiar with their work), but what John wrote and played sounded more like Joe Jackson. Jazzy. I thought it was great. Jennifer thought it was really great.

Then John tells me they’re playing it Sunday. They’re what!? That will be WEIRD. And after the service will be weird too, seeing as everybody knows me and nobody’s ever seen me take a lick of interest in worship music before.

I like it now, but the lyrics are nothing special, so I don’t know if I’ll like it in six months. We’ll see. Some people will think it’s the greatest thing there ever was, just because two people they know wrote it. Others will call it a crappy worship song, but hopefully they’ll give us a little credit: At least we’re producing our own crappy worship songs instead of playing someone else’s.

Then John turns to me on the way out. “Hey, you’re a writer. How do we go about copyrighting this thing?” Besides writing “Copyright 2000 by John Scheusner and Dave Farquhar” on every copy, there’s this.  That’s what the Web is for, friends.

~~~~~~~~~~

Note: I post this letter without comment, since it’s a response to a letter I wrote. My stuff is in italics. I’m not sure I totally agree with all of it, but it certainly made me think a lot and I can’t fault the logic.

From: John Klos
Subject: Re: Your letter on Jerry Pournelle’s site

Hello, Dave,

I found both your writeup and this letter interesting. Especially interesting is both your reaction and Jerry’s reaction to my initial letter, which had little to do with my server. To restate my feelings, I was disturbed about Jerry’s column because it sounded so damned unscientific, and I felt that he had a responsibility to do better.

His conclusion sounded like something a salesperson would say, and in fact did sound like things I have heard from salespeople and self-promoted, wannabe geeks. I’ve heard all sorts of tales from people like this, such as the claim that computers get slower with age because the RAM wears out…

Mentioning my Amiga was simply meant to point out that not only was I talking about something that bothered me, but I am running systems that “conventional wisdom” would say are underpowered. However, based upon what both you and Jerry have replied, I suppose I should’ve explained more about my Amiga.

I have about 50 users on erika (named after a dear friend). At any one moment, there are anywhere from half a dozen to a dozen people logged on. Now, I don’t claim to know what a Microsoft Terminal Server is, nor what it does, but it sounds something like an ’80s way of Microsoft subverting telnet.

My users actually telnet (technically, they all use ssh; telnet is off), they actually do tons of work in a shell, actually use pine for email and links (a lynx successor) for browsing. I have a number of developers who do most of their development work in any of a number of languages on erika (Perl, C, C++, PHP, Python, even Fortran!).

Most of my users can be separated into two groups: geeks and novices. Novices usually want simple email or want to host their domain with a minimum of fuss; most of them actually welcome the simplicity, speed, and consistency of pine as compared to slow and buggy webmail. Who has used webmail and never typed a long letter only to have an error destroy the entire thing?

The geeks are why sixgirls.org got started. We all had a need for a place to call home, as we all have experienced the nomadic life of being a geek on the Internet with no server of our own. We drifted from ISP to ISP looking for a place where our Unix was nice, where our sysadmins listened, and where corporate interests weren’t going to yank stuff out from underneath us at any moment. Over the years, many ISPs have stopped offering shell access and generally have gotten too big for the comfort of geeks.

If Jerry were replying to this now, I could see him saying that shells are old school and that erika is perhaps not much more than a home for orphans and die-hard Unix fans. I used to think so, too, but the more novice users I add, the more convinced I am that people who have had no shell experience at all prefer the ease, speed, and consistency of the shell over a web browser type interface. They’re amazed at the speed. They’re surprised over the ability to instantly interact with others using talk and ytalk.

The point is that this is neither a stopgap nor a dead end; this IS the future.

*I read your message to Jerry and it got me thinking a lot. An awful lot. First on the wisdom of using something other than what Intel calls a server, then on the wisdom of using something other than a Wintel box as a server. I probably wouldn’t shout it from the mountaintops if I were doing it, but I’ve done it myself. As an Amiga veteran (I once published an article in Amazing Computing), I smiled when I saw what you were doing with your A4000. And some people no doubt are very interested in that. I wrote some about that on my Weblogs site (address below if you’re interested).*

I am a Unix Systems Administrator, and I’ve set up lots of servers. I made my decision to run everything on my Amiga based upon several criteria.

One, x86 hardware is low quality. I stress test all of the servers I build, and most x86 hardware is flawed in one way or another. Even if those flaws are so insignificant that they never affect the running of a server, I cannot help but wonder why my stress testing code will run just fine on one computer for months and will run fine on another computer for a week, but then dump a core or stop with an error. But this is quite commonplace with x86 hardware.

For example, my girlfriend’s IBM brand FreeBSD computer can run the stress testing software indefinitely while she is running the GIMP, Netscape, and all sorts of other things. This is one of the few PCs that never has any problems with this stress testing software. But most of the other servers I set up, from PIIIs, dual processor PIIIs and dual Celerons, to Cyrix 6×86 and MII, end up having a problem with my software after anywhere from a few days to a few weeks. But they all have remarkable uptimes, and none crash for any reason other than human error (like kicking the cord).

However, my Amigas and my PowerMacs can run this software indefinitely.

So although I work with x86 extensively, it’s not my ideal choice. So what else is there? There’s SPARC, MIPS, m68k, PowerPC, Alpha, StrongARM… plenty of choices.

I have a few PowerMacs and a dual processor Amiga (68060 and 200 MHz PPC 604e); however, NetBSD for PowerMacs is not yet as mature as I need it to be. For one, there is no port of MIT pthreads, which is required for MySQL. Several of my users depend on MySQL, so until that is fixed, I can’t consider using my PowerMac. Also, because of the need to boot using Open Firmware, I cannot set up my PowerMac to boot unattended. Since my machine is colocated, I would have to be able to run down to the colocation facility if anything ever happened to it. That’s fine if I’m in the city, but what happens when I’m travelling in Europe?

SPARC is nice, but expensive. If I could afford a nice UltraSPARC, I would. However, this project started as a way to have a home for geeks; coming up with a minimum of $3000 for something I didn’t even plan to charge for wasn’t an option.

Alpha seems too much like PC hardware, but I’d certainly be willing to give it a try should someone send me an old Alpha box.

With MIPS, again, the issue is price. I’ve always respected the quality of SGI hardware, so I’d definitely set one up if one were donated.

StrongARM is decent. I even researched this a bit; I can get an ATX motherboard from the UK with a 233 MHz StrongARM for about 310 quid. Not too bad.

But short of all of that, I had a nice Amiga 4000 with a 66 MHz 68060, 64-bit RAM, and wide ultra SCSI on board. Now what impresses me about this hardware is that I’ve run it constantly. When I went to New Orleans last year during the summer, I left it in the apartment, running, while the temperatures were up around 100 degrees. When I came back, it was fine. Not a complaint.

That’s the way it’s always been with all of my Amigas. I plug them in, they run; when I’m done, I turn off the monitor. So when I was considering what computer to use as a server when I’d be paying for a burstable 10 Mbps colocation, I wanted something that would be stable and consistent.

Hence Amiga.

One of my users, after reading your letter (and, I guess, Jerry’s), thought that I should mention the load average of the server; I assume this is because of the indirectly stated assumption that a 66 MHz 68060 is just squeaking by. To clarify that, a 66 MHz 68060 is faster per MHz than any Pentium by a measurable margin when using either optimised code (such as a distributed.net client) or straight compiled code (such as LAME). We get about 25,000 hits a day, for a total of about 200 megs a day, which accounts for one eighth of one percent of the CPU time. We run as a Stratum 2 time server for several hundred computers, we run POP and IMAP services, sendmail, and we’re the primary nameserver for perhaps a hundred machines. With a distributed.net client running, our load average hovers around 1.18, which means that without the dnet client, we’d be idle most of the time.

If that weren’t good enough, NetBSD 1.5 (we’re running 1.4.2) has a much
improved virtual memory system (UVM), improvements and speedups in the TCP stack (and complete IPv6 support), scheduler enhancements, good softdep support in the filesystem (as if two 10,000 RPM 18 GB IBM Wide Ultra drives aren’t fast enough), and more.
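For what it’s worth, once a system is on 1.5 with softdep compiled into the kernel (the SOFTDEP option), turning it on is just a mount flag. These /etc/fstab lines are only a sketch, and the device names are made up:

    # Hypothetical /etc/fstab entries -- your device names will differ.
    /dev/sd0a  /      ffs  rw,softdep  1  1
    /dev/sd0e  /usr   ffs  rw,softdep  1  2

The win is that metadata updates get ordered safely in memory instead of being written synchronously, so heavy file creation and deletion stops hammering the disks.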

In other words, things are only going to get better.

The other question you raise (sort of) is why Linux gets so much more
attention than the BSD flavors. I’m still trying to figure that one
out. Part of it is probably due to the existence of Red Hat and
Caldera and others. FreeBSD gets some promotion from Walnut
Creek/BSDi, but one only has to look at the success of Slackware to
see how that compares.

It’s all hype; people love buzz words, and so a cycle begins: people talk
about Linux, companies spring up to provide Linux stuff, and people hear
more and talk more about Linux.

It’s not a bad thing; anything that moves the mainstream away from
Microsoft is good. However, the current trend in Linux is not good. Red
Hat (the company), arguably the biggest force in popularising Linux in the
US, is becoming less and less like Linux and more and more like a traditional software company. They’re releasing unstable release after unstable release with no apologies. Something I said a little while ago, which someone has since been using as the quote in his e-mail signature:
In the Linux world, all of the major distributions have become
companies. How much revenue would Red Hat generate if their product were flawless? How much support would they sell?

I summarise this by saying that it is no longer in their best interest to
have the best product. It appears to be sufficient to have a working
product they can use to “ride the wave” of Linux’s popularity.

I used Linux for a long time, but ultimately I was always frustrated with
the (sometimes significant) differences between the distributions, and
sometimes the differences between versions of the same distribution. Why
was it that an Amiga running AmigaDOS was more consistent with Apache and Samba docs than any particular Linux? Where was Linux sticking all of
these config files, and why wasn’t there documentation saying where the
stuff was and why?

When I first started using BSD, I fell in love with its consistency, its
no-bull attitude towards ports and packages, and its professional and
clean feel. Needless to say, I don’t do much Linux anymore.

It may well be due to the people involved. Linus Torvalds is a
likeable guy, a smart guy, easily identifiable by a largely
computer-illiterate press as an anti-Gates. And he looks the part. Bob
Young is loud and flamboyant. Caldera’s the company that sued Microsoft
and probably would have won if it hadn’t settled out of court. Richard
Stallman torques a lot of people off, but he’s very good at getting
himself heard, and the GPL seems designed at least in part to attract
attention. The BSD license is more free than the GPL, but while
freedom is one of Stallman’s goals, clearly getting attention for his
movement is another, and in that regard Stallman succeeds much more than the BSD camp does. The BSD license may be too free for its own good.

Yes, there aren’t many “figureheads” for BSD; most of the ones I know of
don’t complain about Linux, whereas Linux people often do complain about the BSD folks (the major complaint being the license).

I know Jerry pays more attention to Linux than the BSDs partly because Linux has a bigger audience, but he certainly knows more about Linux than about any other Unix. Very soon after he launched his website, a couple of Linux gurus (most notably Moshe Bar, himself now a Byte columnist) started corresponding with him regularly, and they’ve made Linux a reasonably comfortable place for him, answering his questions and getting him up and going.

So then it should be their responsibility, as Linux advocates, to give
Jerry a slightly more complete story, in my opinion.

As for the rest of the press, most of them pay attention to Linux only because of the aforementioned talking heads. I have a degree in journalism from supposedly the best journalism school in the free world, which gives me some insight into how the press works (or doesn’t, as is usually the case). There are computer journalists who get it, but a good deal of them are writing about computers for no reason in particular, and their previous job and their next job are likely to involve writing about something else. In journalism, if three sources corroborate something, you can treat it as fact. Microsoft-sympathetic sources are rampant, wherever you are. The journalist probably has some Mac sympathy, since there’s a decent chance that’s what he uses. If he uses a Windows PC, he may or may not realize it. He’s probably heard of Unix, but his chances of having three local Unix-sympathetic sources to use consistently are fairly slim. His chances of having three Unix-sympathetic sources who agree enough for him to treat what they say as fact (especially if one of his Microsofties contradicts it) are slimmer still.

Which furthers my previous point: Jerry’s Linux friends should be more
complete in their advocacy.

The media often seem to cater to the lowest common denominator, but it is
refreshing to see what happens when they don’t. I can’t stand US news on
TV, but I’ll willingly watch BBC news, and I often learn more about US
news that way than I would from a US news program.

But I think part of the problem, which is compounded by the above, is
that there are too many journalists writing about computers, rather than
computer people writing about computers.

After all, which is more presumptuous: a journalist who thinks he or she
can enter the technical world of computing and write authoritatively about
it, or a computer person who attempts to be a part-time journalist? I’d
prefer the latter, even if it doesn’t include all of the accoutrements
that come with the writings of a real journalist.

And looking at the movement as a whole, keep in mind that journalists look for stories. Let’s face it: a college student from Finland writing an operating system, giving it away, and millions of people thinking it’s better than Windows is a big story. And RMS running
around looking like John the Baptist, extolling the virtues of something called Free Software, is another really good story, though he’d get a lot more press if he’d talk more candidly about the rest of his life, since that might be the hook that gets the story written. Can’t you see this one now?

Yes. Both of those stories would seem much more interesting than “It’s
been over three years and counting since a remote hole was found in
OpenBSD,” because that one isn’t sensationalistic, it isn’t gripping on
its face, and nobody can easily explain how you might end up running
OpenBSD on your appliances (well, you might, but the fact that it’s secure
means it’d be as boring as telling you why your bathtub hasn’t collapsed
yet).

Richard Stallman used to keep a bed in his office at the MIT Artificial Intelligence Lab.

He slept there. He used the shower down the hall. He didn’t have a home outside the office. It would have distracted him from his cause: Giving away software.

Stallman founded the Free Software movement in 1983. Considered by many the prophet of his movement (and looking the part, thanks to his long, unkempt hair and beard), Stallman is both one of its most highly regarded programmers and perhaps its most outspoken activist, speaking at functions around the world.

Linux was newsworthy, thanks to the people behind it, way back in 1993 when hardly anyone was using it. Back then, they were the story. Now, they can still be the story, depending on the writer’s approach.

If there are similar stories in the BSD camp, I’m not aware of them. (I can tell you the philosophical differences between OpenBSD, NetBSD and FreeBSD, and I know a little about the BSD directory structure, but that’s where my knowledge runs up against its limits. I’d say I’m more familiar with BSD than the average computer user, but that’s not saying much.) But I can tell you my editor would have absolutely eaten this up. After he or she confirmed it wasn’t fiction.

The history is a little dry; the only “juicy” part is where Berkeley had
to deal with a lawsuit from AT&T (strictly speaking, from its Unix System
Laboratories subsidiary) before they could make their source free.

Nowadays, people are interested because a major layer of Mac OS X is BSD, taken from the FreeBSD and NetBSD source trees. Millions of people who otherwise know nothing about BSD or its history will end up running it when the final version of Mac OS X comes out in January; lots of people are already running the Mac OS X Public Beta, but chances are good that the people who bought the Beta already know it’s running on BSD.

And it’s certainly arguable that BSD is much more powerful and robust than Windows 2000. So there’s a story for you. Does that answer any of your questions?

Yes; I hope I’ve clarified my issues, too.

Neat site! I’ll have to keep up on it.

Thanks,
John Klos