Dr. Jerry Pournelle


Computing At Chaos Manor:
The Mailbag

Jerry Pournelle jerryp@jerrypournelle.com
Copyright 2010 Jerry E. Pournelle, Ph.D.

September 30, 2010

iPhone / iPad closed application environment.

In response to one of your readers, you replied

"Mr. Jobs has stated that he intends to control what goes on iPhone and iPad to ensure the quality of experience for users."

Well, yes that is more or less true in the general sense. But in the specific sense, you can write and deploy your own applications, without using the Apple store, for $399/year. That's all it costs an enterprise to develop and deploy applications. Of course, they have the option of deploying applications through the Apple store, but they are no longer required to.

Note that the individual developer program, for $99/year, requires you to deploy through the Apple store, but you can load and run your applications on something like 100 individual devices, with some minor restrictions. (i.e., once you define a device, you cannot "undefine" it; it takes up one of your 100 slots for a year.)


Peter Glaskowsky comments:

This refers to the iPhone developer program. I joined the program at the $100/year level for a couple of years myself just to help stay on top of the trends in iPhone development technology. Being a developer means that you can install anything on your phone that you can write or otherwise get the source code for, even if it doesn't comply with Apple's usual requirements for iPhone apps.

-- PNG --

You can install it on your own phone, and other Apple devices that you own or control, but Apple continues to control distribution to everyone else. Apple intends to continue that policy.

The newer GSM-based Kindles *do in fact* have a telephone number - that's how they work on the 3G network.

Go to a GSM-based Kindle's Settings screen, and then type '611' - you'll get all kinds of information (four pages of it), including the telephone number of the GSM Kindle.

You can try and register it on the MicroCell and see if it works . . . AT&T may disallow the phone number ranges they've allocated for the GSM Kindles, but it's worth a shot, if you've access to one of the newer GSM-based Kindles.

Roland Dobbins

Alas I do not have a GSM Kindle. Mine is an older one.

AT&T MicroCell and Peter Glaskowsky's comments

Dear Jerry,

Just read the latest Chaos Manor mailbag, specifically the items on the AT&T MicroCell, including the following comment from Peter Glaskowsky:

> Peter Glaskowsky comments


> LWATCDR says "What I find really absurd is that when using the MicroCell you paid for and the internet connection you paid for AT&T has the nerve to count the time you use the MicroCell on you calling plan minutes!" This is not necessarily true, and not fair. The subscriber can sign up for an extra-cost plan that provides unlimited voice calling through the MicroCell for no additional cost.


> The statement is unfair because the MicroCell is not the whole solution for voice or data. Obviously the MicroCell, connecting only to the Internet as it does, can't inject a phone call into the wired telephone network or the cellphone network. Instead, the MicroCell establishes the most direct connection possible into AT&T's private network, which handles the rest of the work of routing the call. So even if you're using a MicroCell, a large part of the cost of the call (or data access) is still paid by AT&T, and it's fair for the company to expect to be paid for that.

There is a serious problem, though, with how AT&T (or indeed any cell operator deploying a similar MicroCell or femtocell device) handles call routing. The problem is that with the MicroCell, AT&T is not paying a "large part" of the cost of that call. They're paying the smaller part. How much smaller depends on where the call is terminating. If it is an International call, they may indeed be paying the larger portion of the cost of terminating that call. For virtually any domestic call, the MicroCell removes a good deal of the costs involved with connecting that call, and if the call is terminating elsewhere on AT&T's own network (say, another AT&T mobile subscriber, perhaps even using their own MicroCell) the costs involved drop even further.

Yes, these calls go through AT&T's network first before reaching the PSTN to be terminated at the network belonging to the party you're calling. As such, there are costs to be borne by AT&T. However, these costs pale in comparison to the costs associated with deploying, operating and maintaining a cell tower, which is the portion of the network you are replacing with your MicroCell.

For a traditional base station or cell tower, you need to purchase the base station from an infrastructure vendor. Prices are in the tens of thousands of dollars, depending on technology, frequency and capacity. The operator needs to rent space on the building or tower, as well as provide electricity and climate control. The station needs to be connected to the central network (backhaul), either by optical cable or microwave relay, and there is a charge associated with that as well.

With the MicroCell, you are the cell site provider. Except instead of AT&T paying you, you pay them.

With the MicroCell, you provide the electricity. Except AT&T doesn't reimburse you for it.

With the MicroCell, you provide the backhaul to AT&T's network over your Internet connection. Except AT&T doesn't pay you for it.

The only thing that makes this a tenable situation at all is that you are able to restrict access to your MicroCell in such a way that only you benefit from it. If it were available to any and all AT&T subscribers within range, one would certainly be able to make a case that you are providing a cell site to AT&T, for which you should be compensated. On the other hand, then you'd have to meet uptime and availability requirements.

Just think about comparable situations. When you have an AT&T wired phone, you get free calling within your local calling area included in your monthly fee. If you and the party you are calling employ a solution like Skype, you have end-to-end free voice and video calling included in your ISP's monthly fee. If you make a Skype call that originates on your ISP's line, but terminates somewhere on the PSTN, you pay their long-distance or International rates on a per-minute basis, nearly all of which are lower than most traditional operators' rates. This is because Skype maintains no last mile network of their own; you use your Internet line for that, and they interconnect to the PSTN through a network of partner operators.

When you make a cellular call between two AT&T mobile devices, or between one mobile device and a land line, these count as minutes towards your limit. Since AT&T is not only performing the call routing but also bearing all the expenses relating to the cell towers serving the mobiles in the equation, this makes sense, and is included in the per-minute pricing model.

If you call from an AT&T mobile connected to your MicroCell, and place a local call, you're using minutes-- even though AT&T's costs to connect this call are dramatically lower; there is no cell site involved, no backhaul costs (since you're paying for it) and if the call terminates anywhere on AT&T's own network, there is no hard currency to be paid to a third party for call termination costs-- no revenue to share with anyone. AT&T makes a good deal more margin on such calls, and doesn't pass the decreased costs on to the subscriber.

For many, the increased coverage this device offers may be worth it, even if one knows that AT&T is making much better margin off the calls you make on it. On the other hand, if you are within range of a MicroCell connected to your home Internet line, you're probably also within range of your own Wifi network, and call costs and quality using either Skype or another VOIP provider (and there are many VOIP applications available for the iPhone and other smartphones) may be even better.

Full disclosure: I do work for a telco, but not one in the US, so I don't have a horse in the race AT&T is running.

D. M. Josselyn
Synfibers IT Consulting

Thank you for the explanation. My MicroCell works, sort of: that is, I get one or two 3G bars on my iPhone in most rooms of the house, but in one room I get only one, and there neither the phone nor web browsing works usably. Of course that's the room where our television roosts, and I don't get wireless there either. I'm going to string an Ethernet cable into there and be done with it, but given my house's layout that turns out to be harder to do than I expected.

I'll get to it one of these days.

Bookglutton -- read eBooks on your computer

Dr Pournelle

Bookglutton.com has been around since 2008, but this is the first I heard of them: http://www.youtube.com/watch?v=_LbQ5hEjTBg

Live long and prosper
h lynn keith

Looks interesting. I have lots of book sources, and probably more books than I have time to read – and every week a box of review books arrives as well. Reading matter is not a scarce resource at Chaos Manor...

Encryption algorithms that will be safe against attacks by quantum computers

Hi Jerry,

This might be of interest to you...


Good column today!!

-John G. Hackett

I'm not sure I need those, but thanks...

Mathematica vs. Maple

There's a bifurcation in theoretical physics between Mathematica and Maple. There are a few reasons for this. My reason is that I am a general relativity theorist and Maple contains much better GR libraries and has a much richer history in the field.

I've never really been able to see much of a difference, from the bird's eye view, between the two but generally speaking Maple is more prevalent in theoretical physics.

Dr. Paul J. Camp
Physics Department
Spelman College

As hardware improves so will math simulators. At one time the sheer ability to evaluate complex expressions was one of the gatekeeping skills for becoming a theoretical physicist. One wonders if that will change once most of that work is done by mathematics programs?


If cookies can't read arbitrary information on your computer, why do I get multiple cookie requests if I don't check the Always box? If not responding to something the first request found, there is no need for a second request, much less the 17 Disney used to send before their Welcome page opened. (I have no idea how many they send now; that was on a client's computer long ago.) How does Doubleclick earn its money if its cookie can't telegraph home which partners' cookies are on a computer?

Where did you get this information, "Enabling and accepting cookies does not enable the site sending the cookie to write anywhere in your system," and how do you know its truth, since nobody told you about the second set of flash cookies? When I look in User/[me]/AppData/Local/Temp/Cookies I find copies of cookies that do not meet the specification of cacheable cookies, so they were written somewhere other than their mandated location. Which of the 484 files currently in the Temporary Internet Files might be copies of cookies?

Why are the second, third, and fourth entries on the attached clip not marked as blocked when the specification says that blocking the first kills any more from that url? Where are they stored since they aren't in my Cookies folder? Especially the css and javascript ones? Why can't I copy and paste a line from the list so I can easily search for them?

Most of the ones in the report (since You can't resize the Report box, I can't show the dozens without many snips) are just the elements of the webpage, but the javascript unblocked line could be anything. I know what the specification says the last element on those lines mean, but, given the large number of holes M$ is admitting to these days, is that malformed j 4 characters from the end of line 3 an artifact or a malicious malformation to trick the parser, which is a dll? (See today's M$ advisory.) And is the specification any better implemented than any other command written by Micro$oft?

We haven't gotten near Cisco's Sticky Cookie command for its routers, nor the java CookieCommand. Sorry you have to listen to my ranting.

Don Miller

I don't claim to understand all there is to know about cookies, but I do know there is no "Great Cookie Conspiracy", and that most of the consumer advertisements about tracking your internet use are in most part nonsense.

I generally allow cookies, but I use Firefox and the BetterPrivacy and CookieEditor tools to delete them periodically. I'm also fairly careful about where I go on the web.

Mostly I keep my systems up to date, and I periodically go to eset and let their scanner look at my entire hard drive. I hear good things about Norton's on-line scan as well. So far I have found nothing alarming.

Your long-standing gripe about Firefox --


You have often complained that Firefox gave you one grief or another when you had 80 or 90 tabs open. As I recall, you do this because you are researching things and need to come back to them later.

I work much the same way, except I typically end up with eight or nine Firefox windows open, each with a bunch of related tabs. I particularly like being able to move tabs from one Firefox window to another, or to a new Firefox window. But I digress...

A couple of tools that might help you are the Mozilla Archive with Faithful Format (MAFF, a.k.a. MAF) and UnMHT add-ons. These two tools let you save web pages and their content into an archive format. The UnMHT tool uses the MHT format that Internet Explorer uses. The MAF tool also does that but has another format available that is portable to other operating systems and browsers. With MHT format, you're kinda stuck in the Windows world. (It's almost like Bill planned it that way. Go figure.)

I have installed both tools but haven't had them long enough to form an opinion about either. I'll probably use the MAF tool because it looks like it can capture more than the MHT format (video and sound, in particular), and because I hate being locked in to a specific technology. Another nice trick that MAF can do is store all your open tabs in the same archive. Very handy for bulk research, I would think.

If you'd like to check them out for yourself, here are the pages in the Firefox Add-Ons and the projects home pages:

MAF at Firefox: https://addons.mozilla.org/en-US/firefox/addon/212/

MAF home page: http://maf.mozdev.org/

UnMHT at Firefox: https://addons.mozilla.org/en-US/firefox/addon/8051/

UnMHT home page (in English, they have a link to an oriental language but I don't know which one): http://www.unmht.org/unmht/en_index.html

I hope this helps.
--Gary Pavek

Thanks. Actually my main complaint about Firefox is that when it wants to update itself, it wants to do that Right Now Without Delay. While that is annoying, it is also the right thing to do: keeping your browser up to date is very important.

I find that I can use folders in Firefox bookmarks to keep track of places I want to get back to without keeping a million open tabs. And I still keep a lot of tabs open.

Deep in the Archives (UNCLASSIFIED)

Classification: UNCLASSIFIED

Caveats: NONE

Mr. Pournelle,

Many years ago, more than either of us would care to number, as a young LAN Administrator for the Arizona Department of Administration, I read an article that both amused me and caused me to begin a crusade to get my superiors to change their minds regarding the issue of power strips & surge protectors versus UPS for PCs and workstations. I don't remember the exact title, but it was something like "The Great Power Surge at Chaos Manor". I loved it so much that I tore the page out of my Byte magazine and had it laminated! The article caused me to do some research and I wrote a white paper on the false sense of security that our division was giving to our customers. The paper was flatly rejected and I was laughed out of the conference room. However, I didn't give up and, eventually, was able to get the attention of a mid-level manager and things began to change ever so slowly. That was long before the days of buildings having their own backup and/or clean power, and we were replacing power supplies and hard drives weekly. When I was able to convince my boss and his that the cost could be offset by purchasing PC-rated UPSs, I finally won the battle.

A few minutes ago, I was telling a co-worker the story above, and I wondered if I could find that article online. I Googled it, but no joy. I did, however, come across your blog and email address. Would you possibly have that article stashed away back in the furthest depths of your archives? I'd love to re-read it again and add it to my collection of professional mementos.

Thank you, Sir

Mark Spencer

D. Mark Spencer
General Dynamics Information Technology
Network Enterprise Center (NEC)
Fort Huachuca, Arizona

The Great Power Spike was one of my more popular columns...

Clary UPS were made by Falcon Electric and are now sold under the Falcon brand name. I have been using Falcon UPS for most of my Chaos Manor operations ever since the Great Power Spike. I've never lost a Byte of data due to power problems.

September 2010 International edition column

Dear Jerry:

Some updates on the e-book publishing. Our strategy at Brass Cannon Books has evolved thusly: We will now use the Kindle as a kind of beta-test platform. Since the software is now available for the PC, the iPad, the Android and other platforms, that elevates the number of potential customers into the millions. I never did trust Amazon.com's claims for the Kindle, but the price point and reliability of the new version do make it a true consumer product. Look for it to be repriced to $99.00 about Christmas and available in brick and mortar stores.

The problem for publishers such as myself is formatting, and this was why I simply stopped trying to do anything with Smashwords, which looked easy going in, but demanded too much time and precision in execution to work for us. Then there was the issue of covers. These are demanded for widespread distribution on Smashwords, but not on Amazon Kindle. Book covers are a useful marketing tool, but if you are not an artist, you hire it done and the price may not be worth it. The cover for "The Shenandoah Spy" cost over $2,000 and is designed for brick and mortar stores, where, if displayed outwards, it generates sales by getting people to pick up and examine the book. If you can get the book into the customer's hands, you are halfway to a sale. There is social science research on this point. For e-books, covers become a matter of not conceding competitive advantage; if others have them then you have to also.

But is that social science research true for e-books, where physical handling is impossible? No one knows, and images that work for print hardbounds and trade paperbacks can disappear on a thumbnail at 72 dpi. It's a different ball game. You have to design specifically for that format. And at a lower cost. Since I have been publishing e-books since 2004, I can tell you that most do not sell particularly well. If you look at my titles you will see individual and groups of old magazine articles, such as you propose republishing. Hopefully, since you are a much bigger brand, you will enjoy better sales than I have. I have titles that I thought would sell very well: articles about rock stars, and Star Trek and the like, and they do no better than the rest.

You will note I am not rushing to get Kindle editions up of any of them. They might sell better there, but most non-fiction legacy material has an audience of single researchers and is not for the mass market. Covers were required by our distributor Ingram, but they didn't move product either. I did find a source for inexpensive covers, a site called Freelancer.com in Australia where I found very competent illustrators in third world countries who could deliver serviceable images for less than a hundred dollars. I have used some of these for the new Kindle editions of short fiction I have put up. The break-even for providing these covers for an item that sells for 99 cents goes up significantly, so this is part of the continuing experiment. (Feel free to link to any of them from my Kindle page.) The artwork here is designed to catch the eye and get people to read about the story. Nothing more. I will have to sell hundreds of additional copies to recover the money. But e-book publishing has always been an experiment for us, so we are willing to try.

We may have to hire contractors to do the formatting as well, especially on long works. It only takes one minor error to throw the whole thing off and fiddling with it endlessly takes away time we need for other tasks. So the jury is still out on the viability of e-book publishing. The promised bonanza has not come for most of us. Kindle might change that but I suspect that you will do better with your fiction than your non-fiction.


Francis Hamit
Brass Cannon Books

I am working on getting a lot of my older works into Kindle and iTunes formats, and I'll chronicle that story in the column. Thanks.

September Column - Will the Empire Strike Back?

I read with interest the section of your September column about competition between Android, Apple and Microsoft. We own a condominium at the Oregon Coast, which we bought about three years ago. The project consists of 26 two story buildings with 144 units built up the side of a steep cliff spread over 32 acres. Only a handful of units have full time residents. When we purchased it, there was a Wi-Fi system for the property which had been purchased for about $25,000 and which cost $4,000 per year for monitoring by the vendor. It didn't work very well. Several months after we purchased our unit, I became a member of the board. At my recommendation, we replaced the existing system with a Meraki Mesh system with 40 outdoor units. The cost, at the time, was about $4,000 with no monthly fees. Because of a change in Meraki's pricing model, a similar system would cost about twice that now.

The Meraki system uses a web based Dashboard to configure and monitor the network. The Dashboard allows me to view usage including the operating system of devices connecting to the network. On a typical day there will be about 50 unique devices on the network and in a typical month there will be about 500. I think that users are a cross section of portable device users. In the past 30 days, 565 unique devices connected to the network and transferred 215 gb of data. The breakdown in devices was as follows:

Operating System    Number
Mac OSX             88
Windows 7           83

There were 143 Apple devices and 294 Microsoft devices. The "Other" category includes Windows Mobile devices, game consoles, Palm devices and such. Microsoft dominates in conventional portable devices and Apple dominates in handheld/tablet devices.

It will be interesting to see how the mix of devices changes over time. Right now it appears that Microsoft is a bit player in handheld and tablet devices. I recently switched from a Windows Mobile phone (Palm Treo Pro on AT&T) to an Android phone (Samsung Captivate/Galaxy S on AT&T) and can understand why nobody is using Microsoft phones. While my Android phone is not perfect, it is far better than the Windows Mobile phones I have owned. I use applications on it much more than I did on my WM phones. I use AT&T because it is the only service that works reliably where I work. I get a very good signal at my house. The phone does not work well inside our coast condo, but other carriers don't either. My experience owning a dozen or so smart phones, including an iPhone, over the years is that they are generally mediocre phones. My wife's old cheap Nokia gets better phone reception than most smart phones I have used. I think that part of AT&T's bad reputation is due to the iPhone's shortcomings as a phone.

I hope Microsoft does come back in mobile devices. I don't want any player to dominate.

Richard Samuels

We can all agree to that.

Re: September Column

"At some point I suppose I ought to get energetic about finding a way to see just what I have on those disks"

Dr. Pournelle,

A recent tale of a similar nature, as a warning to others...

I had a bunch of Zip disks, with old stuff, possibly/probably some early videos and photos of first-daughter that hadn't been transferred yet to CD/DVD, and no current system with a Zip drive.

So I load a SCSI card into my current system, get the old SCSI Zip connected, load drivers and such (what once seemed like DOS voodoo) and everything seems to work.

First Zip disk in, and it doesn't insert all the way, as though there were a problem of being powered on.

Power off, recheck cables, recheck everything, retry, same story.

Power off, invert power cable (tough fit this time), try again, no sign of life.

Take everything out, out pops a little plastic coinage piece from some toy of second-daughter's first years, which obviously blocked anything from passing to its normal position.

Long story short, the last attempt fried the drive. Still have the disks, but have great difficulty in finding a working Zip drive.

Moral? Maybe keep everything in multiple copies, and as soon as something starts to fade, copy everything else to another (more recent) device/medium (or two or three alternatives as well, if it is anything important). Every generation seems to have more capacity for storage than the last, so the only obstacle is (my) laziness.

I am pretty sure I have the videos on some old Kodak Gold CD's that I can still probably read, but can't currently check (for sure in a box in the storeroom), for the price I paid way back when I sure hope they are readable after only 15 years...

As always, best regards,

James Siddall jr


I would comment that a great many working programmers are using languages that do provide more structure than C.

Java is very popular, and it is quite rigorous about enforcing structure and preventing errors. It is impossible to have a buffer overflow error in Java, for instance. Java code is compiled to class files of virtual machine bytecode, and all such classes are run through a bytecode verifier before they start executing. It is not possible to create code to run on a Java system that violates the safety and security rules of the language.
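As an illustration of the bounds checking such managed languages perform, here is a sketch in Go, which makes the same guarantee; the `readIndex` helper is mine, not from the letter. A read past the end of a buffer surfaces as a catchable run-time panic rather than silent memory corruption:

```go
package main

import "fmt"

// readIndex returns buf[i], converting the run-time bounds-check
// panic into an ordinary error instead of a C-style overflow.
func readIndex(buf []byte, i int) (b byte, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("out of range: %v", r)
		}
	}()
	return buf[i], nil // panics (safely) if i is out of bounds
}

func main() {
	buf := []byte("abc")
	if _, err := readIndex(buf, 7); err != nil {
		fmt.Println("caught:", err) // the bad access is caught, not exploited
	}
}
```

The equivalent access in C would simply read whatever bytes happen to follow the buffer; here the language refuses to let the read happen at all.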

Things are similar (though perhaps not quite as security-rigorous in some ways) in Microsoft land with all of the .NET languages.

There are a wide variety of languages that target the Java and .NET runtimes as well, everything from very static and structured languages like C#, Java, and Scala to looser, dynamic languages like Python and Ruby. There are even high-performance Lisp systems, like Clojure, that run on those platforms.

Neither Java nor .NET is necessarily intended for low-level systems or performance programming, but with modern systems it doesn't much matter unless you're writing an operating-system kernel or a high-performance video game.

Linux is written in C, but much of Windows and almost all high performance video games are written in C++. C++ adds higher level structures but is still very much in the C family.

"If C gives you enough rope to hang yourself, then C++ gives you enough rope to bind and gag your neighborhood, rig the sails on a small ship, and still have enough rope to hang yourself from the yardarm" - Anon., The Unix-Haters Handbook

So, fear not. There is a very great deal of debate and discussion about languages still ongoing, it just doesn't much concern most users. After all, the Internet has made users of most of the population.



Another Go at Programming Language Design -

I've sent some of this before, in one form or another, and you've posted some of what I've sent in View and Mail, but your comments on programming languages in the September 2010 Column stimulated me to collect, reorganize and expand, in the hope that you might find (some of?) it suitable for MailBag.

Good compile-time checking is not a Lost Cause, nor is it necessary to sacrifice compilation speed to get it.

I first learned of the Go programming language (www.golang.org) from a contribution by Giles Lean, which contribution you posted in Mail for 19 February 2010.

Begin with a quote from Dick Gabriel:

"I'm always delighted by the light touch and stillness of early programming languages. Not much text; a lot gets done. Old programs read like quiet conversations between a well-spoken research worker and a well-studied mechanical colleague, not as a debate with a compiler. Who'd have guessed sophistication bought such noise?"

That quote is a centerpiece of Rob Pike's introductory talks on Go, most recently "Public Static Void" at O'Reilly Media's Open-Source Convention, OSCON 2010:

(The quote comes originally from a talk by Gabriel and Guy Steele, "50 in 50" (http://blip.tv/file/1472720), which romps through the history of programming languages and includes a performance by Filk-musician Julia Ecklar. "50 in 50" is also well worth viewing, if only for Ms. Ecklar's performance, but covers much more ground than the patch on which Go stands.)

View the whole thing, if you have the time, but here are the key points I see (which are not quite exactly the points Rob Pike makes):

The dominant programming language is no longer C, but rather its descendants C++, Java and C#. C++ was an attempt to make an object-oriented language out of C, by grafting onto C a style of object-oriented programming taken from the language Simula. Java and C# were largely attempts to simplify C++, Java coming from Sun Microsystems (since bought by Oracle) and C# being Microsoft's reaction to Java. Java is now, arguably, the programming language most often taught in colleges and universities.

Alas, those descendants encourage, indeed pretty much require, an arcane, opaque and bureaucratically verbose approach to programming *because* of the "object-oriented" features that their inventors *added* to C for the purpose of providing ways to get compilers to give programmers more help, in finding mistakes at compile time, than C compilers can give. The inventors largely accomplished what they intended -- but at the cost of making programmers write lots of redundant program text to convince the compiler that they are not making mistakes. Those languages took pretty much all the *fun* out of programming, whence Dick Gabriel's comment quoted above. They also led to very long compile times.

Taken all together, those characteristics of the C++/Java/C# family have given "static, compiled" programming languages their current unsavory reputation.

One reaction to this produced the "dynamic" languages, for example Python and Ruby. Those languages don't insist that programmers write the same stupefying amount of redundant program text, and they compile comparatively fast, at the cost of doing pretty much all correctness checking at run time rather than at compile time. They also have the same object model as the C++/Java/C# family, based on classes with inheritance relationships, which leads to the same kind of rigidities, which in turn lead to difficulties in extending and modifying programs after they're built.

Consequently, there has grown up among programmers the notion that we have to choose between (a) static languages that are hard to write in and have slow compilers, but that do lots of checking at compile time and produce reliable, fast-running code, and (b) dynamic languages that are fun to write in and compile fast, but that produce slow-running code, with pretty much all semantic errors caught at run time rather than at compile time.

Go is an alternative reaction. By careful language and compiler design, Go provides the extensive compile-time checking and fast-running compiled programs of a statically-typed language and the fast compilation and ease of writing generally associated with dynamic languages. Indeed, some programmers have reported that compiling their Go programs takes less time than loading the Python run-time system.

Go is object-oriented, but not class-hierarchy-based. There's less (data-)typing, so there's also less (with-your-fingers-)typing.
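A minimal sketch of what "object-oriented, but not class-hierarchy-based" means in practice: in Go, any type that happens to have the right methods satisfies an interface, with no inheritance declaration anywhere. The Shape, Square, and Circle names below are illustrative inventions, not anything from the discussion above.

```go
// Demonstrates Go's class-free object model: types satisfy an
// interface implicitly, just by having the required methods.
package main

import "fmt"

// Shape is satisfied by any type with an Area method.
type Shape interface {
	Area() float64
}

type Square struct{ Side float64 }
type Circle struct{ Radius float64 }

// Neither type declares that it implements Shape; having the
// Area method is enough.
func (s Square) Area() float64 { return s.Side * s.Side }
func (c Circle) Area() float64 { return 3.14159 * c.Radius * c.Radius }

// totalArea works on any mix of Shapes, with no class hierarchy.
func totalArea(shapes []Shape) float64 {
	sum := 0.0
	for _, s := range shapes {
		sum += s.Area()
	}
	return sum
}

func main() {
	fmt.Println(totalArea([]Shape{Square{Side: 2}, Circle{Radius: 1}}))
}
```

There is no "extends" or "inherits" keyword at work: the relationship between Square and Shape is established entirely by the method set, which is what lets Go avoid the rigid class hierarchies described above.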

Go provides a set of primitives for concurrent programming that's straightforward, easy to use, yet powerful, which simplifies writing programs that can make full use of modern CPUs with multiple cores. Go also provides the memory-management simplicity and safety of automatic garbage collection. There are pointers in Go, but no pointer arithmetic, so you can't make the pointer-arithmetic mistakes that bedevil C programmers.
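To make the concurrency claim concrete, here is a small sketch using those primitives, goroutines and channels, to sum a slice in two halves concurrently. The function names are made up for the example; only the standard library is assumed.

```go
// A minimal example of Go's concurrency primitives: two goroutines
// each sum half of a slice and send their partial totals over a
// channel; the caller adds the two results.
package main

import "fmt"

// sumPart computes the sum of nums and sends it on ch.
func sumPart(nums []int, ch chan<- int) {
	total := 0
	for _, n := range nums {
		total += n
	}
	ch <- total
}

// parallelSum splits nums in two and sums the halves concurrently.
func parallelSum(nums []int) int {
	ch := make(chan int)
	go sumPart(nums[:len(nums)/2], ch) // first half, in its own goroutine
	go sumPart(nums[len(nums)/2:], ch) // second half, concurrently
	return <-ch + <-ch                 // receive both partial sums
}

func main() {
	fmt.Println(parallelSum([]int{1, 2, 3, 4, 5, 6})) // prints 21
}
```

Note that the channel both carries the data and synchronizes the goroutines, so no explicit locks are needed, which is much of what makes multi-core programming in Go straightforward.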

Most important of all, Go makes programming fun again.

Go try it!

Rod Montgomery==monty@starfief.com

Thanks for the summary. Alas, given my work load and energy level, I am not likely to play with programming for a while. I used to enjoy writing programs; I still use an accounting program I wrote in the 1980's, and it still produces the kinds of double entry accounting books that were standard in the 1980's, which look hopelessly out of date but turn out to be Good Enough. I wrote games, too, mostly in CBASIC (meaning they didn't have graphics; you gave orders to your star ship, but most of the visualization of what happened next was up to you).

I can think of some programs I'd like to write, but first I have books to do.

Thanks again.


Hi Jerry

Your comments on computer languages in the column today brought back a lot of memories. At one time I could claim to be conversant with (able to read and understand, mostly) as many as 15 languages, many of which are dead (or nearly so) now, including a couple that you mentioned in your article, like Modula-2 (I used the Logitech compiler you wrote about to play around with the language) and a little bit of Lisp, as well as C and C++ (obligatory, since I worked at Bell Labs), plus a number that you didn't mention, including Prolog, Smalltalk, Ratfor (RATional FORtran), awk, Korn Shell, and others.

The thing is, I don't think that C is nearly as ubiquitous as you imply in the column. Many, many IT shops out there use Java; many others use C# or C++. Of course both Python and Perl have a multitude of adherents, as does Ruby. I understand that Google uses Python for much of their programming. And don't forget Visual Basic.NET and JavaScript. Of these only C++ (and its cousin Objective-C) have close ties to C, and most implement the modular programming paradigm that you associate with Modula-2. Your closing comment calls for the adoption of an "easily learned and highly readable programming language", and even you have pointed out in past columns that Python seems to fill that requirement.

The problem with most current programming languages, Python included, is that in addition to learning the language itself, you've got to learn a huge library of Modules for performing various functions, essentially the modern embodiment of the promise of Modula-2, and that learning can be daunting indeed. If you don't believe me, take a look at the standard library that is supplied with Java, Python, or Perl. Granted, some of these modules implement rather advanced tasks, for example generating a PDF file directly or setting up and handling a network connection, either as a client or a server, but many others implement fairly basic things like string manipulation, math functions like sin, ln, and sqrt, or simple data structures like linked lists. Any program that does real work is most likely going to need to use several, and perhaps many, of these library modules.

On another subject, your "Bewildering Omission", the combination of custom styles and perhaps a bit of macro (Visual Basic) programming will do the trick for you. You are right about there not being much in the way of documentation out there, though. I would suggest that you look into styles carefully – you can define your own for whatever purpose you want, and also experiment with macros, if you have time. When I was writing requirements documents for software, which had to follow a very specific format and layout, I wrote my own template for a requirements document, which created an empty requirements specification document, complete with placeholders for all the required sections, and a set of macros, accessed from custom buttons on the toolbar, that would automatically insert a new requirement, build the requirements index at the end of the document, etc. It's not all that hard, once you learn how.


Karen Parker

I am pleased to say that Peter Glaskowsky and others have taken on the task of designing a novelist's template for my use. It should be available shortly, and I'll use it not only for the new books but to pound some of my old ones into shape for conversion to Kindle format.

Thanks for your remarks on the language debate. I agree that learning about libraries and what they can do can be daunting; but it can also be liberating, because with Modula-2, modules cannot affect other modules except in well-defined ways and with explicit permissions and declarations. With C libraries you can't ever be sure that what you do over here won't affect what's going on over there...

Subject: Programming Languages


I enjoyed the trip down memory lane regarding programming languages, and the recap of the long-ago contest between C and various structured language alternatives. But I feel a little unsatisfied that you ended the discussion with C as the winner. C may have been the winner back then, but I don't have the impression that it has remained the language of choice for modern program development. I thought most large projects in Windows were now done with C++, using extensive class libraries for the interface. Apple has moved from Pascal, to C, to Objective-C, for OS X programming. Objective-C is also the only way to write iOS programs. Google uses Java for Android development, and Microsoft uses its Java clone, C#, as the basis for Windows Phone 7.

Now I will grant you that all of these object oriented programming languages use syntax that is derived from C. Most books that strive to teach you one of these languages will either give you a short tutorial on C language conventions, or tell you to learn that first. BUT, as someone who used to write programs in Fortran, Basic, and C, and has been trying to master one of the object oriented languages, it hasn't been an easy transition for me. Syntax similarities aside, writing software in the object oriented way seems to require a significant change in the way that you think about the task of programming.

This amateur programmer would like to hear more commentary about the evolution of programming languages since C won out. In particular, I was struck by the comment you made about module independence in Modula-2. That sounds vaguely similar to what I've been reading about the characteristics of Objects in Java. So I'm wondering if structured programming didn't ultimately win out, only it happened via the incorporation of object oriented constructs into the successors to C.

CP, Connecticut

I think that you will find more than you want to know if you keep on reading this mailbag...

Subject: C over Pascal.

Pascal was my first programming language back in 1982, and I am very grateful. C beat out Pascal and Modula for a few reasons. One is that standard Pascal was close to useless: strings and file I/O were really not standardized, and it didn't offer separate compilation and linking options. Turbo Pascal and later Borland fixed a lot of that. Also, C is more concise: {} is easier to type and read than Begin and End for a programmer. Finally you had the influence of Unix. Unix was written in C, and it became a darling.

Also, sometimes the strict rules of Pascal cause more readability problems than they solve. In C I can declare a variable in any block of code, even the body of a loop, and the scope of the variable is that block. In Pascal the scope is always the function or procedure. Modula-2 gave you module scope, which was useful but still not as fine-grained as C's. The final problem was the lack of really good, supported compilers for Modula, Pascal, and Oberon. Delphi was a good system, but it was sort of Pascal and not really. In the end it all came down to three things: on Windows, Microsoft published its example code in C and C++; Linux, BSD, and Unix in general were written in C, and gcc is free and good; and on the Mac the examples are in Objective-C. C, C++, and Objective-C are the paths of least resistance.
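The block-scoping point this reader makes can be illustrated in Go, which inherits C's rule: a variable declared inside a loop body exists only in that block, whereas in standard Pascal every local belongs to the whole procedure. The function below is a made-up example, not anything from the letter.

```go
// Demonstrates block-level scoping, inherited by Go from C: 'neg'
// is declared inside the loop body and is visible only there.
package main

import "fmt"

// firstNegativeIndex returns the index of the first negative
// element of nums, or -1 if there is none.
func firstNegativeIndex(nums []int) int {
	for i, n := range nums {
		// 'neg' exists only inside this loop body; referring to
		// it after the loop would be a compile-time error.
		neg := n < 0
		if neg {
			return i
		}
	}
	return -1
}

func main() {
	fmt.Println(firstNegativeIndex([]int{3, 1, -4, 1})) // prints 2
}
```

In standard Pascal the equivalent of 'neg' would have to be declared in the procedure's var section, visible (and mutable) everywhere in the procedure, which is exactly the fine-grained-scope difference the letter describes.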

Java is kind of the best parts of both Pascal and C, in my opinion. Some people don't like it because it got a bad name for being too slow -- kind of like C back in the day. You also have Python, which is becoming very popular. C# is another good, safe alternative, but I have not used it yet. Both Java and Python make it easier to write safe code than C. For programmers C can be more readable than Pascal or Cobol. The problem is that C can be really abused as well. As far as the least readable, I have to vote for Perl. It is a freaking nightmare.

If you have any desire to start hacking again, I would suggest Python or maybe Squeak. They are both well documented and free. Squeak is Smalltalk, and Python is Python; I hate it because it uses indentation for blocks, which I find annoying, but I am old.


I like Python but it's not what I have in mind. What I want is a polished Modula-2 that compiles itself. I want modules that accomplish recognizable things and are unable to affect any other part of the program.

The discussion is just getting going...

Subject: The success of the iPad


For all the power that modern computers put in the hands of business people, it still is the case that much of the conduct of business involves two basic activities: reading, and writing. For a long time now, the writing needs of business people have been well served by computers. Their reading needs have not. Although most people now feel comfortable parsing email and surfing the web with a desktop or laptop computer, when it comes to long form content (books, long essays, white papers, etc), a desktop or laptop computer just doesn't cut it.

What the iPad represents is a portable computer that has been optimized for reading, be it email, web pages, or ebooks. With that comes a device that also has some utility as a writing tool. So if your business needs when traveling lean heavily toward reading, then the iPad could well be a better choice than a laptop.

I do find myself wondering why it took so long for the computer industry to figure this out. Now that Apple has produced a product to fill that need, it seems like such an obvious opportunity!

CP, Connecticut

I continue my love affair with the iPad, and I see more and more of them in play wherever I go. I believe the iPad is changing the world.

More cookies

Concerning my contention that cookies can be written anywhere the sender wants, see this Register article.

Don Miller

Peter Glaskowsky comments:

More cookies

Well, there are definitely more cookie-like methods described in this "evercookie" work than I knew about.

I haven't spent much time analyzing it, but I think only one of these methods produces a cross-site vulnerability -- a way to test the browser history for specific URLs. This means that if you visit youremployer.com, it can see whether you've been to pornsite.com. But if your employer has never heard of rarefetish.com, it can't discover you've been there.

In principle, someone could use brute force to test all possible domain names, but I don't think that's a practical threat unless someone turns up some additional weakness.

I think if I were a browser vendor I'd close up this loophole.

Otherwise I think the evercookie idea is just a way for a website to protect its own cookies, not peek at other sites' cookies.

. png