June 3rd, 2010
The last several postings in this blog thread have focused on the future of IT from a technical perspective. For those of us who work in the IT industry, this is no surprise: there’s no question that we want to use the new “stuff” as soon as it’s available, and the main questions are simply how quickly it will arrive, and just how much better it will be.
But there’s another perspective — the social perspective — in which we find ourselves asking, “How will we know when the future has arrived? Will we recognize it? Will we welcome it? What will we do with it? Will our children want to do the same kind of things that we do, and will we care?”
One way of approaching this part of the conversation is by reminding you of Clarke’s Laws, which were first published in his 1962 book, Profiles of the Future (the hyperlink I’ve provided here is for Amazon’s vintage-2000 paperback reprint):
- When a distinguished but elderly scientist says that something is possible, he is almost certainly right. When he states that something is impossible, he is probably wrong.
- The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
- Any sufficiently advanced technology is indistinguishable from magic.
There are, unfortunately, many ridiculous predictions made by “distinguished but elderly” scientists over the years. Here’s a representative list:
• In 1895, British Postmaster General Arnold Morley said, “Gas and water are necessities for every inhabitant of the Country. Telephones are not and never will be. It is no use trying to persuade ourselves that the use of the telephone could be enjoyed by the large masses of people in their daily life.” (see “Public Ownership and the Telephone in Great Britain,” Chapter VIII, p. 117)
• In 1903, soon after the first Wright Brothers flight, Rudyard Kipling predicted that airspeeds would reach only 300 mph by the year 2000.
• In 1927, J.B.S. Haldane predicted that the first landing on Mars would not take place for 10 million years.
• In 1943, IBM Chairman Thomas Watson may have said, “I think there is a world market for maybe five computers.” (see this Wikipedia article for discussion of alleged comment.)
• In 1945, FDR’s naval aide, Admiral William Leahy, said about the atomic bomb, “That is the biggest fool thing we have ever done … the bomb will never go off, and I speak as an expert in explosives.”
• In 1949, “Popular Mechanics,” forecasting the relentless march of science, wrote “Computers in the future may weigh no more than 1.5 tons.”
• In 1977, DEC founder/CEO Ken Olsen remarked at a World Future Society conference that “There is no reason why anyone would want a computer in their home.”
• In 1981, an obscure computer geek named Bill Gates allegedly said, “640K bytes ought to be enough for anybody.” (But see this article for Gates’ denial that he ever said such a thing.)
Here are two other things to keep in mind as we think about the social aspects of future advances in technology: first, the people least likely to anticipate how a new technology will be applied are the very inventors of that technology.
And second, when you give people dramatically improved technology (e.g., a tenfold improvement), they first use it to do the same old things they were doing before, but somewhat faster, cheaper, or more conveniently. Only later do they begin to recognize the entirely new and different things that the technology makes possible.
The first observation is not so surprising when you think about it. The inventors are desperately trying to persuade skittish investors, conservative business managers, and mainstream consumers that their new technology will be “useful” — so they try to imagine various applications and uses that could be seen as … well, “productive” and “serious” and “efficient”. Thus, Thomas Edison assured people that his newfangled invention, the phonograph, could be used to record the minutes of a business meeting, or a lecture by a university professor, or various other “serious” things. Using the phonograph to record music was way down near the bottom of his “top ten” list.
The second observation is often referred to as Fubini’s Law, which goes like this:
- People initially use technology to do what they do now – but faster.
- Then they gradually begin to use technology to do new things.
- The new things change life-styles and work-styles.
- The new life-styles and work-styles change society.
… and eventually change technology.
As we’ll discuss about three or four blog postings from now, much of this behavior comes from the fact that adults have a “legacy” of experiences and lessons and guidelines and “common sense” about how to live their lives, how to succeed, and how to get things done. Through their own Darwinian behaviors, they’ve weeded out the practices and behaviors that have not served them well, and emphasized the ones that have.
Thus, if you give them technology that is substantially more powerful than what they have now, their instinct is to make an incremental change in the practices and behaviors that have worked well in the past. There is an understandable reluctance to abandon everything that has worked well in the past, and try something “wild and crazy” with technology that is — or at least might be — ten times better than what they had before.
With children, the situation is likely to be different, because (a) they have little or no legacy to guide them or hold them back, and (b) they have a natural tendency to rebel against anything and everything their parents are doing. We’ll discuss this further in a future blog posting…