It’s commonly accepted that “techie” types exist in a constant state of enthusiasm for the future — incessantly seeking out the next cool technology. I disagree with that notion. I believe that most “techie” types actually exist in a sort of punctuated equilibrium. To be sure, that frenzy over the next shiny tech gizmo certainly exists; but those frenzies are often separated by large stretches where the tech luminaries espouse a sort of short-sighted, anti-innovation conservatism that would make Ned Ludd proud.

Take, for example, some of the responses to Google’s “Glass” project.

I want to state at the start that I’m not arguing that Glass is a great product, or that all the negative responses to it are invalid. Readers of this blog shouldn’t be surprised to read that I’m actually very skeptical of Glass as a product. While I’m keenly interested in wearable computing, the thought of Google being the company to take the lead in that area is terrifying. Jokes about Glass pumping advertising into users’ eye-holes are all too prophetic. That said, there has been a disappointing trend among some of the negative responses: dismissing Google Glass — and Heads-Up Display (HUD) style wearable computing in general — as variously “uncool”, “worthless”, “stupid”, and “harmful”.

I’m not going to link to or debate specific articles. What I want to discuss are some of the over-arching themes that I’ve seen. Specifically:

- The “distraction” argument
- The “in the moment” argument
- The “over sharing” argument
- The “tech makes us dumb” argument
- The “it makes you look like a dork” argument

The “distraction” argument is one that I actually will concede is a real danger of HUD-based wearable computing. No one but a complete loon (or perhaps Sergey Brin) would argue that projecting graphics into a person’s visual field on an ongoing basis doesn’t create the potential for dangerous distraction. But really this is a matter of degree. Certainly obscured vision is dangerous; but so are many other things related to our current technological lifestyles: impaired hearing from wearing ear-buds, talking on a cell phone while driving/walking, looking at a cell phone while driving/walking, the list goes on. Hell, I bet that primitive screw-heads looked at the first pocket-watch and said, “Holy shit, someone’s going to be looking at that and fall down a well!” As functioning humans we deal with distracting things on an ongoing basis. If the past is any indicator of the future, people who wish to remain among the living will figure out how to use HUD-based computers without driving off a cliff.

The “in the moment” argument and the “over sharing” argument are actually pretty similar. They usually go something like: “Hey nerd, why are you taking a video of that awesome rainbow when you could be enjoying it?” and “Great, just what we need, more nerds sharing pictures of their breakfast.” As with the “distraction” argument, the simplest way to dismiss these arguments is to look at the present and past. We’ve all seen the social-media addict at some event spending more time tweeting about it than enjoying it. Or the inane parade of Instagram photos of people’s food. Not being engaged in one’s surroundings or over-sharing are people problems — not technological problems. In fact, HUD-based computing should actually allow people to be more engaged in their lives…even as they obsessively share them.

The “tech makes us dumb” argument is so old that I’m sure someone used it on Johannes Gutenberg. Every advance in access to information that I’ve experienced in my lifetime — widespread computer adoption, the proliferation of cable television, the Internet, smartphones — has been met by a contingent of people claiming that increased access to information will somehow turn our brains to a pasty sludge. This is rank nonsense. Increased access to information can only make us better as a society. Sure, some people are lazy shits who can’t think for themselves; but making them work harder for information won’t make them smarter — they’ll just stop looking.

The argument that Google Glass makes you look like a dork is simultaneously true and utterly dumb. Sure, Glass looks ridiculous now — but so did the first Bluetooth headsets. The point is that society’s perceptions change. Yes, the first batch of nerds walking around in HUDs will be laughingstocks, but eventually HUDs will be seen as just another accessory.

I believe that there is a clear progression in computing. The overall path has been smaller, faster, more connected, and more ubiquitous¹. I also believe that wearable computing, in particular HUD-based computing, is the next logical step in that progression. It’s somewhat sad to see writers who pride themselves on their technical savvy dismiss the concept out of hand — but it’s actually not that unexpected.

  ¹ Some of my thoughts on “ubiquitous” computing can be found here