The first age of computer interfaces involved paper tape, punch cards, and other cumbersome methods that required specialized operators.
The second age let users type commands, aided by plastic template overlays that fit over computer keyboards or function-code cheat sheets taped next to terminals. Employees typically needed training and thick manuals to learn the software, and usually specialized in just a couple of applications.
The third age starred graphical user interfaces, where employees could often figure out how to do things on their own simply by clicking on menus and buttons to see what they did. Many software packages used a similar menu structure, allowing tech-savvy users to quickly learn new software.

[Image caption: User interfaces, hardware -- and IE sponsor IBM -- have changed a lot. New interfaces based on gesture and voice promise more dramatic evolutions, applications, and opportunities. (Source: Columbia University)]
Now we're entering the fourth age, an era that features more natural, intuitive interfaces.
Consider Microsoft's Kinect or the Siri virtual assistant on Apple's iPhone. In this fourth age, the computer can see who you are, what you're doing, and even where you are. You can use gestures, facial expressions, and voice commands to tell the computer what you want it to do. Or it can guess what you want, based on what you're doing, saying, or emoting.
For example, your computer could automatically turn on when you sit in front of it, or temporarily postpone reminders about upcoming meetings or incoming emails if you look busy or stressed. My computer already reminds me of meetings a day ahead of an appointment, which is useful if I have to prepare anything for the event -- but that reminder doesn't have to interrupt me when I'm on deadline or on a call.
Gesture control is even more interesting, but more problematic.
As I've discovered when trying to navigate the Hulu Plus menu on my TV from the couch across the room, gesture controls can be imprecise.
A better bet, when it comes to controlling a computer over a distance, is to use a smartphone or tablet as the input device or remote control. A TV set or display can automatically see which smart devices are nearby, and which are authorized to do what.
As we near the point where everyone carries a smartphone, we could arrive at a time when everybody has a personal remote for the home entertainment center -- or for office or factory-floor display screens.
Another problem with the kind of gesture controls displayed so ably in Minority Report is that we don't have a common vocabulary of hand gestures. Well, except the impolite ones.
People would need to learn which gestures to use to select, cut, and paste -- and, unless we all learn sign language, data entry via hand gestures would be extremely difficult. My Hulu Plus interface addresses this with an on-screen alphabet. Say, for example, I want to find a particular program. I point my Wii controller at one letter at a time, but it's tricky to point at a single letter from across the room, and pecking out a title this way takes a long time.
A combination of voice and gestures could address most of these concerns.
The immediate enterprise applications would be work environments that aren't conducive to keyboard input: factory floors, hospital operating rooms, classroom lecterns, moving vehicles, construction sites, retail showroom floors, battlefields, and accident or murder scenes.
Eventually, there will come a point where instead of us having to learn how to use our computers, our computers will have to learn how to understand us.
— Maria Korolov is president of Trombly International, an editorial services company that provides coverage of emerging technologies and markets. She has been a journalist for more than 20 years.