First, a disclaimer: I work for IBM, but all thoughts here represent my opinions and don't represent IBM's positions, strategies or opinions, nor those of my future cognitive computing doppelganger who I expect in the next few years will take over my role as blogger and social media specialist so that I can head out to the links and improve my cognitive golfing.
Late this morning, I attended an IBM People for a Smarter Planet Tweetchat concerning the promise and future of cognitive computing. Simply put, cognitive computing systems represent the next frontier of computing, the first two waves having centered on tabulation and, more recently, on programmable systems.
With cognitive computing, we've begun to see systems that learn and interact naturally with people to extend what either humans or machines could do on their own. In so doing, they can help human experts make better decisions by tapping into the vast complexities of big data.
Take Watson, for example, the first cognitive system, which IBM debuted in a televised Jeopardy! challenge in early 2011, where Watson bested the show's two greatest champions.
Watson's challenge was seemingly simple, yet it belied the great complexity going on under the covers. To win, Watson had to answer questions posed in every nuance of natural language -- puns, synonyms, homonyms, slang, and jargon -- by tapping into a massive reservoir of unstructured knowledge accumulated over years.
Using machine learning, statistical analysis, and natural language processing, Watson was designed to find and understand the clues in the Jeopardy! questions, compare the possible answers, then rank its confidence in their accuracy and respond -- typically in three seconds or less. I watched all the matches in their entirety, and to me it was one of those moon landing moments. Of course, I've long been accused by my colleagues of being from outer space, so what do I know?
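For the programmers in the audience, here's a toy sketch of what that generate-score-rank loop looks like in miniature. To be clear, this is my own back-of-the-napkin Python illustration, not IBM's DeepQA code; every candidate answer, evidence feature, weight, and threshold below is invented for demonstration.

```python
# Toy sketch of a DeepQA-style decision: score each candidate answer on
# several independent "evidence" features, combine the scores into a
# confidence, rank the candidates, and only buzz in when the confidence
# clears a threshold. All names and numbers here are hypothetical.

from dataclasses import dataclass


@dataclass
class Candidate:
    answer: str
    scores: dict  # evidence feature name -> score in [0, 1]


def confidence(candidate: Candidate, weights: dict) -> float:
    """Weighted combination of evidence scores -- a stand-in for the
    statistical model Watson learned from past Jeopardy! questions."""
    return sum(weights[feature] * score
               for feature, score in candidate.scores.items())


def answer_clue(candidates: list, weights: dict, threshold: float = 0.7):
    """Rank candidates by confidence; answer only if confident enough."""
    ranked = sorted(candidates,
                    key=lambda c: confidence(c, weights), reverse=True)
    best = ranked[0]
    best_conf = confidence(best, weights)
    return (best.answer, best_conf) if best_conf >= threshold else (None, best_conf)


if __name__ == "__main__":
    # Hypothetical candidates for the clue "This 'Father of Geometry'
    # wrote the Elements", scored by made-up evidence features.
    weights = {"text_match": 0.5, "type_match": 0.3, "popularity": 0.2}
    candidates = [
        Candidate("Euclid",     {"text_match": 0.9, "type_match": 0.95, "popularity": 0.8}),
        Candidate("Pythagoras", {"text_match": 0.4, "type_match": 0.9,  "popularity": 0.9}),
    ]
    answer, conf = answer_clue(candidates, weights)
    print(f"answer={answer!r} confidence={conf:.2f}")
```

The real system learned its weights statistically from thousands of past questions and weighed hundreds of evidence scores per candidate, but the shape of the decision -- score, rank, and only buzz in when confident -- is the same.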
But as I've written in this blog over the past year or so, newer generations of Watson are now being trained in oncology diagnosis for healthcare professionals, and in customer service as a support representative. IBM Research continues to push the boundaries of Watson by developing new interfaces that will allow humans and computers to interact more naturally.
What does the future direction of this technology portend for us mere cognitive mortals? Dharmendra Modha, the IBM Research senior manager who founded the Cognitive Computing group there, opened today's Tweetchat by explaining that "Cognitive computing moves beyond today's computers that at heart are just calculators, and would lead to fundamentally new architectures, algorithms, and applications."
Steve Hamm, an IBM communications strategist and co-author of a new book on cognitive computing, Smart Machines: IBM's Watson and the Era of Cognitive Computing, explained a key distinguishing feature of the cognitive era: that traditional computers needed programmers, but cognitive machines "will learn."
Dharmendra Modha echoed this sentiment, explaining we would require "a completely new programming paradigm, a new way to think about computing and how to harness it" and eventually start "moving from programming to learning."
Participant Adrian Bowles chimed in and explained that we would be "getting away from deterministic outcomes, and more into 'real life' thinking."
Take that, HAL.
So what are the benefits and major applications we'd likely see emerge in this arena? Mr. Modha explained that we're already well under way with some of the WatsonPaths health diagnostics capabilities, but he painted a picture of an even more promising future:
"Imagine health care where signals such as blood pressure, temperature, smell, oxygen levels could be acted upon in real-time. Imagine a hand-held device for managing chronic illnesses such as asthma to avoid health emergencies." A device, presumably, which would be powered by cognitive computing capabilities.
IBM big data guru James Kobielus suggested the major applications would include "natural language processing, sentiment analysis, pattern recognition, and signal intelligence," all of which would have dramatic impact on a far-reaching range of industries and knowledge-intensive endeavors.
One mind-bending observation by Mr. Modha brought home the idea that we're entering a new computing paradigm, when he mentioned some "concept models" that illustrate the value of the "SyNAPSE" project he's working on: "Imagine a new paradigm where instead of bringing data to computation, we bring computation and intelligence to data."
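Here's a minimal sketch of what that inversion means in practice, using plain Python lists as stand-ins for remote data stores; it's a conceptual illustration of moving computation to data, not the SyNAPSE architecture itself:

```python
# "Bring computation to data": instead of copying every record to one
# central machine, ship a small summarizing function to each place the
# data lives and combine only the compact results. The "nodes" here are
# plain lists standing in for remote data stores.

def local_summary(records):
    """Runs where the data lives; returns only a (count, total) pair."""
    return len(records), sum(records)


def global_mean(nodes):
    """Combines the small per-node summaries instead of the raw data."""
    summaries = [local_summary(records) for records in nodes]
    count = sum(c for c, _ in summaries)
    total = sum(t for _, t in summaries)
    return total / count


if __name__ == "__main__":
    # e.g. temperature readings held at three separate sites
    nodes = [[98.6, 99.1], [97.9, 98.4, 98.8], [99.5]]
    print(f"mean reading: {global_mean(nodes):.2f}")
```

Only the tiny per-node summaries travel over the wire, never the raw records -- that's the inversion Modha is pointing at, pushed much further in SyNAPSE's brain-inspired hardware.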
Steve Hamm suggested the effects of such a change could be powerful in everything from city planning to disaster management: cognitive systems could help "anticipate the unintended consequences of actions," providing for "a better informed citizenry, transparency, and better democracy."
Hopefully these would also provide a Mini-Cognitive Me to author my blog posts -- I've got a real challenge in and around the greens I need to get working on!
I thought IBM's Bob Pulver made a keen observation about cognitive applications, one apropos of my only-partly-jesting idea of a Mini-Me sitting on my shoulder. Bob explained such systems could "act as an advisor to anyone who cannot possibly absorb all knowledge," and provide "insights to make optimal decisions."
Adrian Bowles responded, suggesting that "cognitive computing acts as a force multiplier for human cognition requiring complex reasoning, not (yet) a replacement."
So, you and I will be here in this cognitive future -- we're just going to have a whole lot of help in the form of these systems to get more done, more effectively and more quickly.
However, it's going to take a whole lot of computing horsepower. Mr. Modha explained the "end goal is to create a 'brain-in-a-box' with 100 trillion synapses while consuming merely 1 kilowatt of power… To approach [the] brain's computing efficiency, we are combining neuroscience, supercomputing, and nanotechnology."
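It's worth pausing on how audacious that target is. A quick back-of-the-envelope calculation (mine, not IBM's):

```python
# My own back-of-the-envelope arithmetic on Modha's stated goal:
# 100 trillion synapses on a 1 kilowatt power budget.
synapses = 100e12        # 100 trillion synapses
power_watts = 1_000.0    # 1 kilowatt
per_synapse = power_watts / synapses
print(f"{per_synapse:.0e} watts per synapse")  # -> 1e-11 W, i.e. 10 picowatts
```

Ten picowatts per synapse -- that's why the project reaches for neuroscience and nanotechnology rather than conventional chip design.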
Does that mean you'll need a chip implanted in your head?
Perhaps someday, especially those of us in Texas, but not quite just yet.
If that futuristic notion scares you, Steve Hamm suggested not to worry, that your Mini-Me Watson would never make the step up to a Silicon Bard:
"Computers will be best at calculations, finding patterns, extending our senses, searching and memory. We humans will be superior in intuition, creativity, reasoning, and decision making. We have empathy, a moral compass. We must govern the machines."
I, for one, am all for governing the machines and not having the machines govern us!
So the lingering question I had throughout the Tweetchat was, can all of the machines and humans just get along?
Hamm indicated, "The best of human and machine combined would perform better than either by itself," again coming back to this notion of the smart machine as a powerful assistant.
In other words: If you knew what you needed to know, at the time you needed to know it, would you have known enough to know you needed to know more?
Or something like that.
Go here to brush up on your own cognitive computing skills, and check out the short but informative video below to help you get in touch with your inner Watson and fast!