As I mentioned in my last post, yesterday was day three of Information On Demand 2012 here in Las Vegas.
There was LOTS going on out here in the West.
In the IOD 2012 day 3 general session, IBM Fellow Rob High explained how IBM's Watson technology may soon help drive down call center costs by as much as 50 percent, using Watson's intelligence engine to help customer service reps respond faster to customer queries.
During the morning session, we also heard from IBM's own Craig Rinehart about the opportunity for achieving better efficiencies in healthcare using enterprise content management solutions from IBM.
I nearly choked when Craig explained that 30 cents of every dollar spent on healthcare in the US is wasted, and that despite spending more on healthcare than any other country, the US ranks only 37th in quality of care.
Craig explained the IBM Patient Care and Insights tool was intended to bring advanced analytics out of the lab and into the hospital to help start driving down some of those costs, and more importantly, to help save lives.
We also heard from IBM fellow and CTO of IBM Watson Solutions' organization, Rob High, about some of the recent advancements made on the Watson front.
High explained the distinction between programmatic and cognitive computing, the latter being the direction computing is now taking, and an approach that provides for much more "discoverability" even as it's more probabilistic in nature.
High walked through a fascinating call center demonstration, in which Watson helped a call center agent respond to a customer query more quickly by filtering through thousands of possible answers in a few seconds, then homing in on the ones most likely to answer the customer's question.
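To make that filter-then-rank idea concrete, here's a toy sketch (my own illustration, not Watson's actual pipeline): score each candidate answer against the query with a crude word-overlap measure, then surface only the most likely few.

```python
# Toy sketch of filtering a pool of candidate answers and homing in
# on the most likely ones. The scoring function here is deliberately
# simplistic (word overlap); a real system would weigh far richer evidence.

def score(query, candidate):
    """Fraction of the query's words that appear in the candidate answer."""
    q_words = set(query.lower().split())
    c_words = set(candidate.lower().split())
    return len(q_words & c_words) / len(q_words)

def top_answers(query, candidates, k=3):
    """Rank all candidates by score and keep the k most likely."""
    ranked = sorted(candidates, key=lambda c: score(query, c), reverse=True)
    return ranked[:k]

candidates = [
    "Reset your router by holding the power button for ten seconds.",
    "Our billing cycle starts on the first of each month.",
    "You can reset your account password from the login page.",
]
print(top_answers("how do I reset my password", candidates, k=1))
```

The point of the demo was less the scoring itself than the probabilistic framing: the system surfaces a ranked shortlist with confidence, rather than a single programmatic answer.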
Next, we heard from Jeff Jonas, IBM's entity analytics "Ironman" (Jeff also just completed his 27th Ironman triathlon last weekend), who explained his latest technology, context accumulation.
Jeff observed that context accumulation was the "incremental process of integrating new observations with previous ones."
Or, in other words, developing a better understanding of something by taking into account more of the context around it.
Too often, Jeff suggested, analytics has been done in isolation; the future of Big Data, he argued, is "the diverse integration of data," where "data finds data."
His new method allows for self-correction and a high tolerance for disagreement, confusion, and uncertainty, one in which new observations can "reverse earlier assertions."
For now, he's calling the technology "G2," and explains it as a "general purpose context accumulating engine."
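To illustrate the idea (this is my own toy sketch, not Jonas's actual G2 engine), imagine an engine that integrates each new observation about an entity with what it already knows, and lets a newer observation reverse an earlier assertion instead of discarding the disagreement:

```python
# Toy sketch of "context accumulation": new observations are integrated
# with previous ones, and conflicting evidence can reverse an earlier
# assertion rather than being thrown away.

class ContextAccumulator:
    def __init__(self):
        self.entities = {}  # entity id -> accumulated attributes

    def observe(self, entity_id, attributes):
        """Integrate a new observation with everything seen so far."""
        known = self.entities.setdefault(entity_id, {})
        for key, value in attributes.items():
            if key in known and known[key] != value:
                # Disagreement is tolerated: the newer observation
                # reverses the earlier assertion.
                print(f"reversing {key}: {known[key]!r} -> {value!r}")
            known[key] = value

engine = ContextAccumulator()
engine.observe("cust-42", {"city": "Reno"})
engine.observe("cust-42", {"phone": "555-0100"})
engine.observe("cust-42", {"city": "Las Vegas"})  # reverses the earlier city
print(engine.entities["cust-42"])  # accumulated, self-corrected context
```

The "data finds data" part would come from matching incoming observations against every entity already accumulated, so that related records surface each other as context grows.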
Of course, there was also the Nate Silver keynote, the capstone of yesterday's morning session. For a taste of the ideas Nate discussed, I'll refer you back to the interview Scott and I conducted; your best bet is to buy his book if you really want to understand where he thinks we need to take the promise of prediction. So congrats, Nate, and thanks again for a scintillating interview.