It was a great pleasure and an honor to be on the show and chat. Thanks!! (Kim S: That is a great way to do it. A computer can't touch that mode of testing for the foreseeable future.) Bye. //S
Kim D: Such a device would indeed have come in handy -- but of course such a device is really just tantamount to having a bunch of humans with you in support, b/c humans would've built the device. I think that it's possible, in 2011, to bring certain AI technology into high-stakes testing environments (the SAT, eg) and rely on it. That would be cheating, but testing organizations are going to have to deal w/ it. When the glasses you wear can send text back to a parser and a reasoning engine, it will be cheating. And we're close, if not there now. //S
Mary J: Courtesy of the internet, yes. On-board only, no. So the internet becomes a necessary part of machine intelligence down to the small artifact. This is just a weird (but important) fact about our world. //S
Mary J: Yes, I agree. The internet is key to AI. Absolutely. It might have evolved differently, if local horsepower had been higher. But that's not how it went. So if you want an iPhone to do automated theorem proving, as I do, you have to go outside the local device to the internet. //S
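The offloading pattern described above -- a thin client shipping a theorem-proving task to the network because the local device can't handle it -- can be sketched as below. The endpoint URL and JSON shape are hypothetical, invented purely for illustration; the point is only the pattern: the device sends the problem out and gets a verdict back.

```python
# Sketch of offloading theorem proving from a thin client (e.g. a phone)
# to a remote service. PROVER_URL and the payload format are hypothetical.
import json
import urllib.request

PROVER_URL = "https://prover.example.com/prove"  # hypothetical endpoint

def build_request(axioms, conjecture):
    """Package a proving task as an HTTP request for the remote service."""
    payload = json.dumps({"axioms": axioms, "conjecture": conjecture}).encode()
    return urllib.request.Request(
        PROVER_URL, data=payload,
        headers={"Content-Type": "application/json"})

def prove_remote(axioms, conjecture, timeout=30):
    """Ship the task to the network and return the service's verdict."""
    with urllib.request.urlopen(build_request(axioms, conjecture),
                                timeout=timeout) as resp:
        return json.load(resp)  # e.g. a dict like {"status": "Theorem"}
```

All the heavy search happens on the far side of the network; the device contributes nothing but the problem statement and a display for the answer.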
Hi Kim S: I don't think B-C's right -- but I'm no psychologist. Autism cannot be a necessary condition for evil. Too narrow a correlate. That evil is treatable (in a non-medical sense) is the core proposition of Christianity, which might be why M. Scott Peck has been a leading figure in this area (eg /People of the Lie/). //S
Thanks! Also, I'd like to point out how the Internet plays such a key role in a lot of the AI applications we discussed. So the future seems to depend as much on the network as on the processors/robots/etc. that tie into it. Do you agree?
Mary J: My interest in mathematizing evil in formal logic and then implementing it in working software (in, eg, a synthetic character) ultimately stems from my longstanding interest in the basic moral structure of the universe, and investigating whether logic is up to the task. Also, it's fun -- for me anyway. //S
We can't (if I'm right) give emotions to robots; but we /can/ give them the capacity to impel humans who behold them to ascribe emotions to the robots. And that might be good therapy. However, I view it as a form of deception; as -- if you will -- taking the blue pill. I'm only in favor, personally, of red-pill robots. //S
Re: "some kinds of cognition which are not in principle reducible to codeable propositions?" Yes, I believe that such irreducible cognition exists in the human mind. Like what? I would indeed say that aesthetic sensibility is beyond information-processing. Also a capacity to grasp the infinite. Etc. //S
Re self-fixing computers: That's actually a wonderful issue, b/c it's a superset of the sub-field of automatic programming. AP is: Give a machine a function, and then have it /automatically/ generate the code that computes that function. /Very/ hard; no progress to speak of has been made. //S
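To make the difficulty concrete, here is a toy sketch (my own illustration, not an existing system) of automatic programming by brute-force enumerative search: given input/output examples of the target function, enumerate small arithmetic expressions until one fits every example. Even this tiny grammar blows up combinatorially with depth, which hints at why the general problem is so hard.

```python
# Toy automatic programming: enumerate expressions over a tiny grammar
# (expr -> x | 0..3 | (expr + expr) | (expr * expr)) until one matches
# all of the given input/output examples. Purely illustrative.
import itertools

def synthesize(examples, max_depth=2):
    """Return (source, f) with f(x) == y for every (x, y) example,
    or None if nothing in the search space fits."""
    def grow(depth):
        if depth == 0:
            yield ("x", lambda x: x)
            for c in range(4):
                yield (str(c), lambda x, c=c: c)
            return
        # Combine all pairs of smaller expressions with + and *.
        pairs = itertools.product(list(grow(depth - 1)), repeat=2)
        for (sa, fa), (sb, fb) in pairs:
            yield (f"({sa}+{sb})", lambda x, fa=fa, fb=fb: fa(x) + fb(x))
            yield (f"({sa}*{sb})", lambda x, fa=fa, fb=fb: fa(x) * fb(x))
    for depth in range(max_depth + 1):
        for src, f in grow(depth):
            if all(f(x) == y for x, y in examples):
                return src, f
    return None

# Example: recover f(x) = 2x + 1 from three input/output pairs.
src, f = synthesize([(0, 1), (1, 3), (2, 5)])
```

Note the search is doubly exponential in depth; raising `max_depth` even slightly makes it intractable, and this grammar can't express conditionals, loops, or recursion at all.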
Professor, your skepticism about progress towards the goal of strong AI seems very well justified. Do you go beyond that and agree that there are some kinds of cognition which are not in principle reducible to codeable propositions? I am thinking partly of the distinction between saying and showing, partly of emotions and aesthetic responses.
I would like to see AI applications integrated with Microsoft Kinect-type instruments. Based on our facial expressions, these instruments could play a nice song or say some soothing words. What is your opinion on this idea?
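As a toy sketch of the idea (not actual Kinect SDK code -- the expression label here is just a string that a real face tracker would supply), the mapping from detected expression to soothing response might look like:

```python
# Hypothetical expression-to-response dispatcher. A real system would get
# the expression label from Kinect face tracking; here it's a plain string.
SOOTHING_RESPONSES = {
    "sad": "Playing a calm song...",
    "angry": "Take a deep breath. Everything is okay.",
    "happy": "Glad to see you smiling!",
}

def respond_to_expression(expression: str) -> str:
    """Choose a soothing action for the detected facial expression."""
    return SOOTHING_RESPONSES.get(expression, "Hello there.")
```

The hard part, of course, is the perception step that produces the label, not this dispatch.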
Initially, a computer will reflect the intentions of its human creator. So to some extent you can trust the machine if you trust the creator. Later on, that will not be true. But then we will not understand what is going on anyway, without merging with the machines.
For military drones, I have read recently that the sensors are just not adequate. They have an 800m range but only an 8m line of sight. How could we sense everything? What's to stop a non-networked actor?
His championing of philosophy in his work is very uplifting! You can look at it another way: if we lose philosophy, then humankind really is doomed! So we need to get to that in this discussion. It is almost the opposite of war.
The whole Amazon.reader debate is doubly stupid. It's stupid to think that there's any e-book buyer who doesn't know Amazon's URL, and it was stupider to let ICANN launch the whole free-form TLD initiative in the first place.