
AAAI-15 Conference – Day 0 (Open House)

My observations from the AAAI-15 Open House, January 26, 2015, in Austin, Texas.

“Leveraging Multi-modalities for Egocentric Activity Recognition”

Using Google Glass and similar wearable tech, the researchers combined multiple sensing modalities to improve the accuracy of activity recognition; e.g. determining whether you are brushing your teeth or doing laundry.  A rough sketch of the fusion idea is below.
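
The poster didn't walk through code, but a minimal sketch of one common way to combine modalities is late fusion: extract features per modality and concatenate them before classification.  Everything below (the two modalities, the feature shapes, the classifier choice) is my own illustrative assumption, not the authors' method.

    # Late-fusion sketch (illustrative only; all names and shapes are made up).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Fake per-window features from two hypothetical modalities.
    camera_feats = rng.normal(size=(200, 16))  # e.g. egocentric video features
    motion_feats = rng.normal(size=(200, 8))   # e.g. head-motion statistics
    labels = rng.integers(0, 2, size=200)      # 0 = brushing teeth, 1 = laundry

    # "Fusion" here is just concatenating the modality features; the actual
    # work leverages richer techniques, but the gain comes from the same idea:
    # each modality contributes evidence the other lacks.
    X = np.concatenate([camera_feats, motion_feats], axis=1)
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    print("training accuracy:", clf.score(X, labels))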

“Cogsketch: Sketch Understanding for Cognitive Science and Education”

This was a pretty interesting poster demonstrating software that can classify objects in a hand-drawn sketch and then use that information in various interactive contexts; e.g. educational programs for kids where they draw answers to questions.  The software will eventually go open source, but the code can be downloaded now.

“2012 BotPrize Champion: Human-like Bot for Unreal Tournament”

The idea here is that you play Unreal Tournament for a while and then identify which other player is the bot; sort of a non-verbal Turing Test.  I sat down, played, died a lot, and could not really determine who was or wasn’t a bot.  I think the setup is a bit shaky, though: you’re really distracted by the game itself, the interaction is limited (shoot shoot bang bang), and real-life players vary greatly.

“Going Beyond Literal Command-Based Instructions: Extending Robotic Natural Language Interaction Capabilities”

This poster was interesting in that it focused on human-robot interaction and on designing the bot to ask questions that clarify the intention of the human asking it to perform a task, especially when a command can be interpreted differently depending on tone of voice, etc.  Some compelling results, but still very stochastic in methodology.  A toy sketch of the ask-when-ambiguous idea follows.
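
The core loop, as I understood it: parse a command into candidate intents, and ask a clarifying question whenever more than one reading survives.  The sketch below is my own toy reconstruction, not the authors' system; the command strings, intents, and function names are all hypothetical.

    # Ask-when-ambiguous sketch (my toy reconstruction, not the authors' system).
    def interpret(command: str) -> list[str]:
        """Return every plausible intent for a command (hypothetical lookup)."""
        intents = {
            "get the mug": ["fetch(mug_kitchen)", "fetch(mug_office)"],
            "stop": ["halt_motion()"],
        }
        return intents.get(command, [])

    def respond(command: str) -> str:
        candidates = interpret(command)
        if len(candidates) == 1:
            return "executing " + candidates[0]
        if len(candidates) > 1:
            # More than one plausible reading: ask instead of guessing.
            return "Did you mean " + " or ".join(candidates) + "?"
        return "Sorry, I don't understand."

    print(respond("get the mug"))  # asks which mug rather than picking one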

“An Agent-Based Model of the Emergence and Transmission of a Language System for the Expression of Logical Combinations”

This poster was pretty interesting: the researchers created a virtual environment for independent agents which, equipped with separate but overlapping concept sets, are placed in scenarios with each other, generate utterances to represent concepts, and through the back-and-forth interplay converge on a generated, agreed-upon language for describing the objects in the environment.  However, the result relies a lot on the contrivances of the environment, and the system is modeled mostly on propositional logic (all coded in Prolog).  A rough sketch of the convergence dynamic is below.
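
The dynamic resembles the classic naming game: agents invent words for concepts, and successful interactions prune competing words until the population agrees.  Below is a minimal Python reconstruction of that dynamic under my own assumptions; the actual system is in Prolog and handles logical combinations, not just single objects.

    # Naming-game-style sketch (my reconstruction; the real system differs).
    import random

    random.seed(1)
    OBJECTS = ["ball", "box", "cone"]

    class Agent:
        def __init__(self):
            # Each agent tracks candidate utterances per concept.
            self.lexicon = {obj: set() for obj in OBJECTS}

        def utter(self, obj):
            if not self.lexicon[obj]:
                # No word yet for this concept: invent one.
                self.lexicon[obj].add("".join(random.choices("abcdefg", k=3)))
            return random.choice(sorted(self.lexicon[obj]))

    agents = [Agent() for _ in range(5)]
    for _ in range(2000):
        speaker, hearer = random.sample(agents, 2)
        obj = random.choice(OBJECTS)
        word = speaker.utter(obj)
        if word in hearer.lexicon[obj]:
            # Successful communication: both collapse to the shared word.
            speaker.lexicon[obj] = {word}
            hearer.lexicon[obj] = {word}
        else:
            hearer.lexicon[obj].add(word)

    # After enough interactions the population converges on one word per object.
    print([sorted(a.lexicon["ball"]) for a in agents])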

The Future of (Artificial) Intelligence (talk by Stuart Russell)

He gave a quick overview of recent advancements in AI: deep learning, poker “solved”, etc., the implication being that we’ve been getting steadily better at maximizing expected utility.  DeepMind can learn to play Space Invaders from scratch simply by watching the pixels on the screen, and can learn to drive in a similar way.  Watson won Jeopardy! (funny slide of one of the contestants having written next to his answer that he bows to “our future computer overlords”).  A cat-like Google robot prototype can traverse ice and regain its balance if pushed.  Bots can do your laundry.

However, there are misconceptions in the media.  There’s an odd quote (should look up) from the Harvard Business Review predicting that AI will soon have IQs greater than 90% of the workforce, which, as we know in the AI community, is ridiculously far from reality.  We have made progress along narrow corridors of the science, but no breakthroughs that would constitute anything like a sentient IQ.  There is also a misconception that progress in AI follows Moore’s law, which is also pretty false.  (Note to self: Minsky recently made a statement about how there hasn’t really been any progress in AI for 20+ years.)

He injected a note about work he has been doing to help detect nuclear explosions around the globe, strongly suggesting that AI tech be used to help and improve the world.  Apparently there have been 2k+ nuclear detonations since 1945, and there are now detection sites all over the world listening for vibrations in the earth.  Distinguishing a nuclear detonation from other types of disturbances is a challenge his AI is helping to solve, and it has significantly improved detection accuracy.

What about the impact, and the realization of fears, if AI succeeds?  Is a human with a voice (think Hitler) more powerful than a robot army?  How likely is a robot to exhibit spontaneous malevolence versus competent decision making?  There were references to Bostrom’s “Superintelligence” (note to self: planning to read).  There is some movement toward establishing counter-AI regulation and security to limit the impact of AI on humans.

Russell’s proposal: create provably beneficial AI.  Set limitations, verifiers, etc.  The key is cooperative value alignment, which means that the robot has the human’s interests in mind, not its own, and wants to make the human “happy”.  There are obviously problems to be addressed.  How do we verify behavior?  The bot may be programmed to have the human’s interests in mind, but humans are fallible, fickle, inconsistent, and weak-willed.  Even if the bot asks questions, how will it know what’s right?  Objective functions have largely been presumed, but really they need to be learned/generated (so the race to solve this is on!).  A toy sketch of what “learning the objective” could look like follows.
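
One way to picture “learn the objective rather than presume it”: the robot keeps a belief over candidate reward functions the human might have and updates it from the human’s observed choices.  The setup below (the two hypotheses, the softmax choice model, the numbers) is entirely my own illustration, not Russell’s formulation.

    # Objective-learning sketch (my illustration, not Russell's formulation).
    import math

    # Two hypothetical reward functions the human might have.
    rewards = {
        "likes_tidy":  {"clean": 1.0, "play_music": 0.0},
        "likes_music": {"clean": 0.0, "play_music": 1.0},
    }
    belief = {"likes_tidy": 0.5, "likes_music": 0.5}

    def update(belief, human_choice, temperature=1.0):
        """Bayesian update, assuming the human noisily prefers higher-reward actions."""
        new = {}
        for hyp, prob in belief.items():
            r = rewards[hyp]
            z = sum(math.exp(v / temperature) for v in r.values())
            new[hyp] = prob * math.exp(r[human_choice] / temperature) / z
        total = sum(new.values())
        return {h: p / total for h, p in new.items()}

    # Watch the human act, and shift belief accordingly.
    for choice in ["clean", "clean", "play_music", "clean"]:
        belief = update(belief, choice)
    print(belief)  # probability mass shifts toward "likes_tidy"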

He also injected other things we should maybe be concerned about outside of AI; e.g. the ability to literally type out the structure of a genome and generate the organic equivalent, which could potentially be used to modify the genomic structure of babies in the womb.

He brought up the idea of forming a robotics commission in the government, mostly out of the need for more understanding of robotics within government.  There are some laws that need to be changed; e.g. current law more or less allows an ISP to own bot AI that you put through their service.  But really, we mostly need common-sense containment in the same way we have it for nuclear energy / bombs; i.e. the same knowledge can be used to create H-bombs or to generate energy efficiently in a fission reactor.