
AAAI-15 Conference – Day 1

My observations from the 1st full day of sessions at the AAAI-15 artificial intelligence conference in Austin, Texas.

Hot Talk: General Game Playing

This talk looked at the utility of game playing as a proving ground for AI. It contrasted the typical approaches, which presume too much, with more realistic ones that have to contend with incomplete game trees, lack of prior knowledge of the problem space, etc. 2007 brought us effective Monte Carlo tree search. Various ways were cited to pre-process information, handle different types of games, segregate concerns, etc.

Hot Talk: Robocup @ Home

This was an overview of research being done testing robots in real human environments, e.g. coffee shops and malls: getting a beer, carrying a tray on request, etc. The emphasis is on working out the kinks of human-robot interaction, and maybe dispelling some of the fear. It involves combining many, many aspects of AI: semantic 3D mapping, etc.

“This Time the Robot Settles for a Cost”

This paper on temporal logic planning involves simulating a janitorial robot in an office, with a list of tasks to accomplish. The solution tries to bridge the gap between the AI and robotics approaches.

Approach in robotics:

  • continuous motion planning
  • LTL (linear temporal logic) is the de facto language

Approach in AI:

  • discrete planning
  • PDDL is the de facto language

Solutions tried involved translating “co-safe LTL” into a DFA.  Information flows as: high-level planner -> synergistic planner -> low-level planner.  Problems come into play in the form of closed doors (closed connections/edges).
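
To make the DFA idea concrete, here is a tiny hand-rolled sketch (mine, not the paper’s): a co-safe task like “eventually pick up the slippers, then eventually reach the goal” becomes a small automaton that the high-level planner can advance as the robot observes labels along its route.

```python
# Illustrative sketch only (not from the paper): a hand-built DFA for the
# co-safe LTL task F(slippers & F goal), i.e. "eventually pick up the
# slippers, and after that eventually reach the goal".
DFA = {
    # state -> {observed label -> next state}
    "q0": {"slippers": "q1"},   # still waiting to pick up the slippers
    "q1": {"goal": "q2"},       # slippers acquired, heading to the goal
    "q2": {},                   # accepting sink: task complete
}
ACCEPTING = {"q2"}

def advance(state, labels):
    """Advance the DFA given the set of atomic propositions observed at the
    robot's current location; irrelevant labels leave the state unchanged."""
    for label, nxt in DFA[state].items():
        if label in labels:
            return nxt
    return state

# Example run: label sets observed along a hypothetical trajectory.
state = "q0"
for labels in [set(), {"slippers"}, set(), {"goal"}]:
    state = advance(state, labels)
print(state in ACCEPTING)  # True: the co-safe task was satisfied
```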

Example: telling the bot “don’t step on the carpet with dirty feet (until you have slippers on)”.  However, the carpets lie between the bot and the slippers, which leads the bot to ask the user a question.  The user in this case replies by assigning a “cost” to stepping on the carpet.  It becomes a shortest path problem with costs applied to paths, as a matter of “least cost of violation”.
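
Here is roughly how I picture the “least cost of violation” reduction, as a minimal sketch (the graph, penalty value, and function names are mine, not the paper’s): the user-assigned violation cost simply gets added to the weight of any edge that crosses the carpet, and route selection falls out of an ordinary shortest-path search.

```python
import heapq

# Minimal sketch (not the paper's planner): once the user assigns a cost to
# violating "don't step on the carpet", the constraint becomes a soft one and
# routing reduces to shortest path on a graph whose carpet edges carry an
# extra penalty ("least cost of violation").
CARPET_PENALTY = 10.0  # hypothetical cost the user assigns to the violation

# graph: node -> list of (neighbor, distance, crosses_carpet)
graph = {
    "start":    [("hall", 1.0, False), ("carpet", 1.0, True)],
    "hall":     [("slippers", 4.0, False)],
    "carpet":   [("slippers", 1.0, True)],
    "slippers": [],
}

def least_violation_path(src, dst):
    """Dijkstra over distance plus violation penalties."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, dist, crosses_carpet in graph[node]:
            extra = CARPET_PENALTY if crosses_carpet else 0.0
            heapq.heappush(pq, (cost + dist + extra, nbr, path + [nbr]))
    return float("inf"), None

print(least_violation_path("start", "slippers"))  # detours via the hall
```

With a high enough penalty the bot detours via the hall to fetch the slippers first; lower the penalty and crossing the carpet becomes the cheaper route, which is exactly the trade-off the user’s answer encodes.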

The animation of the path taken by the bot seemed decent.  You can see that the bot handles exceptions, e.g. closed doors, on the fly.

(Note to self: I can almost see the value, implied in the government funding meeting, of refreshing knowledge with new generations by allowing them to run through experiments that use the same techniques as 20 years ago.)

“Strong Temporal Planning with Uncontrollable Durations”

How to deal with temporal planning when tasks may or may not take a certain amount of time…

  • actions have durations
  • conditions can be non-instantaneous
  • effects at-start and at-end

PDDL is used as the language.

Solutions tried are extensions of “forward state-space temporal planning”.  When introducing uncertainty about task duration into a plan using standard FSSTP, trying to get the system to re-order constraints doesn’t work: you get an incomplete solution, even when the algorithm tries to re-order tasks to suit the overall goals.

They tried a variety of ordering techniques within the main technique and landed on disjunctive ordering, but it ended up being really slow.

Note: the speaker kept throwing around acronyms (COLIN, SMT, DR, TO, LAD, etc.) and was not particularly good at explaining the research.

“Robustness in Probabilistic Temporal Planning”

This research involved multi-agent scheduling: how do you tell how good a given schedule is?  The solution uses the iRobot Create platform and probabilistic reasoning techniques.

An iRobot example video was shown, with the robots avoiding collisions with each other and some level of communication between them.

Given a simple temporal network (nodes = events, edges = time differences between events), there’s a challenge in responding to new constraints in real time.  They introduce “flexibility”, where naive flexibility just sums up the slack of every event.  The problem with naive flexibility is that it double-counts events, so they devised a way to minimize the duplicates.
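
For my own notes, a rough sketch of what naive flexibility looks like on a toy STN, assuming the usual distance-graph encoding (the example network and numbers are made up): compute each event’s feasible time window relative to a reference event and just add the widths up; the double counting the speaker mentioned comes from correlated events each contributing their full window.

```python
# Toy sketch of "naive" flexibility on a simple temporal network, using the
# standard distance-graph encoding: an edge u -> v with weight w encodes the
# constraint t_v - t_u <= w. Not the paper's method, just my illustration.
INF = float("inf")

def naive_flexibility(nodes, constraints, ref="z"):
    # All-pairs shortest paths (Floyd-Warshall) on the distance graph.
    d = {u: {v: (0 if u == v else INF) for v in nodes} for u in nodes}
    for u, v, w in constraints:
        d[u][v] = min(d[u][v], w)
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # Each event's window relative to ref is t_e - t_ref in [-d[e][ref], d[ref][e]];
    # naive flexibility just sums the window widths over all events.
    return sum(d[ref][e] + d[e][ref] for e in nodes if e != ref)

# Two events a and b, each within 10 time units after reference z,
# with b required to follow a by 1..3 units.
nodes = ["z", "a", "b"]
constraints = [("z", "a", 10), ("a", "z", 0),
               ("z", "b", 10), ("b", "z", 0),
               ("a", "b", 3), ("b", "a", -1)]
print(naive_flexibility(nodes, constraints))  # 18: a and b double-count their shared slack
```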

Questions arose: How do we define what’s “good enough”?  When is flexibility more relevant?  What are the critical moments?  This led to a solution that involved moving integrals, but that ended up being too complex.

So… they introduced “sampling approximation” and “representative approximation” as ways to add robustness to the adaptive approximation of the next move.  In the case of sampling approximation, as it comes to the first edge it takes a sample, which propagates changes to the constraints.  Success or failure (as in, whether there was a collision or either robot failed to reach its destination) is used to train for the next time around.

Brief Planning Oriented Poster Previews:

NEXT:

Resolving Over-Constrained Probabilistic Temporal Problems through Chance Constraint Relaxation.  Involves a bus schedule problem wherein, for example, I am leaving the office for home at 6pm.  Complications in planning involve determining the probability of a bus arriving on time, etc.  Found that relaxing some of the constraints yielded better solutions.

NEXT:

“tBurton”: A Divide and Conquer Temporal Planner.  Involves factoring the planning problem into pieces, planning using heuristic forward search, merging child plans by de-conflicting sub-goals, etc.  It ends up performing better than state-of-the-art heuristic planners.

NEXT:

Take a general maze plan.  Drop an agent with a camera and imprecise actuators into the maze.  The goal is to send a policy to the agent such that the probability of the bot getting out of the maze is 1.  Results showed that positive, negative and approximated costs all play into good results.

NEXT:

SMT-based nonlinear PDDL+ planning.  I couldn’t tell what they accomplished other than munging the translation between known methods.

NEXT:

Crowdsourcing action model for planning (no-show)

NEXT:

This paper involved dealing with a large number of sensor resources and how to allocate them, e.g. in sensor systems like multi-camera systems.  The problem becomes untenable at scale, but they introduced a “greedy” BPVI as a way to solve it.

NEXT:

Transition constraints for parallel planning.  Investigated various different models.  Created a new encoding that encodes the constraints better, and made use of Minion to come up with better plans using the new constraints.

NEXT:

This involved planning with multiple objectives, specifically semi-autonomous driving.  Initially you order the objectives sequentially, then introduce “slack” deviation from optimal.  The references change as the driver slips from “attentive” to “tired”.  The data used came from OpenStreetMap.

NEXT:

Discretization of temporal planning.  Involved developing conditions for breaking timeline into discrete segments to improve scalability of planning.

NEXT:

Chance-constrained scheduling.  Used a new technique to assess the temporal risk of a plan based on conflicts that arise.  Circular ordering of processing: temporal plan -> allocate risk -> conflict -> check controllability -> check policy -> extract conflict -> (back to start).

NEXT:

Hybrid models from guarded transitions.  JMLS is insufficient; PHA is better, but doesn’t learn as well.  Uses classifiers from machine learning to help train.  Tested on target tracking, it showed significant improvements over existing systems.  Applicable to activity recognition (ongoing research).

NEXT:

Multi-agent epistemic planning as classical planning.  Reasoning about the states of other agents from the perspective of a single agent; uses modal logic axioms, states in the classical method, etc.  Addresses plan synthesis based on perception of the state of other agents, and employs conditional mutual awareness.  Applied to a classical gossip problem.  Focuses on belief instead of knowledge.

“Probabilistic Attributed Hashing”

Applied to binary similarity, e.g. representations of images, music genres, paper authors, etc.  Involves large-scale approximate nearest neighbor search.  Preserves similarity using Hamming distance.
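
As a reminder to myself of why Hamming distance matters here, a tiny sketch of the retrieval step only (random codes, nothing to do with the paper’s actual model): once items are reduced to short binary codes, ranking approximate nearest neighbors is just cheap bit comparisons.

```python
import numpy as np

# Sketch of retrieval with binary codes (illustrative only): rank database
# items by Hamming distance to a query code.
rng = np.random.default_rng(0)

# Hypothetical database: 1000 items, each hashed to a 32-bit binary code.
db_codes = rng.integers(0, 2, size=(1000, 32), dtype=np.uint8)
query = rng.integers(0, 2, size=32, dtype=np.uint8)

# Hamming distance = number of differing bits.
dists = np.count_nonzero(db_codes != query, axis=1)
top10 = np.argsort(dists)[:10]   # indices of the 10 closest codes
print(top10, dists[top10])
```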

Purports, by way of attributes and content features, to capture human cognition more accurately.  Introduced generative model to deal with scalability problem.

To capture the heterogeneity of attributes and content features, they applied hashing with separation and a Gaussian distribution.  The algorithm was shown, but I would have to look at it in more detail; the presenter kinda zoomed through it.

Tested against the DBLP authors database and NUS-WIDE images with tags from Flickr.  Showed a 10% performance improvement with this research, the goal, it seems, being to build a better hash function.

“The Boundary Forest Algorithm for Online Supervised and Unsupervised Learning”

The problem is to approximate complicated functions.  Applicable to many domains, including robotic control, vision, etc.

The online algorithm performs effectively and efficiently.  It uses boundary trees (as applied to the condensed nearest neighbor problem) and incrementally builds a graphical structure.  Sort of Monte Carlo style: it samples points and classifies incrementally as new points are added.  An animation of the process showed the algorithm eventually (after 10k sample points) capturing the pattern in an original image.  Better performance came from forming a “forest” of these boundary trees: faster and more accurate.
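
A rough sketch of a single boundary tree as I understood it from the talk (simplified; see the paper for the exact rules, e.g. limits on children per node): a query descends greedily toward the locally closest stored point, and training only stores a new point when the retrieved one’s label disagrees, so stored points pile up near class boundaries.

```python
import math

# Simplified boundary tree sketch (my reading of the talk, not the authors' code).
class Node:
    def __init__(self, point, label):
        self.point, self.label, self.children = point, label, []

class BoundaryTree:
    def __init__(self, point, label):
        self.root = Node(point, label)

    def query(self, x):
        """Greedily descend toward the stored point closest to x."""
        node = self.root
        while True:
            best = min(node.children + [node],
                       key=lambda n: math.dist(n.point, x))
            if best is node:
                return node
            node = best

    def train(self, x, label):
        """Add x as a child of its nearest node only if that node mispredicts it."""
        nearest = self.query(x)
        if nearest.label != label:
            nearest.children.append(Node(x, label))

# Toy usage: learn the sign of x0 + x1, one point at a time.
tree = BoundaryTree((0.0, 0.0), 0)
for x in [(1.0, 2.0), (-1.0, -2.0), (2.0, 1.0), (-2.0, -0.5)]:
    tree.train(x, int(x[0] + x[1] > 0))
print(tree.query((1.5, 1.5)).label)  # expected: 1
```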

“Lazier Than Lazy Greedy”

Big data searching.  The goal is to select a small, representative subset out of a large data set.  The typical objective here is subject to diminishing returns (submodularity).

The greedy algorithm scores elements as it passes through the dataset, picks the one with the best score, and adds it to the solution set, then repeats the pass.  However, the greedy algorithm can be untenable performance-wise.

The lazy greedy algorithm is an alternative.  It does a first pass, then sorts according to the last scores, forming a high-score subset; subsequent passes don’t go through the whole set.

The stochastic-greedy algorithm only scans a random subset on each round.  This can reduce the cost to O(n log 1/ε) evaluations vs the O(nk) of the original greedy algorithm.
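
To keep the variants straight, a toy sketch using a simple coverage objective (my example, not the paper’s benchmarks): classic greedy scores every remaining candidate each round, for O(nk) evaluations overall, while stochastic-greedy only scores a random sample of roughly (n/k) log(1/ε) candidates per round.

```python
import math
import random

# Toy objective: how many elements of a universe the chosen sets cover.
# (Coverage is monotone submodular, so the greedy guarantees apply.)
def coverage(universe_sets, selected):
    covered = set()
    for i in selected:
        covered |= universe_sets[i]
    return len(covered)

def greedy(universe_sets, k):
    """Classic greedy: scan every remaining candidate each round (O(n*k) evals)."""
    chosen, remaining = [], set(range(len(universe_sets)))
    for _ in range(k):
        best = max(remaining, key=lambda i: coverage(universe_sets, chosen + [i]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

def stochastic_greedy(universe_sets, k, eps=0.1):
    """Stochastic-greedy: each round scores only a random sample of size
    about (n/k) * log(1/eps), so roughly O(n log 1/eps) evaluations total."""
    n = len(universe_sets)
    sample_size = max(1, int((n / k) * math.log(1 / eps)))
    chosen, remaining = [], set(range(n))
    for _ in range(k):
        sample = random.sample(sorted(remaining), min(sample_size, len(remaining)))
        best = max(sample, key=lambda i: coverage(universe_sets, chosen + [i]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy data: 50 random 30-element subsets of a 200-element universe; pick 5.
sets_ = [set(random.sample(range(200), 30)) for _ in range(50)]
print(coverage(sets_, greedy(sets_, 5)),
      coverage(sets_, stochastic_greedy(sets_, 5)))
```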

They experimented and compared against a variety of algorithms, including random selection, applied to non-parametric learning on massive data sets.  Stochastic-greedy had the best utility (accuracy) to cost ratio.  Tested successfully on a set of 10,000 images with 3072 attributes, and on the “Battle of Water Sensor Networks” challenge.

“Transfer Feature Representation via Multiple Kernel Learning”

Cross-domain feature learning…  This one was relatively gibberish-filled.  The presenter zoomed very quickly through many pages of notation and equations without showing much relatable context, accompanied by heavily accented, terse chatter.  I kinda picked up on some distance and cost functions, but geez.  Sorry, this one will probably require just reading the paper.

Tested out on face data sets (the slide was shown for like 1 second) to classify faces, as far as I can tell.  Egads.  Indecipherable.

Presenter suggested at the end to just contact the main researcher for any questions.

Brief Machine Learning Oriented Poster Previews:

NEXT:

Stacked denoising auto-encoding.  Involves creating a relational probabilistic graph model.  Applied to tagging movies and predicting award winners.

NEXT:

Sample-targeted clinical trial adaptation.  Applied to determining which people are best to select for a clinical trial.  Uses auxiliary data to stratify the candidates and remove the least likely ones.

NEXT:

Matrix completion for multi-label image classification.  Involves applying a “low-rank” classification method, in contrast to classical multi-view subspace methods that are not suitable for multi-label classification.

NEXT:

Multi-objective reinforcement learning.  Involves learning a continuous approximation of the Pareto frontier of an underlying Markov process.  Also involves building a parameterized manifold in policy space that maps the frontier, then minimizing the loss from the real Pareto frontier.

NEXT:

Learning half-spaces using query synthesis.  Assume a deterministic, noise-free half-space.  You can only conduct membership queries, which is expensive, and you would like to be able to estimate the half-space efficiently.  The solution involves creating an elliptical generation process that ends up being polynomial (vs exponential).

NEXT:

Reinforcement learning via demonstrations and relaxed reward demands.  Involves a learning cycle:  request -> policy -> teacher -> action -> world -> observations -> state -> learning -> model -> planning -> back to policy.

NEXT:

Probabilistic tensor decomposition, leveraging features and networks.  Involves probabilistic methods and side-information to inform the tensors.