Abstracts
Geoff Gordon, Carnegie Mellon University
Title: Learning Compressed Information Spaces from Execution Traces
Dieter Fox, University of Washington
Hierarchical Graphical Models for State Estimation
Over the last decade, the robotics community has developed highly efficient and
robust solutions to state estimation problems such as robot localization, people
tracking, and map building. However, robots need to be able to reason about far
more complex concepts including different types of objects and places, and
humans performing various tasks. To support such reasoning, state estimation
has to extract high-level information from raw sensor data. I will describe
work using hierarchical graphical models to extract such information from raw
sensor data. Specifically, I will describe models that bridge the gap between
streams of GPS measurements and high-level information about a person’s
activities, such as her mode of transportation, her current goal, and her
significant places.
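As a toy illustration of the lowest layer of such a hierarchy, a hidden Markov model can map a stream of GPS-derived speeds to transportation modes via Viterbi decoding. The modes, mean speeds, transition probability, and Gaussian emission model below are illustrative assumptions for this sketch, not the models from the talk:

```python
import math

# Illustrative two-mode model: all parameters here are assumptions.
MODES = ["walk", "drive"]
MEAN_SPEED = {"walk": 1.5, "drive": 12.0}  # meters/second, assumed
STAY = 0.9  # probability of keeping the current mode between samples

def loglik(mode, speed, sd=3.0):
    """Gaussian log-likelihood (up to a constant) of a speed reading."""
    return -((speed - MEAN_SPEED[mode]) ** 2) / (2 * sd * sd)

def viterbi(speeds):
    """Most likely mode sequence for a stream of GPS-derived speeds."""
    V = [{m: loglik(m, speeds[0]) for m in MODES}]
    back = []
    for s in speeds[1:]:
        row, ptr = {}, {}
        for m in MODES:
            def score(p):
                return V[-1][p] + math.log(STAY if p == m else 1 - STAY)
            best = max(MODES, key=score)
            row[m] = score(best) + loglik(m, s)
            ptr[m] = best
        V.append(row)
        back.append(ptr)
    mode = max(MODES, key=lambda m: V[-1][m])
    path = [mode]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

In the actual work, a mode layer like this sits below further layers that infer the person’s goals and significant places.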
Maxim Likhachev, University of Pennsylvania
Title: Beyond Assumptive Planning: Planning with Preferences on Missing Information
Abstract:
For many real-world problems, the state of the environment is only partially known at planning time. For example, robots often have to navigate partially known terrain, planes often have to be scheduled under changing weather conditions, and car route-finders often have to compute paths with only partial knowledge of traffic congestion. While general decision-theoretic planning that accounts for the uncertainty about the missing information is hard to scale to large problems, many such problems exhibit a special property: one can clearly identify beforehand the best (called clearly preferred) values for the variables that represent the unknowns in the environment. For example, in the robot navigation problem it is always preferred to find out that an initially unknown location is traversable rather than not; in the plane scheduling problem it is always preferred for the weather to remain good flying weather; and in the route-finding problem it is always preferred for the road of interest to be clear of traffic. It turns out that the existence of clear preferences on missing information makes it possible to solve these complex planning-under-uncertainty problems with a series of deterministic low-dimensional graph searches.
In this talk, we formally define the notion of clear preferences on missing information and explain how they can be used to perform efficient planning under uncertainty and what theoretical guarantees they provide. We will also briefly present an experimental analysis showing that running a series of fast low-dimensional searches is much faster than solving the full problem at once, since memory requirements are much lower and deterministic searches are orders of magnitude faster than probabilistic planning.
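The flavor of the approach can be sketched on a hypothetical grid-navigation problem: unknown cells are optimistically assumed to take their clearly preferred value (traversable), each plan is an ordinary deterministic A* search, and the robot replans whenever an observation contradicts the assumption. This is a minimal sketch of the idea, not the algorithm from the talk:

```python
import heapq
import itertools

FREE, BLOCKED, UNKNOWN = 0, 1, 2

def astar(grid, start, goal):
    """Deterministic 4-connected A*; UNKNOWN cells are assumed to take
    their clearly preferred value (traversable)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()  # tie-breaker so the heap never compares nodes
    frontier = [(h(start), next(tie), 0, start, None)]
    parents, g_cost = {}, {start: 0}
    while frontier:
        _, _, g, cur, par = heapq.heappop(frontier)
        if cur in parents:
            continue  # already expanded
        parents[cur] = par
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parents[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] != BLOCKED
                    and g + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g + 1
                heapq.heappush(frontier,
                               (g + 1 + h(nxt), next(tie), g + 1, nxt, cur))
    return None

def navigate(true_grid, belief, start, goal):
    """Execute the optimistic plan; when a cell turns out blocked,
    record it and run a fresh deterministic search."""
    pos, searches = start, 0
    while pos != goal:
        path = astar(belief, pos, goal)
        searches += 1
        if path is None:
            return None, searches  # unreachable under current knowledge
        for step in path[1:]:
            if true_grid[step[0]][step[1]] == BLOCKED:
                belief[step[0]][step[1]] = BLOCKED  # assumption refuted
                break  # replan from the current position
            pos = step
    return pos, searches
```

Each replanning episode is one cheap deterministic search over the low-dimensional state space, rather than one expensive search over belief states.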
Steven LaValle, University of Illinois
Distributed Combinatorial Filters
In this talk, I will review our recent work on minimalist filtering
techniques for basic tasks including target tracking, counting, and
pursuit-evasion. The key concept is to use the weakest sensors
possible to maintain critical, combinatorial information that enables
topological or weakly-geometric state information to be extracted.
The information is maintained in small information spaces that can be
iteratively and efficiently updated. After reviewing recent results,
I will propose distributed combinatorial filters to be developed
during the MURI project. Following this direction, information must
be integrated and updated from observations that are both spatially
and temporally distributed in irregular ways. We especially want to
determine the weakest sensors possible, with asynchronous and
distributed observations, that can nevertheless extract useful global
information about common tasks such as navigation, mapping, tracking,
counting, and pursuit.
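As a concrete (hypothetical) instance of such a minimalist filter, consider counting bodies in a region using only a directional beam sensor at its entrance: the information state is a single integer, updated combinatorially from crossing events, with no geometric state estimation at all. A minimal sketch:

```python
class CountingFilter:
    """Minimal combinatorial filter: the information state is just a
    count, updated from weak "in"/"out" crossing observations."""

    def __init__(self):
        self.count = 0  # bodies currently believed inside the region

    def update(self, obs):
        if obs == "in":
            self.count += 1
        elif obs == "out" and self.count > 0:
            self.count -= 1  # ignore a spurious "out" at zero
        return self.count
```

The information space here is the nonnegative integers, and each observation triggers a constant-time update.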
Tony Stentz, Carnegie Mellon University
Market-based Approaches to Decentralized Reasoning in Distributed Information Spaces
Nicholas Roy, MIT
Title: Belief Roadmaps and High Dimensional Belief Space Search
Abstract:
Online, forward-search techniques have demonstrated promising results for solving problems in partially observable environments. These techniques depend on the ability to efficiently search and evaluate the set of information states reachable from the current state.
However, enumerating or sampling action-observation sequences in high-dimensional information spaces in order to compute the reachable information states can be computationally demanding; coupled with the need to satisfy real-time constraints, this cost means that existing online solvers can only search to a limited depth.
I will describe recent results that allow us to generate policies directly from the distribution of the agent’s posterior information states.
When the underlying state distribution is Gaussian and the observation function is an exponential family distribution, we can calculate this distribution of information states without enumerating the possible observations. This property not only enables us to plan in
high-dimensional problems, but also allows us to search deeper by considering policies composed of multi-step action sequences. I will give two examples of this approach, the Belief Roadmap and Posterior Distribution Prediction, and show their application in UAV domains.
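The key property can be seen already in the one-dimensional linear-Gaussian case: the posterior variance after an action and a measurement depends on the noise models but not on the measurement value itself, so belief covariances can be propagated along roadmap edges deterministically. A scalar Kalman-filter sketch (all parameters illustrative):

```python
def propagate_variance(sigma2, edges, q=0.5, c=1.0, r=0.5):
    """Propagate a 1-D Gaussian belief variance along a sequence of
    roadmap edges: predict (motion noise q), then measurement update
    (observation model c, noise r). No measurement values appear."""
    for a in edges:                            # a = motion gain per edge
        sigma2 = a * sigma2 * a + q            # predict: uncertainty grows
        k = sigma2 * c / (c * sigma2 * c + r)  # Kalman gain
        sigma2 = (1 - k * c) * sigma2          # update: uncertainty shrinks
    return sigma2
```

Because the recursion never touches an observation, a planner can score candidate roadmap paths by their resulting uncertainty without branching on observation outcomes.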
Jianbo Shi, University of Pennsylvania
Title: Video, Language, and Contextual Reasoning: A StoryLine Model for Video Understanding
Abstract:
There has been significant interest in utilizing visual and contextual models for high-level semantic reasoning in video. There are many weakly annotated images and videos available on the internet, along with other rich sources of information such as dictionaries, which can be used to learn visual and contextual models for recognition.
The goal of this work is to analyze videos of human activities not only by recognizing actions (typically based on their appearances), but also by determining the story/plot of the video. The storyline of a video describes causal relationships between actions. Beyond recognition of individual actions, discovering causal relationships helps to better understand the semantic meaning of the activities. We present an approach to learn a visually grounded storyline model of videos directly from weakly labeled data. The storyline model is represented as an AND-OR graph, a structure that can compactly encode storyline variation across videos. The edges in the AND-OR graph correspond to causal relationships which are represented in terms of spatio-temporal constraints. We formulate an Integer Programming framework for action recognition and storyline extraction using the storyline model and visual groundings learned from training data.
This is joint work with Abhinav Gupta, Praveen Srinivasan, and Larry S. Davis.
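The AND-OR structure can be illustrated with a toy storyline (the actions and structure below are a hypothetical sports example, not the learned model): AND nodes order sub-storylines causally, OR nodes capture variation across videos, and expanding the graph enumerates the action sequences it can explain.

```python
def expand(node):
    """Enumerate the action sequences an AND-OR storyline graph can
    generate. Leaves are actions; ("AND", ...) orders its children
    causally; ("OR", ...) picks exactly one child."""
    if isinstance(node, str):
        return [[node]]  # leaf: a single action
    op, *children = node
    if op == "OR":
        return [seq for ch in children for seq in expand(ch)]
    seqs = [[]]  # AND: concatenate one sequence per child, in order
    for ch in children:
        seqs = [s + t for s in seqs for t in expand(ch)]
    return seqs

# Hypothetical storyline: a pitch, then either a miss or a hit
# followed by a run, then a catch.
story = ("AND", "pitch", ("OR", "miss", ("AND", "hit", "run")), "catch")
```

Recognition then amounts to deciding, under visual evidence, which of the enumerable storylines best explains the detected actions, which the authors formulate as an Integer Program.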