Stuart Russell -- Research areas
I am interested in building systems that can act intelligently in the real
world. To this end, I work (with various students, postdocs, and
collaborators) on a broad spectrum of topics in AI. These can
be grouped under the following headings:
- Formal foundations
Provably intelligent systems based on the mathematical framework of
bounded optimality. Topics include quasioptimal control of
search and composition of real-time systems.
- Learning probability models
Topics include learning static and dynamic Bayesian networks and
related models, as well as learning with prior knowledge. Applications include
speech recognition, computational biology, and human driver modelling.
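
To make the simplest case concrete, here is a minimal Python sketch (illustrative only, not code from any of our projects) of maximum-likelihood parameter estimation for a discrete Bayesian network with known structure and fully observed data; the toy Rain/WetGrass network and all numbers are hypothetical:

```python
from collections import Counter

# Toy network: Rain -> WetGrass. The structure is fixed; we estimate
# the conditional probability table P(WetGrass | Rain) by maximum
# likelihood, which for complete data reduces to counting.
data = [
    {"Rain": True,  "WetGrass": True},
    {"Rain": True,  "WetGrass": True},
    {"Rain": False, "WetGrass": False},
    {"Rain": False, "WetGrass": True},
]

def mle_cpt(data, child, parents):
    """Estimate P(child | parents) by relative frequency."""
    joint, marginal = Counter(), Counter()
    for row in data:
        pa = tuple(row[p] for p in parents)
        joint[(pa, row[child])] += 1
        marginal[pa] += 1
    return {key: n / marginal[key[0]] for key, n in joint.items()}

print(mle_cpt(data, "WetGrass", ["Rain"]))
# {((True,), True): 1.0, ((False,), False): 0.5, ((False,), True): 0.5}
```

Learning dynamic models or learning with hidden variables replaces this counting step with EM-style expected counts.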
- First-order probabilistic languages
FOPLs are languages that combine probability theory (for handling
uncertainty) with the expressive power of first-order logic.
Whereas Bayesian networks assume possible worlds defined by the values
of a fixed set of random variables, FOPLs assume possible worlds
defined by sets of objects and relations among those objects.
Our work includes BLOG, the first language capable of handling
unknown objects and identity uncertainty, both of which
are inherent in many real-world applications such as vision, language
understanding, information extraction, database merging and cleaning,
and tracking and data association.
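
The flavor of the open-universe semantics can be conveyed by a small generative sampler. The aircraft/blip domain below echoes the standard BLOG examples, but the Python code is only an illustrative sketch, not BLOG itself, and all distributions and parameters are invented:

```python
import math, random

def poisson(lam):
    """Knuth's method for sampling a Poisson-distributed count."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def sample_world():
    # The number of objects is itself a random variable, so different
    # possible worlds contain different sets of aircraft.
    aircraft = [random.uniform(0, 100) for _ in range(poisson(3.0))]
    blips = []
    for pos in aircraft:
        if random.random() < 0.9:                       # detection failures...
            blips.append(pos + random.gauss(0.0, 1.0))  # ...and sensor noise
    # Only the blips are observed: how many aircraft exist, and which
    # blip came from which aircraft (identity uncertainty), must be
    # inferred. This is the kind of model BLOG's semantics supports.
    return aircraft, blips

aircraft, blips = sample_world()
print(f"{len(aircraft)} aircraft produced {len(blips)} blips")
```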
- State estimation
State estimation (also known as filtering, tracking, belief update,
and situation assessment) is the problem of determining the current
state of the world, given a sequence of percepts. It is a core problem
for all intelligent systems.
We have investigated both probabilistic state estimation
and nondeterministic logical state estimation; one current project
looks at the game of Kriegspiel, a version of
chess in which one cannot see any of the opponent's pieces.
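
For the probabilistic case, the core computation is the recursive belief update: predict the new state distribution through the transition model, then reweight by the likelihood of the latest percept. Here is a minimal sketch over a two-state umbrella world of the kind used in textbook treatments; all numbers are illustrative:

```python
# Exact recursive filtering in a two-state world: predict through the
# transition model, then condition on the new percept and normalize.
def filter_step(belief, transition, likelihood):
    n = len(belief)
    predicted = [sum(transition[j][i] * belief[j] for j in range(n))
                 for i in range(n)]
    updated = [likelihood[i] * predicted[i] for i in range(n)]
    total = sum(updated)
    return [u / total for u in updated]

# Umbrella world: states are (Rain, NoRain); transition[j][i] is
# P(next state = i | current state = j). Numbers are illustrative.
T = [[0.7, 0.3],
     [0.3, 0.7]]
belief = [0.5, 0.5]
umbrella_likelihood = [0.9, 0.2]      # P(see umbrella | state)
for _ in range(2):                    # two consecutive umbrella sightings
    belief = filter_step(belief, T, umbrella_likelihood)
print(belief)                         # posterior P(Rain), P(NoRain)
```

The logical case, as in Kriegspiel, replaces the distribution with a set of possible states, updated by intersecting with the states consistent with each percept.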
- Hierarchical reinforcement learning
Intelligent behavior does not appear to consist of a completely
unstructured sequence of actions; instead, it seems to have
hierarchical structure in that each primitive action is part of some
higher-level activity, and so on up to very high-level activities such
as "get a PhD" and "earn enough money to retire to Bali".
Hierarchical reinforcement learning develops methods for learning
structured behaviors and for using that structure to learn faster and
to reuse the results of learning in new contexts.
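
One standard way to represent such structure is as temporally extended actions; the sketch below uses the "options" formalism of Sutton, Precup, and Barto for compactness, and all names in it are illustrative rather than drawn from our systems:

```python
from dataclasses import dataclass
from typing import Any, Callable

# A temporally extended action in the style of the options framework:
# an initiation set, an internal policy, and a termination condition.
# Higher-level behaviors choose among options instead of primitive
# actions, which is what gives behavior its hierarchical structure.
@dataclass
class Option:
    can_start: Callable[[Any], bool]    # initiation set I
    policy: Callable[[Any], Any]        # internal policy pi
    is_done: Callable[[Any], bool]      # termination condition beta

def run_option(option, state, env_step):
    """Run an option to termination; env_step is a hypothetical
    environment function returning (next_state, reward)."""
    total = 0.0
    while not option.is_done(state):
        state, reward = env_step(state, option.policy(state))
        total += reward
    return state, total
```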
- Intelligent agent architectures
This area brings together all of the preceding topics in the design of
complete intelligent systems. We also examine general structural properties of
intelligent agents, including the connection between functional
decomposition of agents and additive decomposition of reward functions.
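
For example, if the reward function is a sum R = R_1 + ... + R_n, each sub-agent can learn a value function for its own component and a simple arbitrator can act greedily on the sum, in the spirit of Q-decomposition. The driving-domain sub-agents and numbers below are invented for illustration:

```python
# Sketch of additive reward decomposition: each sub-agent maintains a
# Q-function for its own reward component; the arbitrator chooses the
# action maximizing the summed Q-values.
def arbitrate(state, actions, sub_q_functions):
    return max(actions, key=lambda a: sum(q(state, a) for q in sub_q_functions))

# Hypothetical sub-agents for a driving task: progress vs. safety.
def q_progress(state, action):
    return {"accelerate": 1.0, "brake": 0.0}[action]

def q_safety(state, action):
    if state == "obstacle_ahead":
        return {"accelerate": -2.0, "brake": 0.5}[action]
    return {"accelerate": 0.0, "brake": -0.1}[action]

print(arbitrate("obstacle_ahead", ["accelerate", "brake"],
                [q_progress, q_safety]))   # -> "brake"
```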
Some older projects (PNPACK, BATmobile, RoadWatch) are described here.