K. Waugh, B. D. Ziebart, and J. A. Bagnell. Computational Rationalization: The Inverse Equilibrium Problem. Proceedings of the International Conference on Machine Learning (ICML), 2011. [pdf]
Modeling the behavior of imperfect agents from a small number of observations is a difficult but important task. In the single-agent decision-theoretic setting, inverse optimal control has been successfully employed. It assumes that observed behavior is an approximately optimal solution to an unknown decision problem, and it learns the problem's parameters that best explain the examples. The inferred parameters can be used to accurately predict future behavior, describe the agent's preferences, or imitate the agent's behavior in similar unobserved situations. In this work, we consider similar tasks in competitive and cooperative multi-agent domains. Here, unlike in single-agent settings, a player cannot myopically maximize its reward: it must speculate on how the other agents may act to influence the game's outcome. Employing the game-theoretic notion of regret and the principle of maximum entropy, we introduce a technique for predicting and generalizing behavior, as well as recovering a reward function, in these domains.
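The abstract's central quantity, regret, measures how much a player could gain by deviating from observed joint behavior. As a rough illustration (not the paper's algorithm), the sketch below computes each player's external regret for a joint action distribution in a hypothetical two-player matrix game; the payoff matrices and the `regrets` helper are inventions for this example. A joint distribution with non-positive regrets for all players is a coarse correlated equilibrium, the kind of rationality assumption such an approach can relax to "approximately rational" by bounding regret instead of requiring it to be zero.

```python
import numpy as np

# Hypothetical two-player game (a prisoner's-dilemma-like payoff structure).
# Rows index player 1's actions, columns index player 2's actions.
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])   # player 1's rewards
B = np.array([[3.0, 5.0],
              [0.0, 1.0]])   # player 2's rewards

def regrets(sigma, A, B):
    """External regret of each player under joint distribution sigma.

    Regret_i = (best fixed deviation's expected reward) - (expected reward
    under sigma). Both regrets <= 0 means sigma is a coarse correlated
    equilibrium of the game (A, B).
    """
    ev1 = np.sum(sigma * A)      # player 1's expected reward under sigma
    ev2 = np.sum(sigma * B)      # player 2's expected reward under sigma
    p2 = sigma.sum(axis=0)       # marginal over player 2's actions
    p1 = sigma.sum(axis=1)       # marginal over player 1's actions
    r1 = np.max(A @ p2) - ev1    # best unilateral deviation for player 1
    r2 = np.max(p1 @ B) - ev2    # best unilateral deviation for player 2
    return r1, r2

# Uniform play leaves both players with positive regret...
uniform = np.full((2, 2), 0.25)
print(regrets(uniform, A, B))    # both regrets are 0.75 here

# ...while all mass on the (defect, defect) outcome has zero regret.
defect = np.zeros((2, 2))
defect[1, 1] = 1.0
print(regrets(defect, A, B))     # both regrets are 0.0
```

Among all distributions satisfying a regret bound, a maximum-entropy criterion would then select the least-committal one, which is the role the principle of maximum entropy plays in the abstract above.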