Any system with sensors finds itself living in an information space (called I-space for short), whether it wants to or not. What does this mean and why does this happen? Consider a complicated scenario such as a team of robots operating in the real world. What is the appropriate notion of state?
In computer science, state is usually internal, referring to the discrete modes of a computer. In physics and control theory, state usually models the external physical world. We often hear that the most important distinction between these two spaces is discrete versus continuous; however, the far more important distinction is internal versus external. For robots deployed in a complicated scenario, it is therefore important to maintain this distinction. There exists an external, physical state space in which a state captures the configuration, phase, or other changeable properties of all physical bodies of interest in the world. In contrast, there is an internal information space, in which each information state (I-state) captures the data received from sensors and given to actuators. The external state space should represent everything that is needed to define the sensors, control laws, and task, and ideally nothing more. If the task is to track several vehicles in a city, then the external state should define the city map and all vehicle configurations.
For a complex domain with many agents, such a state space is unwieldy. A naive representation of the internal information space is also unwieldy; in its raw form it is referred to as the history I-space, in which each element is the full history of every sensor reading and control (or action) given up to the current time. Note that the history I-state is always trivially known, whereas reconstructing the precise physical state may be a difficult or impossible problem. Most of our research efforts involve mapping the gigantic I-spaces down to reduced I-spaces on which computations can be efficiently and reliably performed while ensuring that tasks are achieved.
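To make the notion concrete, the following is a minimal sketch of a history I-state as a data structure: the complete record of actions applied and observations received up to the current time. The class and field names are illustrative assumptions, not drawn from the text; the point is only that this representation is trivially maintained yet grows without bound, which is what motivates mapping to reduced I-spaces.

```python
class HistoryIState:
    """The raw history I-state: all actions and observations so far."""

    def __init__(self):
        self.actions = []        # u_1, u_2, ..., u_k
        self.observations = []   # y_1, y_2, ..., y_k

    def record(self, u, y):
        """Append one stage: the action applied and the sensor reading."""
        self.actions.append(u)
        self.observations.append(y)

    def __len__(self):
        return len(self.actions)


eta = HistoryIState()
for k in range(100):
    # Placeholder action/observation values for illustration.
    eta.record(u="forward", y=k % 7)

# The history I-state is always known exactly, but its size grows
# linearly with time; a reduced I-space would replace this record
# with a fixed-size statistic.
```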
Recall the celebrated Kalman filter. This Bayesian approach transforms history I-states into probability density functions, and Kalman showed that for linear-Gaussian (LG) systems these densities remain Gaussian. The filter can therefore "live" in a reduced I-space in which each I-state encodes only a mean and covariance.
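The reduction can be sketched for a scalar linear-Gaussian system, where the reduced I-state is just a pair (mean, variance). The system matrices and noise variances below are illustrative assumptions chosen for the example, not values from the text.

```python
def kalman_step(mean, var, u, z, a=1.0, b=1.0, c=1.0, q=0.04, r=0.25):
    """One predict/update cycle of a scalar Kalman filter.

    Assumed model: x' = a*x + b*u + w, with process noise variance q,
    and observation z = c*x + v, with measurement noise variance r.
    """
    # Predict: propagate the Gaussian I-state through the dynamics.
    mean_pred = a * mean + b * u
    var_pred = a * a * var + q
    # Update: fuse the observation z via the Kalman gain.
    gain = var_pred * c / (c * c * var_pred + r)
    mean_new = mean_pred + gain * (z - c * mean_pred)
    var_new = (1.0 - gain * c) * var_pred
    return mean_new, var_new


# Prior Gaussian I-state, then three stages of unit actions with
# noisy position readings.
mean, var = 0.0, 1.0
for u, z in [(1.0, 1.1), (1.0, 2.0), (1.0, 2.9)]:
    mean, var = kalman_step(mean, var, u, z)
```

However long the history grows, the internal state carried between stages is only the pair (mean, var): a fixed-size element of the reduced I-space.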