Humans and animals rely on mental simulations of real-world objects to help them predict the consequences of their actions and generate accurate motor commands in a wide range of situations. Such mental simulations are commonly referred to as internal models.
For instance, imagine lifting a juice box from your breakfast table. When your internal model of the box matches the actual object (for instance, when its contents are visible), you know the correct way to interact with it and apply the appropriate force. However, when there is a mismatch between the juice box and your expectation, you immediately notice the disagreement between the world and your internal model. This happens, for instance, when you lift an empty box you assumed to be full: your predictions are wrong, and the lifting motion does not go according to plan.
Internal models are commonly used in robots and computers to help them plan their movements, respond rapidly to new situations and predict future events. Traditionally, these internal models are designed and implemented by human engineers. Hand-designed internal models are of limited use in dynamic environments, however: changes to an agent or its surroundings may invalidate the pre-specified model(s). Hand-designed models also become extremely complex and difficult to specify as agents and their environments grow more complex. We therefore study techniques for generating internal models automatically, for example using machine learning and evolutionary algorithms.
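As a minimal sketch of what learning an internal model automatically can look like, the toy example below fits a forward model of a 1D point mass from interaction data instead of hand-specifying the dynamics. The task, the dynamics and all names are illustrative assumptions, not code from our projects.

```python
import numpy as np

# Hypothetical toy task: learn a forward model (internal model) of a
# 1D point mass from observed transitions, rather than hand-designing it.
# True (hidden) dynamics: next_pos = pos + vel*dt, next_vel = vel + (force/mass)*dt
dt, mass = 0.1, 2.0
rng = np.random.default_rng(0)

# Collect transitions: state = [position, velocity], action = applied force
states = rng.uniform(-1, 1, size=(500, 2))
actions = rng.uniform(-1, 1, size=(500, 1))
next_states = np.column_stack([
    states[:, 0] + states[:, 1] * dt,
    states[:, 1] + (actions[:, 0] / mass) * dt,
])

# Fit a linear forward model next_state ~ [state, action] @ W by least squares
X = np.hstack([states, actions])
W, *_ = np.linalg.lstsq(X, next_states, rcond=None)

def predict(state, action):
    """Learned internal model: predicts the next state."""
    return np.concatenate([state, action]) @ W

s, a = np.array([0.5, -0.2]), np.array([1.0])
print(predict(s, a))  # close to the true next state [0.48, -0.15]
```

A robot with such a learned model can re-fit it from fresh data when its body or environment changes, which is exactly where hand-designed models break down.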
Multiple Internal Models
Humans and animals display a remarkable ability to maintain and utilize multiple internal models, allowing us to plan and act appropriately in a wide variety of scenarios. For instance, different models of different people allow us to adapt our social interactions to different communication partners, and because I have separate internal models of apples and tennis balls, I know I can eat the former and bounce the latter off the wall. Think of the stunning variety of real-world objects you can accurately imagine and simulate mentally. Remarkably, you can do so with very little interference: even though tennis balls and oranges look quite similar, you would never mix up how to use them.
It has been suggested that our ability to maintain a large variety of internal models without interference is facilitated by the modular organization of our brain. Computational models inspired by this organization have shown the ability to generate multiple internal models by separating knowledge into several modules, and enforcing a competition between those modules during training. We are exploring whether such modularly separated internal models can self-organize, by applying recent insights from research on evolving neural networks.
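The competition between modules described above can be sketched in a few lines. The toy example below, loosely in the spirit of modular architectures such as MOSAIC, lets two simple modules compete to explain lifting a full versus an empty box; the responsibility weighting, parameters and task are illustrative assumptions, not our actual system.

```python
import numpy as np

# Hypothetical sketch: two internal-model modules compete to predict
# the force needed to lift a box in two hidden contexts (full vs empty).
rng = np.random.default_rng(1)
masses = [1.0, 0.2]              # hidden contexts: full box, empty box

# Each module is a single learnable parameter: its estimate of the mass.
# Slightly different starts break the symmetry between the modules.
modules = np.array([0.5, 0.7])
sigma, lr = 0.1, 0.5

for step in range(2000):
    m = masses[rng.integers(2)]      # environment picks a context
    accel = rng.uniform(0.5, 1.5)    # commanded acceleration
    force = m * accel                # observed outcome (F = m * a)

    preds = modules * accel          # each module's prediction
    errs = force - preds

    # Soft competition: responsibilities favor the module with lower error
    resp = np.exp(-errs**2 / (2 * sigma**2))
    resp /= resp.sum()

    # Responsibility-weighted updates: the better-predicting module
    # learns more from this experience, so the modules specialize
    modules += lr * resp * errs * accel

print(np.sort(modules))  # modules specialize toward ~[0.2, 1.0]
```

Because each module is only updated in proportion to how well it already explains the current situation, each one ends up capturing a single context, which is the interference-free separation discussed above.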
With regard to our application areas, robotics and interactive music systems, we envision several situations where multiple internal models will be beneficial. In robotics, multiple models are useful when interacting with a complex environment containing many different human users, objects and other robots. A robot may even benefit from having multiple models of itself: for instance, a robot that can reconfigure its morphology can reach different areas depending on how it is configured. In interactive music, multiple internal models can aid in adapting a digital instrument to different users. Such adaptation could range from adjusting the instrument's complexity to match the user's degree of competence, to letting users play with an ensemble consisting of models (simulations) of other users.