Almost all languages in the world have a way to formulate commands. Commands specify actions that the body should undertake (such as “stand up”), possibly involving other objects in the scene (such as “pick up the red block”). Action language involves various competences, in particular (i) the ability to perform an action and to recognize which action has been performed by others (the so-called mirror problem), and (ii) the ability to identify which objects are to participate in the action (e.g. “the red block” in “pick up the red block”) and to understand what role each object plays, for example whether it is the agent or undergoer of the action, or the patient or target (as in “put the red block on top of the green one”). This chapter describes experiments exploring how these competences can originate, be carried out, and be acquired by real robots, using evolutionary language games and a whole-systems approach.
Basic postures such as sit, stand, and lie are ubiquitous in human interaction. In order to build robots that aid and support humans in their daily life, we need to understand how posture categories can be learned and recognized. This paper presents an unsupervised learning approach to posture recognition for a biped humanoid robot. The approach is based on Slow Feature Analysis (SFA), a biologically inspired algorithm for extracting slowly changing signals from signals varying on a fast time scale. Two experiments are carried out: first, we consider the problem of recognizing static postures in a multimodal sensory stream that consists of visual and proprioceptive stimuli. Second, we show how to extract a low-dimensional representation of the sensory state space which is suitable for posture recognition in a more complex setting. We point out that the good performance of SFA on this task can be attributed to the fact that SFA computes manifolds which are used in robotics to model invariants in motion and behavior. Based on this insight, we also propose a method for using SFA components for guided exploration of the state space.
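To illustrate the core idea behind SFA as described above, the following is a minimal sketch of *linear* SFA in NumPy (the function name and test signals are illustrative, not taken from the paper): the input is whitened, and the projection directions whose outputs have the smallest temporal-derivative variance are the slowest features.

```python
import numpy as np

def slow_feature_analysis(x, n_features=1):
    """Linear SFA: find projections of x whose outputs vary most slowly.

    x          : array of shape (T, D), a fast-varying multivariate signal
    n_features : number of slow output signals to return, shape (T, n_features)
    """
    # 1. Center the data and whiten it to unit covariance.
    x = x - x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    z = x @ (eigvec / np.sqrt(eigval))
    # 2. Covariance of the temporal derivative (finite differences).
    dz = np.diff(z, axis=0)
    dcov = np.cov(dz, rowvar=False)
    # 3. Directions with the smallest derivative variance change slowest;
    #    eigh returns eigenvalues in ascending order.
    _, deigvec = np.linalg.eigh(dcov)
    return z @ deigvec[:, :n_features]
```

Given a linear mixture of a slow and a fast sine wave, the slowest extracted component closely recovers the slow source, which is the property the posture-recognition experiments exploit on real sensor streams.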
This chapter introduces the modular humanoid robot Myon, covering its mechatronic design, embedded low-level software, distributed processing architecture, and the complementary experimental environment. The Myon humanoid is the descendant of various robotic hardware platforms built over the years, and therefore combines the latest research results with the accumulated expertise of how a robot has to be built for experiments on embodiment and language evolution. In contrast to many other platforms, the Myon humanoid can be used as a whole or in parts. Both the underlying architecture and the supportive application software allow for ad hoc changes in the experimental setup.
This paper presents a software system that integrates different computational paradigms to solve cognitive tasks at different levels. The system has been employed to empower research on very different platforms, ranging from simple two-wheeled structures with only a few cheap sensors to complex two-legged humanoid robots with many actuators, degrees of freedom, and sensors. It is flexible and adjustable enough to be used in part or as a whole, to target different research domains
and questions, including Evolutionary Robotics, RoboCup, and Artificial Language Evolution on Autonomous Robots (ALEAR, an EU-funded cognitive systems project). In contrast to many other frameworks, the system is designed so that researchers can quickly adapt it to different problems and platforms, while allowing maximum reuse of components and abstractions, separation of concerns, and extensibility.