Recent years have been marked by the development of robotic pets and companions such as small animal-like robots or humanoids. People interact with them using natural human social cues, in particular emotional expressions. It is crucial that robots can detect the emotional information contained in speech using only prosodic features, since this is often the only information they can measure. We present here the first large-scale experiment in which a large feature space and a large machine learning algorithm space are searched concurrently. We describe new features which prove to be much more efficient than the traditional features used in the literature.
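As an illustrative sketch only (not the paper's actual features, data, or algorithm pool), the concurrent search over a feature space and an algorithm space can be pictured as cross-validating every (feature subset, learning algorithm) pair and keeping the best one; all names and data here are hypothetical:

```python
# Hypothetical sketch: joint search over feature subsets and learning
# algorithms, scoring each pair by cross-validated accuracy.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Toy stand-in for prosodic features (e.g. pitch/energy statistics).
X = rng.normal(size=(200, 6))
y = (X[:, 0] + X[:, 2] > 0).astype(int)  # two emotion classes

# Candidate feature subsets (columns of X) -- purely illustrative names.
feature_subsets = {
    "pitch_only": [0, 1],
    "energy_only": [2, 3],
    "pitch+energy": [0, 1, 2, 3],
    "all": list(range(6)),
}
# Candidate learning algorithms.
algorithms = {
    "kNN": KNeighborsClassifier(),
    "tree": DecisionTreeClassifier(random_state=0),
    "naive_bayes": GaussianNB(),
}

# Evaluate every (subset, algorithm) pair and keep the best.
best = max(
    ((name, algo, cross_val_score(clf, X[:, cols], y, cv=5).mean())
     for name, cols in feature_subsets.items()
     for algo, clf in algorithms.items()),
    key=lambda t: t[2],
)
print(f"best pair: features={best[0]}, algorithm={best[1]}, acc={best[2]:.2f}")
```

In practice the two spaces interact: which features help depends on the learner, which is why the abstract stresses searching both concurrently rather than fixing one and tuning the other.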