The field of affective computing, and in particular the recognition of emotion from voice, has received steadily increasing attention in recent years. At the same time, speech-based emotion recognition still faces significant challenges.
This paper presents the Cogito submission to the second sub-challenge of the Interspeech Computational Paralinguistics Challenge (ComParE), which aims to recognize self-assessed affect from short audio clips of speech.