My PhD research focuses on the vocal expression of affective states in humans and, more broadly, across the animal kingdom. I investigate the links between affective states and the acoustic features of vocal signals, as well as how humans perceive these links, with particular attention to methodological aspects. My work is structured around two main research axes.
First, I explore the human ability to recognize affective states in the voices of humans, chimpanzees, and bonobos. While these two species are equally closely related to humans phylogenetically, they differ markedly in their vocal production. This contrast makes them an ideal model for studying the determinants of interspecific recognition of affective cues in vocalizations.
Second, I aim to develop a data analysis method for statistically comparing the performance of different automatic classification models. Largely inspired by resampling techniques, this method could help identify the acoustic features that are most critical for accurately classifying affective states.
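As an illustration of the resampling idea underlying this second axis, the sketch below shows a paired bootstrap comparison of two classifiers evaluated on the same test items. The function name, the data, and the specific procedure are illustrative assumptions, not the method developed in the thesis: test items are resampled with replacement to obtain a confidence interval on the accuracy difference between two models.

```python
import numpy as np

def bootstrap_accuracy_diff(correct_a, correct_b, n_boot=10_000, seed=0):
    """Paired bootstrap sketch (hypothetical helper): resample test
    items with replacement to estimate a 95% CI on the accuracy
    difference between two classifiers scored on the same items
    (1 = correct prediction, 0 = incorrect)."""
    rng = np.random.default_rng(seed)
    correct_a = np.asarray(correct_a)
    correct_b = np.asarray(correct_b)
    n = len(correct_a)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # one bootstrap resample of item indices
        diffs[i] = correct_a[idx].mean() - correct_b[idx].mean()
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return diffs.mean(), (lo, hi)

# Hypothetical per-item outcomes for two models on a shared test set
a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # model A: 8/10 correct
b = [1, 0, 1, 0, 0, 1, 0, 1, 0, 1]  # model B: 5/10 correct
mean_diff, ci = bootstrap_accuracy_diff(a, b)
```

Because the same resampled items are used for both models on each draw, the procedure accounts for the pairing of predictions, which narrows the interval compared with resampling each model's scores independently.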