
M/F PhD Student : Human nonverbal vocalisations: The missing link


Application Deadline : Monday, July 11, 2022

Make sure your candidate profile is filled in correctly before applying. The information in your profile complements that associated with each application. To increase your visibility on our Job Portal and allow recruiters to consult your candidate profile, you can upload your CV to our CV library in one click!

General information

Reference : UMR5596-RABMAK-007
Workplace : LYON 07
Date of publication : Monday, June 20, 2022
Scientific Responsible name : Dr. Katarzyna Pisanski (CNRS, DDL) and Prof. David Reby (ENES)
Type of Contract : PhD Student contract / Thesis offer
Contract Period : 36 months
Start date of the thesis : 1 September 2022
Proportion of work : Full time
Remuneration : €2,135.00 gross monthly

Description of the thesis topic

The doctoral thesis will be funded by a grant from the ANR (French National Research Agency). It is part of an interdisciplinary research project on the evolution and social functions of human nonverbal vocalisations, based in the ENES bioacoustics lab at the University of Saint-Etienne, and the DDL linguistics lab at the University of Lyon 2, France.
Human nonverbal vocalisations such as laughter, screams, roars, and grunts occupy a unique place in the human vocal repertoire (1–3), and yet have received relatively little attention in the human voice sciences compared to speech. Emerging and converging evidence suggests that the putative acoustic structures (forms) and social communicative outcomes (functions) of human nonverbal vocalisations are largely homologous to the calls of other mammals, from distress cries in infant mammals (4,5) to laughter in other great apes (6,7).
However, unlike other mammals, including non-human primates (8), we humans have a remarkable ability to easily and voluntarily modulate the acoustic structure of our vocalisations, and even to produce them in the complete absence of the endogenous or exogenous stimuli that would normally trigger their production in non-human mammals (8–10). The selection pressure for this unprecedented vocal dynamicity in humans, which almost certainly arose before speech, could have played a crucial role in the evolution of speech (9,10). In this view, human nonverbal vocalisations represent "living fossils" of the missing link between animal calls and human speech, and studying them can provide novel insight into how humans came to speak while other mammals did not.
With this aim, the project will systematically investigate human nonverbal vocalisations, from cries of pain to moans of pleasure, which still play a significant role in our everyday social interactions (3). The project is tentatively organised around three main work packages; these are broadly indicative and will be further co-developed with the successful PhD candidate:
1- Audio recordings of human nonverbal vocalisations produced in both spontaneous ('genuine') and volitional ('acted') contexts, collected from a combination of online resources, lab recordings, and fieldwork.
2- Comparative acoustic analyses and resynthesis. Using the software Praat (11) and soundgen (1,12), the student will measure key acoustic parameters of nonverbal vocalisations, such as fundamental and formant frequencies and nonlinear phenomena (a minimal parameter-extraction sketch follows this list). State-of-the-art acoustic resynthesis techniques will make it possible to experimentally test the causal effects of vocalisations on the biological and behavioural responses of human listeners in psychoacoustic playback experiments (see, e.g., (13–15) for recent studies using this approach).
3- Psychoacoustic playback experiments. To fully understand the communicative function of human vocalisations, we must study both the acoustic information they encapsulate and how this information affects listeners. In a series of perception experiments conducted in the lab and online (e.g., via Prolific), the student will present human listeners with natural vocalisations and/or their resynthesized variants. Listeners will judge vocalisations on a given trait or state of biological and social relevance, for instance assessing how much pain a person is experiencing (16) or how strong a person sounds (17). By mapping listeners' judgments onto the acoustic parameters of the vocal stimuli (and known vocaliser traits/states), we can test a number of specific predictions about the functional role of these parameters or vocalisation types.
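
To illustrate the kind of acoustic parameter extraction described in work package 2, here is a minimal sketch in Python using the praat-parselmouth interface to Praat. The file name, pitch range, and analysis settings are hypothetical assumptions for illustration only; the actual pipeline would be co-developed with the candidate and would likely also rely on Praat scripts and the soundgen R package cited above (1,11,12).

# Minimal sketch (hypothetical file and settings): extracting fo and formant
# values from a recorded vocalisation with praat-parselmouth, a Python
# interface to Praat.
import parselmouth

snd = parselmouth.Sound("scream_example.wav")  # hypothetical recording

# Fundamental frequency (fo); a high ceiling suits high-pitched vocalisations
pitch = snd.to_pitch(time_step=0.01, pitch_floor=60, pitch_ceiling=2000)
f0 = pitch.selected_array["frequency"]  # Hz; 0 where the frame is unvoiced

# Formant frequencies estimated with Burg's method
formants = snd.to_formant_burg(time_step=0.01, max_number_of_formants=5)
f1_mid = formants.get_value_at_time(1, snd.duration / 2)  # F1 at midpoint (Hz)

print(f"Mean voiced fo: {f0[f0 > 0].mean():.1f} Hz")
print(f"F1 at vocalisation midpoint: {f1_mid:.0f} Hz")

The point of the sketch is simply to show, concretely, what "measuring fundamental and formant frequencies" amounts to computationally; nonlinear phenomena would require additional, more specialised analyses.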
The 3-year PhD will begin September 1, 2022. The doctoral research will be presented at local and international conferences and at outreach events aimed at the general public, and disseminated through the publication of several research papers in top-tier international journals in the biological and behavioural sciences.

References
(We encourage interested candidates to visit the lab websites to learn more about the labs' research axes and other recent publications on human vocal behaviour.)

1. Pisanski, K., Bryant, G.A., Cornec, C., Anikin, A., and Reby, D. (2022). Form follows function in human nonverbal vocalisations. Ethol. Ecol. Evol.
2. Pisanski, K., and Bryant, G.A. (2019). The evolution of voice perception. In The Oxford Handbook of Voice Studies, N. S. Eidsheim and K. L. Meizel, eds. (Oxford University Press).
3. Anikin, A., Bååth, R., and Persson, T. (2018). Human non-linguistic vocal repertoire: Call types and their meaning. J. Nonverbal Behav. 42, 53–80.
4. Lingle, S., Wyman, M.T., Kotrba, R., Teichroeb, L.J., and Romanow, C.A. (2012). What makes a cry a cry? A review of infant distress vocalizations. Curr. Zool. 58, 698–726.
5. Koutseff, A., Reby, D., Martin, O., Levrero, F., Patural, H., and Mathevon, N. (2017). The acoustic space of pain: cries as indicators of distress recovering dynamics in pre-verbal infants. Bioacoustics 0, 1–13.
6. Bryant, G.A., and Aktipis, C.A. (2014). The animal nature of spontaneous human laughter. Evol. Hum. Behav. 35, 327–335.
7. Scott, S.K., Lavan, N., Chen, S., and McGettigan, C. (2014). The social life of laughter. Trends Cogn. Sci. 18, 618–620.
8. Ackermann, H., Hage, S.R., and Ziegler, W. (2014). Brain mechanisms of acoustic communication in humans and nonhuman primates: An evolutionary perspective. Behav. Brain Sci. 37, 529–546.
9. Pisanski, K., Cartei, V., McGettigan, C., Raine, J., and Reby, D. (2016). Voice modulation: A window into the origins of human vocal control? Trends Cogn. Sci. 20, 304–318.
10. Fitch, W.T. (2018). The biology and evolution of speech: A comparative analysis. Annu. Rev. Linguist. 4, 255–279.
11. Boersma, P., and Weenink, D. (2020). Praat: Doing phonetics by computer v 6.1.21.
12. Anikin, A. (2019). Soundgen: An open-source tool for synthesizing nonverbal vocalizations. Behav. Res. Methods 51, 778–792.
13. Anikin, A., Pisanski, K., Massenet, M., and Reby, D. (2021). Harsh is large: nonlinear vocal phenomena lower voice pitch and exaggerate body size. Proc. R. Soc. B Biol. Sci. 288.
14. Anikin, A., Pisanski, K., and Reby, D. (2022). Static and dynamic formant scaling conveys body size and aggression. R. Soc. Open Sci. 9, 211496.
15. Massenet, M., Anikin, A., Pisanski, K., Reynaud, K., Mathevon, N., and Reby, D. (under review). Nonlinear vocal phenomena affect human perceptions of distress, size and dominance in puppy whines. Proc. R. Soc. B Biol. Sci.
16. Raine, J., Pisanski, K., Simner, J., and Reby, D. (2018). Vocal communication of simulated pain. Bioacoustics, 1–23.
17. Raine, J., Pisanski, K., Oleszkiewicz, A., Simner, J., and Reby, D. (2018). Human listeners can accurately judge relative strength and height from aggressive roars and speech. iScience 4, 273–280.
18. Henrich, J., Heine, S.J., and Norenzayan, A. (2010). The weirdest people in the world? Behav. Brain Sci. 33, 61–83.

Work Context

The PhD position is funded by the ANR and will cover the PhD student's salary and key research expenses (e.g., laptop, recruitment costs, recording and playback equipment, travel expenses).
The project involves a partnership between two CNRS teams specialising in animal communication, linguistics, speech development, and anthropology:
1. DDL linguistics lab (Language Dynamics Laboratory, CNRS & University of Lyon 2). The DDL lab has expertise in languages around the world and in the neuro-cognitive mechanisms of vocal development, production, and perception in humans.
2. ENES bioacoustics lab (Sensory Neuroethology Lab, Lyon Neuroscience Research Centre, CNRS & University Jean Monnet, Saint Etienne). The ENES lab is a global leader in animal vocal communication across an extensive range of species, including humans.
The student will complete their PhD within these two partnering French labs and will have access to all critical resources and materials of both teams, such as a laptop, recording equipment, sound-attenuated recording chambers, and software licenses, as well as to the extensive expertise of the research teams. The student will be co-supervised by the principal investigators of the project, who are internationally recognised leaders in their field with more than 175 scientific articles between them:
Dr. Katarzyna (Kasia) Pisanski (CNRS permanent researcher @ DDL; ORCID 0000-0003-0992-2477; katarzyna.pisanski@cnrs.fr)
Prof. David Reby (Professor @ ENES; ORCID 0000-0001-9261-1711)
To further ensure its feasibility, the project will also involve external national and international collaborations (e.g., with UCLA and UCL).
The student will be enrolled in the SIS doctoral school (Sciences Ingénierie et Santé) at Jean Monnet University, Saint-Etienne, France.
The student will also be strongly encouraged to take part in the ENES Bioacoustics Winter School, an intensive course in acoustic communication held over the first two weeks of January 2023.

Constraints and risks

In the context of the work packages outlined above, the student will be responsible for co-designing experimental protocols, collecting and analysing data, and disseminating research results, notably through:

- Voice recording
- Acoustic analysis and resynthesis
- Organizing, storing and coding stimulus materials and data
- Human participant recruitment
- Designing experimental platforms to collect data in playback/rating experiments from human listeners
- Data processing and statistical analysis
- Writing up research results for publication in research journals and for the dissertation
- Research presentations (lab, national and international conferences)

The student will also contribute regularly to the joint activities of the DDL and ENES laboratories.

Additional Information

The position is open to any individual who has completed a master's degree by August 31, 2022. Preference may be given to those with specialised knowledge and skills in animal behaviour, acoustic communication, evolutionary biology, experimental psychology, neuroscience, cognitive science, speech/language sciences, or a related discipline involving the study of sound, cognition, or behaviour.

The candidate should have some foundation in bioacoustics and acoustic analysis; experience analysing nonverbal vocal parameters (e.g., fo, formants, nonlinear phenomena) would be particularly useful. The candidate should be competent in statistical analysis (e.g., linear mixed modelling) and have very good oral and written skills, especially in English. Knowledge of human or animal voice production and perception, and of animal behaviour, are additional assets. Seriousness and rigour in the conduct of experimental protocols and the capacity to work autonomously will be essential.
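
As a purely illustrative sketch of the statistical mapping described in work package 3, and of the linear mixed modelling mentioned above, the following Python example relates listener ratings to acoustic predictors with a random intercept per listener. The data file and column names (rating, mean_f0, formant_spacing, listener_id) are assumptions made up for this example, not part of the project specification.

# Sketch: mapping listener judgments onto acoustic parameters with a linear
# mixed model (random intercept per listener). All column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ratings.csv")  # one row per listener-by-stimulus judgment

model = smf.mixedlm(
    "rating ~ mean_f0 + formant_spacing",   # fixed effects: acoustic predictors
    data=df,
    groups=df["listener_id"],               # grouping factor: listener
)
result = model.fit()
print(result.summary())

An equivalent model could just as well be fitted in R (e.g., with lme4), which is common in this field; the choice of tools would be made with the candidate.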

Short-listed candidates will be contacted shortly for an interview (online or in-person) to be conducted between July 7 and July 12, 2022.

Applications must be submitted via the CNRS employment portal.
