sh.se Publications
1 - 23 of 23
  • 1.
    Bhatara, Anjali
    et al.
    CNRS, Paris, France / University of Paris Descartes, Paris, France.
    Laukka, Petri
    Södertörn University, School of Social Sciences, Psychology. Stockholm University.
    Boll-Avetisyan, Natalie
    University of Potsdam, Potsdam, Germany.
    Granjon, Lionel
    CNRS, Paris, France / University of Paris Descartes, Paris, France.
    Elfenbein, Hillary Anger
    Washington University, St Louis, USA.
    Bänziger, Tanja
    Mid Sweden University.
    Second Language Ability and Emotional Prosody Perception. 2016. In: PLOS ONE, E-ISSN 1932-6203, Vol. 11, no 6, article id e0156855. Article in journal (Refereed)
    Abstract [en]

    The present study examines the effect of language experience on vocal emotion perception in a second language. Native speakers of French with varying levels of self-reported English ability were asked to identify emotions from vocal expressions produced by American actors in a forced-choice task, and to rate their pleasantness, power, alertness and intensity on continuous scales. Stimuli included emotionally expressive English speech (emotional prosody) and non-linguistic vocalizations (affect bursts), and a baseline condition with Swiss-French pseudo-speech. Results revealed effects of English ability on the recognition of emotions in English speech but not in non-linguistic vocalizations. Specifically, higher English ability was associated with less accurate identification of positive emotions, but not with the interpretation of negative emotions. Moreover, higher English ability was associated with lower ratings of pleasantness and power, again only for emotional prosody. This suggests that second language skills may sometimes interfere with emotion recognition from speech prosody, particularly for positive emotions.

  • 2.
    Bhatara, Anjali
    et al.
    Université Paris Descartes, France.
    Laukka, Petri
    Stockholms universitet, Psykologiska institutionen.
    Levitin, Daniel J.
    McGill University, Canada.
    Expression of emotion in music and vocal communication. 2014. Collection (editor) (Other academic)
    Abstract [en]

    Two of the most important social skills in humans are the ability to determine the moods of those around us, and to use this to guide our behavior. To accomplish this, we make use of numerous cues. Among the most important are vocal cues from both speech and non-speech sounds. Music is also a reliable method for communicating emotion. It is often present in social situations and can serve to unify a group's mood for ceremonial purposes (funerals, weddings) or general social interactions. Scientists and philosophers have speculated on the origins of music and language, and the possible common bases of emotional expression through music, speech and other vocalizations. They have found increasing evidence of commonalities among them. However, the domains in which researchers investigate these topics do not always overlap or share a common language, so communication between disciplines has been limited. The aim of this Research Topic is to bring together research across multiple disciplines related to the production and perception of emotional cues in music, speech, and non-verbal vocalizations. This includes natural sounds produced by human and non-human primates as well as synthesized sounds. Research methodology includes survey, behavioral, and neuroimaging techniques investigating adults as well as developmental populations, including those with atypical development. Studies using laboratory tasks as well as studies in more naturalistic settings are included.

  • 3.
    Bhatara, Anjali
    et al.
    Sorbonne Paris Cité, Université Paris Descartes, Paris, France / Laboratoire Psychologie de la Perception, CNRS, UMR 8242, Paris, France.
    Laukka, Petri
    Stockholms universitet, Samhällsvetenskapliga fakulteten, Psykologiska institutionen.
    Levitin, Daniel J.
    Department of Psychology, McGill University, Montreal, QC, Canada.
    Expression of emotion in music and vocal communication: Introduction to the research topic. 2014. In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 5, article id 399. Article in journal (Other academic)
    Abstract [en]

    In social interactions, we must gauge the emotional state of others in order to behave appropriately. We rely heavily on auditory cues, specifically speech prosody, to do this. Music is also a complex auditory signal with the capacity to communicate emotion rapidly and effectively and often occurs in social situations or ceremonies as an emotional unifier.

    In sum, the main contribution of this Research Topic, along with highlighting the variety of research being done already, is to show the places of contact between the domains of music and vocal expression that occur at the level of emotional communication. In addition, we hope it will encourage future dialog among researchers interested in emotion in fields as diverse as computer science, linguistics, musicology, neuroscience, psychology, speech and hearing sciences, and sociology, who can each contribute knowledge necessary for studying this complex topic.

  • 4.
    Feingold, Daniel
    et al.
    Ariel University, Ariel, Israel / Sheba Medical Center, Tel Hashomer, Israel.
    Hasson-Ohayon, Ilanit
    Bar-Ilan University, Ramat-Gan, Israel.
    Laukka, Petri
    Södertörn University, School of Social Sciences, Psychology. Stockholm University.
    Vishne, Tali
    Bar-Ilan University, Ramat-Gan, Israel.
    Dembinsky, Yael
    Sourasky Medical Center, Tel-Aviv, Israel.
    Kravets, Shlomo
    Bar-Ilan University, Ramat-Gan, Israel.
    Emotion recognition deficits among persons with schizophrenia: Beyond stimulus complexity level and presentation modality. 2016. In: Psychiatry Research, ISSN 0165-1781, E-ISSN 1872-7123, Vol. 240, p. 60-65. Article in journal (Refereed)
    Abstract [en]

    Studies have shown that persons with schizophrenia have lower accuracy in emotion recognition compared to persons without schizophrenia. However, the impact of the complexity level of the stimuli or the modality of presentation has not been extensively addressed. Forty-three persons with a diagnosis of schizophrenia and 43 healthy controls, matched for age and gender, were administered tests assessing emotion recognition from stimuli with low and high levels of complexity presented via visual, auditory and semantic channels. For both groups, recognition rates were higher for high-complexity stimuli compared to low-complexity stimuli. Additionally, both groups obtained higher recognition rates for visual and semantic stimuli than for auditory stimuli, but persons with schizophrenia obtained lower accuracy than persons in the control group for all presentation modalities. Persons diagnosed with schizophrenia did not show a complexity-level-specific or modality-specific deficit compared to healthy controls. Results suggest that emotion recognition deficits in schizophrenia extend beyond stimulus complexity level and presentation modality, and reflect a global difficulty in cognitive functioning.

  • 5. Gold, Rinat
    et al.
    Butler, Pamela
    Revheim, Nadine
    Leitman, David I.
    Hansen, John A.
    Gur, Ruben C.
    Kantrowitz, Joshua T.
    Laukka, Petri
    Stockholms universitet, Psykologiska institutionen.
    Juslin, Patrik N.
    Silipo, Gail S.
    Javitt, Daniel C.
    Auditory Emotion Recognition Impairments in Schizophrenia: Relationship to Acoustic Features and Cognition. 2012. In: American Journal of Psychiatry, ISSN 0002-953X, E-ISSN 1535-7228, Vol. 169, no 4, p. 424-432. Article in journal (Refereed)
    Abstract [en]

    Objective: Schizophrenia is associated with deficits in the ability to perceive emotion based on tone of voice. The basis for this deficit remains unclear, however, and relevant assessment batteries remain limited. The authors evaluated performance in schizophrenia on a novel voice emotion recognition battery with well-characterized physical features, relative to impairments in more general emotional and cognitive functioning. Method: The authors studied a primary sample of 92 patients and 73 comparison subjects. Stimuli were characterized according to both intended emotion and acoustic features (e.g., pitch, intensity) that contributed to the emotional percept. Parallel measures of visual emotion-recognition, pitch perception, general cognition, and overall outcome were obtained. More limited measures were obtained in an independent replication sample of 36 patients, 31 age-matched comparison subjects, and 188 general comparison subjects. Results: Patients showed statistically significant large-effect-size deficits in voice emotion recognition (d=1.1) and were preferentially impaired in recognition of emotion based on pitch features but not intensity features. Emotion recognition deficits were significantly correlated with pitch perception impairments both across (r=0.56) and within (r=0.47) groups. Path analysis showed both sensory-specific and general cognitive contributions to auditory emotion recognition deficits in schizophrenia. Similar patterns of results were observed in the replication sample. Conclusions: The results demonstrate that patients with schizophrenia show a significant deficit in the ability to recognize emotion based on tone of voice and that this deficit is related to impairment in detecting the underlying acoustic features, such as change in pitch, required for auditory emotion recognition. This study provides tools for, and highlights the need for, greater attention to physical features of stimuli used in studying social cognition in neuropsychiatric disorders.

  • 6.
    Juslin, Patrik N.
    et al.
    Uppsala University.
    Liljeström, Simon
    Uppsala University.
    Laukka, Petri
    Stockholms universitet, Psykologiska institutionen.
    Västfjäll, Daniel
    Linköping University.
    Lundqvist, Lars-Olov
    Örebro University Hospital.
    Emotional reactions to music in a nationally representative sample of Swedish adults: Prevalence and causal influences. 2011. In: Musicae scientiae, ISSN 1029-8649, E-ISSN 2045-4147, Vol. 15, no 2, p. 174-207. Article in journal (Refereed)
    Abstract [en]

    Empirical studies have indicated that listeners value music primarily for its ability to arouse emotions. Yet little is known about which emotions listeners normally experience when listening to music, or about the causes of these emotions. The goal of this study was therefore to explore the prevalence of emotional reactions to music in everyday life and how this is influenced by various factors in the listener, the music, and the situation. A self-administered mail questionnaire was sent to a random and nationally representative sample of 1,500 Swedish citizens between the ages of 18 and 65, and 762 participants (51%) responded to the questionnaire. Thirty-two items explored both musical emotions in general (semantic estimates) and the most recent emotion episode featuring music for each participant (episodic estimates). The results revealed several variables (e.g., personality, age, gender, listener activity) that were correlated with particular emotions. A multiple discriminant analysis indicated that three of the most common emotion categories in a set of musical episodes (i.e., happiness, sadness, nostalgia) could be predicted with a mean accuracy of 70% correct based on data obtained from the questionnaire. The results may inform theorizing about musical emotions and guide the selection of causal variables for manipulation in future experiments.

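The multiple discriminant analysis mentioned in the abstract above can be illustrated with a minimal sketch; this is not the authors' analysis code, and the data file and predictor column names are assumptions made for illustration only.

```python
# Hypothetical sketch: predict the reported emotion category of a musical episode
# from questionnaire variables, in the spirit of the discriminant analysis above.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

episodes = pd.read_csv("music_emotion_episodes.csv")          # hypothetical file
episodes = episodes[episodes["emotion"].isin(["happiness", "sadness", "nostalgia"])]

predictors = ["age", "gender", "extraversion", "listener_activity"]  # assumed columns
X = pd.get_dummies(episodes[predictors], drop_first=True)     # encode categorical predictors
y = episodes["emotion"]

lda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(lda, X, y, cv=5).mean()            # compare with the ~70% reported above
print(f"Mean cross-validated accuracy: {accuracy:.2f}")
```
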
  • 7. Kantrowitz, J. T.
    et al.
    Scaramello, N.
    Jakubovitz, A.
    Lehrfeld, J. M.
    Laukka, Petri
    Stockholms universitet, Samhällsvetenskapliga fakulteten, Psykologiska institutionen.
    Elfenbein, H. A.
    Silipo, G.
    Javitt, D. C.
    Amusia and protolanguage impairments in schizophrenia. 2014. In: Psychological Medicine, ISSN 0033-2917, E-ISSN 1469-8978, Vol. 44, no 13, p. 2739-2748. Article in journal (Refereed)
    Abstract [en]

    Background. Both language and music are thought to have evolved from a musical protolanguage that communicated social information, including emotion. Individuals with perceptual music disorders (amusia) show deficits in auditory emotion recognition (AER). Although auditory perceptual deficits have been studied in schizophrenia, their relationship with musical/protolinguistic competence has not previously been assessed. Method. Musical ability was assessed in 31 schizophrenia/schizo-affective patients and 44 healthy controls using the Montreal Battery for Evaluation of Amusia (MBEA). AER was assessed using a novel battery in which actors provided portrayals of five separate emotions. The Disorganization factor of the Positive and Negative Syndrome Scale (PANSS) was used as a proxy for language/thought disorder and the MATRICS Consensus Cognitive Battery (MCCB) was used to assess cognition. Results. Highly significant deficits were seen between patients and controls across auditory tasks (p<0.001). Moreover, significant differences were seen in AER between the amusia and intact music-perceiving groups, which remained significant after controlling for group status and education. Correlations with AER were specific to the melody domain, and correlations between protolanguage (melody domain) and language were independent of overall cognition. Discussion. This is the first study to document a specific relationship between amusia, AER and thought disorder, suggesting a shared linguistic/protolinguistic impairment. Once amusia was considered, other cognitive factors were no longer significant predictors of AER, suggesting that musical ability in general and melodic discrimination ability in particular may be crucial targets for treatment development and cognitive remediation in schizophrenia.

  • 8.
    Kantrowitz, Joshua T.
    et al.
    Columbia Univ, Nathan S Kline Inst Psychiat Res, Orangeburg, NY USA.
    Jakubovitz, Aaron
    Nathan S Kline Inst Psychiat Res, Orangeburg, NY 10962 USA.
    Scaramello, Nayla
    Nathan S Kline Inst Psychiat Res, Orangeburg, NY 10962 USA.
    Laukka, Petri
    Stockholms universitet, Samhällsvetenskapliga fakulteten, Psykologiska institutionen.
    Silipo, Gail
    Nathan S Kline Inst Psychiat Res, Orangeburg, NY 10962 USA.
    Javitt, Daniel C.
    Columbia Univ, Nathan S Kline Inst Psychiat Res, Orangeburg, NY USA.
    Are Schizophrenia Patients Amusical?: The Role of Pitch and Rhythm in Auditory Emotion Recognition Impairments in Schizophrenia. 2013. In: Biological Psychiatry, ISSN 0006-3223, E-ISSN 1873-2402, Vol. 73, no 9, p. 18S-18S. Article in journal (Refereed)
  • 9.
    Kantrowitz, Joshua T.
    et al.
    Schizophrenia Research Center, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY / Columbia University, New York.
    Leitman, David I.
    University of Pennsylvania, Philadelphia.
    Lehrfeld, Jonathan M.
    Laukka, Petri
    Stockholms universitet, Samhällsvetenskapliga fakulteten, Psykologiska institutionen.
    Juslin, Patrik N.
    Uppsala University.
    Butler, Pamela D.
    Schizophrenia Research Center, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY / New York University, New York.
    Silipo, Gail
    Schizophrenia Research Center, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY.
    Javitt, Daniel C.
    Schizophrenia Research Center, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY / New York University, New York.
    Reduction in Tonal Discriminations Predicts Receptive Emotion Processing Deficits in Schizophrenia and Schizoaffective Disorder. 2013. In: Schizophrenia Bulletin, ISSN 0586-7614, E-ISSN 1745-1701, Vol. 39, no 1, p. 86-93. Article in journal (Refereed)
    Abstract [en]

    Introduction: Schizophrenia patients show decreased ability to identify emotion based upon tone of voice (voice emotion recognition), along with deficits in basic auditory processing. Interrelationship among these measures is poorly understood. Methods: Forty-one patients with schizophrenia/schizoaffective disorder and 41 controls were asked to identify the emotional valence (happy, sad, angry, fear, or neutral) of 38 synthesized frequency-modulated (FM) tones designed to mimic key acoustic features of human vocal expressions. The mean (F0M) and variability (F0SD) of fundamental frequency (pitch) and absence or presence of high frequency energy (HF500) of the tones were independently manipulated to assess contributions on emotion identification. Forty patients and 39 controls also completed tone-matching and voice emotion recognition tasks. Results: Both groups showed a nonrandom response pattern (P < .0001). Stimuli with highest and lowest F0M/F0SD were preferentially identified as happy and sad, respectively. Stimuli with low F0M and midrange F0SD values were identified as angry. Addition of HF500 increased rates of angry and decreased rates of sad identifications. Patients showed less differentiation of response across frequency changes, leading to a highly significant between-group difference in response pattern to maximally identifiable stimuli (d = 1.4). The differential identification pattern for FM tones correlated with deficits in basic tone-matching ability (P = .01), voice emotion recognition (P < .001), and negative symptoms (P < .001). Conclusions: Specific FM tones conveyed reliable emotional percepts in both patients and controls and correlated highly with deficits in ability to recognize information based upon tone of voice, suggesting significant bottom-up contributions to social cognition and negative symptom impairments in schizophrenia.

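As a rough illustration of the stimulus manipulation described above (tones with controlled mean F0 and F0 variability), here is a minimal NumPy sketch; it is not the study's synthesis procedure, and the modulation rate, duration, and example parameter values are assumptions.

```python
# Hypothetical sketch: synthesize a tone whose fundamental frequency has a chosen
# mean (F0M) and standard deviation (F0SD), loosely analogous to the FM tones above.
import numpy as np

def fm_tone(f0_mean, f0_sd, mod_rate=5.0, dur=1.0, sr=16000):
    """Sinusoid with a sinusoidal F0 contour around f0_mean; SD of the contour = f0_sd."""
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    # A sine contour with amplitude a has standard deviation a / sqrt(2).
    f0 = f0_mean + f0_sd * np.sqrt(2.0) * np.sin(2.0 * np.pi * mod_rate * t)
    phase = 2.0 * np.pi * np.cumsum(f0) / sr      # integrate instantaneous frequency
    return np.sin(phase)

tone_high = fm_tone(f0_mean=300.0, f0_sd=60.0)    # high F0M/F0SD ("happy"-like; illustrative values)
tone_low = fm_tone(f0_mean=150.0, f0_sd=10.0)     # low F0M/F0SD ("sad"-like; illustrative values)
```
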
  • 10.
    Laukka, Petri
    et al.
    Stockholms universitet, Psykologiska institutionen.
    Audibert, Nicolas
    Université Sorbonne, Paris, France.
    Aubergé, Véronique
    Université Joseph Fourier—Université Stendhal , Grenoble , France.
    Exploring the determinants of the graded structure of vocal emotion expressions. 2012. In: Cognition & Emotion, ISSN 0269-9931, E-ISSN 1464-0600, Vol. 26, no 4, p. 710-719. Article in journal (Refereed)
    Abstract [en]

    We examined what determines the typicality, or graded structure, of vocal emotion expressions. Separate groups of judges rated acted and spontaneous expressions of anger, fear, and joy with regard to their typicality and three main determinants of the graded structure of categories: category members’ similarity to the central tendency of their category (CT); category members’ frequency of instantiation, i.e., how often they are encountered as category members (FI); and category members’ similarity to ideals associated with the goals served by its category, i.e., suitability to express particular emotions. Partial correlations and multiple regression analysis revealed that similarity to ideals, rather than CT or FI, explained most variance in judged typicality. Results thus suggest that vocal emotion expressions constitute ideal-based goal-derived categories, rather than taxonomic categories based on CT and FI. This could explain how prototypical expressions can be acoustically distinct and highly recognisable but occur relatively rarely in everyday speech.

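The regression described above can be sketched as follows, assuming a table with judged typicality and the three candidate determinants (similarity to the central tendency, frequency of instantiation, and similarity to ideals); the file and column names are placeholders, not the study's data.

```python
# Hypothetical sketch: which determinant best predicts judged typicality?
import pandas as pd
import statsmodels.formula.api as smf

ratings = pd.read_csv("typicality_ratings.csv")   # hypothetical file, one row per expression
model = smf.ols("typicality ~ central_tendency + frequency + ideals", data=ratings).fit()
print(model.summary())                            # inspect the coefficient for each predictor
```
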
  • 11.
    Laukka, Petri
    et al.
    Stockholms universitet, Samhällsvetenskapliga fakulteten, Psykologiska institutionen.
    Eerola, Tuomas
    University of Jyväskylä, Finland.
    Thingujam, Nutankumar S.
    Sikkim Univ, Gangtok, India.
    Yamasaki, Teruo
    Osaka Shoin Womens Univ, Nara, Japan.
    Beller, Gregory
    IRCAM, Paris, France.
    Universal and Culture-Specific Factors in the Recognition and Performance of Musical Affect Expressions. 2013. In: Emotion, ISSN 1528-3542, E-ISSN 1931-1516, Vol. 13, no 3, p. 434-449. Article in journal (Refereed)
    Abstract [en]

    We present a cross-cultural study on the performance and perception of affective expression in music. Professional bowed-string musicians from different musical traditions (Swedish folk music, Hindustani classical music, Japanese traditional music, and Western classical music) were instructed to perform short pieces of music to convey 11 emotions and related states to listeners. All musical stimuli were judged by Swedish, Indian, and Japanese participants in a balanced design, and a variety of acoustic and musical cues were extracted. Results first showed that the musicians' expressive intentions could be recognized with accuracy above chance both within and across musical cultures, but communication was, in general, more accurate for culturally familiar versus unfamiliar music, and for basic emotions versus nonbasic affective states. We further used a lens-model approach to describe the relations between the strategies that musicians use to convey various expressions and listeners' perceptions of the affective content of the music. Many acoustic and musical cues were similarly correlated with both the musicians' expressive intentions and the listeners' affective judgments across musical cultures, but the match between musicians' and listeners' uses of cues was better in within-cultural versus cross-cultural conditions. We conclude that affective expression in music may depend on a combination of universal and culture-specific factors.

  • 12.
    Laukka, Petri
    et al.
    Stockholms universitet, Psykologiska institutionen.
    Elfenbein, H.A.
    Chui, W.
    Thingujam, N.S.
    Iraki, F.K.
    Rockstuhl, T.
    Althoff, J.
    Presenting the VENEC corpus: Development of a cross-cultural corpus of vocal emotion expressions and a novel method of annotating emotion appraisals. 2010. In: Proceedings of the LREC 2010 Workshop on Corpora for Research on Emotion and Affect / [ed] L. Devillers, B. Schuller, R. Cowie, E. Douglas-Cowie, & A. Batliner, Valetta, Malta: European Language Resources Association, 2010, p. 53-57. Conference paper (Refereed)
    Abstract [en]

    We introduce the Vocal Expressions of Nineteen Emotions across Cultures (VENEC) corpus and present results from initial evaluation efforts using a novel method of annotating emotion appraisals. The VENEC corpus consists of 100 professional actors from 5 English speaking cultures (USA, India, Kenya, Singapore, and Australia) who vocally expressed 19 different affects/emotions (affection, amusement, anger, contempt, disgust, distress, fear, guilt, happiness, interest, lust, negative surprise, neutral, positive surprise, pride, relief, sadness, serenity, and shame), each with 3 levels of emotion intensity, by enacting finding themselves in various emotion-eliciting situations. In all, the corpus contains approximately 6,500 stimuli offering great variety of expressive styles for each emotion category due to speaker, culture, and emotion intensity effects. All stimuli have further been acoustically analyzed regarding pitch, intensity, voice quality, and durational cues. In the appraisal rating study, listeners rated a selection of VENEC-stimuli with regard to the characteristics of the emotion eliciting situation, described in terms of 8 emotion appraisal dimensions (novelty, intrinsic pleasantness, goal conduciveness, urgency, power, self- and other-responsibility, and norm compatibility). First, results showed that the inter-rater reliability was acceptable for all scales except responsibility. Second, the perceived appraisal profiles for the different vocal expressions were generally in accord with predictions based on appraisal theory. Finally, listeners’ appraisal ratings on each scale were significantly correlated with several acoustic characteristics. The results show that listeners can reliably infer several aspects of emotion-eliciting situations from vocal affect expressions, and thus suggest that vocal affect expressions may carry cognitive representational information.

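The inter-rater reliability check mentioned above can be sketched with a generic Cronbach's alpha computed over listeners for one appraisal scale; this is not the exact statistic or data from the study, and the ratings matrix below is a toy placeholder.

```python
# Hypothetical sketch: Cronbach's alpha for one appraisal scale, with stimuli as rows
# and listeners (raters) as columns.
import numpy as np

def cronbach_alpha(ratings):
    """ratings: 2-D array of shape (n_stimuli, n_raters)."""
    ratings = np.asarray(ratings, dtype=float)
    n_raters = ratings.shape[1]
    rater_variances = ratings.var(axis=0, ddof=1).sum()   # sum of per-rater variances
    total_variance = ratings.sum(axis=1).var(ddof=1)      # variance of summed ratings
    return n_raters / (n_raters - 1) * (1 - rater_variances / total_variance)

example = np.array([[3, 4, 3], [5, 5, 4], [2, 1, 2], [4, 4, 5]])   # toy data
print(cronbach_alpha(example))
```
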
  • 13.
    Laukka, Petri
    et al.
    Stockholms universitet, Samhällsvetenskapliga fakulteten, Psykologiska institutionen.
    Elfenbein, Hillary Anger
    Washington University in St. Louis, St. Louis, MO, USA.
    Emotion Appraisal Dimensions can be Inferred From Vocal Expressions. 2012. In: Social Psychological and Personality Science, ISSN 1948-5506, E-ISSN 1948-5514, Vol. 3, no 5, p. 529-536. Article in journal (Refereed)
    Abstract [en]

    Vocal expressions are thought to convey information about speakers' emotional states but may also reflect the antecedent cognitive appraisal processes that produced the emotions. We investigated the perception of emotion-eliciting situations on the basis of vocal expressions. Professional actors vocally portrayed different emotions by enacting emotion-eliciting situations. Judges then rated these expressions with respect to the emotion-eliciting situation described in terms of appraisal dimensions (i.e., novelty, intrinsic pleasantness, goal conduciveness, urgency, power, self- and other-responsibility, and norm compatibility), achieving good agreement. The perceived appraisal profiles for the different emotions were generally in accord with predictions based on appraisal theory. The appraisal ratings also correlated with a variety of acoustic measures related to pitch, intensity, voice quality, and temporal characteristics. Results suggest that several aspects of emotion-eliciting situations can be inferred reliably and validly from vocal expressions which, thus, may carry information about the cognitive representation of events.

  • 14.
    Laukka, Petri
    et al.
    Stockholms universitet, Samhällsvetenskapliga fakulteten, Psykologiska institutionen.
    Elfenbein, Hillary Anger
    Washington University, St. Louis, MO, USA.
    Söder, Nela
    Stockholms universitet, Samhällsvetenskapliga fakulteten, Psykologiska institutionen.
    Nordström, Henrik
    Stockholms universitet, Samhällsvetenskapliga fakulteten, Psykologiska institutionen.
    Althoff, Jean
    University of Queensland, Brisbane, QLD, Australia.
    Chui, Wanda
    University of California, Berkeley, CA, USA.
    Iraki, Frederick K.
    United States International University, Nairobi, Kenya.
    Rockstuhl, Thomas
    Nanyang Technological University, Singapore.
    Thingujam, Nutankumar S.
    Sikkim University, Gangtok, India.
    Cross-cultural decoding of positive and negative non-linguistic emotion vocalizations. 2013. In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 4, article id 353. Article in journal (Refereed)
    Abstract [en]

    Which emotions are associated with universally recognized non-verbal signals? We address this issue by examining how reliably non-linguistic vocalizations (affect bursts) can convey emotions across cultures. Actors from India, Kenya, Singapore, and USA were instructed to produce vocalizations that would convey nine positive and nine negative emotions to listeners. The vocalizations were judged by Swedish listeners using a within-valence forced-choice procedure, where positive and negative emotions were judged in separate experiments. Results showed that listeners could recognize a wide range of positive and negative emotions with accuracy above chance. For positive emotions, we observed the highest recognition rates for relief, followed by lust, interest, serenity and positive surprise, with affection and pride receiving the lowest recognition rates. Anger, disgust, fear, sadness, and negative surprise received the highest recognition rates for negative emotions, with the lowest rates observed for guilt and shame. By way of summary, results showed that the voice can reveal both basic emotions and several positive emotions other than happiness across cultures, but self-conscious emotions such as guilt, pride, and shame seem not to be well recognized from non-linguistic vocalizations.

  • 15.
    Laukka, Petri
    et al.
    Stockholms universitet, Samhällsvetenskapliga fakulteten, Psykologiska institutionen.
    Neiberg, Daniel
    KTH.
    Elfenbein, Hillary Anger
    Washington Univ, USA.
    Evidence for Cultural Dialects in Vocal Emotion Expression: Acoustic Classification Within and Across Five Nations. 2014. In: Emotion, ISSN 1528-3542, E-ISSN 1931-1516, Vol. 14, no 3, p. 445-449. Article in journal (Refereed)
    Abstract [en]

    The possibility of cultural differences in the fundamental acoustic patterns used to express emotion through the voice is an unanswered question central to the larger debate about the universality versus cultural specificity of emotion. This study used emotionally inflected standard-content speech segments expressing 11 emotions produced by 100 professional actors from 5 English-speaking cultures. Machine learning simulations were employed to classify expressions based on their acoustic features, using conditions where training and testing were conducted on stimuli coming from either the same or different cultures. A wide range of emotions were classified with above-chance accuracy in cross-cultural conditions, suggesting vocal expressions share important characteristics across cultures. However, classification showed an in-group advantage with higher accuracy in within- versus cross-cultural conditions. This finding demonstrates cultural differences in expressive vocal style, and supports the dialect theory of emotions according to which greater recognition of expressions from in-group members results from greater familiarity with culturally specific expressive styles.

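The within- versus cross-cultural machine-learning conditions described above can be sketched roughly as below, assuming a precomputed table of acoustic features; feature extraction, the specific classifier, and all file and column names are assumptions rather than the study's actual setup.

```python
# Hypothetical sketch: train on one culture's expressions, test on the same or a
# different culture. A proper within-culture estimate would use a held-out split
# or cross-validation instead of testing on the training data.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("acoustic_features.csv")               # hypothetical feature table
feature_cols = [c for c in df.columns if c not in ("culture", "emotion")]

def classify(train_culture, test_culture):
    train = df[df["culture"] == train_culture]
    test = df[df["culture"] == test_culture]
    clf = make_pipeline(StandardScaler(), SVC())
    clf.fit(train[feature_cols], train["emotion"])
    return clf.score(test[feature_cols], test["emotion"])

print("within-culture :", classify("USA", "USA"))
print("cross-culture  :", classify("USA", "India"))
```
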
  • 16.
    Laukka, Petri
    et al.
    Stockholms universitet, Samhällsvetenskapliga fakulteten, Psykologiska institutionen.
    Quick, Lina
    University of Gävle.
    Emotional and motivational uses of music in sports and exercise: A questionnaire study among athletes. 2013. In: Psychology of Music, ISSN 0305-7356, E-ISSN 1741-3087, Vol. 41, no 2, p. 198-215. Article in journal (Refereed)
    Abstract [en]

    Music is present in many sport and exercise situations, but empirical investigations on the motives for listening to music in sports remain scarce. In this study, Swedish elite athletes (N = 252) answered a questionnaire that focused on the emotional and motivational uses of music in sports and exercise. The questionnaire contained both quantitative items that assessed the prevalence of various uses of music, and open-ended items that targeted specific emotional episodes in relation to music in sports. Results showed that the athletes most often reported listening to music during pre-event preparations, warm-up, and training sessions; and the most common motives for listening to music were to increase pre-event activation, positive affect, motivation, performance levels and to experience flow. The athletes further reported that they mainly experienced positive affective states (e.g., happiness, alertness, confidence, relaxation) in relation to music in sports, and also reported on their beliefs about the causes of the musical emotion episodes in sports. In general, the results suggest that the athletes used music in purposeful ways in order to facilitate their training and performance.

  • 17.
    Laukka, Petri
    et al.
    Stockholms universitet, Psykologiska institutionen.
    Åhs, Fredrik
    Duke University, Durham, North Carolina.
    Furmark, Tomas
    Uppsala University.
    Fredrikson, Mats
    Uppsala University.
    Neurofunctional correlates of expressed vocal affect in social phobia. 2011. In: Cognitive, Affective, & Behavioral Neuroscience, ISSN 1530-7026, E-ISSN 1531-135X, Vol. 11, no 3, p. 413-425. Article in journal (Refereed)
    Abstract [en]

    We investigated the neural correlates of expressed vocal affect in patients with social phobia. A group of 36 patients performed an anxiogenic public-speaking task while regional cerebral blood flow (rCBF) was assessed using oxygen-15 positron emission tomography. The patients’ speech was recorded and content masked using low-pass filtering (which obscures linguistic content but preserves nonverbal affective cues). The content-masked speech samples were then evaluated with regard to their level of vocally expressed nervousness. We hypothesized that activity in prefrontal and subcortical brain areas previously implicated in emotion regulation would be associated with the degree of expressed vocal affect. Regression analyses accordingly revealed significant negative correlations between expressed vocal affect and rCBF in inferior frontal gyrus, putamen, and hippocampus. Further, functional connectivity was revealed between inferior frontal gyrus and (a) anterior cingulate cortex and (b) amygdala and basal ganglia. We suggest that brain areas important for emotion regulation may also form part of a network associated with the modulation of affective prosody in social phobia.

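Content masking by low-pass filtering, as described above, can be sketched with SciPy; the cutoff frequency and file names here are assumptions for illustration, not the values used in the study.

```python
# Hypothetical sketch: low-pass filter a speech recording so that linguistic content
# becomes unintelligible while prosodic cues (pitch, rhythm, loudness) remain.
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

audio, sr = sf.read("speech_sample.wav")                          # hypothetical recording
sos = butter(4, 400.0, btype="lowpass", fs=sr, output="sos")      # ~400 Hz cutoff (assumed)
masked = sosfiltfilt(sos, audio, axis=0)                          # zero-phase filtering
sf.write("speech_sample_masked.wav", masked, sr)
```
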
  • 18. Leitman, David I.
    et al.
    Wolf, Daniel H.
    Laukka, Petri
    Stockholms universitet, Psykologiska institutionen.
    Ragland, J. Daniel
    Valdez, Jeffrey N.
    Turetsky, Bruce I.
    Gur, Raquel E.
    Gur, Ruben C.
    Not pitch perfect: Sensory contributions to affective communication impairment in schizophrenia. 2011. In: Biological Psychiatry, ISSN 0006-3223, E-ISSN 1873-2402, Vol. 70, no 7, p. 611-618. Article in journal (Refereed)
    Abstract [en]

    Background: Schizophrenia patients have vocal affect (prosody) deficits that are treatment resistant and associated with negative symptoms and poor outcome. The neural correlates of this dysfunction are unclear. Prior study has suggested that schizophrenia vocal affect perception deficits stem from an inability to use acoustic cues, notably pitch, in decoding emotion.

    Methods: Functional magnetic resonance imaging was performed in 24 schizophrenia patients and 28 healthy control subjects, during the performance of a four-choice (happiness, fear, anger, neutral) vocal affect identification task in which items for each emotion varied parametrically in affectively salient acoustic cue levels.

    Results: We observed that parametric increases in cue levels in schizophrenia failed to produce the same identification rate increases as in control subjects. These deficits correlated with diminished reciprocal activation changes in superior temporal and inferior frontal gyri and reduced temporo-frontal connectivity. Task activation also correlated with independent measures of pitch perception and negative symptom severity.

    Conclusions: These findings illustrate the interplay between sensory and higher-order cognitive dysfunction in schizophrenia. Sensory contributions to vocal affect deficits also suggest that this neurobehavioral marker could be targeted by pharmacological or behavioral remediation of acoustic feature discrimination.

  • 19. Leitman, David I.
    et al.
    Wolf, Daniel H.
    Ragland, J. Daniel
    Laukka, Petri
    Stockholms universitet, Psykologiska institutionen.
    Loughead, James
    Valdez, Jeffrey N.
    Javitt, Daniel C.
    Turetsky, Bruce I.
    Gur, Ruben C.
    “It’s not what you say, but how you say it”: A reciprocal temporo-frontal network for affective prosody. 2010. In: Frontiers in Human Neuroscience, E-ISSN 1662-5161, Vol. 4, no 19. Article in journal (Refereed)
    Abstract [en]

    Humans communicate emotion vocally by modulating acoustic cues such as pitch, intensity and voice quality. Research has documented how the relative presence or absence of such cues alters the likelihood of perceiving an emotion, but the neural underpinnings of acoustic cue-dependent emotion perception remain obscure. Using functional magnetic resonance imaging in 20 subjects we examined a reciprocal circuit consisting of superior temporal cortex, amygdala and inferior frontal gyrus that may underlie affective prosodic comprehension. Results showed that increased saliency of emotion-specific acoustic cues was associated with increased activation in superior temporal cortex (planum temporale (PT), posterior superior temporal gyrus (pSTG), and posterior middle temporal gyrus (pMTG)) and amygdala, whereas decreased saliency of acoustic cues was associated with increased inferior frontal activity and temporo-frontal connectivity. These results suggest that sensory-integrative processing is facilitated when the acoustic signal is rich in affective information, yielding increased activation in temporal cortex and amygdala. Conversely, when the acoustic signal is ambiguous, greater evaluative processes are recruited, increasing activation in inferior frontal gyrus (IFG) and IFG-STG connectivity. Auditory regions may thus integrate acoustic information with amygdala input to form emotion-specific representations, which are evaluated within inferior frontal regions.

  • 20. Neiberg, Daniel
    et al.
    Laukka, Petri
    Stockholms universitet, Psykologiska institutionen.
    Ananthakrishnan, Gopal
    Classification of affect in speech using normalized time-frequency cepstra. 2010. In: Speech Prosody 2010 Conference Proceedings, 2010, p. 100071-1-4. Conference paper (Refereed)
    Abstract [en]

    Subtle temporal and spectral differences between categorical realizations of para-linguistic phenomena (e.g., affective vocal expressions) are hard to capture and describe. In this paper we present a signal representation based on Time Varying Constant-Q Cepstral Coefficients (TVCQCC) derived for this purpose. A method which utilizes the special properties of the constant-Q transform for mean F0 estimation and normalization is described. The coefficients are invariant to segment length, and as a special case, a representation for prosody is considered. Speaker-independent classification results using nu-SVM are reported for the Berlin EMO-DB and for two closed sets of basic (anger, disgust, fear, happiness, sadness, neutral) and social/interpersonal (affection, pride, shame) emotions recorded by forty professional actors from two English dialect areas. The accuracy for the Berlin EMO-DB is 71.2%; the accuracy for the first set (basic emotions) was 44.6%, and for the second set (basic and social emotions) it was 31.7%. It was found that F0 normalization boosts performance and that a combined feature set performs best.

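A minimal sketch of the general recipe (constant-Q based cepstral features fed to a nu-SVM) follows; it is not an implementation of the TVCQCC features or the F0 normalization from the paper, and the librosa parameters, file names, and crude time-averaging are all assumptions.

```python
# Hypothetical sketch: constant-Q cepstral-like features + nu-SVM classification.
import numpy as np
import librosa
from scipy.fft import dct
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import NuSVC

def cq_cepstra(path, n_coeffs=20):
    y, sr = librosa.load(path, sr=None)
    C = np.abs(librosa.cqt(y, sr=sr))                    # constant-Q magnitude spectrogram
    cep = dct(np.log(C + 1e-8), axis=0, norm="ortho")    # cepstral-like coefficients per frame
    return cep[:n_coeffs].mean(axis=1)                   # crude fixed-length summary (not TVCQCC)

# Placeholder corpus: real use needs many utterances and a speaker-independent split.
paths = ["anger_01.wav", "fear_01.wav", "anger_02.wav", "fear_02.wav"]
labels = ["anger", "fear", "anger", "fear"]
X = np.vstack([cq_cepstra(p) for p in paths])

clf = make_pipeline(StandardScaler(), NuSVC(nu=0.5))
clf.fit(X, labels)
```
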
  • 21. Neiberg, Daniel
    et al.
    Laukka, Petri
    Stockholms universitet, Psykologiska institutionen.
    Elfenbein, Hillary Anger
    Intra-, inter-, and cross-cultural classification of vocal affect. 2011. In: Proceedings of INTERSPEECH 2011: 12th Annual Conference of the International Speech Communication Association, Vol. 3, International Speech Communication Association, 2011, p. 1581-1584. Conference paper (Refereed)
    Abstract [en]

    We present intra-, inter- and cross-cultural classifications of vocal expressions. Stimuli were selected from the VENEC corpus and consisted of portrayals of 11 emotions, each expressed with 3 levels of intensity. Classification (nu-SVM) was based on acoustic measures related to pitch, intensity, formants, voice source and duration. Results showed that mean recall across emotions was around 2.4-3 times higher than chance level for both intra- and inter-cultural conditions. For cross-cultural conditions, the relative performance dropped 26%, 32%, and 34% for high, medium, and low emotion intensity, respectively. This suggests that intra-cultural models were more sensitive to mismatched conditions for low emotion intensity. Preliminary results further indicated that recall rate varied as a function of emotion, with lust and sadness showing the smallest performance drops in the cross-cultural condition.

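The summary figures quoted above (mean recall relative to chance, and the relative performance drop from intra- to cross-cultural conditions) amount to simple bookkeeping; a minimal sketch, not the study's evaluation code, is shown below.

```python
# Hypothetical sketch of the two summary statistics quoted in the abstract above.
from sklearn.metrics import recall_score

def recall_vs_chance(y_true, y_pred, n_classes=11):
    """Unweighted mean recall over emotions, expressed as a multiple of chance."""
    macro_recall = recall_score(y_true, y_pred, average="macro")
    return macro_recall / (1.0 / n_classes)                        # e.g. ~2.4-3x chance

def relative_drop(intra_recall, cross_recall):
    """Percentage drop from the intra-cultural to the cross-cultural condition."""
    return 100.0 * (intra_recall - cross_recall) / intra_recall    # e.g. 26-34%
```
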
  • 22.
    Thingujam, Nutankumar S.
    et al.
    University of Pune, India.
    Laukka, Petri
    Stockholms universitet, Samhällsvetenskapliga fakulteten, Psykologiska institutionen.
    Elfenbein, Hillary Anger
    Washington University in St. Louis, USA.
    Distinct emotional abilities converge: Evidence from emotional understanding and emotion recognition through the voice. 2012. In: Journal of Research in Personality, ISSN 0092-6566, E-ISSN 1095-7251, Vol. 46, no 3, p. 350-354. Article in journal (Refereed)
    Abstract [en]

    One key criterion for whether Emotional Intelligence (EI) truly fits the definition of "intelligence" is that individual branches of EI should converge. However, for performance tests that measure actual ability, such convergence has been elusive. Consistent with theoretical perspectives for intelligence, we approach this question using EI measures that have objective standards for right answers. Examining emotion recognition through the voice (that is, the ability to judge an actor's intended portrayal) and emotional understanding (that is, the ability to understand relationships and transitions among emotions), we find substantial convergence, r = .53. Results provide new data to inform the often heated debate about the validity of EI, and further the basis of optimism that EI may truly be considered intelligence.

  • 23.
    Yamasaki, Teruo
    et al.
    Osaka Shoin Women’s University, Japan.
    Yamada, Keiko
    Osaka Shoin Women’s University, Japan.
    Laukka, Petri
    Stockholms universitet, Psykologiska institutionen.
    Viewing the world through the prism of music: Effects of music on perceptions of the environment. 2015. In: Psychology of Music, ISSN 0305-7356, E-ISSN 1741-3087, Vol. 43, no 1, p. 61-74. Article in journal (Refereed)
    Abstract [en]

    Questionnaire and interview studies suggest that music is valued for its role in managing the listener’s impression of the environment, but systematic investigations on the topic are scarce. We present a field experiment wherein participants were asked to rate their impression of four different environments (a quiet residential area, traveling by train in the suburbs, at a busy crossroads, and in a tranquil park area) on bipolar adjective scales, while listening to music (which varied regarding level of perceived activation and valence) or in silence. Results showed that the evaluation of the environment was in general affected in the direction of the characteristics of the music, especially in conditions where the perceived characteristics of the music and environment were incongruent. For example, highly active music increased the activation ratings of environments which were perceived as inactive without music, whereas inactive music decreased the activation ratings of environments which were perceived as highly active without music. Also, highly positive music increased the positivity ratings of the environments. In sum, the findings suggest that music may function as a prism that modifies the impression of one’s surroundings. Different theoretical explanations of the results are discussed.
