Facial gestures in the expression of prosodic attitudes of Brazilian Portuguese
Related papers
Analysis of Facial Expressions in Brazilian Sign Language (Libras)
European Scientific Journal, ESJ
Brazilian Sign Language (in Portuguese, Libras) is a visuospatial linguistic system adopted by the Brazilian deaf communities as their primary form of communication. Libras is a minority-group language, so its research and the production of teaching materials do not receive the same incentive to progress or improve as oral languages. This complex language employs signs composed of hand shapes and movements combined with facial expressions and body postures. Facial expressions rarely appear in the sign language literature, despite being essential to this form of communication. The objectives of this research are therefore to present and discuss sub-categories of the grammatical facial expressions of Libras, with two specific objectives: (1) the building of an annotated video corpus comprising all the categories of facial expressions in Brazilian Sign Language identified in the literature; (2) the application of the Facial Action Coding System (FACS) (which has its origins as an ex...
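To make the corpus-building objective concrete, the following is a minimal sketch of how one annotated entry could be represented, pairing a Libras video segment with FACS Action Unit codes and a grammatical facial-expression category; the field names, the example values and the Python representation are illustrative assumptions, not the paper's actual annotation scheme.

```python
# A minimal sketch of one annotated corpus entry (illustrative schema, not the
# paper's actual annotation format).
from dataclasses import dataclass

@dataclass
class FacsAnnotation:
    video_id: str
    start_s: float
    end_s: float
    action_units: list[str]          # FACS codes, e.g. "AU4" (brow lowerer)
    grammatical_category: str        # e.g. "wh-question marker"
    gloss: str = ""                  # manual sign(s) co-occurring with the expression

entry = FacsAnnotation(
    video_id="libras_clip_017",      # hypothetical clip identifier
    start_s=2.40,
    end_s=3.15,
    action_units=["AU4", "AU7"],     # brow lowerer + lid tightener
    grammatical_category="wh-question marker",
    gloss="ONDE (where)",
)
print(entry)
```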
The Role of Visual Stimuli in the Perception of Prosody in Brazilian Portuguese
This study analyzes the role of visual and acoustic stimuli in the recognition of prosodic characteristics such as the discrimination of statements and yes-no questions in Brazilian Portuguese (BP). Studies such as Massaro (1998), Fagel (2006), Abelin (2007), and Ronquest et al. (2010) claim that speech perception is bimodal. In this study, after three experiments were performed, it was observed that both modalities (acoustic and visual) play an important role in speech perception. The process by which the sounds belonging to a language are heard, interpreted, and understood is bimodal.
Facial Expressions and Speech Acts
2016
The illocutionary force-indicator devices (IFIDs) are all the linguistic elements that indicate how an utterance is to be taken, i.e. what illocutionary act a speaker is performing while uttering a sentence (Searle & Vanderveken 1985). Up to now, research in linguistics and psycholinguistics has produced a rich literature focused strictly on linguistic IFIDs (i.e. semantic, syntactic and prosodic IFIDs). Nonetheless, it is commonly recognized that the comprehension of a speech act also depends on non-verbal illocutionary force-indicator devices. Indeed, decoding the illocutionary force of a speech act constitutes a multimodal process that often involves the computing of non-verbal signals such as gestures (e.g. movements of the hands and of the body) or postural signs (e.g. arms folded). A psycholinguistic research line on non-verbal IFIDs is still lacking, though. The present paper proposes three production and comprehension psycholinguistic experiments aimed at evaluating t...
Visual and auditory cues of assertions and questions in Brazilian Portuguese and Mexican Spanish
Journal of Speech Sciences, 2020
The aim of this paper is to compare the multimodal production of questions in two different language varieties: Brazilian Portuguese and Mexican Spanish. Descriptions of the auditory and visual cues of two speech acts, assertions and questions, are presented based on Brazilian and Mexican corpora. The sentence “Como você sabe” was produced as a yes-no (echo) question and an assertion by ten speakers (five male) from Rio de Janeiro, and the sentence “Apaga la tele” was produced as a yes-no question and an assertion by five speakers (three male) from Mexico City. The results show that, whereas the Brazilian Portuguese and Mexican Spanish assertions are produced with different F0 contours and different facial expressions, questions in both languages are produced with specific F0 contours but similar facial expressions. The outcome of this comparative study suggests that lowering the eyebrows, tightening the eyelid and wrinkling the nose can be considered question markers in both language ...
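The facial question markers reported here correspond to FACS Action Units 4 (brow lowerer), 7 (lid tightener) and 9 (nose wrinkler). As a hedged illustration of how such a configuration could be screened for automatically, the sketch below scans per-frame AU intensities as exported by a tool such as OpenFace; the file name and the intensity threshold are assumptions for the example, not values from the study.

```python
# A minimal sketch: flag frames where the three candidate question-marking
# Action Units are simultaneously active in an OpenFace-style intensity export.
import pandas as pd

aus = pd.read_csv("speaker01_question.csv")   # hypothetical per-frame AU export
aus.columns = aus.columns.str.strip()         # OpenFace headers often carry leading spaces

THRESHOLD = 1.0                               # assumed cutoff on the 0-5 intensity scale
markers = ["AU04_r", "AU07_r", "AU09_r"]      # brow lowerer, lid tightener, nose wrinkler

# Frames where all three markers exceed the threshold at the same time.
question_like = aus[(aus[markers] > THRESHOLD).all(axis=1)]
print(f"{len(question_like)} of {len(aus)} frames show the AU4+AU7+AU9 configuration")
```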
Journal of Speech Sciences, 2020
The aim of this paper is to compare the multimodal production of assertions and questions in two different languages: Brazilian Portuguese and Mexican Spanish. Descriptions of the auditory and visual cues of these speech acts are presented based on Brazilian and Mexican corpora. The sentence "Como você sabe" was produced as an assertion and an echo question by ten speakers (five male) from Rio de Janeiro, and the sentence "Apaga la tele" was produced as an assertion and a yes-no question by five speakers (three male) from Mexico City. The intonational patterns of the speech acts were described in terms of F0 movements and annotated in the nuclear region of the contours with the ToBI system. Momentary facial muscular changes (namely Action Units) located in the upper and lower parts of the face, as well as head movements, were used to analyze the facial expressions. The acoustic description showed that Brazilian Portuguese assertions are produced with a falling F0 nuclear configuration (H+L*L%) and echo questions with a rising F0 nuclear configuration (L+<H*L%). Mexican Spanish assertions present two types of F0 nuclear configuration, either a low flat nuclear F0 (L*L%) or a falling-rising (L+H*L%) nuclear F0, whereas Mexican Spanish yes-no questions are produced with a low nuclear F0 followed by a rising boundary tone (L*LH%). The outcome of the visual analysis indicates that, whereas Brazilian Portuguese assertions are visually produced with a blink and a right head tilt and Mexican Spanish assertions with a lip stretcher, lowering the eyebrows, tightening the eyelid and wrinkling the nose can be considered question markers in both language varieties.
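The acoustic side of this description rests on classifying the nuclear F0 movement as falling or rising. A minimal sketch of such a check, assuming a single-utterance recording and using the parselmouth interface to Praat, is given below; the file name, the 20% windows and the 10 Hz threshold are illustrative assumptions rather than the paper's procedure.

```python
# A minimal sketch: extract the F0 contour and label the final movement as
# rising or falling (crude proxy for the nuclear configuration).
import numpy as np
import parselmouth  # Python interface to Praat

snd = parselmouth.Sound("como_voce_sabe.wav")   # hypothetical recording
pitch = snd.to_pitch()                          # default Praat pitch settings
f0 = pitch.selected_array["frequency"]          # Hz, 0.0 where unvoiced

voiced = f0 > 0
f0_v = f0[voiced]

# Compare the mean F0 of the last 20% of voiced frames with the preceding 20%.
n = len(f0_v)
tail = f0_v[int(0.8 * n):]
pre = f0_v[int(0.6 * n):int(0.8 * n)]
delta = np.mean(tail) - np.mean(pre)

label = "rising (question-like)" if delta > 10 else "falling/low (assertion-like)"
print(f"final F0 movement: {delta:+.1f} Hz -> {label}")
```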
2005
In order to explain a proposal for a theoretical framework for a holistic micro-analysis of verbal and nonverbal modalities, a micro-analysis of a parenthesis in a face-to-face interaction between three Portuguese participants will be presented. In this analysis, both verbal and nonverbal signals were described in relation to each other and classified according to their interactional functions. In other words, various kinds of nonverbal communication, movements and positions of the body, head, eyes, face, arms and hands, will be analyzed not only with regard to their form and semiotics, but also to their multiple functions in relation to speech production/reception, which includes words, prosody, and what lies beyond them: the participants' expectations, attitudes, motivations and relation to each other. Moreover, this example will emphasize the importance of when and where a movement is made and what it means in interaction, independently of its idiosyncratic aspects. The moment and the function of each movement, as well as its synchronization with speech and with other movements/positions of all participants, can offer very important cues regarding speech processing. The functional categories used for the classification of verbal signals result from a synthesis of principles and categories of the theories of Ethnomethodological Conversation Analysis, Discourse Analysis and Contextualization. Furthermore, for the analysis of prosody, Interactional Linguistics was taken into account. As for the nonverbal modalities, recent research on gesture and body movement from several disciplinary areas was considered.
Gestural prosody and the expression of emotions: A perceptual and acoustic experiment
2015
This paper presents a perceptual and acoustic experiment and introduces methodological procedures for dealing with qualitative and quantitative variables. Its objectives are: investigating the functions of vocal and facial gestures in the appraisal of six basic emotions (Anger, Distaste, Fear, Happiness, Sadness and Shame) and valence (positive, neutral and negative); and discussing the interaction between the visual, vocal and semantic dimensions in the evaluation of audio, visual and audiovisual stimuli corresponding to 30 utterances (10 of them semantically positive, 10 neutral and 10 negative). The correlation among the variables was assessed with non-parametric tests applying FAMD and MFA. Among the perceptual and acoustic variables investigated, the most influential for the identification of valence/emotions were found to be the VPAS and the ExpressionEvaluator measures. Judgments concerning the positive, negative and neutral valence of the utterances and the type of emotion varied according...
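FAMD (Factor Analysis of Mixed Data) and MFA (Multiple Factor Analysis) handle datasets that mix categorical judgments with continuous acoustic measures. The sketch below shows a FAMD run on a toy table of this kind using the prince library; the column names and values are invented for illustration and do not reproduce the paper's data.

```python
# A minimal sketch of FAMD on mixed perceptual (categorical) and acoustic
# (numeric) variables; toy data only.
import pandas as pd
import prince

df = pd.DataFrame({
    "valence":   ["positive", "negative", "neutral", "negative", "positive", "neutral"],
    "emotion":   ["Happiness", "Anger", "Shame", "Fear", "Happiness", "Sadness"],
    "f0_mean":   [210.3, 245.8, 180.1, 230.4, 205.7, 188.9],   # Hz
    "intensity": [68.2, 74.5, 62.0, 71.3, 67.1, 63.8],          # dB
})

famd = prince.FAMD(n_components=2, random_state=42).fit(df)
coords = famd.row_coordinates(df)   # utterances projected onto the first two factors
print(coords.round(2))
```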
SPEECH GESTURES AND THE PRAGMATIC ECONOMY OF ORAL EXPRESSION IN FACE-TO-FACE INTERACTION
The aim of this paper is to observe how speech gestures contribute to the pragmatic economy and redundancy of the global utterance. The term speech gestures is used to refer to the visual suprasegmental manifestations of spoken language in the context of any oral face-to-face interaction. It includes facial expressions, eye movements, head and hand movements, and body movement in general, as well as touch, posture, and proxemics. The research is based on transcriptions of all perceptible behavior (auditory and visual) in short sequences of videotaped narratives and interactions of children and adults.