Please use this identifier to cite or link to this item:
https://www.um.edu.mt/library/oar/handle/123456789/103907
Title: Towards an empirically-based grammar of speech and gestures
Authors: Paggio, Patrizia
Keywords: Prosodic analysis (Linguistics); Grammar, Comparative and general -- Phonology; Head-Driven Phrase Structure Grammar; Speech acts (Linguistics) -- Data processing; Body language -- Research; Speech and gesture; Typology (Linguistics)
Issue Date: 2012
Publisher: De Gruyter
Citation: Paggio, P. (2012). Towards an empirically-based grammar of speech and gestures. In P. Bergmann, J. Brenning, M. Pfeiffer, & E. Reber (Eds.), Prosody and Embodiment in Interactional Grammar (pp. 281-314). Berlin: De Gruyter.
Abstract: The purpose of this article is to discuss how non-verbal behavior, in particular head movements and facial expressions, can be represented in a multimodal grammar. The term grammar is used here in a rather broad sense to indicate not only syntax but all aspects of language structure, and we follow Head-Driven Phrase Structure Grammar (HPSG) (Pollard and Sag 1994) in conceiving of the grammar of a language as a system of constraints operating at various levels (phonology, morphology, syntax, semantics). We extend this notion by speaking of a multimodal grammar, which we define as the system of constraints that models the interaction of speech with non-verbal behavior in language. Still following HPSG, we use typed feature structures to model grammatical constraints; in our case, however, the constraints concern the shape and dynamics of gestures, their possible interpretations, and their relation to speech. In particular, we focus on three issues: (i) the relation between non-verbal behavior and speech; (ii) the expression of feedback through gestures; and (iii) the contribution of gestures to information structure. Our analysis is based on Danish multimodal data annotated according to the MUMIN gesture coding scheme. The scheme and its application to data in several languages, as well as the use of such annotated multimodal data for machine learning, are described in detail in Paggio and Diderichsen (2010). Here, we are interested in how the various gesture types in the annotated data can be represented in a grammar, and in how the empirical findings comply with theoretical assumptions about how gestures interact with speech.
URI: https://www.um.edu.mt/library/oar/handle/123456789/103907
ISBN: 9783110295108
Appears in Collections: Scholarly Works - InsLin
Files in This Item:
File: Towards_an_empirically-based_grammar_of_speech_and_gestures(2012).pdf (Restricted Access)
Size: 5.74 MB
Format: Adobe PDF
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.