Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/103907
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Paggio, Patrizia
dc.date.accessioned: 2022-11-23T06:54:08Z
dc.date.available: 2022-11-23T06:54:08Z
dc.date.issued: 2012
dc.identifier.citation: Paggio, P. (2012). Towards an empirically-based grammar of speech and gestures. In P. Bergmann, J. Brenning, M. Pfeiffer, & E. Reber (Eds.), Prosody and Embodiment in Interactional Grammar (pp. 281-314). Berlin: De Gruyter. (en_GB)
dc.identifier.isbn: 9783110295108
dc.identifier.uri: https://www.um.edu.mt/library/oar/handle/123456789/103907
dc.description.abstract: The purpose of this article is to discuss how non-verbal behavior, in particular head movement and facial expressions, can be represented in a multimodal grammar. The term grammar is used here in a rather broad sense to indicate not only syntax but all aspects of language structure, and we follow Head-Driven Phrase Structure Grammar (HPSG) (Pollard and Sag 1994) in conceiving of the grammar of a language as a system of constraints operating at various levels (phonology, morphology, syntax, semantics). We extend this notion by talking about a multimodal grammar, which we define as the system of constraints that models the interaction of speech with non-verbal behavior in language. Still following HPSG, we use typed feature structures to model grammatical constraints; in our case, however, the constraints relate to the shape and dynamics of gestures, their possible interpretation, and their relation to speech. In particular, we focus on three issues: i. the relation between non-verbal behavior and speech; ii. the expression of feedback through gestures; and iii. the contribution of gestures to information structure. Our analysis is based on Danish multimodal data annotated according to the MUMIN gesture coding scheme. The scheme and its application to data in several languages, as well as the use of such annotated multimodal data for machine learning, are described in detail in Paggio and Diderichsen (2010). Here, we are interested in how the various gesture types in the annotated data can be represented in a grammar, and how the empirical findings comply with theoretical assumptions about how gestures interact with speech. (en_GB)
dc.language.iso: en (en_GB)
dc.publisher: De Gruyter (en_GB)
dc.rights: info:eu-repo/semantics/restrictedAccess (en_GB)
dc.subject: Prosodic analysis (Linguistics) (en_GB)
dc.subject: Grammar, Comparative and general -- Phonology (en_GB)
dc.subject: Head-Driven Phrase Structure Grammar (en_GB)
dc.subject: Speech acts (Linguistics) -- Data processing (en_GB)
dc.subject: Body language -- Research (en_GB)
dc.subject: Speech and gesture (en_GB)
dc.subject: Typology (Linguistics) (en_GB)
dc.title: Towards an empirically-based grammar of speech and gestures (en_GB)
dc.type: bookPart (en_GB)
dc.rights.holder: The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation, provided that the author is properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. (en_GB)
dc.description.reviewed: peer-reviewed (en_GB)
dc.publication.title: Prosody and Embodiment in Interactional Grammar (en_GB)
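The abstract above describes modelling gesture constraints with typed feature structures in an HPSG-style multimodal grammar. The following is a minimal, hypothetical Python sketch of how such a structure might be encoded; the feature names (shape, dynamics, function, anchor) and the class layout are illustrative assumptions only and do not reproduce the chapter's actual feature geometry or the MUMIN attribute inventory.

    # Illustrative sketch: a typed feature structure for a multimodal sign,
    # i.e. a spoken utterance paired with co-occurring gestures.
    # All attribute names and value sets below are hypothetical.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Gesture:
        shape: str                    # e.g. "HeadNod", "HeadShake", "Smile"
        dynamics: str                 # e.g. "Repeated", "Single"
        function: str                 # e.g. "FeedbackGive", "Emphasis"
        anchor: Optional[str] = None  # word the gesture is temporally aligned with

    @dataclass
    class MultimodalSign:
        phon: List[str]                                    # the spoken words
        gestures: List[Gesture] = field(default_factory=list)

        def feedback_gestures(self) -> List[Gesture]:
            """Return the gestures annotated with a feedback-related function."""
            return [g for g in self.gestures if g.function.startswith("Feedback")]

    # Usage: a Danish "ja" accompanied by a repeated head nod giving feedback.
    sign = MultimodalSign(
        phon=["ja"],
        gestures=[Gesture(shape="HeadNod", dynamics="Repeated",
                          function="FeedbackGive", anchor="ja")],
    )
    print(sign.feedback_gestures())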
Appears in Collections: Scholarly Works - InsLin

Files in This Item:
File: Towards_an_empirically-based_grammar_of_speech_and_gestures(2012).pdf (Restricted Access)
Size: 5.74 MB
Format: Adobe PDF


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.