Observer Annotation of Affective Display and Evaluation of Expressivity: Face vs. Face-and-body

UTSePress Research/Manakin Repository


dc.contributor.author Gunes, Hatice en_US
dc.contributor.author Piccardi, Massimo en_US
dc.contributor.editor R. Goecke, A. Robles-Kelly and T. Caelli en_US
dc.date.accessioned 2009-11-09T02:45:25Z
dc.date.available 2009-11-09T02:45:25Z
dc.date.issued 2006 en_US
dc.identifier 2006014570 en_US
dc.identifier.citation Gunes, Hatice and Piccardi, Massimo 2006, 'Observer Annotation of Affective Display and Evaluation of Expressivity: Face vs. Face-and-body', Australian Computer Society, Australia, pp. 35-42. en_US
dc.identifier.issn 1445-1336 en_US
dc.identifier.other E1UNSUBMIT en_US
dc.identifier.uri http://hdl.handle.net/10453/1809
dc.description.abstract A first step in developing and testing a robust affective multimodal system is to obtain or access data representing human multimodal expressive behaviour. Collected affect data must then be annotated to become usable by automated systems. Most existing studies of emotion or affect annotation are monomodal. In this paper, instead, we explore how independent human observers annotate affect displays from monomodal face data compared with bimodal face-and-body data. To this aim, we collected visual affect data by recording the face and the face-and-body simultaneously. We then conducted a survey, asking human observers to view and label the face and face-and-body recordings separately. The results show that, in general, viewing face and body simultaneously helps resolve ambiguity in annotating emotional behaviour. en_US
dc.publisher Australian Computer Society en_US
dc.relation.isbasedon http://portal.acm.org/citation.cfm?id=1273385.1273393&coll=ACM&dl=ACM&type=series&idx=SERIES10714&part=series&WantType=Proceedings&title=AICPS&CFID=86 en_US
dc.title Observer Annotation of Affective Display and Evaluation of Expressivity: Face vs. Face-and-body en_US
dc.parent Use of Vision in Human-Computer Interaction: Proceedings of the HCSNet Workshop on the use of vision in human-computer interaction en_US
dc.journal.volume 56 en_US
dc.journal.number en_US
dc.publocation Australia en_US
dc.identifier.startpage 35 en_US
dc.identifier.endpage 42 en_US
dc.cauo.name FEIT.School of Systems, Management and Leadership en_US
dc.conference Verified OK en_US
dc.conference.location Canberra, Australia en_US
dc.for 080109 en_US
dc.personcode 034144 en_US
dc.personcode 020073 en_US
dc.percentage 50 en_US
dc.classification.name Pattern Recognition and Data Mining en_US
dc.classification.type FOR-08 en_US
dc.custom the HCSNet Workshop on the use of vision in human-computer interaction en_US
dc.date.activity 20061101 en_US
dc.location.activity Canberra, Australia en_US
dc.description.keywords Affective face-and-body display, bimodal affect annotation, expressivity evaluation. en_US
dc.staffid 020073 en_US