dblp: ICMI 2017


19th ICMI 2017: Glasgow, UK

Edward Lank, Alessandro Vinciarelli, Eve E. Hoggan, Sriram Subramanian, Stephen A. Brewster:
Proceedings of the 19th ACM International Conference on Multimodal Interaction, ICMI 2017, Glasgow, United Kingdom, November 13 - 17, 2017. ACM 2017, ISBN 978-1-4503-5543-8
Invited Talks

Charles Spence:
Gastrophysics: using technology to enhance the experience of food and drink (keynote). 1

Danica Kragic:
Collaborative robots: from action and interaction to collaboration (keynote). 2

Lawrence W. Barsalou:
Situated conceptualization: a framework for multimodal interaction (keynote). 3

Phil Cohen:
Steps towards collaborative multimodal dialogue (sustained contribution award). 4
Oral Session 1: Children and Interaction

Julia Woodward, Alex Shaw, Aishat Aloba, Ayushi Jain, Jaime Ruiz, Lisa Anthony:
Tablets, tabletops, and smartphones: cross-platform comparisons of children's touchscreen interactions. 5-14

Arthur Crenn, Alexandre Meyer, Rizwan Ahmed Khan, Hubert Konik, Saïda Bouakaz:
Toward an efficient body expression recognition based on the synthesis of a neutral movement. 15-22

Ovidiu Serban, Mukesh Barange, Sahba Zojaji, Alexandre Pauchet, Adeline Richard, Émilie Chanoni:
Interactive narration with a child: impact of prosody and facial expressions. 23-31

Alex Shaw, Jaime Ruiz, Lisa Anthony:
Comparing human and machine recognition of children's touchscreen stroke gestures. 32-40
Oral Session 2: Understanding Human Behaviour

Volha Petukhova, Tobias Mayer, Andrei Malchanau, Harry Bunt:
Virtual debate coach design: assessing multimodal argumentation performance. 41-50

Biqiao Zhang, Georg Essl, Emily Mower Provost:
Predicting the distribution of emotion perception: capturing inter-rater variability. 51-59

Abdelwahab Bourai, Tadas Baltrusaitis, Louis-Philippe Morency:
Automatically predicting human knowledgeability through non-verbal cues. 60-67

Zakaria Aldeneh, Soheil Khorram, Dimitrios Dimitriadis, Emily Mower Provost:
Pooling acoustic and lexical features for the prediction of valence. 68-72
Oral Session 3: Touch and Gesture

Dario Pittera, Marianna Obrist, Ali Israr:
Hand-to-hand: an intermanual illusion of movement. 73-81

Feng Feng, Tony Stockman:
An investigation of dynamic crossmodal instantiation in TUIs. 82-90

Robert Tscharn, Marc Erich Latoschik, Diana Löffler, Jörn Hurtienne:
"Stop over there": natural gesture and speech interaction for non-critical spontaneous intervention in autonomous driving. 91-100

Ilhan Aslan, Elisabeth André:
Pre-touch proxemics: moving the design space of touch targets from still graphics towards proxemic behaviors. 101-109

Maadh Al Kalbani, Maite Frutos-Pascual, Ian Williams:
Freehand grasping in mixed reality: analysing variation during transition phase of interaction. 110-114

Euan Freeman, Gareth Griffiths, Stephen A. Brewster:
Rhythmic micro-gestures: discreet interaction on-the-go. 115-119
Oral Session 4: Sound and Interaction

Jamie Iona Ferguson, Stephen A. Brewster:
Evaluation of psychoacoustic sound parameters for sonification. 120-127

Augoustinos Tsiros, Grégory Leplâtre:
Utilising natural cross-modal mappings for visual control of feature-based sound synthesis. 128-136
Oral Session 5: Methodology

Felix Putze, Maik Schünemann, Tanja Schultz, Wolfgang Stuerzlinger:
Automatic classification of auto-correction errors in predictive text entry based on EEG and context information. 137-145

Joy O. Egede, Michel F. Valstar:
Cumulative attributes for pain intensity estimation. 146-153

Rémy Siegfried, Yu Yu, Jean-Marc Odobez:
Towards the use of social interaction conventions as prior for gaze model adaptation. 154-162

Minghai Chen, Sen Wang, Paul Pu Liang, Tadas Baltrusaitis, Amir Zadeh, Louis-Philippe Morency:
Multimodal sentiment analysis with word-level fusion and reinforcement learning. 163-171

Reza Asadi, Ha Trinh, Harriet J. Fell, Timothy W. Bickmore:
IntelliPrompter: speech-based dynamic note display interface for oral presentations. 172-180
Oral Session 6: Artificial Agents and Wearable Sensors

Pauline Trung, Manuel Giuliani, Michael Miksch, Gerald Stollnberger, Susanne Stadler, Nicole Mirnig, Manfred Tscheligi:
Head and shoulders: automatic error detection in human-robot interaction. 181-188

Stephanie Gross, Brigitte Krenn, Matthias Scheutz:
The reliability of non-verbal cues for situated reference resolution and their interplay with language: implications for human robot interaction. 189-196

Magalie Ochs, Nathan Libermann, Axel Boidin, Thierry Chaminade:
Do you speak to a human or a virtual agent? automatic analysis of user's social cues during mediated communication. 197-205

Marjolein C. Nanninga, Yanxia Zhang, Nale Lehmann-Willenbrock, Zoltán Szlávik, Hayley Hung:
Estimating verbal expressions of task and social cohesion in meetings by quantifying paralinguistic mimicry. 206-215

Terry Taewoong Um, Franz Michael Josef Pfister, Daniel Pichler, Satoshi Endo, Muriel Lang, Sandra Hirche, Urban Fietzek, Dana Kulic:
Data augmentation of wearable sensor data for Parkinson's disease monitoring using convolutional neural networks. 216-220
Poster Session 1

Pooja Rao S. B., Sowmya Rasipuram, Rahul Das, Dinesh Babu Jayagopi:
Automatic assessment of communication skill in non-conventional interview settings: a comparative study. 221-229

Radoslaw Niewiadomski, Maurizio Mancini, Stefano Piana, Paolo Alborno, Gualtiero Volpe, Antonio Camurri:
Low-intrusive recognition of expressive movement qualities. 230-237

Harrison South, Martin Taylor, Huseyin Dogan, Nan Jiang:
Digitising a medical clerking system with multimodal interaction support. 238-242

Benjamin Hatscher, Maria Luz, Lennart E. Nacke, Norbert Elkmann, Veit Müller, Christian Hansen:
GazeTap: towards hands-free interaction in the operating room. 243-251

Byungjoo Lee, Qiao Deng, Eve E. Hoggan, Antti Oulasvirta:
Boxer: a multimodal collision technique for virtual objects. 252-260

Helen F. Hastie, Xingkun Liu, Pedro Patrón:
Trust triggers for multimodal command and control interfaces. 261-268

Matthew Heinz, Sven Bertel, Florian Echtler:
TouchScope: a hybrid multitouch oscilloscope interface. 269-273

Shalini Bhatia, Munawar Hayat, Roland Goecke:
A multimodal system to characterise melancholia: cascaded bag of words approach. 274-280

Vikram Ramanarayanan, Chee Wee Leong, David Suendermann-Oeft, Keelan Evanini:
Crowdsourcing ratings of caller engagement in thin-slice videos of human-machine dialog: benefits and pitfalls. 281-287

Bruno Dumas, Jonathan Pirau, Denis Lalanne:
Modelling fusion of modalities in multimodal interactive systems with MMMM. 288-296

Casey Kennington, Ting Han, David Schlangen:
Temporal alignment using the incremental unit framework. 297-301

Mohamed Abouelenien, Verónica Pérez-Rosas, Rada Mihalcea, Mihai Burzo:
Multimodal gender detection. 302-311

Skanda Muralidhar, Marianne Schmid Mast, Daniel Gatica-Perez:
How may I help you? behavior and impressions in hospitality service encounters. 312-320

Naoto Terasawa, Hiroki Tanaka, Sakriani Sakti, Satoshi Nakamura:
Tracking liking state in brain activity while watching multiple movies. 321-325

Benjamin Stahl, Georgios N. Marentakis:
Does serial memory of locations benefit from spatially congruent audiovisual stimuli? investigating the effect of adding spatial sound to visuospatial sequences. 326-330

Naveen Madapana, Juan P. Wachs:
ZSGL: zero shot gestural learning. 331-335

Gabriel Murray:
Markov reward models for analyzing group interaction. 336-340

Béatrice Biancardi, Angelo Cafaro, Catherine Pelachaud:
Analyzing first impressions of warmth and competence from observable nonverbal cues in expert-novice interactions. 341-349

Angelo Cafaro, Johannes Wagner, Tobias Baur, Soumia Dermouche, Mercedes Torres, Catherine Pelachaud, Elisabeth André, Michel F. Valstar:
The NoXi database: multimodal recordings of mediated novice-expert interactions. 350-359

Carl Bishop, Augusto Esteves, Iain McGregor:
Head-mounted displays as opera glasses: using mixed-reality to deliver an egalitarian user experience during live events. 360-364
Poster Session 2

Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka:
Analyzing gaze behavior during turn-taking for estimating empathy skill level. 365-373

A. Seza Dogruöz, Natalia Ponomareva, Sertan Girgin, Reshu Jain, Christoph Oehler:
Text based user comments as a signal for automatic language identification of online videos. 374-378

Maneesh Bilalpur, Seyed Mostafa Kia, Manisha Chawla, Tat-Seng Chua, Ramanathan Subramanian:
Gender and emotion recognition with implicit user signals. 379-387

Tiago Ribeiro, Ana Paiva:
Animating the Adelino robot with ERIK: the expressive robotics inverse kinematics. 388-396

Fatma Meawad, Su-Yin Yang, Fong Ling Loy:
Automatic detection of pain from spontaneous facial expressions. 397-401

Abhinav Shukla, Shruti Shriya Gullapuram, Harish Katti, Karthik Yadati, Mohan S. Kankanhalli, Ramanathan Subramanian:
Evaluating content-centric vs. user-centric ad affect recognition. 402-410

Nam Le, Jean-Marc Odobez:
A domain adaptation approach to improve speaker turn embedding using face representation. 411-415

Miao Yu, Liyun Gong, Stefanos D. Kollias:
Computer vision based fall detection by a convolutional neural network. 416-420

Fumio Nihei, Yukiko I. Nakano, Yutaka Takase:
Predicting meeting extracts in group discussions using multimodal convolutional neural networks. 421-425

Catherine Neubauer, Mathieu Chollet, Sharon Mozgai, Mark Dennison, Peter Khooshabeh, Stefan Scherer:
The relationship between task-induced stress, vocal changes, and physiological state during a dyadic team task. 426-432

Laurens R. Krol, Sarah-Christin Freytag, Thorsten O. Zander:
Meyendtris: a hands-free, multimodal Tetris clone using eye tracking and passive BCI for intuitive neuroadaptive gaming. 433-437

Giuseppe Boccignone, Donatello Conte, Vittorio Cuculo, Raffaella Lanzarotti:
AMHUSE: a multimodal dataset for HUmour SEnsing. 438-445

Mohamed Khamis, Mariam Hassib, Emanuel von Zezschwitz, Andreas Bulling, Florian Alt:
GazeTouchPIN: protecting sensitive data on mobile devices using secure multimodal authentication. 446-450

Cigdem Beyan, Francesca Capozzi, Cristina Becchio, Vittorio Murino:
Multi-task learning of social psychology assessments and nonverbal features for automatic leadership identification. 451-455

Daniel McDuff, Paul Thomas, Mary Czerwinski, Nick Craswell:
Multimodal analysis of vocal collaborative search: a public corpus and results. 456-463

Atef Ben Youssef, Chloé Clavel, Slim Essid, Miriam Bilac, Marine Chamoux, Angelica Lim:
UE-HRI: a new dataset for the study of user engagement in spontaneous human-robot interactions. 464-472

Chris Porhet, Magalie Ochs, Jorane Saubesty, Grégoire de Montcheuil, Roxane Bertrand:
Mining a multimodal corpus of doctor's training for virtual patient's feedbacks. 473-478

Ashwaq Al-Hargan, Neil Cooke, Tareq Binjammaz:
Multimodal affect recognition in an interactive gaming environment using eye tracking and speech signals. 479-486
Demonstrations 1

Jennifer Müller, Uwe Oestermeier, Peter Gerjets:
Multimodal interaction in classrooms: implementation of tangibles in integrated music and math lessons. 487-488

Bok Deuk Song, Yeon Jun Choi, Jong Hyun Park:
Web-based interactive media authoring system with multimodal interaction. 489-490

Euan Freeman, Ross Anderson, Julie R. Williamson, Graham A. Wilson, Stephen A. Brewster:
Textured surfaces for ultrasound haptic displays. 491-492

Dan Bohus, Sean Andrist, Mihai Jalobeanu:
Rapid development of multimodal interactive systems: a demonstration of platform for situated intelligence. 493-494

Helen F. Hastie, Francisco Javier Chiyah Garcia, David A. Robb, Pedro Patrón, Atanas Laskov:
MIRIAM: a multimodal chat-based interface for autonomous systems. 495-496

Dong-Bach Vo, Mohammad Tayarani, Maki Rooksby, Rui Huan, Alessandro Vinciarelli, Helen Minnis, Stephen A. Brewster:
SAM: the school attachment monitor. 497-498

David G. Novick, Laura M. Rodriguez, Aaron Pacheco, Aaron Rodriguez, Laura Hinojos, Brad Cartwright, Marco Cardiel, Iván Gris Sepulveda, Olivia Rodriguez-Herrera, Enrique Ponce:
The Boston Massacre history experience. 499-500

Matthew Heinz, Sven Bertel, Florian Echtler:
Demonstrating TouchScope: a hybrid multitouch oscilloscope interface. 501

Maria Koutsombogera, Carl Vogel:
The MULTISIMO multimodal corpus of collaborative interactions. 502-503

Matthieu Poyade, Glyn Morris, Ian Taylor, Victor Portela:
Using mobile virtual reality to empower people with hidden disabilities to overcome their barriers. 504-505
Demonstrations 2

Mirjam Wester, Matthew P. Aylett, David A. Braude:
Bot or not: exploring the fine line between cyber and human identity. 506-507

Amol A. Deshmukh, Bart G. W. Craenen, Alessandro Vinciarelli, Mary Ellen Foster:
Modulating the non-verbal social signals of a humanoid robot. 508-509

Patrizia Di Campli San Vito, Stephen A. Brewster, Frank E. Pollick, Stuart White:
Thermal in-car interaction for navigation. 510-511

Daisuke Sasaki, Musashi Nakajima, Yoshihiro Kanno:
AQUBE: an interactive music reproduction system for aquariums. 512-513

Michal Joachimczak, Juan Liu, Hiroshi Ando:
Real-time mixed-reality telepresence via 3D reconstruction with HoloLens and commodity depth sensors. 514-515

Ruth Aylett, Frank Broz, Ayan Ghosh, Peter E. McKenna, Gnanathusharan Rajendran, Mary Ellen Foster, Giorgio Roffo, Alessandro Vinciarelli:
Evaluating robot facial expressions. 516-517

Gözel Shakeri, John H. Williamson, Stephen A. Brewster:
Bimodal feedback for in-car mid-air gesture interaction. 518-519

Kirby Cofino, Vikram Ramanarayanan, Patrick L. Lange, David Pautler, David Suendermann-Oeft, Keelan Evanini:
A modular, multimodal open-source virtual interviewer dialog agent. 520-521

Daniel M. Lofaro, Christopher Taylor, Ryan Tse, Donald Sofge:
Wearable interactive display for the local positioning system (LPS). 522-523
Grand Challenge

Abhinav Dhall, Roland Goecke, Shreya Ghosh, Jyoti Joshi, Jesse Hoey, Tom Gedeon:
From individual to group-level emotion recognition: EmotiW 5.0. 524-528

Dae Ha Kim, Min Kyu Lee, Dong-Yoon Choi, Byung Cheol Song:
Multi-modal emotion recognition using semi-supervised learning and multiple neural networks in the wild. 529-535

Stefano Pini, Olfa Ben Ahmed, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara, Benoit Huet:
Modeling multimodal cues in a deep learning-based framework for emotion recognition in the wild. 536-543

Alexandr G. Rassadin, Alexey S. Gruzdev, Andrey V. Savchenko:
Group-level emotion recognition using transfer learning from face identification. 544-548

Lianzhi Tan, Kaipeng Zhang, Kai Wang, Xiaoxing Zeng, Xiaojiang Peng, Yu Qiao:
Group emotion recognition with individual facial emotion CNNs and global image based CNNs. 549-552

Ping Hu, Dongqi Cai, Shandong Wang, Anbang Yao, Yurong Chen:
Learning supervised scoring ensemble for emotion recognition in the wild. 553-560

Asad Abbas, Stephan K. Chalup:
Group emotion recognition in the wild by combining deep neural networks for facial expression classification and scene-context analysis. 561-568

Valentin Vielzeuf, Stéphane Pateux, Frédéric Jurie:
Temporal multimodal fusion for video emotion classification in the wild. 569-576

Xi Ouyang, Shigenori Kawaai, Ester Gue Hua Goh, Shengmei Shen, Wan Ding, Huaiping Ming, Dong-Yan Huang:
Audio-visual emotion recognition using deep transfer learning and multiple temporal models. 577-582

B. Balaji, Venkata Ramana Murthy Oruganti:
Multi-level feature fusion for group-level emotion recognition. 583-586

Qinglan Wei, Yijia Zhao, Qihua Xu, Liandong Li, Jun He, Lejun Yu, Bo Sun:
A new deep-learning framework for group emotion recognition. 587-592

Luca Surace, Massimiliano Patacchiola, Elena Battini Sönmez, William Spataro, Angelo Cangelosi:
Emotion recognition in the wild using deep neural networks and Bayesian classifiers. 593-597

Shuai Wang, Wenxuan Wang, Jinming Zhao, Shizhe Chen, Qin Jin, Shilei Zhang, Yong Qin:
Emotion recognition with multimodal features and temporal models. 598-602

Xin Guo, Luisa F. Polanía, Kenneth E. Barner:
Group-level emotion recognition using deep models on image scene, faces, and skeletons. 603-608
Doctoral Consortium

Revathy Nayar:
Towards designing speech technology based assistive interfaces for children's speech therapy. 609-613

Katie Winkle:
Social robots for motivation and engagement in therapy. 614-617

Nikita Mae B. Tuanquin:
Immersive virtual eating and conditioned food responses. 618-622

Tom Gayler:
Towards edible interfaces: designing interactions with food. 623-627

Béatrice Biancardi:
Towards a computational model for first impressions generation. 628-632

Esma Mansouri-Benssassi:
A decentralised multimodal integration of social signals: a bio-inspired approach. 633-637

Alex Shaw:
Human-centered recognition of children's touchscreen gestures. 638-642

Soheil Rayatdoost:
Cross-modality interaction between EEG signals and facial expression. 643-646

Valentin Barrière:
Hybrid models for opinion analysis in speech interactions. 647-651

Rui Huan:
Evaluating engagement in digital narratives from facial data. 652-655

Maedeh Aghaei:
Social signal extraction from egocentric photo-streams. 656-659

Dimosthenis Kontogiorgos:
Multimodal language grounding for improved human-robot collaboration: exploring spatial semantic representations in the shared space of attention. 660-664
Workshop Summaries

Thierry Chaminade, Fabrice Lefèvre, Noël Nguyen, Magalie Ochs:
ISIAA 2017: 1st international workshop on investigating social interactions with artificial agents (workshop summary). 665-666

Keelan Evanini, Maryam Najafian, Saeid Safavi, Kay Berkling:
WOCCI 2017: 6th international workshop on child computer interaction (workshop summary). 667-669

Gualtiero Volpe, Monica Gori, Nadia Bianchi-Berthouze, Gabriel Baud-Bovy, Paolo Alborno, Erica Volta:
MIE 2017: 1st international workshop on multimodal interaction for education (workshop summary). 670-671

Julie R. Williamson, Tom Flint, Chris Speed:
Playlab: telling stories with technology (workshop summary). 672-673

Carlos Velasco, Anton Nijholt, Marianna Obrist, Katsunori Okajima, Rick Schifferstein, Charles Spence:
MHFI 2017: 2nd international workshop on multisensorial approaches to human-food interaction (workshop summary). 674-676
