Oya Aran | Idiap Research Institute

Papers by Oya Aran

Identifying emergent leadership in small groups using nonverbal communicative cues

A Multimodal Corpus for Studying Dominance in Small Group Conversations

An Incremental Neural Network Construction Algorithm for Training Multilayer Perceptrons

An Incremental Framework Based on Cross-Validation for Estimating the Architecture of a Multilayer Perceptron

International Journal of Pattern Recognition and Artificial Intelligence, 2009

Combined Gesture-Speech Analysis and Speech Driven Gesture Synthesis

Using Modality Replacement to Facilitate Communication between Visually and Hearing-Impaired People

IEEE Multimedia, 2011

Abstract: This paper presents a multimodal framework for intercommunication between visually impaired and hearing-impaired people, based on the concept of situated modality replacement, utilizing sign language, speech, and haptics as the main communication ...

Sign-language-enabled information kiosk

A Multimodal Framework for the Communication of the Disabled

Combined Gesture-Speech Analysis and Synthesis

Real Time Gestural Interface for Generic Applications

Modeling Hesitation and Conflict: A Belief-Based Approach for Multi-class Problems

Speech and sliding text aided sign retrieval from hearing impaired sign news videos

Journal on Multimodal User Interfaces, 2008

Automatic fingersign-to-speech translation system

Journal on Multimodal User Interfaces, 2011

The aim of this paper is to support communication between two people, one hearing impaired and one visually impaired, by converting speech to fingerspelling and fingerspelling to speech. Fingerspelling is a subset of sign language that uses finger signs to spell the letters of the spoken or written language. We aim to convert fingerspelled words to speech and vice versa. Different spoken and sign languages, such as English, Russian, Turkish, and Czech, are considered.

A Multimodal 3D Healthcare Communication System

Türk İşaret Dili Yüz İfadesi ve Baş Hareketi Veritabanı (Turkish Sign Language Facial Expression and Head Movement Database)

A multi-class classification strategy for Fisher scores: Application to signer independent sign language recognition

Pattern Recognition, 2010

SignTutor: An Interactive System for Sign Language Tutoring

A Dempster-Shafer Theory Based Combination of Classifiers for Hand Gesture Recognition

As part of our work on hand gesture interpretation, we present our results on hand shape recognition. Our method is based on attribute extraction and multiple partial classifications. The novelty lies in the way the fusion of the partial classification results is performed. This fusion is (1) more efficient in terms of information theory and leads to more accurate results, and (2) general enough to allow heterogeneous sources of information to be taken into account: each classifier output is transformed into a belief function, and all the resulting functions are fused together with other external evidential sources of information.
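The abstract above describes turning each classifier's output into a belief (mass) function and fusing the results. As a rough illustration of that general idea, and not the paper's actual implementation, the Python sketch below applies Dempster's rule of combination to two hypothetical mass functions over a toy set of hand-shape classes; the class names, mass values, and function names are invented for the example.

```python
def combine_dempster(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset-of-hypotheses -> mass, each summing to 1.
    Returns the normalized combined mass function, or None if the two
    sources are in total conflict (all joint mass falls on the empty set).
    """
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to the empty set
    if conflict >= 1.0:
        return None  # Dempster's rule is undefined under total conflict
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}


# Toy example (hypothetical values): two classifiers over three hand shapes.
frame = frozenset({"fist", "open", "point"})
# Classifier 1 favors "fist" but keeps some mass on full ignorance (the frame).
m_clf1 = {frozenset({"fist"}): 0.6, frame: 0.4}
# Classifier 2 hesitates between "fist" and "open".
m_clf2 = {frozenset({"fist", "open"}): 0.7, frame: 0.3}

fused = combine_dempster(m_clf1, m_clf2)
for subset, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(sorted(subset), round(mass, 3))
```

In this toy run the fused mass concentrates on {fist} (0.60), with the remainder spread over {fist, open} (0.28) and the full frame (0.12); keeping mass on composite sets is what lets this kind of combination express hesitation rather than forcing a single-class decision.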

Multi-modal dialogue system with sign language capabilities

A particle filter based algorithm for robust tracking of hands and face under occlusion
