Oya Aran | Idiap Research Institute
Papers by Oya Aran
International Journal of Pattern Recognition and Artificial Intelligence, 2009
IEEE Multimedia, 2011
The present paper presents a multimodal framework for the intercommunication between visually and hearing impaired people, based on the concept of situated modality replacement. Utilizing sign language, speech and haptics as main communication ...
Journal on Multimodal User Interfaces, 2008
Journal on Multimodal User Interfaces, 2011
The aim of this paper is to help the communication of two people, one hearing impaired and one visually impaired, by converting speech to fingerspelling and fingerspelling to speech. Fingerspelling is a subset of sign language that uses finger signs to spell letters of the spoken or written language. We aim to convert fingerspelled words to speech and vice versa. Different spoken and sign languages, such as English, Russian, Turkish and Czech, are considered.
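The speech-to-fingerspelling direction can be pictured as mapping recognised text onto a per-letter handshape lookup. The abstract gives no implementation details, so the following is only a minimal Python sketch; the FINGERSPELLING table and its handshape descriptions are hypothetical placeholders, not the authors' system.

```python
# Hypothetical letter-to-handshape lookup. A real system would run ASR on
# the speech first, then spell each recognised word one letter at a time.
FINGERSPELLING = {
    "a": "closed fist, thumb to the side",
    "b": "flat hand, fingers together, thumb across palm",
    "c": "curved hand forming a C",
    # ... one entry per letter of the target manual alphabet
}

def text_to_fingerspelling(word: str) -> list[str]:
    """Spell a recognised word as a sequence of handshape descriptions."""
    return [FINGERSPELLING.get(ch, f"<no sign for {ch!r}>") for ch in word.lower()]

print(text_to_fingerspelling("cab"))
```

The reverse direction (fingerspelling to speech) would invert the lookup: classify each observed handshape to a letter, concatenate the letters into words, and feed the result to a speech synthesiser.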
Pattern Recognition, 2010
As part of our work on hand gesture interpretation, we present our results on hand shape recognition. Our method is based on attribute extraction and multiple partial classifications. The novelty lies in the way the fusion of the partial classification results is performed. This fusion is (1) more efficient in terms of information theory, leading to more accurate results, and (2) general enough to allow heterogeneous sources of information to be taken into account: each classifier output is transformed into a belief function, and all the corresponding functions are fused together with other external evidential sources of information.
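The abstract describes transforming each classifier output into a belief function and fusing the results. The standard combination operator in belief-function (Dempster-Shafer) theory is Dempster's rule; the paper's exact fusion scheme is not specified in the abstract, so the Python sketch below only illustrates the generic rule over a small, made-up frame of hand shapes.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b  # focal elements whose intersection is non-empty reinforce each other
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to contradictory hypotheses
    if conflict >= 1.0:
        raise ValueError("Sources are in total conflict; combination undefined")
    # Normalise by the non-conflicting mass
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Two classifiers expressing beliefs over hand shapes {fist, open, point};
# mass on a larger subset encodes that classifier's uncertainty.
m1 = {frozenset({"fist"}): 0.6, frozenset({"fist", "open"}): 0.3,
      frozenset({"fist", "open", "point"}): 0.1}
m2 = {frozenset({"fist"}): 0.5, frozenset({"point"}): 0.2,
      frozenset({"fist", "open", "point"}): 0.3}
print(dempster_combine(m1, m2))  # mass concentrates on {"fist"}
```

Because the rule is associative, fusing several partial classifiers and external evidential sources amounts to folding them pairwise through the same operator.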