A Framework of Adaptive Multimodal Input for Location-Based Augmented Reality Application

Location-based AR is one of the most familiar types of mobile augmented reality application currently in use. The user's position relative to the real world is located, and digital information is overlaid to provide details about the user's current location and surroundings. Four main types of mobile augmented reality interface have been studied, one of which is the multimodal interface. A multimodal interface processes two or more combined user input modes (such as speech, pen, touch, manual gesture, and gaze) in a coordinated manner with multimedia system output. Many frameworks have been proposed to guide designers in developing multimodal applications, including in augmented reality environments, but there has been little work reviewing frameworks for adaptive multimodal input in mobile augmented reality applications. This paper presents a conceptual framework to illustrate an adaptive multimodal interface for location-based augmented reality applications. We r...
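The coordinated processing of combined input modes described above is often realized through temporal fusion: events from different modalities are paired when they occur close together in time. The sketch below is a minimal, hypothetical illustration of that idea (the event fields, the `fuse` function, and the 1.5-second window are assumptions for illustration, not part of the framework in this paper), in the style of a "put-that-there" interaction where a spoken command is bound to a touch point.

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    modality: str     # e.g. "speech" or "touch" (illustrative modality labels)
    payload: object   # recognized command text, screen coordinates, etc.
    timestamp: float  # arrival time in seconds

def fuse(events, window=1.5):
    """Pair each speech command with the nearest touch event that
    occurs within `window` seconds -- a simple temporal-fusion rule."""
    speech = [e for e in events if e.modality == "speech"]
    touch = [e for e in events if e.modality == "touch"]
    fused = []
    for s in speech:
        nearby = [t for t in touch if abs(t.timestamp - s.timestamp) <= window]
        if nearby:
            best = min(nearby, key=lambda t: abs(t.timestamp - s.timestamp))
            fused.append((s.payload, best.payload))
    return fused

# Example: a spoken "show info" at t=0.2s and a screen tap at t=0.6s
# fall within the window, so they are fused into one multimodal command.
events = [
    InputEvent("speech", "show info", 0.2),
    InputEvent("touch", (120, 340), 0.6),
]
print(fuse(events))  # -> [('show info', (120, 340))]
```

Real multimodal frameworks typically add semantic constraints and confidence scores on top of such a temporal rule, but the windowed pairing above captures the core notion of "coordinated" input processing.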