A Lightweight Application for Reading Digital Measurement and Inputting Condition Assessment in Manufacturing Industry

Opportunities to Use Speech Recognition for Bridge Inspection

Construction Congress VI, 2000

Inspections of bridges mandated by the National Bridge Inventory (NBI) are required every two years. During an NBI inspection, inspectors go out to the field and collect inspection information based on the requirements of the state that owns the bridge. Since the advent of mobile computing devices such as notebook computers, personal digital assistants (PDAs), and pen computers, many organizations have tried to deliver computing support to bridge inspectors in the field using these devices in order to improve productivity. However, some bridge inspectors still use paper-based forms and clipboards during their inspection activity. From our experience with bridge inspectors from the Pennsylvania Department of Transportation, District 11, the user interface modality as well as the mobility of the hardware have major effects on the usability of such systems. It has long been known that the success or failure of a particular application depends significantly on the way the user interacts with the system. We have studied the use of a wearable computer to provide an unobtrusive hardware platform supporting bridge inspection. This paper specifically discusses the potential use of speech recognition for the bridge inspection application in order to improve the usability of the user interface. The background of speech recognition technology, along with the results of our preliminary study, is discussed in this paper.

Forey: An Android Application for the Visually Impaired

In their day-to-day lives, visually impaired people face many problems. They are largely dependent on other people for their work, and they cannot use the internet and many of the facilities it offers. With the growth of wireless communication, the need for voice recognition technology has increased dramatically. Applications built on voice interfaces and spoken dialogue management allow users to focus on their current task without extra effort. The main goal is to enable visually impaired people to carry out their daily work independently. This application allows them to connect with each other using audio stories and to take a step toward enjoying the unique benefits of the Internet. It is also difficult for them to make financial transactions without outside help: they cannot distinguish banknotes, which are similar in texture and size. This app helps them recognize different currencies. The visually impaired user provides an image of a note on their smartphone, and the application recognizes the note and announces its value by voice output. This feature can be implemented using machine learning techniques, which also makes it easy to extend currency detection to new currencies. The application additionally helps users recognize objects using a QR scanner.
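The abstract mentions machine learning for note recognition without naming a method. As a minimal illustration only, not the app's actual model, banknote recognition can be sketched as nearest-centroid matching over color-histogram features; every name and threshold below is hypothetical:

```python
# Illustrative sketch only: the app's actual ML model is not specified
# in the abstract. All class and function names here are hypothetical.

def color_histogram(image, bins=8):
    """Normalized per-channel histogram of an image given as a list of
    (r, g, b) pixel tuples."""
    counts = [[0] * bins for _ in range(3)]
    for pixel in image:
        for ch, value in enumerate(pixel):
            counts[ch][min(value * bins // 256, bins - 1)] += 1
    n = len(image)
    return [c / n for channel in counts for c in channel]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class NoteClassifier:
    """Nearest-centroid classifier over color-histogram features."""

    def __init__(self):
        self.centroids = {}

    def fit(self, images, labels):
        # Average the feature vectors of all training images per label.
        grouped = {}
        for image, label in zip(images, labels):
            grouped.setdefault(label, []).append(color_histogram(image))
        for label, feats in grouped.items():
            self.centroids[label] = [sum(col) / len(feats) for col in zip(*feats)]

    def predict(self, image):
        # Return the label whose centroid is closest to the image's features.
        feat = color_histogram(image)
        return min(self.centroids, key=lambda l: distance(self.centroids[l], feat))
```

A real deployment would replace the histogram features with a trained image model, but the fit/predict shape of the classifier stays the same.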

WeldVUI: Establishing Speech-Based Interfaces in Industrial Applications

Human-Computer Interaction – INTERACT 2019, 2019

Voice User Interfaces (VUIs) and speech-based applications have recently gained increasing popularity. Over the past years they have been included in a wide range of mass-market devices (smartphones or technology installed in common car cockpits) and are thus available for many everyday interaction scenarios (e.g., making phone calls or switching the lights on and off). This popularity has also led to a number of guidelines for VUI design, software libraries, and speech recognition devices becoming available to interface designers and developers. Although generally helpful, these resources are often broad and do not fully satisfy the specific requirements of certain industrial applications. First, grammar and vocabulary in such settings usually differ drastically from everyday scenarios. Second, common software libraries and devices are often unable to cope with the conditions in industrial environments (e.g., high levels of noise). This paper describes the iterative, user-centered design process for VUIs and functional speech-based interaction prototypes in the domain of industrial welding, including a two-stage Wizard of Oz procedure, rapid prototyping, speech recognition improvement, and thorough user involvement. Our experiences throughout this process generalize to other industrial applications and so-called "niche applications" where grammar and vocabulary usually have to be established from scratch. They are intended to guide other researchers setting up a similar process for designing and prototyping domain-specific VUIs.
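A domain VUI of this kind ultimately maps recognized utterances onto a small, fixed command grammar. The sketch below shows the shape of such a grammar layer with an invented vocabulary; the paper's actual welding grammar was established empirically and is not reproduced here:

```python
import re

# Hypothetical command grammar for a welding VUI. The command names,
# phrasings, and parameters are illustrative assumptions, not WeldVUI's.
GRAMMAR = {
    r"^set current to (\d+)(?: amps)?$":
        lambda m: ("SET_CURRENT", int(m.group(1))),
    r"^set voltage to (\d+(?:\.\d+)?)(?: volts)?$":
        lambda m: ("SET_VOLTAGE", float(m.group(1))),
    r"^(start|stop) welding$":
        lambda m: (m.group(1).upper() + "_WELD", None),
}

def parse_command(utterance):
    """Map a recognized utterance to a (command, argument) pair,
    or None if the utterance is outside the grammar."""
    text = utterance.strip().lower()
    for pattern, action in GRAMMAR.items():
        m = re.match(pattern, text)
        if m:
            return action(m)
    return None
```

Restricting recognition to such a closed grammar is one common way to keep accuracy acceptable in noisy industrial settings: out-of-grammar utterances are rejected rather than misrecognized.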

IJERT-Vision based Calculator for Speech and Hearing Impaired using Hand Gesture Recognition

International Journal of Engineering Research and Technology (IJERT), 2014

https://www.ijert.org/vision-based-calculator-for-speech-and-hearing-impaired-using-hand-gesture-recognition
https://www.ijert.org/research/vision-based-calculator-for-speech-and-hearing-impaired-using-hand-gesture-recognition-IJERTV3IS060447.pdf

Even after more than two decades of development of input devices such as data gloves and infrared cameras, many people still find interacting with computers an uncomfortable experience. Efforts should be made to adapt computers to our natural means of communication: speech and body language. This paper proposes a fast, real-time command system based on hand gesture recognition, using low-cost equipment such as a simple personal computer and a USB webcam, so that any user could make use of it in industry or at home. The paper describes a new methodology for vision-based, fast, real-time hand gesture recognition that can be used in many HCI applications. The proposed algorithm first detects and segments the hand region. Then, using our novel approach, it locates the fingers and classifies the gesture. The algorithm is invariant to hand position, orientation, and distance from the webcam. As an application of the proposed algorithm, we have developed a gesture-based mathematical tool (a calculator).
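The pipeline named in the abstract (segment the hand region, locate the fingers, classify the gesture) can be caricatured in a few lines. This sketch with hypothetical thresholds only illustrates the shape of such a pipeline, not the paper's actual algorithm:

```python
# Toy three-stage pipeline mirroring segment -> locate -> classify.
# Thresholds and the gesture mapping are illustrative assumptions.

def segment_hand(gray, threshold=128):
    """Binary hand mask via global thresholding of a grayscale image
    (a stand-in for the paper's more robust segmentation step)."""
    return [[pixel > threshold for pixel in row] for row in gray]

def count_fingers(mask, row=0):
    """Estimate raised fingers by counting background-to-hand
    transitions along a horizontal scan line near the top of the mask."""
    line = mask[row]
    return sum(1 for prev, cur in zip([False] + line[:-1], line)
               if cur and not prev)

def classify_gesture(n_fingers):
    """Map a finger count to a calculator token (hypothetical mapping)."""
    return {1: "1", 2: "2", 3: "3", 4: "4", 5: "5"}.get(n_fingers, "unknown")
```

A production system would use skin-color segmentation and contour analysis rather than a fixed threshold and scan line, but the stage boundaries are the same.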

Seamless Integration of Handwriting Recognition into Pen-Enabled Displays for Fast User Interaction

2012 10th IAPR International Workshop on Document Analysis Systems, 2012

This paper proposes a framework for the integration of handwriting recognition into natural user interfaces. As more and more pen-enabled touch displays become available, we make use of the distinction between touch actions and pen actions. Furthermore, we apply a recently introduced mode detection approach to distinguish between handwritten strokes and graphics drawn with the pen. These ideas are implemented in the Touch & Write SDK, which can be used for various applications. In order to evaluate the effectiveness of our approach, we conducted experiments for an annotation scenario: we asked several users to mark and label objects in videos. We measured the labeling time when using our novel user interaction system and compared it to the time needed with common labeling tools. Furthermore, we compare our handwritten input paradigm to other existing systems. It turns out that annotation is performed much faster with our method, and the user experience is also much better.

Design and evaluation of handwriting input interfaces for small-size mobile devices

This paper deals with the design of handwriting input methods for small-size mobile devices. A first input method using the isolated cursive handwritten character recognizer RESIFCar has been embedded into smartphones sold in Europe. This industrial feedback has shown the importance of interface quality even when the associated recognizer reaches good recognition rates. Consequently, we focus our research on the design of handwriting input interfaces for small-size devices. The originality of our work comes from integrating the user into the interface design process through an iterative implementation-evaluation cycle. Interface quality is evaluated through experiments based on a cognitive psychology framework. We present here the first iteration of the design cycle of DIGIME, the DIGital Ink Micro Editor associated with RESIFCar. We start with a quick overview of existing handwriting input methods. Then we present our handwritten character recognizer RESIFCar and establish a set of design principles for handwriting input interfaces. After detailing the main features of the first DIGIME prototype based on these principles, we describe its evaluation. This study examines the effect of the persistence of the writer's ink on the screen and the effect of visual feedback contiguity. The results have highlighted some problems in the implementation choices, so we propose a second version of DIGIME to improve the quality of the interface, and we give some hints for the future evaluation of this new prototype.

Clearspeech: A Display Reader for the Visually Handicapped

IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2000

Many domestic appliances and much office equipment are controlled using a keypad and a small digital display. Programming such devices is problematic for the blind and visually handicapped. In this paper, we describe a device that may be used to read the displays on these appliances. The device is designed to accept a description of the display being read, which specifies the types and locations of its elements. Images are captured using a handheld webcam and processed to remove the distortions due to camera orientation. The elements of the screen are then interpreted and a suitable audio output is generated. In suitably illuminated scenes, the display data is interpreted correctly in approximately 90% of the cases investigated.
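For the seven-segment digits that dominate such appliance displays, the final interpretation step reduces to mapping segment on/off patterns to characters. A minimal sketch of that step (the paper's full pipeline also handles camera distortion and other element types, which are omitted here):

```python
# Segments are ordered (a, b, c, d, e, f, g): top, upper-right,
# lower-right, bottom, lower-left, upper-left, middle. This is the
# standard seven-segment encoding, not a detail taken from the paper.
SEGMENT_DIGITS = {
    (1, 1, 1, 1, 1, 1, 0): "0",
    (0, 1, 1, 0, 0, 0, 0): "1",
    (1, 1, 0, 1, 1, 0, 1): "2",
    (1, 1, 1, 1, 0, 0, 1): "3",
    (0, 1, 1, 0, 0, 1, 1): "4",
    (1, 0, 1, 1, 0, 1, 1): "5",
    (1, 0, 1, 1, 1, 1, 1): "6",
    (1, 1, 1, 0, 0, 0, 0): "7",
    (1, 1, 1, 1, 1, 1, 1): "8",
    (1, 1, 1, 1, 0, 1, 1): "9",
}

def decode_display(segment_states):
    """Translate per-digit segment on/off tuples into a string,
    marking unrecognized patterns with '?'."""
    return "".join(SEGMENT_DIGITS.get(tuple(s), "?") for s in segment_states)
```

The decoded string would then be handed to a text-to-speech stage to produce the audio output the paper describes.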

Evaluation of three input mechanisms for wearable computers

1997

This paper reports on an experiment investigating the functionality and usability of novel input devices on a wearable computer for text entry tasks. Over a three-week period, twelve subjects used three different input devices to create and save short textual messages. The virtual keyboard, forearm keyboard, and Kordic keypad input devices were assessed as to their efficiency and usability for simple text entry tasks. Results collected included the textual data created by the subjects, the duration of activities, the survey data, and observations made by supervisors. The results indicated that the forearm keyboard is the best performer for accurate and efficient text entry, while the other devices may benefit from more work on designing specialist GUIs for the wearable computer.