Dhruv Jain - Academia.edu
Papers by Dhruv Jain
Proceedings of the 2020 International Symposium on Wearable Computers, 2020
Sound can provide important information about the environment, human activity, and situational cues but can be inaccessible to deaf or hard of hearing (DHH) people. In this paper, we explore a wearable tactile technology to provide sound feedback to DHH people. After implementing a wrist-worn tactile prototype, we performed a four-week field study with 12 DHH people. Participants reported that our device increased awareness of sounds by conveying actionable cues (e.g., appliance alerts) and 'experiential' sound information (e.g., bird chirp patterns). CCS CONCEPTS • Human-centered computing ~ Accessibility technologies
The 22nd International ACM SIGACCESS Conference on Computers and Accessibility, 2020
Figure 1: Illustrations of HoloSound showing sound identity, source location, and speech transcription. The three most recent sounds are shown at the bottom left of the display, the locations of at most four simultaneous sound sources are shown as circular arcs in the center, and the speech transcription is either shown as subtitles or can be positioned close to the speakers in the 3D space (not shown). See supplementary video.
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2021
Automated sound recognition tools can be a useful complement to d/Deaf and hard of hearing (DHH) people's typical communication and environmental awareness strategies. Pre-trained sound recognition models, however, may not meet the diverse needs of individual DHH users. While approaches from human-centered machine learning can enable non-expert users to build their own automated systems, end-user ML solutions that augment human sensory abilities present a unique challenge for users who have sensory disabilities: how can a DHH user, who has difficulty hearing a sound themselves, effectively record samples to train an ML system to recognize that sound? To better understand how DHH users can drive personalization of their own assistive sound recognition tools, we conducted a three-part study with 14 DHH participants: (1) an initial interview and demo of a personalizable sound recognizer, (2) a week-long field study of in situ recording, and (3) a follow-up interview and ideation session.
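The personalization idea above — a user records a handful of examples of a sound, and the system learns to recognize it — can be illustrated with a minimal sketch. The embedding function, the nearest-centroid classifier, and the synthetic "recordings" below are all illustrative assumptions, not the paper's implementation (which a real system would replace with a pretrained audio embedding model):

```python
import numpy as np

# Hypothetical embedding: mean log-magnitude spectrum over short frames,
# a stand-in for a pretrained audio embedding model.
def embed(clip, frame=256):
    n = len(clip) // frame * frame
    frames = clip[:n].reshape(-1, frame)
    spec = np.abs(np.fft.rfft(frames, axis=1))
    return np.log1p(spec).mean(axis=0)

# Nearest-centroid "personal" classifier: one centroid per user-labeled sound.
def train(samples):  # samples: {label: [clip, ...]}
    return {lab: np.mean([embed(c) for c in clips], axis=0)
            for lab, clips in samples.items()}

def predict(model, clip):
    e = embed(clip)
    return min(model, key=lambda lab: np.linalg.norm(model[lab] - e))

rng = np.random.default_rng(0)
t = np.arange(4096) / 16000.0
# Synthetic stand-ins for a user's few in situ recordings.
hum  = lambda: np.sin(2 * np.pi * 120 * t)  + 0.1 * rng.standard_normal(t.size)
beep = lambda: np.sin(2 * np.pi * 2000 * t) + 0.1 * rng.standard_normal(t.size)

model = train({"appliance hum": [hum() for _ in range(3)],
               "microwave beep": [beep() for _ in range(3)]})
print(predict(model, beep()))  # → microwave beep
```

The few-shot framing matters here: a nearest-centroid model needs only a handful of clips per class, which matches the paper's question of how DHH users can record usable training samples themselves.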
Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 2020
Smartwatches are the most preferred portable device for sound awareness: they are seen as useful, socially acceptable, and glanceable, and are advantageous for both haptic and visual feedback. Prior work is limited to a short, lab-based study of six participants (Mielke & Brück, 2015).
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019
To investigate preferences for mobile and wearable sound awareness systems, we conducted an online survey with 201 DHH participants. The survey explores how demographic factors affect perceptions of sound awareness technologies, gauges interest in specific sounds and sound characteristics, solicits reactions to three design scenarios (smartphone, smartwatch, head-mounted display) and two output modalities (visual, haptic), and probes issues related to social context of use. While most participants were highly interested in being aware of sounds, this interest was modulated by communication preference, that is, for sign or oral communication or both. Almost all participants wanted both visual and haptic feedback, and 75% preferred to have that feedback on separate devices (e.g., haptic on smartwatch, visual on head-mounted display). Other findings related to sound type, full captions vs. keywords, sound filtering, notification styles, and social context provide direct guidance for the design of future mobile and wearable sound awareness systems.
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 2021
Accessibility research has grown substantially in the past few decades, yet there has been no literature review of the field. To understand current and historical trends, we created and analyzed a dataset of accessibility papers appearing at CHI and ASSETS since ASSETS' founding in 1994. We qualitatively coded areas of focus and methodological decisions for the past 10 years (2010-2019, N=506 papers), and analyzed paper counts and keywords over the full 26 years (N=836 papers). Our findings highlight areas that have received disproportionate attention and those that are underserved; for example, over 43% of papers in the past 10 years are on accessibility for blind and low vision people. We also capture common study characteristics, such as the roles of disabled and nondisabled participants as well as sample sizes (e.g., a median of 13 for participant groups with disabilities and older adults). We close by critically reflecting on gaps in the literature and offering guidance for future work in the field. CCS CONCEPTS • Human-centered computing ~ Accessibility ~ Accessibility theory, concepts and paradigms • Social and professional topics ~ User characteristics ~ People with disabilities
Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, 2018
Prior work has explored communication challenges faced by people who are deaf and hard of hearing (DHH) and the potential role of new captioning and support technologies to address these challenges; however, the focus has been on stationary contexts such as group meetings and lectures. In this paper, we present two studies examining the needs of DHH people in moving contexts (e.g., walking) and the potential for mobile captions on head-mounted displays (HMDs) to support those needs. Our formative study with 12 DHH participants identifies social and environmental challenges unique to or exacerbated by moving contexts. Informed by these findings, we introduce and evaluate a proof-of-concept HMD prototype with 10 DHH participants. Results show that, while walking, HMD captions can support communication access and improve attentional balance between the speaker(s) and navigating the environment. We close by describing open questions in the mobile context space and design guidelines for future technology.
The 22nd International ACM SIGACCESS Conference on Computers and Accessibility, 2020
SoundWatch uses a deep CNN-based sound classifier to classify and provide feedback about environmental sounds on a smartwatch in real time. Images show different use cases of the app and one of the four architectures we built (watch+phone).
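A real-time pipeline like the one described above typically buffers the microphone stream into windows and classifies only the windows worth classifying. The sketch below is a minimal illustration under stated assumptions: the sample rate, the silence threshold, and the toy frequency-based `classify` rule are all hypothetical stand-ins, not SoundWatch's actual deep-CNN classifier or architecture:

```python
import numpy as np

FS = 16000          # assumed sample rate
SILENCE_DB = -40.0  # hypothetical loudness gate before classification

def db_level(window):
    """RMS level in dB for a float window in [-1, 1]."""
    rms = np.sqrt(np.mean(window ** 2)) + 1e-12
    return 20 * np.log10(rms)

def classify(window):
    # Stand-in for the deep-CNN classifier (which, in a watch+phone
    # architecture, would run on the paired phone): a toy rule on the
    # dominant frequency of the window.
    spec = np.abs(np.fft.rfft(window))
    peak_hz = spec.argmax() * FS / len(window)
    return "alarm" if peak_hz > 1000 else "speech/other"

def process_stream(windows):
    """Classify only windows loud enough to matter (skips near-silence)."""
    return [classify(w) for w in windows if db_level(w) >= SILENCE_DB]

t = np.arange(FS) / FS
windows = [np.zeros(FS),                        # silence: skipped by the gate
           0.5 * np.sin(2 * np.pi * 2500 * t)]  # loud 2.5 kHz tone
print(process_stream(windows))  # → ['alarm']
```

Gating on loudness before invoking the classifier is one plausible way an on-watch pipeline could limit compute and battery cost; where the classifier itself runs (watch vs. phone) is exactly the architectural trade-off the paper's four variants explore.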
Proceedings of the 2018 ACM Conference Companion Publication on Designing Interactive Systems, 2018
Figure 1a. Prototype 1: AR Windows displays captions in a HoloLens web browser window. Caption windows can be placed close to speakers or visual materials, such as lecture slides, in 3D space. Figure 1b. Prototype 2: AR Subtitles displays one caption window that is placed at a fixed distance in front of the user and moves with the user's head.
The 21st International ACM SIGACCESS Conference on Computers and Accessibility, 2019
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019
The home is filled with a rich diversity of sounds, from mundane beeps and whirs to dog barks and children's shouts. In this paper, we examine how deaf and hard of hearing (DHH) people think about and relate to sounds in the home, solicit feedback and reactions to initial domestic sound awareness systems, and explore potential concerns. We present findings from two qualitative studies: in Study 1, 12 DHH participants discussed their perceptions of and experiences with sound in the home and provided feedback on initial sound awareness mockups. Informed by Study 1, we designed three tablet-based sound awareness prototypes, which we evaluated with 10 DHH participants using a Wizard-of-Oz approach. Together, our findings suggest a general interest in smart home-based sound awareness systems, particularly for displaying contextually aware, personalized, and glanceable visualizations, but key concerns arose related to privacy, activity tracking, cognitive overload, and trust. CCS CONCEPTS • Human-centered computing ~ Empirical studies in accessibility • Human-centered computing ~ Accessibility technologies KEYWORDS Deaf and hard of hearing, smart home, sound awareness.
Proceedings of the 29th Annual Symposium on User Interface Software and Technology, 2016
We present Amphibian, a simulator to experience scuba diving virtually in a terrestrial setting. Amphibian is novel because it simulates a wider variety of sensations experienced underwater compared with existing diving simulators, which mostly focus on visual and aural displays. Users rest their torso on a motion platform to feel buoyancy. Their outstretched arms and legs are placed in a suspended harness to simulate drag as they swim. An Oculus Rift head-mounted display (HMD) and a pair of headphones deliver the visual and auditory ocean scene. Additional senses simulated in Amphibian are breathing-induced motion, temperature changes, and tactile feedback through various sensors.
Indian Pediatrics, 2015
Improved survival in Acute Lymphoblastic Leukemia (ALL) has led to increased reports of second malignant neoplasms. A 12-year-old girl treated for ALL with the UK ALL XI protocol nine years earlier presented with a progressively enlarging pre-auricular swelling. Investigations revealed it to be a mucoepidermoid carcinoma. Mucoepidermoid carcinoma should be a differential diagnosis for any parotid swelling in a treated case of pediatric ALL.
Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems - CHI '15, 2015
Asia-Pacific Journal of Clinical Oncology
Asia-Pacific Journal of Clinical Oncology
International Journal of Case Reports and Images, 2013
European Urology, 2014
activation of a brain network consisting of regions for motor control, executive function, and emotion processing. Further studies are planned to create a model of brain activity during normal voiding in women.
Indian Journal of Pathology and Microbiology, 2014