Jacob Wobbrock | University of Washington

Papers by Jacob Wobbrock

Human-Centered Approach Evaluating Mobile Sign Language Video Communication

Mobile video is becoming a mainstream method of communication. Deaf and hard-of-hearing people benefit the most because mobile video enables real-time sign language communication. However, mobile video quality can become unintelligible due to high video transmission rates causing network congestion and delayed video. My dissertation research focuses on making mobile sign language video more accessible and affordable without relying on higher cellular network capacity, while also extending cellphone battery life. I am investigating how much the frame rate and bitrate of sign language video can be reduced before compromising video intelligibility. Web and laboratory studies are conducted to evaluate the perceived intelligibility of video transmitted at low frame rates and bitrates. I also propose the Human Signal Intelligibility Model (HSIM), which addresses the lack of a universal model on which to base video intelligibility evaluations.

Increasing Mobile Sign Language Video Accessibility by Relaxing Video Transmission Standards

The current recommended video transmission standard, International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Q.26/16, of 25 frames per second at 100 kilobits per second or higher makes mobile sign language video communication less accessible than it could be with a more relaxed standard. The current bandwidth requirements are high enough that network congestion may occur, causing delays or lost information. In addition, limited data plans may impose higher costs on video communication users. To increase the accessibility and affordability of video communication, we explore a relaxed standard for video transmission using lower frame rates and bitrates. We introduce a novel measure, the Human Signal Intelligibility Model, to accomplish this. We propose web and laboratory studies to validate lower bounds on frame rates and bitrates for sign language communication on small mobile devices.

An Aligned Rank Transform Procedure for Multifactor Contrast Tests

The 34th Annual ACM Symposium on User Interface Software and Technology, 2021

Data from multifactor HCI experiments often violates the assumptions of parametric tests (i.e., nonconforming data). The Aligned Rank Transform (ART) has become a popular nonparametric analysis in HCI that can find main and interaction effects in nonconforming data, but leads to incorrect results when used to conduct post hoc contrast tests. We created a new algorithm called ART-C for conducting contrast tests within the ART paradigm and validated it on 72,000 synthetic data sets. Our results indicate that ART-C does not inflate Type I error rates, unlike contrasts based on ART, and that ART-C has more statistical power than a t-test, Mann-Whitney U test, Wilcoxon signed-rank test, and ART. We also extended an open-source tool called ARTool with our ART-C algorithm for both Windows and R. Our validation had some limitations (e.g., only six distribution types, no mixed factorial designs, no random slopes), and data drawn from Cauchy distributions should not be analyzed with ART-C.
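The align-then-rank step at the heart of ART can be sketched as follows. This is an illustrative Python sketch on synthetic data; the synthetic design, the effect estimation via cell and marginal means, and the simple rank assignment are our assumptions, not ARTool's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 2x2 between-subjects design with skewed (nonconforming) residuals.
a = np.repeat([0, 0, 1, 1], 25)          # factor A level per observation
b = np.tile(np.repeat([0, 1], 25), 2)    # factor B level per observation
y = 2.0 * a + 0.5 * b + rng.exponential(1.0, size=100)

grand = y.mean()
cell_mean = np.array([y[(a == i) & (b == j)].mean() for i, j in zip(a, b)])
mean_a = np.array([y[a == i].mean() for i in a])

# Align y for the main effect of A: strip every modeled effect via the cell
# means, then add back only A's estimated effect.
aligned = (y - cell_mean) + (mean_a - grand)

# Rank the aligned responses (midranks for ties are omitted for simplicity).
# A standard ANOVA on these ranks tests A's main effect; ART-C applies the
# same align-and-rank idea to the levels involved in a contrast, then runs
# the contrast test on the ranks.
ranks = aligned.argsort().argsort() + 1
print(len(ranks), ranks.min(), ranks.max())
```

The key property is that the aligned responses retain only the effect under test, so ranking them does not conflate it with the other effects.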

Research contributions in human-computer interaction

Aligned Rank Transform

From User-Centered to Adoption-Centered Design

Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015

As we increasingly strive for scientific rigor and generalizability in HCI research, should we entertain any hope that by doing good science, our discoveries will eventually be more transferable to industry? We present an in-depth case study of how an HCI research innovation goes through the process of transitioning from a university project to a revenue-generating startup financed by venture capital. The innovation is a novel contextual help system for the Web, and we reflect on the different methods used to evaluate it and how research insights endure attempted dissemination as a commercial product. Although the extent to which any innovation succeeds commercially depends on a number of factors like market forces, we found that our HCI innovation with user-centered origins was in a unique position to gain traction with customers and garner buy-in from investors. However, since end users were not the buyers of our product, a strong user-centered focus obfuscated other critical needs of the startup and pushed out perspectives of non-user-centered stakeholders. To make the research-to-product transition, we had to focus on adoption-centered design, the process of understanding and designing for adopters and stakeholders of the product. Our case study raises questions about how we evaluate the novelty and research contributions of HCI innovations with respect to their potential for commercial impact.

SwitchBack

Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015

Smartphones and tablets are often used in dynamic environments that force users to break focus and attend to their surroundings, creating a form of "situational impairment." Current mobile devices have no ability to sense when users divert or restore their attention, let alone provide support for resuming tasks. We therefore introduce SwitchBack, a system that allows mobile device users to resume tasks more efficiently. SwitchBack is built upon Focus and Saccade Tracking (FAST), which uses the front-facing camera to determine when the user is looking and how their eyes are moving across the screen. In a controlled study, we found that FAST can identify how many lines the user has read in a body of text within a mean absolute percent error of just 3.9%. We then tested SwitchBack in a dual focus-of-attention task, finding that SwitchBack improved average reading speed by 7.7% in the presence of distractions.
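For context, a mean absolute percent error like the 3.9% reported for FAST is computed as below; the (estimated, true) lines-read pairs here are invented for illustration, not study data.

```python
def mape(predicted, actual):
    """Mean absolute percent error, in percent."""
    return 100.0 * sum(abs(p - a) / a for p, a in zip(predicted, actual)) / len(actual)

# Hypothetical (estimated, true) lines-read pairs from three trials.
estimated = [10, 19, 31]
true_lines = [10, 20, 30]
print(round(mape(estimated, true_lines), 1))  # 2.8
```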

Usable gestures for blind people

Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2011

Despite growing awareness of the accessibility issues surrounding touch screen use by blind people, designers still face challenges when creating accessible touch screen interfaces. One major stumbling block is a lack of understanding about how blind people actually use touch screens. We conducted two user studies that compared how blind people and sighted people use touch screen gestures. First, we conducted a gesture elicitation study in which 10 blind and 10 sighted people invented gestures to perform common computing tasks on a tablet PC. We found that blind people have different gesture preferences than sighted people, including preferences for edge-based gestures and gestures that involve tapping virtual keys on a keyboard. Second, we conducted a performance study in which the same participants performed a set of reference gestures. We found significant differences in the speed, size, and shape of gestures performed by blind people versus those performed by sighted people. Our results suggest new design guidelines for accessible touch screen interfaces.

The need for research on mobile technologies for people with low vision

We argue that there needs to be more research on technologies for people with low vision. While the vast majority of people with vision impairments have some functional vision, accessibility research tends to focus on nonvisual interaction. Researchers can make mobile devices more accessible to low-vision people by exploring target acquisition, text entry, and text and image output for this group of users. Researchers can also use mobile devices as tools to better enable low-vision users to access printed material and signage.

A web-based intelligibility evaluation of sign language video transmitted at low frame rates and bitrates

Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility, 2013

Mobile sign language video conversations can become unintelligible due to high video transmission rates causing network congestion and delayed video. In an effort to understand how much sign language video quality can be sacrificed, we evaluated the perceived lower limits of intelligible sign language video transmitted at four low frame rates (1, 5, 10, and 15 frames per second [fps]) and four low fixed bitrates (15, 30, 60, and 120 kilobits per second [kbps]). We discovered an "intelligibility ceiling effect" where increasing the frame rate above 10 fps decreased perceived intelligibility, and increasing the bitrate above 60 kbps produced diminishing returns. Additional findings suggest that relaxing the recommended international video transmission rate, 25 fps at 100 kbps or higher, would still provide intelligible content while considering network resources and bandwidth consumption. As part of this work, we developed the Human Signal Intelligibility Model, a new conceptual model useful for informing evaluations of video intelligibility.

Reject me

CHI '12 Extended Abstracts on Human Factors in Computing Systems, 2012

The HCI research community grows bigger each year, refining and expanding its boundaries in new ways. The ability to effectively review submissions is critical to the growth of CHI and related conferences. The review process is designed to produce a consistent supply of fair, high-quality reviews without overloading individual reviewers; yet, after each cycle, concerns are raised about limitations of the process. Every year, participants are left wondering why their papers were not accepted (or why they were). This SIG will explore reviewing through a critical and constructive lens, discussing current successes and future opportunities in the CHI review process. Goals will include actionable conclusions about ways to improve the system, potential alternative peer models, and the creation of materials to educate newcomer reviewers.

Enhancing independence and safety for blind and deaf-blind public transit riders

Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2011

Blind and deaf-blind people often rely on public transit for everyday mobility, but using transit can be challenging for them. We conducted semi-structured interviews with 13 blind and deaf-blind people to understand how they use public transit and what human values were important to them in this domain. Two key values were identified: independence and safety. We developed GoBraille, two related Braille-based applications that provide information about buses and bus stops while supporting the key values. GoBraille is built on MoBraille, a novel framework that enables a Braille display to benefit from many features in a smartphone without knowledge of proprietary, device-specific protocols. Finally, we conducted user studies with blind people to demonstrate that GoBraille enables people to travel more independently and safely. We also conducted co-design with a deaf-blind person, finding that a minimalist interface, with short input and output messages, was most effective for this population.

PassChords

Proceedings of the 14th international ACM SIGACCESS conference on Computers and accessibility, 2012

Blind mobile device users face security risks such as inaccessible authentication methods, and aural and visual eavesdropping. We interviewed 13 blind smartphone users and found that most participants were unaware of or not concerned about potential security threats. Not a single participant used optional authentication methods such as a password-protected screen lock. We addressed the high risk of unauthorized user access by developing PassChords, a non-visual authentication method for touch surfaces that is robust to aural and visual eavesdropping. A user enters a PassChord by tapping several times on a touch surface with one or more fingers. The set of fingers used in each tap defines the password. We give preliminary evidence that a four-tap PassChord has about the same entropy, a measure of password strength, as a four-digit personal identification number (PIN) used in the iPhone's Passcode Lock. We conducted a study with 16 blind participants that showed that PassChords were nearly three times as fast as iPhone's Passcode Lock with VoiceOver, suggesting that PassChords are a viable accessible authentication method for touch screens.
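The entropy comparison can be reproduced with a back-of-envelope calculation. The assumption that each tap uses any non-empty subset of four fingers, with taps independent, is ours; the paper's exact finger model may differ.

```python
import math

fingers = 4
chords_per_tap = 2**fingers - 1      # 15 non-empty finger subsets per tap
passchord_space = chords_per_tap**4  # four independent taps
pin_space = 10**4                    # four-digit PIN

print(round(math.log2(passchord_space), 1))  # 15.6 bits
print(round(math.log2(pin_space), 1))        # 13.3 bits
```

Under these assumptions a four-tap PassChord has slightly more entropy than a four-digit PIN, consistent with the paper's "about the same entropy" claim.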

Designing and evaluating text entry methods

CHI '12 Extended Abstracts on Human Factors in Computing Systems, 2012

Our workshop has three primary goals. The first goal is community building: we want to get text entry researchers who are active in different communities into one place. Our second goal is to promote CHI as a natural and compelling focal point for all kinds of text entry research. The third goal is to discuss some difficult issues that are hard or nearly impossible to handle within the traditional format of research papers.

GripSense

Proceedings of the 25th annual ACM symposium on User interface software and technology, 2012

We introduce GripSense, a system that leverages mobile device touchscreens and their built-in inertial sensors and vibration motor to infer hand postures including one- or two-handed interaction, use of thumb or index finger, or use on a table. GripSense also senses the amount of pressure a user exerts on the touchscreen despite a lack of direct pressure sensors by observing diminished gyroscope readings when the vibration motor is "pulsed." In a controlled study with 10 participants, GripSense accurately differentiated device usage on a table vs. in hand with 99.7% accuracy; when in hand, it inferred hand postures with 84.3% accuracy. In addition, GripSense distinguished three levels of pressure with 95.1% accuracy. A usability analysis of GripSense was conducted in three custom applications and showed that pressure input and hand-posture sensing can be useful in a number of scenarios.
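The pressure-inference idea, classifying a press by how much it damps the gyroscope signal during a vibration pulse, might be sketched as follows. The damping thresholds and the three-level mapping are invented for illustration; GripSense itself uses trained classification, not these values.

```python
def classify_pressure(gyro_amplitude_during_pulse: float,
                      baseline_amplitude: float) -> str:
    """Harder presses absorb more of the pulsed vibration, so the
    gyroscope reads a smaller amplitude relative to an unloaded baseline."""
    damping = 1.0 - gyro_amplitude_during_pulse / baseline_amplitude
    if damping > 0.6:
        return "heavy"
    if damping > 0.3:
        return "medium"
    return "light"

print(classify_pressure(0.2, 1.0))  # heavy: 80% of the signal damped
print(classify_pressure(0.9, 1.0))  # light: only 10% damped
```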

Analyzing the intelligibility of real-time mobile sign language video transmitted below recommended standards

Proceedings of the 16th international ACM SIGACCESS conference on Computers & accessibility - ASSETS '14, 2014

Mobile sign language video communication has the potential to be more accessible and affordable if the current recommended video transmission standard of 25 frames per second at 100 kilobits per second (kbps), as prescribed in International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Q.26/16, were relaxed. To investigate sign language video intelligibility at lower settings, we conducted a laboratory study, where fluent ASL signers in pairs held real-time free-form conversations over an experimental smartphone app transmitting real-time video at 5 fps/25 kbps, 10 fps/50 kbps, 15 fps/75 kbps, and 30 fps/150 kbps, settings well below the ITU-T standard that save both bandwidth and battery life. The aim of the laboratory study was to investigate how fluent ASL signers adapt to the lower video transmission rates, and to identify a lower threshold at which intelligible real-time conversations could be held. We gathered both subjective and objective measures from participants and calculated battery life drain. As expected, reducing frame rate monotonically extended battery life. We discovered all participants were successful in holding intelligible conversations across all frame rates. Participants did perceive the lower quality of video transmitted at 5 fps/25 kbps and felt that they were signing more slowly to compensate; however, participants' rate of fingerspelling did not actually decrease. This and other findings support our recommendation that intelligible mobile sign language conversations can occur at frame rates as low as 10 fps/50 kbps while optimizing resource consumption, video intelligibility, and user preferences.

Activity analysis enabling real-time video communication on mobile phones for deaf users

Proceedings of the 22nd annual ACM symposium on User interface software and technology, 2009

We describe our system called MobileASL for real-time video communication on the current U.S. mobile phone network. The goal of MobileASL is to enable Deaf people to communicate with sign language over mobile phones by compressing and transmitting sign language video in real-time on an off-the-shelf mobile phone, which has a weak processor, uses limited bandwidth, and has little battery capacity. We develop several H.264-compliant algorithms to save system resources while maintaining ASL intelligibility by focusing on the important segments of the video. We employ a dynamic skin-based region-of-interest (ROI) that encodes the skin at higher quality at the expense of the rest of the video. We also automatically recognize periods of signing versus not signing and raise and lower the frame rate accordingly, a technique we call variable frame rate (VFR). We show that our variable frame rate technique results in a 47% gain in battery life on the phone, corresponding to an extra 68 minutes of talk time. We also evaluate our system in a user study. Participants fluent in ASL engage in unconstrained conversations over mobile phones in a laboratory setting. We find that the ROI increases intelligibility and decreases guessing. VFR increases the need for signs to be repeated and the number of conversational breakdowns, but does not affect the users' perception of adopting the technology. These results show that our sign language sensitive algorithms can save considerable resources without sacrificing intelligibility.
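The variable frame rate idea can be sketched as a simple controller: encode at full rate while the user is signing, and drop to a low rate otherwise. The per-frame activity measure and the two rates below are stand-ins for illustration, not MobileASL's actual classifier or parameters.

```python
HIGH_FPS, LOW_FPS = 10, 1  # illustrative "signing" and "listening" rates

def target_fps(frame_activity: float, threshold: float = 0.2) -> int:
    """frame_activity: fraction of pixels changed vs. the previous frame,
    a crude stand-in for MobileASL's signing/not-signing recognition."""
    return HIGH_FPS if frame_activity >= threshold else LOW_FPS

# Per-frame activity scores for a sequence: signing, a pause, then signing.
scores = [0.5, 0.4, 0.05, 0.02, 0.3]
print([target_fps(s) for s in scores])  # [10, 10, 1, 1, 10]
```

Encoding fewer frames during pauses is where the reported battery savings come from: the encoder and radio simply do less work when nothing meaningful is on screen.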

A multi-site field study of crowdsourced contextual help

Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2013

We present a multi-site field study to evaluate LemonAid, a crowdsourced contextual help approach that allows users to retrieve relevant questions and answers by making selections within the interface. We deployed LemonAid on 4 different web sites used by thousands of users and collected data over several weeks, gathering over 1,200 usage logs, 168 exit surveys, and 36 one-on-one interviews. Our results indicate that over 70% of users found LemonAid to be helpful, intuitive, and desirable for reuse. Software teams found LemonAid easy to integrate with their sites and found the analytics data aggregated by LemonAid a novel way of learning about users' popular questions. Our work provides the first holistic picture of the adoption and use of a crowdsourced contextual help system and offers several insights into the social and organizational dimensions of implementing such help systems for real-world applications.

Understanding Expressions of Unwanted Behaviors in Open Bug Reporting

2010 IEEE Symposium on Visual Languages and Human-Centric Computing, 2010

Open bug reporting allows end-users to express a vast array of unwanted software behaviors. However, users' expectations often clash with developers' implementation intents. We created a classification of seven common expectation violations cited by end-users in bug report descriptions and applied it to 1,000 bug reports from the Mozilla project. Our results show that users largely described bugs as violations of their own personal expectations, of specifications, or of the user community's expectations. We found a correlation between a reporter's expression of which expectation was being violated and whether or not the bug would eventually be fixed. Specifically, when bugs were expressed as violations of community expectations rather than personal expectations, they had a better chance of being fixed.

Understanding usability practices in complex domains

Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2010

Although usability methods are widely used for evaluating conventional graphical user interfaces and websites, there is a growing concern that current approaches are inadequate for evaluating complex, domain-specific tools. We interviewed 21 experienced usability professionals, including in-house experts, external consultants, and managers working in a variety of complex domains, and uncovered the challenges commonly posed by domain complexity and how practitioners work around them. We found that despite the best efforts by usability professionals to get familiar with complex domains on their own, the lack of formal domain expertise can be a significant hurdle for carrying out effective usability evaluations. Partnerships with domain experts lead to effective results as long as domain experts are willing to be an integral part of the usability team. These findings suggest that for achieving usability in complex domains, some fundamental educational changes may be needed in the training of usability professionals.

Research paper thumbnail of Human-Centered Approach Evaluating Mobile Sign Language Video Communication

Mobile video is becoming a mainstream method of communication. Deaf and hard-of-hearing people be... more Mobile video is becoming a mainstream method of communication. Deaf and hard-of-hearing people benefit the most because mobile video enables real-time sign language communication. However, mobile video quality can become unintelligible due to high video transmission rates causing network congestion and delayed video. My dissertation research focuses on making mobile sign language video more accessible and affordable without relying on higher cellular network capacity while extending cellphone battery life. I am investigating how much frame rate and bitrate of sign language video can be reduced before compromising video intelligibility. Web and laboratory studies are conducted to evaluate perceived intelligibility of video transmitted at low frame rates and bitrates. I also propose the Human Signal Intelligibility Model (HSIM) addressing the lack of a universal model to base video intelligibility evaluations.

Research paper thumbnail of Increasing Mobile Sign Language Video Accessibility by Relaxing Video Transmission Standards

The current recommended video transmission standards, Telecommunication Standardization Sector (I... more The current recommended video transmission standards, Telecommunication Standardization Sector (ITU-T) Q.26/16, of 25 frames per second at 100 kilobits per second or higher make mobile sign language video communication less accessible than it could be with a more relaxed standard. The current bandwidth requirements are high enough that network congestion may occur, causing delays or lost information. In addition, limited data plans may cause higher cost to video communication users. To increase the accessibility and affordability of video communication, we explore a relaxed standard for video transmission using lower frame rates and bitrates. We introduce a novel measure, the Human Signal Intelligibility Model, to accomplish this. We propose web and laboratory studies to validate lower bounds on frame rates and bitrates for sign language communication on small mobile devices. Author

Research paper thumbnail of An Aligned Rank Transform Procedure for Multifactor Contrast Tests

The 34th Annual ACM Symposium on User Interface Software and Technology, 2021

Data from multifactor HCI experiments often violates the assumptions of parametric tests (i.e., n... more Data from multifactor HCI experiments often violates the assumptions of parametric tests (i.e., nonconforming data). The Aligned Rank Transform (ART) has become a popular nonparametric analysis in HCI that can find main and interaction effects in nonconforming data, but leads to incorrect results when used to conduct post hoc contrast tests. We created a new algorithm called ART-C for conducting contrast tests within the ART paradigm and validated it on 72,000 synthetic data sets. Our results indicate that ART-C does not inflate Type I error rates, unlike contrasts based on ART, and that ART-C has more statistical power than a t-test, Mann-Whitney U test, Wilcoxon signed-rank test, and ART. We also extended an open-source tool called ARTool with our ART-C algorithm for both Windows and R. Our validation had some limitations (e.g., only six distribution types, no mixed factorial designs, no random slopes), and data drawn from Cauchy distributions should not be analyzed with ART-C.

Research paper thumbnail of Research contributions in human-computer interaction

Research paper thumbnail of Aligned Rank Transform

Research paper thumbnail of From User-Centered to Adoption-Centered Design

Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015

As we increasingly strive for scientific rigor and generalizability in HCI research, should we en... more As we increasingly strive for scientific rigor and generalizability in HCI research, should we entertain any hope that by doing good science, our discoveries will eventually be more transferrable to industry? We present an in-depth case study of how an HCI research innovation goes through the process of transitioning from a university project to a revenue-generating startup financed by venture capital. The innovation is a novel contextual help system for the Web, and we reflect on the different methods used to evaluate it and how research insights endure attempted dissemination as a commercial product. Although the extent to which any innovation succeeds commercially depends on a number of factors like market forces, we found that our HCI innovation with user-centered origins was in a unique position to gain traction with customers and garner buy-in from investors. However, since end users were not the buyers of our product, a strong user-centered focus obfuscated other critical needs of the startup and pushed out perspectives of nonuser-centered stakeholders. To make the research-toproduct transition, we had to focus on adoption-centered design, the process of understanding and designing for adopters and stakeholders of the product. Our case study raises questions about how we evaluate the novelty and research contributions of HCI innovations with respect to their potential for commercial impact.

Research paper thumbnail of SwitchBack

Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015

Smartphones and tablets are often used in dynamic environments that force users to break focus an... more Smartphones and tablets are often used in dynamic environments that force users to break focus and attend to their surroundings, creating a form of "situational impairment." Current mobile devices have no ability to sense when users divert or restore their attention, let alone provide support for resuming tasks. We therefore introduce SwitchBack, a system that allows mobile device users to resume tasks more efficiently. SwitchBack is built upon Focus and Saccade Tracking (FAST), which uses the frontfacing camera to determine when the user is looking and how their eyes are moving across the screen. In a controlled study, we found that FAST can identify how many lines the user has read in a body of text within a mean absolute percent error of just 3.9%. We then tested SwitchBack in a dual focus-of-attention task, finding that SwitchBack improved average reading speed by 7.7% in the presence of distractions.

Research paper thumbnail of Usable gestures for blind people

Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2011

Despite growing awareness of the accessibility issues surrounding touch screen use by blind peopl... more Despite growing awareness of the accessibility issues surrounding touch screen use by blind people, designers still face challenges when creating accessible touch screen interfaces. One major stumbling block is a lack of understanding about how blind people actually use touch screens. We conducted two user studies that compared how blind people and sighted people use touch screen gestures. First, we conducted a gesture elicitation study in which 10 blind and 10 sighted people invented gestures to perform common computing tasks on a tablet PC. We found that blind people have different gesture preferences than sighted people, including preferences for edge-based gestures and gestures that involve tapping virtual keys on a keyboard. Second, we conducted a performance study in which the same participants performed a set of reference gestures. We found significant differences in the speed, size, and shape of gestures performed by blind people versus those performed by sighted people. Our results suggest new design guidelines for accessible touch screen interfaces.

Research paper thumbnail of The need for research on mobile technologies for people with low-vision

We argue that there needs to be more research on technologies for people with low-vision. While t... more We argue that there needs to be more research on technologies for people with low-vision. While the vast majority of people with vision impairments have some functional vision, accessibility research tends to focus on nonvisual interaction. Researchers can make mobile devices more accessible to low-vision people by exploring target acquisition, text entry, and text and image output for this group of users. Researchers can also use mobile devices as tools to better enable lowvision users to access printed material and signage.

Research paper thumbnail of A web-based intelligibility evaluation of sign language video transmitted at low frame rates and bitrates

Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility, 2013

Mobile sign language video conversations can become unintelligible due to high video transmission... more Mobile sign language video conversations can become unintelligible due to high video transmission rates causing network congestion and delayed video. In an effort to understand how much sign language video quality can be sacrificed, we evaluated the perceived lower limits of intelligible sign language video transmitted at four low frame rates (1, 5, 10, and 15 frames per second [fps]) and four low fixed bitrates (15, 30, 60, and 120 kilobits per second [kbps]). We discovered an "intelligibility ceiling effect" where increasing the frame rate above 10 fps decreased perceived intelligibility, and increasing the bitrate above 60 kbps produced diminishing returns. Additional findings suggest that relaxing the recommended international video transmission rate, 25 fps at 100 kbps or higher, would still provide intelligible content while considering network resources and bandwidth consumption. As part of this work, we developed the Human Signal Intelligibility Model, a new conceptual model useful for informing evaluations of video intelligibility.

Research paper thumbnail of Reject me

CHI '12 Extended Abstracts on Human Factors in Computing Systems, 2012

The HCI research community grows bigger each year, refining and expanding its boundaries in new ways. The ability to effectively review submissions is critical to the growth of CHI and related conferences. The review process is designed to produce a consistent supply of fair, high-quality reviews without overloading individual reviewers; yet, after each cycle, concerns are raised about limitations of the process. Every year, participants are left wondering why their papers were not accepted (or why they were). This SIG will explore reviewing through a critical and constructive lens, discussing current successes and future opportunities in the CHI review process. Goals will include actionable conclusions about ways to improve the system, potential alternative peer review models, and the creation of materials to educate newcomer reviewers.

Research paper thumbnail of Enhancing independence and safety for blind and deaf-blind public transit riders

Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2011

Blind and deaf-blind people often rely on public transit for everyday mobility, but using transit can be challenging for them. We conducted semi-structured interviews with 13 blind and deaf-blind people to understand how they use public transit and what human values were important to them in this domain. Two key values were identified: independence and safety. We developed GoBraille, two related Braille-based applications that provide information about buses and bus stops while supporting the key values. GoBraille is built on MoBraille, a novel framework that enables a Braille display to benefit from many features in a smartphone without knowledge of proprietary, device-specific protocols. Finally, we conducted user studies with blind people to demonstrate that GoBraille enables people to travel more independently and safely. We also conducted co-design with a deaf-blind person, finding that a minimalist interface, with short input and output messages, was most effective for this population.

Research paper thumbnail of PassChords

Proceedings of the 14th international ACM SIGACCESS conference on Computers and accessibility, 2012

Blind mobile device users face security risks such as inaccessible authentication methods, and aural and visual eavesdropping. We interviewed 13 blind smartphone users and found that most participants were unaware of or not concerned about potential security threats. Not a single participant used optional authentication methods such as a password-protected screen lock. We addressed the high risk of unauthorized user access by developing PassChords, a non-visual authentication method for touch surfaces that is robust to aural and visual eavesdropping. A user enters a PassChord by tapping several times on a touch surface with one or more fingers. The set of fingers used in each tap defines the password. We give preliminary evidence that a four-tap PassChord has about the same entropy, a measure of password strength, as a four-digit personal identification number (PIN) used in the iPhone's Passcode Lock. We conducted a study with 16 blind participants that showed that PassChords were nearly three times as fast as iPhone's Passcode Lock with VoiceOver, suggesting that PassChords are a viable accessible authentication method for touch screens.
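The entropy comparison above can be sketched with simple arithmetic. As a back-of-the-envelope illustration (not the paper's own calculation), assume each tap uses any nonempty subset of four fingers, giving 15 distinct chords per tap; the per-tap chord count is an assumption for this sketch:

```python
import math

def password_entropy_bits(symbols_per_position: int, length: int) -> float:
    """Entropy in bits of a password drawn uniformly from
    `symbols_per_position` choices at each of `length` positions."""
    return length * math.log2(symbols_per_position)

# Hypothetical chord alphabet: each tap is a nonempty subset of
# 4 fingers, so 2**4 - 1 = 15 distinct chords per tap.
chords_per_tap = 2**4 - 1
passchord_bits = password_entropy_bits(chords_per_tap, 4)   # ~15.6 bits

# A 4-digit PIN: 10 choices per digit.
pin_bits = password_entropy_bits(10, 4)                     # ~13.3 bits

print(f"4-tap PassChord: {passchord_bits:.1f} bits")
print(f"4-digit PIN:     {pin_bits:.1f} bits")
```

Under this assumption the two schemes land within a few bits of each other, which is consistent with the abstract's "about the same entropy" claim.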

Research paper thumbnail of Designing and evaluating text entry methods

CHI '12 Extended Abstracts on Human Factors in Computing Systems, 2012

Our workshop has three primary goals. The first goal is community building: we want to get text entry researchers that are active in different communities into one place. Our second goal is to promote CHI as a natural and compelling focal point for all kinds of text entry research. The third goal is to discuss some difficult issues that are hard or nearly impossible to handle within the traditional format of research papers.

Research paper thumbnail of GripSense

Proceedings of the 25th annual ACM symposium on User interface software and technology, 2012

We introduce GripSense, a system that leverages mobile device touchscreens and their built-in inertial sensors and vibration motor to infer hand postures including one- or two-handed interaction, use of thumb or index finger, or use on a table. GripSense also senses the amount of pressure a user exerts on the touchscreen despite a lack of direct pressure sensors by observing diminished gyroscope readings when the vibration motor is "pulsed." In a controlled study with 10 participants, GripSense accurately differentiated device usage on a table vs. in hand with 99.7% accuracy; when in hand, it inferred hand postures with 84.3% accuracy. In addition, GripSense distinguished three levels of pressure with 95.1% accuracy. A usability analysis of GripSense was conducted in three custom applications and showed that pressure input and hand-posture sensing can be useful in a number of scenarios.

Research paper thumbnail of Analyzing the intelligibility of real-time mobile sign language video transmitted below recommended standards

Proceedings of the 16th international ACM SIGACCESS conference on Computers & accessibility - ASSETS '14, 2014

Mobile sign language video communication has the potential to be more accessible and affordable if the current recommended video transmission standard of 25 frames per second at 100 kilobits per second (kbps), as prescribed in International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Q.26/16, were relaxed. To investigate sign language video intelligibility at lower settings, we conducted a laboratory study, where fluent ASL signers in pairs held real-time free-form conversations over an experimental smartphone app transmitting real-time video at 5 fps/25 kbps, 10 fps/50 kbps, 15 fps/75 kbps, and 30 fps/150 kbps, settings well below the ITU-T standard that save both bandwidth and battery life. The aim of the laboratory study was to investigate how fluent ASL signers adapt to the lower video transmission rates, and to identify a lower threshold at which intelligible real-time conversations could be held. We gathered both subjective and objective measures from participants and calculated battery life drain. As expected, reducing frame rate monotonically extended battery life. We discovered all participants were successful in holding intelligible conversations across all frame rates. Participants did perceive the lower quality of video transmitted at 5 fps/25 kbps and felt that they were signing more slowly to compensate; however, participants' rate of fingerspelling did not actually decrease. This and other findings support our recommendation that intelligible mobile sign language conversations can occur at frame rates as low as 10 fps/50 kbps while optimizing resource consumption, video intelligibility, and user preferences.
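The bandwidth savings at stake are straightforward to quantify. The per-minute figures below are simple arithmetic from the stated bitrates, not measurements from the study:

```python
# Data consumed per minute of one-way video at each bitrate.
# kbps = kilobits per second; divide by 8 to convert bits to bytes.
settings = {
    "ITU-T baseline (25 fps / 100 kbps)": 100,
    "30 fps / 150 kbps": 150,
    "15 fps / 75 kbps": 75,
    "10 fps / 50 kbps": 50,
    "5 fps / 25 kbps": 25,
}

for name, kbps in settings.items():
    kilobytes_per_minute = kbps * 60 / 8
    print(f"{name}: {kilobytes_per_minute:.0f} KB/min")
```

At the recommended 10 fps/50 kbps setting, a conversation consumes 375 KB per minute of one-way video versus 750 KB at the 100 kbps baseline, i.e., half the data for a transmission the study found still intelligible.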

Research paper thumbnail of Activity analysis enabling real-time video communication on mobile phones for deaf users

Proceedings of the 22nd annual ACM symposium on User interface software and technology, 2009

We describe our system called MobileASL for real-time video communication on the current U.S. mobile phone network. The goal of MobileASL is to enable Deaf people to communicate with Sign Language over mobile phones by compressing and transmitting sign language video in real-time on an off-the-shelf mobile phone, which has a weak processor, uses limited bandwidth, and has little battery capacity. We develop several H.264-compliant algorithms to save system resources while maintaining ASL intelligibility by focusing on the important segments of the video. We employ a dynamic skin-based region-of-interest (ROI) that encodes the skin at higher quality at the expense of the rest of the video. We also automatically recognize periods of signing versus not signing and raise and lower the frame rate accordingly, a technique we call variable frame rate (VFR). We show that our variable frame rate technique results in a 47% gain in battery life on the phone, corresponding to an extra 68 minutes of talk time. We also evaluate our system in a user study. Participants fluent in ASL engage in unconstrained conversations over mobile phones in a laboratory setting. We find that the ROI increases intelligibility and decreases guessing. VFR increases the need for signs to be repeated and the number of conversational breakdowns, but does not affect the users' perception of adopting the technology. These results show that our sign language sensitive algorithms can save considerable resources without sacrificing intelligibility.

Research paper thumbnail of A multi-site field study of crowdsourced contextual help

Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2013

We present a multi-site field study to evaluate LemonAid, a crowdsourced contextual help approach that allows users to retrieve relevant questions and answers by making selections within the interface. We deployed LemonAid on 4 different web sites used by thousands of users and collected data over several weeks, gathering over 1,200 usage logs, 168 exit surveys, and 36 one-on-one interviews. Our results indicate that over 70% of users found LemonAid to be helpful, intuitive, and desirable for reuse. Software teams found LemonAid easy to integrate with their sites and found the analytics data aggregated by LemonAid a novel way of learning about users' popular questions. Our work provides the first holistic picture of the adoption and use of a crowdsourced contextual help system and offers several insights into the social and organizational dimensions of implementing such help systems for real-world applications.

Research paper thumbnail of Understanding Expressions of Unwanted Behaviors in Open Bug Reporting

2010 IEEE Symposium on Visual Languages and Human-Centric Computing, 2010

Open bug reporting allows end-users to express a vast array of unwanted software behaviors. However, users' expectations often clash with developers' implementation intents. We created a classification of seven common expectation violations cited by end-users in bug report descriptions and applied it to 1,000 bug reports from the Mozilla project. Our results show that users largely described bugs as violations of their own personal expectations, of specifications, or of the user community's expectations. We found a correlation between a reporter's expression of which expectation was being violated and whether or not the bug would eventually be fixed. Specifically, when bugs were expressed as violations of community expectations rather than personal expectations, they had a better chance of being fixed.

Research paper thumbnail of Understanding usability practices in complex domains

Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2010

Although usability methods are widely used for evaluating conventional graphical user interfaces and websites, there is a growing concern that current approaches are inadequate for evaluating complex, domain-specific tools. We interviewed 21 experienced usability professionals, including in-house experts, external consultants, and managers working in a variety of complex domains, and uncovered the challenges commonly posed by domain complexity and how practitioners work around them. We found that despite the best efforts by usability professionals to get familiar with complex domains on their own, the lack of formal domain expertise can be a significant hurdle for carrying out effective usability evaluations. Partnerships with domain experts lead to effective results as long as domain experts are willing to be an integral part of the usability team. These findings suggest that for achieving usability in complex domains, some fundamental educational changes may be needed in the training of usability professionals.