Ajay Chander - Academia.edu

Papers by Ajay Chander

Creation of User Friendly Datasets: Insights from a Case Study concerning Explanations of Loan Denials

arXiv (Cornell University), Jun 11, 2019

Most explainable AI (XAI) techniques are concerned with the design of algorithms to explain the AI's decision. However, the data that is used to train these algorithms may contain features that are often incomprehensible to an end-user even with the best XAI algorithms. Thus, the problem of explainability has to be addressed starting right from the data creation step. In this paper, we studied this problem considering the use-case of explaining loan denials to end-users, as opposed to AI engineers or domain experts. Motivated by the lack of datasets that are representative of user-friendly explanations, we build the first-of-its-kind dataset that is representative of user-friendly explanations for loan denials. The paper shares some of the insights gained in curating the dataset. First, existing datasets seldom contain features that end users consider acceptable in understanding a model's decision. Second, understanding the explanation's context, such as the human-in-the-loop seeking the explanation and the purpose for which an explanation is sought, aids in the creation of user-friendly datasets. Thus, our dataset, which we call Xnet, also contains explanations that serve different purposes: those that educate the loan applicants, and those that help them take appropriate action towards a future approval. We hope this work will trigger the creation of new user-friendly datasets, and serve as a guide for the curation of such datasets.
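
As a rough illustration of what such a record might contain, the sketch below defines a hypothetical loan-denial explanation entry with separate educational and actionable fields. All field names and values are illustrative assumptions, not the actual Xnet schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record structure; field names are illustrative assumptions,
# not the Xnet dataset's actual schema.
@dataclass
class LoanDenialExplanation:
    application_id: str
    denial_reason: str       # the factor behind the denial, in plain language
    educational_note: str    # explains why the factor matters to the applicant
    actionable_advice: str   # what the applicant could change for a future approval
    user_friendly_features: List[str] = field(default_factory=list)

record = LoanDenialExplanation(
    application_id="A-0001",
    denial_reason="Debt-to-income ratio above the lender's threshold",
    educational_note="Lenders compare monthly debt payments to income to gauge repayment risk.",
    actionable_advice="Reducing outstanding card balances before reapplying may lower the ratio.",
    user_friendly_features=["monthly income", "monthly debt payments"],
)
print(record.actionable_advice)
```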

ONE - A Personalized Wellness System

Exploring the Dynamics of Relationships Between Expressed and Experienced Emotions

Intelligent Human Computer Interaction, 2017

Conversational user interfaces (CUIs) are rapidly evolving towards being ubiquitous as human-machine interfaces. Often, CUI backends are powered by a combination of human and machine intelligence to address queries efficiently. Depending on the type of conversation issue, human-to-human conversations in CUIs (i.e., a human end-user conversing with the human in the CUI backend) could involve varying amounts of emotional content. While some of these emotions could be expressed through the conversation, others are experienced internally within the individual. Understanding the relationship between these two emotion modalities in the end-user could help to analyze and address the conversation issue better. Towards this, we propose an emotion analytic metric that estimates experienced emotions based on its knowledge of expressed emotions in a user. Our findings point to the possibility of augmenting CUIs with an algorithmically guided emotional sense, which would help in having more effective conversations with end-users.
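
The paper's metric itself is not reproduced here, but as a minimal sketch of the general idea, one could estimate a distribution over experienced emotions conditioned on the expressed emotion, learned by simple counting over annotated conversation turns. The labels and data below are invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy annotated data: (expressed_emotion, experienced_emotion) pairs per turn.
# Labels and pairings are illustrative; this is not the paper's actual metric.
annotations = [
    ("neutral", "anxious"), ("neutral", "anxious"), ("neutral", "calm"),
    ("angry", "frustrated"), ("angry", "hurt"), ("sad", "sad"),
]

# Estimate P(experienced | expressed) by simple counting.
cond = defaultdict(Counter)
for expressed, experienced in annotations:
    cond[expressed][experienced] += 1

def estimate_experienced(expressed: str) -> dict:
    counts = cond[expressed]
    total = sum(counts.values())
    return {emotion: n / total for emotion, n in counts.items()} if total else {}

print(estimate_experienced("neutral"))  # approx. {'anxious': 0.67, 'calm': 0.33}
```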

Last Mile End-User Programmers: Programming Exposure, Influences, and Preferences of the Masses

2017 IEEE/ACM 39th International Conference on Software Engineering Companion (ICSE-C), 2017

In this paper, we set out to explore the level of programming experience present among the masses (the last mile end-user programmers), the influence of factors such as early exposure to software and age on programming experience, the effects of these factors on the types of software people might want to create, and the software development approaches they prefer.

Crowdsourcing in the Absence of Ground Truth - A Case Study

arXiv, 2019

Crowdsourcing information constitutes an important aspect of human-in-the-loop learning for researchers across multiple disciplines such as AI, HCI, and social science. While using crowdsourced data for subjective tasks is not new, eliciting useful insights from such data remains challenging due to a variety of factors such as the difficulty of the task, personal prejudices of the human evaluators, and lack of question clarity. In this paper, we consider one such subjective evaluation task, namely that of estimating the experienced emotions of distressed individuals who are conversing with a human listener in an online coaching platform. We explore strategies to aggregate the evaluators' choices, and show that a simple voting consensus is as effective as an optimal aggregation method for the task considered. Intrigued by how an objective assessment would compare to the subjective evaluation of the evaluators, we also designed a machine learning algorithm to perform the same task. Interestingly,...
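
As a minimal sketch of the simple voting consensus mentioned above, a majority vote over each item's evaluator labels can be computed as follows. The items and labels are illustrative, and ties are resolved by whichever label Counter happens to return first.

```python
from collections import Counter

# Each item was labeled by several evaluators; aggregate with a simple majority vote.
# Data is illustrative only.
ratings = {
    "conversation_01": ["anxious", "anxious", "sad"],
    "conversation_02": ["calm", "anxious", "calm"],
}

def majority_vote(labels):
    # Return the most frequent label among the evaluators' choices.
    return Counter(labels).most_common(1)[0][0]

consensus = {item: majority_vote(labels) for item, labels in ratings.items()}
print(consensus)  # {'conversation_01': 'anxious', 'conversation_02': 'calm'}
```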

Biases in AI systems

Communications of the ACM, 2021

A survey for practitioners.

Evaluating Explanations by Cognitive Value

Lecture Notes in Computer Science, 2018

The transparent AI initiative has ignited several academic and industrial endeavors and produced some impressive technologies and results thus far. Many state-of-the-art methods provide explanations that mostly target the needs of AI engineers. However, there is very little work on providing explanations that support the needs of business owners, software developers, and consumers, who all play significant roles in the service development and use cycle. By considering the overall context in which an explanation is presented, including the role played by the human-in-the-loop, we can hope to craft effective explanations. In this paper, we introduce the notion of the "cognitive value" of an explanation and describe its role in providing effective explanations within a given context. Specifically, we consider the scenario of a business owner seeking to improve sales of their product, and compare explanations provided by some existing interpretable machine learning algorithms (random forests, scalable Bayesian rules, causal models) in terms of the cognitive value they offer to the business owner. We hope that our work will foster future research in the field of transparent AI to incorporate the cognitive value of explanations in crafting and evaluating explanations.
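
For context, the sketch below shows the kind of global explanation a random forest offers (feature importances), one of the explanation types the paper compares by cognitive value. The dataset and feature names are synthetic placeholders, not the paper's actual sales scenario.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a sales dataset; features and labels are illustrative only.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["price", "ad_spend", "region_score", "season_index"]  # hypothetical names

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global feature importances: the form of explanation a random forest provides,
# which can then be weighed against rule-based or causal explanations.
for name, importance in sorted(zip(feature_names, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```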
