Donghee "Don" Shin | Texas Tech University
Papers by Donghee "Don" Shin
International Journal of Technology Management
Distributed, Ambient and Pervasive Interactions, 2016
Journal of Intercultural Communication Research, 2021
ABSTRACT This study evaluates the impact of casting ethnically identical actors in advertisements to elicit a favourable attitude towards the advertisement and the product, exploring the effectiveness of cross-cultural advertising. In this experimental study, a national sample (N = 252) was recruited with an equal number of Indian and Middle Eastern subjects, who were randomly selected across the United States. The results show that attitude formation patterns for Indians and Middle Easterners are dissimilar when they are exposed to advertisements containing actors of varied ethnicities.
Journal of Information Science, 2021
The recent proliferation of artificial intelligence (AI) gives rise to questions about how users interact with AI services and how algorithms embody the values of users. Despite the surging popularity of AI, how users evaluate algorithms, how people perceive algorithmic decisions, and how they relate to algorithmic functions remain largely unexplored. Invoking the idea of embodied cognition, we characterize core constructs of algorithms that drive the value of embodiment and conceptualize these factors in reference to trust by examining how they influence the user experience of personalized recommendation algorithms. The findings elucidate the embodied cognitive processes involved in reasoning about algorithmic characteristics (fairness, accountability, transparency, and explainability) with regard to their fundamental linkages with trust and ensuing behaviors. Users follow a dual-process model, whereby a sense of trust is built on a combination of normative values and performance-related qualities of algorithms. Embodied algorithmic characteristics are significantly linked to trust and performance expectancy. Heuristic and systematic processes grounded in embodied cognition provide a concise guide to the conceptualization of AI experiences and interaction. The identified user cognitive processes provide information on a user's cognitive functioning and patterns of behavior, as well as a basis for subsequent metacognitive processes. Keywords: algorithm experience; embodied cognition; enactive algorithm; explainability; heuristic and systematic process; human-artificial intelligence interaction. Algorithms are continuously shaping the everyday lives of billions of people. Although algorithms are growing ever more pervasive, powerful, and sophisticated, algorithms themselves are literal-minded, and context and nuance often elude them [1].
Algorithm designs are a reflection of user values, priorities, and preferences, but they are not always ideal or neutral, and human biases can affect the algorithms [2]. Algorithms based on machine learning tend to bear serious risks, making it important to ensure that these algorithms are not biased against any gender, race, ethnicity, or other sensitive variables [3]. These concerns are related to recent debates on fairness, accountability, transparency, and explainability (FATE), which are intrinsically embedded in contemporary algorithmic technologies [4]. Questions as to what can be done to ensure that the decisions made by algorithms are fair, transparent, and ethical, and do not discriminate, remain unresolved and require continuing deliberation [5, 6]. Given that such algorithmically informed decisions have the potential for significant societal impact, these issues continue to evolve, and ensuing challenges will be faced in the future of AI [7, 8]. Recent research on algorithmic interaction [9] has shown the important role of FATE in the experience of algorithmic services [10]. Against the black box of algorithms, users are tasked with evaluating the vague qualities of algorithms, which is inherently heuristic and subjective because there are no specific values that define fair, transparent, or accountable [11]. Thus, in understanding such issues, scholars [8] argue that we should focus on the users' cognitive processes through which they interpret their experiences and come to their own unique understandings that are related to user…
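The dual-process account above (trust built from normative FATE values plus performance-related qualities) can be illustrated with a toy score. This is a minimal sketch, not the paper's estimated model: the function name, the equal weighting, and the 0-1 rating scale are all hypothetical assumptions.

```python
def trust_score(fate_ratings, performance, w_normative=0.5):
    """Toy dual-process trust score on a 0-1 scale.

    fate_ratings: dict of perceived fairness/accountability/
    transparency/explainability ratings (each 0-1).
    performance: perceived performance quality (0-1).
    w_normative: hypothetical weight on the normative route.
    """
    # Normative route: average the embodied FATE perceptions.
    normative = sum(fate_ratings.values()) / len(fate_ratings)
    # Combine the normative and performance routes linearly.
    return w_normative * normative + (1 - w_normative) * performance

ratings = {"fairness": 0.8, "accountability": 0.7,
           "transparency": 0.6, "explainability": 0.9}
print(round(trust_score(ratings, performance=0.75), 2))
```

The linear combination is only a placeholder for whatever functional form the study's structural model actually estimated; it serves to make the two-route idea concrete.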
Int Journal of Human Computer Studies, 2021
Artificial intelligence and algorithmic decision-making processes are increasingly criticized for their black-box nature. Explainable AI approaches that trace human-interpretable decision processes from algorithms have been explored. Yet, little is known about algorithmic explainability from a human factors perspective. From the perspective of user interpretability and understandability, this study examines the effect of explainability in AI on user trust and attitudes toward AI. It conceptualizes causability as an antecedent of explainability and as a key cue of an algorithm, and examines both in relation to trust by testing how they affect users' perceived performance of AI-driven services. The results show the dual roles of causability and explainability in terms of their underlying links to trust and subsequent user behaviors. Explanations of why certain news articles are recommended generate users' trust, whereas causability, the extent to which users can understand those explanations, affords them emotional confidence. Causability lends justification for what should be explained and how, as it determines the relative importance of the properties of explainability. The results have implications for the inclusion of causability and explanatory cues in AI systems, which help to increase trust and help users to assess the quality of explanations. Causable explainable AI will help people understand the decision-making process of AI algorithms by bringing transparency and accountability into AI systems.
New Media and Society, 2021
How much do anthropomorphisms influence users' perceptions of whether they are conversing with a human or an algorithm in a chatbot environment? We develop a cognitive model using the constructs of anthropomorphism and explainability to explain user experiences with conversational journalism (CJ) in the context of chatbot news. We examine how users perceive anthropomorphic and explanatory cues, and how these stimuli influence user perceptions of and attitudes toward CJ. Anthropomorphic explanations of why and how certain items are recommended afford users a sense of humanness, which then affects trust and emotional assurance. Perceived humanness triggers a two-step flow of interaction by defining the baseline against which users judge the qualities of CJ and by affording the capacity to interact with chatbots, shaping users' intention to do so. We develop practical implications relevant to chatbots and ascertain the significance of humanness as a social cue in CJ. We offer a theoretical lens through which to characterize humanness as a key mechanism of human-artificial intelligence (AI) interaction, the eventual goal of which is for humans to perceive AI as human. Our results help to better understand human-chatbot interaction in CJ by illustrating how humans interact with chatbots and explaining why humans accept CJ.
Information, Communication and Society, Jan 2, 2019
Telecommunications Policy
ABSTRACT With the rapid diffusion of a wide variety of smartphones, quality issues have become central to consumers. While customer satisfaction with most goods and services has been well researched, little research exists on satisfaction and loyalty with respect to advanced mobile services such as smartphones. This study applied a customer satisfaction index (CSI) model to the smart mobile sector in order to derive a smart-service CSI (SCSI). The SCSI model and its hypotheses were then tested using partial least squares analysis and index calculation. The findings showed that perceived value and customer satisfaction are key variables mediating the relationship between quality and customer loyalty. The proposed model demonstrated strong explanatory power, with satisfactory reliability and validity. The SCSI model establishes a foundation for future smart-service categories by providing a powerful tool for quality assessment. The results of this study provide useful insights for the telecom industry and policymakers in forging effective policies and competitive strategies.
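The "index calculation" step mentioned above can be made concrete with the standard ACSI-style transformation, which rescales mean survey ratings to a 0-100 index. The abstract does not give the SCSI formula, so this sketch assumes the conventional version; the function name and example ratings are illustrative.

```python
def satisfaction_index(scores, scale_max=10):
    """Rescale mean ratings on a 1..scale_max survey scale to 0-100.

    Standard CSI-style transformation:
    index = (mean - 1) / (scale_max - 1) * 100
    so the scale minimum maps to 0 and the maximum to 100.
    """
    mean = sum(scores) / len(scores)
    return (mean - 1) / (scale_max - 1) * 100

# Hypothetical smartphone-service quality ratings on a 10-point scale.
print(round(satisfaction_index([8, 9, 7, 8, 10]), 1))  # -> 82.2
```

A mean of 8.4 on a 10-point scale thus yields an index of about 82, which is the usual way CSI scores become comparable across sectors with different survey scales.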
The Journal of the Korea Contents Association
Journal of the Korean earth science society
Information Technology and Management
ABSTRACT Organizations invest resources to develop information capabilities in order to utilize personal and impersonal information. While the utilization of knowledge is likely to improve organizational performance, it is unclear what the consequences of utilizing personal and impersonal sources of information are for individuals. This study sought to increase understanding of the performance implications of using personal and impersonal information by examining four business units of a large financial institution. Utilizing competing theoretical models, we tested whether personal and impersonal sources of information substituted for, or complemented, each other. The results indicated that individuals who utilized information from personal and impersonal sources of knowledge in a complementary fashion had superior performance. Parsing impersonal knowledge sourcing, we found that the use of specific impersonal information repositories increased performance, while the use of general impersonal information decreased performance. Overall, this study sheds light on the origins of individual knowledge capabilities and indicates that individuals gain advantages by engaging in particular knowledge-sourcing routines.
Journal of the Korean earth science society
Journal of The Korean Association For Science Education
ABSTRACT The purpose of this study is to investigate the possibilities of science ethics education using the history of science (HOS) and to develop a teaching and learning model for secondary school students. A total of 72 cases concerning science ethics were extracted from more than 20 HOS books, journal articles, and newspaper articles. These cases were categorized into eight areas: forgery, fabrication, violation of bioethics in testing, plagiarism and stealth, unfair allocation of credit, over-slander, conjunction with ideologies, and social responsibility problems. The results of this study are as follows. First, research forgery, occurring in the process of research, was the most frequent in HOS. Second, we developed eight teaching lesson plans, one for each area. Third, we proposed a teaching and learning model based on the developed lesson plans as well as related teaching and learning models in the fields of science ethics education, ethics education, and history education. Our model has five steps: investigating, suggesting cases, clarifying problems, finding alternatives, and summarizing.
Journal of The Korean Association For Research In Science Education
Behaviour & Information Technology
ABSTRACT A cloud learning environment enables an enriched learning experience compared to conventional methods of learning. Employing a value-sensitive approach, we undertook theoretical and empirical analyses to explore the values that influence potential users' adoption of cloud courseware, integrating cognitive motivations and user values as primary determining factors. We found that users' intentions and behaviours are largely influenced by their perceptions of what is valuable about the cloud courseware in terms of sociability, learnability, and usability. These evaluations were found to be significant antecedents of cloud-computing intentions. This study contributes to theory development, as our model extends existing technology acceptance models and can be used to design user interfaces and promote the acceptance of cloud computing. For practical applications, the study findings can be used by industries promoting cloud services to increase user acceptance by addressing user values and incorporating them into cloud-computing design.
International Journal of Human-Computer Interaction