Unifying HCI & AI

John McCarthy defined Artificial Intelligence (AI) as both "the science and engineering of intelligent machines, especially computer programs" and the "computational part of the ability to achieve goals in the world." Today, AI is increasingly deployed across many domains of direct societal relevance, such as transportation, retail, criminal justice, finance, and health. But these very domains that AI aims to revolutionize may also be those where the human implications are most momentous. The potential negative effects of AI on society, from amplifying human biases to the perils of automation, cannot be ignored, and such topics are increasingly discussed in both scholarly and popular press contexts. As the New York Times notes: "… if we want [AI] to play a positive role in tomorrow's world, it must be guided by human concerns" (Li, 2018). The relationship between technology and humans is the direct focus of human-computer interaction (HCI) research.

Conversations about the relationship between HCI and AI are not new, however. For the past 20 years, the HCI community has proposed principles, guidelines, and strategies for designing and interacting with user interfaces that employ or are powered by AI in a general sense (Norman, 1994; Höök, 2000). For example, an early discussion by Shneiderman and Maes (1997) challenged whether AI should be a primary metaphor in the human interface to computers: Should interactions between a human and a computer mimic human-human interaction? Or are there practical or even philosophical objections to assigning human attributes and abilities to computers? Putting aside these fundamental questions about what human-AI interactions might look like, Norman (2014) and Höök (2000) adopt a more practical approach to designing AI systems. They recommend building in safeguards, such as verification steps or ways of regulating users' agency, to prevent unwanted behaviors or undesirable consequences arising from these systems.

More broadly, other HCI researchers have contrasted the approaches and philosophies adopted by HCI and AI researchers, particularly around how we understand people and create technologies for their benefit (Winograd, 2006). Grudin (2009) described alternating cycles in which one field flourished while the other suffered a "winter," a period of reduced funding and low academic and popular interest. Winograd (2006) contrasted the strengths and limitations of each field, as well as the relevance of the rationalistic versus design approaches offered by AI and HCI, respectively, when applied to "messy" human problems. Winograd's overall conclusion was rather surprising: he conjectured that the two fields are not so distinct, because their philosophies are both rooted in a common attempt to push the computer metaphor onto all of reality, as evidenced in most twentieth-century science and technology research.

Formative work by Horvitz (1999) also attempted to reconcile many of the seeming differences between HCI and AI by highlighting key challenges and opportunities for building "mixed-initiative user interfaces": interfaces that enable users and AI systems to collaborate efficiently. Horvitz states principles for balancing autonomous, automated services with direct manipulation and user control.
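To ground the mixed-initiative idea, the Python sketch below illustrates the decision-theoretic reasoning Horvitz describes: under uncertainty about the user's goals, the system compares the expected utility of acting autonomously, engaging the user in dialog, or taking no initiative. This is a minimal illustration, not Horvitz's implementation; the utility values, probability threshold behavior, and names (Utilities, choose_initiative, p_goal) are all assumptions made here for the example.

from dataclasses import dataclass


@dataclass
class Utilities:
    """Illustrative payoffs for the system's three options."""
    act_when_wanted: float = 1.0     # autonomous action the user wanted
    act_when_unwanted: float = -1.0  # autonomous action the user did not want
    ask_cost: float = -0.2           # fixed cost of interrupting with a dialog


def choose_initiative(p_goal: float, u: Utilities = Utilities()) -> str:
    """Pick the option with the highest expected utility.

    p_goal is the system's estimated probability that the user
    actually wants the automated service (e.g., from an intent model).
    """
    expected = {
        # Acting autonomously gambles on the belief being right.
        "act": p_goal * u.act_when_wanted + (1 - p_goal) * u.act_when_unwanted,
        # Asking pays an interruption cost but resolves the uncertainty:
        # the system then acts only if the user confirms.
        "ask": u.ask_cost + p_goal * u.act_when_wanted,
        # Taking no initiative is the zero-utility baseline.
        "do_nothing": 0.0,
    }
    return max(expected, key=expected.get)


if __name__ == "__main__":
    for p in (0.1, 0.5, 0.9):
        print(f"P(user wants service) = {p:.1f} -> {choose_initiative(p)}")

With these illustrative values, the rule behaves as mixed-initiative design intends: the system stays out of the way at low confidence (0.1), asks the user at moderate confidence (0.5), and acts autonomously only at high confidence (0.9).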