Jo Ann Oravec | University of Wisconsin-Whitewater

Papers by Jo Ann Oravec

Dramaturgical and Ethical Approaches to the Dark Side: An Introduction

Negative Dimensions of Human-Robot and Human-AI Interactions: Frightening Legacies, Emerging Dysfunctions, and Creepiness

Social and cultural studies of robots and AI, 2022

Modeling the AI-Driven Age of Abundance: Applying the Human-to-AI Leverage Ratio (HAILR) to Post-Labor Economics

This paper explores the transformative impact of AI on automating knowledge work leading to the anticipated 'Age of Abundance' in a post-labor society where work is performed by machines rather than human labor. Through a detailed model incorporating variables such as cost of computing, AI model efficiency, and human-equivalent production output (derived from the human-to-AI leverage ratio, or HAILR), we provide a nuanced albeit tentative analysis of future productivity trends and economic realities. The model, integrating conservative estimates like a 30% annual improvement in AI model efficiency, projects a substantial increase in productivity; by 2044 it indicates that just four hours of productive human labor could yield as much as 636 years of equivalent output. The model is not intended as a precise prediction but rather as a framework that allows scientists and laypersons to visualize the inevitability of the coming Age of Abundance. The assumptions are incidental: if work is automated at scale, one may reasonably change the assumptions in the model and still arrive at the same conclusion of extreme abundance. This research also critically examines the potential job displacement in knowledge and office work sectors, suggesting a loss of 9 out of 10 jobs by 2044 due to AI automation. The model also shows how the remaining workers will be empowered, with their efforts "leveraged" by AI technologies. We highlight the economic and societal implications of these findings, including the need for proactive public policy and corporate strategy to navigate the challenges and opportunities presented by AI-driven transformations. The study underscores the criticality of grasping these shifts in timely ways for future workforce planning and societal adaptation.
Although the model will certainly need to be revised to accommodate technological, political, and social changes, we believe that its simplicity, flexibility, and clarity can earn it a significant role in policy discourse. (Electronic copy available at: https://ssrn.com/abstract=4663704)

Literature Review

The notion of the 'knowledge worker' was developed by Peter Drucker in his 1959 book, The Landmarks of Tomorrow (1). Drucker was a pioneer in contrasting knowledge work with manual labor in his managerial analyses. In our paper, the phrase "knowledge work" includes cognitive work generally performed on a computer: the efforts of programmers, scientists, writers, and engineers who produce and handle information (2). Office work (from simple clerical tasks to complex, multi-stage efforts) is especially amenable to AI automation, and AI capabilities are also making many knowledge work efforts involving high-level thinking (such as medical and legal jobs) amenable to automation. Many early efforts to analyze and understand the dimensions of knowledge work from economic perspectives were inspired by the 1970s writings of Marc Porat (3), following the lead of Fritz Machlup (4) in the 1960s. Mapping the impact of particular technologies such as AI on the productivity of workers has been an activity of many researchers in the decades that followed, as outlined in the sections to come. The modeling effort described in this paper is intended to use straightforward terms and common-sense concepts in ways that make the models usable in public policy deliberations as well as in community outreach or business planning. Providing clear yet powerful data visualizations and conceptual tools in these forums will focus these discussions and stimulate the production of useful insights.
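The compounding logic behind a projection like this can be sketched in a few lines of code. The function and parameter values below are illustrative assumptions rather than the authors' actual model: only the 30% annual efficiency improvement comes from the abstract, while the compute-cost decline rate and the 2,000-hour work-year are hypothetical placeholders.

```python
# Illustrative sketch of a compounding leverage projection.
# Parameter names and values here are hypothetical assumptions, not
# the paper's actual model; only the 30% annual model-efficiency
# improvement is taken from the abstract.

def hailr(years: int,
          efficiency_gain: float = 0.30,       # abstract's conservative 30%/yr
          compute_cost_decline: float = 0.30,  # hypothetical assumption
          base_ratio: float = 1.0) -> float:
    """Human-to-AI leverage ratio after `years` of compounding gains."""
    # A 30% yearly fall in compute cost buys 1/(1 - 0.30) times more
    # compute per dollar; that multiplies the model-efficiency gains.
    annual_factor = (1 + efficiency_gain) / (1 - compute_cost_decline)
    return base_ratio * annual_factor ** years

# Four hours of human labor, leveraged over 20 years, expressed as
# work-years of equivalent output (assuming ~2,000 work hours/year).
equivalent_years = 4 * hailr(20) / 2000
print(round(equivalent_years))
```

Even with only these two assumed factors, the projection lands in the hundreds of work-years of equivalent output per four hours of labor; the abstract's 636-year figure would correspond to a somewhat higher combined annual factor. The point of the sketch is the one the paper makes: under almost any plausibly compounding assumptions, leverage becomes extreme within two decades.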

The Moral Imagination in an Era of “Gaming Academia”: Implications of Emerging Reputational Issues in Scholarly Activities for Knowledge Organization Practices

KNOWLEDGE ORGANIZATION, 2015

Jo Ann Oravec is a professor of information technology in the College of Business and Economics at the University of Wisconsin at Whitewater. She received her MBA, MS, MA, and PhD at UW-Madison. She chaired the Privacy Council of the State of Wisconsin, the nation's first state-level council dealing with information technology and privacy. She is the author of Virtual Individuals, Virtual Groups: Human Dimensions of Groupware and Computer Networking. She has written extensively on privacy, American studies, futurism, online reputational systems, artificial intelligence, and emerging technologies. She has held visiting fellow positions at both Cambridge and Oxford. Oravec, Jo Ann. The Moral Imagination in an Era of "Gaming Academia": Implications of Emerging Reputational Issues in Scholarly Activities for Knowledge Organization Practices. Knowledge Organization 42(5), 316-323. 24 references.

Promoting Honesty in Children, or Fostering Pathological Behaviour?

M/C Journal

Introduction

Many years ago, the moral fable of Pinocchio warned children about the evils of lying (Perella). This article explores how children are learning lie-related insights from genres of currently marketed polygraph-style “spy kits”, voice stress analysis apps, and electric shock-delivering games. These artifacts are emerging despite the fact that polygraphy and other lie detection approaches are restricted in use in certain business and community contexts, in part because of their dubious scientific support. However, lie detection devices are still applied in many real-life settings, often in critically important security, customs, and employment arenas (Bunn). A commonly accepted definition of the term “lie” is “a successful or unsuccessful deliberate attempt, without forewarning, to create in another a belief which the communicator considers to be untrue” (Vrij 15), which includes the use of lies in various gaming situations. Many children’s games involve some kind of dece...

TECHNOLOGICAL INTIMACIES: LOVE FOR ROBOTS, SMARTPHONES, AND OTHER AI-ENHANCED ENTITIES

Peace Chronicle, 2023

What kinds of romantic attachments are humans forming with robots, smartphones, and other artificial intelligence (AI)-enhanced entities? Movies, books, and television shows with these themes are becoming commonplace, and the notion of individuals being tightly coupled with their devices is increasingly familiar. People who share intimate thoughts with their smartphones and laptops are often manifesting strong feelings toward those entities (including affection), but the question of whether love is involved looms large. As the central case study in this essay relates, marriages between people and their sex robots have been recorded. We will explore whether using the word “love” to refer to such romantic human-robot-AI attachments makes sense and analyze some seemingly positive perspectives that can serve to generate discourse on the topic. This approach may infuriate people who feel that it denigrates humans to compare them so closely with robots in terms of love and romance; however (with apologies) this essay is intended to generate discourse that illuminates trends rather than fomenting upset feelings. The essay also explores the unsettling potential for these human-machine attachments to diminish the social and psychological influences of intimate relationships among human beings. Will humans cherish each other less if non-human romantic alternatives are available?

Artificial Intelligence Implications for Academic Cheating: Expanding the Dimensions of Responsible Human-AI Collaboration with ChatGPT and Bard

Journal of Interactive Learning Research, 2023

Cheating is a growing academic and ethical concern in higher education. This article examines the rise of artificial intelligence (AI) generative chatbots for use in education and provides a review of research literature and relevant scholarship concerning the cheating-related issues involved and their implications for pedagogy. The technological “arms race” that pits cheating-detection system developers against technology-savvy students is attracting increased attention to cheating. AI has added new dimensions to academic cheating challenges as students (as well as faculty and staff) can easily access powerful systems for generating content that can be presented in assignments, exams, or published papers as their own. AI methodology is also providing some emerging anti-cheating approaches, including facial recognition and watermarking. This article provides an overview of human/AI collaboration approaches and frames some educational misuses of such AI generative systems as ChatGPT and Bard as forms of “misattributed co-authorship.” As with other kinds of collaborations, the work that students produce with AI assistance can be presented in transparent and straightforward modes or (unfortunately) in more opaque and ethically problematic ways. However, rather than just for catching or entrapping students, the emerging varieties of technological cheating-detection strategies can be used to assist students in learning how to properly document and attribute their AI-empowered as well as human-human collaborations. Construing misuses of AI generative systems as misattributed co-authorship recognizes the growing capabilities of these tools, and stressing responsible and mindful usage by students can help prepare them for a highly collaborative, AI-saturated future.

Promoting Honesty in Children, or Fostering Pathological Behaviour? Emerging Varieties of Lie Detection Toys and Games

M/C Journal, 2023

A commonly accepted definition of the term “lie” is “a successful or unsuccessful deliberate attempt, without forewarning, to create in another a belief which the communicator considers to be untrue” (Vrij 15), which includes the use of lies in various gaming situations. Many children’s games involve some kind of deception, and mental privacy considerations are important in many social contexts (such as “keeping a poker face”). The dystopian scenario of children learning basic honesty notions through technologically-enabled lie detection games scripted by corporate developers presents frightening prospects. These lie detection toys and games impart important moral perspectives through technological and algorithmic means (including electrical shocks and online shaming) rather than through human modelling and teaching. They normalise and lessen the seriousness of lying by reducing it into a game. In this article I focus on United States and United Kingdom toys and games, but comparable lie detection approaches have permeated other nations and cultures.

Love, Sex, and Robots: Technological Shaping of Intimate Relationships

Social and cultural studies of robots and AI, 2022

Cyber Bullying

Experts in a Box

Not Now, Perhaps Later: Time Capsules as Communications with the Future + Statement

Leonardo Electronic Almanac, 2013

Time capsules are designed to remove selected objects (both physical and virtual) from the streams of everyday use and destruction, toward the goal of placing them in the reach of individuals in the future. This article describes how items are sequestered from present applications and transported to a physical and conceptual space in which they will be received at a particular time in the future with minimal alteration and modification. It analyzes inadvertent time capsule construction (tar pits and tunnels) as well as more deliberate familial, community, didactic, and survivalist varieties. Time capsule construction often optimistically posits future archivists or historians who are supposedly prepared to place the items in appropriate context once the capsule is opened. The article also explores the future of time capsule development with projections as to how the character of these efforts may change as the number of online capsules (and amount of digital material stored)...

Virtual Individuals, Virtual Groups: Bibliography

Emerging Innovations in Cheating Detection in Online Educational and Workplace Contexts: Social Dimensions of Dishonesty Surveillance

Emerging “cyber hygiene” practices for the Internet of Things (IoT): Professional issues in consulting clients and educating users on IoT privacy and security

2017 IEEE International Professional Communication Conference (ProComm), 2017

“Cyber hygiene” strategies for the Internet of Things (IoT) may soon expand far beyond the various approaches that are prescribed for today's computing technologies in workplaces and households (the latter including frequently changing passwords and installing malware protections). This paper explores the various roles of health professionals, insurance agents, marketers, lawyers, financial specialists, and other professionals who are working with clients and consumers during this period in which IoT devices are evolving rapidly and the IoT-produced data's privacy and other legal statuses are still murky. Roles of technology developers and implementers may also shift as IoT data are retained for an assortment of technical and diagnostic purposes but are later requested or subpoenaed for specific investigations or legal proceedings. The paper will also outline the needs for input to public policy discourse by communications professionals who have some insights as to how IoT advances may impact their clients and society as a whole. In the near future, education and communications professionals may also empower households by designing instructional materials for use in establishing cyber hygiene routines and resolving IoT-related concerns.

“Don't Be Evil” and Beyond for High Tech Organizations

Handbook of Research on Civic Engagement and Social Change in Contemporary Society, 2018

Societal pressures on high tech organizations to define and disseminate their ethical stances are increasing as the influences of the technologies involved expand. Many Internet-based businesses have emerged in the past decades; growing numbers of them have developed some kind of moral declaration in the form of mottos or ethical statements. For example, the corporate motto “don't be evil” (often linked with Google/Alphabet) has generated considerable controversy about social and cultural impacts of search engines. After addressing the origins of these mottos and statements, this chapter projects the future of such ethical manifestations in the context of critically-important privacy, security, and economic concerns. The chapter analyzes potential influences of the ethical expressions on corporate social responsibility (CSR) initiatives, and it analyzes issues of whether “large-grained” corporate mottos can indeed serve to supply social and ethical guidance for organization...

Entering the Second Century of Robotics and Intelligent Technologies: An Opening Note

Social and cultural studies of robots and AI, 2022

This opening chapter describes how the problem of “bad robots” may not be solved by making robots seem more human. We may be living with robots, automated vehicles, and other AI-related entities that many of us perceive to be “dark” and “creepy” for many years to come. Some of these dark traits are the result of designers’ decisions, such as the manners in which certain robots apparently elicit fear on the part of some humans. Others are part of the co-production efforts of their users, such as in the way that a legally-available sex robot can often be modified to become a creepy and malicious child sex robot. The possibility that humans will have a low social place in relation to robots relates to another of the major themes of this book, that of efforts to construe robots as “outclassing” humans in substantial ways. The combination of robots often acting in rogue or unpredictable ways and humans themselves feeling significantly outclassed signals critical social and economic problems for societies.

Rage against robots: Emotional and motivational dimensions of anti-robot attacks, robot sabotage, and robot bullying

Technological Forecasting and Social Change, 2023

An assortment of attacks and aggressive behaviors toward artificial intelligence (AI)-enhanced robots has recently emerged. This paper explores questions of how the human emotions and motivations involved in attacks on robots are being framed as well as how the incidents are presented in social media and traditional broadcast channels. The paper analyzes how robots are construed as the "other" in many contexts, often akin to the perspectives of the "machine wreckers" of past centuries. It argues that a focus on the emotions and motivations of robot attackers can be useful in mitigating anti-robot activities. "Hate crime" or "hate incident" characterizations of some anti-robot efforts should be utilized in discourse as well as in some future legislative efforts. Hate crime framings can aid in identifying generalized antagonism and antipathy toward robots as autonomous and intelligent entities in the context of anti-robot attacks. Human self-defense may become a critical issue in some anti-robot attacks, especially when apparently malfunctioning robots are involved. Attacks on robots present individuals with vicarious opportunities to participate in anti-robot activity and also potentially elicit other aggressive, copycat actions as videos and narrative accounts are shared via social media as well as personal networks.

AI, biometric analysis, and emerging cheating detection systems: The engineering of academic integrity?

Education Policy Analysis Archives

Cheating behaviors have been construed as a continuing and somewhat vexing issue for academic institutions as they increasingly conduct educational processes online and impose metrics on instructional evaluation. Research, development, and implementation initiatives on cheating detection have gained new dimensions with the advent of artificial intelligence (AI) applications; they have also engendered special challenges in terms of their social, ethical, and cultural implications. An assortment of commercial cheating-detection systems have been injected into educational contexts with little input on the part of relevant stakeholders. This paper expands upon several specific cases of how systems for the detection of cheating have recently been implemented in higher education institutions in the US and UK. It investigates how such vehicles as wearable technologies, eye scanning, and keystroke capturing are being used to collect the data used for anti-cheating initiatives, often involving syst...

Research paper thumbnail of Dramaturgical and Ethical Approaches to the Dark Side: An Introduction

Research paper thumbnail of Negative Dimensions of Human-Robot and Human-AI Interactions: Frightening Legacies, Emerging Dysfunctions, and Creepiness

Social and cultural studies of robots and AI, 2022

Research paper thumbnail of Modeling the AI-Driven Age of Abundance: Applying the Human-to-AI Leverage Ratio (HAILR) to Post-Labor Economics

This paper explores the transformative impact of AI on automating knowledge work leading to the a... more This paper explores the transformative impact of AI on automating knowledge work leading to the anticipated 'Age of Abundance' in a post-labor society where work is performed by machines rather than human labor. Through a detailed model incorporating variables such as cost of computing, AI model efficiency, and human-equivalent production output (derived from the human-to-AI leverage ratio, or HAILR), we provide a nuanced albeit tentative analysis of future productivity trends and economic realities. The model, integrating conservative estimates like a 30% annual improvement in AI model efficiency, projects a substantial increase in productivity; by 2044 it indicates that just four hours of productive human labor could yield as much as 636 years of equivalent output. The model is not intended as a precise prediction, rather a framework to allow scientists and laypersons to visualize the inevitability of the coming Age of Abundance. The assumptions are incidental. If work is automated at scale, one may reasonably change the assumptions in the model and still arrive at the same conclusion: extreme abundance. This research also critically examines the potential job displacement in knowledge and office work sectors, suggesting a loss of 9 out of 10 jobs by 2044 due to AI automation. The model also shows how the remaining workers will be empowered with their efforts "leveraged" by AI technologies. We highlight the economic and societal implications of these findings, including the need for proactive public policy and corporate strategy to navigate the challenges and opportunities presented by AI-driven transformations. The study underscores the criticality of grasping these shifts in timely ways for future workforce planning and societal adaptation. 
Although the model will certainly need to be revised to accommodate technological, political, and social changes, we believe that its simplicity, flexibility, and clarity can earn it a significant role in policy discourse. 1 Electronic copy available at: https://ssrn.com/abstract=4663704 Literature Review The notion of 'knowledge worker' was developed by Peter Drucker in his 1959 book, The Landmarks of Tomorrow (1). Drucker was a pioneer in contrasting knowledge work with manual labor in his managerial analyses. In our paper, the phrase "knowledge work" includes cognitive work generally performed on a computer. This includes the efforts of programmers, scientists, writers, and engineers who produce and handle information. (2) Office work (from simple clerical tasks to complex, multi-stage efforts) is especially amenable to AI automation. AI capabilities are also making many knowledge work efforts involving high-level thinking (such as medical and legal jobs) amenable to automation. Many early efforts to analyze and understand the dimensions of knowledge work from economic perspectives were inspired by the 1970s writings of Marc Porat (3) , following the lead of Fritz Machlup (4) in the 1960s. Mapping the impact of particular technologies such as AI on the productivity of workers has been an activity of many researchers in the decades that followed, as outlined in the se ctions to come. The modeling effort described in this paper is intended to use straightforward terms and common-sense concepts in ways that make the models usable in public policy deliberations as well as in community outreach or business planning. Providing clear yet powerful data visualizations and conceptual tools in these forums will focus these discussions and stimulate the production of useful insights.

Research paper thumbnail of The Moral Imagination in an Era of “Gaming Academia”: Implications of Emerging Reputational Issues in Scholarly Activities for Knowledge Organization Practices

KNOWLEDGE ORGANIZATION, 2015

is a professor of information technology in the College of Business and Economics at the Universi... more is a professor of information technology in the College of Business and Economics at the University of Wisconsin at Whitewater. She received her MBA, MS, MA, and PhD at UW-Madison. She chaired the Privacy Council of the State of Wisconsin, the nation's first state-level council dealing with information technology and privacy. She is author of Virtual Individuals, Virtual Groups: Human Dimensions of Groupware and Computer Networking. She has written extensively on privacy, American studies, futurism, online reputational systems, artificial intelligence, and emerging technologies. She has held visiting fellow positions at both Cambridge and Oxford. Oravec, Jo Ann. The Moral Imagination in an Era of "Gaming Academia": Implications of Emerging Reputational Issues in Scholarly Activities for Knowledge Organization Practices. Knowledge Organization. 42(5), 316-323. 24 references.

Research paper thumbnail of Promoting Honesty in Children, or Fostering Pathological Behaviour?

M/C Journal

Introduction Many years ago, the moral fable of Pinocchio warned children about the evils of lyin... more Introduction Many years ago, the moral fable of Pinocchio warned children about the evils of lying (Perella). This article explores how children are learning lie-related insights from genres of currently marketed polygraph-style “spy kits”, voice stress analysis apps, and electric shock-delivering games. These artifacts are emerging despite the fact that polygraphy and other lie detection approaches are restricted in use in certain business and community contexts, in part because of their dubious scientific support. However, lie detection devices are still applied in many real-life settings, often in critically important security, customs, and employment arenas (Bunn). A commonly accepted definition of the term “lie” is “a successful or unsuccessful deliberate attempt, without forewarning, to create in another a belief which the communicator considers to be untrue” (Vrij 15), which includes the use of lies in various gaming situations. Many children’s games involve some kind of dece...

Research paper thumbnail of TECHNOLOGICAL INTIMACIES: LOVE FOR ROBOTS, SMARTPHONES, AND OTHER AI-ENHANCED ENTITIES

Peace Chronicle, 2023

What kinds of romantic attachments are humans forming with robots, smartphones, and other artific... more What kinds of romantic attachments are humans forming with robots, smartphones, and other artificial intelligence (AI)-enhanced entities? Movies, books, and television shows with these themes are becoming commonplace, and the notion of individuals being tightly coupled with their devices is increasingly familiar. People who share intimate thoughts with their smartphones and laptops are often manifesting strong feelings toward those entities (including affection), but the question of whether love is involved looms large. As the central case study in this essay relates, marriages between people and their sex robots have been recorded. We will explore whether using the word “love” to refer to such romantic human-robot-AI attachments makes sense and analyze some seemingly-positive perspectives that can serve to generate discourse on the topic. This approach may infuriate people who feel that it denigrates humans to compare them so closely with robots in terms of love and romance; however (with apologies) this essay is intended to generate discourse that illuminates trends rather than fomenting upset feelings. The essay also explores the unsettling potential for these human-machine attachments to diminish the social and psychological influences of intimate relationships among human beings. Will humans cherish each other less if non-human romantic alternatives are available?

Research paper thumbnail of Artificial Intelligence Implications for Academic Cheating: Expanding the Dimensions of Responsible Human-AI Collaboration with ChatGPT and Bard

Journal of Interactive Learning Research, 2023

Cheating is a growing academic and ethical concern in higher education. This article examines the... more Cheating is a growing academic and ethical concern in higher education. This article examines the rise of artificial intelligence (AI) generative chatbots for use in education and provides a review of research literature and relevant scholarship concerning the cheating-related issues involved and their implications for pedagogy. The technological “arms race”
that involves cheating-detection system developers versus
technology savvy students is attracting increased attention to
cheating. AI has added new dimensions to academic cheating challenges as students (as well as faculty and staff) can easily access powerful systems for generating content that can be presented in assignments, exams, or published papers as their own. AI methodology is also providing some emerging anticheating approaches, including facial recognition and watermarking. This article provides an overview of human/AI collaboration
approaches and frames some educational misuses of such AI generative systems as ChatGPT and Bard as forms of “misattributed co-authorship.” As with other kinds of collaborations, the work that students produce with AI assistance can be presented in transparent and straightforward modes or (unfortunately) in opaquer and ethically-problematic ways. However, rather than just for catching or entrapping students, the emerging varieties of technological cheating-detection strategies can be used to assist students in learning how to document and attribute properly their AI-empowered as well as human-human collaborations. Construing misuses of AI generative systems as misattributed co-authorship can recognize the growing capabilities of these tools and how stressing responsible and mindful usage by students can help prepare them for a highly collaborative, AI-saturated future.

Research paper thumbnail of Promoting Honesty in Children, or Fostering Pathological Behaviour? Emerging Varieties of Lie Detection Toys and Games

M/C Journal, 2023

A commonly accepted definition of the term “lie” is “a successful or unsuccessful deliberate attempt, without forewarning, to create in another a belief which the communicator considers to be untrue” (Vrij 15), which includes the use of lies in various gaming situations. Many children’s games involve some kind of deception, and mental privacy considerations are important in many social contexts (such as “keeping a poker face”). The dystopian scenario of children learning basic honesty notions through technologically-enabled lie detection games scripted by corporate developers presents frightening prospects. These lie detection toys and games impart important moral perspectives through technological and algorithmic means (including electrical shocks and online shaming) rather than through human modelling and teaching. They normalise and lessen the seriousness of lying by reducing it into a game. In this article I focus on United States and United Kingdom toys and games, but comparable lie detection approaches have permeated other nations and cultures.

Research paper thumbnail of Love, Sex, and Robots: Technological Shaping of Intimate Relationships

Social and cultural studies of robots and AI, 2022

Research paper thumbnail of Cyber Bullying

Research paper thumbnail of Experts in a Box

Research paper thumbnail of Not Now, Perhaps Later: Time Capsules as Communications with the Future + Statement

Leonardo electronic almanac, 2013

Time capsules are designed to remove selected objects (both physical and virtual) from the streams of everyday use and destruction, toward the goal of placing them in the reach of individuals in the future. This article describes how items are sequestered from present applications and transported to a physical and conceptual space in which they will be received at a particular time in the future with minimal alteration and modification. It analyzes inadvertent time capsule construction (tar pits and tunnels) as well as more deliberate familial, community, didactic, and survivalist varieties. Future archivists or historians are often optimistically posited in time capsule construction who are supposedly prepared to place the items in appropriate context once the capsule is opened. The article also explores the future of time capsule development with projections as to how the character of these efforts may change as the number of online capsules (and amount of digital material stored)...

Research paper thumbnail of Virtual Individuals, Virtual Groups: Bibliography

Research paper thumbnail of Emerging Innovations in Cheating Detection in Online Educational and Workplace Contexts: Social Dimensions of Dishonesty Surveillance

Research paper thumbnail of Emerging “cyber hygiene” practices for the Internet of Things (IoT): Professional issues in consulting clients and educating users on IoT privacy and security

2017 IEEE International Professional Communication Conference (ProComm), 2017

“Cyber hygiene” strategies for the Internet of Things (IoT) may soon expand far beyond the various approaches that are prescribed for today's computing technologies in workplaces and households (the latter including frequently changing passwords and installing malware protections). This paper explores the various roles of health professionals, insurance agents, marketers, lawyers, financial specialists, and other professionals who are working with clients and consumers during this period in which IoT devices are evolving rapidly and the IoT-produced data's privacy and other legal statuses are still murky. Roles of technology developers and implementers may also shift as IoT data are retained for an assortment of technical and diagnostic purposes but are later requested or subpoenaed for specific investigations or legal proceedings. The paper will also outline the needs for input to public policy discourse by communications professionals who have some insights as to how IoT advances may impact their clients and society as a whole. In the near future, education and communications professionals may also empower households by designing instructional materials for use in establishing cyber hygiene routines and resolving IoT-related concerns.

Research paper thumbnail of “Don't Be Evil” and Beyond for High Tech Organizations

Handbook of Research on Civic Engagement and Social Change in Contemporary Society, 2018

Societal pressures on high tech organizations to define and disseminate their ethical stances are increasing as the influences of the technologies involved expand. Many Internet-based businesses have emerged in the past decades; growing numbers of them have developed some kind of moral declaration in the form of mottos or ethical statements. For example, the corporate motto “don't be evil” (often linked with Google/Alphabet) has generated considerable controversy about social and cultural impacts of search engines. After addressing the origins of these mottos and statements, this chapter projects the future of such ethical manifestations in the context of critically-important privacy, security, and economic concerns. The chapter analyzes potential influences of the ethical expressions on corporate social responsibility (CSR) initiatives. The chapter analyzes issues of whether “large-grained” corporate mottos can indeed serve to supply social and ethical guidance for organization...

Research paper thumbnail of Entering the Second Century of Robotics and Intelligent Technologies: An Opening Note

Social and cultural studies of robots and AI, 2022

This opening chapter describes how the problem of “bad robots” may not be solved by making robots seem more human. We may be living with robots, automated vehicles, and other AI-related entities that many of us perceive to be “dark” and “creepy” for many years to come. Some of these dark traits are the result of designers’ decisions, such as the manners in which certain robots apparently elicit fear on the part of some humans. Others are part of the co-production efforts of their users, such as in the way that a legally-available sex robot can often be modified to become a creepy and malicious child sex robot. The possibility that humans will have a low social place in relation to robots relates to another of the major themes of this book, that of efforts to construe robots as “outclassing” humans in substantial ways. The combination of robots often acting in rogue or unpredictable ways and humans themselves as feeling significantly outclassed signals critical social and economic problems for societies.

Research paper thumbnail of Rage against robots: Emotional and motivational dimensions of anti-robot attacks, robot sabotage, and robot bullying

Technological Forecasting and Social Change, 2023

An assortment of kinds of attacks and aggressive behaviors toward artificial intelligence (AI)-enhanced robots has recently emerged. This paper explores questions of how the human emotions and motivations involved in attacks of robots are being framed as well as how the incidents are presented in social media and traditional broadcast channels. The paper analyzes how robots are construed as the "other" in many contexts, often akin to the perspectives of "machine wreckers" of past centuries. It argues that focuses on the emotions and motivations of robot attackers can be useful in mitigating anti-robot activities. "Hate crime" or "hate incident" characterizations of some anti-robot efforts should be utilized in discourse as well as some future legislative efforts. Hate crime framings can aid in identifying generalized antagonism and antipathy toward robots as autonomous and intelligent entities in the context of antirobot attacks. Human self-defense may become a critical issue in some anti-robot attacks, especially when apparently malfunctioning robots are involved. Attacks of robots present individuals with vicarious opportunities to participate in anti-robot activity and also potentially elicit other aggressive, copycat actions as videos and narrative accounts are shared via social media as well as personal networks.

Research paper thumbnail of AI, biometric analysis, and emerging cheating detection systems: The engineering of academic integrity?

Education Policy Analysis Archives

Cheating behaviors have been construed as a continuing and somewhat vexing issue for academic institutions as they increasingly conduct educational processes online and impose metrics on instructional evaluation. Research, development, and implementation initiatives on cheating detection have gained new dimensions in the advent of artificial intelligence (AI) applications; they have also engendered special challenges in terms of their social, ethical, and cultural implications. An assortment of commercial cheating–detection systems have been injected into educational contexts with little input on the part of relevant stakeholders. This paper expands several specific cases of how systems for the detection of cheating have recently been implemented in higher education institutions in the US and UK. It investigates how such vehicles as wearable technologies, eye scanning, and keystroke capturing are being used to collect the data used for anti-cheating initiatives, often involving syst...

Research paper thumbnail of Good Robot, Bad Robot: Dark and Creepy Sides of Robotics, Autonomous Vehicles, and AI

Palgrave Macmillan (Springer)

This book explores how robotics and artificial intelligence (AI) can enhance human lives but also have unsettling "dark sides." It examines expanding forms of negativity and anxiety about robots, AI, and autonomous vehicles as our human environments are reengineered for intelligent military and security systems and for optimal workplace and domestic operations. It focuses on the impacts of initiatives to make robot interactions more humanlike and less creepy (as with domestic and sex robots). It analyzes the emerging resistances against these entities in the wake of omnipresent AI applications (such as "killer robots" and ubiquitous surveillance). It unpacks efforts by developers to have ethical and social influences on robotics and AI, and confronts the AI hype that is designed to shield the entities from criticism. The book draws from science fiction, dramaturgical, ethical, and legal literatures as well as current research agendas of corporations. Engineers, implementers, and researchers have often encountered users' fears and aggressive actions against intelligent entities, especially in the wake of deaths of humans by robots and autonomous vehicles. The book is an invaluable resource for developers and researchers in the field, as well as curious readers who want to play proactive roles in shaping future technologies.

Research paper thumbnail of On-Line Advocacy of Violence and Hate-Group Activity: The Internet as a Platform for the Expression of Youth Aggression and Anxiety

Hate-group websites on the Internet are growing in number and variety, along with information rel... more Hate-group websites on the Internet are growing in number and variety, along with information related to violence, harassment, and bomb-making.