Maya Indira Ganesh - Profile on Academia.edu

Papers by Maya Indira Ganesh

Between metaphor and meaning: AI and being human

Interactions, 2022

A survey of AI metaphors in advertising, marketing, policy, strategy, and other public documentation in 13 countries and nine languages.

On the Machine Learning of Ethical Judgments from Natural Language

Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Ethics is one of the longest standing intellectual endeavors of humanity. In recent years, the fields of AI and NLP have attempted to address ethical issues of harmful outcomes in machine learning systems that are made to interface with humans. One recent approach in this vein is the construction of NLP morality models that can take in arbitrary text and output a moral judgment about the situation described. In this work, we offer a critique of such NLP methods for automating ethical decision-making. Through an audit of recent work on computational approaches for predicting morality, we examine the broader issues that arise from such efforts. We conclude with a discussion of how machine ethics could usefully proceed in NLP, by focusing on current and near-future uses of technology, in a way that centers around transparency, democratic values, and allows for straightforward accountability.

A Word on Machine Ethics: A Response to Jiang et al. (2021)

arXiv, 2021

Ethics is one of the longest standing intellectual endeavors of humanity. In recent years, the fields of AI and NLP have attempted to wrangle with how learning systems that interact with humans should be constrained to behave ethically. One proposal in this vein is the construction of morality models that can take in arbitrary text and output a moral judgment about the situation described. In this work, we focus on a single case study of the recently proposed Delphi model and offer a critique of the project’s proposed method of automating morality judgments. Through an audit of Delphi, we examine broader issues that would be applicable to any similar attempt. We conclude with a discussion of how machine ethics could usefully proceed, by focusing on current and near-future uses of technology, in a way that centers around transparency, democratic values, and allows for straightforward accountability.

Privacy, visibility, anonymity: Dilemmas in tech use by marginalised communities

Technology for transparency and accountability (T4T&A) initiatives intend to make the public functioning of government visible, and states accountable to citizens for their actions. This research assumes that privacy and anonymity are important tactics for activists using technology, especially in transparency and accountability work that challenges institutions and authorities. However, privacy is practically impossible to maintain on popular, commonly available, proprietary platforms, many of which are deployed in T4T&A activities. Does this limit activists’ work with technology and, if so, how? What are the other risks and barriers marginalised people face in their use of technology? This paper synthesises reflections and learnings from two studies, in Kenya and South Africa, about how marginalised communities – lesbian, gay, bisexual, trans and queer (LGBTQ) people in Nairobi, Kenya, and economically marginalised housing and urban development rights activists in Johannesburg, South Africa – use technologies commonly applied in transparency and accountability work, and the limits of their use of these technologies.

A problem with trolleys

This talk presents an overview of how the commercial development of self-driving cars is significantly shaping conceptions of ethics in data societies, and what this means for an understanding of human and machine interactions, intelligence and autonomy.

Reference editing

EROTICS: Sex, rights and the internet. An exploratory research study. APC would like to thank the Ford Foundation for their support of this innovative research.

For Association for Progressive Communications Women's Networking Support Program

5 Spectres of AI

Artificial intelligence (AI) is arguably the new spectre of digital cultures. By filtering information out of existing data, it determines the way we see the world and how the world sees us. Yet the vision algorithms have of our future is built on our past. What we teach these algorithms ultimately reflects back on us, and it is therefore no surprise when artificial intelligence starts to classify on the basis of race, class and gender. This odd 'hauntology' is at the core of what is currently discussed under the labels of algorithmic bias or pattern discrimination. By imposing identity on input data, in order to filter, that is, to discriminate signals from noise, machine learning algorithms invoke a ghost story that works at two levels. First, it proposes that there is a reality that is not this one, and that is beyond our reach; to consider this reality can be unnerving. Second, the ghost story is about the horror of the past (its ambitions, materiality and promises) returning compulsively and taking on a present form because of something that went terribly wrong in the passage between one conception of reality and the next. The spectre does not exist, we claim, and yet here it is in our midst, creating fear, and reshaping our grip on reality. Over the last few years, we have been witnessing a shift in the conception of artificial intelligence: away from so-called 'expert systems'

The ironies of autonomy

Humanities and Social Sciences Communications, 2020

Current research on autonomous vehicles tends to focus on making them safer through policies to manage innovation, and integration into existing urban and mobility systems. This article takes social, cultural and philosophical approaches instead, critically appraising how human subjectivity, and human-machine relations, are shifting and changing through the application of big data and algorithmic techniques to the automation of driving. Twentieth-century approaches to safety engineering and automation, be it in an airplane or automobile, have sought to either erase the human because she is error-prone and inefficient; have design compensate for the limits of the human; or at least mould human into machine through an assessment of the complementary competencies of each. The ‘irony of automation’ is an observation of the tensions emerging therein; for example, that the computationally superior and efficient machine actually needs human operators to ensure that it is working effectively; and ...

Entanglement

A Peer-Reviewed Journal About, 2017

This paper is based on driver-less car technology as currently being developed by Google and Tesla, two companies that amplify their work in the media. More specifically, I focus on the moment of real and imagined crashes involving driver-less cars, and argue that the narrative of ‘ethics of driver-less cars’ indicates a shift in the construction of ethics, as an outcome of machine learning rather than a framework of values. Through applications of the ‘Trolley Problem’, among other tests, ethics has been transformed into a valuation based on processing of big data. Thus ethics-as-software enables what I refer to as big data-driven accountability. In this formulation, ‘accountability’ is distinguished from ‘responsibility’; responsibility implies intentionality and can only be assigned to humans, whereas accountability includes a wide net of actors and interactions (in Simon). ‘Transparency’ is one of the more established, widely acknowledged mechanisms for accountability; based on ...

‘Mobile Love Videos Make Me Feel Healthy’: Rethinking ICTs for Development

IDS Working Papers, 2010

ICT4D discourses tell stories of poor farmers using the internet to compare crop prices, and nurses who use SMS to remind people to take their antiretrovirals. Do nurses also use work mobiles to make private phone calls? Do farmers surf for pornography when they are supposed to be comparing crop prices? In the ICTs for Development discourse, ICTs are positioned as tools and processes to fight poverty and facilitate empowerment through economic and educational gains; I argue that this discourse ignores the diverse ways in which the poor and the marginalised use media technologies in their everyday lives for social networking, entertainment, to produce and participate in intimate and erotic economies, and to express and experience their sexuality, relationships, pleasure and intimacy in ways that could also be considered empowering. Media use (like development) is an area where sexualities are actively made and remade. ICT4D needs to include an understanding of the potential emotional and sexual effects of interventions. Ethnographic studies of media consumption and use are needed to provide a deeper understanding of sexuality in a way that contributes to applications in a development context. This paper presents one such ethnographic study on how a community uses mobile phones, with the hope that it may provide clues and cues for people and organisations working across these related areas of ICT4D, sexuality, culture and gender. This paper presents a short pilot project of in-depth interviews with six self-identified Kothis, a South Asian feminine male identity. This was supported by observations of and participation in weekly support group meetings in an HIV-related NGO of which they are members. The study finds that ICTs change possibilities for finding sex, love and social mobility, as well as presenting new channels for harassment by police and others.

Entanglement: Machine learning and human ethics in driver-less car crashes

Algorithmic regulation of everyday life, institutions and social systems increases with little oversight or transparency, and yet usually with significant social outcomes (Angwin et al.; Pasquale). Therefore, the need for an ‘ethics of algorithms’ (Ananny; CIHR) and ‘accountability’ of algorithms (Diakopoulos) has been raised. The “constellation of technologies” we have come to refer to as ‘artificial intelligence’ (Crawford and Whittaker) enables an anxiety that sits alongside the financial speculation, experimentation and entrepreneurial enthusiasm that feeds the Silicon Valley gold rush of ‘innovation’. How can machine intelligence optimise its decision-making and avoid errors, mistakes and accidents? Where machines are not directly programmed but learn, then who or what is accountable for errors and accidents, and how can this accountability be determined?

Two computer scientists and a cultural scientist get hit by a driver-less car: a method for situating knowledge in the cross-disciplinary study of F-A-T in machine learning: translation tutorial

In a workshop organized in December 2017 in Leiden, the Netherlands, a group of lawyers, computer scientists, artists, activists and social and cultural scientists collectively read a computer science paper about 'improving fairness'. This session was perceived by many participants as eye-opening on how different epistemologies shape approaches to the problem, method and solutions, thus enabling further cross-disciplinary discussions during the rest of the workshop. For many participants it was both refreshing and challenging, in equal measure, to understand how another discipline approached the problem of fairness. Now, as a follow-up we propose a translation tutorial that will engage participants at the FAT* conference in a similar exercise. We will invite participants to work in small groups reading excerpts of academic papers from different disciplinary perspectives on the same theme. We argue that most of us do not read outside our disciplines and thus are not familiar ...

Privacy, anonymity, visibility: dilemmas in tech use by marginalised communities

The Difference that Difference Makes

The essay "AI and the Imagination to Overcome Difference" examines how the imagination of AI systems emerges from the instrumentalization of technology: that a singular, unified technology will address an astonishing diversity of nuanced social conditions like language translation, work, and the automation of war. There is a flattening of differences, they say, between human and machine, that ignores the social, cultural and political dimensions of these complex technologies. In this comment piece, I want to think through 'difference' in terms of some of its synonyms, such as 'gap', 'distinction', 'diversity' and 'discrimination'; and differences not just between human and machine, but also between humans; and thus discuss the further implications of AI technologies in society.

Resistance and refusal to algorithmic harms: Varieties of ‘knowledge projects’

Media International Australia

Industrial, academic, activist, and policy research and advocacy movements formed around resisting ‘machine bias’, promoting ‘ethical AI’, and ‘fair ML’ have discursive implications for what constitutes harm and what resistance to algorithmic influence itself means, and this is deeply connected to which actors make epistemic claims about harm and resistance. We present a loose categorization of kinds of resistance to algorithmic systems: a dominant mode of resistance as ‘filtering up’ and being translated into design fixes by Big Tech; and advocacy and scholarship which bring a critical frame of lived experiences and scholarship around algorithmic systems as socio-technical entities. Three recent cases delve into how Big Tech responds to harms documented by marginalized groups; these highlight how harms are valued differently. Finally, we identify modes of refusal that recognize the limits of Big Tech's resistance; built on practices of feminist organizing, decoloniality, and New-L...

The Center For Humane Technology Does Not Want Your Attention II. On Time Well Spent and Ethics.

It's not you, it's your data: Trust, identity and fintech

Cyborgology, 2018

Dating in CRISPR futures. Speculating about the future and new technologies

Cyborgology, 2018

Values, technology, genetic engineering, CRISPR, ethics

Research paper thumbnail of Between metaphor and meaning

Research paper thumbnail of Between metaphor and meaning: AI and being human

Interactions , 2022

A survey of AI metaphors in advertising, marketing, policy, strategy, and other public documentat... more A survey of AI metaphors in advertising, marketing, policy, strategy, and other public documentation in 13 countries and nine languages

Research paper thumbnail of On the Machine Learning of Ethical Judgments from Natural Language

Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Ethics is one of the longest standing intellectual endeavors of humanity. In recent years, the fi... more Ethics is one of the longest standing intellectual endeavors of humanity. In recent years, the fields of AI and NLP have attempted to address ethical issues of harmful outcomes in machine learning systems that are made to interface with humans. One recent approach in this vein is the construction of NLP morality models that can take in arbitrary text and output a moral judgment about the situation described. In this work, we offer a critique of such NLP methods for automating ethical decision-making. Through an audit of recent work on computational approaches for predicting morality, we examine the broader issues that arise from such efforts. We conclude with a discussion of how machine ethics could usefully proceed in NLP, by focusing on current and near-future uses of technology, in a way that centers around transparency, democratic values, and allows for straightforward accountability.

Research paper thumbnail of A Word on Machine Ethics: A Response to Jiang et al. (2021)

ArXiv, 2021

Ethics is one of the longest standing intellectual endeavors of humanity. In recent years, the fi... more Ethics is one of the longest standing intellectual endeavors of humanity. In recent years, the fields of AI and NLP have attempted to wrangle with how learning systems that interact with humans should be constrained to behave ethically. One proposal in this vein is the construction of morality models that can take in arbitrary text and output a moral judgment about the situation described. In this work, we focus on a single case study of the recently proposed Delphi model and offer a critique of the project’s proposed method of automating morality judgments. Through an audit of Delphi, we examine broader issues that would be applicable to any similar attempt. We conclude with a discussion of how machine ethics could usefully proceed, by focusing on current and near-future uses of technology, in a way that centers around transparency, democratic values, and allows for straightforward accountability.

Research paper thumbnail of Privacy, visibility, anonymity: Dilemmas in tech use by marginalised communities

Privacy, visibility, anonymity: Dilemmas in tech use by marginalised communities

Technology for transparency and accountability (T4T&A) initiatives intend to make the pub... more Technology for transparency and accountability (T4T&A) initiatives intend to make the public functioning of government visible, and states accountable to citizens for their actions. This research assumes that privacy and anonymity are important tactics for activists using technology, especially in transparency and accountability work that challenges institutions and authorities. However, privacy is practically impossible to maintain on popular, commonly available, proprietary platforms, many of which are deployed in T4T&A activities. Does this limit activists’ work with technology and if so, how? What are the other risks and barriers marginalised people face in their use of technology? his paper synthesises reflections and learnings from two studies, in Kenya and South Africa, about how marginalised communities – lesbian, gay, bisexual, trans and queer (LGBTQ) people in Nairobi, Kenya, and economically marginalised housing and urban development rights activists in Johannesburg, South Africa – use technologies commonly applied in transparency and accountability work, and the limits of their use of these technologies.

Research paper thumbnail of A problem with trolleys

A problem with trolleys

This talk will present an overview of how the commercial development of self driving cars is sign... more This talk will present an overview of how the commercial development of self driving cars is significantly shaping conceptions of ethics in data societies, and what this means for an understanding of human and machine interactions, intelligence and autonomy.

Research paper thumbnail of Reference editing

Reference editing

EROTICS: Sex, rights and the internet An exploratory research study APC would like to thank the F... more EROTICS: Sex, rights and the internet An exploratory research study APC would like to thank the Ford Foundation for their support of this innovative research.

Research paper thumbnail of For Association for Progressive Communications Women's Networking Support Program

For Association for Progressive Communications Women's Networking Support Program

Research paper thumbnail of 5 Spectres of AI

Artificial intelligence (AI) is arguably the new spectre of digital cultures. By filtering inform... more Artificial intelligence (AI) is arguably the new spectre of digital cultures. By filtering information out of existing data, it determines the way we see the world and how the world sees us. Yet the vision algorithms have of our future is built on our past. What we teach these algorithms ultimately reflects back on us and it is therefore no surprise when artificial intelligence starts to classify on the basis of race, class and gender. This odd 'hauntology' 1 is at the core of what is currently discussed under the labels of algorithmic bias or pattern discrimination. 2 By imposing identity on input data, in order to filter, that is to discriminate signals from noise, machine learning algorithms invoke a ghost story that works at two levels. First, it proposes that there is a reality that is not this one, and that is beyond our reach; to consider this reality can be unnerving. Second, the ghost story is about the horror of the past-its ambitions, materiality and promises-returning compulsively and taking on a present form because of something that went terribly wrong in the passage between one conception of reality and the next. The spectre does not exist, we claim, and yet here it is in our midst, creating fear, and reshaping our grip on reality. 3 Over the last few years, we have been witnessing a shift in the conception of artificial intelligence: away from so-called 'expert systems'

Research paper thumbnail of The ironies of autonomy

Humanities and Social Sciences Communications, 2020

Current research on autonomous vehicles tends to focus on making them safer through policies to m... more Current research on autonomous vehicles tends to focus on making them safer through policies to manage innovation, and integration into existing urban and mobility systems. This article takes social, cultural and philosophical approaches instead, critically appraising how human subjectivity, and human-machine relations, are shifting and changing through the application of big data and algorithmic techniques to the automation of driving. 20th century approaches to safety engineering and automation—be it in an airplane or automobile-have sought to either erase the human because she is error-prone and inefficient; have design compensate for the limits of the human; or at least mould human into machine through an assessment of the complementary competencies of each. The ‘irony of automation’ is an observation of the tensions emerging therein; for example, that the computationally superior and efficient machine actually needs human operators to ensure that it is working effectively; and ...

Research paper thumbnail of Entanglement

A Peer-Reviewed Journal About, 2017

This paper is based on driver-less car technology as currently being developed by Google and Tesl... more This paper is based on driver-less car technology as currently being developed by Google and Tesla, two companies that amplify their work in the media. More specifically, I focus on the moment of real and imagined crashes involving driver-less cars, and argue that the narrative of ‘ethics of driver-less cars’ indicates a shift in the construction of ethics, as an outcome of machine learning rather than a framework of values. Through applications of the ‘Trolley Problem’, among other tests, ethics has been transformed into a valuation based on processing of big data. Thus ethics-as-software enables what I refer to as big data-driven accountability. In this formulation, ‘accountability’ is distinguished from ‘responsibility’; responsibility implies intentionality and can only be assigned to humans, whereas accountability includes a wide net of actors and interactions (in Simon). ‘Transparency’ is one of the more established, widely acknowledged mechanisms for accountability; based on ...

Research paper thumbnail of ‘Mobile Love Videos Make Me Feel Healthy’: Rethinking ICTs for Development

IDS Working Papers, 2010

ICT4D discourses tell stories of poor farmers using the internet to compare crop prices, and nurs... more ICT4D discourses tell stories of poor farmers using the internet to compare crop prices, and nurses who use SMS to remind people to take their antiretrovirals. Do nurses also use work-mobiles to make private phone calls? Do farmers surf for pornography when they are supposed to be comparing crop prices? In the ICTs for Development discourse, ICTs are positioned as tools and processes to fight poverty and facilitate empowerment through economic and educational gains; I argue that this discourse ignores the diverse ways in which the poor and the marginalised use media technologies in their everyday lives for social networking, entertainment, to produce and participate in intimate and erotic economies, and to express and experience their sexuality, relationships, pleasure and intimacy in ways that could also be considered empowering. Media use (like development) is an area where sexualities are actively made and remade. ICT4D needs to include an understanding of the potential emotional and sexual effects of interventions. Ethnographic studies of media consumption and use are needed to provide a deeper understanding of sexuality in a way that contributes to applications in a development context. This paper presents one such ethnographic study on how a community uses mobile phones, with the hope that it may provide clues and cues for people and organisations working across these related areas of ICT4D, sexuality, culture and gender. This paper presents a short pilot project of in-depth interviews with six self-identified Kothis, a South Asian feminine male identity. This was supported by observations of and participation in weekly support group meetings in an HIV related NGO of which they are members. The study finds that ICTs changes possibilities for finding sex, love and social mobility, as well as presenting new channels for harassment by police and others.

Research paper thumbnail of Entanglement: Machine learning and human ethics in driver-less car crashes

Entanglement: Machine learning and human ethics in driver-less car crashes

Algorithmic regulation of everyday life, institutions and social systems increases with little ov... more Algorithmic regulation of everyday life, institutions and social systems increases with little oversight or transparency, and yet usually with significant social outcomes (Angwin et al.; Pasquale). Therefore, the need for an ‘ethics of algorithms’ (Ananny; CIHR) and ‘accountability’ of algorithms (Diakopolous) has been raised. The “constellation of technologies” we have come to refer to as ‘artificial intelligence’[1] (Crawford and Whittaker) enable an anxiety that sits alongside the financial speculation, experimentation and entrepreneurial enthusiasm that feeds the Silicon Valley gold rush of ‘innovation’. How can machine intelligence optimise its decision-making and avoid errors, mistakes and accidents? Where machines are not directly programmed but learn, then who or what is accountable for errors and accidents, and how can this accountability be determined?

Research paper thumbnail of Two computer scientists and a cultural scientist get hit by a driver-less car: a method for situating knowledge in the cross-disciplinary study of F-A-T in machine learning: translation tutorial

Two computer scientists and a cultural scientist get hit by a driver-less car: a method for situating knowledge in the cross-disciplinary study of F-A-T in machine learning: translation tutorial

In a workshop organized in December 2017 in Leiden, the Netherlands, a group of lawyers, computer... more In a workshop organized in December 2017 in Leiden, the Netherlands, a group of lawyers, computer scientists, artists, activists and social and cultural scientists collectively read a computer science paper about 'improving fairness'. This session was perceived by many participants as eye-opening on how different epistemologies shape approaches to the problem, method and solutions, thus enabling further cross-disciplinary discussions during the rest of the workshop. For many participants it was both refreshing and challenging, in equal measure, to understand how another discipline approached the problem of fairness. Now, as a follow-up we propose a translation tutorial that will engage participants at the FAT* conference in a similar exercise. We will invite participants to work in small groups reading excerpts of academic papers from different disciplinary perspectives on the same theme. We argue that most of us do not read outside our disciplines and thus are not familiar ...

Research paper thumbnail of Privacy, anonymity, visibility: dilemmas in tech use by marginalised communities

Privacy, anonymity, visibility: dilemmas in tech use by marginalised communities

Research paper thumbnail of The Difference that Difference Makes

s essay, "AI and the Imagination to Overcome Difference" examines how the imagination of AI syste... more s essay, "AI and the Imagination to Overcome Difference" examines how the imagination of AI systems emerges from the instrumentalization of technology -that a singular, unified technology will address an astonishing diversity of nuanced social conditions like language translation, work, and the automation of war. There is a flattening of differences, they say, between human and machine, that ignores the social, cultural and political dimensions of these complex technologies. In this comment piece, I want to think through 'difference' in terms of some of its synonyms, such as 'gap', 'distinction', 'diversity' and 'discrimination'; and differences not just between human and machine, but also between humans; and thus discuss the further implications of AI technologies in society.

Research paper thumbnail of Resistance and refusal to algorithmic harms: Varieties of ‘knowledge projects’

Resistance and refusal to algorithmic harms: Varieties of ‘knowledge projects’

Media International Australia

Industrial, academic, activist, and policy research and advocacy movements formed around resisting ‘machine bias’ and promoting ‘ethical AI’ and ‘fair ML’ have discursive implications for what constitutes harm and what resistance to algorithmic influence itself means, and these are deeply connected to which actors make epistemic claims about harm and resistance. We present a loose categorization of kinds of resistance to algorithmic systems: a dominant mode of resistance that ‘filters up’ and is translated into design fixes by Big Tech; and advocacy and scholarship that bring a critical frame of lived experience to algorithmic systems as socio-technical entities. Three recent cases delve into how Big Tech responds to harms documented by marginalized groups; these highlight how harms are valued differently. Finally, we identify modes of refusal that recognize the limits of Big Tech's resistance, built on practices of feminist organizing, decoloniality, and New-L...

Research paper thumbnail of The Center For Humane Technology Does Not Want Your Attention II. On Time Well Spent and Ethics.

The Center For Humane Technology Does Not Want Your Attention II. On Time Well Spent and Ethics.

Research paper thumbnail of It's not you, it's your data: Trust, identity and fintech

It's not you, it's your data: Trust, identity and fintech

Cyborgology, 2018

Research paper thumbnail of Dating in CRISPR futures. Speculating about the future and new technologies

Dating in CRISPR futures. Speculating about the future and new technologies

Cyborgology, 2018

Values, technology, genetic engineering, CRISPR, ethics

Research paper thumbnail of The ironies of autonomy

Humanities and Social Sciences Communications, 2020

Current research on autonomous vehicles tends to focus on making them safer through policies to manage innovation, and integration into existing urban and mobility systems. This article takes social, cultural and philosophical approaches instead, critically appraising how human subjectivity, and human-machine relations, are shifting and changing through the application of big data and algorithmic techniques to the automation of driving. Twentieth-century approaches to safety engineering and automation, be it in an airplane or an automobile, have sought to either erase the human because she is error-prone and inefficient; have design compensate for the limits of the human; or at least mould the human into the machine through an assessment of the complementary competencies of each. The 'irony of automation' is an observation of the tensions emerging therein; for example, that the computationally superior and efficient machine actually needs human operators to ensure that it is working effectively; and that the human is inevitably held accountable for errors, even if the machine is more efficient or accurate. With the emergence of the autonomous vehicle (AV) as simultaneously AI/'robot', automobile, and distributed, big data infrastructural platform, these beliefs about human and machine are dissolving into what I refer to as the ironies of autonomy. For example, recent AV crashes suggest that human operators cannot intervene in the statistical operations underlying automated decision-making in machine learning, but are expected to. And while AVs promise 'freedom', human time, work, and bodies are threaded into, and surveilled by, data infrastructures, and re-shaped by their information flows. The shift that occurs is that human subjectivity now carries socio-economic and legal implications, and is no longer a matter of fixed attributes of human and machine fitting into each other.
Drawing on postphenomenological concepts of embodiment and instrumentation, and on excerpts from fieldwork, this article argues that the emergence of AVs in society prompts a rethinking of the multiple relationalities that constitute humanity through machines.