Karni Chagal-Feferkorn | University of Ottawa | Université d'Ottawa
Papers by Karni Chagal-Feferkorn
Social Science Research Network, 2022
Cambridge University Press eBooks, Oct 31, 2020
Social Science Research Network, Apr 17, 2020
Self-learning algorithms are gradually dominating more and more aspects of our lives. They do so by performing tasks and reaching decisions that were once reserved exclusively for human beings. And not only that: in certain contexts, their decision-making performance is shown to be superior to that of humans. However, as superior as they may be, self-learning algorithms (also referred to as artificial intelligence (AI) systems, "smart robots," or "autonomous machines") can still cause damage.
When determining the liability of a human tortfeasor causing damage, the applicable legal framework is generally that of negligence. To be found negligent, the tortfeasor must have acted in a manner not compliant with the standard of "the reasonable person." Given the growing similarity of self-learning algorithms to humans in the nature of decisions they make and the type of damages they may cause (for example, a human driver and a driverless vehicle causing similar car accidents), several scholars have proposed the development of a "reasonable algorithm" standard, to be applied to self-learning systems. To date, however, academia has not attempted to address the practical question of how such a standard might be applied to…
Social Science Research Network, Feb 27, 2018
Over the years mankind has come to rely increasingly on machines. Technology is ever advancing, and in addition to relinquishing physical and mere computational tasks to machines, algorithms' self-learning abilities now enable us to entrust machines with professional decisions, for instance, in the fields of law, medicine and accounting.
A growing number of scholars and entities now acknowledge that whenever certain "sophisticated" or "autonomous" decision-making systems cause damage, they should no longer be subject to products liability but deserve different treatment from their "traditional" predecessors. What is it that separates "traditional" algorithms and machines, which for decades have been subject to the traditional products liability legal framework, from what I would call "thinking algorithms," which seem to warrant their own custom-made treatment? Why have "auto-pilots," for example, been traditionally treated as "products," while autonomous vehicles are suddenly perceived as more "human-like" systems that require different treatment? Where is the line between these machines drawn?
Scholars who touch on this question have generally referred to the system's level of autonomy as a classifier between traditional products and systems incompatible with products liability laws (whether autonomy was mentioned expressly or reflected in the specific questions posed). This article, however, argues that a classifier based on autonomy level is not a good one, given its excessive complexity, the vague classification process it dictates, the inconsistent results it might lead to, and the fact that said results mainly shed light on the system's level of autonomy, but not on its compatibility with products liability laws.
This article therefore proposes a new approach to distinguishing traditional products from "thinking algorithms" for determining whether products liability should apply. Instead of examining the vague concept of "autonomy," the article analyzes the system's specific features and examines whether they promote or hinder the rationales behind the products liability legal framework. The article thus offers a novel, practical method for decision-makers wanting to decide when products liability should continue to apply to "sophisticated" systems and when it should not.
Social Science Research Network, Jan 2, 2018
Social Science Research Network, Aug 31, 2021
The Cambridge Handbook of the Law of Algorithms
Michigan Technology Law Review, 2021
Self-learning algorithms are gradually dominating more and more aspects of our lives. They do so by performing tasks and reaching decisions that were once reserved exclusively for human beings. And not only that: in certain contexts, their decision-making performance is shown to be superior to that of humans. However, as superior as they may be, self-learning algorithms (also referred to as artificial intelligence (AI) systems, "smart robots," or "autonomous machines") can still cause damage. When determining the liability of a human tortfeasor causing damage, the applicable legal framework is generally that of negligence. To be found negligent, the tortfeasor must have acted in a manner not compliant with the standard of "the reasonable person." Given the growing similarity of self-learning algorithms to humans in the nature of decisions they make and the type of damages they may cause (for example, a human driver and a driverless vehicle causing similar car accidents), several scholars have proposed the development of a "reasonable algorithm" standard, to be applied to self-learning systems…
Systematic Reviews
Background: Medical innovations offer tremendous hope. Yet, similar innovations in governance (law, policy, ethics) are likely necessary if society is to realize medical innovations' fruits and avoid their pitfalls. As innovations in artificial intelligence (AI) advance at a rapid pace, scholars across multiple disciplines are articulating concerns in health-related AI that likely require legal responses to ensure the requisite balance. These scholarly perspectives may provide critical insights into the most pressing challenges that will help shape and advance future regulatory reforms. Yet, to the best of our knowledge, there is no comprehensive summary of the literature examining legal concerns in relation to health-related AI. We thus aim to summarize and map the literature examining legal concerns in health-related AI using a scoping review approach.
Methods: The scoping review framework developed by Arksey and O'Malley (J Soc Res Methodol 8:19-32, 2005) and extended by Levac et al. (Implement Sci 5:69, 2010) and…
Social Science Research Network, Aug 31, 2021
LSN: Tort Litigation, 2018
Algorithmic decision-makers dominate many aspects of our lives. Beyond simply performing complex computational tasks, they often replace human discretion and even professional judgement. As sophisticated and accurate as they may be, autonomous algorithms may cause damage. A car accident could involve both human drivers and driverless vehicles. Patients may receive an erroneous diagnosis or treatment recommendation from either a physician or a medical algorithm. Yet because algorithms were traditionally considered "mere tools" in the hands of humans, the tort framework applying to them is significantly different from the framework applying to humans, potentially leading to anomalous results in cases where human and algorithmic decision-makers could interchangeably cause damage. This article discusses the disadvantages stemming from these anomalies and proposes to develop and apply a "reasonable algorithm" standard to non-human decision-makers, similar to the "reasonable person" or "reasonable professional" standard that applies to human tortfeasors…
Communications of the ACM
Learning Responsible AI together.
Journal of Law, Technology and Policy, 2018
Algorithmic decision-makers dominate many aspects of our lives. Beyond simply performing complex computational tasks, they often replace human discretion and even professional judgement. As sophisticated and accurate as they may be, autonomous algorithms may cause damage.
A car accident could involve both human drivers and driverless vehicles. Patients may receive an erroneous diagnosis or treatment recommendation from either a physician or a medical algorithm. Yet because algorithms were traditionally considered "mere tools" in the hands of humans, the tort framework applying to them is significantly different from the framework applying to humans, potentially leading to anomalous results in cases where human and algorithmic decision-makers could interchangeably cause damage.
This article discusses the disadvantages stemming from these anomalies and proposes to develop and apply a "reasonable algorithm" standard to non-human decision-makers, similar to the "reasonable person" or "reasonable professional" standard that applies to human tortfeasors.
While the economic advantages of a similar notion have been elaborated on in the literature, the general concept of subjecting non-humans to a reasonableness analysis has not been addressed. Rather, current anecdotal references to applying a negligence or a reasonableness standard to autonomous machines have mainly dismissed the entire concept, primarily because "algorithms are not persons". This article identifies and addresses the conceptual difficulties stemming from applying a "reasonableness" standard to non-humans, including the intuitive reluctance to subject non-humans to human standards; the question of whether there is any practical meaning in analysing the reasonableness of an algorithm separately from the reasonableness of its programmer; the potential legal implications of a finding that the algorithm "acted" reasonably or unreasonably; and whether such an analysis reconciles with the rationales behind tort law.
Beyond identifying the various anomalies resulting from subjecting humans and non-humans conducting identical tasks to different tort frameworks, the article's main contribution is, therefore, explaining how the challenges associated with applying a "reasonableness" standard to algorithms can be overcome.
Stanford Law & Policy Review, 2019
Over the years mankind has come to rely increasingly on machines. Technology is ever advancing, and in addition to relinquishing physical and mere computational tasks to machines, algorithms' self-learning abilities now enable us to entrust machines with professional decisions, for instance, in the fields of law, medicine and accounting.
A growing number of scholars and entities now acknowledge that whenever certain "sophisticated" or "autonomous" decision-making systems cause damage, they should no longer be subject to products liability but deserve different treatment from their "traditional" predecessors. What is it that separates "traditional" algorithms and machines, which for decades have been subject to the traditional products liability legal framework, from what I would call "thinking algorithms," which seem to warrant their own custom-made treatment? Why have "auto-pilots," for example, been traditionally treated as "products," while autonomous vehicles are suddenly perceived as more "human-like" systems that require different treatment? Where is the line between these machines drawn?
Scholars who touch on this question have generally referred to the system's level of autonomy as a classifier between traditional products and systems incompatible with products liability laws (whether autonomy was mentioned expressly or reflected in the specific questions posed). This article, however, argues that a classifier based on autonomy level is not a good one, given its excessive complexity, the vague classification process it dictates, the inconsistent results it might lead to, and the fact that said results mainly shed light on the system's level of autonomy, but not on its compatibility with products liability laws.
This article therefore proposes a new approach to distinguishing traditional products from "thinking algorithms" for determining whether products liability should apply. Instead of examining the vague concept of "autonomy," the article analyzes the system's specific features employed in each part of its decision-making process and examines whether they promote or hinder the rationales behind the products liability legal framework. The article thus offers a novel, practical method for decision-makers wanting to decide when products liability should continue to apply to "sophisticated" systems and when a new tort regime ought to be considered.
Drafts by Karni Chagal-Feferkorn
Michigan Technology Law Review, 2020
Self-learning algorithms are gradually dominating more and more aspects of our lives. They do so by performing tasks and reaching decisions that were once reserved exclusively for human beings. And not only that: in certain contexts, their decision-making performance is shown to be superior to that of humans. However, as superior as they may be, self-learning algorithms (also referred to as artificial intelligence (AI) systems, "smart robots", or "autonomous machines", among other terms) can also cause damage.
When determining the liability of a human tortfeasor causing damage, the applicable legal framework is generally that of negligence. To be found negligent, the tortfeasor must have acted in a manner not compliant with the standard of "the reasonable person". Given the growing similarity of self-learning algorithms to humans in the nature of decisions they make and the type of damages they may cause, several scholars have proposed the development of a "reasonable algorithm" standard, to be applied to self-learning systems. To date, however, the literature has not attempted to address the practical question of how such a standard might be applied to algorithms, and what the content of analysis ought to be in order to achieve the goals behind tort law of promoting safety and victims' compensation on the one hand, and achieving the right balance between them and encouraging the development of beneficial technologies on the other.
This paper analyses the "reasonableness" standard used in tort law, as well as the unique qualities, weaknesses and strengths of algorithms versus humans, and examines whether the reasonableness standard is at all compatible with self-learning algorithms. Concluding that it generally is, the paper's main contribution is its proposal of a concrete "reasonable algorithm" standard that could be practically applied by decision-makers. Said standard accounts for the differences between human and algorithmic decision-making, and allows the application of the reasonableness standard to algorithms in a manner that promotes the aims of tort law while avoiding a dampening effect on the development and usage of new, beneficial technologies.