Charlotte A Tschider | Loyola University Chicago

Papers by Charlotte A Tschider

The new EU–US data protection framework’s implications for healthcare

Journal of Law and the Biosciences, 2024

In July 2023, the United States and the European Union introduced the Data Privacy Framework (DPF), the third generation of cross-border data transfer agreements constituting adequacy with respect to personal data transfers under the General Data Protection Regulation (GDPR) between the European Union (EU) and the US. This framework may be used in cross-border healthcare and research relationships, which are highly desirable and increasingly essential to innovative health technology development and health services deployment. A reliable model meeting EU adequacy requirements could enhance the transfer of patient and research participant data. While the DPF might present a familiar terrain for US organizations, it also brings unique challenges. A notable concern is the ability of individual EU Member States to establish additional requirements for health data that are more restrictive than GDPR requirements and are not anticipated by the DPF. This article highlights the DPF’s potential impact on the healthcare and research sectors, finding that the DPF may not provide the degree of lawful health data transfer desirable for healthcare entities. We examine the DPF against a background of existing Health Insurance Portability and Accountability Act obligations and other GDPR transfer tools to offer alternatives that can improve the likelihood of reliable, lawful health data transfer between the US and EU.

Revealing the Limits of Cybersecurity Law for Healthcare AI

Proceedings of the ACM Workshop on Cybersecurity in Healthcare (HealthSec’24), October 14–18, 2024, Salt Lake City, UT, USA. ACM, New York, NY, USA, 2024

Healthcare technologies are responsible for critical health functions. From electronic health record databases to complex artificially intelligent medical devices, the future of human health is largely tethered to an internet connection. Healthcare technologies also collect, transfer, store, and retain some of the most sensitive personal information that can be created, from medical data to behavioral characteristics, biometric data, and genetic data. It is no surprise that most countries categorize health technologies as “critical infrastructure”: their improper function can precipitate cataclysmic results.

Despite the inherent risks in operating health technologies, cybersecurity legal requirements applicable to them are largely generic, reliant on administrative agency interpretation and application. These requirements differ depending on whether an organization is a Health Insurance Portability and Accountability Act (HIPAA) covered entity, a medical device manufacturer, or a consumer health device company. In all, very few cybersecurity legal requirements are applicable to AI.

Federal administrative agencies like the U.S. Food and Drug Administration, the Office for Civil Rights, and the Federal Trade Commission have exercised discretion and substituted rulemaking in cybersecurity with guidance, studies, and enforcement actions to establish these requirements for the healthcare sector. However, following recent U.S. Supreme Court decisions impacting administrative decision-making and limiting the power of administrative agencies, the necessity of enforceable cybersecurity requirements in the healthcare sector may need to be re-examined.

Medical Imaging and Privacy in the Era of Artificial Intelligence: Myth, Fallacy, and the Future

Journal of the American College of Radiology, 2020

Humans Outside the Loop

Yale Journal of Law and Technology, 2024

Artificial Intelligence (AI) is not all artificial. Despite the need for high-powered machines that can create complex algorithms and routinely improve them, humans are instrumental in every step used to create AI. From data selection, decisional design, training, testing, and tuning to managing AI’s development as it is used in the human world, humans exert agency and control over the choices and practices underlying AI products. AI is now ubiquitous: it is part of every sector of the economy and many people’s everyday lives. When AI development companies create unsafe products, however, we might be surprised to discover that very few legal options exist to remedy any wrongs.

This Article introduces the myriad of choices humans make to create safe and effective AI products and explores key issues in existing liability models. Significant issues in negligence and products liability schemes, including contractual limitations on liability, distance the organizations creating AI products from the actual harm they cause, obscure the origin of issues relating to the harm, and reduce the likelihood of plaintiff recovery. Principally, AI offers a unique vantage point for analyzing the limits of tort law, challenging long-held divisions and theoretical constructs. From the perspectives of both businesses licensing AI and AI users, this Article identifies key impediments to realizing tort doctrine’s goals and proposes an alternative regulatory scheme that shifts liability from humans in the loop to humans outside the loop.

Will a Cybersecurity Safe Harbor Raise All Boats?

Lawfare, 2024

Supply chain cybersecurity incidents compromise one party but affect another, and they now dominate the cybersecurity landscape. As organizations rely more often on third-party providers, the digital supply chain is one of the most significant risks to organizational security practices. Sixty percent of security professionals reported in a 2022 survey that third-party data breaches are increasing, and 59 percent of companies surveyed experienced a third-party data breach, the vast majority of which occurred in 2022. Technology professionals cited lack of control, complexity, lack of resources to track third-party activities, third-party turnover, and lack of priority as key reasons for third-party, or “supply chain,” risk.

When supply chain cybersecurity incidents occur and consumers or business customers are harmed, litigation will likely result. However, the U.S. tort system, designed largely to address “wrongs” and allocate liability between parties, is rife with challenges that may punish responsible players and may enable organizations with poor practices to escape liability. In part, this is because the tort system is designed mostly for physical failures, not digital ones.

This paper argues for the use of a liability safe harbor consistent with industry standards and safeguards that will both improve domestic cybersecurity practices and reinforce confidence in business transactions. A private certification model, leveraging best-in-class cybersecurity assessment and audit practices, could be bolstered by public auditors and reinforced by downstream litigation models with relatively little cost to U.S. taxpayers.

In this paper, I first examine the unique nature of contemporary cybersecurity challenges, in particular the challenges of managing cybersecurity across a broad supply chain involving multiple technology players that may influence the security of a downstream product. Next, I briefly discuss liability challenges for the supply chain and describe why an alternative path may be needed. Finally, I examine how leveraging a private certification model as a liability safe harbor can provide consistent direction for courts resolving litigation between entities within the technology supply chain.

Specifically, I propose an executive order and associated statute that will establish a process for reviewing and approving preexisting, dominant, and extensive certification types already being used. It will also designate a safe harbor defense to liability for organizations that legitimately qualify for these certifications. Many of these certifications, funded by private organizations, have been used since the early 2000s as a basis for establishing trust between entities, such as those in a technology supply chain, and are well understood in the technology and service provider ecosystem.

A cybersecurity certification safe harbor can evolve and improve as adversaries and threat models inevitably change. If a safe harbor establishes a reasonable floor for expected cybersecurity practices but also provides reasonable updates over time, organizations using this safe harbor to avoid potential liability will collectively and consistently improve their cybersecurity practices. To accomplish this, as well as truly improve confidence in the digital supply chain, the U.S. must determine which certification models will adequately ensure these practices and certify associated certification-granting organizations.

Using cybersecurity certification as the basis for providing a complete defense to liability may not prevent every harm from occurring. However, if organizations invest in certification to avoid legal liability, this should collectively improve the resilience and quality of technology products in the United States and beyond.

**This paper was originally published by Lawfare as part of the Security by Design Paper Series.**

Unto the (Data) Breach

University of Richmond Law Review (forthcoming), 2024

Since the early 2000s, U.S. courts have been hearing "data breach" liability cases, the inevitable result of a growing internet-connected technology infrastructure. The relatively recent development of case law signals a body of law in development, stunted by significant limiting factors that prevent the coalescence of legal principles. To date, no holistic empirical exploration of data breach cases has offered sufficient detail to explore these factors. This descriptive empirical study analyzes, in detail, 225 data breach cases from 2005 to 2022, reviewing these cases over a three-year period to descriptively identify key trends and changes within a case's life on the docket.

This study identifies the type of plaintiff, settlement amounts, type and disposition of information compromised, claims, common motions, and key strategies most likely to result in a favorable plaintiff outcome. It also explores broad trends in data breach litigation, including acceptance of claims by courts, the status of future harms in 12(b)(1) and 12(b)(6) standing challenges, and the degree to which courts are willing to let cases proceed beyond preliminary motions. These results will provide crucial information for litigating parties, their counsel, judges, and policy makers.

Charlotte Tschider is an Associate Professor of Law at the Loyola University Chicago School of Law. I would like to thank Matthew Sag, Justin (Gus) Hurwitz, David Thaw, Steven Bellovin, and the Law and Technology Workshop participants for their comments on an earlier version of this article. A special thanks to Jay Edelson and Aaron Charfoos for their perspectives on data breach litigation and for offering their expertise with my classes on this topic. I would especially like to thank Annalisa Kolb for her exceptional research skills generating and validating cases in this study.
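To make the study's descriptive method concrete, the minimal sketch below shows the kind of aggregation a coded-case dataset of this sort supports. The field names, values, and figures are invented placeholders for illustration only; they do not reflect the study's actual codebook or findings.

```python
# Hypothetical illustration of descriptive aggregation over coded data breach cases.
# Field names and values are invented for this sketch; they are not the study's codebook.
from collections import Counter
from statistics import median

cases = [
    {"year": 2019, "plaintiff_type": "consumer", "claim": "negligence",
     "survived_12b6": True, "settlement_usd": 1_500_000},
    {"year": 2021, "plaintiff_type": "business", "claim": "breach of contract",
     "survived_12b6": False, "settlement_usd": None},
    {"year": 2022, "plaintiff_type": "consumer", "claim": "negligence",
     "survived_12b6": True, "settlement_usd": 350_000},
]

# Frequency of claims asserted across the coded cases.
claim_counts = Counter(c["claim"] for c in cases)

# Share of cases surviving a Rule 12(b)(6) motion, a rough proxy for
# courts' willingness to let cases proceed past preliminary motions.
survival_rate = sum(c["survived_12b6"] for c in cases) / len(cases)

# Median settlement among cases that settled.
settlements = [c["settlement_usd"] for c in cases if c["settlement_usd"] is not None]
median_settlement = median(settlements)

print(claim_counts, survival_rate, median_settlement)
```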

Locking Down "Reasonable" Cybersecurity Duty

Yale Law & Policy Review, 2023

Following a data breach or other cyberattack, the concept of “reasonable” duty, broadly construed, is essential to a plaintiff’s potential causes of action, such as negligence, negligence per se, breach of contract, breach of fiduciary duty, and any number of statutory claims. The impact of an organization’s discretionary choices, such as whether to take specific security steps for a system, may result in potential risk to an individual, another organization, or the organization itself. Although organizations regularly engage in cybersecurity risk analysis, they may not understand what practices will be considered reasonable in a court of law and are therefore unable to anticipate downstream legal issues. Attorneys are likewise unable to confidently advise their clients on how to best avoid liability. This Article examines, in detail, potential sources for reasonably defining duty, and how organizations and attorneys might consider legal duty through the lens of cybersecurity risk management.

Specifically, I call for a two-part cybersecurity duty analytic model: static, or objective duty informed by industry practices, and dynamic, or subjective duty informed by situational risk. For some doctrinal areas, this may work primarily as an analytic model, while for others, such as negligence, this could be formalized as a test. By offering a model for analyzing what cybersecurity duty ought to be, organizations can adequately understand how potential legal risk might be evaluated in order to implement practices that protect would-be plaintiffs and avoid liability. Moreover, courts can use this model to determine whether organizations have made decisions that avoid real, foreseeable risk to the plaintiff. Indeed, amidst an increasing frequency and diversity of cyberliability claims, legal analysis informed by actual risk analysis ensures that reasonable, rather than perfect, cybersecurity practices can be developed precedentially over time.
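Read purely as an analytic structure rather than as the Article's doctrinal test, the static/dynamic split could be organized roughly as in the sketch below. The control names, risk factors, and outputs are hypothetical stand-ins for illustration.

```python
# Rough sketch of a two-part (static/dynamic) cybersecurity duty analysis.
# Controls, risk factors, and example values are hypothetical illustrations only.
from dataclasses import dataclass, field

@dataclass
class DutyAssessment:
    # Static (objective) component: industry-standard safeguards in place at the time.
    implemented_controls: set = field(default_factory=set)
    baseline_controls: set = field(default_factory=lambda: {
        "multi-factor authentication", "encryption at rest", "patch management",
    })
    # Dynamic (subjective) component: situational risk factors known to the organization.
    situational_risks: set = field(default_factory=set)

    def static_gap(self) -> set:
        """Industry-standard controls the organization failed to implement."""
        return self.baseline_controls - self.implemented_controls

    def dynamic_flags(self) -> set:
        """Situational risks (e.g., prior incidents, sensitive data) left unaddressed."""
        return self.situational_risks

assessment = DutyAssessment(
    implemented_controls={"encryption at rest", "patch management"},
    situational_risks={"known vulnerability in vendor software", "health data at stake"},
)
print(assessment.static_gap())     # objective shortfall relative to industry practice
print(assessment.dynamic_flags())  # situational risk the organization should have weighed
```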

Prescribing Exploitation

Maryland Law Review, 2023

Patients are increasingly reliant, temporarily if not indefinitely, on connected medical devices and wearables, many of which use artificial intelligence ("AI") infrastructures and physical housing that directly interacts with the human body. The automated systems that drive the infrastructures of medical devices and wearables, especially those using complex AI, often use dynamically inscrutable algorithms that may produce discriminatory effects that alter paths of treatment and other aspects of patient welfare. Previous contributions to the literature, however, have not explored how AI technologies animate exploitation of medical technology users. Although all commercial relationships may exploit users to some degree, some forms of health data exploitation exceed the bounds of normative acceptability.

The factors that illustrate excessive exploitation that should require some legal intervention include: (1) existence of a fiduciary relationship or approximation of such a relationship, (2) a technology-user relationship that does not involve the expertise of the fiduciary, (3) existence of a critical health event or health status requiring use of a medical device, (4) ubiquitous sensitive data collection essential to AI functionality, (5) lack of reasonably similar analog technology alternatives, and (6) compulsory reliance on a medical device. This Article makes three key contributions to the existing literature. First, this Article establishes the existence of a type of exploitation that is not only exacerbated by technology but creates additional risk by its ongoing use. Second, this Article illustrates the need for cross-disciplinary engagement between privacy scholarship and AI ethics scholarship, both of which could balance data collection for fairness and safety with other

Innovation in the public sphere: reimagining law and economics to solve the National Institutes of Health publishing controversy

Journal of Law and the Biosciences, 2014

Legal Opacity: Artificial Intelligence's Sticky Wicket

Iowa Law Review, 2021

Proponents of artificial intelligence ("AI") transparency have carefully illustrated the many ways in which transparency may be beneficial to prevent safety and unfairness issues, to promote innovation, and to effectively provide recovery or support due process in lawsuits. However, impediments to transparency goals, described as opacity, or the "black-box" nature of AI, present significant issues for promoting these goals. An undertheorized perspective on opacity is legal opacity, where competitive and often discretionary legal choices, coupled with regulatory barriers, create opacity. Although legal opacity does not affect AI alone, the combination of technical opacity in AI systems with legal opacity amounts to a nearly insurmountable barrier to transparency goals. Types of legal opacity, including trade secrecy status, contractual provisions that promote confidentiality and data ownership restrictions, and privacy law, independently and cumulatively make the black box substantially more opaque.

AI's Legitimate Interest: Towards A Public Benefit Privacy Model

Houston Journal of Health Law & Policy, 2021

Health data uses are on the rise. Increasingly often, data are used for a variety of operational, diagnostic, and technical uses, as in the Internet of Health Things. Never has quality data been more necessary: large data stores now power the most advanced artificial intelligence applications, applications that may enable early diagnosis of chronic diseases and personalized medical treatment. These data, both personally identifiable and de-identified, have the potential to dramatically improve the quality, effectiveness, and safety of artificial intelligence.

Existing privacy laws neither (1) effectively protect the privacy interests of individuals nor (2) provide the flexibility needed to support artificial intelligence applications. This paper identifies some of the key challenges with existing privacy laws, including the ineffectiveness of de-identification and data minimization protocols in practice and issues with notice and consent as they apply to artificial intelligence applications, then proposes an alternative privacy model. This model specifically rejects a notice and consent model in favor of legitimate interest analysis. This approach introduces a more restrictive application of health privacy law while adopting a flexible, interest-balancing approach to permit additional data uses that primarily benefit individuals and communities.
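The practical weakness of de-identification noted above is often a linkage problem: quasi-identifiers left in a "de-identified" record can be matched against an auxiliary dataset. The toy sketch below illustrates the idea; all records are fabricated for illustration.

```python
# Toy illustration of why removing direct identifiers can fail in practice:
# quasi-identifiers (ZIP code, birth date, sex) can link a "de-identified"
# health record back to a named individual in an auxiliary dataset.
# All records below are fabricated.
deidentified_health_records = [
    {"zip": "60611", "birth_date": "1984-03-02", "sex": "F", "diagnosis": "type 1 diabetes"},
    {"zip": "60614", "birth_date": "1979-11-17", "sex": "M", "diagnosis": "hypertension"},
]

public_voter_rolls = [
    {"name": "Jane Doe", "zip": "60611", "birth_date": "1984-03-02", "sex": "F"},
    {"name": "John Roe", "zip": "60614", "birth_date": "1979-11-17", "sex": "M"},
]

quasi_identifiers = ("zip", "birth_date", "sex")

def link(records, auxiliary):
    """Re-identify records whose quasi-identifiers match exactly one auxiliary entry."""
    matches = []
    for r in records:
        hits = [a for a in auxiliary
                if all(a[k] == r[k] for k in quasi_identifiers)]
        if len(hits) == 1:
            matches.append({"name": hits[0]["name"], "diagnosis": r["diagnosis"]})
    return matches

print(link(deidentified_health_records, public_voter_rolls))
```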

The Healthcare Privacy-Artificial Intelligence Impasse

Santa Clara High Technology Law Journal, 2020

Meaningful Choice: A History of Consent and Alternatives to the Consent Myth

North Carolina Journal of Law and Technology, 2021

Although the first legal conceptions of commercial privacy were identified in Samuel Warren and Louis Brandeis’s foundational 1890 article, The Right to Privacy, conceptually, privacy has existed as early as 1127 as a natural concern when navigating between personal and commercial spheres of life. As an extension of contract and tort law, two common relational legal models, U.S. privacy law emerged to buoy engagement in commercial enterprise, borrowing known legal conventions like consent and assent. Historically, however, international legal privacy frameworks involving consent ultimately diverged, with the European Union taking a more expansive view of legal justifications for processing as alternatives to consent.

Unfortunately, consent as a procedural substitute for individual choice has created a number of issues in achieving legitimate and effective privacy protections for Americans. The problems with consent as a proxy for choice are well known. This Article explores the twin history of two diverging bodies of law as they apply to the privacy realm, then introduces the concept of legitimate interest balancing as an alternative to consent. Legitimate interest analysis requires that an organization formally assess, with input from actual consumers, whether data collection and use ultimately result in greater benefit to individuals than to the organization. This model shifts responsibility from individual consumers, who must otherwise protect their own interests, to organizations, which must engage in fair data use practices to legally collect and use data. Finally, this Article positions the model in relation to common law, federal law, Federal Trade Commission activities, and judicial decision-making as a means for separating well-intentioned organizations from unethical ones.

Beyond the "Black Box"

Denver Law Review, 2021

As algorithms have become more complex, privacy and ethics scholars have urged artificial intelligence (AI) transparency for purposes of ensuring safety and preventing discrimination. International statutes are increasingly mandating that algorithmic decision-making be explained to affected individuals when such decisions impact an individual’s legal rights, and U.S. scholars continue to call for transparency in automated decision-making.

Unfortunately, modern AI technology does not function like traditional, human-designed algorithms. Due to the unavailability of alternative intellectual property (IP) protections and their often dynamically inscrutable status, algorithms created by AI are often protected under trade-secrecy status, which prohibits sharing the details of a trade secret, lest the trade secret be destroyed. Furthermore, dynamic inscrutability, the true “black box,” makes these algorithms secret by definition: even their creators cannot easily explain how they work. When mandated by statute, it may be tremendously difficult, expensive, and undesirable from an IP perspective to require organizations to explain their AI algorithms. Despite this challenge, it may still be possible to satisfy safety and fairness goals by instead focusing on AI system and process disclosure.

This Article first explains how AI differs from historically defined software and computer code. This Article then explores the dominant scholarship calling for opening the black box and the reciprocal pushback from organizations likely to rely on trade secret protection—a natural fit for AI’s dynamically inscrutable algorithms. Finally, using a simplified information fiduciary framework, I propose an alternative for promoting disclosure while balancing organizational interests via public AI system disclosure and black-box testing.
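As a rough illustration of what black-box testing can look like in practice, the sketch below probes a system only through its prediction interface and compares outcome rates across groups; the model, data, and threshold are stand-ins rather than any particular regulated system or the Article's proposal.

```python
# Minimal sketch of black-box testing: probe a deployed model only through its
# prediction interface and compare outcome rates across groups.
# The "model" below is a stand-in; no internal weights or code are inspected.
def opaque_model(applicant: dict) -> bool:
    # Placeholder for a proprietary system reachable only via its public interface.
    return applicant["score"] >= 0.6

def selection_rates(model, applicants):
    """Approval rate per group, computed purely from observed inputs and outputs."""
    rates = {}
    for group in {a["group"] for a in applicants}:
        members = [a for a in applicants if a["group"] == group]
        rates[group] = sum(model(a) for a in members) / len(members)
    return rates

test_applicants = [
    {"group": "A", "score": 0.7}, {"group": "A", "score": 0.5},
    {"group": "B", "score": 0.65}, {"group": "B", "score": 0.4},
]

rates = selection_rates(opaque_model, test_applicants)
# A large gap between groups flags a potential fairness problem without
# requiring disclosure of the underlying (trade-secret-protected) algorithm.
print(rates)
```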

Medical Device Artificial Intelligence: The New Tort Frontier

BYU Law Review, 2021

The medical device industry and new technology start-ups have dramatically increased investment in artificial intelligence (AI) applications, including diagnostic tools and AI-enabled devices. These technologies have been positioned to reduce climbing health costs while simultaneously improving health outcomes. Technologies like AI-enabled surgical robots, AI-enabled insulin pumps, and cancer detection applications hold tremendous promise, yet without appropriate oversight, they will likely pose major safety issues. While preventative safety measures may reduce risk to patients using these technologies, effective regulatory-tort regimes also permit recovery when preventative solutions are insufficient.

The Food and Drug Administration (FDA), the administrative agency responsible for overseeing the safety and efficacy of medical devices, has not effectively addressed AI system safety issues for its clearance processes. If the FDA cannot reasonably reduce the risk of injury for AI-enabled medical devices, injured patients should be able to rely on ex post recovery options, as in products liability cases. However, the Medical Device Amendments Act (MDA) of 1976 introduced an express preemption clause that the U.S. Supreme Court has interpreted to nearly foreclose liability claims, based almost completely on the comprehensiveness of FDA clearance review processes. At its inception, MDA preemption aimed to balance consumer interests in safe medical devices with efficient, consistent regulation to promote innovation and reduce costs.

Although preemption remains an important mechanism for balancing injury risks with device availability, the introduction of AI software dramatically changes the risk profile for medical devices. Due to the inherent opacity and changeability of AI algorithms powering AI machines, it is nearly impossible to predict all potential safety hazards a faulty AI system might pose to patients. This Article identifies key preemption issues for AI machines as they affect ex ante and ex post regulatory-tort allocation, including actual FDA review for parallel claims, bifurcation of software and device reviews, and dynamics of the technology itself that may enable plaintiffs to avoid preemption. This Author then recommends an alternative conception of the regulatory-tort allocation for AI machines that will create a more comprehensive and complementary safety and compensatory model.

Balancing the Halo: Data Surveillance and Algorithmic Opacity in Smart Hearing Aids (w/Dr. Krista Kennedy and Noah Wilson)

RHETORIC OF HEALTH AND MEDICINE, 2020

Medical device manufacturers and other high-technology companies increasingly incorporate algorithmic data surveillance in next-generation medical wearables. These devices, including hearing aids, leverage patient data created through human-computer interaction to not only power devices but also increase corporate profits. Although data protection laws establish privacy requirements for personal information collection and use, these companies continue to use patients' personal information with little notice or education, significantly curtailing the agency of wearers. We explore the complex ecology of the Starkey Halo smart hearing aid, focusing on the opacity of its algorithmic functionality and examining patient education materials for disclosures of data surveillance. We contextualize these findings within privacy doctrines in the United States and European Union that are relevant to algorithmic surveillance and recommend specific steps to enhance wearer agency through informed decision-making. The sales brochure for the Starkey Halo smart hearing aid is cleanly designed, full of sans-serif type and reassuring hues of blue. Rather than showcasing the hearing aid itself, the brochure emphasizes the results of that technology: improved social interactions and physical health. In sharp photos, two women confide with delight over coffee in a sunlit

Data Discrimination: The International Regulatory Impasse of AI-Enabled Medical Wearables (w/Dr. Krista Kennedy)

LEGAL, SOCIAL, AND ETHICAL PERSPECTIVES ON HEALTH & TECHNOLOGY, 2020

In this chapter, we explore the regulatory challenges of data protection laws in the United States (U.S.) and EU and how these challenges create disproportionate issues for individuals reliant on medical devices. We map the realities of data generated by compulsory medical wearables, attending especially to implications for the patient, their agency, and their options for education about the algorithmically generated data that these devices produce. We focus on smart hearing aids as our primary example, given that they are a compulsory medical wearable that is distributed globally and generates rich streams of data that are frequently inaccessible to the patient. The introduction of AI-enabled medical devices, complete with big data infrastructures, has complicated both patient understanding and the communication models that many contemporary data protection laws rely on for informed decision-making.

The Consent Myth: Improving Choice for Patients of the Future

Washington University Law Review, 2019

Consent has enjoyed a prominent position in the American privacy system since at least 1970, though historically, consent emerged from traditional notions of tort and contract. Largely because consent has an almost deferential power as a proxy for consumer choice, organizations increasingly use consent as a de facto standard for demonstrating privacy commitments. The Department of Health and Human Services and the Federal Trade Commission have integrated the concept of consent into health care, research, and general commercial activities. However, this de facto standard, while useful in some contexts, does not sufficiently promote individual patient interests within leading health technologies, including the Internet of Health Things and Artificial Intelligence. Despite consent's prominence in United States law, this Article seeks to understand, more fully, consent's role in modern health applications, then applies a philosophical-legal lens to clearly identify problems with consent in its current use. This Article identifies the principal issues with substituting consent for choice, the "consent myth," a collection of five problems, then proposes principles for addressing these problems in contemporary health technologies.

Regulating the IoT: Discrimination, Privacy, and Cybersecurity in the Artificial Intelligence Age

Denver Law Review, 2018

The field of consumer Internet of Things (IoT) has exploded as businesses and researchers have sought to not only develop Internet-connected products but also define the common structure in which IoT devices will operate, including technological standards and responsive architectures. Yet, consumer IoT continues to present a host of potential risks to consumers, cascading from the multidimensional nature of IoT devices: IoT combines well-known consumer products with cutting-edge infrastructures including big data solutions, distributed data storage or "cloud," and artificial intelligence (AI) utilities. The consumer device is no longer only the product; it is the product, the data, the algorithms, and the infrastructure. Consumer products have shifted from analog to connected technologies, introducing new risks for consumers related to personal privacy, safety issues, and potential for discriminatory data. Broad, ubiquitous data collection, internet connectivity, predictive algorithms, and overall device functionality opacity threaten to undermine IoT market benefits by causing potential consumer injury: broad unfairness and disparate impact, data breaches, physical safety issues, and property damage. Existing regulatory regimes have not anticipated these damages in ways that effectively avoid injury, and it is yet unknown how existing products liability, common law civil recovery under contracts or torts schemes, and due process procedures will apply to these products and the data they process. This Article explores the technology and market of IoT, potential consumer impacts resulting from the lack of a consistent and complete legal framework, whether IoT regulation is appropriate, and how the United States can balance market needs for innovation with consistent oversight for IoT manufacturers and distributors.

Research paper thumbnail of The new EU–US data protection framework’s implications for healthcare

Journal of Law and the Biosciences, 2024

In July 2023, the United States and the European Union introduced the Data Privacy Framework (DPF... more In July 2023, the United States and the European Union introduced the Data Privacy Framework (DPF), introducing the third generation of cross-border data transfer agreements constituting adequacy with respect to personal data transfers under the General Data Protection Regulation (GDPR) between the European Union (EU) and the US. This framework may be used in cross-border healthcare and research relationships, which are highly desirable and increasingly essential to innovative health technology development and health services deployment. A reliable model meeting EU adequacy requirements could enhance the transfer of patient and research participant data. While the DPF might present a familiar terrain for US organizations, it also brings unique challenges. A notable concern is the ability of individual EU Member States to establish individual and additional requirements for health data that are more restrictive than GDPR requirements, which are not anticipated by the DPF. This article highlights the DPF’s potential impact on the healthcare and research sectors, finding that the DPF may not provide the degree of lawful health data transfer desirable for healthcare entities. We examine the DPF against a background of existing Health Insurance Portability and Accountability Act obligations and other GDPR transfer tools to offer alternatives that can improve the likelihood of reliable, lawful health data transfer between the US and EU.

Research paper thumbnail of Revealing the Limits of Cybersecurity Law for Healthcare AI

Proceedings of the ACM Workshop on Cybersecurity in Healthcare (HealthSec’24), October 14–18, 2024, Salt Lake City, UT, USA. ACM, NY, NY, USA., 2024

Healthcare technologies are responsible for critical health functions. From electronic health rec... more Healthcare technologies are responsible for critical health functions.
From electronic health record databases to complex artificially
intelligent medical devices, the future of human health is largely
tethered to an internet connection. Healthcare technologies also
collect, transfer, store, and retain some of the most sensitive personal
information that can be created, from medical data to behavioral
characteristics, biometric data, and genetic data. It is no surprise that
most countries categorize health technologies as “critical
infrastructure”: their improper function can precipitate cataclysmic
results.

Despite the inherent risks in operating health technologies,
cybersecurity legal requirements applicable to them are largely
generic, reliant on administrative agency interpretation and
application. These requirements differ depending on whether an
organization is a Health Insurance Portability and Accountability Act
(HIPAA) covered entity, a medical device manufacturer, or a
consumer health device company. In all, very few cybersecurity legal
requirements are applicable to AI.

Federal administrative agencies like the U.S. Food and Drug
Administration, the Office for Civil Rights, and the Federal Trade
Commission have exercised discretion and substituted rulemaking in
cybersecurity with guidance, studies, and enforcement actions to
establish these requirements for the healthcare sector. However,
following recent U.S. Supreme Court decisions impacting
administrative decision-making and limiting the power of
administrative agencies, the necessity of enforceable cybersecurity
requirements in the healthcare sector may need to be re-examined.

Research paper thumbnail of Medical Imaging and Privacy in the Era of Artificial Intelligence: Myth, Fallacy, and the Future

Journal of the American College of Radiology, 2020

Research paper thumbnail of Humans Outside the Loop

Yale Journal of Law and Technology, 2024

Artificial Intelligence (AI) is not all artificial. Despite the need for high-powered machines th... more Artificial Intelligence (AI) is not all artificial. Despite the need for high-powered machines that can create complex algorithms and routinely improve them, humans are instrumental in every step used to create AI. From data selection, decisional design, training, testing, and tuning to managing AI’s development as it is used in the human world, humans exert agency and control over the choices and practices underlying AI products. AI is now ubiquitous: it is part of every sector of the economy and many people’s everyday lives. When AI development companies create unsafe products, however, we might be surprised to discover that very few legal options exist to remedy any wrongs.

This Article introduces the myriad of choices humans make to create safe and effective AI products and explores key issues in existing liability models. Significant issues in negligence and products liability schemes, including contractual limitations on liability, distance the organizations creating AI products from the actual harm they cause, obscure the origin of issues relating to the harm, and reduce the likelihood of plaintiff recovery. Principally, AI offers a unique vantage point for analyzing the limits of tort law, challenging long-held divisions and theoretical constructs. From the perspectives of both businesses licensing AI and AI users, this Article identifies key impediments to realizing tort doctrine’s goals and proposes an alternative regulatory scheme that shifts liability from humans in the loop to humans outside the loop.

Research paper thumbnail of Will a Cybersecurity Safe Harbor Raise All Boats?

Lawfare, 2024

Supply chain cybersecurity incidents are incidents that compromise one party but affect another, ... more Supply chain cybersecurity incidents are incidents that compromise one party but affect another, and they now dominate the cybersecurity landscape. As organizations rely more often on third-party providers, the digital supply chain is one of the most significant risks to organizational security practices. Sixty percent of security professionals reported in a 2022 survey that third-party data breaches are increasing, and 59 percent of companies surveyed experienced a third-party data breach, the vast majority of which occurred in 2022. Technology professionals cited lack of control, complexity, lack of resources to track third-party activities, third-party turnover, and lack of priority as key reasons for third-party, or “supply chain,” risk.

When supply chain cybersecurity incidents occur and consumers or business customers are harmed, litigation will likely result. However, the U.S. tort system, designed largely to address “wrongs” and allocate liability between parties, is rife with challenges that may punish responsible players and may enable organizations with poor practices to escape liability. In part, this is because the tort system is designed mostly for physical failures, not digital ones.

This paper argues for the use of a liability safe harbor consistent with industry standards and safeguards that will both improve domestic cybersecurity practices and reinforce confidence in business transactions. A private certification model, leveraging best-in-class cybersecurity assessment and audit practices, could be bolstered by public auditors and reinforced by downstream litigation models with relatively little cost to U.S. taxpayers.

In this paper, I first examine the unique nature of contemporary cybersecurity challenges, in particular the challenges of managing cybersecurity across a broad supply chain involving multiple technology players that may influence the security of a downstream product. Next, I briefly discuss liability challenges for the supply chain and describe why an alternative path may be needed. Finally, I examine how leveraging a private certification model as a liability safe harbor can provide consistent direction for courts resolving litigation between entities within the technology supply chain.

Specifically, I propose an executive order and associated statute that will establish a process for reviewing and approving preexisting, dominant, and extensive certification types already being used. It will also designate a safe harbor defense to liability for organizations that legitimately qualify for these certifications. Many of these certifications, funded by private organizations, have been used since the early 2000s as a basis for establishing trust between entities, such as those in a technology supply chain, and are well understood in the technology and service provider ecosystem.

A cybersecurity certification safe harbor can evolve and improve as adversaries and threat models inevitably change. If a safe harbor establishes a reasonable floor for expected cybersecurity practices but also provides reasonable updates over time, organizations using this safe harbor to avoid potential liability will collectively and consistently improve their cybersecurity practices. To accomplish this, as well as truly improve confidence in the digital supply chain, the U.S. must determine which certification models will adequately ensure these practices and certify associated certification-granting organizations.

Using cybersecurity certification as the basis for providing a complete defense to liability may not prevent every harm from occurring. However, if organizations invest in certification to avoid legal liability, this should collectively improve the resilience and quality of technology products in the United States and beyond.

**This paper was originally published by Lawfare as part of the Security by Design Paper Series.**

Research paper thumbnail of Unto the (Data) Breach

University of Richmond Law Review (forthcoming), 2024

Since the early 2000s, U.S. courts have begun hearing "data breach" liability cases, the inevitab... more Since the early 2000s, U.S. courts have begun hearing "data breach" liability cases, the inevitable result of a growing internet-connected technology infrastructure. The relatively recent development of case law signals a body of law in development, stunted by significant limiting factors that prevent the coalescence of legal principles. To date, no holistic empirical exploration of data breach cases has offered sufficient detail to explore these factors. This descriptive empirical study analyzes, in detail, 225 data breach cases from 2005-2022, reviewing these cases over a three-year period to descriptively identify key trends and changes within a case's life on the docket.

This study identifies the type of plaintiff, settlement amounts, type and disposition of information compromised, claims, common motions, and key strategies most likely to result in a favorable plaintiff outcome. It also explores broad trends in data breach litigation, including acceptance of claims by courts, the status of future harms in 12(b)(1) and 12(b)(6) standing challenges, and the degree to which courts are willing to let cases proceed beyond preliminary motions. These results will provide crucial information for litigating parties, their counsel, judges, and policy makers. 1 Charlotte Tschider is an Associate Professor of Law at the Loyola University Chicago School of Law. I would like to thank Matthew Sag, Justin (Gus) Hurwitz, David Thaw, Steven Bellovin, and the Law and Technology Workshop participants for their comments on an earlier version of this article. A special thanks to Jay Edelson and Aaron Charfoos for their perspectives on data breach litigation and for offering their expertise with my classes on this topic. I would especially like to thank Annalisa Kolb for her exceptional research skills generating and validating cases in this study.

Research paper thumbnail of Locking Down "Reasonable" Cybersecurity Duty

Yale Law & Policy Review, 2023

Following a data breach or other cyberattack, the concept of “reasonable” duty, broadly construed... more Following a data breach or other cyberattack, the concept of “reasonable” duty, broadly construed, is essential to a plaintiff’s potential causes of action, such as negligence, negligence per se, breach of contract, breach of fiduciary duty, and any number of statutory claims. The impact of an organization’s discretionary choices, such as whether to take specific security steps for a system, may result in potential risk to an individual, another organization, or the organization itself. Although organizations regularly engage in cybersecurity risk analysis, they may not understand what practices will be considered reasonable in a court of law and are therefore unable to anticipate downstream legal issues. Attorneys are likewise unable to confidently advise their clients on how to best avoid liability. This Article examines, in detail, potential sources for reasonably defining duty, and how organizations and attorneys might consider legal duty through the lens of cybersecurity risk management.

Specifically, I call for a two-part cybersecurity duty analytic model: static, or objective duty informed by industry practices, and dynamic, or subjective duty informed by situational risk. For some doctrinal areas, this may work primarily as an analytic model, while for others, such as negligence, this could be formalized as a test. By offering a model for analyzing what cybersecurity duty ought to be, organizations can adequately understand how potential legal risk might be evaluated in order to implement practices that protect would-be plaintiffs and avoid liability. Moreover, courts can use this model to determine whether organizations have made decisions that avoid real, foreseeable risk to the plaintiff. Indeed, amidst an increasing frequency and diversity of cyberliability claims, legal analysis informed by actual risk analysis ensures that reasonable, rather than perfect, cybersecurity practices can be developed precedentially over time.

Research paper thumbnail of Prescribing Exploitation

Maryland Law Review, 2023

Patients are increasingly reliant temporarily, if not indefinitely, on connected medical devices ... more Patients are increasingly reliant temporarily, if not indefinitely, on connected medical devices and wearables, many of which use artificial intelligence ("AI") infrastructures and physical housing that directly interacts with the human body. The automated systems that drive the infrastructures of medical devices and wearables, especially those using complex AI, often use dynamically inscrutable algorithms that may render discriminatory effects that alter paths of treatment and other aspects of patient welfare. Previous contributions to the literature, however, have not explored how AI technologies animate exploitation of medical technology users. Although all commercial relationships may exploit users to some degree, some forms of health data exploitation exceed the bounds of normative acceptability.

The factors that illustrate excessive exploitation that should require some legal intervention include: (1) existence of a fiduciary relationship or approximation of such a relationship, (2) a technology-user relationship that does not involve the expertise of the fiduciary, (3) existence of a critical health event or health status requiring use of a medical device, (4) ubiquitous sensitive data collection essential to AI functionality, (5) lack of reasonably similar analog technology alternatives, and (6) compulsory reliance on a medical device. This Article makes three key contributions to the existing literature. First, this Article establishes the existence of a type of exploitation that is not only exacerbated by technology but creates additional risk by its ongoing use. Second, this Article illustrates the need for cross-disciplinary engagement between privacy scholarship and AI ethics scholarship, both of which could balance data collection for fairness and safety with other

Research paper thumbnail of Meaningful Choice: A History of Consent and Alternatives to the Consent Myth

North Carolina Journal of Law & Technology, 2021

Research paper thumbnail of Innovation in the public sphere: reimagining law and economics to solve the National Institutes of Health publishing controversy

Journal of Law and the Biosciences, 2014

Research paper thumbnail of Legal Opacity: Artificial Intelligence's Sticky Wicket

Iowa Law Review, 2021

Proponents of artificial intelligence ("AI") transparency have carefully illustrated the many way... more Proponents of artificial intelligence ("AI") transparency have carefully illustrated the many ways in which transparency may be beneficial to prevent safety and unfairness issues, to promote innovation, and to effectively provide recovery or support due process in lawsuits. However, impediments to transparency goals, described as opacity, or the "black-box" nature of AI, present significant issues for promoting these goals. An undertheorized perspective on opacity is legal opacity, where competitive, and often discretionary legal choices, coupled with regulatory barriers create opacity. Although legal opacity does not specifically affect AI only, the combination of technical opacity in AI systems with legal opacity amounts to a nearly insurmountable barrier to transparency goals. Types of legal opacity, including trade secrecy status, contractual provisions that promote confidentiality and data ownership restrictions, and privacy law independently and cumulatively make the black box substantially opaquer.

Research paper thumbnail of AI's Legitimate Interest: Towards A Public Benefit Privacy Model

Houston Journal of Health Law & Policy, 2021

Health data uses are on the rise. Increasingly more often, data are used for a variety of operati... more Health data uses are on the rise. Increasingly more often, data are
used for a variety of operational, diagnostic, and technical uses, as in
the Internet of Health Things. Never has quality data been more
necessary: large data stores now power the most advanced artificial
intelligence applications, applications that may enable early diagnosis
of chronic diseases and enable personalized medical treatment. These
data, both personally identifiable and de-identified, have the potential
to dramatically improve the quality, effectiveness, and safety of
artificial intelligence.
Existing privacy laws do not 1) effectively protect the privacy
interests of individuals and 2) provide the flexibility needed to support
artificial intelligence applications. This paper identifies some of the
key challenges with existing privacy laws, including the
ineffectiveness of de-identification and data minimization protocols in
practice and issues with notice and consent as they apply to artificial
intelligence applications, then proposes an alternative privacy model.
This model specifically rejects a notice and consent model in favor of
legitimate interest analysis. This approach introduces a more
restrictive application of health privacy law while adopting a flexible,
interest-balancing approach to permit additional data uses that
primarily benefit individuals and communities.

Research paper thumbnail of The Healthcare Privacy-Artificial Intelligence Impasse

Santa Clara High Technology Law Journal, 2020

Research paper thumbnail of Meaningful Choice: A History of Consent and Alternatives to the Consent Myth

North Carolina Journal of Law and Technology, 2021

Although the first legal conceptions of commercial privacy were identified in Samuel Warren and L... more Although the first legal conceptions of commercial privacy were identified in Samuel Warren and Louis Brandeis’s foundational 1890 article, The Right to Privacy, conceptually, privacy has existed since as early as 1127 as a natural concern when navigating between personal and commercial spheres of life. As an extension of contract and tort law, two common relational legal models, U.S. privacy law emerged to buoy engagement in commercial enterprise, borrowing known legal conventions like consent and assent. Historically, however, international legal privacy frameworks involving consent ultimately diverged, with the European Union taking a more expansive view of legal justification for processing as alternatives to consent.

Unfortunately, consent as a procedural substitute for individual choice has created a number of issues in achieving legitimate and effective privacy protections for Americans. The problems with consent as a proxy for choice are well known. This Article explores the twin history of two diverging bodies of law as they apply to the privacy realm, then introduces the concept of legitimate interest balancing as an alternative to consent. Legitimate interest analysis requires an organization formally assess whether data collection and use ultimately result in greater benefit to individuals than the organization with input from actual consumers. This model shifts responsibility from individual consumers having to protect their own interests to organizations that must engage in fair data use practices to legally collect and use data. Finally, this Article positions the model in relation to common law, federal law, Federal Trade Commission activities, and judicial decision-making as a means for separating good-intentioned organizations from unethical ones.

Research paper thumbnail of Beyond the "Black Box"

Denver Law Review, 2021

As algorithms have become more complex, privacy and ethics scholars have urged artificial intelli... more As algorithms have become more complex, privacy and ethics scholars have urged artificial intelligence (AI) transparency for purposes of ensuring safety and preventing discrimination. International statutes are increasingly mandating that algorithmic decision-making be explained to affected individuals when such decisions impact an individual’s legal rights, and U.S. scholars continue to call for transparency in automated decision-making.

Unfortunately, modern AI technology does not function like traditional, human-designed algorithms. Due to the unavailability of alternative intellectual property (IP) protections and their often dynamically inscrutable status, algorithms created by AI are often protected under trade-secrecy status, which prohibits sharing the details of a trade secret, lest destroy the trade secret. Furthermore, dynamic inscrutability, the true “black box,” makes these algorithms secret by definition: even their creators cannot easily explain how they work. When mandated by statute, it may be tremendously difficult, expensive, and undesirable from an IP perspective to require organizations to explain their AI algorithms. Despite this challenge, it may still be possible to satisfy safety and fairness goals by instead focusing on AI system and process disclosure.

This Article first explains how AI differs from historically defined software and computer code. This Article then explores the dominant scholarship calling for opening the black box and the reciprocal pushback from organizations likely to rely on trade secret protection—a natural fit for AI’s dynamically inscrutable algorithms. Finally, using a simplified information fiduciary framework, I propose an alternative for promoting disclosure while balancing organizational interests via public AI system disclosure and black-box testing.

Research paper thumbnail of Medical Device Artificial Intelligence: The New Tort Frontier

BYU Law Review, 2021

The medical device industry and new technology start-ups have dramatically increased investment i... more The medical device industry and new technology start-ups have dramatically increased investment in artificial intelligence (AI) applications, including diagnostic tools and AI-enabled devices. These technologies have been positioned to reduce climbing health costs while simultaneously improving health outcomes. Technologies like AI-enabled surgical robots, AI-enabled insulin pumps, and cancer detection applications hold tremendous promise, yet without appropriate oversight, they will likely pose major safety issues. While preventative safety measures may reduce risk to patients using these technologies, effective regulatory-tort regimes also permit recovery when preventative solutions are insufficient.

The Food and Drug Administration (FDA), the administrative agency responsible for overseeing the safety and efficacy of medical devices, has not effectively addressed AI system safety issues in its clearance processes. If the FDA cannot reasonably reduce the risk of injury for AI-enabled medical devices, injured patients should be able to rely on ex post recovery options, as in products liability cases. However, the Medical Device Amendments Act (MDA) of 1976 introduced an express preemption clause that the U.S. Supreme Court has interpreted to nearly foreclose liability claims, based almost completely on the comprehensiveness of FDA clearance review processes. At its inception, MDA preemption aimed to balance consumer interests in safe medical devices with efficient, consistent regulation to promote innovation and reduce costs.

Although preemption remains an important mechanism for balancing injury risks with device availability, the introduction of AI software dramatically changes the risk profile for medical devices. Due to the inherent opacity and changeability of AI algorithms powering AI machines, it is nearly impossible to predict all potential safety hazards a faulty AI system might pose to patients. This Article identifies key preemption issues for AI machines as they affect ex ante and ex post regulatory-tort allocation, including actual FDA review for parallel claims, bifurcation of software and device reviews, and dynamics of the technology itself that may enable plaintiffs to avoid preemption. This Author then recommends an alternative conception of the regulatory-tort allocation for AI machines that will create a more comprehensive and complementary safety and compensatory model.

Research paper thumbnail of Balancing the Halo: Data Surveillance and Algorithmic Opacity in Smart Hearing Aids (w/Dr. Krista Kennedy and Noah Wilson)

Rhetoric of Health and Medicine, 2020

Medical device manufacturers and other high-technology companies increasingly incorporate algorithmic data surveillance in next-generation medical wearables. These devices, including hearing aids, leverage patient data created through human-computer interaction to not only power devices but also increase corporate profits. Although data protection laws establish privacy requirements for personal information collection and use, these companies continue to use patients' personal information with little notice or education, significantly curtailing the agency of wearers. We explore the complex ecology of the Starkey Halo smart hearing aid, focusing on the opacity of its algorithmic functionality and examining patient education materials for disclosures of data surveillance. We contextualize these findings within privacy doctrines in the United States and European Union relevant to algorithmic surveillance and recommend specific steps to enhance wearer agency through informed decision-making.

Research paper thumbnail of Data Discrimination: The International Regulatory Impasse of AI-Enabled Medical Wearables (w/Dr. Krista Kennedy)

Legal, Social, and Ethical Perspectives on Health & Technology, 2020

In this chapter, we explore the regulatory challenges of data protection laws in the United States (U.S.) and EU and how these challenges create disproportionate issues for individuals reliant on medical devices. We map the realities of data generated by compulsory medical wearables, attending especially to implications for the patient, their agency, and their options for education about the algorithmically generated data that these devices produce. We focus on smart hearing aids as our primary example, given that they are a compulsory medical wearable that is distributed globally and generates rich streams of data that are frequently inaccessible to the patient. The introduction of AI-enabled medical devices, complete with big data infrastructures, has complicated patient understanding and strained the communication models central to informed decision-making on which many contemporary data protection laws rely.

Research paper thumbnail of The Consent Myth: Improving Choice for Patients of the Future

Washington University Law Review, 2019

Consent has enjoyed a prominent position in the American privacy system since at least 1970, though historically, consent emerged from traditional notions of tort and contract. Largely because consent has an almost deferential power as a proxy for consumer choice, organizations increasingly use consent as a de facto standard for demonstrating privacy commitments. The Department of Health and Human Services and the Federal Trade Commission have integrated the concept of consent into health care, research, and general commercial activities. However, this de facto standard, while useful in some contexts, does not sufficiently promote individual patient interests within leading health technologies, including the Internet of Health Things and Artificial Intelligence. Despite consent's prominence in United States law, this Article seeks to understand, more fully, consent's role in modern health applications, then applies a philosophical-legal lens to clearly identify problems with consent in its current use. This Article identifies the principal issues with substituting consent for choice, the "consent myth," a collection of five problems, then proposes principles for addressing these problems in contemporary health technologies.

Research paper thumbnail of Regulating the IoT: Discrimination, Privacy, and Cybersecurity in the Artificial Intelligence Age

Denver Law Review, 2018

The field of consumer Internet of Things (IoT) has exploded as businesses and researchers have sought not only to develop Internet-connected products but also to define the common structure in which IoT devices will operate, including technological standards and responsive architectures. Yet, consumer IoT continues to present a host of potential risks to consumers, cascading from the multidimensional nature of IoT devices: IoT combines well-known consumer products with cutting-edge infrastructures including big data solutions, distributed data storage or “cloud,” and artificial intelligence (AI) utilities. The consumer device is no longer only the product; it is the product, the data, the algorithms, and the infrastructure. Consumer products have shifted from analog to connected technologies, introducing new risks for consumers related to personal privacy, safety issues, and the potential for discriminatory data. Broad, ubiquitous data collection, internet connectivity, predictive algorithms, and overall device functionality opacity threaten to undermine IoT market benefits by causing potential consumer injury: broad unfairness and disparate impact, data breaches, physical safety issues, and property damage. Existing regulatory regimes have not anticipated these harms in a way that effectively avoids injury, and it is as yet unknown how existing products liability, common law civil recovery under contract or tort schemes, and due process procedures will apply to these products and the data they process. This Article explores the technology and market of IoT, potential consumer impacts resulting from the lack of a consistent and complete legal framework, whether IoT regulation is appropriate, and how the United States can balance market needs for innovation with consistent oversight for IoT manufacturers and distributors.

Research paper thumbnail of Artificial Intelligence and Intellectual Property in Healthcare Technologies

Research Handbook on Health, AI and the Law (Ed. Barry Solaiman & I. Glenn Cohen), 2024

Artificial intelligence (AI) healthcare technologies involve a wide variety of AI innovations that could potentially qualify for intellectual property (IP) protection, corresponding to multiple forms of protection. In addition, protection for AI raises novel issues that may require modifying existing laws. This chapter examines how current IP law applies to human-generated AI creations and policy issues that should be considered as organisations and countries re-examine IP policy. After this brief introduction, section 2 provides an introduction to IP, section 3 details AI in healthcare to better understand IP issues, and section 4 addresses issues AI owners will likely encounter in IP strategy. Finally, section 5 addresses policy issues for lawmakers to consider.

Research paper thumbnail of Legal Issues in Cybernetics and Biorobotics

The Law of Artificial Intelligence and Smart Machines (ABA Business Law Section, Ed. Theodore Claypoole), 2019

For as long as human beings have been forced to confront the fragility of our bodies, we have considered the ways that technology can enhance our limited abilities. Feeble brains, missing limbs and poor sensory receptors have been early candidates for man-made enhancements. Our reach has always exceeded our grasp.

As AI becomes a necessity for the continuance of human evolution, improvement, and social efficiency, the law must evolve to respond to these substantial changes. Cybernetics and biorobotics create numerous challenges for our current intellectual property models, including appropriate incentives for AI-enabled devices and questions of patentability. They also intensify current issues around medical device regulation, including AI oversight, potential cybersecurity issues, and safety. Existing issues in medical device regulation, including preemption, also complicate traditional notions of tort recovery in this space.
Cybernetics and biorobotics challenge our conception of privacy, with the potential to dramatically increase data volume while simultaneously frustrating traditional notions of identifiability. They also demand important answers to other social issues, including the impact of elective improvement, or biohacking, on our social structure and associated opportunities. The law performs an important responsive role by anticipating these challenges and considering their effects.

Research paper thumbnail of International Privacy Law and Artificial Intelligence

The Law of Artificial Intelligence and Smart Machines (ABA Business Law Section, Ed. Theodore Claypoole), 2019

The international landscape for data management has exploded in the past five years, including substantial additions to omnibus privacy legislation, a new focus on cybersecurity both for personal information and for critical infrastructure purposes, and the growth of data localization or data nationalism. These laws dramatically restrict how organizations implement AI and to what extent true AI functionality may be realized.

The international policy landscape increasingly includes privacy, cybersecurity, and data localization (PCL) laws. International PCL laws have become increasingly restrictive and highly specific in their application, limiting options for organizations building AI capabilities, especially those using sensitive personal information (SPI), such as health information and banking or financial data. PCL laws individually focus on different goals, yet all restrict how data may be collected, processed, transferred, used, or accessed.