Washington's Growing AI Anxiety

from the perhaps-AI-can-help-us-deal-with-AI dept

Most people don’t understand the nuances of artificial intelligence (AI), but at some level they comprehend that it’ll be big, transformative and cause disruptions across multiple sectors. And even if AI proliferation won’t lead to a robot uprising, Americans are worried about how AI and automation will affect their livelihoods.

Recognizing this anxiety, our policymakers have increasingly turned their attention to the subject. In the 115th Congress, there have already been more mentions of "artificial intelligence" in proposed legislation and in the Congressional Record than ever before.

While not everyone agrees on how we should approach AI regulation, one approach that has gained considerable interest is augmenting the federal government’s expertise and capacity to tackle the issue. In particular, Sen. Brian Schatz has called for a new commission on AI; and Sen. Maria Cantwell has introduced legislation setting up a new committee within the Department of Commerce to study and report on the policy implications of AI.

This latter bill, the "FUTURE of Artificial Intelligence Act" (S. 2217/H.R. 4625), sets forth a bipartisan proposal that seems to be gaining some traction. While the bill's sponsors should be commended for taking a moderate approach in the face of growing populist anxiety, it's not clear that the proposed advisory committee would be particularly effective at all it sets out to do.

One problem with the bill is how it sets the definition of AI as a regulatory subject. For most of us, it's hard to articulate precisely what we mean when we talk about AI. The term "AI" can describe a sophisticated program like Apple's Siri, but it can also refer to Microsoft's Clippy, or pretty much any kind of computer software.

It turns out that AI is a difficult thing to define, even for experts. Some even argue that it's a meaningless buzzword. While this is a fine debate to have in the academy, prematurely enshrining a definition in statute, as this bill does, is risky: that statutory definition is likely to become the basis for future policy (indeed, another recent bill offers a totally different definition). Down the road, this could lead to confusion and misapplication of AI regulations. The provision also seems unnecessary, since the committee is empowered to change the definition for its own use.

The committee's stated goals are also overly ambitious. In the course of a year and a half, it would set out to "study and assess" over a dozen different technical issues, from economic investment to worker displacement to privacy to government use and adoption of AI (although, notably, not defense or cyber issues). These are all important issues. However, the expertise required to deal adequately with these subjects likely exceeds the capacity of the committee's 19 voting members, only five of whom would be academics. While the committee could theoretically choose to focus on a narrower set of topics in its final report, this structure is fundamentally not geared towards producing the sort of deep analysis that would advance the debate.

Instead of trying to address every AI-related policy issue with one entity, a better approach might be to build separate, specialized advisory committees based in different agencies. For instance, the Department of Justice might have a committee on using AI for risk assessment, the General Services Administration might have a committee on using AI to streamline government services and IT infrastructure, and the Department of Labor might have a committee on worker displacement caused by AI and automation or on using AI in employment decisions. While this approach risks some duplicative work, it would also be much more likely to produce deep, focused analysis relevant to specific areas of oversight.

Of course, even the best public advisory committees have limitations, including politicization, resource constraints and compliance with the Federal Advisory Committee Act. However, not all advisory bodies have to be within (or funded by) government. Outside research groups, policy forums and advisory committees exist within the private sector and can operate beyond the limitations of government bureaucracy while still effectively informing policymakers. Particularly for those issues not directly tied to government use of AI, academic centers, philanthropies and other groups could step in to fill this gap without any need for new public expenditures or enabling legislation.

If Sen. Cantwell's advisory committee-focused proposal lacks robustness, Sen. Schatz's call for creating a new "independent federal commission" with a mission to "ensure that AI is adopted in the best interests of the public" could go beyond the bounds of political possibility. To his credit, Sen. Schatz identifies real challenges with government use of AI, such as those posed by criminal justice applications and the difficulty of coordinating between agencies. These are real issues that warrant thoughtful solutions. Nonetheless, the creation of a new agency for AI is likely to run into a great deal of pushback from industry groups and the political right (as similar proposals have in the past), making it a difficult proposal to move forward.

Beyond creating a new commission or advisory committees, the challenge of federal expertise in AI could also be substantially addressed by reviving Congress’ Office of Technology Assessment (which I discuss in a recent paper with Kevin Kosar). Reviving OTA has a number of advantages: OTA ran effectively for years and still exists in statute, it isn’t a regulatory body, it is structurally bipartisan and it would have the capacity to produce deep-dive analysis in a technology-neutral manner. Indeed, there’s good reason to strengthen the First Branch first, since Congress is ultimately responsible for making the legal frameworks governing AI as well as overseeing government usage.

Lawmakers are right to characterize AI as a big deal. Indeed, there are trillions of dollars in potential economic benefits at stake. While the instinct to build expertise and understanding first is a commendable one, policymakers will need to act on it the right way, across multiple facets of government, to successfully shape the future of AI without hindering its transformative potential.

Filed Under: ai, artificial intelligence, brian schatz, committees, machine learning, maria cantwell, regulation