SB-1047 will stifle open-source AI and decrease safety – Answer.AI
Note from Jeremy: This is my personal submission to the authors of bill SB-1047. It’s not an official Answer.AI statement.
This is a comment from Jeremy Howard regarding SB-1047. I am an AI researcher and entrepreneur. I am the CEO of Answer.AI, an AI R&D lab registered to do business in California. I am the author of popular AI software including the fastai library, a widely used AI training system. I am the co-author of Deep Learning for Coders with Fastai and PyTorch: AI Applications Without a PhD, a widely-praised book with a 4.7 rating on Amazon based on nearly 500 reviews, and am the creator of the Practical Deep Learning series of free courses, the longest-running deep learning course in the world, with over 5 million views. I co-authored the paper Universal Language Model Fine-tuning for Text Classification, which created the 3-stage language model pre-training and fine-tuning approach on which all of today’s language models (including ChatGPT and Gemini) are based.
While the intent of SB-1047 to ensure the safe and secure development of AI is commendable, certain provisions within the bill raise serious concerns regarding their potential impact on open-source developers, small businesses, and overall innovation within the AI sector. This response aims to highlight these concerns and suggest alternative approaches that could achieve the desired safety goals without stifling the dynamism of the AI ecosystem.
Ironically, by imposing these restrictions on open-source development, SB-1047 could actually reduce overall safety within the AI ecosystem, in particular by reducing:
- Transparency and Collaboration: Open-source development fosters transparency and collaboration, allowing a wider range of experts to identify and address potential safety concerns. Restricting this open development model limits the ability of the broader community to contribute to safety solutions.
- Diversity and Resilience: Open-source projects contribute to a more diverse and resilient AI landscape. Concentrating control within a few large entities creates single points of failure and increases the potential for systemic risks.
Concerns Regarding Open-Source Development
Open source has been a key enabler of the success of the US software industry, and has allowed many Americans to access critical software tools which would otherwise be unavailable to them. Open source has, in particular, provided many of the fundamental building blocks for modern artificial intelligence, and is the basis on which nearly all academic research (including safety and security research) is done. Harming open source will harm developers, consumers, and academics, and will obstruct the development of new startups. The bill would cause harm in a number of ways:
- Overly Broad Definitions: The definition of “covered model” within the bill is extremely broad, potentially encompassing a wide range of open-source models that pose minimal risk. This could inadvertently criminalize the activities of well-intentioned developers working on beneficial AI projects.
- Dual use: An AI model is a general-purpose piece of software that runs on a computer, much like a word processor, calculator, or web browser. The creator of a model cannot ensure that it is never used to do something harmful, any more than the developer of a web browser, calculator, or word processor could. Placing liability on the creators of general-purpose tools like these means that, in practice, such tools cannot be created at all, except by big businesses with well-funded legal teams.
- Restrictive Requirements: The bill imposes significant burdens on developers, including mandatory shutdowns, extensive reporting, and compliance with potentially ambiguous “covered guidance.” These requirements could disproportionately impact open-source developers who often lack the resources of larger corporations to navigate complex regulatory processes.
- Disincentivizing Openness: The fear of legal repercussions and bureaucratic hurdles could discourage open-source development, hindering the collaborative spirit that has been instrumental in driving AI advancements. This reduction in transparency could also make it more difficult to identify and address potential safety concerns.
Impact on Small Businesses and Innovation
The proposed regulations create significant barriers for small businesses and startups looking to innovate in the AI space:
- Barrier to Entry: The substantial costs associated with compliance, including fees, audits, and legal counsel, could create a significant barrier to entry for small businesses and startups. This would limit competition and concentrate power within established corporations, ultimately hindering innovation.
- Chilling Effect on Research: The fear of inadvertently triggering the bill’s provisions could lead researchers and developers to self-censor or avoid exploring promising avenues of AI research. This would stifle scientific progress and limit the potential of AI to address societal challenges.
- Loss of Talent: The restrictive environment created by the bill could drive talented AI researchers and developers out of California, harming the state’s economy and weakening its position as a leader in AI innovation.
California plays a critical role in driving US innovation, particularly in the technology sector. By placing undue burdens on AI development, SB-1047 risks hindering the state’s leadership in this crucial field. This could have ripple effects throughout the US, slowing down overall progress in AI research and development.
Alternative Approaches
Instead of focusing on regulating AI model development, I urge you to consider alternative approaches that address the actual risks associated with AI applications.
- Support Open-Source Development: Encourage and facilitate the open-source development of AI models to foster collaboration, transparency, and a more diverse and resilient AI ecosystem.
- Focus on Usage, Not Development: Instead of regulating the development of AI models, the focus should be on regulating their applications, particularly those that pose high risks to public safety and security. Regulating the use of AI in high-risk areas such as healthcare, criminal justice, and critical infrastructure, where the potential for harm is greatest, would ensure accountability for harmful use, whilst allowing for the continued advancement of AI technology.
- Promote Transparency and Collaboration: Encourage the development and adoption of best practices for responsible AI development through collaboration between industry, academia, and government. This could involve creating industry standards, fostering open-source development, and investing in AI safety research.
- Invest in AI Expertise: Provide resources to government agencies to develop expertise in AI and build capacity to effectively monitor and address potential risks. This would enable a more informed and nuanced approach to AI regulation that balances safety with innovation.
Conclusion
California has a unique opportunity to lead the way in responsible AI development. However, SB-1047, in its current form, risks stifling innovation and undermining the state’s leadership in AI. By adopting alternative approaches that prioritize accountability for harmful use while fostering a vibrant and open AI ecosystem, California can ensure the safe and beneficial advancement of this transformative technology.
Specific Sections of Concern
- Section 22602 (f): The definition of “covered model” is overly broad and could encompass a wide range of open-source models.
- Section 22603 (b): The requirements for developers are overly burdensome and could discourage open-source development.
- Section 22606 (a): The potential for civil penalties could have a chilling effect on research and innovation.
- Section 11547.6 (c)(11): The ability to levy fees could create a barrier to entry for small businesses.