SB-1047 will stifle open-source AI and decrease safety – Answer.AI

Note from Jeremy: This is my personal submission to the authors of bill SB-1047. It’s not an official Answer.AI statement.

This is a comment from Jeremy Howard regarding SB-1047. I am an AI researcher and entrepreneur, and the CEO of Answer.AI, an AI R&D lab registered to do business in California. I am the author of popular AI software including the fastai library, a widely used AI training system. I am the co-author of Deep Learning for Coders with Fastai and PyTorch: AI Applications Without a PhD, a widely praised book with a 4.7 rating on Amazon based on nearly 500 reviews, and the creator of the Practical Deep Learning series of free courses, the longest-running deep learning course in the world, with over 5 million views. I co-authored the paper Universal Language Model Fine-tuning for Text Classification, which created the three-stage language model pre-training and fine-tuning approach on which all of today’s language models (including ChatGPT and Gemini) are based.

While the intent of SB-1047 to ensure the safe and secure development of AI is commendable, certain provisions within the bill raise serious concerns regarding their potential impact on open-source developers, small businesses, and overall innovation within the AI sector. This response aims to highlight these concerns and suggest alternative approaches that could achieve the desired safety goals without stifling the dynamism of the AI ecosystem.

Ironically, by imposing these restrictions on open-source development, SB-1047 could actually reduce overall safety within the AI ecosystem.

Concerns Regarding Open-Source Development

Open source has been a key enabler of the success of the US software industry, and has given many Americans access to critical software tools that would otherwise be unavailable to them. Open source has, in particular, provided many of the fundamental building blocks of modern artificial intelligence, and is the basis on which nearly all academic research (including safety and security research) is done. Harming open source will harm developers, consumers, and academics, and will obstruct the formation of new startups. The bill would cause harm in a number of ways.

Impact on Small Businesses and Innovation

The proposed regulations create significant barriers to entry for small businesses and startups looking to innovate in the AI space. The costs associated with compliance, coupled with the legal risks, could deter entrepreneurs and limit competition. This would ultimately stifle innovation and concentrate power within established corporations.

California plays a critical role in driving US innovation, particularly in the technology sector. By placing undue burdens on AI development, SB-1047 risks hindering the state’s leadership in this crucial field. This could have ripple effects throughout the US, slowing down overall progress in AI research and development.

Alternative Approaches

Rather than regulating AI model development itself, I urge you to consider alternative approaches that address the actual risks posed by AI applications.

Conclusion

California has a unique opportunity to lead the way in responsible AI development. However, SB-1047, in its current form, risks stifling innovation and undermining the state’s leadership in AI. By adopting alternative approaches that prioritize accountability for harmful use while fostering a vibrant and open AI ecosystem, California can ensure the safe and beneficial advancement of this transformative technology.

Specific Sections of Concern