California AI Bill is Wrongheaded and Threatens American Innovation
August 13, 2024
Research in recent years has demonstrated that new businesses – “startups” – are disproportionately responsible for the innovations that drive productivity and economic growth, and that they account for virtually all net new job creation. More recently, the age of artificial intelligence (AI) has arrived, with its transformative implications being compared to other revolutionary technologies like Gutenberg’s printing press, railroads, electricity, and the Internet. Given their inherently innovative nature, AI startups will play a leading role in the development and application of AI, dramatically accelerating productivity gains, economic growth, and opportunity expansion.
Except perhaps in California. SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, introduced by State Senator Scott Wiener in February, is written as if AI is an inherent threat to the world that must be preemptively tamed. In taking that approach, the bill would short-circuit the emergence of one of history’s most revolutionary technologies, denying California’s citizens transformational enhancements in how human beings work, play, learn, conduct research, produce, and innovate. The bill must be substantially re-written or discarded.
AI is a specialized field of computer science that creates systems capable of replicating aspects of human intelligence and problem-solving. It does so by processing vast amounts of data from which patterns and relationships can be identified or inferred – “training” the model – and then using the trained model to analyze new data to draw conclusions, predict outcomes, or produce requested outputs.
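To make that training-and-inference pattern concrete, here is a minimal illustrative sketch in Python using the scikit-learn library (the library, the toy data, and the model choice are assumptions for illustration only; the “frontier” models targeted by SB 1047 are trained on vastly more data and computing power):

```python
# Minimal sketch of the train-then-predict pattern described above.
# The data and model here are illustrative assumptions, not anything
# resembling the large "frontier" models addressed by SB 1047.
from sklearn.linear_model import LogisticRegression

# "Training": the model infers patterns relating inputs to known outcomes.
X_train = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
y_train = [0, 1, 0, 1]
model = LogisticRegression().fit(X_train, y_train)

# "Inference": the trained model analyzes new data to predict an outcome.
print(model.predict([[0.85, 0.75]]))  # prints the predicted class for the unseen input
```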
Though AI seemed to appear out of nowhere in November of 2022 with the launch of ChatGPT by research and development company OpenAI, it is not new. As a theoretical notion – mechanical devices performing human-like computations – artificial intelligence dates back thousands of years. The modern field emerged in the years following World War II. In 1950, English mathematician and master code-breaker Alan Turing published “Computing Machinery and Intelligence,” in which he proposed a test of machine intelligence called the Imitation Game. The term “artificial intelligence” was coined in 1955 by John McCarthy, then a mathematics professor at Dartmouth College, in the proposal for the 1956 Dartmouth workshop that launched the field as a discipline.
AI has been part of modern life for decades, powering everything from chatbots and Internet search results to targeted digital advertising and algorithm-driven social media. The field has accelerated dramatically in recent years as ever-greater amounts of data have been coupled with easier and increasingly affordable access to immense computing power.
Innovative startups are now using AI to solve problems and create value. Two quick examples – Chicago-based Varuna is an AI-powered platform that helps cities measure and analyze water quality, and Tampa-based COI Energy uses AI in its digital energy management platform to eliminate energy waste in buildings. In coming years, AI – if developed properly – will power once unimaginable advances in science, transportation, energy, combating climate change, and medicine. In June, European researchers announced the development of an AI-enhanced blood test that can predict Parkinson’s disease up to seven years before the onset of symptoms.
SB 1047 is a threat to such progress for several important reasons. Most fundamentally, the bill fails to recognize that AI models and tools are developed within complex ecosystems that include model developers, AI developers who build tools on top of those models, and end-users who leverage the models and tools for their own innovative purposes. SB 1047 singles out model developers, requiring them to certify the safety of downstream applications for which they are not responsible, and imposing punishing legal liability for effects, outcomes, and scenarios they cannot foresee. This approach is utterly inconsistent with existing product liability standards and would severely undermine AI innovation by exposing developers to open-ended liability.
This fundamental flaw would also powerfully disincentivize open sourcing of advanced AI models. Open-source AI entails freely accessible source code, fostering a collaborative environment for developers to utilize, modify, and distribute AI technologies. As the Federal Trade Commission recently observed, open sourcing promotes competition and innovation, improves consumer choice, and reduces costs. Disincentivizing open sourcing would narrow the market to only a few proprietary AI models, undermining choices for developers and consumers.
Disincentivizing open sourcing also makes AI development less safe. As the world has learned from other code-based fields, including cybersecurity, open sourcing promotes safety and security by allowing independent evaluation and feedback from the broader innovation community. By stark contrast, a closed-model AI market would mean simply trusting proprietary developers to find and fix their own flaws.
Perhaps most importantly, undermining open sourcing would sever the collaborative connection between AI and entrepreneurship that promises to power extraordinary innovation across countless fields in coming years.
Entrepreneurs have always been America’s principal innovators, contributing the most significant, transformational, “disruptive” innovations that define the economic landscape. From the cotton gin and the steam engine, to the automobile, airplane, air conditioning and refrigeration, and wireless communication, entrepreneurs experiment and innovate in ways existing businesses cannot or will not. They are the intrepid trailblazers who map the frontier of economic progress.
Open-source AI puts one of the most powerful innovation tools in history in entrepreneurs’ hands. But SB 1047, by powerfully disincentivizing open sourcing, would short-circuit that critical collaboration, shutting millions of entrepreneurs out of access to AI models and tools and narrowing use of the technology to a handful of large companies.
On June 21, 2023, Senate Majority Leader Chuck Schumer delivered a major address in which he declared AI to be “world-altering” and “here to stay,” and proposed a framework for Congressional policy action called the SAFE Innovation Framework for AI Policy. “If people think AI innovation is not done safely, if there are not adequate guardrails in place, it will stifle or even halt innovation altogether,” he said. Establishing those guardrails is the appropriate role of government. But as Leader Schumer also warned: “Innovation must be our North Star…because the United States has always been a leader in innovating on the greatest technologies that shape the modern world.”
Simply put, if AI is a truly transformational technology – and it is – and if entrepreneurs are the engine of innovation and economic growth – and they are – then preserving the critical collaboration between the two is essential to America’s AI-powered future. SB 1047 is fatally flawed, profoundly wrongheaded, and a danger to that future. It should be substantially re-written or discarded.