The Arguments Against California’s AI Bill Are Classic Diversionary Tactics — and Nothing More
In politics, when you don’t want something to happen — a bill, a policy, a regulation, an idea — but you also don’t have a good reason for it not to happen (other than that it’s not good for you), here’s what you typically do:
1. Attack the motives or character of the person proposing the bill or idea or policy you don’t like; and/or
2. Raise vague but threatening visions of what theoretically could happen if said policy/bill/idea happens.
This is exactly what the opponents of California’s new AI legislation — the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) — have done. And it’s why California Governor Gavin Newsom should sign the bill immediately.
The most common objection raised to the bill has been the good old strawman that any regulation of any technology will stifle innovation. There are plenty of regulations and taxes that genuinely can stifle innovation. California has considered an unrealized gains tax; that would completely stifle innovation. Cutting research and development funding, grants, and budgets stifles innovation. The FTC’s position against virtually all mergers and acquisitions ultimately stifles new company funding and formation, and with it, innovation.
But in this case, the biggest threat to AI is not regulation. It’s that the unintended consequences of AI are so great that people decide it’s too dangerous to explore at all (many prominent experts have already called for pauses in AI development of any sort).
You know what would give people comfort and reassurance? The sense that there are some guardrails in place to ensure that AI models aren’t developed and launched without any regard for public safety. That’s what SB 1047 does — it sets reasonable standards for safety testing, just like we do for cars, just like we do for planes. Would anyone reasonably argue we shouldn’t regulate how cars and planes are built? Would anyone get in a car or plane that had evaded all regulatory review, testing, and standards? Of course not. The same holds true here, only the risk with AI is even greater.
The second main argument against the bill revolves around money — threats that either AI jobs will flee California or that AI companies will avoid doing business in California because the new rules are so burdensome. First of all, the rules are pretty light-touch and will not cost AI companies material amounts of money (as an early-stage venture investor, I can tell you that even the newest AI companies with the least traction and revenue are able to easily raise tens of millions of dollars, so money is not the problem). Second, that’s what you always say when you don’t have a good objection to a policy or a bill. You claim that, somehow, it’ll cost jobs, knowing that jobs and the economy always rank among the top concerns of voters, and maybe that’ll scare politicians away.
California is the biggest market in the nation. If it were its own country, California would have the world’s fifth-largest economy. AI companies are going to abandon the California market just so they don’t have to comply with basic safety testing and regulations? Other states are likely to follow California’s model next year, which means the threat then has to expand to “AI companies won’t want to do business in the United States at all.” Really? They’re going to avoid the world’s most important and lucrative market? Of course not. This objection doesn’t pass even basic scrutiny.
The third argument is an accusation that the bill is designed to lock in the advantages of large AI companies to the detriment of smaller AI startups, which will have a harder time bearing the costs of complying with the law. The sponsors of the bill were then attacked for being too cozy with big tech and secretly doing its bidding.
If that were the case, why did OpenAI oppose the bill? Why did Meta oppose the bill? Google. Apple. IBM. Microsoft. Amazon. All against the bill. Plus all of the industry associations that represent the big players — TechNet, the Consumer Technology Association and the California Chamber of Commerce — vociferously opposed the bill. Companies and industry associations don’t oppose legislation that they want to see happen.
The argument that the bill is bad for small tech companies would be more credible if the big tech companies at least pretended to like the bill, so you could then frame it as David vs. Goliath and take David’s side. They didn’t even bother to do that. The provisions of SB 1047 would apply exclusively to the largest AI models: those trained using more than 10^26 floating-point operations (FLOPs) of computing power, at a training cost exceeding $100 million. And if the bill’s sponsors were in the pocket of big tech, why would they fight tooth and nail to pass a bill that big tech actively tried to kill?
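For a sense of scale, here is a rough back-of-the-envelope sketch in Python. It leans on the common rule of thumb that training a transformer takes roughly 6 FLOPs per parameter per training token; that approximation is mine, not language from the bill.

    # Rough sketch (my assumption, not the bill's text): checking whether a model
    # would be "covered" under SB 1047's two thresholds, using the common
    # ~6 FLOPs per parameter per training token estimate for transformers.

    COMPUTE_THRESHOLD_FLOPS = 1e26     # bill's compute threshold (total training FLOPs)
    COST_THRESHOLD_USD = 100_000_000   # bill's training-cost threshold

    def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
        """Back-of-the-envelope estimate: ~6 * parameters * tokens."""
        return 6.0 * n_parameters * n_tokens

    def is_covered_model(n_parameters: float, n_tokens: float, cost_usd: float) -> bool:
        """A model is covered only if BOTH thresholds are exceeded."""
        return (estimated_training_flops(n_parameters, n_tokens) > COMPUTE_THRESHOLD_FLOPS
                and cost_usd > COST_THRESHOLD_USD)

    # A hypothetical 1-trillion-parameter model trained on 15 trillion tokens
    # comes out around 9e25 FLOPs: enormous, and still under the line.
    print(f"{estimated_training_flops(1e12, 15e12):.1e}")  # 9.0e+25
    print(is_covered_model(1e12, 15e12, 150_000_000))      # False

The point of the sketch is simple: no startup is training models anywhere near that scale, so the compliance burden the critics invoke falls only on the handful of labs that can spend nine figures on a single training run.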
The fourth argument revolves around concerns that the bill focuses too much on hypothetical risks. But the entire concern around AI is about hypothetical risk. No one thinks it’s bad that you can use a large language model to craft a better PowerPoint or figure out how the NFL could play a game every day of the week (although my attempts to learn this seem to have stumped the models so far). Many, many people are worried that creating a fully sentient being through AI could lead to a power that humans ultimately can’t control. Right now, that’s not the case. But hypothetically, it could be. That’s exactly why we need the bill. The bill mandates a safety protocol to prevent misuse of covered AI models, including the implementation of an “emergency stop” button: the ability to enact a full shutdown of a covered model.
Nancy Pelosi and many members of the California congressional delegation oppose the bill. Their arguments are just as vague and unconvincing as the objections above. So why do they oppose it? My guess is the culprit is what it always is: politics. When the biggest companies and industry associations ask for an easy favor, likely holding out the promise (or threat) of campaign funds in an election year, especially on a state issue where you’re doing nothing more than writing a letter, you tend to find a way to make them happy. Maybe that’s not what happened here, but any other explanation strains credulity.
It’s not as if the legislature even took Pelosi’s opposition seriously: the bill passed the Assembly by a margin of 41-9 and passed the Senate 32-1 with 7 abstentions. Gavin Newsom’s ambition, like that of the other 49 governors, 100 U.S. senators, and 435 House members, is to be president one day. The soonest he can even try is four years from now, and that’s only if Harris loses this November and isn’t the incumbent running for re-election.
By then, any promises of support or funding from the big tech companies will be long forgotten. But if he vetoes the bill and concerns about AI do materialize between now and his run for the White House, he owns any and all problems posed by AI. Why take that political risk over an issue whose trajectory absolutely no one can predict?
The arguments against this bill are nothing more than classic political diversions and disinformation tactics. They deserve no credence. Newsom should sign the bill. It shouldn’t even be a tough call.