New York’s AI Safety Bill Sparks $100M Super PAC Battle

According to CNBC, a bipartisan super PAC called “Leading the Future” launched in August with over $100 million in funding and is now targeting Democratic congressional candidate Alex Bores for championing New York’s RAISE Act. The bill would require large AI companies to publish safety and risk protocols while disclosing serious safety incidents. The super PAC is backed by high-profile tech figures including OpenAI President Greg Brockman, Palantir co-founder Joe Lonsdale, Andreessen Horowitz, and AI startup Perplexity. Bores, who has served as a New York State Assembly member since 2023 and previously worked at Palantir, co-sponsored the legislation and launched his congressional campaign in October after Representative Jerry Nadler announced he wouldn’t seek reelection.

The real stakes in this fight

Here’s the thing – this isn’t just about one bill in one state. This is shaping up to be the opening battle in what could become a nationwide war over who gets to regulate AI. The super PAC’s position, which aligns with the Trump administration’s view, is that federal laws should preempt state regulations. Basically, they want one set of rules for the whole country rather than dealing with different requirements in New York, California, and other states.

But why are they spending millions to target a single state assemblyman? Because if New York passes this kind of legislation, it could create a domino effect. Other blue states might follow suit, and suddenly AI companies would be dealing with a patchwork of regulations across the country. That’s exactly what the tech industry wants to avoid – they’d rather fight this battle once at the federal level than 50 times in different states.

What the RAISE Act actually requires

Looking at the RAISE Act text, the requirements are actually pretty reasonable. Large AI companies would need to publicly share their safety protocols and report “serious safety incidents.” We’re not talking about crushing innovation here – it’s about basic transparency. Bores himself says he’s “very bullish on the power of AI” but recognizes that the same technology that could cure diseases could potentially be misused to build bioweapons.

So what’s the big deal? The industry’s concern, as expressed in their official statement, is that state-level regulations could “stifle innovation” and help China gain AI superiority. But is that the real concern, or are they worried about having to be more transparent about their safety practices?

Why this matters beyond New York

This fight has implications for every stakeholder in the AI ecosystem. For developers and startups, state-level regulations could mean compliance headaches and increased costs. For enterprises using AI, it might mean more confidence in the technology’s safety but also potential restrictions. And for users? Well, better safety protocols could mean more trustworthy AI systems, but there’s always the risk that over-regulation slows down beneficial innovations.

The timing is particularly interesting given that Representative Nadler’s retirement created this opening. Bores is positioning himself as the tech-savvy candidate who understands both the potential and risks of AI, as he’s been highlighting on his social media. But now he’s facing what could be millions in opposition spending from some of the biggest names in tech.

What’s fascinating is that this isn’t a traditional left-right political battle. You have a Democratic assemblyman who actually worked in the tech industry now being targeted by a bipartisan super PAC whose backers include the co-founder of his former employer. The lines are being drawn in unexpected ways, and this is probably just the beginning of how AI will reshape our political landscape.
