In a bid to prevent risks posed by advanced artificial intelligence, regulators are focusing on AI models trained with more than 10^26 floating-point operations (FLOP). That amount of computing, equivalent to 100 septillion calculations in total, is the trigger flagged by U.S. authorities, with a similar proposal in California that adds a minimum training cost of $100 million before a model falls under regulatory oversight.
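As a rough illustration of how such a rule works, the sketch below checks a model against the two thresholds described above. The function name, inputs, and the assumption that the California-style rule requires both the compute and cost criteria are hypothetical simplifications, not the text of any actual regulation.

```python
# Illustrative thresholds from the proposals described above (hypothetical
# simplification; real regulatory definitions are more detailed).
TRAINING_FLOP_THRESHOLD = 1e26          # total floating-point operations in training
TRAINING_COST_THRESHOLD = 100_000_000   # USD, per the California proposal

def triggers_oversight(training_flop: float, training_cost_usd: float) -> bool:
    """Return True if a model crosses both assumed thresholds."""
    return (training_flop >= TRAINING_FLOP_THRESHOLD
            and training_cost_usd >= TRAINING_COST_THRESHOLD)

# A model trained with 2 * 10^26 FLOP at a $150M cost would qualify;
# one at 5 * 10^25 FLOP would not, regardless of cost.
print(triggers_oversight(2e26, 150_000_000))  # True
print(triggers_oversight(5e25, 200_000_000))  # False
```

Note that the federal flag described above keys on compute alone; combining it with a cost floor, as sketched here, follows the California proposal.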
AI safety advocates see these benchmarks as critical for overseeing systems whose capabilities could lead to severe threats such as cyberattacks or the proliferation of weapons of mass destruction. However, critics, including venture capitalist Ben Horowitz, argue that such thresholds could hinder innovation and stifle industries hoping to use powerful models for breakthroughs such as curing cancer. While these regulations aim to control risk, experts on both sides expect the thresholds to evolve as AI technology advances.
Conclusion: As AI continues to develop rapidly, the debate over how to regulate the most powerful models is intensifying. While the current compute threshold is the key metric, the future will likely require more nuanced measures to ensure that safety and innovation coexist.