U.S. Government Implements Mandatory Safety Testing for Frontier AI Models
New Regulatory Framework for AI Safety

The United States government, through the National Institute of Standards and Technology (NIST), has formalized agreements with major technology firms including Google DeepMind, Microsoft, and xAI to subject their most powerful artificial intelligence models to rigorous national security testing before public release. This initiative marks a significant shift in…
