The U.S. government has unveiled a new regulation aimed at restricting global access to advanced artificial intelligence (AI) chips and technology. This move is designed to maintain control over the sophisticated GPUs (graphics processing units) required for cutting-edge AI applications, such as those used in training large-scale models like OpenAI’s ChatGPT.
GPUs, originally developed for graphics rendering, have become essential in AI because they can process massive datasets in parallel. Industry leaders like Nvidia produce advanced GPUs such as the H100, which are crucial for training AI models. To curb the proliferation of these powerful chips, the U.S. is imposing limits based on total processing performance (TPP). Restricted countries will face a cap of 790 million TPP through 2027, equivalent to approximately 50,000 H100 GPUs.
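The cap-to-GPU equivalence above can be sanity-checked with quick arithmetic. The sketch below assumes the export-control convention that TPP is peak throughput (in TOPS/TFLOPS) multiplied by the operand bit length, and assumes an H100 delivers roughly 989 TFLOPS of dense FP16 tensor throughput; neither figure is stated in the article.

```python
# Rough sanity check of the 790 million TPP cap described above.
# Assumptions (not from the article): TPP = peak dense FP16 TFLOPS x 16 bits,
# and an H100's dense FP16 tensor throughput is about 989 TFLOPS.
H100_FP16_TFLOPS = 989.4          # assumed per-chip throughput
H100_TPP = H100_FP16_TFLOPS * 16  # roughly 15,830 TPP per H100

CAP_TPP = 790_000_000             # the 790 million TPP cap

equivalent_gpus = CAP_TPP / H100_TPP
print(f"~{equivalent_gpus:,.0f} H100-class GPUs under the cap")
```

Under these assumptions the cap works out to just under 50,000 H100-class chips, consistent with the figure cited in the regulation.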
“This is sufficient to power global-scale AI operations, from chatbots to real-time fraud detection systems,” said Divyansh Kaushik, an AI expert at Beacon Global Strategies.
The regulation offers exemptions for entities that meet specific criteria, such as Amazon Web Services and Microsoft Azure. These companies, classified as “Universal Verified End Users,” are not subject to caps, enhancing transparency and reducing the risk of unauthorized transfers. National authorizations also allow select firms to access up to 320,000 GPUs over two years.
For smaller GPU orders (up to 1,700 H100 chips), only a government notification is required, facilitating faster distribution to low-risk sectors like universities and research institutions. Gaming GPUs are excluded from these restrictions.
Eighteen destinations, including key U.S. allies like Canada, Germany, Japan, and South Korea, are exempt from the caps.
In addition to controlling chips, the U.S. is setting security standards for “model weights,” the numerical parameters that encode what an AI model has learned during training. This aims to safeguard proprietary models, ensuring that advanced AI technologies remain in secure environments.
Overall, the regulation underscores the U.S. strategy to maintain its leadership in AI innovation while safeguarding sensitive technology from potential misuse.