Leading AI Companies Reverse Position, Push for Lighter Regulation Under New U.S. Administration
Tech giants have walked back their earlier support for AI regulation, seeking broader access to data and exemptions from copyright restrictions amid intensifying global competition.
In May 2023, executives from prominent artificial intelligence firms including OpenAI, Google's DeepMind, and Anthropic urged U.S. lawmakers to establish federal regulations for the swiftly evolving AI industry.
Their requests for oversight emphasized the need to address existential risks associated with advanced AI systems, suggesting measures like algorithmic audits, content labeling, and shared risk data among companies.
At that time, the U.S. administration coordinated with AI developers to adopt voluntary commitments intended to ensure the safety and fairness of AI technologies.
In October 2023, a presidential executive order codified these principles, directing federal agencies to assess the potential implications of AI systems for privacy, workers' rights, and civil liberties.
After a change in administration, the approach to AI policy saw a significant transformation.
In the first week of the new presidential term, an executive order was issued to revoke the previous administration's directives and advocate for policies that bolster American AI capabilities.
The new order called for the development of a national strategy to remove regulatory obstacles within 180 days.
In the weeks following the policy shift, AI companies submitted documents and proposals to help define the new framework.
One fifteen-page document from OpenAI urged the federal government to stop individual U.S. states from establishing their own AI regulations.
It also pointed to the Chinese AI firm DeepSeek, which built a competitive model using far fewer computational resources than typical American models, to argue that federal policy should grant developers broader access to data for model training.
OpenAI, Google, and Meta have also lobbied for expanded rights to utilize copyrighted materials—including books, films, and artworks—for training AI models.
All three companies are currently facing ongoing legal challenges related to copyright infringement.
They have sought executive clarification or legislative measures to affirm that using publicly available information for model training constitutes fair use.
A significant U.S. venture capital firm also presented a policy paper advocating against any new AI-specific regulations, asserting that existing consumer safety and civil rights laws are adequate.
The firm recommended penalties for actors who cause demonstrable harm while opposing rules that would impose regulatory burdens based on speculative risks.
This policy shift aligns with increasing apprehensions among AI developers regarding escalating global competition.
During the prior administration, major U.S. companies operated under the belief that their substantial investments and computational advantages provided a durable edge, especially as restrictions were placed on exporting advanced AI chips to nations like China.
Recent events, including advanced models being released by smaller foreign competitors, have tested this assumption.
Some U.S. AI firms have reassessed the extent of their technological lead and are now pursuing faster access to data and computing resources along with fewer regulatory constraints.
This reassessment has resulted in a notable change in industry lobbying, with leading AI companies now prioritizing competitive positioning over their previous calls for cautious and collaborative oversight.