Regulation
Big Tech Companies Acknowledge AI Risks in Regulatory Filings
In a series of recent filings with the SEC, major technology companies including Microsoft, Google, Meta, and NVIDIA have highlighted the significant risks associated with the development and deployment of artificial intelligence (AI).
These revelations reflect growing concerns about AI’s potential to damage reputations, create legal liability, and attract regulatory scrutiny.
AI Concerns
Microsoft
The company expressed optimism about AI but warned that poor implementation and development could cause “reputational or competitive harm or liability” for the company itself. It pointed to the broad integration of AI across its offerings and the risks that come with those advances, citing concerns such as flawed algorithms, biased datasets, and harmful content generated by AI.
Microsoft acknowledged that poor AI practices could lead to legal, regulatory, and reputational issues. It also noted that current and proposed legislation, such as the EU AI Act and the US AI Executive Order, could further complicate the deployment and acceptance of AI.
Google
The filing echoes many of Microsoft’s concerns, highlighting the evolving risks associated with its AI efforts. The company identified potential issues related to harmful content, inaccuracies, discrimination, and data privacy.
Google highlighted the ethical challenges posed by AI and the need for significant investment to manage these risks responsibly. The company also acknowledged that it may not be able to identify or address all AI-related issues before they arise, which could lead to regulatory action and reputational damage.
Meta
The company said it “may not be successful” in its AI initiatives, which expose it to business, operational, and financial risks. It warned of significant hazards, including the potential for harmful or illegal content, misinformation, bias, and cybersecurity threats.
Meta expressed concerns about the changing regulatory landscape, noting that new or increased controls could negatively impact its business. The company also highlighted competitive pressures and challenges posed by other companies developing similar AI technologies.
NVIDIA
The company did not devote a dedicated section to AI risk factors, but it addressed the issue extensively in its discussion of regulatory risks. It covered the potential impact of various laws and regulations, including those related to intellectual property, data privacy, and cybersecurity.
NVIDIA highlighted specific challenges posed by AI technologies, including export controls and geopolitical tensions. The company noted that the increasing focus on AI by regulatory authorities could lead to significant compliance costs and operational disruptions.
Along with the other companies, NVIDIA cited the EU AI Act as an example of a regulation that could expose it to regulatory action.
Risks are not necessarily probable
Bloomberg was the first to report the news on July 3, noting that the disclosed risk factors are not necessarily probable outcomes. Rather, the disclosures are an effort to avoid being singled out as responsible if those risks materialize.
Adam Pritchard, professor of corporate and securities law at the University of Michigan Law School, told Bloomberg:
“If a company fails to disclose a risk that its peers are disclosing, it may become the target of legal action.”
Bloomberg also identified Adobe, Dell, Oracle, Palo Alto Networks and Uber as other companies that have disclosed information about AI risks in SEC filings.