News & Analysis

Big Tech Pledges to Cut AI Risks

Of course, there are those who argue that such a pledge could be just one more way for these companies to keep AI development confined within their own ranks.

In what is being seen as a win for President Joe Biden’s White House, seven leading big tech companies have voluntarily committed to work jointly to reduce the risks involved in artificial intelligence. The US president met with Google, Microsoft, Meta, OpenAI, Amazon, Anthropic and Inflection last week to make this happen.

What are these companies promising?

Representatives of these companies agreed to emphasize safety, security and trust while developing artificial intelligence-based technologies. Their safety pledge focuses on three key areas: (a) ensuring the safety of their products before release, (b) building systems that put security first, and (c) winning the public’s trust, through the series of steps listed below:

  • Conduct internal and external security testing of AI systems before release
  • Share information on AI risk management across the industry and with governments, civil society and academia
  • Facilitate third-party discovery and reporting of any vulnerabilities
  • Develop robust technical mechanisms, such as watermarking, to ensure users know when content is artificially generated, and
  • Publicly report their AI systems’ capabilities, limitations and appropriate use

What set off the alarm bells?

Media reports suggest that the White House is currently developing an executive order, to be followed by bipartisan legislation, to regulate the AI industry. Discussions around regulating the industry have been ongoing since OpenAI launched ChatGPT in November last year.

However, with the arrival of GPT-4 in March this year, the challenges around the use of generative AI models came to the fore, especially when it was shown that this large language model could pass a bar examination. On the flip side, the same technology also resulted in chatbots spewing out erroneous answers and citing sources that sometimes did not exist.

Does voluntary regulation really help?

OpenAI’s latest bid to collaborate with governments and civil society organizations across the world to advance AI governance comes at a time when policy planners in several countries are themselves considering laws to govern AI systems. The company, which received a $10 billion investment from Microsoft, says the latest round of commitments from the big tech companies only reinforces those ongoing discussions.

Two key names missing from the list of Big Tech leaders who met with Biden were Apple’s Tim Cook and Elon Musk, whose new AI company, xAI, is already promising cutting-edge solutions in the not-too-distant future. Of course, the effectiveness of the AI safety pledge remains to be seen, as there has been virtually no discussion of who will monitor compliance, or how.

Cybersecurity experts warn that this isn’t enough

Cybersecurity experts are already voicing concerns over a voluntary system, specifically around threat actors who operate outside legal and ethical boundaries. Guardrails, they believe, only constrain those who respect such boundaries; if someone is dead set on using AI for destructive purposes, none of this will work, they argue.

Others are concerned about the vagueness of the current guidelines, which include investing in cybersecurity and insider-threat safeguards to protect proprietary and unreleased AI models. By doing so, enterprises are essentially safeguarding only their own intellectual property. The real challenge will come when binding regulations are introduced across the AI industry and become part of national-level security regulations around the world.

Is it over-regulation, though?

Some pointed to the HIPAA (Health Insurance Portability and Accountability Act) guidelines, which are themselves tough to implement, and wondered whether the potential introduction of binding regulations for the AI industry would make it so heavily regulated as to curtail creativity and innovation across the board.

What’s more, the voluntary commitments have been signed by a group of companies that will end up competing with one another in AI model development. Given the productivity boost that AI offers, every company will be keen to monetise the advantage of a better AI product over its corporate rivals. This is where a lack of transparency into AI applications could prove to be the real challenge. But more on that in another post at another time.

Detractors of the move say the US is already lagging behind China and the EU in AI regulation as well as in AI development. China recently revealed guidelines for the industry, while the EU released a draft AI Act last month that classifies AI systems by risk and proposes tougher compliance requirements for higher-risk systems.
