News & Analysis

Govts Gear Up for AI Safety

Google creates a new organization under DeepMind, the research division behind Gemini and its other AI products

Earlier this month, a new AI safety organization in the UK claimed to have found ways the technology can deceive human users and produce biased outcomes. Right on cue, the US House of Representatives set up a task force on AI, though the response seemed hurried. Now, Google has joined the fray, seeking to deflect criticism of its flagship GenAI model Gemini by setting up a new organization under Google DeepMind, its AI research division.

The AI Safety Institute, announced last October, has now published its research into advanced AI systems built around large language models. In a nutshell, the report says the institute was able to bypass the safeguards of the LLMs that power chatbots such as ChatGPT and Gemini using basic prompts, obtaining assistance for both military and civilian purposes.

Chatbots go off the rails at times

In fact, several media reports have highlighted how these AI-powered chatbots make up unverified content and disinformation. One report shows how both Google's and Microsoft's chatbots invented Super Bowl statistics on the fly, while another notes how such chatbots often confidently fib and even contradict their own search results.

Then there was a report in The Wall Street Journal that said Microsoft's Copilot suite, also powered by GenAI models similar to Gemini, often makes errors when collating and presenting meeting summaries and spreadsheet formulas. For now, everyone blames such lapses on hallucination, the term for GenAI's tendency to make things up.

Covering the tracks and deflecting criticism

Small wonder, then, that Google is trying to cover its tracks and deflect some of the angst of policymakers, who are convinced that disinformation can be generated at the click of a button. The AI Safety and Alignment organization at DeepMind comprises existing teams working on AI safety along with specialized cohorts of GenAI researchers and engineers.

If we think the efforts made by the House of Representatives are tokenism, Google's efforts are all the more so, as the company hasn't shared details of how this organization will work or what new roles will be added. All we know is that a new team will focus on safety around artificial general intelligence (AGI), the hypothetical systems that could match or surpass human capabilities.

As for the US Congress, the task force aims to ensure that “America continues leading in this (AI) strategic area,” says Speaker Mike Johnson. The body, co-chaired by California Reps Ted Lieu and Jay Obernolte, looks like lip service at a time when AI is becoming the focus of technology investments across the US and the rest of the world.

Some may say it's a welcome sign of Congress doing something at a time when the technology is running rings around regulators and lawmakers globally, but partisanship and obstruction persist. Obernolte said House Republicans and Democrats will work together to create a comprehensive report detailing regulatory standards and congressional actions. But there seem to be few takers for this bravado!

India too is coming out with its AI regulation

All of this comes at a time when India, too, is working on a draft AI regulation framework. IT Minister Rajeev Chandrasekhar noted that the framework could be released in the June-July timeframe and would aim to harness AI for economic growth while addressing potential risks and harms.

The minister also pitched for a global governance framework to address issues of AI safety and trust, citing the “ubiquitous and boundary-agnostic nature” of the emerging technology, while underlining that India was determined to set up guardrails to prevent AI misuse and to build a homegrown talent pool.

“We will fully exploit the potential of AI but set up the guardrails as well to prevent misuse. We are today seen by the world at the forefront to harness AI technology. We are all for deploying AI across use cases, from farm to factories and we want to use AI for economic growth, healthcare, agriculture, and farmer productivity,” he told members of the apex IT industry body Nasscom.

When will these outcomes become reality?

Coming back to Google's latest move to create a safety research program: it is quite obviously an effort to catch up with the Superalignment team that OpenAI set up last July. Google says its new organization will work alongside Scalable Alignment, an existing AI safety research group in London exploring solutions to the technical challenges of controlling AI.

You could ask why two groups are working on the same problem. Well, when the defined outcome is to generate activity rather than results, the more, the merrier. To put things in perspective: if an enterprise needs more than one division to identify and fix AI issues, how long would a global governance framework take?

Maybe that’s a question we should ask ChatGPT and Gemini!