AI Players Unite – But What Next?
Leading AI companies are engaging with policymakers in the US, but whether this will lead to controlled development of the technology isn't clear
Ever since artificial intelligence shifted gears towards generative AI, there have been conflicting voices about its potential to ease lives and its capacity to create chaos in doing so. When OpenAI boss Sam Altman, whose company debuted ChatGPT, called for regulation, some joined hands while others cried foul, claiming it was an attempt to monopolize AI within Big Tech.
However, over the past few months, there has been consistent activity within this group as it seeks consensus on a regulatory framework. In the US, the innovators of Silicon Valley appear to have joined forces with the policy establishment of Washington, DC, as the wealthiest players in AI adopt a public policy approach to get their ideas implemented.
Writing in TechCrunch, Ben Kobren, a name closely associated with public policy thinking around AI companies, argues that engaging with policymakers is the right thing to do, but wonders whether it is all a facade, given that the US Congress typically takes years to get its act together.
China’s rules, and OpenAI’s push for its own
China, in fact, has led the world by announcing its own framework for AI development, one that seeks to “balance development and security”. The rules themselves aren’t complicated: they prohibit the use of AI for everything from pornography to terrorism and racism in any content format, with a warning that algorithms could adversely influence public opinion.
Across the seas, Altman first called for a global regulatory body for AI along the lines of nuclear oversight, while threatening to pull out of the EU, which is working on its own set of regulations. Almost immediately afterwards, he announced $100,000 grants for anyone who could create guardrails for the development of AI.
Which is why Kobren feels that, irrespective of the motivations, it matters that all the large AI model players are coming together and agreeing on broad safety principles and regulatory guardrails. “… it demonstrates just how seriously they view AI’s potential risks, as well as its unprecedented opportunities,” he writes in TechCrunch.
What’s required is consensus and transparency
He goes on to argue that never before has the private sector acted so concertedly around a technology, lobbying governments with such verve. Even in India, conversations around this topic have grown in the recent past, especially as discussions have veered towards “sovereign AI”, with its proponents being big bosses of big tech of Indian origin.
Of course, all of it started with Altman once again. When asked about India’s potential to develop AI models, he shrugged it off, leaving most Indian industry captains fuming. After a few chest thumps around the Chandrayaan mission, even the government has put its might behind the Indian IT industry’s AI mission to develop indigenous models that solve local problems.
Kobren argues that in the past Silicon Valley either ignored the US Congress or mocked it, but this time many executives appeared before legislative bodies to share their views. However, the real test comes now, when the divergent views need to be collated, differences ironed out and a draft policy framework created out of absolute chaos. Maybe ChatGPT itself has the answer!
Educate stakeholders, be honest and do not renege
Towards this end, Kobren, who is also a co-founder of CKR Solutions, a firm that supports public policy initiatives, says the industry needs to educate stakeholders about current AI models as part of its transparency efforts, and share newly discovered risks as they emerge. Companies must avoid ambiguity in agreements that would allow them to wriggle out later, and must raise the issues that concern them right up front, not afterwards.
Similarly, the personal outreach program should go beyond just the US Congress or the select individual lawmakers who, lobbyists think, make the most noise or are the biggest influencers in defining public policy. Getting industry think tanks and advocacy groups on board, especially those sounding the loudest warnings, would help create better guardrails.
Kobren is also critical of earlier efforts at public policy building, where Congress often struggled with technical questions and industry seldom stood up to help. Providing information when required and educating lawmakers will be critical to creating a good law, he argues, while noting the need for similar efforts at the state level.
Finally, he notes that the biggest challenge could be for the industry itself, where certain members may welcome regulation in public but speak in generalities while lobbying to remove aspects of the policy that do not work for them. Instead of being two-faced, they should clearly state why they are against a provision and then work with others to find a via media. It is better to be criticized for disagreeing than to be seen as lying, Kobren concludes.
Most of what the public policy expert says above fits the bill for India too, given what we have seen in recent times with the data privacy law, where the government was caught between the vested interests of two groups and kept oscillating between versions of the same draft legislation. It would be good if Indian industry listened to Kobren and built consensus now.