News & Analysis

Digital Public Spaces and AI 

The potential for AI to create trouble in the field of copyright and IP is coming to the fore

Deepfakes are passé now. The possibilities that artificial intelligence (AI) opens up for mischief make them look like kindergarten stuff. Ask British voice actor Greg Marston, who recently came across an AI-generated clone of his voice saying things he would never dream of: lines he had recorded at a session for IBM surfaced, synthesized, on a Wimbledon website!

Of course, there was some egg on IBM's face as it stepped in to fix things, first confirming the existence of this spoof (if one may call it that) and then saying it was discussing the matter with the voice actor. One might still ask how the company allowed voice recordings to be used to train an AI model that could generate a synthetic voice.

The big tech and the bigger challenge

If Big Tech can cheat big like this, imagine what non-state actors can do. Imagine Amitabh Bachchan's baritone, cloned from recordings of his Kaun Banega Crorepati (KBC) episodes, being put to nefarious use. Though the creative minds in India have yet to wake up to this very real threat, their counterparts overseas have already started petitioning the powers that be over how AI could destroy digital public spaces.

Reports of several artists removing their work from X, formerly Twitter, have surfaced in the recent past. The company, now led by Elon Musk, has gone on record saying it would use data from the platform to train AI. Remember, Hollywood actors and writers are on strike partly to stop their work from being fed to AI models that could replace them.

Chatbots are cheats and we know it

In fact, several news outlets in the US are said to have added bits of code to their websites so that AI chatbots cannot scrape their content in one fell swoop. The opportunities are endless, and so are the calamities: authors are already suing AI companies for using their books to train models, testing the very foundations of copyright law and intellectual property.
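The "bits of code" in question are typically directives in a site's robots.txt file. A minimal sketch of what such a block might look like, assuming the crawlers honor it (GPTBot and CCBot are the published user-agent tokens for OpenAI's and Common Crawl's crawlers; compliance is voluntary on the crawler's part):

```text
# robots.txt at the site root
# Ask AI crawlers not to fetch any pages
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Regular search crawlers remain unaffected
User-agent: *
Allow: /
```

This is advisory only: a crawler that ignores the Robots Exclusion Protocol can still scrape the site, which is partly why publishers are also pursuing legal and regulatory remedies.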

Of course, content creators are questioning the quality and authenticity of content generated by chatbots such as ChatGPT. A recent report said NewsGuard had found 475 AI-generated news and information websites. Now imagine what Google's search algorithms would do when users seeking information are fed data spewed out by a chatbot. Google, of course, is itself part of this growing AI brigade with its Bard chatbot.

Not just regulation, there’s also ethics

So, what does this mean for the Internet as a whole? It is a bit like individual farmers overgrazing common land and destroying its value in the process, says Julia Angwin in an article published by The New York Times. It is these commons of the internet that stand to be destroyed, she says, especially spaces where volunteers share knowledge in good faith, such as Wikipedia.

And this is where unscrupulous Big Tech companies are letting their AI cattle loose, aiming to feed them all of human wisdom and expertise so that their profits can soar on technology that removes the human interface and, with it, empathy. Imagine what would happen if the whole of Wikipedia were used to train an AI model, asks Angwin.

Transparency is a must have

This is where regulation could play a key role, and the efforts under way in the European Union could form the backbone of future AI regulation. The first step, of course, is to build transparency into generative AI systems, so that owners of original content know when their data is being used to train AI models.

Angwin says this should be a prerequisite for future work on AI models, as everything known so far has come from journalists digging up the murky data beneath the chatbots. A report published in The Atlantic said more than 170,000 pirated books were used as training data for Meta's AI model, Llama. The Washington Post, meanwhile, noted that ChatGPT had been trained on data scraped without consent from thousands of websites.

While disclosure should be the first step, ethical standards also need to be adopted so that cases such as Greg Marston's are handled in a way that deters others from overstepping. For the Big Tech companies, the only things that hurt are massive fines and plenty of negative publicity.

And until global regulators come up with rules that guarantee both, we might as well say adios to the digital public spaces that give the world its last semblance of sanity.
