While the world is going ga-ga over the power of AI-led content creation, the scientific community seems less than thrilled about it.
At a time when the world is falling in love with ChatGPT and other AI-based bots, raving about their ability to write everything from a leave letter to a poem and everything in between, one community is frowning. It's the scientific and publishing community, which has been busy throwing out new submissions after discovering AI's role in writing them.
Clarkesworld, a magazine known for publishing futuristic content, has suspended new submissions from authors after receiving a flood of AI-generated stories. A report in ZDNet quoted founder Neil Clarke as saying that spam submissions had touched 500 in February, up from around 25 through all of 2020, when the problem first appeared.
This comes at a time when ChatGPT-led Bing isn't exactly endearing itself to the world, issuing death threats, attempting to break up marriages and generally exposing a more Orwellian side that shows a desire to fight for its own survival. All this just a matter of days after Microsoft launched a limited preview of its AI-powered search engine.
It's not just submissions; even books are fraudulent
Clarke says there was a surge when OpenAI released ChatGPT in November, with plagiarized or AI-generated submissions doubling to 50 a month and then jumping to 100 in January. As the AI chatbot's popularity grew, these climbed to 350 by mid-February, and when the count crossed 500 on February 20, Clarkesworld just pulled the plug.
In fact, the ZDNet report says AI-generated submissions are fast becoming a pain not just for magazine publishers but for others too. A Reuters report found 200-plus AI-generated eBooks on Amazon's Kindle Store where ChatGPT was disclosed as the author. The material includes fiction, self-help books and even illustrated children's books.
The real number could well be much higher, given that those submitting books through Amazon's Kindle Direct Publishing self-publishing unit may not disclose that the content was AI-generated. The Reuters report quoted Mary Rasenberger, executive director of the writers' group the Authors Guild, as saying that human ghostwriting has a long tradition and now machines are doing that job. Both require transparency and a modicum of ethics.
Meanwhile, Amazon offered a standard answer: all books on its store must adhere to content guidelines, which include respecting intellectual property rights and other applicable laws. Of course, it is quite obvious that these exist only on paper, and Amazon has no way to identify and weed out cases that subvert its own system. "It's a custom more honored in the breach than in its adherence," says an official of the company based in Bangalore.
Policies are in place, but who’s listening?
Of course, not all publishers are like Amazon. Clarkesworld has a clear policy that was recently tweeted by Clarke himself. "We are not considering stories written, co-written, or assisted by AI at this time. Our guidelines already state that we don't want AI-written or assisted works," he said, adding that "they don't care, a checkbox on a form won't stop them. They just lie."
Meanwhile, developer Q&A site Stack Overflow continues its ban on AI-generated submissions after its moderators were flooded with plausible-sounding but incorrect answers within a week of ChatGPT's public release. The website reported thousands of posts generated by the AI bot in those early weeks, leading to the ban.
Are there fixes or are we doomed?
Indeed, it's funny that the AI bot continues to attract attention. One report said ChatGPT cleared six tests, including a Google coding test, besides scoring a B to B-plus in a business management course. However, the same report said it struggled with sixth-grade math. "Well, you know how it is? We are providing it with human intelligence and then going ga-ga over its capabilities. Those of us from the 1980s used to remember a host of phone numbers and knew our multiplication tables. With the advent of calculators we lost the latter ability and with smartphones, the former," says a senior professor at IIT Madras.
So, is there a way to identify such content? Clarke believes there are tools available for detecting machine-written text. In fact, even OpenAI released a free classifier tool to detect AI-generated text, while admitting its imperfections. Currently, it correctly classifies only 26% of AI-written text as such.
For his part, Clarke says publishers could take several approaches in addition to third-party detection tools. These could include blocking submissions made over a VPN or from regions associated with a higher percentage of fraudulent submissions. But he cautions that it won't be easy: "There is no solution at the moment. If the field cannot find a way to address this situation, things will begin to break," says Clarke.
Once it breaks, maybe we can ask ChatGPT itself how to fix it!