News & Analysis

Academics Worry Over AI Cheating

Of course, one could ask whether privacy and intellectual property rights really matter to the teeming billions across the world.

There are only two types of people in the world today – those who love ChatGPT and those who haven’t used it yet! However, a third segment is slowly emerging out of the woodwork that thinks artificial intelligence or GenAI tools (they aren’t the same) can result in a mushrooming of data privacy violations and plagiarism.

Two days before ChatGPT completes one year since its launch, most users are on the same page with regard to its capabilities, its value and what they’re willing to pay for something that’s no more than a virtual assistant. Which is why this new report about academics becoming wary of GenAI tools appears to be a bit thick. These issues were always there. Why now?

Are the concerns real or just a wet blanket?

Imagine having a real assistant who runs away with your bank account username and password, or the combination to your vault. It’s not as if societal norms and laws do not exist for theft. Which is why the efforts by governments across the world to publish guidelines and best practices around AI seem as useful as an umbrella in a typhoon.

Of course, these learned men owing allegiance to some ed-tech companies in the US accept (80% of them do) that GenAI will add value to education, but some remain worried that accessibility issues (remember online learning courses during the pandemic?) and mixed messaging about the technology could cause havoc.

The world of binaries continues – once we were divided into banked and unbanked, mobile and non-mobile, digital and non-digital, and now AI and non-AI. At a time when the collective intelligence of humankind needs to be questioned (look at the violence against fellow humans and against nature), this new binary of ChatGPT users and non-users is precious too!

The questions are real, but the answers aren’t 

Anyway, coming back to the academics, a recent report said officials of ed-tech company Course Hero, who ran a four-week “AI Academy” course for 350 educators, faced some pointed questions. The first related to academic dishonesty among students and the exposure of personal data required to use GenAI tools.

Questions came thick and fast: What kind of data is required, and what information is needed before using the tool? Is it just a name and email, or would they require tax details and social security numbers? How are AI models trained, and where is the data coming from? How can implicit bias be detected, and what’s the solution to it?

Maybe ChatGPT could design a statutory warning such as the ones we find on cigarette packs. “This GenAI tool could be prejudicial to your native intelligence” sounds good for now! But, of course, who reads statutory warnings? As for academic dishonesty, maybe the same GenAI tools can be used to detect plagiarism and penalise it too.

Aren’t honesty and ChatGPT an oxymoron?

Nicole Jones Young, one of the participants in the course, is a professor at Franklin & Marshall College in Pennsylvania. The publication quotes her as suggesting that most faculty were concerned that students would plug in questions from an assignment and reproduce whatever ChatGPT says.

Yes, that’s a concern when online student assignments are a way of life. Things change when it comes to physical classes and assignments. Also, as we said before, there could be a new GenAI model being trained somewhere out there that can figure out what is a verbatim transcript of a ChatGPT answer and what isn’t. Google may not care about original content, but there are several other tools already available to run plagiarism checks.

The challenge is real, but change is inevitable

Of course, there is bound to be a short-term challenge, as existing plagiarism checks may not detect GenAI handiwork, given the massive data sets that such tools draw inputs from. But the time is not far off when even Google will be forced to run plagiarism checks on its indexed pages to see what’s original and what’s far from it.

That GenAI tools can function as a valuable set of tools for students and everyone else is no longer a secret. The challenge emerges only when we start listing the subversive uses of such a system. Maybe schools and universities should do away with examinations and rely instead on assessing a person’s learning ability mapped to their curiosity for knowledge.

In fact, a far more important issue that needs to be tackled relates to the various definitions flying around AI, GenAI and their kin. Google’s DeepMind did some early work, but there’s a lot that needs to be done before users and creators of AI tools land on the same page – a shared understanding that is essential in order to avoid miscommunication and misconceptions.