By Ankush Tiwari
In the digital age, where technology seamlessly integrates into everyday life, cyber threats have evolved in complexity and scale. As AI technologies emerge, so do opportunities for malicious actors.
A few months ago, the World Economic Forum warned in a report that the cybersecurity industry faces a global shortfall of nearly 4 million professionals. At a time when AI-enabled cyber threats are surging, the AI ecosystem lacks the required defences. A Skillsoft survey of over 5,100 global IT decision-makers confirmed the disquiet, with AI and cybersecurity ranking at the top of IT leaders' investment priorities. Because AI and cybersecurity overlap considerably, it is reasonable to conclude that the largest chunk of global technology investment will be channelled towards AI that can mitigate cybersecurity threats.
Deepfake Frauds: The Dark Side of AI
Deepfakes are AI-generated, hyper-realistic images, audio or videos that are indistinguishable from real ones. Often undetectable to the naked eye, deepfake scams exploit AI-powered digital interactions to defraud and defame individuals and organisations. These scams rely on social conditioning to lower the guard of even tech-savvy professionals. CEOs and CFOs with access to sensitive information are human too and equally vulnerable, whether they accept it or not. The cost of a damaging deepfake of a CFO is enormous: it can wreak havoc on the company internally and on its share price in the stock market. Even a temporary jolt can wipe thousands of crores off its market cap.
The deepfake of the NSE CEO that prompted the stock exchange to issue a clarification is a warning shot for the CEOs of all listed companies. Companies are just one deepfake away from instability if they lack preventive mechanisms against such AI-enabled attacks.
Imagine a cybercriminal or short seller who wants to profit from a collapse in share prices. They only need audio clips, images, and footage of a company's CXOs to devise a harmful deepfake. In most instances, social media handles, websites, and YouTube channels feature the required videos, making them readily available to crooks. With AI tools, they can create and spread a deepfake in which, for instance, the CEO, CFO or auditor expresses concern that the recently released annual report failed to adequately disclose the going-concern assumption required under IND AS. If such a video goes viral overnight, mutual funds, HNIs, and hedge funds will dump the stock in the first trading session, while the company's management wakes up to nosediving share prices before even realising what struck them.
Welcome to the era of AI, where you unwittingly give up control over your data and must either pay a ransom or fall prey to maligning deepfakes. Apart from training employees, deepfake detection tools that rely on AI models to uncover fakery and alert humans are a necessity today. Enterprises and governments worldwide are investing in these detection mechanisms to protect sensitive information and prevent the misuse of synthetic media.
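To make the idea of automated detection concrete, here is a deliberately simplified sketch of one class of signal such tools inspect: frequency-domain artifacts that some image generators leave behind. This is purely illustrative — the function names, the frequency-band heuristic, and the threshold are assumptions for the example, not any vendor's actual method; production detectors use trained deep-learning models on far richer features.

```python
import numpy as np

def high_freq_ratio(gray: np.ndarray) -> float:
    """Fraction of spectral energy lying outside a central low-frequency band.

    Some synthetic images show unusual spectral signatures; this toy metric
    just measures how much energy sits away from the low frequencies.
    """
    # 2D FFT magnitude, shifted so low frequencies sit at the centre
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    radius = min(h, w) // 8  # low-frequency radius (illustrative choice)
    y, x = np.ogrid[:h, :w]
    low_mask = (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2
    return float(spectrum[~low_mask].sum() / spectrum.sum())

def looks_synthetic(gray: np.ndarray, threshold: float = 0.5) -> bool:
    # The threshold is a made-up constant for demonstration only;
    # real systems learn decision boundaries from labelled data.
    return high_freq_ratio(gray) > threshold
```

A smooth, natural-looking gradient concentrates energy at low frequencies and is not flagged, while high-frequency-heavy content is — which is all this toy heuristic can distinguish. Real detection pipelines combine many such signals with learned models and human review.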
Cyberbullying: AI as a Shield
Cyberbullying is another growing concern in today’s hyper-connected society. Unlike traditional forms of bullying, cyberbullying leverages anonymity and reach, often leaving victims feeling powerless and exposed. The emotional toll on victims is profound, leading to anxiety, depression, and, in extreme cases, self-harm.
With the evolution of AI, we have entered an era where you may be bullied by AI language models. They can replicate the behaviour patterns of bullies to coerce victims into submission. Combined with deepfakes of police officers or government agencies, they can mentally wear a person down until their objective is met. AI tools that detect synthetic content and activity in IT systems are becoming a prerequisite for digital safety and mental well-being.
India has already seen multiple instances of "digital arrest" scams, in which deepfakes posing as Income Tax officers or police bully victims and extort money by threatening to frame them in false cases and drug deals. In many instances, elaborate stories about victims' loved ones were spun to make the threat seem credible. Such scams, mounted from behind deepfakes of real officials, are set to rise, and enterprises and governments need AI safeguards to counter them. If a man can run a fake court in Gujarat for years without getting caught, as discovered in Ahmedabad, cyberbullying is hardly a far-fetched threat.
Identity Fraud: A Persistent Cyber Threat
Identity fraud has been a longstanding issue in cybersecurity, exacerbated by the digital transition of personal and financial data. Fraudsters exploit stolen credentials to impersonate individuals, access sensitive information, or conduct unauthorised transactions. Traditional methods of combating identity theft, such as password protection and manual verification, are increasingly insufficient against sophisticated techniques like phishing and data breaches.
Does your Aadhaar card photo resemble you closely? Unlikely — the picture quality often makes it look like a distant cousin. In IT systems that use face recognition or face verification, your laptop camera may produce similarly poor output. What if AI is used to create better pictures, audio and videos to open fake bank accounts in your company's name and carry out benami transactions? Not only might the face of key managerial personnel look more realistic in the fake account than in the real one, the company will also struggle to convince a human banker that it is not you. The risk is many times higher for financial institutions, where video KYC is part of routine customer onboarding. The only feasible way to mitigate these threats is to deploy AI safeguards that detect deepfakes and raise alerts, rather than doing away with video KYC altogether.
Conclusion
A couple of years ago, AI mania struck the world when Internet users first experienced the wonder called ChatGPT. Big Tech firms have been scrambling since then to build innovative ecosystems and edge each other out in the AI arms race. Advancements in AI, while transformative, have also introduced significant cybersecurity challenges, as rogue actors increasingly exploit the accessibility and capabilities of these technologies. Hitherto unknown threats and novel means of perpetrating cyber attacks are expected to emerge as AI grows more sophisticated. As AI's influence rises, it is heartening that IT decision-makers are rightly focusing their investments on cybersecurity and AI together.
(The author is Ankush Tiwari, Founder & CEO, pi-labs.ai, and the views expressed in this article are his own)