Emerging Technologies and Cybersecurity: Addressing Security Challenges in India

Image courtesy: https://thecybersecurityplace.com/is-there-a-weak-link-in-your-encryption-strategy/

By IC- Sukanya Mandal

Cybersecurity, the practice of protecting systems, networks, and data from destruction or cyber attack, faces new vulnerabilities and challenges in the age of emerging technologies. While artificial intelligence (AI), the Internet of Things (IoT), and blockchain offer tremendous benefits, they also bring forth security challenges that demand immediate attention. According to a Check Point report, the number of attacks per organisation per week in India increased by 18% during the first quarter of 2023 compared to the same period in 2022, averaging 2,108 attacks per organisation per week. As per the information reported to and tracked by CERT-In, the number of cybersecurity incidents during the years 2018, 2019, 2020, 2021, and 2022 was 2.08 lakh, 3.94 lakh, 11.58 lakh, 14.02 lakh, and 13.91 lakh respectively.

Generative Artificial Intelligence (AI) refers to a class of machine learning models capable of generating new data that resembles the data on which they were trained. Such AI systems have shown remarkable capabilities in producing images, text, audio, and other forms of content that are often indistinguishable from those created by humans. While generative AI holds enormous potential for innovation and automation across various industries, the increasing reliance on AI systems also exposes vulnerabilities that can be exploited by cybercriminals. Some of the top threats that consumers and businesses need to be aware of:

  • Deep Fakes: Generative AI can create highly realistic fake videos and images, known as deepfakes. These can be used for malicious purposes such as spreading misinformation, blackmail, and impersonating individuals for fraud.
  • Manipulative Content: AI-generated content can be used to manipulate opinions and emotions. This could include generating fake news articles, fake reviews, or social media posts that are designed to manipulate public opinion or consumer behavior.
  • Phishing and Social Engineering Attacks: Generative AI can be used to create highly targeted phishing emails or social engineering attacks. By using AI to analyze data about individuals, attackers can craft messages that are more likely to deceive recipients into revealing sensitive information or performing actions that compromise security.
  • Automated Hacking: Generative AI can be used to automate and optimize hacking attempts. By analyzing patterns in security systems, AI can generate new strategies for breaching defenses more effectively than human hackers.
  • Synthetic Identity Fraud: Generative AI can be used to create synthetic identities that combine real and fake information. These identities can be used to apply for credit, make fraudulent purchases, or deceive businesses and individuals.
  • Voice Impersonation: Generative AI can be used to create highly realistic synthetic voices. This could be used for voice phishing attacks, where attackers use a synthesized voice to impersonate a trusted individual over the phone and deceive victims into providing sensitive information.
  • IP Theft and Counterfeiting: Generative models can be used to replicate designs, art, or other intellectual properties. This could lead to an increase in counterfeiting and intellectual property theft, impacting businesses financially.
  • AI-generated Propaganda: Generative AI can be employed in generating propaganda content at scale, which can be used by malicious actors to influence public opinion for political or ideological purposes.
  • Data Poisoning Attacks: In data poisoning attacks, malicious actors can use generative AI to create data that, when fed into machine learning systems, can cause them to make incorrect predictions or decisions. This could be used to sabotage business competitors or manipulate automated systems.
  • Job Displacement: While not a malicious use, it’s important to recognize that generative AI could lead to job displacement in certain industries, such as content creation, as tasks become more automated.
  • Unintended Bias: Generative AI models could inadvertently perpetuate or amplify societal biases if they are trained on biased data. This could have serious implications in areas like hiring, lending, or any sector where AI is used to make decisions affecting individuals’ lives.
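To make the data-poisoning threat above concrete, here is a deliberately tiny sketch (not any real attack tool; the one-dimensional "benign vs. malicious" data and nearest-centroid classifier are invented purely for illustration) showing how a handful of mislabelled training samples injected by an attacker can shift a model's decision boundary:

```python
# Toy illustration of data poisoning: a few mislabelled training
# points shift a nearest-centroid classifier's decision boundary.

def centroid(points):
    return sum(points) / len(points)

def train(samples):
    # samples: list of (feature, label) pairs; label 0 = benign, 1 = malicious
    benign = [x for x, y in samples if y == 0]
    malicious = [x for x, y in samples if y == 1]
    c0, c1 = centroid(benign), centroid(malicious)
    # Classify a new point by whichever class centroid is nearer
    return lambda x: 0 if abs(x - c0) <= abs(x - c1) else 1

# Clean training set: benign features cluster low, malicious cluster high
clean = [(x, 0) for x in (1.0, 2.0, 3.0)] + [(x, 1) for x in (8.0, 9.0, 10.0)]
model = train(clean)

# Attacker injects malicious-looking samples falsely labelled benign,
# dragging the benign centroid toward the malicious region
poisoned = clean + [(9.0, 0), (10.0, 0), (11.0, 0)]
bad_model = train(poisoned)

print(model(6.0))      # clean model flags the borderline sample as malicious
print(bad_model(6.0))  # poisoned model now calls the same sample benign
```

The same borderline input is classified as malicious by the clean model but as benign by the poisoned one, which is exactly the kind of silent misclassification a poisoning attacker is after.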

When it comes to cyber hygiene, are the fundamentals sufficient?

The fundamentals of cyber hygiene, such as using strong passwords and enabling multi-factor authentication (MFA), are essential first steps in protecting against cyber threats. However, as the landscape of cyber threats continues to evolve, relying solely on these fundamentals may not be sufficient. Here are additional measures that should be considered for comprehensive cyber hygiene:

  • Regular Software Updates
  • Educating and Training
  • Using a VPN
  • Implementing Firewalls and Antivirus Software
  • Regular Backups
  • Monitoring and Logging
  • Secure Configurations
  • Access Controls and Principle of Least Privilege
  • Physical Security
  • Regular Security Audits and Penetration Testing
  • Incident Response Plan
  • Awareness of Legal and Regulatory Requirements

In a nutshell, while the fundamentals are crucial, comprehensive cyber hygiene requires a multi-layered approach to security. This involves not only technical measures but also education, policies, and practices that together build a culture of security.

Awareness and vigilance are key to mitigating these risks. Individuals and organizations should educate themselves on the capabilities and limitations of generative AI, and take appropriate measures to safeguard against its potential misuse. This includes staying informed about the latest security practices, keeping systems updated, and critically evaluating the authenticity of content and communications. Moreover, policymakers and regulatory bodies should take part in formulating guidelines and regulations that promote the responsible use of generative AI technology.

(The author is IC- Sukanya Mandal, Senior Member IEEE, and the views expressed in this article are her own)