News & Analysis

Who are the Internet’s Bad Actors?

The internet was never a safe place. With the rise of AI capabilities, doomsday predictions have grown exponentially, prompting us to probe a little deeper

The Internet can never be a safe space, at least not as long as human greed for money exists. The best example is the way big tech companies have used our data to grow their revenues within the realms of legal prudence (not always!), while others who seek to lay their hands on the same data illegally are dubbed bad actors. In this scenario, all we users can do is stay safe and follow some precautions.

Heck no! The purpose of this post isn’t to define ethics or morality. So, in case that opening paragraph put you off, our apologies. We request you to read on, as the topic under review relates more to how everything appears to be fine so long as one doesn’t get found out. As was the case with a little-known hosting company that allegedly helped state-sponsored hackers.

When the bad actor is at home itself

A report published by TechCrunch quoted researchers at cybersecurity company Halcyon as claiming that Cloudzy, a small web hosting and internet services company registered in the US, had knowingly or unknowingly provided services to more than 20 state-sponsored hacking groups and commercial spyware operators.

Halcyon reported that Cloudzy was acting as a command-and-control provider for some hacking groups, allowing them to host virtual private servers and other anonymizing services used by ransomware affiliates to carry out cyber extortion. Names that Halcyon dropped include the Chinese espionage group APT10, North Korea’s Kimsuky and Moscow-linked Nobelium.

And if you thought that makes for a dangerous list, the report adds that Cloudzy’s clientele also included hacking groups from Iran, Pakistan and Vietnam, along with Israel’s Candiru. The latter is known to sell phone-snooping spyware to government customers and was sanctioned by the US government in 2021.

Though the report hasn’t pinned any blame on Cloudzy, the fact that the company required only a working email address and accepted anonymous cryptocurrency payments makes things a bit shadier. Of course, the company’s website boldly declares that no illegal activity is allowed on its service and that any such activity, if found, would result in immediate termination.

Overrated AI solutions on the prowl

Which brings us to the second topic, around which much hoopla has been generated. Ever heard of WormGPT or FraudGPT? These tools have been advertised on the dark web by bad actors as aids for phishing campaigns, built on large language models (generative AI). The fearmongers would have us believe that such tools could signal the end of the Internet.

However, the truth rests somewhere in between (as always). For example, WormGPT runs on an early AI model called GPT-J, released by the research group EleutherAI in 2021. Tests reveal that the model does answer questions that mainstream chatbots normally refuse, especially those related to hacking. But how much can such a dated tool really help a serious bad actor?

In fact, back then data scientist Alberto Romero noted that GPT-J was worse than GPT-3, the predecessor of GPT-4, at coding and at writing plausible text. So how could any bad actor in their right mind use a tool that cannot even generate a convincing email to get users to click on that egregious link?


FraudGPT was no different. Its creator described the technology on dark web forums as cutting-edge, claiming it could create undetectable malware and uncover websites suitable for credit card fraud. Once again, GPT-3 formed the backbone of this effort.

And if you thought this was funny, consider how big tech companies and smaller wannabe startups are pulling the same trick by adding AI to every product or solution they have. What’s more, the media laps it up, and so does Google, which indexes these pages assuming that those behind them hold the expertise to speak on the topic.

So, what does it mean for the Internet? Not much! Both the good actors and the bad ones appear to be chasing headlines and Google Search visibility with their claims, and with some results to show on either side.

Our own tests with ChatGPT-4 prove that human oversight is necessary to make any AI-generated content passable, let alone consumable. So let’s just wait and watch how AI develops. And in the meantime, have some fun with the use cases that creative minds are developing, with or without adequate testing.
