News & Analysis

China quick off the mark with AI Rules

While the rest of the world is still debating the need for rules around generative AI, China has gone ahead and brought out a first draft of the rules it intends to follow

Generative AI has been the flavor of the year ever since the world woke up to ChatGPT last November. Expert opinion is divided down the middle over its future development and the use cases that may emerge. However, China appears to have stolen a march by defining how this fast-changing technology can be used and what is strictly a no-go area.

Of course, the fact that there's little or no room for debate or discussion in China might have made things far easier than in the rest of the world, where the folks at OpenAI even offered $100K for help with framing rules. The Beijing administration has simply put together a set of rules that includes a licensing regime for service providers.

These rules were brought out on Thursday (July 13) by China's cyberspace regulator. They include provisions for both service providers and those that create APIs. Of course, questions around stifling innovation don't arise as yet, given that most of the country's tech has government involvement, directly or indirectly. It's just a question of turning a knob somewhere!

The authorities were clearly aware of the global debate around this topic and hence added a note that the rules aim to "balance development and security", though there's silence on what could tip the scales one way or the other. The rules are quite simple as they stand, though one cannot really tell what's in store a few months down the line.

For starters, the new rules prohibit everything from pornography to terrorism and racism across any content form or format, especially content that "threatens" the country's national security. (Now, where have we heard that one before, and that too closer home?) There's also a warning around algorithms that could influence public opinion.

Such algorithms need to be registered with the Chinese authorities, and generative AI providers also need to obtain an administrative license as per the statutes, though for now there's no reference to who might have to apply for these licenses. That said, not everything in the rules is about enabling the authorities to snoop around.

The rules are quite stringent when it comes to user protection: algorithms must not discriminate based on factors such as ethnicity, age, gender, profession and health, and cannot be used for anti-competitive actions by service providers or users. There are also provisions to create anti-addiction mechanisms for child users, specifically around video games.

The onus of preventing misuse rests with the service providers. Responsibility for any use of generative AI to produce illegal content rests with them, as do the subsequent actions of fixing the algorithms and reporting the issues to the relevant authorities. Regulators also have the right to know the specifics of each generative AI model, such as training data, size, tags, etc.

Given that AI development in China follows a top-down approach instead of the bottom-up one that the rest of the world seems to be working with, the rules also provide for the creation of a public training-data platform as well as compulsory sharing of computing power. There's mention of a state-backed centralized platform for allocating public cloud resources.

What China did yesterday is the culmination of a process it initiated back in December 2022, when the Cyberspace Administration of China brought out regulations on deep synthesis technology, defining it as tech that uses deep learning, virtual reality, and other algorithms to generate text, images, audio, and virtual scenes. Those regulations came into force on January 10 this year.

Most of the rules, barring a few that speak of user security, are in line with China's real-time verification apparatus, which makes anonymity on the Internet a virtual impossibility. The country forces users to link their online accounts to phone numbers registered with their government IDs. Did someone say Aadhaar?
