News & Analysis

Online Child Safety: UK Model Works for India

Barely days after President Biden signed a bill to protect kids, the UK has followed suit

Barely hours after we espoused the case for India’s cyber security agencies to prioritize children’s online safety through new regulations, including statutory compliance obligations for social media platforms, the United Kingdom’s regulator for online services announced a series of steps to crack down on Instagram and other such platforms. 

The new Children’s Safety Code from Ofcom (the UK’s regulator) will push the likes of Instagram, YouTube and 150,000 other services to run better age checks, filter and down-rank content, and apply some 40 other measures to block harmful content around suicide, self-harm and pornography for users under 18. 

The code is currently in draft form and open to feedback till July 17, though Ofcom is expected to publish the final version only next year, after which it will be enforced. As with the recent REPORT Act, signed into law by President Biden, these new rules would force a major shift in how internet companies approach online safety. 

Ofcom pooh-poohs the “burden on tech firms” argument

The British government has sidestepped critics who argue the law will saddle tech firms with crippling compliance costs while making it harder for citizens to access information. In fact, it has proposed fines of up to 10% of global turnover for violations, with criminal liability for senior managers in certain scenarios. Messrs. Rishi Sunak and Co. do mean business! 

Per the draft, the focus is on robust age checks, with age verification and estimation technologies applied to a wider range of services. Photo-ID matching, facial age estimation, and reusable digital identity services would replace self-declarations of age and contractual restrictions on the use of services by children. 

Additionally, the draft defines how specific content not suitable for children is to be handled. Some of it, like self-harm and pornography, will have to be actively filtered away from minors, while content related to violence will be down-ranked. Ofcom also expects firms to consider the volume and intensity of what kids get exposed to when designing safety interventions. 

With the Rishi Sunak administration having secured passage of the Online Safety Act last winter, Ofcom is now charting out its implementation strategy, which includes drafting detailed guidance and exercising enforcement powers. The regulator is working with some large social media platforms, such as Facebook and Instagram, to help them design compliance plans. 

What steps is Ofcom taking around online CSA?

For starters, Ofcom says that services must prevent children from encountering the most harmful content relating to suicide, pornography, self-harm and eating disorders. They must also reduce children’s exposure to content related to violence, hate and abuse, bullying, and material that promotes dangerous challenges. 

It called for age checks to comply with the above requirements and confirmed that in some cases the entire website or app should be out of children’s reach, while in others it could mean age-restricting parts of it. Per the law, all websites need to take child safety measures, with age verification or age estimation technology becoming part of the content audit. That could mean proving one’s age each time one browses, but why not? 

To counter past trends where tech companies have leaned on “not many minors are watching” arguments, the UK regulator has also added a follow-on assessment measure that requires these companies to assess whether “a significant number” of kids use the service and, if so, implement tougher restrictions. 

Another focus area for Ofcom is content moderation: it suggests that all user-to-user services (where users interact via multiple mediums, such as chat) must set up content moderation systems that ensure swift action against harmful content and its perpetrators. It also suggests the use of AI for moderation at scale and is open to innovation in this area. 

The regulator also brings search engines into the ambit of safe content noting that they are expected to take similar action as social media platforms and “where a user is believed to be a child, large search services must implement a ‘safe search’ setting which cannot be turned off and must filter out the most harmful content.”

Just policies aren’t enough: Ofcom puts the onus on tech firms

Besides clear policies on permitted content types and their prioritization, Ofcom wants these to be open for review at all times, along with a named person within each company responsible for compliance with child safety. It noted unequivocally that large platforms pose the greatest safety risks. 

As a result, the most extensive compliance expectations also rest on these platforms. “Services cannot decline to take steps to protect children merely because it is too expensive or inconvenient — protecting children is a priority and all services, even the smallest, will have to take action as a result of our proposals,” it warned.

Finally, the regulation also suggests additional measures to support children and the adults who care for them: clear and accessible terms of service, and ensuring that children can easily report content or register complaints. It also suggests that children be provided with a set of support tools that give them better control over their online interactions. 

These include declining group invites, blocking and muting user accounts, disabling comments on their posts, and so on. All of this points to the regulator wanting tech companies to be proactive about child safety. It also gave fair warning that Ofcom would use its powers if they did not get their houses in order. 

“We are clear that companies who fall short of their legal duties can expect to face enforcement action, including sizable fines,” Ofcom warned in a press release, calling its efforts a “reset” for children’s online safety. With the US regulations and Ofcom’s latest initiative as templates, India’s lawmakers should find it that much easier to make the Internet safer for the millions of children who access it daily for education and entertainment.