News & Analysis

A Meta Blunder Over AI Image Labels

With each passing day, the decibel levels around AI rise, and so do the goof-ups around it

Looks like the entire technology business has got its knickers in a twist. On the one hand, a bunch of geeks in an office are putting together upgraded language models to create text, images and videos, while on the other, another bunch is finding ways to inform users that such content is actually not generated by humans.

Of course, you may argue that the latter is ethically the right move, but when both these sets of geeks owe allegiance to the same Big Tech company, you wonder about the daftness inherent in these moves. The latest such instance comes from Meta, which first tagged images “Made with AI” and has now changed the tag to “AI Info”.

A rose by any other name might smell as sweet, but one isn’t sure that a tag with a couple of words changed makes the idea any less daft. Meta changed the tags because photographers complained that the company was applying the label to real photos on which they had used only basic editing tools.

What did Meta do, and then undo?

User feedback, and the general chaos that Meta rustled up with its “Made with AI” tag, highlight that such fast footwork seldom results in good outcomes. In this case, the question that arises is: how much AI is enough for a platform to consign a good photograph to the non-creative dustbin?

If this sounds asinine, what Meta (owner of Facebook, Instagram and WhatsApp) has done to fix the issue appears even more so. The company will now tag images as “AI Info” across all its apps. Why so? Because the earlier tag didn’t make clear to users whether an image was created entirely by AI or merely edited with AI-powered tools.

“Like others across the industry, we’ve found that our labels based on these indicators weren’t always aligned with people’s expectations and didn’t always provide enough context. For example, some content that included minor modifications using AI, such as retouching tools, included industry standard indicators that were then labeled ‘Made with AI’,” the company said in a blog post.

How daft can daftness get?

Sounds good? Well, not entirely, as the company doesn’t say anything about the underlying technology it uses to detect AI in pictures and label them. It relies on metadata standards such as C2PA and IPTC, which carry information about the AI tools involved. So, if you’re touching up that wedding album using Adobe’s Generative Fill, Meta may tag those photos once again.
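For a sense of how such metadata-based detection can work in practice, here is a minimal sketch in Python. It is not Meta’s pipeline and not a proper C2PA manifest verification; it simply scans a file’s raw bytes for IPTC digital-source-type terms (and the presence of a C2PA manifest) that are commonly used to signal AI involvement. The file name and the exact marker list are illustrative assumptions.

```python
# Minimal sketch: scan an image file's embedded metadata for terms that
# signal AI involvement. This is a crude byte search, not real C2PA
# verification; the marker list and file path are illustrative only.

from pathlib import Path

# IPTC digital-source-type terms and a C2PA marker (illustrative list).
AI_MARKERS = [
    b"trainedAlgorithmicMedia",               # fully AI-generated content
    b"compositeWithTrainedAlgorithmicMedia",  # AI used during editing/compositing
    b"c2pa",                                  # presence of a C2PA manifest
]

def ai_indicators(image_path: str) -> list[str]:
    """Return any AI-related markers found in the file's raw bytes."""
    data = Path(image_path).read_bytes()
    return [m.decode() for m in AI_MARKERS if m in data]

if __name__ == "__main__":
    hits = ai_indicators("wedding_photo.jpg")  # hypothetical file
    print("AI Info" if hits else "no AI indicators found", hits)
```

A platform taking this approach would flag any file carrying such markers, which is exactly why a photo merely retouched with an AI-powered tool can end up labelled the same way as a wholly synthetic image.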

Which means that, for now, the labels don’t matter much: Meta is merely hoping the new wording will help people understand that a tagged image isn’t always created by AI. Brilliant, isn’t it? The company expects this change to put things in better perspective for users even as it works with the industry to improve the process.

Which obviously raises the question: why create and implement a system that you know isn’t foolproof? Experts are unanimous that the new tag won’t solve anything – least of all the problem of totally AI-generated pictures going undetected. What’s more, it won’t tell users how much AI has been used on an image.

Ethical consideration or regulatory fears?

In our newsroom, we believe such efforts from Big Tech companies only highlight their fear of the regulations that will come calling in the field of AI, especially around its ethical and creative aspects.

Readers may recall that the European Commission has already ruled that Meta’s “pay or consent” offer on Facebook and Instagram in Europe doesn’t comply with its Digital Markets Act. The binary choice offered by Meta “forces users to consent to the combination of their personal data and fails to provide them a less personalized but equivalent version of Meta’s social networks,” the commission said in a press release.

Failure to abide by these regulations could prove extremely costly for the Big Tech giant, as fines could touch 10% of its global annual turnover, rising to 20% for repeat offences. What is even more important is that Meta could be forced to abandon a business model that demands users agree to surveillance advertising as the price of entry.

Already under fire over anti-competition measures, Meta wouldn’t want to be in the bad books of regulators when it comes to the indiscriminate use of AI. Which is why this sudden move to change its image tags highlights how the first-mover advantage with technology is a double-edged sword: in the absence of industry standards, the result can be a meme-fest, and that is but a small embarrassment.

The only way around this challenge is for the industry to gather around a table and create guidelines that aren’t unfair to creators but punish copycats. In fact, this escapade with images also raises the question of whether makers of AI tools should give their users an upfront, statutory declaration about the perils of using technology to hide a lack of creative thinking.