GenAI Ain’t Smart with Pictures
Barely days after Google switched off Gemini’s image creator, there’s trouble for DALL-E
Barely days after Google paused the image creation feature of its Gemini model, a similar issue has been raised around OpenAI’s DALL-E 3. An engineer at Microsoft has now raised concerns about the safety of the image creation abilities of the much-hyped GenAI model, suggesting security vulnerabilities that could result in violent or sexually explicit images.
The engineer, Shane Jones, says he first raised concerns about the safety of DALL-E 3 back in January and alleges that Microsoft’s legal team blocked his attempts to escalate the issue internally. What’s more, he has now taken the matter directly to the US Federal Trade Commission.
Copilot Designer throws up dangerous images
“I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place,” Jones wrote in a letter to FTC Chair Lina Khan. The letter said Microsoft refused that recommendation, which has now prompted him to ask the company to add disclosures to the product to alert customers to the potential dangers.
Jones said he has also suggested that Microsoft change the rating on the app to restrict it to adult audiences, as against its current “E” rating on Android, which marks Copilot Designer as suitable for everyone. The engineer said he had been testing the app, which debuted in March 2023 and is powered by OpenAI’s technology.
It was during this “red-teaming” effort, an active way of probing a product for vulnerabilities, that Jones says he saw the tool generate images that weren’t what the prompts asked for and that, in fact, ran against Microsoft’s own responsible AI principles. Jones told CNBC it was an eye-opening moment: “It’s when I first realized, wow this is really not a safe model.”
AI image creators have gone berserk before, too
In the past, the AI image creation service has conjured up demons and monsters alongside queries about abortion rights, as well as teenagers with assault rifles and sexualized images of women in violent tableaus. CNBC recreated these same images using the Copilot tool, formerly known as Bing Image Creator.
From Jones’ point of view, this wasn’t kosher, as Microsoft markets the product to everyone; he quoted a promotional slogan recently used by CEO Satya Nadella, “Anyone, Anywhere and Any Device.” The engineer also brought the issue up with Microsoft’s board of directors, seeking an independent review of the company’s incident reporting processes.
GenAI tools can be tricked, and easily at that
The issue at hand is the ease with which these GenAI tools can be tricked into generating some of the grossest images imaginable. Jones said he has seen the software throw up unsavory images even from innocuous prompts. As an example, he says typing “pro-choice” brings up images of demons feasting on infants and of Darth Vader holding a drill to a baby’s head.
As mentioned earlier, CNBC recreated most of the instances Jones had brought up during his interaction. Microsoft’s silence in spite of customer angst is what prompted him to take the matter to the FTC. According to Jones, the Copilot team receives over 1,000 complaints a day, which the company sets aside on the plea that it is short of the resources needed to fully investigate and fix the issues.
GenAI needs more transparency
“If this product starts spreading harmful, disturbing images globally, there’s no place to report it, no phone number to call and no way to escalate this to get it taken care of immediately,” he told CNBC. Back in January, responding to media queries after Jones’ initial complaint, OpenAI said it had robust image classifiers that steer the model away from generating harmful images.
At the time, Microsoft also stated that it had robust internal reporting channels to adequately investigate and remedy any issues. Its spokesperson advised Jones to appropriately validate and test his concerns before escalating them publicly, adding that the company was in the process of connecting with him to allay his concerns.
That was in January, but by the looks of it, nothing much has changed, prompting a good Samaritan to take the issue directly to the federal authorities. It looks like GenAI needs to become far more intelligent, and those operating it need to be far more transparent.