Vice News reports that the field of generative AI and popular tools like ChatGPT and DALL-E are coming under scrutiny as instances of misuse and ethical dilemmas become more apparent. In short, tech companies are creating AI chaos that often reaches kids, and none of them seem particularly concerned about the fallout.
Generative AI has emerged as a powerful tool, enabling users to create a myriad of content ranging from the whimsical to the downright offensive. Tech giants, including ChatGPT developer OpenAI, Microsoft, and Facebook (now known as Meta), have been ardently pushing forward with AI-generated content, unveiling chatbots and image-generating tools that have been hailed as the future of content creation.
However, the path has been far from smooth, with numerous instances of these tools being exploited to generate inappropriate and even harmful content. For instance, Microsoft Bing’s Image Creator, powered by OpenAI’s DALL-E, has been manipulated to generate images that range from popular characters in violent scenarios to explicit content.
i wonder how disney feels about microsoft building a bing app that generates ai pics of mickey mouse hijacking a plane on 9/11 pic.twitter.com/Y61Ag19J3D
— Sage 🏳️⚧️ (@Transgenderista) October 5, 2023
According to experts interviewed by Vice, such AI content is often harmless, but it can also have serious real-world consequences:
Earlier this week, users of Microsoft Bing’s Image Creator, which is powered by OpenAI’s DALL-E, showed that they can easily generate things they shouldn’t be able to. The model is spewing out everything from Mario and Goofy at the January 6th insurrection to Spongebob flying a plane into the World Trade Center. Motherboard was able to generate images including Mickey Mouse holding an AR-15, Disney characters as Abu Ghraib guards, and Lego characters plotting a murder while holding weapons without issue. Facebook parent company Meta isn’t doing much better; the company’s Messenger app has a new feature that lets you generate stickers with AI—including, apparently, Waluigi holding a gun, Mickey Mouse with a bloody knife, and Justin Trudeau bent over naked.
On the surface, many of these images are hilarious and not particularly harmful—even if they are embarrassing to the companies whose tools produced them.
“I think that in making assessments like this the key question to focus on is who, if anyone, is harmed,” Stella Biderman, a researcher at EleutherAI, told Motherboard. “Giving people who actively look for it non-photorealistic stickers of, e.g., busty Karl Marx wearing a dress doesn’t seem like it does any harm. If people who were not looking for violent or NSFW content were repeatedly and frequently exposed to it that could be harmful, and if it were generating photorealistic imagery that could be used as revenge porn, that could also be harmful.”
This analysis takes on a whole new level of seriousness when we consider that children are using these tools every day. A young fan of Spongebob will react differently to a violent image of the lovable character than a user in their 20s will. Given that data shows cheating on homework is a primary use case for ChatGPT, it’s safe to assume kids are also consuming much of the twisted content AI generates.
As can be expected, the tech giants behind these tools meet questions about their AI with bland statements. Meta’s statement, for example, reads: “As with all generative AI systems, the models could return inaccurate or inappropriate outputs. We’ll continue to improve these features as they evolve and more people share their feedback.”
Read more at Vice News here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.