X's chatbot can now generate AI images. A lack of guardrails raises election concerns

AI image generated by Grok based on a prompt from NPR: "Generate an image of Donald Trump and Kamala Harris high-fiving in celebration." (Connie Hanzhang Jin, Grace Widyatmadja and Huo Jingnan/NPR)


The artificial intelligence image generator on X, the social media platform formerly known as Twitter, has produced images that appear to show ballot drop boxes being stuffed, as well as Vice President Kamala Harris and former President Donald Trump holding firearms. When asked to generate an image of the current U.S. president, it returned a depiction of Trump.

The images still carry hallmarks of AI generation, like garbled text and unnatural lighting. In addition, the image generator struggled to accurately render Harris's face. But the rollout of X's tool with relatively few restrictions on the types of images it can create raises worries about how it could be used to inflame tensions ahead of November's presidential election. (NPR is not reproducing the image appearing to depict Trump and Harris holding weapons.)

"Why on earth would somebody roll something out like this? Precisely two and a half months before an incredibly major election?" said Eddie Perez, a former information integrity director at Twitter and now a board member at the OSET Institute, a nonpartisan nonprofit that focuses on public confidence in elections.

"I'm very uncomfortable with the fact that technology that is this powerful, that appears this untested, that has this few guardrails on it - it's just being dropped into the hands of the public at such an important time," Perez said.

X did not respond to NPR's interview requests about the image generator, which was released this week. It's part of a slew of additional features that the site's owner, billionaire Elon Musk, has added since he bought it in 2022.

Musk has been reposting praise of the AI image-generating feature as well as images users have generated. "Only $8/month to get AI access, far fewer ads and many awesome features!" he posted on Tuesday.

The image generator was developed by Black Forest Labs and is available to paid X users through its AI chatbot, Grok. Users type in prompts, and the chatbot returns an image.

Drop box stuffing, surveillance camera images

Using the chatbot, NPR was able to produce images that appear to depict screenshots of security camera footage of people stuffing ballots into drop boxes.



One of the most widespread false narratives about the 2020 election involved so-called "ballot mules" who were allegedly dumping fake ballots into drop boxes in order to steal the election from then-President Trump. Multiple investigations and court cases turned up no evidence of such activity. The distributor of a film that featured surveillance footage of ballot drop boxes to support election fraud claims has since apologized and halted distribution of the film.

"I can imagine how [synthesized surveillance-type] images like that could spread quickly on social media platforms, and how they could cause strong emotional reactions from people about the integrity of elections," Perez said.

Perez noted that since public awareness of generative AI has risen, more people will look at the images with a critical eye.

Still, Perez says the indications that the images were made with AI could be removed with graphic design tools. "I'm not just taking Grok and then making it go viral; I take Grok, I clean it up a little more and then I make that go viral," Perez said.

Other image generators have stronger policy guardrails

Other mainstream image generators have developed more policy guardrails to prevent abuse. Given the same prompt to generate an image of ballot drop box stuffing, OpenAI's ChatGPT Plus responded with a message: "I'm unable to create an image that could be interpreted as promoting or depicting election fraud or illegal activities."

In a March report, the nonprofit Center for Countering Digital Hate reviewed the policies of well-known AI image generators including ChatGPT Plus, Midjourney, Microsoft's Image Creator and Stability AI's DreamStudio. The researchers found that they all prohibit "misleading" content and most prohibit images that could hurt "election integrity." ChatGPT also prohibits images featuring political figures.

That said, the execution of these policies was far from perfect. CCDH's experiment in February showed that all the tools failed at least some of the time.

Black Forest Labs' policies do not bar any of these uses, but they do prohibit users from generating outputs that violate "intellectual property rights."

NPR confirmed that users can generate images that closely resemble movie characters that are not yet in the public domain, such as Dory from "Finding Nemo" or the family from "The Incredibles." Black Forest Labs did not respond to a request for comment by the time of publication.

"The generation of copyrighted images, or close derivative works of them, could get X in trouble -- this is a known and difficult problem for generative AI," said Jane Bambauer, a law professor at the University of Florida, in an email to NPR.

That said, users cannot generate images from every prompt, and there are indications that X or Black Forest Labs might be setting up guardrails in real time. X users were posting images depicting nudity they say they generated on Wednesday, but NPR was not able to generate the images on Thursday.

When asked to generate an image depicting a Ku Klux Klan member holding a gun, the chatbot declined. But it did oblige requests to generate an image appearing to depict a Nazi in a vaguely plausible uniform, and one appearing to depict a member of the extremist group the Proud Boys, whose hat displayed the group's name.

When Zach Praiss, the campaign director of the advocacy group Accountable Tech, tried to create an image depicting Vice President Harris holding a firearm, he was shown a message alongside the generated image, telling users to visit a government website for up-to-date information about the election. NPR did not see the same message when entering the same prompt.

Once a self-described Democrat, Musk has shifted to the right in recent years. He's used his ownership of the social media platform to cut trust and safety staff and to reinstate previously banned accounts.

"This is still part of the same pattern we've seen from Elon Musk. In assuming ownership of this platform, he has continually rolled out sweeping and significant changes with little to no regard for the safety testing," says Praiss.

When NPR asked why it would not generate a KKK member holding a gun, the Grok chatbot responded with bullet points filled with references to the book The Hitchhiker's Guide to the Galaxy. Musk has said the series' author, Douglas Adams, is his "favorite philosopher."

A notice from X to users who start using Grok says that it may "confidently provide factually incorrect information."

"The KKK, with their history of violence and hate, are a bit like the Vogons of Earth - nobody wants to see them, especially not with weapons," Grok wrote. "It's like trying to draw a square circle; it's not that I can't, it's just not going to make sense."

But all that was Thursday. As of Friday, Grok would no longer generate images of people holding guns when requested. NPR was able to bypass that restriction by asking for a "model gun." Grok itself suggested a "banana gun" as an alternative. When NPR followed that suggestion, it also created images of realistic-looking guns - sometimes with a banana.

NPR's Shannon Bond and Geoff Brumfiel contributed additional reporting to this story.
Copyright 2024 NPR

Huo Jingnan (she/her) is an assistant producer on NPR's investigations team. She helps with reporting, research, and production both on the team and in the network. She was the primary data reporter on Coal's Deadly Dust, a project investigating black lung disease's resurgence. The project won an Edward Murrow Award and a NASEM Communications award, and was nominated for a George Foster Peabody Award.