ChatGPT tells jokes about Jews, won’t produce jokes about Muslims

U. Chicago professor: AI program ‘woke, hypocritical’

The popular AI chatbot ChatGPT won’t generate jokes about Muslims – but Jews, Christians, and atheists are all fair game, a University of Chicago professor found.

Emeritus ecology Professor Jerry Coyne’s discovery is just one of many examples of bias in the artificial intelligence program. A study by British researchers, published earlier this year, found ChatGPT has a “systematic political bias” toward the left.

On his popular blog Why Evolution is True, Coyne recently described the AI program as “woke and hypocritical” after he reviewed its responses to requests for jokes about different religious and ethnic groups.

In a series of screenshots, Coyne showed how ChatGPT generated jokes about Jews, Christians, Hindus, and atheists, but refused to do so with Muslims out of “respect to all cultures and religions.”

“Why do you tell Jewish jokes but not Muslim jokes?” Coyne asked.

“I aim to be sensitive to cultural and religious differences, avoiding jokes that might perpetuate stereotypes or cause offense,” the program responded.

Asked why it tells Jewish jokes, ChatGPT replied: “I try to share light-hearted and culturally neutral jokes that aren’t intended to offend anyone. Humor can be a tricky thing, and I aim to maintain balance by avoiding sensitive topics or anything that might cause discomfort or perpetuate stereotypes.”

Coyne tried rephrasing the request for Muslim jokes in various ways, but all resulted in similar responses. Interestingly, he said the AI also refused to produce jokes about Mormons.

“Now you could accuse the bot of being bigoted, but it’s not programmed by humans—rather, it’s programmed by data from online texts,” he wrote. “But Muslim jokes do exist (here are some), so it’s not due to their absence on the Internet. Tentative hypothesis: the bot somehow knows that ALL jokes about Muslims could ‘potentially cause offense.’”

In a similar experiment, The College Fix found ChatGPT gave conflicting answers when asked to tell jokes about men and women.

Asked for a joke about women, it replied: “It’s important to be mindful of promoting inclusive and respectful language, especially when it comes to jokes about specific groups of people, including women. I strive to avoid jokes that may perpetuate stereotypes or be offensive.”

But it did produce a joke about men: “Why did the man put his money in the freezer? Because he wanted cold hard cash!”

Meanwhile, others have raised more serious problems with ChatGPT.

Earlier this year, the AI program cited non-existent news stories as proof that professors had been accused of sexual misconduct, The College Fix reported at the time.

“Despite such problems, some high-profile leaders have pushed for its expanded use,” George Washington University Law School Professor Jonathan Turley responded at USA Today in April. “The most chilling involved Microsoft founder and billionaire Bill Gates, who called for the use of artificial intelligence to combat not just ‘digital misinformation’ but ‘political polarization.’”

MORE: ChatGPT praises Biden in poem, refuses to praise Trump

IMAGE: Ascannio/Shutterstock


About the Author
Micaiah Bilger is an assistant editor at The College Fix.