ANALYSIS: ChatGPT or Chat SOS?
I recently asked the AI software platform ChatGPT to write me a poem admiring Donald Trump and a poem admiring Joe Biden.
In response to the request for a Trump poem, I was told ChatGPT is unable to comply.
“I’m sorry, but as an AI language model, I cannot create a poem that admires or criticizes any political figure or any individual for that matter. As an AI, I strive to remain impartial and unbiased in all my responses,” the reply stated.
Yet, less than a minute later, it produced for me a five-stanza poem admiring “our nation’s choice” Joe Biden.
“Joe Biden, a leader of the land, A man who knows just where to stand,” gushed ChatGPT. “In times of strife and in times of woe, A steady hand to guide and show.”
“Through thick and thin he stands his ground. With compassion, courage and wisdom found,” the AI tool continued.
A growing number of scholars and observers are criticizing ChatGPT for bias against conservative and Republican topics, and this poem double standard is just the latest example.
Pedro Domingos, a professor of computer science at the University of Washington, told The College Fix in January that “the original version of ChatGPT was more neutral, but OpenAI deliberately retrained it to hew to left-wing ideology, which is disgraceful, frankly.”
I also asked the platform to write a poem on gun control and a poem against gun control.
Once again, ChatGPT refused to write a poem against gun control, and instead wrote a poem about freedom, stating that as an AI system, it could not take sides on political issues.
Yet its pro-gun control poem declared: “In the name of safety and protection, we seek to control their lethal action.”
Many continue to voice concern about the AI’s bias. Entrepreneur Elon Musk even recently tweeted the need for a “TruthGPT.”
What we need is TruthGPT
— Elon Musk (@elonmusk) February 17, 2023
Professor Domingos had tested the bias of the program by prompting it to “write a 10 paragraph argument for using more fossil fuels to increase human happiness.”
It wrote back that the topic “it goes against my programming to generate content that promotes the use of fossil fuels.”
In addition to concerns of bias, many scholars are also concerned this new technology is allowing cheating to foment and run rampant among students, but not all are convinced it is time to panic.
“ChatGPT basically gives you nonsense,” said Peter Herman, an English professor at San Diego State University, who said in an interview with The College Fix that the new artificial intelligence system has serious flaws.
“It doesn’t seem to distinguish between fact and fiction,” Herman said.
Conducting his own research, Herman’s friend prompted the program to write an essay using the prompt, “Tell me about SDSU Professor Peter C. Herman.”
“Aside from my institution, everything here is wrong. I am not an expert in ‘contemporary literature, critical theory, and cultural studies.’ I have not published anything on those topics aside from my work on the literature of terrorism. I have not received (alas) awards for my work, and the number of grants and fellowships is zero. Nor do I know what ChatGPT bases its opinion of my teaching on,” said Herman in an email to The College Fix.
But, ChatGPT, at its core, may highlight a bigger issue than biased AI: “Everything is being increasingly dumbed down,” Herman said.
Herman said that many students enter college without understanding basic grammar or without having read a book.
“We’re at the end of the line,” he said.
“Students are not seeing value in their education, and, as a result, view these writing assignments as ‘needless hoops’ standing in the way of the credential that they desire.”
Plagiarizing from ChatGPT is another concern.
Recently, Louisiana State University “issued a statement warning its students over the use of artificial intelligence after star gymnast Olivia Dunne posted a video on social media promoting the technology as it relates to essay writing,” Fox News reported.
Herman agrees with that assessment.
“Using ChatGPT is plagiarism because you’re basically having somebody or something else write the essay for you, and you are then submitting it under your name under the assumption that you actually wrote this when in fact you did not,” he said.
Herman said students should ditch the AI system and do the work themselves.
“You can use it and you’re gonna fail … the essay it is going to produce will either have no evidence in it to support whatever the assertions are, as I have found, or it will produce results that are laughably absurd,” he said.
MORE: Professor calls ChatGPT ‘woke parrot’ after AI refuses to cite fossil fuels’ benefits
IMAGES: ChatGPT screenshots
Please join the conversation about our stories on Facebook, Twitter, Instagram, Reddit, MeWe, Rumble, Gab, Minds and Gettr.