Executive Summary

Practical Effects surveyed 300 internet users about artificial intelligence and web content. Each respondent read several texts, each either written by a human or generated with AI. Passages included selections from journalism, medicine, fiction, and marketing copy. Readers answered several questions about each passage, including its authorship, its credibility, and the words they would use to describe it.

  • 53% reported concern about the use of AI to generate web content, driven by ethics, accountability, and social effects.
  • Accuracy, authenticity, and accountability concerns fuel skepticism: 49% reported concern about increased misinformation and disinformation, 44% about an erosion of authenticity, and 34% about a lack of accountability for harmful material.
  • Respondents had difficulty differentiating between content written by humans and content generated with AI. Of those who read an excerpt from the Jules Verne classic Journey to the Center of the Earth, 27% believed it to be AI-generated.
  • Human-crafted content was rated 10% more credible than content generated with AI. The credibility gap for medical writing was most pronounced: an excerpt from an expert-written article was 21% more credible than an AI alternative.

An Erosion of Authenticity

OpenAI launched large language model interface ChatGPT in 2022

Artificial intelligence in media is controversial. Humans have been writing for more than five thousand years, beginning with cuneiform in ancient Mesopotamia. We communicate our knowledge, our transactions, our desires, and our commitments in writing. Now, for the first time in the history of our species, you cannot be certain the words you are reading were written by another human being.

Practical Effects is a writing and research firm, and we set out to study artificial intelligence in our industry. What are consumers’ expectations? What drives confusion and skepticism?

To reveal true reactions we asked respondents to read and respond to passages of text. Some were generated with the large language model interface ChatGPT, while others were sourced from National Geographic, WebMD, Jules Verne’s Journey to the Center of the Earth, and our own Practical Effects content team. Participants responded to questions on each passage’s credibility and quality, shared primary descriptive associations, and indicated whether they thought the piece was written by a human author or generated with AI. General questions on the state of artificial intelligence followed the test.

Many readers are concerned about the propagation of AI content on the internet. 53 percent reported concern about the use of AI to generate web content, citing the greater circulation of misinformation and disinformation, an erosion of authenticity, and a lack of accountability. These concerns are justified: very few readers in our study could identify when content was AI-generated.

In this report we explore how readers react to content generated with artificial intelligence, and discuss how brands can address consumer concerns about authenticity with a renewed commitment to human creativity. Consumers are not Luddites, and in fact are excited by the technological frontier. At the same time they harbor deep and valid concerns about the downstream effects of AI on the job market, the credibility of information they read on the internet, and an erosion of authenticity in our communities.

Reading Between the Lines

“AI cannot tell the difference between fact and or fiction so we will always need humans,” one respondent told Practical Effects. “AI should be rarely, if ever, used in creative projects,” wrote another. 

These readers were among the 75 percent of survey participants who found a passage about the 2018 Camp Fire in California credible. That passage was generated with ChatGPT, and it is not an outlier in our research.

All passages, whether generated with AI or authored by humans, were rated as highly credible. On average, 54 percent of readers enjoyed human-authored passages, the same proportion that enjoyed comparable passages generated with AI.

Jason Allen's AI-generated art won a prize at the Colorado State Fair

Practical Effects found that readers had difficulty distinguishing human writing from passages generated with AI, and that this inability to differentiate is coupled with fear and worry about the impact of artificial intelligence on our society.

On average, passages written by real human authors were rated 10 percent more credible, and texts generated with ChatGPT were more likely to be described as “long-winded” and “boring.” These important differences, however, had no relationship with readers’ ability to identify passages generated with AI: only 18 percent of readers, on average, correctly identified passages from ChatGPT.

“It will progress to a point where AI writing will be indistinguishable from human works,” a participant wrote. It appears we are already there.

Consumers are frustrated with the spread of AI-generated content online, and this frustration appears to stem from an inability to differentiate authored content from material put together by Generative AI. 

It is also clear that readers find content written by fellow humans more trustworthy and engaging, even when they cannot identify why. 

These findings have important implications for business leaders: reliance on artificial intelligence to create content is likely to disturb consumers and lower the credibility of brand messaging. Further, more than half of readers believe brands should rely on humans, not AI, to create content authentically. A human voice has the power to elevate a brand, communicate a message credibly, and parse the real from the unreal. As one respondent put it: “AI don’t live in the real world: people do.”

An Uncertain Frontier

Practical Effects explored not only readers’ revealed perspectives on AI-generated content, but also their perspectives and opinions about artificial intelligence generally. The technology is immensely powerful, exciting, and straight from the realm of science fiction. Who among us didn’t grow up wondering at the possibility of an intelligence as sophisticated as Data on Star Trek: The Next Generation? People are rightly excited about pushing the frontiers of technological possibility; they also expressed valid worries about misinformation and job security across sectors.

“I believe that artificial intelligence is going to make the world a better place,” wrote one respondent, among the 5 percent of readers not at all concerned with AI-generated content on the internet. 

“I think AI is an amazing tool that I have been waiting to see come to life since I was a little kid, and I enjoy it,” another enthusiast wrote. “The only problem I have is the misinformation… you have to be smart enough to know what's true and what's not. Check your facts: always check your facts.”

Misinformation is a problem. 49 percent of readers indicated that the spread of misinformation and disinformation is a top concern as AI-generated content propagates online. “Who will be held accountable for harmful or biased information that causes problems for society?” one reader asked. 

Social hazards ranked top among respondent concerns about artificial intelligence. “AI-created texts lack human emotion and sound too dry and polished,” one reader shared. 44 percent worry about the loss of a human touch in media and an erosion of authenticity. “There are enough humans around that we don't need computers to do the things that require human emotions,” wrote another.

“I believe artificial intelligence will eventually threaten the availability of jobs for humans in a variety of fields,” wrote one participant. 29 percent fear AI-fueled job displacement. “Companies should hire humans instead of using artificial intelligence in order to increase job opportunities,” another emphasized.

Intellectual property rights are also a key point of discussion for readers. “AI is not writing: it is plagiarism on a grand scale,” declared one participant. 31 percent largely agreed, citing concerns about the ethics of using copyrighted material in AI models.

On the whole, consumers have a nuanced yet cautious perspective on AI. They expect brand leaders and public officials to take a similarly cautious stance. “I believe AI has the potential to improve efficiency and productivity for the general good,” shared one respondent. “I also believe that the use of AI should be regulated and subject to the use of a strict code of ethics, guidelines, protocols and controls.” 

People are excited about the new capabilities artificial intelligence can unlock for business and society, yet they are clear that leaders must prioritize social cohesion, authenticity, and job stability.

Conclusions & Implications

Practical Effects uncovered substantial excitement and apprehension about the rapid development of artificial intelligence in media, finding that content authored by real human writers is 10 percent more likely to be perceived as credible. Authenticity matters, and consumers expect brands to prioritize human creativity and human writers, even as they are eager to discover the full capabilities of new technologies.

Though it can appear cheaper, artificial intelligence is not more effective than human writers. It does not produce better results in terms of enjoyment or qualitative evaluation. On the contrary, content produced by ChatGPT was rated less credible, lower quality, and less informative than traditional content.

Wise marketers will invest in human creativity and authentic content. Sentiment around artificial intelligence leans toward concern, our statistics reveal lower performance on communication metrics for AI-generated content, and consumers expect business leaders to protect them from job insecurity across sectors.

Continue the Conversation

What's your take on the revolution in Generative AI? Email hello@practical.nyc or reach out to your account executive to continue the conversation. Download a PDF of this study, including exhibits and data, here.