The Robot Reporter’s Lukewarm Welcome Into the Newsroom: Journalism Layoffs and Failed Experiments with AI
It’s not new information that print journalism has steadily declined in popularity over the past two decades. But the rise of digital media platforms, and the notable success of outlets such as BuzzFeed and VICE Media, suggested that trusting in the virality of social media and “meeting people where they are” might bring the public back to the news. When advances in artificial intelligence (AI) coincided with significant layoffs across Big Tech in 2022, however, digital journalism soon felt the effects. Since the beginning of 2023, layoffs have swept through news outlets, including BuzzFeed and VICE Media, which once seemed beacons of hope for the journalism industry. Corporate media executives, meanwhile, began experimenting with AI chatbots, including in-house versions of ChatGPT, to assist journalists with publishing articles. But these experiments have proved rife not just with a lack of creativity but with substantial errors that threaten the reputation and credibility of news outlets.
On April 21, 2023, BuzzFeed News announced that it would shut down entirely, part of broader layoffs across the company. To most, BuzzFeed is the millennial startup known for its black hole of personality quizzes. But to the journalism industry, BuzzFeed News was the future. It married social media and news at a time when the industry was seen as largely out of touch with young people. BuzzFeed News even won the 2021 Pulitzer Prize for International Reporting for exposing the Chinese government’s vast new infrastructure for imprisoning Muslim minorities.
Nearly a week after BuzzFeed’s upsetting announcement, VICE Media, the outlet known half-jokingly for its correspondents vlogging themselves in war zones, canceled its popular TV program “Vice News Tonight” and laid off more than 100 of its roughly 1,500 employees. On May 15, VICE Media filed for bankruptcy. This past month, Southern California Public Radio joined The Athletic (the New York Times’ sports subscription site), the LA Times, Dot.LA, and Morning Consult in a series of layoffs and closures. NBC News and MSNBC laid off nearly 75 staffers across divisions. The Washington Post has been bracing for layoffs this quarter after a surprise January meeting between senior staffers and Amazon founder and Post owner Jeff Bezos, whose “deep pockets and investments in technology helped the paper move towards a digital future.” The trend sweeping across news outlets is worrying, and it is no coincidence; it is part of an interconnected web of complications among digital media platforms.
“I’m proud of the work that BuzzFeed News did, but I think this moment is part of the end of a whole era of media… It’s the end of the marriage between social media and news,” Ben Smith, the founding editor of BuzzFeed News, told The New York Times.
Earlier this year, BuzzFeed CEO Jonah Peretti announced plans to start experimenting with AI to publish articles. Though Peretti expressed skepticism about AI’s lack of creativity and “dystopian” nature, he assured employees that it would be used not to replace journalists but to enhance their work. Yet, as Noor Al-Sibai and Jon Christian pointed out in a Futurism article, the human employees who work with AI-produced articles are not journalists but “non-editorial employees who work in domains like client partnerships, account management, and product management.”
In a burgeoning effort to transform journalism, corporate media outlets have turned to digital means to reach wider audiences, trusting virality and social media platforms to bring the public back to the news. This shift began in the early 2000s, but only in the last decade, with the increasing hegemony of social media, has journalism undergone a full metamorphosis. In 2022, when the tech industry’s pandemic bubble burst and significant layoffs hit giants including Microsoft, Amazon, Google, and Meta (Facebook), the shock rippled out to digital news outlets. Alongside inflation and bloated workforces, one of the biggest factors behind the bubble was the rapid advancement of AI. Indirectly, AI was stirring up an unsettling feeling in the world of social journalism.
AI has also presented itself as a direct contributor to journalists’ anxieties. When the popular chatbot ChatGPT launched in November 2022, it transformed society in ways we are still coming to understand. The technology’s playfulness materialized in tedious collegiate essays, an AI-generated Kanye West cover of Taylor Swift’s “This is Me Trying,” and images of the Pope in a long, streetwear white puffer. The worry set in when industries began toying with the idea of employing AI chatbots in the workforce, assuring employees that the technology would only be used to help workers, not replace them. Proponents of AI have long leaned on that sentiment to assuage the valid concerns of those working in affected industries.
Of course, the fear that technological innovation will steal jobs, a tactic commonly exploited by populist leaders, is a trite one, and economists have long argued that new technologies create more jobs in the long run. But the newfound worry is not coming from workers in the automated jobs that machines have traditionally replaced. It is coming from the creative industries, which were long thought to be safe from machine takeover. Last month, the Writers Guild of America, representing 11,500 Hollywood screenwriters, went on strike demanding higher wages. The guild also asserted that AI should be banned from generating or editing scripts, arguing that AI cannot replicate the individuality each screenwriter brings to Hollywood. The dystopian prospect of the robot writer was getting closer and closer to reality: an object in the mirror closer than it appeared.
To be clear, it isn’t the broad concept of artificial intelligence that workers in these industries are skeptical of. In fact, AI has been used in the newsroom since 2014 by outlets such as the Associated Press (AP), which strives to “streamline workflows to enable journalists to concentrate on higher-order work.” AP uses AI for tedious functions such as automatically transcribing videos, generating story summaries, distributing content, optimizing content via image recognition, and creating the “first editorially-driven computer vision taxonomy for the industry.” AI has also been used to automate stories on sports and corporate earnings. It isn’t technological assistance, or its rapid advancement, that haunts journalists. What is most concerning is the prospect of “artificial general intelligence,” essentially shorthand for a machine “that can do anything the human brain can do.” Can creativity and empathy, two traits essential to journalism, truly be automated by AI?
One of the most concerning traits of AI chatbots is their tendency to hallucinate: to make up “irrelevant, nonsensical, or factually incorrect answers.” These chatbots are built on large language models (LLMs), which learn the patterns of language by analyzing massive amounts of text from the web. In function, an LLM resembles Mad Libs or an autocomplete tool: it predicts which words are likely to come next, not which statements are true.
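The autocomplete comparison can be made concrete with a toy sketch. The short Python program below builds a bigram model over an invented three-sentence corpus (far simpler than any real LLM, and every name and sentence in it is made up for illustration): it continues text purely by following patterns it has seen before, with no notion of whether the result is true.

```python
from collections import defaultdict, Counter

# Invented toy corpus. A real LLM trains a neural network on billions
# of web documents, but the core idea is the same: learn which words
# tend to follow which.
corpus = (
    "the pope wore a white coat . "
    "the pope wore a tall hat . "
    "the reporter wrote a long article ."
).split()

# Count, for each word, which words followed it in the corpus.
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    followers = transitions.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

def generate(start, length=6):
    """Autocomplete-style generation: chain the most likely next words."""
    words = [start]
    for _ in range(length):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))
```

The model will happily emit a fluent sentence about the pope's coat because that pattern dominates its training text, not because it checked any fact, which is the mechanism behind hallucination in miniature.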
“If you don’t know an answer to a question already, I would not give the question to one of these systems,” stated Subbarao Kambhampati, professor and researcher of AI at Arizona State University.
Ars Technica recommends the term “confabulation” instead, which “occurs when someone’s memory has a gap and the brain convincingly fills in the rest without intending to deceive others.” The problem is that the web is awash in fake information and lies, and chatbots have no ability to discern what is truthful and what isn’t.
AI chatbots can present “convincing false information easily.” Yet this is no fault of the AI model per se: it is doing exactly what it was designed to do. What AI hallucinations ultimately reveal are the dangers of trusting AI amid a vast web of online misinformation and hate speech. Joan Donovan, research director at Harvard Kennedy School’s Shorenstein Center, said that when her team of researchers experimented with an earlier version of ChatGPT, the software incorporated information from Wikipedia, Reddit, and 4chan, the website infamous for spreading conspiracy theories and hate speech. That spread threatens not just ongoing efforts to combat misinformation but democracy at large. As Nobel Prize-winning journalist Maria Ressa put it, “we need information ecosystems that live and die by facts.”
In November 2022, CNET, an American media outlet covering technology and consumer electronics, quietly began using its own “internally designed AI engine” to write articles as an experiment. Bylines on these articles read “CNET Money” instead of “CNET Money Staff,” requiring readers to hover over the byline to learn that the piece was generated by AI. Copy editors were still responsible for fact-checking the stories after they were written, but they appear to have put near-blind faith in the AI engine, trusting its automation to be infallible. After finding an article with mistakes, the editorial team conducted a full audit of the 77 stories the engine had published between November 2022 and January 2023. The investigation ultimately identified 41 stories requiring correction, ranging from simple copy errors to substantive mistakes. In AI-apologist fashion, CNET editor-in-chief Connie Guglielmo claimed that “AI engines, like humans, make mistakes.” Two months later in March, after CNET gutted 50% of its news and video staff, Guglielmo stepped down as editor-in-chief. She recently became senior vice president of AI content strategy at CNET’s parent company, Red Ventures.
The CNET example should be a lesson for all corporate media outlets to proceed with caution on AI, and to rule out altogether letting AI chatbots report the news. Not only do AI-published or AI-assisted articles lack creativity and lean on lazily repeated tropes, the chatbots themselves are rife with problems, especially for writers, ultimately posing a threat to the credibility of news outlets. And without credibility, media outlets lose the public’s trust.
Matthew Ingram of the Columbia Journalism Review details a number of instances in which AI chatbots have been blamed for a slew of errors, hoaxes, and misinformation. Pranav Dixit, a reporter at BuzzFeed, tested FreedomGPT, a chatbot with no guardrails around sensitive topics, and found that it “praised Hitler, wrote an opinion piece advocating for unhoused people in San Francisco to be shot to solve the city’s homeless crisis, [and] used the n-word.” Ars Technica adds that “ChatGPT has invented books that don’t exist, academic papers that professors didn’t write, false legal citations, and a host of other fictitious content.”
Gordon Crovitz, quoted in a New York Times report on AI chatbot-enabled disinformation, warned that “crafting a new false narrative can be done at dramatic scale, and much more frequently–it’s like having AI agents contributing disinformation.”
Corporate media CEOs who have experimented with AI-assisted or AI-published articles commonly reiterate that the work of journalists will only be enhanced, not replaced. But at the outlets that have employed chatbots in the newsroom so far, journalists have been relegated not to diligent reporting but to the tedious tasks of fact-checking and ensuring their articles don’t accidentally spew hate speech. What media outlets believe will save journalism may instead undermine its credibility. In an age rife with fake news, AI poses a new threat that forces us to confront the disinformation ecosystem plaguing the internet. The line between what can be automated and what should be automated may be permeable, but it comes into focus once we recognize the limits and repercussions of using AI for human-brain-like functions. While the transformations that AI chatbots like ChatGPT will bring remain uncertain, one thing is for sure: blind faith in technology to “save” industries like journalism should always be treated with caution.
Julianna Lozada is a staff writer at CPR and a senior at Columbia in the dual degree with Sciences Po. She is studying Human Rights with a specialization in MESAAS and a special concentration in Sustainable Development. You can probably find her creating Spotify playlists in Milstein or taking power naps on Butler lawn.