AI and its Journalistic Implications

By Grace Sargent

Since its inception, artificial intelligence (AI) has prompted in-depth conversations about its potential benefits as well as its drawbacks. There has been constant debate over whether AI tools, such as ChatGPT, should be viewed with enthusiastic optimism or realistic skepticism. Based on trends over the past year, it seems journalists should maintain a position somewhere between these two extremes, not only for the sake of the continuation and success of their jobs, but for the success of journalism as a whole.

One of the biggest controversies currently surrounding OpenAI is the criticism from writers who argue it is stealing their work to advance its intelligence. John Herrman of New York Magazine explains, “In their view, AI companies are systematically stealing content in order to train software models to copy it” (Herrman). As we have seen, AI learns from human-created content; this is why some of our harmful biases have manifested themselves in AI functions. This has even been found in AI used to screen job applicants, which tended to ignore women’s applications in favor of men’s (Dastin). After learning how AI builds off of published content, many news sites have chosen to block the GPTBot crawler from accessing and subsequently exploiting any further information to help develop OpenAI’s tools. Dan Milmo points out that the company he works for, The Guardian, is part of a growing list of organizations taking these kinds of steps to limit the amount of information taken by OpenAI; others include CNN, The Washington Post, Bloomberg and The New York Times (Milmo). In the UK, people have even urged the Prime Minister to “make clear that intellectual property law must be respected when AI systems are being built,” demonstrating the magnitude of our situation with AI (Milmo).
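For readers curious how such a block actually works: OpenAI documents that its crawler identifies itself with the user agent GPTBot and respects the Robots Exclusion Protocol, so sites typically add a few lines to their robots.txt file. A minimal sketch (the exact rules each publisher uses will vary):

```
# Block OpenAI's crawler from the entire site
User-agent: GPTBot
Disallow: /
```

Because robots.txt is advisory rather than enforced, this approach depends on the crawler honoring the directive, which OpenAI has stated GPTBot does.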

In an attempt to use AI to its fullest potential, some news sites have experimented with its ability to produce entire articles, yet the process has proved difficult. A frequent problem was that the language “can often read as trite rather than the more sophisticated and complex analysis that human writers can produce” (Willing). Even worse, its dependence on completed articles published on the internet has resulted in a tendency for AI to commit outright plagiarism—an act that goes against the journalistic standard of originality and integrity. Aside from article generation, journalists have been able to take advantage of AI in ways that help them take care of lower-level (yet time-consuming) tasks (Willing). For example, machine learning (ML) and natural language processing (NLP) are both tools used to aid in the research journalists need to conduct (Willing). Specifically, “ML can identify trends and patterns in large volumes of data” (Willing), which saves time for stressed journalists under constant pressure to meet deadlines. In addition to saving time, AI can be used for tedious tasks that do not need to be done by humans but are still integral to the prosperity of news sites. One great example would be the use of chatbots for consumer inquiries. This carries a couple of benefits: first, readers can receive immediate responses to a variety of questions without having to wait to be paired with a worker, allowing more readers overall to be helped, and in a more efficient manner. Second, AI can thereby help create a better experience for customers, which not only increases the likelihood of loyalty, but also of increased advertising or subscription revenue (Willing).

Although each of these benefits is appealing, it is still important to maintain an awareness of the flaws in AI tools, and their effect on journalism specifically. One of the most prominent instances of this is the tendency for biases to seep into the presentation of information. When using AI to produce articles or writing of any kind, we must remember that we are its teacher; in other words, AI is bound to reflect any form of discrimination already inherent in our writing. Devin Partida of Techopedia highlights the fact that “When unconscious and conscious biases find their way into the data sets used to train the model, they will also sneak their way into AI outputs” (Partida). Therefore, journalists should pay attention both to the possibility of plagiarism and to the potential for underlying biases within AI-generated content.

When it comes to AI and its various uses, no clear-cut answer emerges as to whether it will be altogether beneficial or harmful to the future of journalism. Rather than becoming completely reliant on AI or vehemently opposing its presence in news organizations, a proper overview of its current effects suggests we use it in moderation alongside human intelligence. Ironically, journalists could even try using AI to combat the flaws common in AI writing: Edward Tian, a college student, created GPTZero, which helps determine whether a given text could have been AI-generated (Bullard). Tools like this allow journalists to receive assistance with tedious tasks while still having the opportunity to review the work of AI before publication, ultimately ensuring content is written both efficiently and effectively. The field of journalism has an opportunity to combine the evolved human intelligence it depends on with the newly developed capabilities of AI in ways that could either destroy or elevate journalism as we know it.

(Fun fact: the above graphic is AI-generated!)

Works Cited

Bullard, Gabe. “Smart Ways Journalists Can Exploit Artificial Intelligence.” Nieman Reports, 22 June 2023, niemanreports.org/articles/artificial-intelligence-newsrooms/. 

Dastin, Jeffrey. “Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women.” Reuters, Thomson Reuters, 10 Oct. 2018, http://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G. 

Herrman, John. “How AI Will Change the News Business: 3 Theories.” Intelligencer, 1 Aug. 2023, nymag.com/intelligencer/2023/08/how-ai-will-change-the-news-business.html.

Milmo, Dan. “The Guardian Blocks ChatGPT Owner OpenAI from Trawling Its Content.” The Guardian, Guardian News and Media, 1 Sept. 2023, http://www.theguardian.com/technology/2023/sep/01/the-guardian-blocks-chatgpt-owner-openai-from-trawling-its-content.

Partida, Devin. “Why Does AI Have Biases? – Techopedia.” Techopedia, 11 July 2023, http://www.techopedia.com/can-ai-have-biases/2/34037. 

Willing, Nicole. “AI Journalism: Can Humans and AI Coexist in the Newsroom?” Techopedia, 16 Aug. 2023, http://www.techopedia.com/ai-journalism-where-will-the-rise-of-automated-news-writing-and-fact-checking-take-the-industry.
