AI Adoption Cautiously Encouraged for Journalists: SABEW Panel Lays Out Benefits, Challenges, and Moral Dilemmas in Newsrooms
The use of artificial intelligence (AI) in journalism was a hot topic at a panel discussion hosted by the Society for Advancing Business Editing and Writing (SABEW) in late June. Kylie Robison, senior correspondent at Wired, and Ben Welsh, a news applications editor at Reuters, were among the panelists who shared their insights on the subject.
Robison stressed the importance of honesty about the use of AI, urging journalists to be transparent about its application so as not to give readers another reason to distrust the media. She also warned that AI tools may not guarantee true privacy, as leaked information could harm the reputation of the company or the journalist.
Robison encouraged journalists to double-check the sources and information that AI tools provide, both to stay ethical and to ensure they are presenting facts with supporting evidence. This is crucial because AI technology is still developing and can be flawed: Robison found that ChatGPT was pulling quotes that did not exist from hundreds of pages of documents in a lawsuit alleging copyright infringement by Meta's large language models (LLMs).
Ben Welsh, by contrast, shared a positive experience using an AI deep-research tool to investigate a person he was interviewing for a story; the tool helped him find information quickly and efficiently. He noted, however, that while such a tool can produce an extensive report on any topic it is prompted with, a longer report takes more time to generate and review than a shorter, less detailed one.
The German journalist network Netzwerk Recherche has established six recommendations for the responsible use of AI tools in journalistic research. These emphasize competent use, transparency about AI involvement, and integration into editorial guidelines without rigid rules. Meanwhile, Axel Springer’s "Premium Group" mandates widespread use of AI in nearly all journalistic processes, including research and content creation, with internal policies requiring AI checks on all content.
Discussions across the media industry highlight the need for clear editorial guidelines, quality assurance, and staff training around AI use, though no universal standard yet exists across media companies. The panel was moderated by Greg Saitz, an investigations editor at FT Specialist US.
Both panelists emphasized that AI tools are computers, not humans, despite their conversational tone, and advised against using them with sensitive or classified information about a source or story. Journalists should also pay close attention to the questions, or prompts, they enter into AI tools in order to receive accurate and useful information.
At Reuters, there are clear rules and expectations for the use of AI, including disclosure of AI use and a ban on applying AI tools to photography. Robison also emphasized the importance of honest communication with editors about the use of AI, including when and why it is used.
Deep-research tools are available in both Google Gemini and ChatGPT. As AI plays a growing role in journalism, it is crucial that journalists navigate this ethical landscape responsibly and maintain the trust of their readers.