“If we make better decisions, the future is not predetermined. This also applies to artificial intelligence. It depends on our choices whether AI will be good or bad.”
That quote from Sam Guzik, a journalist at New York Public Radio and a self-described futurist, captures the insights the World Press Institute fellows gathered during meetings with several experts.
Journalists, scholars and experts agree: AI is here to stay, and it’s already transforming the world. Jelani Cobb, dean of the Columbia Journalism School, describes AI as “an unstoppable force” that journalists must understand and navigate, as it fundamentally alters how people consume information. For many experts, AI is a tool designed to tackle specific problems. Some major U.S. media organizations already recognize its potential and are cautiously integrating its capabilities. They are utilizing AI to analyze vast data sets, engage new audiences, grow user numbers, moderate content and build trust. Notable examples include:
1. Associated Press: Using AI for over a decade for translations and to free up time for traditional journalism.
2. The New York Times: Synthesizing data and enhancing investigative journalism with AI.
3. ProPublica: Creating auto-generated audio of articles through AI.
4. The Economist: Employing AI for transcriptions and dubbing videos for TikTok.
These outlets take a cautious approach to AI, acknowledging the challenges, such as deepfakes and misinformation, that the technology creates for journalism. Reluctance around AI often stems from the adage that “AI is only as good as the data it’s trained on.” Camilla Bath, a former WPI fellow and journalism trainer, notes that current AI models are trained primarily on Western data, which can lead to misinterpretations. A significant issue is AI’s struggle with minority and regional languages, which limits its effectiveness in smaller markets.
“We are using tools that aren’t designed for and by journalists. That’s why we must proceed with caution,” Bath says, listing ongoing problems with AI:
- Bias
- Inaccuracy
- Lack of accountability
- Lack of transparency
- Privacy violations
- Erosion of trust
Due to the imperfections of the technology, most media organizations adopt a “human in the loop” principle when working with AI. For example, Associated Press guidelines say any output from generative AI tools should be treated as unvetted source material.
Meanwhile, the challenges of AI extend beyond these issues. A significant hurdle remains the lack of regulation in many parts of the world. In the United States, 17 of 50 states have enacted laws to protect individuals from unsafe or ineffective AI systems, including California, Connecticut, Louisiana and Illinois. Recently, the EU adopted its first legal framework on AI to ensure systems respect fundamental rights, safety and ethical principles. In China, regulations require that all algorithms be state-reviewed and aligned with “core Socialist values.”
However, journalist and AI expert Karen Hao argues that the most pressing issue may be AI’s role in climate change and the ongoing environmental crisis. She points out that deep learning has a considerable environmental footprint, exacerbating resource depletion.
“We must reconsider our path because our planet lacks the resources to sustain this kind of technology,” Hao says. Meanwhile, some researchers are already exploring ways to use AI to optimize energy grids.