Journalism has kept evolving over the years, trying to catch up with technological and psychological advances. Media has transitioned from being primarily print-based to radio, television and, most recently, the internet.
Artificial Intelligence (AI) is the latest invention seen as a tool that will greatly reshape the media landscape. Many media houses are still trying to figure out what it means for the industry, how they can tap into it, and what its impact will likely be in years to come. And, like the many technological advancements that came before it, AI is here to stay, and journalists have to embrace it.
AI is already changing the media industry. Some AI tools have been developed to make journalists' work easier. Some of these tools repackage human-created content into different formats, such as text, audio and video.
AI is also widely used by journalists to summarize lengthy documents and translate articles into different languages, making their jobs much easier.
But AI is also making life more difficult for the media by fueling ever more sophisticated and widespread false news. This is a major problem facing the industry: it has already eroded the credibility of media around the globe, as it becomes harder for citizens to figure out what is true and what is false.
As U.S. citizens prepare to go to the polls, for instance, Barbara McQuade, a professor at the University of Michigan and author of “Attack from Within,” a book on how disinformation is sabotaging America, said AI is being used to create fake celebrity endorsements, political statements, audio for robocalls, ads, newscasts, and even fake news media itself. Examples include statements targeting both sides of Russia’s war on Ukraine, fake robocalls from “President Joe Biden,” and realistic spoofs of The Washington Post and Fox News.
The spread of such false news and deceptive information can incite violence and hatred and, frankly, is a threat to the fabric of democracy.
According to McQuade, this problem should be met with regulation, labelling and technical detection.
But “unfortunately there are no laws by the American government that can hold tech companies liable for information shared through them,” she said.
Technical detection tools are available, however, that try to identify whether a text was generated with AI writing tools such as ChatGPT, GPT-4 or Bard. They evaluate the text using a range of methods and criteria, including statistical, semantic, stylometric and behavioural analysis. But these tools struggle to achieve perfect results because of AI’s increasing sophistication: by design and through ongoing innovation, AI systems continuously improve, making it harder to distinguish AI-generated content from human-written content.
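To give a sense of what “statistical” and “stylometric” analysis can mean in practice, the sketch below is a deliberately simplified Python example that computes two such signals: variation in sentence length (sometimes called burstiness) and vocabulary diversity. It is not how any production detector actually works; the function name, thresholds and sample text are invented purely for illustration.

```python
# Toy illustration of two stylometric signals sometimes cited in AI-text
# detection research. Real detectors combine many signals and trained models;
# this sketch only shows the kind of measurements involved.
import re
import statistics


def stylometric_signals(text: str) -> dict:
    """Return two crude stylometric measurements for a piece of text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())

    # "Burstiness": human writing tends to vary sentence length more than
    # machine-generated text does, so a low standard deviation can be a signal.
    lengths = [len(s.split()) for s in sentences]
    burstiness = statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

    # Type-token ratio: the share of distinct words, a rough measure of
    # vocabulary diversity.
    ttr = len(set(words)) / len(words) if words else 0.0

    return {"sentence_length_stdev": burstiness, "type_token_ratio": ttr}


if __name__ == "__main__":
    sample = (
        "The committee met on Tuesday. It voted. After a long and surprisingly "
        "heated debate, members agreed to postpone the decision until spring."
    )
    print(stylometric_signals(sample))
```

Even this toy example hints at why detection is hard: as AI systems learn to vary their sentence structure and vocabulary, simple statistical fingerprints like these become less reliable.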
One approach that could help is to develop more intelligent AI models capable of recognizing their own output; another is to pass laws holding tech companies responsible for AI-generated content aimed at deceiving the public.
But, on these solutions, “we are way behind,” McQuade said.
Nonetheless, experts have advised journalists to apply ethical standards when using AI in news production.
Dalia Hashim, program and research lead for AI and media integrity at the organisation Partnership on AI, said newsrooms need to set clear goals for adopting any AI tool and must also ensure transparency. As such, journalists should explain their use of the tools and be ready to be held accountable whenever they decide to use them.
In a session organized by the MacArthur Foundation to discuss the use of AI in the media industry, Hashim said the “media industry should also keep monitoring the evolution of AI in order to keep track of any changes that may arise.”