Artificial intelligence (AI) is transforming the way news is being distributed and consumed. From algorithmic news distribution and AI-powered news aggregators to the growing habit of asking AI chatbots (like ChatGPT or Gemini) to summarise stories, the technology is increasingly embedded in the daily flow of information.
Given the scale of this change, understanding how AI is being used in news creation and journalism has never been more urgent. And much of that happens out of sight: before a reader clicks a headline or reads a sentence. However, the public’s limited understanding of how AI systems work, combined with low levels of media and AI literacy, risks leaving people confused and vulnerable to making harmful decisions.
To better understand those risks, we examine three areas where AI is reshaping the news ecosystem: search, fact-checking, and personalised feeds.
AI in news search: Accuracy and bias risks
Type almost any question into Google and you will no longer be met with a list of links. Instead, the first overview response is provided by Google’s AI tool, Gemini, powered by large language models (LLMs). These “AI Overviews” pull together information from across the web and present it as a ready-made summary.
At their best, these LLM-driven search engines, or AI chatbots with web-search capabilities, can help people quickly make sense of complex topics or get updates during emergencies.
Yet these systems are far from reliable. Studies have shown AI search tools frequently produce factual errors, amplify biases, and misattribute information. Some of the earliest outputs from AI search systems were notably problematic. Google’s AI Overviews famously advised users to put glue on pizza so the cheese would stick and, in another case, suggested that eating rocks might offer health benefits. These errors circulated widely as examples of the technology’s unreliability, highlighting not only factual inaccuracy but also the potential for harm when such systems offer advice that appears authoritative.
In response to these early issues, Google has introduced a series of targeted fixes: it has built better detection mechanisms for nonsensical queries, reduced the influence of satirical and user-generated content, tightened safeguards around sensitive domains such as health and news, and limited the likelihood of incorporating misleading snippets.
Yet even with these adjustments, concerns remain. When the tools do provide citations, the attributions are often inaccurate or misleading. In effect, these tools are borrowing the authority and credibility of established news outlets without meaningfully supporting them.
Despite these concerns, public attitudes are shifting. According to the Digital News Report 2025, one in five Australians (21 percent) say they are comfortable with news that is mainly produced by AI — a higher level of acceptance than in many other countries.
Among those interested in AI-generated news, news summaries (29 percent) and story recommendations (22 percent) are the most appealing uses, and paid online news subscribers are the group most likely to use AI chatbots for news. Only 30 percent say they don’t want their news personalised by AI.
These numbers show a country that is divided but increasingly open to AI news, even as concerns about accuracy, transparency and accountability continue to rise.
AI and automated fact-checking
AI is also being increasingly used to check facts. While human fact-checkers are still hard at work, many of their key tasks are being handed over to machines to speed up the verification process.
It’s called Automated Fact Checking (AFC), and it can help in three ways: detecting claims worth checking (like ‘the world is flat’), retrieving evidence to support or refute a claim, and verifying the claim against that evidence.
AFC uses natural language processing to sift through large amounts of text and classify what it finds. Different AI models are typically trained for each task, so one model might be trained to detect claims while a separate one handles verification.
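The three stages described above can be sketched in code. This is a deliberately simplified toy, not a real AFC system: actual pipelines use trained NLP models, while here each stage is a hand-written stand-in (the keyword heuristic and the tiny hard-coded evidence base are both illustrative assumptions).

```python
# Toy sketch of the three AFC stages: claim detection, evidence
# retrieval, and verification. Real systems use trained NLP models;
# these stand-ins only illustrate the pipeline's shape.

# Hypothetical evidence base mapping a claim to (evidence, verdict).
EVIDENCE_BASE = {
    "the world is flat": (
        "Satellite imagery shows Earth is an oblate spheroid.", "refuted"),
}

def detect_claims(text):
    """Stage 1: pick out sentences that look like checkable claims."""
    sentences = [s.strip().lower() for s in text.split(".") if s.strip()]
    # Toy heuristic: treat declarative "X is Y" sentences as claims.
    return [s for s in sentences if " is " in s]

def retrieve_evidence(claim):
    """Stage 2: look the claim up in the (tiny) evidence base."""
    return EVIDENCE_BASE.get(claim)

def verify(claim):
    """Stage 3: return a verdict based on the retrieved evidence."""
    hit = retrieve_evidence(claim)
    if hit is None:
        return "not enough evidence"
    _evidence, verdict = hit
    return verdict

for claim in detect_claims("The world is flat. I love mornings."):
    print(claim, "->", verify(claim))  # the world is flat -> refuted
```

Splitting the work into separate functions mirrors how real AFC research splits the problem: each stage can be improved, or replaced with a trained model, independently of the others.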
However, we are not rushing towards a fully automated fact-checked future. These models take a significant amount of time to build and are not completely reliable even when deployed. They are also typically built for working with only one type of claim (such as those that are written or spoken) rather than working across modes (such as photographs, data visualisations, or videos).
Personalisation without transparency
AI is also used to distribute news to people through algorithms. We often think about the Facebook news feed or TikTok algorithm, but even the simplest news websites use an algorithm to select what content to show you, especially if you’re logged in.
The webpage may be mainly curated by humans, but a ‘Most Viewed’ top ten is a perfect example of an algorithm that selects and presents news to you.
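A ‘Most Viewed’ list really is that simple as an algorithm: count the views, then surface the top stories. A minimal sketch (with made-up article slugs and view data) might look like this:

```python
from collections import Counter

# Hypothetical view log: one entry per page view, newest last.
view_log = ["budget-2025", "storm-warning", "budget-2025",
            "election-poll", "budget-2025", "storm-warning"]

def most_viewed(log, n=10):
    """Rank articles purely by raw view count, most popular first."""
    return [article for article, _count in Counter(log).most_common(n)]

print(most_viewed(view_log, n=3))
# ['budget-2025', 'storm-warning', 'election-poll']
```

Note what the algorithm optimises for: the most-read story leads regardless of its civic importance, which is exactly the limitation the public-value approach below responds to.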
People have been concerned about filter bubbles for some time, but there is limited evidence to support the claim that people get trapped in information silos. Recent empirical research has shown that platforms show people different types of news, and personalisation is not as intense as once feared. Research is now exploring whether social media does indeed amplify divisive views.
While social media algorithms will keep promoting engagement and news websites will keep surfacing what is popular, a small but growing group of news organisations is building better news algorithms for the public good.
For instance, a Swedish public service organisation has developed a public value metric, which means that their algorithms don’t just promote what is popular, but what is most important to the public.
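One way to picture the public-value idea is as a re-ranking step: blend popularity with an editorially assigned importance score instead of ranking on views alone. The field names, scores, and weighting below are illustrative assumptions, not the broadcaster’s actual metric.

```python
# Hypothetical stories with raw views and an editorial public-value
# score (0 to 1). Both the data and the 0.7 weighting are made up.
stories = [
    {"title": "Celebrity breakup", "views": 9000, "public_value": 0.1},
    {"title": "Local flood alert", "views": 1200, "public_value": 0.9},
    {"title": "Budget explainer",  "views": 3000, "public_value": 0.8},
]

def rank(stories, value_weight=0.7):
    """Rank by a weighted mix of public value and normalised popularity."""
    max_views = max(s["views"] for s in stories)

    def score(s):
        popularity = s["views"] / max_views  # normalise to 0..1
        return value_weight * s["public_value"] + (1 - value_weight) * popularity

    return sorted(stories, key=score, reverse=True)

for s in rank(stories):
    print(s["title"])
# Local flood alert
# Budget explainer
# Celebrity breakup
```

With these numbers the flood alert leads despite having the fewest views; lowering `value_weight` toward zero collapses the ranking back into a plain ‘Most Viewed’ list.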
AI brings some efficiencies. It can accelerate newsroom workflows, help audiences navigate complex issues, and expand access to essential local information. But it also has the power to obscure stories, distort facts, and produce confident errors that pass unnoticed.
The future of trustworthy news depends on how wisely the industry handles this moment. If used well, AI can support journalism and strengthen democracy. But if used poorly, it risks weakening the very foundations of informed citizenship.
Shir Weinbrand is a computational scientist and PhD student at Queensland University of Technology. Dr James Meese is an associate professor at RMIT University where he co-leads the News, Technology, and Society Network and is an associate investigator at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S). Dr Joanne Kuai is a research fellow at RMIT University and an affiliate at the ADM+S. This is the second article in a 360info series on AI, Journalism and Democracy. The first is here. Originally published under Creative Commons by 360info