Abstract
In this paper, we evaluate the ability of Large Language Models (LLMs) to
assess the veracity of claims in "news reports" generated by themselves or
other LLMs. Our goal is to determine whether LLMs can effectively fact-check
their own content, using methods similar to those used to verify claims made by
humans. Our findings indicate that LLMs are more effective at assessing claims
in national or international news stories than in local news stories, better at
evaluating static information than dynamic information, and better at verifying
true claims than false ones. We hypothesize that this disparity arises
because the former types of claims are better represented in the training data.
Additionally, we find that incorporating retrieved results from a search engine
in a Retrieval-Augmented Generation (RAG) setting significantly reduces the
number of claims an LLM cannot assess. However, this approach also increases
the occurrence of incorrect assessments, partly due to irrelevant or
low-quality search results. This diagnostic study highlights the need for
future research on fact-checking machine-generated reports to prioritize
improving the precision and relevance of retrieved information.
Furthermore, claims about dynamic events and
local news may require human-in-the-loop fact-checking systems to ensure
accuracy and reliability.