The Misleading Comfort of AI: Understanding the Limits of Large Language Models

As the popularity of AI systems surges, so do concerns about the reliability of their outputs and the justifications behind them, underscoring the need for critical engagement with AI-generated information.
At the World Economic Forum in January 2023, OpenAI CEO Sam Altman weighed the potential and pitfalls of artificial intelligence (AI) and delivered a pointed reassurance about its future: “I can’t look in your brain to understand why you’re thinking what you’re thinking. But I can ask you to explain your reasoning and decide if that sounds reasonable to me or not. … I think our AI systems will also be able to do the same thing. They’ll be able to explain to us the steps from A to B, and we can decide whether we think those are good steps.”
The statement speaks to a core expectation of AI systems such as ChatGPT and Gemini, which more than 500 million users consult each month for information on a wide array of topics. Yet concern is growing over the reliability and trustworthiness of these AI-generated responses, because the models that produce them cannot reason about, or genuinely justify, what they say.
Understanding AI’s Functioning
Large language models (LLMs) like ChatGPT operate on principles quite distinct from human reasoning. Trained on vast datasets of human writing, these models detect intricate statistical patterns in language and use them to predict continuations of the text prompts users provide. This process can produce outputs that convincingly mimic knowledgeable human communication. But those outputs are not underpinned by genuine reasoning or a concern for truth; the model’s only objective is to continue the detected language patterns plausibly.
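To make the pattern-continuation point concrete, here is a deliberately tiny sketch in Python: a toy bigram model that learns only which word tends to follow which, then extends a prompt by sampling. The corpus, names and haze example below are illustrative assumptions, not anything a production system uses; real LLMs replace this word-counting with deep neural networks trained on enormous corpora, but the objective is the same in spirit: given the text so far, emit a statistically plausible next word.

    import random
    from collections import defaultdict

    # Toy "language model": record which word follows which in the training
    # text, then continue a prompt by sampling from those observed followers.
    corpus = (
        "the haze today is caused by wildfire smoke "
        "the haze today is caused by pollen "
        "the sky today is clear and blue"
    ).split()

    next_words = defaultdict(list)
    for current, following in zip(corpus, corpus[1:]):
        next_words[current].append(following)

    def continue_text(prompt: str, max_new_words: int = 4) -> str:
        """Extend a prompt by repeatedly sampling a plausible next word."""
        words = prompt.split()
        for _ in range(max_new_words):
            candidates = next_words.get(words[-1])
            if not candidates:
                break  # no observed continuation for this word
            words.append(random.choice(candidates))
        return " ".join(words)

    print(continue_text("the haze today"))
    # Might print "the haze today is caused by pollen": fluent, possibly
    # false, and produced with no notion of evidence or justification.

Note what is absent from the sketch: nothing in it represents evidence. A continuation such as “is caused by pollen” is generated in exactly the same way whether it happens to be true or false.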
The Justification Problem
For a belief to amount to knowledge, it needs sound justification, and that is something LLMs are fundamentally unable to provide. An LLM might correctly state that today’s haze in Tennessee is due to Canadian wildfires, for instance, yet it cannot substantiate that assertion with genuine evidence or reasoning. The gap between the model’s convincing outputs and its inability to justify them becomes particularly concerning because users may unknowingly adopt these assertions as truths without the necessary scrutiny.
To illustrate the pitfall, consider the philosophical conundrum known as Gettier cases: scenarios in which a belief is true, but only by luck, because the justification behind it is defective. In a classic example given by the 8th-century Indian Buddhist philosopher Dharmottara, a traveller sees what appears to be water. It is in fact a mirage, yet on reaching the spot the traveller finds real water hidden under a rock. The belief that there was water turned out to be true, but it rested on an illusion rather than valid evidence. LLM outputs can put users in a similar position: statements that may well be true, yet are never grounded in genuine justification.
Implications for Trust
Altman’s reassurance that AI can explain its reasoning falls short of addressing the fundamental problem: when asked to justify an assertion, an LLM cannot provide a real justification. Instead, it generates text that looks like a justification but is merely a further extension of language patterns. Hicks, Humphries, and Slater argue that this makes AI outputs akin to “bullshit” in philosopher Harry Frankfurt’s sense of the term: statements crafted to appear truth-apt without any concern for actual truth.
As AI systems improve and their apparent justifications become more convincing, two outcomes emerge. Those who understand the intrinsic limitations of AI will see through the veneer, recognising that these systems lack genuine explanatory power. Those unaware of the limitations, by contrast, may be misled into accepting AI outputs as justified truths, blurring the line between fact and fiction.
The Need for Critical Engagement
Acknowledging LLMs as powerful tools doesn’t negate the need for critical engagement with their outputs. Experts in various fields already leverage AI’s capabilities while applying their expertise to validate and refine AI-generated content. However, the general public often looks to AI for guidance in areas where they lack competence, such as teenagers researching algebra or seniors seeking investment advice. This reliance necessitates a clear understanding of AI’s limitations and the importance of validating its outputs.
While users may intuitively know that cooking pasta in petrol is unwise, more complex queries demand a careful approach to ensure the information received is both accurate and justified. The key takeaway from Altman’s remarks and from the philosophical analysis is clear: understanding AI’s strengths and limitations is crucial to navigating its outputs effectively.
Source: Noah Wire Services