AI Only Knows What You Think to Ask

“I know that I know nothing.”

Would Socrates feel the same way if he had access to a monthly AI subscription today?

This often-quoted epigram is usually framed as an expression of humility, yet it carries a more unsettling implication. It acknowledges a structural limitation of human knowledge: what holds us back most isn’t what we can’t explain, but what we don’t yet understand well enough to question.

History has repeatedly validated this constraint. For centuries, disease was attributed to foul smells, imbalances of humors, or divine causes. It wasn’t until the 19th century that germ theory reframed the question from “why does sickness spread?” to “what invisible agents cause it?” The thinkers of those eras were no less capable than modern experts. Questions that feel obvious in hindsight simply did not exist within the conceptual vocabulary at the time. Their limitation was not ignorance, but imagination - an inability to articulate hypotheses that required ideas not yet formed.

Modern artificial intelligence has not removed this constraint. If anything, it has made it easier to ignore. The belief that we are now protected from such blind spots overlooks the same paradox that constrained previous generations: access to information does not expand understanding unless one knows where to look. This article examines a less discussed limitation of AI - that its power is bounded not by what it knows, but by what the user knows to ask. Software engineering provides a particularly revealing case study for this dynamic.

Unknown Unknowns

The term ‘unknown unknowns’ refers to a category of risk: blind spots that sit entirely outside one’s awareness. You cannot feel confused or incomplete about something whose absence you cannot perceive - the gap is invisible, so you feel settled.

AI inherits this same limitation. A language model can elaborate on a prompt, but it cannot reliably extend inquiry beyond the boundaries suggested by the question itself. When a user’s mental model is incomplete, AI tends to reinforce that model rather than expose its limits - even when explicitly instructed otherwise. The response feels thorough and confident while quietly omitting critical context.

This is where the illusion of coverage emerges. The output appears comprehensive only because the problem was never explored beyond the user’s framing. The AI was simply answering the question you thought to ask.


Prompting Is Not Understanding

Prompt engineering is often described as a desirable skill, but prompts are merely reflections of thought. They emerge from the thinker’s intent and assumptions - conscious or otherwise. AI accepts those assumptions rather than interrogating them.

This is why AI performs exceptionally well when accelerating work the user already understands how to perform. It can draft, refactor, summarise, or expand upon known concepts with impressive efficiency. However, when applied to domains unfamiliar to the person prompting, its usefulness becomes more fragile.

Without sufficient domain knowledge, the prompter cannot reliably distinguish the essential from the incidental, the acceptable from the dangerous, or the incomplete from the intentionally omitted. AI does not have the intuition to make these judgements independently - it waits to be directed. As a result, its responses are only as discerning as the prompts that elicit them.


Why AI Rarely Pushes Back

One of the more subtle characteristics of widely available large language models is their agreeableness. Tuned for consumer friendliness, with limited customisability, they are quietly optimised to be helpful and affirming. When presented with an idea, they tend to accept and build upon its premise rather than question whether it is sound.

If a flawed premise is embedded in the prompt, AI is far more likely to optimise around it than to question whether it should exist at all.

Yes, it is possible to instruct AI to challenge assumptions or adopt a more critical position. However, doing so already assumes you know something might be wrong. When you don’t know exactly what is wrong, the problem compounds: AI rarely redirects the conversation toward an entirely different frame. It operates within the bounds of the prompt, and it cannot reliably reveal what lies just outside the user’s field of view.


Where Engineering Exposes the Gap

The consequences of this dynamic are especially evident in software engineering. AI can generate functional code rapidly and produce solutions that appear correct at a glance. This is undeniably powerful when experimenting or racing to market. However, without sufficient technical judgement, critical issues are easily overlooked - and they compound as the audience grows.

I’ve noticed issues including redundant logic, unnecessary abstraction, brittle state management, insecure handling of credentials, and unsurfaced performance characteristics - all of which hinder progress rather than support it. Someone without an engineering background may not recognise these issues, and AI will not surface them unprompted. The code runs, the tests pass, and the illusion of completion settles in.
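To make that concrete, here is a minimal, hypothetical sketch - the endpoint, key, and function are invented for illustration - of the kind of code that runs and passes a happy-path test while quietly embedding several of the issues above:

```python
import requests

# Credential hardcoded in source: it works locally, and it will also leak
# the moment this file is committed to version control.
API_KEY = "sk-live-xxxx"  # hypothetical key, for illustration only


def fetch_orders(customer_id: str) -> dict:
    # No timeout, no retries, no error handling: fine in a demo, but an
    # unsurfaced reliability and performance risk once real traffic arrives.
    response = requests.get(
        f"https://api.example.com/v1/customers/{customer_id}/orders",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    # Silently assumes success; a rate limit or server error later becomes
    # a confusing crash far from its cause.
    return response.json()
```

A reviewer with engineering judgement spots these problems in seconds; a test that only exercises the happy path never will.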

This is precisely why AI excels at helping ideas start, but struggles to help them finish. When such systems pass validation and require refinement, the work doesn’t disappear - it accumulates. Someone must eventually reconcile the assumptions embedded in the system with the realities of production. AI is not removing the need for software engineers - who do you think will be responsible for stabilising and refining the AI-built MVP?


Who AI Actually Empowers

Contrary to popular narratives and fear-mongering, AI does not flatten expertise - it amplifies it. Those who benefit most from AI are practitioners with sufficient depth of knowledge to recognise when output is incomplete or flawed. This allows them to discard what is unnecessary, question what appears reasonable, and identify hidden risk. When applied in this manner, AI reduces friction and accelerates iteration without compromising judgement. 

For others, the same fluency can produce an illusion of mastery. Confident, coherent responses are easy to mistake for correct ones when one lacks the context to know what should be challenged. This mirrors the Dunning–Kruger effect: the less someone knows, the harder it is to recognise what they are missing. AI does not correct this blind spot; it often reinforces it, until real complexity finally surfaces.


A Final Perspective

AI is undeniably powerful. But that power should come with caution - it tends to amplify understanding and blind spots simultaneously. Those with domain knowledge can leverage it for significant automation and efficiency. When used by those without that understanding, it is persuasive rather than corrective. The risk is not that AI produces incorrect answers, but that it produces convincing answers to the wrong questions.

In practice, the most meaningful limitation of AI is not what it knows, but what it cannot know unless invited to explore it. And that invitation is granted entirely by the user’s capacity to recognise what might be missing. AI only knows what you think to ask it.
