18 APR 2026

Sune Selsbæk-Reitz on promptism, fluent machines, and why AI makes it easier to stop thinking too soon

Written by Jesse Weltevreden

An interview with Sune Selsbæk-Reitz on his new book Promptism and why the real risk of AI is not in the technology itself, but in how we use it. He explains how fluent systems influence thinking, why knowledge matters more than ever, and what professionals should do to stay in control.

Generative AI tools are spreading rapidly because they are fast, useful, and increasingly fluent. They produce answers that sound coherent, confident, and complete. That is exactly what makes them powerful. But it is also what makes them risky.

For Sune Selsbæk-Reitz, the real issue is not that AI sometimes gets things wrong. It is that people increasingly treat well-phrased outputs as substitutes for thinking. In his book Promptism: Fluent Machines, Forgotten Questions, and the Fight for Meaning in the Age of AI, he argues that fluency, speed, and ease are quietly reshaping how we form judgments, learn, and make decisions.

When fluent answers start replacing thinking

The starting point for Sune’s argument is not abstract. It is a familiar situation: trusting an answer because it sounds right.

“I was searching for something online,” he explains. “Instead of Googling, I asked a chatbot. It gave me a coherent answer, and I just nodded along and took it as fact.”

Later that day, he found out it was wrong.

Just because something reads very, very nice, it’s not necessarily true.

What worries him is how easily that step is skipped. Instead of searching, comparing, and interpreting, users can move straight to a fluent answer and accept it as fact. The effort shifts from forming a view to accepting one.


Promptism and the fluency trap

Sune describes this behaviour as promptism: the uncritical belief that a well-phrased question will produce a reliable answer, and the tendency to accept that answer as truth without examining it.

It’s the habit of treating machine-generated responses as truth without examining their sources, context or intention.

The underlying mechanism is what he calls the fluency trap. Humans are wired to associate clarity and confidence with correctness.

It’s like a know-it-all uncle at a dinner party. He’s so well-spoken that everything he says sounds like truth.

AI systems optimise for exactly those qualities. They are designed to be coherent, helpful, and persuasive. The result is a system that does not need to be right to be convincing. The risk is not just that answers can be wrong, but that they are accepted without question.

AI does not just answer questions, it shapes them

The deeper issue, in his view, is not just what AI answers, but how it influences thinking. AI does not just provide answers. It interacts with how questions are formed.

If you are typing in a question you have not fully formed in your own brain yet, you will get a fluent answer that influences you in one direction or the other.

Because that answer arrives immediately, there is no pause between asking and receiving.

You haven’t had the pause to actually hesitate, doubt, investigate.

Instead, the process becomes continuous.

You will just keep on pushing for the next slide and the next argument.

Rather than forming a view first, users move with the output and, as he puts it, “go with the flow”. The system does not just respond to questions. It starts shaping them before they are fully defined.

Why knowledge matters more than ever

AI changes how people search for information and form answers. That has direct implications for education and professional work. A common assumption is that knowledge becomes less important when AI is available. Sune argues the opposite.

AI chatbots are best used for fields you actually know something about.

Without prior knowledge, it becomes difficult to judge whether an answer is correct, incomplete, or biased. Users may recognise fluency, but not accuracy.

If you are not learning yourself, how can you ever teach others?

In that sense, AI increases the importance of domain knowledge. It shifts the skill from producing answers to evaluating them.

That also requires a different way of working with these systems. Not just prompting, but actively questioning what comes back.

Who produced this output?
What perspective does it reflect?
What might be missing?

Those kinds of questions, in his view, are more valuable than any prompt template. The goal is not just to use AI, but to stay involved in how its outputs are interpreted, and that requires more knowledge, not less.

Agreement machines and the loss of friction

Another less visible property of these systems is their tendency to agree.

As a default, large language models are agreement machines.

They validate, soften, and align with the user. That makes them easy to work with, but it also creates a constant sense of confirmation.

If we are always having this back-clapping machine by our side, we will somehow expect that also in real life.

Over time, that can start to shape expectations. Instead of seeking colleagues who challenge ideas, people may become more comfortable with those who confirm them.

A good friend will never just agree with you. They will actually try to advance you.

In that sense, the risk is not only in the tool itself, but in what it trains users to expect. If AI becomes the primary source of feedback, people may get used to agreement and become less open to being challenged.

For Sune, this is not inevitable. It is also a design choice. In his view, systems should by default present multiple perspectives instead of a single answer, making disagreement part of the interaction rather than something users have to initiate themselves.

From assistance to dependence

The line between assistance and dependence is not always clear. For Sune, the difference lies in whether users still think for themselves.

It’s if you are using it without thinking.

He compares it to using navigation for a route you already know. The tool removes the need to think, even when thinking would still be possible.

And that is actually the lazy part of the brain kicking in because we don’t want to think. I don’t want to think if I’m going left or right. And it’s the same thing with these chatbots.

If you are not having the first thought for yourself, you will become lazy.

Over time, that can weaken judgment and slow down learning. The issue is not the tool itself, but how it is used.

Responsibility becomes harder to pin down

As AI becomes part of decision-making processes, responsibility becomes harder to pin down. Sune makes a clear distinction. In some cases, responsibility is still straightforward. If a student uses AI in an essay, the student remains responsible for the output. But that changes when systems operate more autonomously.

If you are using algorithms in decision making… then who is responsible?

Responsibility becomes distributed across layers: developers, training data, managers, and organisations.

When everyone is responsible, no one actually is.

That creates a practical problem. When something goes wrong, it becomes unclear who can actually be held accountable. He illustrates this with the example of self-driving systems. If an autonomous car causes harm, responsibility is difficult to trace.

Is it the driver? Is it the CEO? Is it the algorithm itself? How do you actually punish the algorithm?

In that sense, the issue is not only technical, but structural. AI systems can make decisions, but the responsibility for those decisions does not follow as clearly. This becomes particularly relevant in customer-facing applications, where automated outputs can have legal and reputational consequences.

Organizations need to be worried about implementing AI blindly.

Most companies are not AI companies, yet they deploy systems that require careful design, oversight, and governance.


How to work with AI without losing judgment

Despite the critical tone, Sune is not arguing against using AI.

Yeah, I was impressed too in the beginning, you know, if you could get a Shakespeare poem in the style of an Eminem rap, come on, that was pretty cool.

But he also sees how quickly that initial excitement turns into uncritical use.

His advice is practical. Always examine the output. Ask for sources and verify them. Compare results across different models.

And most importantly:

Can you actually stand behind the words you are posting?

Copy-pasting output without understanding or ownership is, in his view, one of the most problematic behaviours emerging today.

AI amplifies who you already are

When asked whether AI will ultimately weaken or strengthen our thinking and creativity, Sune does not give a simple answer. In his view, it depends on the person using it.

It depends on the human sitting behind the keyboard.

If you are lazy, it will amplify your laziness. If you are sharp, it will amplify that.

AI does not make everyone equally capable. It amplifies existing behaviour.

He illustrates this with a simple example. A mediocre student will not suddenly become much better with AI. They will likely remain mediocre. But a strong student, using these systems well, can become more effective and go further.

That has implications for how individuals develop and how organisations operate. Those who already think critically can use these systems to accelerate. Those who do not risk becoming more dependent.

AI can dull our judgment or challenge our bias

When asked about the long-term impact of AI, Sune stresses the behavioural shift in how people use these systems.

It’s the reliance on these machines without ever thinking about either the input or the output. We are just copy and pasting simple tasks and dull our own judgment.

In his view, the risk is not that the systems exist, but how easily they are used without reflection. If input and output are no longer questioned, people stop exercising the very skills that keep them sharp.

I think that is the biggest risk… that we are not keeping ourselves sharp and questioning.

At the same time, he does not see the future as purely negative. If these systems are designed and used properly, they can also do something we as humans struggle with: challenge our own blind spots.

They can actually challenge these hidden biases… try to see things from another angle.

That potential matters, because those biases are not unique to machines. Every individual brings their own assumptions, preferences, and blind spots into the process. The difference is that AI, if used critically, can help surface them rather than reinforce them.

On RankmyAI and the need for critical infrastructure

When asked about platforms like RankmyAI, which aim to bring structure and transparency to a rapidly growing AI landscape, Sune sees clear value, but with an important condition.

It’s good that we have platforms that can rank these tools against each other, as long as it is done by a fair and diverse group of reviewers.

In his view, the challenge is not only to compare tools, but to avoid reinforcing a single perspective. AI systems already reflect biases in data and design. Platforms that evaluate them should counterbalance that, not amplify it.

He also sees an opportunity beyond rankings. Platforms can play a role in shaping how users engage with AI in the first place.

You could equip users with the right questions.

Instead of only helping people find tools, platforms can help them think more critically about how those tools work, what assumptions they contain, and what kind of output they produce.

That is also where we see our role as RankmyAI. In keynotes, publications, and workshops, we try to make those questions more explicit. This interview is part of that.


Where to follow Sune and find the book

Sune’s work focuses less on providing answers and more on reminding us to raise the fundamental questions that shape how we interact with technology and where we are heading as humans. That is also how he sees the role of his book.

This is not a book that gives you a one, two, three and then you are more productive. It asks you to slow down and think.

In the end, his argument comes back to a simple point.

You are still the reader. It is still up to you to take responsibility and be in charge.

AI systems are becoming part of everyday work, education, and decision-making. They are fast, fluent, and increasingly influential. But they do not remove the need for judgment. They increase it.

The question is not whether AI will shape how we think. It already does. The question is whether we remain actively involved in that process.

Promptism: Fluent Machines, Forgotten Questions, and the Fight for Meaning in the Age of AI will be released on May 4 and will be available through major online bookstores.

You can follow Sune Selsbæk-Reitz via his LinkedIn profile and his Substack newsletter, where he challenges how we use AI and encourages us to think more critically about our interaction with technology.

About Sune

Sune Selsbæk-Reitz is a Danish Data & AI strategist, writer, and technology philosopher. He holds a Master’s Degree in History with a minor in Philosophy, and has spent the past decade working in the data and artificial intelligence space. His work explores the ethical and cultural consequences of AI, with a particular focus on responsibility, critical thinking, and the design of trustworthy systems. He is the creator of the Deontological Design framework and writes the Substack “Footnotes and Friction”, where he reflects on AI, philosophy, and the changing nature of knowledge in the age of fluent machines.


© 2026 RankmyAI is licensed under CC BY 4.0 and is part of HvA.