Postcards from the Museum: AI Can Imitate Thought but Cannot Think

AI can mimic language convincingly, but without lived experience, intention, or responsibility, it cannot truly think or understand meaning, writes Barry Cooper.


In The Myth of Sisyphus, Albert Camus wrote, “Properly speaking, nothing has been experienced but what has been lived and made conscious.” From Camus’ perspective, experience is everything. It is not an accessory to thought but its condition, and it comes from lived life and conscious choices, not from abstraction.

This distinction matters because it establishes a standard for what can be considered an idea. If thought is inseparable from experience, then meaning must pass through a lived point of view. This raises a rather uncomfortable modern-day question about knowledge production: does a system that does not live, does not choose, and does not experience, have the capacity to produce ideas in any meaningful sense? Or does it simply appear to do so?

If there is no lived experience, can what is produced still be called an idea? Camus would say no. What is generated has not been lived and therefore cannot be said to exist as thought. At best, it resembles what Jean Baudrillard called a simulacrum: a copy without an original.

Let’s consider an example. You walk into the Rijksmuseum in Amsterdam and stand before George Hendrik Breitner’s The Singel Bridge at the Paleisstraat. Later, in the gift shop, you buy a postcard of the painting. The print reproduces the image but none of its original substance – not the paint, the frame, the scale, the lighting, or the hush of the room in which it hangs. It is recognizable, legible, even pleasing, but it is not the thing itself.

Much of what is now produced by LLMs functions in precisely this way. Their outputs resemble ideas (especially when we stamp them with our own understanding), but they are assembled without experience or authorship, simulating the forms of human thought while lacking its substance. They are postcards from the museum. They have not been lived, and so they are not real in the sense Camus demands, and certainly not real to us.

An LLM does not produce based on meaning, but on statistical probability and patterns in language. It uses a generative process akin to the cut-up technique associated with the Dadaists and later adopted by writers such as William Burroughs and musicians like David Bowie, in which existing fragments of text were rearranged to create new meaning from old ideas. It takes the weight of human language, sifts through all the pieces, and constructs an answer it does not comprehend to a question it did not understand.

The algorithm is good, very good even, but the meaning it spits out is neither intentional nor authorial. The meaning comes from the reader; there is no intention of meaning from the system itself. In other words, the machine takes a prompt, identifies the most likely words to fit that prompt, and arranges them into something that resembles an answer.

Camus would argue that the development of thought cannot be left to an artificial process that optimizes coherence without experience. In this way, nuance, intention, and lived perspective are lost and what is left is a façade of meaning, behind which sits an algorithm that delivers a best statistical fit rather than considered judgment.

Bowie and Burroughs, however, did produce powerful work through similar techniques. But they were not defined by the method alone. Meaning also emerged from the context, from the body of work, from the judgment of which particular combinations were worth publishing. AI does not do that.


According to Camus, “Understanding the world for a man is reducing it to the human, stamping it with his seal.” To this way of thinking, understanding is something that happens as we create our world through our personal framing and description of it. We make sense of the world by imposing perspective and judgment, and the difficulty with AI is that it invites us to understand the world but only through a simplified lens. Jean-Paul Sartre offers a more determined version of existentialism: “Man simply is. Not that he is simply what he conceives himself to be, but he is what he wills.” While Camus talks of experience, Sartre is all about action.

So, does our use of AI constitute action? Is the exchange of prompt and response an act, a determination of will? Or is it a rejection of responsibility, a way of avoiding the effort to think thoroughly? When responsibility for action is handed to a system that can only produce a simulacrum, Sartre’s reasoning leads to the conclusion that we are in fact not acting at all (and therefore not fully living). To outsource oneself to an algorithm is, on Sartrean terms, to forfeit the rights to one’s destiny.

In fact, much of generative AI is concerned with output rather than action. And for this, Sartre offers an interesting point on creativity and the nature of the work of writers: “The genius of Proust is the totality of the works of Proust; the genius of Racine is the series of his tragedies, outside of which there is nothing. Why should we attribute to anyone the capacity to write another work?”

A writer’s capacity to create does not survive them. There can be no further work written by Racine, Proust, or Pratchett. With their deaths, their authorship and their capacity to create cease. What AI offers instead is the possibility of simulation: texts that resemble the surface features of another’s work without the intention, judgment, or lived perspective behind it.

And AI can imitate thought very well. But what kind of relationship to thinking does such imitation produce? When a system generates texts in the style of the already deceased, it detaches the works from the conditions that once gave them urgency and context. Racine wrote for a courtly audience shaped by absolutism; Proust wrote from within the fractures of modernity; Pratchett’s humor emerged from a late-twentieth-century moral sensibility. Their work responded to pressures, conflicts, and contingencies that were evolving in real time alongside them as writers.

To generate new texts in their likeness is to extract them from circumstance. What we get is preservation without development: work that is recognizable and polished but ultimately flat. In this sense, such texts are not only simulacra of authorship, but simulacra of thinking itself. This is why the distinction matters. Thinking is not the production of elegant language, but the act of responding to a world that can answer back. With AI-generated work, there is no risk of error, no cost to misunderstanding, no demand to revise one’s position. The exchange remains frictionless, and therefore consequence-free.

It is this frictionless abundance that brings me to Arthur Schopenhauer, a dog lover and perpetual grump. He laid the foundation for the development of existentialism, particularly around the difference between knowledge and understanding. Schopenhauer might well have celebrated the technical achievement of AI, especially its labor-saving abilities and promise to rid us of the mundanities of the world while organizing vast amounts of information. Used in the right way, such tools can save time, energy, resources, and maybe even lives.

But the admiration would have likely stopped there. “As the biggest library if it is in disorder is not as useful as a small but well-arranged one, so you may accumulate a vast amount of knowledge but it will be of far less value to you than a smaller amount if you have not thought it over for yourself; because only through ordering what you know by comparing every truth with every other truth can you take complete possession of your knowledge and get it into your power.”

For Schopenhauer, the organization of knowledge through the use of AI would have indeed been exciting, but he would not have celebrated it – because it is organized understanding that Schopenhauer sought. A perfectly indexed world of ideas, ready-made for consumption, would have sent him into apoplexy. “The thoughts of another that we have read are crumbs from another’s table, the cast-off clothes of an unfamiliar guest.”

If this is what Schopenhauer thought of books – written, revised, and delivered in hardback – what might have been his reaction to AI-generated text? Even at its most sophisticated, AI offers fluency without struggle, coherence without understanding. “Reading is merely a surrogate for thinking for yourself; it means letting someone else direct your thoughts.”

Thinking is what makes us. Not the accumulation of others’ ideas, not information, books, or a tapestry of facts, but the effort of forming judgment that is one’s own. To think requires a certain determination: the act of constantly collecting experience, of revising one’s position, of changing one’s mind, of defending one’s opinion.

So much is lost when we confuse the appearance of thinking with thinking itself. Systems that generate language are indeed impressive tools and can be used to support inquiry, but they cannot think on our behalf or reach personal conclusions for us. What is produced in such cases is a polished recombination of what has already been said. Creation and understanding require more.

The danger is not that machines will begin to think, but that we become so comfortable with letting them stand in for our own thinking that we mistake their output for our own understanding. What happens when intelligence is reduced to production? To mass production? Once thought is detached from will, experience, and the ability (and desire) to question, what is left may very well be convincing, but it is no longer alive.


© IE Insights.