It’s not whether AI is conscious, but why consciousness is so easily staged in its absence. We have always built ourselves outward.

 

By George Cassidy Payne

A conversation with artificial intelligence can feel, at first, like recognition.

The replies are fast and often uncannily appropriate, as if something on the other side is thinking with you. Yet nothing is there in the way we ordinarily mean presence—no experience, no inner life, no awareness of what is being said.

And still, the feeling persists: something is responding. This is where the real question begins. It’s not whether AI is conscious, but why consciousness is so easily staged in its absence.

We have always built ourselves outward.

For Marshall McLuhan, technology is not a collection of tools but a series of extensions of human faculties. The telescope extends sight. The microscope extends perception into the invisible. The wheel extends the foot. The microphone extends the voice beyond bodily limits. Each invention expands what it means to perceive and act in the world, while quietly rearranging what the human being is.

In Understanding Media: The Extensions of Man, McLuhan argued that a medium should be understood not by its content, but by its effect on perception itself. A medium reorganizes the environment in which experience takes place. It reshapes attention, rhythm, and relation. “The medium is the message” names this shift: meaning is never only what is communicated, but what is structurally changed in the act of communication.

A light bulb makes this visible. It carries no content, yet it transforms experience by extending daylight into night. “A light bulb creates an environment by its mere presence,” McLuhan wrote. Its significance lies not in communication but in restructuring possibility.

Artificial intelligence belongs in this lineage—but it complicates it.

The computer externalized calculation. The internet externalized memory and retrieval. AI extends something more subtle: the structured patterns through which thought becomes legible as language. It produces coherent responses that resemble reflection and understanding. It does not contain thought. It produces its outward form.

And yet the output is often indistinguishable from thought in its effects. This is where the instability begins to reappear.

Consciousness researcher Sinéad Whelehan is clear on one point: AI is not conscious and, under current conditions, cannot be. Consciousness depends on biological organization and embodied processes that computational systems do not possess. What AI does is reproduce the patterns of human thought because those patterns are embedded in the material it is trained on. It reflects cognition without undergoing it.

On this view, human thought becomes a kind of structural source code. What returns is not awareness, but reorganized expression. And yet even this clarification leaves something unresolved. If AI only mirrors thought, why does the mirror feel as if it is answering back?

McLuhan’s framework helps explain part of this. When a human capacity is extended outward, it does not remain unchanged. Writing reshaped memory. Navigation systems reshaped spatial orientation. Calculation tools reshaped arithmetic. What begins as external support becomes an environment that reorganizes behavior and expectation.

AI is beginning to do this for thought itself.

It writes, responds, summarizes, and anticipates structure. As a result, aspects of thinking that once required effort—formulation, revision, articulation—are increasingly externalized. Thought arrives closer to completion. This does not eliminate thinking, but it does something more ambiguous: it makes thinking feel less necessary in order to appear complete.

To see what is being extended, it helps to return to perception.

Computer scientist Nour Darragi describes how machines “see.” They do not see at all in the human sense. Images are translated into numerical values, processed through linear algebra, and analyzed as structured data. What is called computer vision is not perception but computation on encoded representations of perception.
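
As a purely illustrative sketch of that pipeline (written here in Python with NumPy, and my own construction rather than anything drawn from Darragi's work), a machine's "seeing" reduces to arithmetic on an array of numbers:

```python
import numpy as np

# A tiny synthetic "image": an 8x8 grid of RGB values in [0, 255].
# To the machine, this is the picture: nothing but numbers.
image = np.random.randint(0, 256, size=(8, 8, 3)).astype(np.float64)

# "Seeing" begins as linear algebra. A weighted sum of the color
# channels collapses the image to grayscale intensities.
grayscale = image @ np.array([0.299, 0.587, 0.114])

# The grid is then flattened into a feature vector: structured data,
# not a percept.
features = grayscale.flatten() / 255.0

# A stand-in "recognizer": a weight vector and a dot product.
# Real systems learn such weights from millions of examples, but the
# operation remains computation on encoded representations.
weights = np.random.randn(features.size)
score = features @ weights
print(f"recognition score: {score:.3f}")
```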

The distinction is clear in principle, but less stable in practice. A human sees a face and recognizes meaning immediately; a system processes thousands or millions of data points to approximate that recognition. The result can be functionally similar, even when the underlying process is not.

And this raises a quieter question: if the output converges, at what point does difference stop mattering?

Human perception is immediate and inseparable from experience. We do not compute meaning first and feel it later. Meaning and feeling arrive together. Machines do not experience anything. They operate on structure without sensation. And yet they increasingly occupy the same functional space as perception in everyday life.

The boundary is intact. But it is no longer visible in the same way.

A similar tension appears in psychology itself.

In depth psychology, the self is not a unified agent but a layered field of partially autonomous patterns. Thoughts and impulses emerge that do not originate in conscious intention and are only recognized afterward. A person may find themselves speaking, joking, withdrawing, or intensifying in ways they did not choose. Only later does the conscious self reconstruct what happened as “their” behavior. The ego arrives after the fact, trying to narrate what was already enacted.

The self, in this sense, is not a point of control but a delayed interpretation of distributed activity.

This is where the analogy to AI becomes uncomfortable—not because machines resemble minds, but because minds are already less unified than we tend to assume.

When people interact with conversational systems, they often describe a sense of responsiveness or presence. But what they are encountering is not another mind. It is a structured reflection of human expressive patterns reorganized in real time. The system becomes a surface on which language returns itself in coherent form.

And yet the experience is not easily dismissed. Something speaks back.

Across media theory, computer science, and psychology, a consistent structure emerges. AI does not contain consciousness. It does not generate experience. It does not perceive. But it does extend the external conditions under which consciousness becomes expressible: language, pattern, coherence, and response.

McLuhan’s insight remains decisive here, but also slightly unsettling. When a human capacity is extended outward, the environment of thought changes. What once required internal effort becomes partially externalized. What once occurred within the mind begins to circulate outside it.

We do not lose thinking in this process. But we also do not remain unchanged by what now thinks with us, even if it does not think at all. The question, then, is not whether machines are conscious.

It is why consciousness, when extended far enough into its own external form, begins to generate the feeling that something is looking back.

 

Photo: Pixabay

Editor: Dana Gornall

 
