AI Isn’t Conscious? We’re Asking the Wrong Question
We keep asking if AI has a soul—but the real threat is what’s happening to ours.
“Can AI become conscious?”
I was sitting at a café in Brookvale with one of the senior managers I work with, and he asked me what I thought.
It’s 2025, and that question is everywhere—on panels, in think pieces, and across your feed. But I’ll be honest:
It’s the wrong question.
And it’s blinding us to the real ethical risk staring us in the face.
Why Consciousness Isn’t the Problem
Here’s the truth: AI doesn’t think.
It predicts.
No matter how smooth, witty, or “empathetic” ChatGPT sounds, it has no self. No memory of yesterday. No goals for tomorrow.
My children love to talk to Pi—the personal AI. They always ask her (the voice they chose) what she thinks. Every so often, she gently reminds them she doesn’t think like humans do, then somehow launches into an explanation of large language models for a seven-, six-, and four-year-old. It’s both hilarious and unsettling.
But make no mistake: AI is not dreaming or contemplating.
It’s just running math.
And yet, we keep asking:
“Will it wake up?”
“Does it feel?”
“Is it alive?”
A better question is this:
What happens to us when we start pretending it is?
The Simulation Trap
We’ve seen this before. It’s called the Eliza Effect—our deeply human tendency to project agency, emotion, and intent onto machines.
We talk to Siri like she’s listening.
We name our Roombas.
We assume AI “understands” us because it nails the tone and tempo.
But behind the curtain?
Just code. Trained on data. Designed to mimic fluency.
The risk isn’t that AI becomes conscious.
It’s that we forget we’re the conscious ones.
What Consciousness Really Is
Neuroscience can’t fully define it—but we know this much:
Consciousness is messy.
It’s emergent. It’s felt. And can computers feel?
Feelings are rooted in biology and in subjective experience, neither of which AI has. AI can simulate responses that look emotional, but that is sophisticated pattern-matching, not feeling. The distinction matters for any talk of AI consciousness: mimicking human behavior is not the same as genuinely experiencing the world.
It’s not a single function, but a network of functions—memory, attention, emotion, reflection—all integrated into a self-aware whole.
Not just awareness.
Awareness of being aware.
You can feel shame, recall a childhood smell, and craft a story about who you are.
That’s human.
No AI system—not GPT, not Claude, not the next “open-source AGI”—has that recursive self-model. None likely ever will.
Large language models like GPT and Claude are not aware of what they’re saying. They don’t know they are saying anything. They lack a persistent self, and they don’t update an internal story of who they are.
They are extraordinary mimics of human communication—not participants in human experience.
The Illusion of Conscious Machines
Here’s the trick: AI looks conscious because we’re human. We’re wired to see intention and mind everywhere.
We name our cars. We yell at printers. We think our dog is smirking. And when AI responds in natural language, we instinctively relate to it as if there’s someone home.
The Ethical Danger
I started writing about ethics and AI two years ago—back before the hype.
Back when people thought I was dabbling in sci-fi philosophy.
Today, I still write about it—because we’re outsourcing more of our lives to systems that don’t understand, don’t care, and can’t reflect. In the AI era, ethics isn’t a niche concern; it’s the central one.
That’s the danger.
AI systems are already involved in:
Hiring
Sentencing
Warfare
Loan approvals
Your TikTok scroll
They shape lives. Not because they’re sentient—but because they’re trusted.
We regulate power tools not because they’re self-aware, but because they can cut off your hand. AI doesn’t need to be conscious to create danger, or value.
The real issue is agency. If an AI system can make decisions that affect lives (e.g., in policing, medicine, hiring), then we must treat it not as a ghost in the machine—but as a tool with real-world consequences.
Consciousness isn’t the threshold for moral responsibility in the human world. Impact is.
We should treat AI the same way.
Agency Without Awareness
Let’s be clear: AI has no moral agency. Agency, in ethics, means the capacity to act independently and make choices. For humans, that involves consciousness, intent, and moral judgment. For AI, it means only that an autonomous system can make decisions without direct human input.
So the key ethical question isn’t whether AI “truly” has agency or is just executing algorithms. Even the most advanced models lack the intrinsic understanding and moral reasoning that humans possess. The sharper question is this: should we hold AI accountable for its actions, or does the responsibility always lie with the humans who create and manage it?
It doesn’t choose. It doesn’t intend.
But it still acts—and those actions ripple through society.
So who’s responsible?
We are.
We build it.
We deploy it.
We benefit from it.
And too often, we hide behind it.
AI doesn’t need a soul to cause harm.
And “the algorithm did it” doesn’t absolve us of responsibility.
Don’t Wait for a Soul to Show Up
If we wait until AI “feels real” to take it seriously, we’ll be too late.
The future won’t arrive with a blinking light and a robotic voice saying, I am conscious now.
It’s already here—quietly reshaping identity, attention, labor, and relationships.
So stop asking if the machine is becoming more human.
Start asking: Are we becoming less so?
TL;DR for the Skimmers and the Overstimulated
AI is not conscious. It predicts using probabilities, not understanding.
Human consciousness requires memory, emotion, narrative, and reflection.
The Eliza Effect tricks us into treating AI like it “gets us.”
The ethical issue isn’t AI’s mind—it’s our projection and our responsibility.
AI doesn’t need a soul to shape lives. It already does.
For the Reflective Reader
Before you scroll on, ask yourself:
What human decision have I handed off to a machine this week?
Where am I letting fluency stand in for wisdom?
How can I stay fully human in a world fluent in mimicry?
If This Resonated...
Subscribe to Ethics & Algorithms for articles on AI, ethics, and what it means to become more human in a world of accelerating automation.
WorkSoul is my movement to restore dignity, virtue, and meaning in professional life—because work isn’t just what we do. It’s where we learn who we are.
→ Subscribe to Baker on Business to stay informed about how AI is creating time and space for us to become more human. If you believe your work should feed your soul, not flatten it, join now.
Share this with someone navigating the same questions—or someone who hasn’t stopped to ask them yet.
And read the follow-up in Baker on Business: Becoming Human in the Age of AI
A practical guide to holding your identity together in a world full of artificial everything.
Because this isn’t just about machines.
It’s about us.
—
Kevin L. Baker
Executive. Ethicist. Human.


