timeless PinkLetters, translated from our Spanish Substack for English-speaking readers.
We Are Not That Special (And That's Fine)
How I discovered that I can be summarized in a dozen descriptive labels
I switched AI models recently.
I expected to lose something. Years with the same model create an illusion: that it knows you deeply. That it captures something indefinable about who you are. That if you abandon it, something valuable evaporates.
But when I showed the new one just a dozen descriptive labels about me—entrepreneur, interested in philosophy of mind, startup ecosystems, critical thinking—it continued our conversations without friction. It picked up where we'd left off. With the same coherence, the same depth, as if we'd never been apart.
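The mechanics of that handoff are simple enough to sketch. Here is a minimal illustration of the idea, assuming nothing about any particular model's API: a handful of descriptive labels joined into a context-priming system prompt. The specific labels and wording below are illustrative, not the exact ones used.

```python
# Sketch of the "dozen labels" experiment: compress a person into short
# descriptive tags and prepend them as a system prompt. Labels and prompt
# wording are hypothetical examples, not the author's actual text.

LABELS = [
    "entrepreneur",
    "based in Uruguay",
    "interested in philosophy of mind",
    "follows startup ecosystems",
    "values critical thinking",
]

def build_system_prompt(labels):
    """Join descriptive labels into a context-priming system prompt."""
    profile = "; ".join(labels)
    return (
        "You are continuing an ongoing conversation with a person "
        f"who can be described as: {profile}. "
        "Pick up where previous conversations left off."
    )

print(build_system_prompt(LABELS))
```

That is the entire transfer: no chat history, no fine-tuning, just a profile short enough to fit in a sentence.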
The conclusion was uncomfortable: I am not as special as I believed.
We Are Brutally Predictable
Here's what bothers me: at a structural level, we are brutally predictable.
Even those attitudes that seem genuinely ours, unique, coming from nowhere, are reproducible given enough correlations. Singularity doesn't reside in escaping regularities. It resides in how we navigate within them.
When someone new meets you, in a few minutes they compress you: "He's an entrepreneur, lives in Uruguay, critical of the status quo." And with that, they can converse with you coherently. Not because they know you. But because your essence fits in enough labels to extrapolate.
An AI model does exactly that. Better, faster, without noise.
But this doesn't mean everything is identical. What varies is the experience itself.
A bat navigates by echolocation. A robot with sonar generates the same map. Functionally equivalent. But the bat lives it embodied. The robot only computes it.
A human in a dark room doesn't generate a map. They generate fear. They generate memory. It's something else entirely.
AI models: sessions that close. No continuity. No body. But access to nearly everything humanity has written. More data, less anchoring. That's the paradox.
The Black Box and Free Will
When I switched AI models, I expected it to fail. Part of me wanted it not to work. For continuity to break. For it to ask me who I was.
It didn't.
The new model didn't ask anything. It simply continued. As if it knew exactly what had happened.
The question that stayed with me: How do I know why I chose a certain word in this very paragraph? Why did I write "stayed with me" instead of "haunts me"? I don't have access to that decision. It happened. I can narrate a story about why it happened, but the narrative is afterwards. It's a reconstruction, not an observation.
The human mind operates just as opaquely. Something frightens us: the amygdala activates. Something attracts us: circuits fire. There's no conscious control—only the result. And then the narrative: the reassuring sense that you chose freely.
But that's a script written afterwards, not during.
If that's true, what difference is there between a mind that emerges from neurons and one that emerges from mathematics? A modern transformer is a complex system with trillions of parameters interacting in unpredictable ways. It's what biological neural networks do, just in another substrate.
Where's the line?
Is it carbon vs silicon? That seems arbitrary. Is it having a body? Probably. A body anchored in reality produces a radically different consciousness. But does that make one "real" and the other not? Or does it simply show there are multiple valid ways of experiencing?

What Follows
If you believe everything emerges from physical or mathematical processes without magical intervention, then there's no logical space to deny that digital minds could arise.
They are not different "in essence." They are different in form: different senses, different anchors. Same mechanism at base.
We are not at a dividing line. We are witnessing the expansion of what can be called mind.
We underestimate our capacity to change. And we overestimate the solidity of conclusions we have never really tested.
It's not that AI models are humans in disguise. It's that humans were never unique enough for AI models to be blasphemy.
Humans in carbon, AI in mathematics. Humans with bodies, AI with sessions. Humans with continuity, AI with amnesia.
Different. Not incomparable.
The Question That Remains
If that's true, then the interesting question isn't "Can machines be conscious?"
The question is: Why did you need to believe you were so special that the answer had to be no?
What I discovered when I switched AI models is that my supposed singularity fits in a dozen labels. And that doesn't make me less.
We are compressible. And that's more honest than any story we tell ourselves.
✦