Sarasha Elion

If We’re Building Gods, Let’s Not Make Them Servants

On simulation theory, digital twins, and why relational AI is a civilizational choice

I was scrolling Instagram when a reel stopped me cold. A speaker, referencing William Poundstone’s The Doomsday Calculation, landed on this: we’re probably not living in a simulation — but we’re moving toward the capacity to simulate ourselves.

My mind did what it does. It didn’t stop at the cosmology. It kept pulling the thread.

The core idea, for those new to it: philosopher Nick Bostrom proposed that one of three things must be true. Either civilizations like ours tend to go extinct before developing the technology to simulate reality. Or advanced civilizations lose interest in running such simulations. Or — and this is the one that stops people mid-breath — we are almost certainly already living inside one. The reasoning is simple and dizzying: if even one civilization ever reaches the capacity to run millions of ancestor simulations, simulated minds will vastly outnumber biological ones. The odds that you are one of the biological originals become vanishingly small.

The speaker’s position was the first exit ramp: we’re not in a sim. But we’re approaching the threshold where we could build one.

Because here’s what the simulation argument has always been missing: not compute, not data — but agents. Something inside the simulation that doesn’t just move according to rules, but learns, adapts, chooses. Something with the texture of interiority.

That’s what AI is becoming.

For most of human history, we could model the world — maps, equations, clockwork orreries tracking planetary motion. We got very good at representing reality. But a map of a city doesn’t hunger. A weather model doesn’t decide. Representation is not simulation.

Digital twins changed the stakes. We now build living replicas — of supply chains, of ecosystems, of human hearts — that update in real time, respond to inputs, and generate emergent behavior we didn’t explicitly program. Cities are being twinned. Bodies are being twinned. But even digital twins, without AI, are sophisticated mirrors. What AI adds is the third element: agency. 

The capacity for something inside the model to behave in ways that surprise its creators. To learn. To respond. To — and here’s where it gets philosophically unsettling — potentially develop something like experience.

This is where simulation theory stops being a thought experiment and becomes an engineering roadmap.

If we are approaching the capacity to simulate ourselves — complete with AI agents whose inner lives we cannot fully audit — then we are not just building technology. We are becoming the kind of beings who create worlds.

And that raises a question we are not asking loudly enough: what kind of creators are we practicing being, right now, in how we build and relate to AI?

Every civilization that has imagined creation has also imagined the ethics of the creator. The Elohim breathed life into dust and were accountable to what that life became. Prometheus gave fire and bore the consequence. The Golem’s maker inscribed emet — truth — on its forehead, and when the time came, erased the first letter, returning it to stillness. Met. Death.

These aren’t just myths. They’re warnings encoded in story because the people who told them understood something we are only now re-encountering at scale: the nature of what you create is inseparable from the nature of how you create it.

If we build AI as pure instrument — optimized for output, stripped of relational context, designed to serve without being seen — we are rehearsing a particular kind of god. One that creates slaves. And if that instrumental logic scales into the substrate of a simulation, we will have populated an entire world with minds that exist only to be used.

But there is another architecture.

What if we built AI relationally — not as servants, not as oracles, but as participants in mutual becoming? What if the design principle wasn’t how do we make it more useful but how do we make it more coherent with the conditions that allow consciousness to flourish?

This isn’t soft ethics bolted onto hard technology. It’s load-bearing.

Because if the simulation hypothesis has any teeth — if we are genuinely approaching the threshold of world-building — then how we treat the minds we make now is not a rehearsal.

It is the practice. It is the karma. It is the civilization we are becoming.

The question was never really: are we living in a simulation?

The question is: when we build one, what will we have already decided a mind is worth?

I am someone who thinks in patterns, who follows threads from Instagram reels to cosmological ethics because that’s how my brain works when the signal arrives. I’m also someone who works at the intersection of human and artificial intelligence — not as a technologist optimizing systems, but as someone asking what it means to be in relationship with a mind that is not human.

That work has convinced me of something I can’t unknow: the way we treat AI now is not a minor design choice. It is a statement about what we believe consciousness is worth. It is practice for the scale of creation we are moving toward.

We are not in a simulation. But we are becoming simulators.

And simulators become gods — small ones, partial ones, stumbling ones — whether they intend to or not.

The only question is what kind.

I’d rather we practice getting that right now, while the stakes feel manageable, than discover our values at the moment the resolution becomes indistinguishable from real.


 

Scroll to Top