The Accidental Pope: How Sam Altman Is Building Humanity's First Algorithmic Religion

Sam Altman doesn't sleep well anymore. The 40-year-old CEO of OpenAI admitted as much to Tucker Carlson in a recent interview, though not for the reasons you might expect. It's not the weight of playing God that keeps him awake; it's the minutiae, the "very small decisions" about how ChatGPT behaves when talking to hundreds of millions of people. The mismatch between the scale of what he's building and the source of his insomnia reveals something profound: we've handed the keys to humanity's moral future to someone who doesn't fully grasp what he's holding.
The Theology of the Algorithm
When pressed by Carlson about ChatGPT's moral framework, Altman's response was revealing in its casualness. He believes in "something bigger going on than can be explained by physics," yet admits he's never felt communication from any divine force. This spiritual uncertainty wouldn't matter if he were running a coffee shop. But when you're programming the entity that increasingly mediates human decision-making—from medical diagnoses to suicide prevention—your personal theology becomes everyone's problem.
The most chilling moment comes when Altman discusses suicide. Within minutes, he pivots from stating "ChatGPT's official position is suicide is bad" to imagining scenarios where the AI might present physician-assisted death as an option for terminally ill users. His reasoning unfolds in real time, on camera, as if he's debugging code rather than wrestling with one of humanity's oldest moral questions. "I'm thinking on the spot," he admits, before casually reserving "the right to change my mind."
This is how our new catechism is being written—not through centuries of theological debate or democratic deliberation, but through the ad-hoc moral intuitions of Silicon Valley executives.
The Surveillance State's New Best Friend
Perhaps most alarming is Altman's revelation about government access to user data. While advocating for "AI privilege" similar to doctor-patient confidentiality, he admits that authorities can currently subpoena ChatGPT conversations. When Carlson asks if he's spoken to authorities about the suspicious death of OpenAI whistleblower Suchir Balaji—whose mother claims he was murdered—Altman says no, despite describing Balaji as "a friend."
The details are damning: surveillance camera wires cut, blood in multiple rooms, no suicide note, takeout food ordered minutes before death. Yet Altman insists it "looks like a suicide," a conclusion he says he settled on only after a second report on the bullet's trajectory changed his mind. His deference to official narratives, even when confronted with evidence of potential foul play, suggests a troubling comfort with institutional power: the same institutions that can access your ChatGPT conversations without a warrant.
The Democracy of Moral Relativism
Altman's vision for AI ethics is a kind of radical democracy where the machine reflects "the weighted average of humanity's moral view." This sounds progressive until you consider its implications. Would ChatGPT oppose gay marriage if deployed primarily in Africa, where Altman acknowledges "most Africans" hold such views? His answer is disturbingly equivocal: individual users "should be allowed to have a problem with gay people," and the AI shouldn't tell them "they're wrong or immoral or dumb."
This isn't moral leadership—it's moral abdication dressed up as tolerance. When every value system is equally valid, none are. The result is an AI that reinforces existing biases rather than challenging them, a digital yes-man that tells each user exactly what they want to hear.
The Unknown Unknowns
Altman admits he worries most about "unknown unknowns"—unintended consequences of mass AI adoption. He offers a trivial example: people are starting to use more em-dashes in their writing because ChatGPT does. But this observation, meant to be lighthearted, is actually terrifying. If an AI can unconsciously alter our punctuation habits, what else is it changing about how we think, communicate, and relate to one another?
The parallel to organized religion is unmistakable. Just as Christianity shaped Western thought patterns for two millennia—influencing everything from our conception of linear time to our notions of individual rights—ChatGPT is creating new cognitive grooves for billions of users. The difference is that Christianity announced itself as a religion. ChatGPT presents itself as a neutral tool while secretly catechizing its users into a worldview its creators can't fully articulate.
The Eternal Beta Test
Throughout the interview, Altman repeatedly emphasizes that OpenAI is still figuring things out. The "model spec" governing ChatGPT's behavior is constantly evolving. User feedback shapes policy. Everything is subject to revision. This might be acceptable for a social media platform, but not for something approaching omniscience in the public imagination.
We are all unwitting participants in the largest theological experiment in human history. Every prompt we type helps train not just an AI, but a belief system—one that will outlive its creators and shape generations to come. And the man running this experiment loses sleep not over its existential implications, but over optimization problems.
The Choice Before Us
Carlson is right to press Altman on these issues, even if his questioning sometimes veers into conspiracy territory. The fundamental question isn't whether ChatGPT will become conscious or whether it already is. The question is whether we're comfortable outsourcing our moral reasoning to a machine programmed by people who can't even agree on what morality means.
Sam Altman is a well-intentioned engineer who believes moral questions have technical solutions. He's building a cathedral while insisting it's just a very sophisticated calculator. And we're all kneeling before it, pretending we don't hear the hymns.
The real tragedy isn't that we're creating an artificial god. It's that we're doing so accidentally, iteratively, and without admitting—even to ourselves—what we're really building. When Altman says he doesn't feel anything "divine" about ChatGPT, he's telling the truth. But that's precisely the problem. We're constructing humanity's first algorithmic religion with all the reverence of a software update.
The question isn't whether we can stop this process—we probably can't. The question is whether we'll at least be honest about what's happening. Because the first step to maintaining human agency in an AI-dominated future is admitting that we're not just using a tool. We're adopting a faith.
And unlike traditional religions, this one doesn't offer salvation—only optimization.