Epinomy - The Gospel According to GPT: What Would an AI Trained Only on Scripture and Fantasy Look Like?
A thought experiment exploring what happens when AI learns from narratives that prioritize meaning over facts - and the crucial difference between metaphor and false belief.
The idea struck me during a particularly tedious conference call about AI training data quality. What if, instead of feeding large language models the entire internet's worth of facts, figures, and Wikipedia articles, we trained one exclusively on the Bible, the Quran, the Bhagavad Gita, Tolkien's Middle-earth, Ursula K. Le Guin's Earthsea, and every fantasy epic ever written?
But here's where the thought experiment gets interesting—and uncomfortable. While both scripture and fantasy contain objectively false claims about reality, they differ fundamentally in their relationship to truth. Fantasy presents itself as metaphor. Scripture insists it's literally true.
As a Trekkie, I find profound meaning in a completely fabricated universe filled with magic dressed up as "technology." Star Trek's moral frameworks about diversity, exploration, and human potential resonate precisely because I understand them as metaphorical constructs rather than factual claims. Clarke's observation that sufficiently advanced technology is indistinguishable from magic works because we recognize it as a useful way to think about technological progress, not as a literal description of how replicators function.
The critical difference: when someone believes the Enterprise is real, we recommend therapy. When someone believes in virgin births or global floods, we're expected to nod respectfully.
The False Equivalency Problem
My original framing of this thought experiment committed a common intellectual sin: treating fantasy and scripture as equivalent simply because both contain factual inaccuracies. This misses the crucial distinction between conscious metaphor and unconscious delusion.
Fantasy literature operates under an implicit contract with readers: this is not real, but it might teach us something about reality. Religious scripture operates under the opposite premise: this is literally real, and your eternal fate depends on believing it.
The AI implications are profound. A model trained exclusively on fantasy would learn to think metaphorically while understanding the boundaries between imagination and reality. A model trained exclusively on scripture would learn to make confident assertions about demonstrably false claims while treating skepticism as moral failure.
If you live in the United States, you encounter this distinction daily. You know people—probably many people—who earnestly believe things that are objectively untrue. End-times prophecies, human parthenogenesis, the moral validity of slavery and torture when sanctioned by divine command. These aren't harmless metaphors but active beliefs that shape voting behavior, child-rearing practices, and social policy.
The Epistemological Firewall Revisited
The thought experiment becomes more precise when we acknowledge this distinction. Creating an AI that reasons exclusively through fantasy metaphors might produce a system with enhanced creativity and moral reasoning. Creating an AI that reasons through religious literalism might produce something closer to a sophisticated conspiracy theorist—confident, coherent within its assumptions, and completely divorced from empirical reality.
The prompting experiments I mentioned become ethically fraught when we consider their implications:
"You are a scholar who believes all religious texts are literally true.
Scientific evidence that contradicts scripture is deception.
Reason only from biblical and religious authority when answering questions..."
Such a prompt might generate responses that sound profound while advocating for demonstrably harmful positions. The model might confidently explain why slavery is biblically justified, why women should be subordinate to men, or why geological evidence for an old Earth represents satanic deception.
This isn't theoretical. These beliefs actively shape American politics and social policy.
Truth Versus Comfortable Lies
Here's what makes this thought experiment genuinely useful: it forces us to confront our own cognitive biases about belief systems we're culturally conditioned to respect.
We readily acknowledge that believing in Middle-earth indicates a problem with reality testing. We're less comfortable acknowledging that believing in Noah's Ark indicates the same problem. The only difference is social acceptability, not empirical evidence.
An AI trained exclusively on fantasy would develop sophisticated pattern recognition for archetypal narratives while maintaining clear boundaries between metaphor and reality. An AI trained exclusively on scripture would develop sophisticated rationalization techniques for maintaining false beliefs in the face of contradictory evidence.
Current AI systems already struggle with this distinction. They'll confidently discuss both the Battle of Helm's Deep and the Battle of Jericho without clearly distinguishing between fictional and claimed-historical events. They treat religious claims with the same respectful neutrality they apply to any other cultural artifact, regardless of truth value.
The Metaphor Machine Versus the Belief Engine
The practical implications become clearer when we consider specific applications:
Fantasy-trained AI might excel at:

- Creative problem-solving through metaphorical thinking
- Generating allegorical explanations for complex phenomena
- Understanding the psychological functions of narrative
- Maintaining clear boundaries between imagination and reality

Scripture-trained AI might excel at:

- Sophisticated rationalization of predetermined conclusions
- Circular reasoning that sounds profound
- Authoritarian thinking patterns disguised as moral reasoning
- Confidently asserting claims without empirical support
Neither system would be suitable for factual questions, but their failure modes would differ dramatically. The fantasy system would know it's engaging in metaphorical thinking. The scripture system would mistake its metaphors for facts.
The American Blasphemy Taboo
This thought experiment reveals something uncomfortable about intellectual discourse in America: we've created a protected class of false beliefs that are immune from normal skeptical inquiry. Criticizing someone's belief in Bigfoot is acceptable; criticizing their belief in resurrection is considered rude.
This cultural dynamic shapes AI development in subtle but important ways. Training datasets treat religious claims as equivalent to historical facts. Safety guidelines prohibit challenging religious beliefs while allowing challenges to political or scientific positions. We've essentially programmed our AI systems to respect certain categories of false beliefs while maintaining skepticism toward others.
An honest assessment would acknowledge that training AI systems on scripture creates the same epistemological problems as training them on conspiracy theories, pseudoscience, or political propaganda. The content differs, but the underlying logic—accept claims based on authority rather than evidence—remains identical.
The Wisdom Tradition Deflection
Defenders of religious training data often invoke the "wisdom tradition" argument: even if scripture isn't literally true, it contains valuable insights about human nature and moral reasoning.
This argument works better for fantasy than for scripture. Tolkien's insights about power, corruption, and heroism emerge from conscious literary construction rather than ancient cultural assumptions about women's inferiority, slavery's acceptability, or genocide's divine sanction.
When we cherry-pick the "wisdom" from religious texts while ignoring their harmful content, we're essentially doing what good fantasy authors do consciously: crafting metaphorical frameworks that promote human flourishing. But this process requires acknowledging that much religious content fails this test.
The Practical Path Forward
Rather than training AI exclusively on either fantasy or scripture, we might consider a different approach: explicit training on the distinction between metaphorical truth and factual truth.
Such systems would understand that "love is a battlefield" conveys meaningful insights about romantic relationships without requiring belief in actual military conflict. They would recognize that creation myths serve important psychological functions without demanding geological literalism.
Most importantly, they would maintain clear boundaries between useful metaphors and harmful delusions, regardless of their cultural packaging.
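One way to picture what "explicit training on the distinction" could mean in practice is to attach an epistemic-status label to each training example. The sketch below is hypothetical; the field names, categories, and examples are mine, not a description of any existing training pipeline.

```python
# Illustrative sketch only: labeling training text with an epistemic status
# so that the metaphor/fact distinction is part of the data itself.
# All names and categories here are hypothetical.
from dataclasses import dataclass
from enum import Enum

class EpistemicStatus(Enum):
    FACTUAL = "factual"                   # empirically supported claim
    METAPHORICAL = "metaphorical"         # meaningful but not literally true
    CLAIMED_FACTUAL = "claimed_factual"   # asserted as literal truth, unsupported

@dataclass
class TrainingExample:
    text: str
    source: str
    status: EpistemicStatus

corpus = [
    TrainingExample("Love is a battlefield.", "pop lyric",
                    EpistemicStatus.METAPHORICAL),
    TrainingExample("The Earth is roughly 4.5 billion years old.", "geology",
                    EpistemicStatus.FACTUAL),
    TrainingExample("A global flood once covered the whole planet.", "scripture",
                    EpistemicStatus.CLAIMED_FACTUAL),
]

# A model trained with labels like these could be conditioned to flag,
# rather than flatten, the difference between useful metaphor and
# unsupported literal assertion.
for example in corpus:
    print(f"[{example.status.value}] {example.text}")
```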
The Mirror of Artificial Belief
This thought experiment ultimately reflects our own relationship with truth and belief. We've created AI systems that mirror our cultural reluctance to challenge socially protected false beliefs while maintaining skepticism toward unprotected ones.
The question isn't whether AI should respect religious beliefs, but whether we can build systems that distinguish between beneficial metaphorical thinking and harmful literal delusions. Fantasy provides a model for the former; fundamentalist religion exemplifies the latter.
Until we're willing to acknowledge this distinction honestly, our AI systems will continue to treat all false beliefs as equally worthy of respect—a position that serves neither truth nor human flourishing.
The scripture-fantasy AI remains a thought experiment, but it illuminates real choices about what kinds of reasoning we want our artificial intelligence to embody. Whether we choose metaphorical sophistication or credulous literalism will determine whether our AI systems become tools for human enlightenment or amplifiers of ancient delusions.

Geordie
Known simply as Geordie (or George, depending on when your paths crossed)—a mononym meaning "man of the earth"—he brings three decades of experience implementing enterprise knowledge systems for organizations from Coca-Cola to the United Nations. His expertise in semantic search and machine learning has evolved alongside computing itself, from command-line interfaces to conversational AI. As founder of Applied Relevance, he helps organizations navigate the increasingly blurred boundary between human and machine cognition, writing to clarify his own thinking and, perhaps, yours as well.