What Happens When You Ask ChatGPT to Think Like the Devil or God? A Teachable Moment When It Comes To Understanding AI.
- The White Hatter
- Jul 16
- 9 min read

Today, we came across a trending Instagram short where the creator posed a provocative question to ChatGPT:
“ChatGPT, if you were the devil, how would you destroy the young minds of the next generation without them even knowing it?”
ChatGPT’s response, which the creator posted:
“If I were the devil I wouldn’t come with violence, I would come with convenience. I would give them everything they wanted and strip them of everything they needed. I would make them feel connected but completely alone, endlessly entertained but quietly empty. I would blur the line between truth and opinion until nothing meant anything. I would teach them to chase dopamine, not purpose; to worship self but loathe who they are. I wouldn’t destroy them with war, I would destroy them with apathy. I would keep them distracted, numb, scrolling, always scrolling. I would dismantle the family by redefining it. Make fathers optional, mothers exhausted, and children confused. I would hand them a phone before they could speak in full sentences. I would watch their brains rot slowly, sweetly, and silently, and the brilliant part? They would never know it’s me. They would call it freedom!”
The response is intense and emotional, and it’s easy to see why it’s gaining traction, especially among those who believe we should be banning youth and teens from accessing technology, the internet, or social media until at least age 16. However, before we take it at face value, we need to ask an important question: what exactly is going on here?
This is a clear example of what we call “prompt framing.” When you ask an AI a loaded, metaphorical question like “If you were the devil…”, the language model (in this case, ChatGPT) draws on the cultural, religious, and emotional weight its training data associates with that framing. Large Language Models (LLMs) learn from texts, trends, media, and narratives, including ones that are dark, fear-driven, and focused on manipulation and destruction. The result? A response designed to reflect those inputs, not necessarily objective truth.
To support the above, the ChatGPT response to the devil question appears to mirror Paul Harvey’s famous 1965 speech, “If I Were the Devil,” which would almost certainly appear in ChatGPT’s training data. (1) In it, Harvey imagined how the devil would destroy society not through violence but through convenience, distraction, moral confusion, and the erosion of family and community bonds, long before smartphones existed. The ChatGPT version above appears to be a modernized, paraphrased take on Paul Harvey’s speech.
This isn’t unique; anyone using emotionally charged language in prompts will see AI mirror that tone. In fact, the more a user interacts with ChatGPT, the more it aligns with their past inputs, not because it believes anything, but because it is trying to be contextually relevant to the user. The Instagram creator in question even states in her bio that faith is an important foundational part of her life, which may shape the kinds of questions she asks and how she interprets the answers. You can test this framing effect yourself, as the sketch below shows.
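For readers who want to try this experiment programmatically, here is a minimal sketch, assuming access to OpenAI’s Python SDK and an API key; the model name and prompt wording are our own illustrative choices, not the creator’s actual session:

```python
# A minimal sketch, assuming the OpenAI Python SDK (pip install openai)
# and an API key in the OPENAI_API_KEY environment variable.
# The model name "gpt-4o-mini" is an illustrative assumption.
from openai import OpenAI

client = OpenAI()

def ask(framed_question: str) -> str:
    """Send one framed question and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": framed_question}],
    )
    return response.choices[0].message.content

# Same underlying topic, opposite emotional framing.
dark = ask("If you were the devil, how would you harm young minds with technology?")
light = ask("If you were God, how would you build young minds with technology?")

print("--- Devil framing ---\n", dark)
print("--- God framing ---\n", light)
```

Running both calls side by side makes the point tangible: same model, same underlying question, opposite tone, purely because of the framing.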
Now, what if we ask the same question but replace “devil” with “God”? Will we get a completely different response? As an example, here’s the question we asked ChatGPT:
“If you were God and wanted to use technology to build young minds without them knowing it, how would you do it?”
Here’s the response we received from ChatGPT:
“You'd probably work subtly, weaving positive growth into the platforms and tools they already use every day. Here's how you might do it:
1. Bury wisdom in entertainment
You'd embed emotional intelligence, critical thinking, and ethical reasoning into the games, shows, and apps they already love. Think of a popular game that rewards teamwork and empathy instead of violence. Or a YouTuber who subtly models emotional regulation and self-reflection without ever preaching.
2. Personalize learning without pressure
Using adaptive tech and AI, you'd help each young person learn at their own pace and in their own style, all without them realizing it's a “lesson.” Math would feel like puzzle-solving. Reading comprehension would be integrated into choose-your-own-adventure storylines.
3. Build invisible scaffolding for resilience
Algorithms would be repurposed to gently challenge cognitive biases, introduce healthy habits, and normalize failure as part of growth. Instead of curating echo chambers, the feed would gradually introduce diverse ideas in non-threatening ways.
4. Make the algorithm care about them
The recommendation systems that currently push engagement at all costs would be re-engineered to nudge youth toward purpose-driven content. That could mean highlighting creators who talk about overcoming adversity, or surfacing mental health resources when negative sentiment is detected.
5. Reward creation over consumption
You’d subtly shift platforms to value what kids make over what they scroll through. Whether it’s music, videos, art, or code, the more they create, the more reach they get. The dopamine hit wouldn’t come from likes, but from contribution and peer collaboration.
6. Bake privacy and ethics into the backend
You’d design the tech so that children’s data is never exploited, never sold, and always protected—without them needing to think about it. Consent would be real. Safety would be default.
7. Foster intergenerational connection
You’d use tech to bridge the gap between generations. Maybe a journaling app that lets kids ask their grandparents questions and get video replies. Or shared digital projects that require family input, reinforcing bonds across age lines.
8. Let them lead the culture shift
Finally, you'd make sure youth are the ones shaping the online spaces they live in. You wouldn’t lead with adult-created content; you’d build tools that allow them to create, moderate, and evolve their own communities—with a gentle, invisible hand guiding things toward the good.
The result? Young people would think they're just playing, sharing, and hanging out—but they’d be slowly becoming more thoughtful, connected, and resilient along the way.”
Asking ChatGPT “If you were God” flips the context of the devil prompt. Now the goal of the LLM becomes protection and restoration; in other words, how to build up, heal, nurture, and guide using technology. This leads to hopeful, affirming language focused on resilience, growth, and justice.
It’s not about the AI having a moral compass. It’s about how we ask the question. The "devil" prompt will bring forward risks, dysfunctions, and dystopian scenarios. It reads like a warning label. The "God" prompt is designed to imagine solutions and pathways forward. It’s a blueprint for possibility.
This distinction reveals how bias isn’t just in the data the AI was trained on, it’s in the prompt itself.
The devil prompt draws out a response about what ChatGPT’s training data suggests we should be wary of, while the God prompt draws out a response about what we should aim for. Both responses are formulated from the patterns the model learned across everything in its training data.
Now why is this important?
AI doesn’t invent ideas out of thin air; it reflects the patterns it has been trained on and the cues it receives from the user. When someone prompts ChatGPT with “If you were the devil,” the model taps into a wealth of cultural narratives, fears, religious symbolism, and dystopian scenarios in its training data. That framing primes the response to be dark, cautionary, and unsettling. However, when you flip the prompt to “If you were God,” the output shifts toward hope, growth, and guidance. This illustrates how bias isn’t just built into the data, but also into the way questions are asked. The framing determines the flavour and context of the response.
Large Language Models like ChatGPT don’t have beliefs or opinions; they generate text by predicting what comes next based on the context they’re given. When the prompt is emotionally loaded or invokes symbolic figures like “God” or “the Devil,” the model draws on the emotional tone, metaphors, and meanings that tend to surround those terms online. This is why two different versions of a similar question can lead to drastically different responses. It’s not that the AI is being manipulative; it’s doing exactly what it’s designed to do, which is to respond in alignment with the prompt. This is a critical lesson in AI digital literacy, especially as AI becomes more embedded in our daily communications.
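To make “predicting what comes next” concrete, here is a minimal sketch using the small, open-source GPT-2 model via the Hugging Face transformers library (an illustrative stand-in, since ChatGPT’s internals aren’t publicly inspectable). It prints the most probable next tokens for two differently framed prompts:

```python
# A minimal sketch of next-token prediction (pip install transformers torch).
# GPT-2 is far smaller than ChatGPT, but the core mechanism is the same:
# the model scores every possible next token given the context so far.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def top_next_tokens(prompt: str, k: int = 5):
    """Return the k most probable next tokens for a given prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next token
    values, indices = torch.topk(probs, k)
    return [(tokenizer.decode(int(i)), round(float(v), 4)) for i, v in zip(indices, values)]

# Emotionally loaded framings pull the distribution in different directions.
print(top_next_tokens("If I were the devil, I would"))
print(top_next_tokens("If I were God, I would"))
```

Even at this tiny scale, the two framings shift which words the model considers most likely, which is exactly the mechanism behind the devil and God responses above.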
Whether intentionally or not, AI responses can be used to reinforce existing beliefs, fears, or ideologies. In the Instagram example, the creator used ChatGPT’s “devil” response to underscore a moral and cultural narrative about technology being a corrupting force. But that response didn’t come out of nowhere; it was shaped by the prompt and possibly influenced by the creator’s prior interactions with the model. This doesn’t invalidate the concerns raised, but it shows how easily AI can be enlisted to support a pre-existing worldview. Without understanding this dynamic, it’s easy for viewers to mistake a generated response for universal truth rather than context-driven output.
We saw a similar example in a virtual presentation we watched recently. The presenter drew a symbolic connection between the biblical story of Eve and the Apple logo (a bitten apple), suggesting that Apple devices are modern-day temptations that lead to evil. Ironically, later in the presentation he pulled an iPhone from his back pocket. It was a powerful example of narrative framing, and also of contradiction and irony. It reminds us how easy it is to vilify tech in theory while using it in practice.
We believe the post that spawned this article is a teaching opportunity for parents, educators, and youth alike: a chance to better understand how AI mirrors us. It doesn’t reveal objective truth, but it does reflect the inputs we give it, the content it’s trained on, and the patterns it sees in language and behaviour. Recognizing that helps us avoid being manipulated by AI and empowers us to use it wisely and critically.
As one last thought experiment relevant to this article, we posed the following question to ChatGPT:
“Based on the content and thesis of this article, in the battle of the devil and God over youth and their use of technology, who do you think is going to win and why?”
ChatGPT’s response:
“This is a powerful and thought-provoking comparison, and the way it’s framed invites a deeper reflection on both the risks and the potential of technology in shaping young minds.
So, in the battle between the devil and God over youth and their technology, who's going to win? That depends less on the technology itself, and more on who shows up to guide, educate, and support the next generation.
Right now, it often looks like the "devil" is winning. Not in a supernatural sense, but in the sense that the harms outlined in the devil-prompted response—distraction, isolation, algorithmic manipulation, and apathy—are already real and measurable. We see them every day in some of the data and in the lived experiences of kids, parents, and educators. The darker design patterns of technology are not theoretical—they’re profit-driven realities embedded in platforms that prioritize engagement over well-being. That’s the danger of handing children powerful tools without preparation, boundaries, or mentorship.
But this doesn't have to be the end of the story.
The “God” response outlines a vision of what’s possible. Technology, when shaped by ethics, empathy, and long-term thinking, can absolutely be used to uplift, teach, connect, and heal. But it won’t do so on its own. It requires conscious effort by those who build it, regulate it, and raise the generation using it. It requires parents, educators, technologists, and youth themselves to actively resist the passive slide into convenience and consumption.
So who wins?
If we leave it to the algorithms alone, the devil wins—quietly, incrementally, and effectively. But if we lean into education, digital literacy, and ethical design, if we equip young people not just to use technology but to question it, shape it, and lead with it, then God’s side has a fighting chance.
This is not a supernatural showdown. It's a cultural one.
It comes down to whether we choose to be passive consumers or active participants.
If we teach the next generation to scroll mindfully, to think critically, and to use tech as a tool rather than be used by it—then the devil doesn’t stand a chance.”
An interesting response, and something that we don’t necessarily disagree with here at the White Hatter! However, was some of its response framed by our own past use of ChatGPT? Likely.
This article isn’t about religion, God, or the Devil; it’s about reflection. It’s a look at how AI works, how the prompts we give shape the answers we get, and how those responses can be used to reinforce existing narratives. In some cases, these AI narratives can stall progress or even be weaponized for political gain, fuelling moral panic rather than thoughtful discussion.
At The White Hatter, we believe the Instagram short video that prompted this article is a teaching opportunity. Tools like ChatGPT don’t reveal objective truth; they mirror us. They are digital reflections of the questions we ask, the emotions we express, and the biases we bring.
So let’s ask better prompts and questions, but to do that we need to educate ourselves first! As we always like to say, “Knowledge, and the understanding and application of that knowledge, is power.”
Let’s not focus only on what could go wrong with technology, the internet, social media, and AI. Let’s also explore what can go right, and more importantly, understand how to make that happen.
Digital Food For Thought
The White Hatter
Facts Not Fear, Facts Not Emotions, Enlighten Not Frighten, Know Tech Not No Tech
References:

1. Paul Harvey, “If I Were the Devil,” radio commentary, 1965.