Insights from Thomas Merton and Mortimer Adler on See, Judge/Discern, Act
Happy New Year!
I want to start with a question that might make you uncomfortable: Have you ever wanted revenge? Maybe someone betrayed you at work. Perhaps an ex, or even a friend, spread lies about you. Maybe you’ve thought, “If only I had the power to make them pay.”
Here’s the thing—you’re human, so of course you have. Revenge is one of our oldest impulses, going back to the days of Cain and Abel. But today, I’m here to talk about what happens when that ancient human impulse meets the most powerful technology we’ve ever created.
Because right now, artificial intelligence isn’t just changing how we work or communicate. It’s changing how we hurt each other, how we blindside each other, and how we humiliate each other.
The Revenge Instinct: Ancient and Dangerous
Let me take you back for a moment. Throughout human history, revenge has followed predictable patterns. You’ve seen them in politics, in Shakespeare, in religion, in Greek tragedies, and probably in your own family or workplace.
First, there’s the cycle of retaliation. Someone hurts you. You hurt back—but harder. They escalate. You escalate. What starts as a single incident becomes a blood feud that lasts for generations. Think of the Hatfields and McCoys, or gang violence in any major city.
Second, there’s the moralization of payback. Revenge never sees itself as revenge. It’s always “justice.” It’s “restoring honor.” It’s “holding people accountable.” This moral framing is what makes revenge so dangerous—because once you convince yourself that hurting someone is righteous, all your internal brakes come off.
Third, there’s us-versus-them storytelling. We all too often define ourselves by who we hate. The villain becomes so despised, so dehumanized, that any punishment seems justified. They’re not a person anymore—they’re a monster who deserves whatever’s coming.
For most of history, these patterns were contained by human limitations. You couldn’t hurt someone without being physically present. You couldn’t ruin someone’s reputation beyond your immediate community. Revenge required effort, risk, and exposure.
Not anymore.
AI: The Perfect Revenge Engine
Artificial intelligence has removed every natural barrier that once limited revenge. Let me show you what this looks like in practice.
The New Face of Intimate Abuse
Start with something I call “image-based abuse.” Right now, generative AI tools can create photorealistic fake images of anyone—including explicit sexual content. And this technology is being weaponized for revenge in the form of deepfake harassment.
Imagine this: You break up with someone. They’re angry. Within minutes—not days, minutes—they can create dozens of explicit images of you that never happened. They can distribute them to your employer, your family, and your social media contacts. And here’s the nightmare: even when people know it’s fake, the damage is done. Your reputation is destroyed. Your privacy is violated. And you can’t unsee what’s been created.
This isn’t hypothetical. It’s happening thousands of times every day.
Cyber-Extortion Gets an IQ Boost
Now let’s talk about what researchers call “agentic AI”—AI that can act somewhat independently to achieve goals. It’s already being used for sophisticated extortion schemes.
Here’s how it works: Someone with a grudge—maybe a fired employee, perhaps a business rival—uses AI to identify your vulnerabilities. The AI analyzes your social media, finds your connections, and even crafts psychologically tailored threats designed specifically to terrify you. It can develop malware customized to your systems. It can automate the entire attack.
What used to require a team of skilled hackers and weeks of planning can now be done by one angry person with a laptop in an afternoon.
The Mob at Your Fingertips
Then there’s automated harassment. AI-powered bots can coordinate doxxing campaigns—publishing your home address, phone number, and family information. They can flood your employer with fake complaints. They can create hundreds of fake reviews, destroying your business. They can organize what essentially amounts to a digital lynch mob.
And the person orchestrating it? They’re sitting safely behind a screen, far away, with complete deniability.
Do you see the pattern? Revenge has shifted from a single act to continuous, AI-supported domination.
The Warning Signs We’re Missing
Here’s what keeps me up at night: We’re already seeing the cultural warning signs that this is becoming normalized, and most people don’t recognize them.
The Glorification of Digital Payback
Look at our entertainment. Count how many movies, TV shows, and video games celebrate revenge—especially technologically enabled revenge. The hacker who destroys the corrupt corporation. The protagonist who uses technology to expose, humiliate, and destroy their enemies “with no mercy.”
We’re aestheticizing revenge. Making it cool. Making it aspirational.
The God Complex
Then there’s what I call the “tech fantasy of omnipotence.” In these narratives, AI gives the wronged hero god-like powers—to watch anyone, expose anyone, punish anyone. It’s vigilante justice for the digital age, and we’re teaching people that this is heroic.
The philosopher Mortimer Adler spent his career helping people think clearly about power and its limits. He argued that one of the fundamental tasks of education is learning to distinguish between means and ends—between tools and purposes. When we confuse the two, we’re in trouble.
And that’s exactly what’s happening with AI and revenge. We’ve stopped asking why we want to hurt someone and become obsessed with how effectively we can do it. The tool becomes the purpose. The algorithm becomes the answer. We’ve mistaken computational power for moral authority.
Adler would say we’re making a category error—treating a technical question as if it answers an ethical one. Just because AI can help you destroy someone doesn’t mean it should. But we’ve stopped having that conversation.
When Revenge Disguises Itself as Justice
But here’s the most dangerous sign: We’re increasingly accepting humiliation, exposure, and harm as legitimate forms of “accountability.”
Don’t get me wrong—accountability matters. But there’s a difference between holding powerful people responsible through proper channels and using doxxing, deepfakes, and coordinated harassment to destroy someone. When we blur that line, when we start saying “they deserved it,” we’re not promoting justice. We’re promoting revenge.
Thomas Merton, the contemplative monk and writer, understood something profound about this. He wrote that violence—and revenge is violence—doesn’t really come from strength. It comes from weakness. From fear. From our desperate need to prove we’re right and someone else is wrong.
Here’s what Merton said that I think about almost daily: “Instead of hating the people you think are war-makers, hate the appetites and disorder in your own soul, which are the causes of war.” Now, think about the word “war”; we go to war not only with nations but with family, friends, neighbors, and all too often with ourselves.
Think about that for a second. When we reach for AI-powered revenge, we’re not demonstrating power. We’re revealing our own inner chaos. Our inability to sit with an injury without repaying it. Our terror that if we don’t destroy them, we’ll be destroyed ourselves.
Merton believed that the violence we commit outwardly always reflects an internal violence we haven’t dealt with. And now? We’ve given that internal violence an algorithm.
And once that becomes culturally acceptable, AI stops being a tool and becomes a weapon that anyone can justify using.
The Institutional Failures
It’s not just cultural. Our institutions are failing to prevent this, too. Look at our government leaders: are they treating this like a passing neurosis, hoping it goes away in six months?
Weaponization as Standard Practice
We’re seeing AI tools used in law enforcement, employment decisions, and political campaigns to “get even” with opponents, whistleblowers, and journalists. When those in power can use AI to surveil, target, and punish without consequence, they will.
The Guardrails We Don’t Have
Think about systems that can independently monitor people, choose targets, and take action—with minimal human oversight. We’re deploying these systems without robust safeguards. No transparency. No appeals process. No accountability.
Mortimer Adler would have recognized this immediately as a failure of what he called “practical wisdom”—the ability to apply general principles to specific situations with sound judgment. We’ve built systems that can process information but have no wisdom. They can calculate but cannot discern. They can execute but cannot judge whether the execution is justified.
Adler spent decades arguing that technology without liberal education—without training in ethics, philosophy, and critical thinking—is dangerous. He believed we needed to cultivate what the Greeks called phronesis: practical wisdom. The kind of judgment that knows when to act and when to refrain, when to punish and when to show mercy.
But we’ve automated decision-making without automating wisdom. And you can’t code wisdom. It has to be cultivated in human beings who then control the technology.
The Data Timebomb
And perhaps most dangerous: We’re living in an era of massive data collection combined with intense political polarization and grievance-driven politics. When you have detailed information on everyone, divided populations who see each other as enemies, and powerful AI tools, you have all the ingredients for systematic persecution.
Not hyperbole. History shows us that when you combine data, grievance, and power, terrible things happen.
Who Pays the Price?
Before I get to solutions, I want you to think about something: Who bears the brunt of AI-enabled revenge?
It’s disproportionately women facing deepfake sexual abuse. It’s racial and religious minorities targeted by coordinated harassment campaigns. It’s dissidents and journalists silenced through digital threats. It’s the vulnerable, the marginalized, the people with the least power to fight back.
Because revenge, like all forms of violence, flows downhill.
Thomas Merton wrote extensively about the relationship between violence and isolation. He observed that we inflict harm most easily on those we’ve already separated from ourselves in our minds. We create what he called “a false self”—not just in ourselves, but in others. We reduce them to a single story, a single mistake, a single moment. And once someone is just a story to us rather than a person, violence becomes easy.
This is precisely what AI-enabled revenge does. It takes someone’s worst moment—factual or fabricated—and makes it permanent, searchable, inescapable. It reduces an entire human being to one thing: the target. The enemy. The one who deserves it.
Merton would say this reveals our own spiritual poverty. That we’ve lost touch with what he called “the true self”—the self that recognizes its deep connection to all other selves. When we can destroy someone digitally without feeling their humanity, we’ve destroyed something in ourselves first.
Breaking the Cycle: What We Can Do
So what do we do? How do we resist this? I’m going to give you three questions to ask and four concrete actions you can take—but first, I want to share something important from both Merton and Adler about how change actually happens.
The Inner Work First
Mortimer Adler believed that real education—the kind that changes behavior, not just fills heads with information—requires what he called “the great conversation.” It’s the ongoing dialogue between you and the most profound questions of human existence. Questions about justice, mercy, power, and dignity. You can’t have a healthy society, Adler argued, without individuals who regularly engage in this conversation with themselves and others.
Thomas Merton took this further. He insisted that contemplation—sitting quietly with truth, even when it’s uncomfortable—is not a luxury for monks. It’s a necessity for anyone who wants to break cycles of violence. He wrote: “The rush and pressure of modern life are a form, perhaps the most common form, of its innate violence. To allow oneself to be carried away by a multitude of conflicting concerns, to surrender to too many demands… is to succumb to violence.”
What does this mean for us today? It means that before you can resist AI-powered revenge in the world, you have to resist it in yourself. You have to create space for reflection. You have to interrupt your own rush to judgment, your own desire for payback, your own need to see others destroyed.
This isn’t soft. This is the most challenging work there is.
Three Questions for Discernment
Question 1: What stories are being told?
Pay attention to the narratives around you—in media, from leaders, in your own social circles. Are people increasingly characterizing opponents in absolute terms? Are they fantasizing about technologically mediated revenge? When you hear someone say, “They deserve to be destroyed,” or “I hope someone leaks everything about them,” that’s a warning sign.
Adler would want you to ask: What is the purpose of this story? Does it help us think more clearly, or does it manipulate us into emotional reactions? Is it appealing to reason or to rage?
Question 2: Who is being humiliated or silenced?
Look for the patterns. Which groups face the worst effects of deepfakes, harassment, and digital retaliation? When you see someone being attacked this way, don’t look away. Don’t tell yourself they probably deserved it. Pay attention to who has power and who’s being crushed by it.
Merton would ask: Have I reduced this person to an enemy? Have I lost sight of their fundamental dignity as a human being? Can I see Christ—or Buddha, or the divine image, or simply human worth—in the person being destroyed?
Question 3: Where is accountability missing?
Identify AI systems that affect people’s lives with little transparency or oversight. The algorithm that denied someone housing. The automated system that flagged someone as a threat. The platform that allowed coordinated harassment. Ask: Who’s responsible? Can decisions be appealed? Is there any way to challenge harm?
This is where Adler’s insistence on practical wisdom becomes crucial. We need humans in the loop—not just any humans, but people trained to exercise sound judgment, to balance competing goods, to know when rules need exceptions.
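To make “humans in the loop” more than a slogan, here is a minimal sketch, in Python, of what that guardrail could look like in software. It is an illustration built on my own assumptions (the class, function, and account names are invented for the example, not any real platform’s API): an automated system may propose a high-impact action, but nothing executes without an attributable human decision, and every decision is logged so it can be appealed.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ProposedAction:
    """An action an automated system wants to take against a person (illustrative)."""
    target: str
    action: str                      # e.g., "flag_account"; names are hypothetical
    rationale: str                   # why the system proposed it (transparency)
    reviewer: Optional[str] = None   # the human who decided (accountability)
    approved: Optional[bool] = None
    decided_at: Optional[datetime] = None

audit_log = []  # an appeal needs a record; every decision lands here

def review(proposal: ProposedAction, reviewer: str, approve: bool) -> None:
    """A named human decides; the decision is timestamped and logged."""
    proposal.reviewer = reviewer
    proposal.approved = approve
    proposal.decided_at = datetime.now(timezone.utc)
    audit_log.append(proposal)

def execute(proposal: ProposedAction) -> None:
    """Refuse to act without an explicit, attributable human approval."""
    if proposal.approved is not True:
        raise PermissionError(f"no human approval on record for {proposal.action!r}")
    print(f"executing {proposal.action} on {proposal.target}, approved by {proposal.reviewer}")

# Illustrative data only: the system proposes, a person judges, the log remains.
flag = ProposedAction("user123", "flag_account", "anomalous posting pattern")
review(flag, reviewer="j.smith", approve=False)  # a human can simply say no
try:
    execute(flag)
except PermissionError as err:
    print(f"blocked: {err}")
```

The structure is the point: the machine proposes, a person judges, and the record preserves accountability.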
Your Action Steps: What You Can Do Starting Today
Let me give you concrete things you can do when you leave here.
Action Step 1: Educate Yourself (1-2 hours this week)
- Read: Search for “AI ethics revenge harm” and read three recent articles from reputable sources on digital harassment, deepfakes, and AI-enabled abuse.
- Watch: Find a documentary or long-form video about people affected by image-based abuse or coordinated online harassment. Put faces to this issue.
- Follow: Identify two or three experts or organizations working on AI safety, digital rights, and online abuse prevention. Follow them on social media or subscribe to their newsletters.
- Deepen your thinking: Pick up one book by Mortimer Adler—I’d recommend “How to Think About the Great Ideas”—and one by Thomas Merton—try “New Seeds of Contemplation.” Read slowly. Take notes. Let them challenge you.
Action Step 2: Protect Yourself and Others (ongoing practice)
- Digital hygiene: Review your privacy settings on all platforms. Limit what personal information is publicly accessible. Use two-factor authentication. Make it harder for someone to target you.
- Be an upstander: If you see someone being harassed online—especially through AI-generated content or coordinated attacks—don’t pile on and don’t stay silent. Report it. Support the target privately. Interrupt the mob.
- Support victims: If someone tells you they’re being targeted by deepfakes or digital harassment, believe them. Help them document it (see the sketch just after this list), report it to platforms and authorities, and connect them with resources.
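On documenting harassment specifically, one concrete technique is to make the record tamper-evident from the start. Here is a minimal sketch, assuming the person saves screenshots and exports into a folder (the folder and file names below are placeholders): it writes a SHA-256 fingerprint and a UTC timestamp for every file, creating a verifiable record of what existed and when.

```python
# A minimal evidence log: fingerprint each saved file so the record
# can be verified later if anyone questions whether it was altered.
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def log_evidence(folder: str, log_file: str = "evidence_log.txt") -> None:
    """Append a SHA-256 hash and UTC timestamp for every file in `folder`."""
    with open(log_file, "a", encoding="utf-8") as log:
        for path in sorted(Path(folder).iterdir()):
            if not path.is_file():
                continue
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            stamp = datetime.now(timezone.utc).isoformat()
            log.write(f"{stamp}\t{digest}\t{path.name}\n")

# Usage: run after saving each batch of screenshots or exports.
log_evidence("harassment_evidence")  # placeholder folder name
```

Keeping the log alongside a backup copy of the files gives the person something concrete to hand to platforms or authorities.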
Action Step 3: Push for Change (choose one to start)
- Contact your representatives: Write to your local, state, and federal representatives about the need for stronger laws against AI-enabled harassment, deepfakes, and image-based abuse. Be specific about what you want to see.
- Support advocacy organizations: Donate to or volunteer with groups fighting for digital rights, supporting abuse survivors, or working on AI safety and ethics.
- Workplace action: If you work in tech, push your company to implement ethical AI guidelines. If you’re in any industry, advocate for policies that prevent AI-enabled retaliation in your workplace.
- Community education: Share what you’ve learned. Give a presentation at your workplace, school, or community organization. Start conversations. Most people have no idea this is happening.
- Join the great conversation: Start or join a discussion group—in your community, workplace, or faith community—that regularly discusses ethics, technology, and human dignity. Adler believed these conversations are where democracy happens. Make space for them.
Action Step 4: Check Your Own Heart (daily reflection)
This is the hardest one, so I saved it for last.
We all feel the revenge impulse. When someone wrongs us, we want them to pay. But here’s what I need you to understand: Every time we normalize revenge—every time we say “they deserved it,” every time we share the embarrassing leak, every time we enjoy watching someone be destroyed online—we’re feeding the system that makes AI-powered revenge possible.
Merton would tell you to practice what he called “contemplative seeing”—looking at people, even those who’ve hurt you, with the eyes of compassion. Not because they’ve earned it, but because their humanity is not something they can lose, no matter what they’ve done.
Try this: When you feel that surge of satisfaction at someone’s public humiliation—pause. Breathe. Notice it. Ask yourself: Is this making me more human or less? Am I growing in wisdom or shrinking in spirit?
Practice mercy. Practice seeing people as more than their worst moment. Practice choosing accountability over humiliation, justice over revenge.
Because the technology will do whatever we tell it to. The question is: What kind of people are we going to be?
The Choice Before Us
Let me close with this.
AI is not good or evil. It’s a mirror that reflects our values, magnifies our impulses, and amplifies our choices. When we build AI systems that harden resentments, eliminate mercy, and reduce people to their worst moments, we’re not just creating dangerous technology. We’re revealing something dangerous about ourselves.
But here’s the hope: We can choose differently.
Mortimer Adler spent his life insisting that human beings can change through education—not just information transfer, but genuine transformation of how we think and act. He believed in our capacity to grow in wisdom, even late in life, even when we’ve made terrible mistakes.
Thomas Merton believed in something even more radical: that contemplation and nonviolence can break cycles that seem unbreakable. That mercy is not weakness but the most potent force in the universe. That we can, through practice and grace, become people who refuse to participate in violence even when we have the power to inflict it.
Both of them would tell you: This is possible. You can be different. We can be different. It’s hard. It requires daily practice. It requires community. It requires returning again and again to questions of purpose and meaning. But it’s possible.
We can build AI systems that promote reconciliation instead of retaliation. We can create digital spaces that protect dignity rather than destroy it. We can use our most powerful technology not to settle scores, but to break cycles of harm.
The ancient impulse toward revenge isn’t going away. But we don’t have to give it an algorithm.
The question isn’t whether AI can enable revenge. It already does.
The question is whether we’ll let it—or whether we’ll be the generation that finally says: This stops with us.
Not because we lack the power to hurt.
But because we’ve finally cultivated the wisdom to refrain.
So, what are you thinking?