What Would Mortimer Adler Say About AI? Three Principles for Ethical Thinking in the Age of Machines

Examine AI ethics through the lens of Mortimer Adler. Uncover three pillars—human dignity, moral purpose, and truth—that guide AI to truly serve humanity.

If Mortimer Adler were alive today, he probably wouldn’t be dazzled by the latest AI breakthroughs. The classical philosopher—famous for his Great Ideas project and his insistence on clear thinking, common sense, and human dignity—would likely ask us to slow down and think before we celebrate our clever inventions.

Adler wouldn’t react with guesswork or fear. He’d start by asking: What is AI, really? Before talking ethics, he’d tell us to define our terms and remember we’re moral, thinking beings.

If Adler were to outline an ethical guide for AI, he might build it on three pillars.

1. The Radical Difference in Kind 

Adler argued that humans are not merely advanced animals; we differ in kind, not just in degree. What sets us apart is our capacity for conceptual thought.

Perceptual thought (what AI does): pattern recognition, prediction, and simulation—an impressive “knowing how.”

Conceptual thought (humans): understanding big ideas, asking why, and finding meaning—like thinking about justice, truth, or beauty.

The Ethical Point: 

AI is an object, not a person. Granting it moral rights blurs an essential distinction and diminishes human dignity. Adler would insist that only people are morally responsible for AI’s actions.

2. Teleology: Technology in Service of Eudaimonia 

For Adler, the aim of ethics was a good and purposeful life. Technology’s role is to support that pursuit.

The Ethical Point: AI is only good if it genuinely helps people thrive. If it just makes life easier at the cost of meaning or treats people as things to improve, it has failed. AI should help us live better—not replace our purpose.

3. Truth over Probability 

Adler believed truth exists and can be known. He would worry about AI’s focus on the merely probable: these models generate likely answers; they do not know what is true.

The Ethical Point: Seeking truth can’t be left to AI. AI can help, but humans must decide what is true. Adler would say: People should check AI’s answers in key areas like law, medicine, and education.

He would also demand critical scrutiny, so that AI does not merely mirror cultural bias or relativism.

AI should help us identify and correct errors, not replicate them at scale.

(Yeah, go ahead and read that sentence again and let it sink in)

In the end, the question is about us.

The debate over AI ethics, Adler would remind us, is not really about machines. It’s about what kind of humans we wish to become.

So, which of these principles should take priority in our current moment?

  1. Should we focus on preserving the radical difference between humans and machines?
  2. Should our main concern be ensuring AI truly serves human flourishing?
  3. Or does protecting the pursuit of objective truth in a probabilistic world take precedence?

