AI, Machines, and Humanity
Written By Bill Muehlenberg   |   06.30.25

What is our future in an AI world?

Many have been warning about where AI is taking us, and how whatever goods it brings our way can easily be outweighed by its problems and dangers. There have already been real benefits, such as in the field of medicine, but the downsides are being documented just as regularly. Consider just two of many.

One recent study that has received a lot of attention found that regular use of tools like ChatGPT is dumbing us down and making us lazy. One article on the study begins:

Participants using ChatGPT showed reduced engagement in 32 brain regions and produced less creative, “soulless” essays. Users struggled to recall their own AI-assisted content later, indicating weak integration into long-term memory. Researchers urge caution, especially in schools, warning that early AI exposure may harm cognitive development in young minds. 

See a link to the actual study HERE.

Being dumbed down by tools like ChatGPT may not bother many folks. But another major worry certainly should concern us all: the use of AI for sextortion and deepfakes. As one news item recently reported:

The advancement and accessibility of AI technology has triggered a “tidal wave” of sexually explicit ‘deepfake’ images and videos, and children are among the most vulnerable targets. “Accessing and using AI software to create sexual deepfake images is alarmingly easy,” Jake Moore, Global Cybersecurity Advisor at ESET, tells 9honey.

From 2022 to 2023, the Asia Pacific region experienced a 1530 per cent surge in deepfake cases, per Sumsub’s annual Identity Fraud Report. One platform, DeepFaceLab, is responsible for about 95 per cent of deepfake videos and there are free platforms available to anyone willing to sign up with an email address.

They can then use real photos of the victim (usually harmless snaps from social media accounts) to generate whatever AI image they want; in about 90 per cent of cases, those images are explicit, according to Australia’s eSafety Commissioner. “We’ve got cases of deepfakes and people’s faces being used in images which are absolutely and utterly horrific,” reveals Bowden, CEO at the International Centre for Missing & Exploited Children (ICMEC) Australia. 

Or as another puts it:

Sexual extortion of children and teenagers is being fuelled by use of AI technologies, with the online safety regulator warning that some perpetrators are motivated by taking “pleasure in their victims’ suffering and humiliation” rather than financial reward. The eSafety Commissioner has warned that “organised criminals and other perpetrators of all forms of sextortion have proven to be ‘early adopters’ of advanced technologies”.

Sexual extortion is a form of blackmail, often involving threats to distribute intimate images of a victim. “For instance, we have seen uses of ‘face swapping’ technology in sextortion video calls and automated manipulative chatbots scaling targets on mainstream social media platforms,” an eSafety spokesperson said. 

Whither humanity?

This is just the tip of the iceberg. But a more general concern is how AI can lead to the diminution, if not the extinction, of humanity. Many have discussed this. Let me offer two such warnings, one from just weeks ago and another from decades ago.

Last month two writers heavily involved in the tech world penned a piece with this ominous title: “AI Will Change What It Is to Be Human. Are We Ready?” They say they are not “doomers,” but they ask: “Are we helping create the tools of our own obsolescence?” They continue:

We stand at the threshold of perhaps the most profound identity crisis humanity has ever faced. As AI systems increasingly match or exceed our cognitive abilities, we’re witnessing the twilight of human intellectual supremacy—a position we’ve held unchallenged for our entire existence. This transformation won’t arrive in some distant future; it’s unfolding now, reshaping not just our economy but our very understanding of what it means to be human beings….

Both of us have an intense conviction that this technology can usher in an age of human flourishing the likes of which we have never seen before. But we are equally convinced that progress will usher in a crisis about what it is to be human at all.

Our children and grandchildren will face a profound challenge: how to live meaningful lives in a world where they are no longer the smartest and most capable entities in it. To put it another way, they will have to figure out how to prevent AI from demoralizing them. But it is not just our descendants who will face the issue, it is increasingly obvious that we do, too. 

Technopoly: The Surrender of Culture to Technology (Neil Postman)

It is this aspect of how AI might be undermining what it means to be human that has so many others concerned. One writer and thinker was well ahead of the game here. Thirty-three years ago Neil Postman penned the very important and prescient book Technopoly: The Surrender of Culture to Technology (Vintage Books, 1992, 1993).

Even then, Postman was sounding the alarm on how technologies are changing our world, and often for the worse. As he writes early on: “It is a mistake to suppose that any technological innovation has a one-sided effect. Every technology is both a burden and a blessing; not either-or, but this-and-that.” (pp. 4-5)

Bear in mind that this was written in the very early days of personal computing, before all that has transpired in the past few decades. In Ch. 7 of the book he deals with “The Ideology of Machines: Computer Technology,” and it is well worth revisiting. In it he briefly recounts how we got here.

He discusses how Charles Babbage in 1822 invented a machine to perform simple arithmetical calculations. He reminds us how the English mathematician Alan Turing in 1936 demonstrated that a machine could be made to act like a problem-solving human being. And he notes how John McCarthy coined the term “artificial intelligence” in 1956. Then he writes:

McCarthy claims that “even machines as simple as thermostats can be said to have beliefs.” To the obvious question, posed by philosopher John Searle, “What beliefs does your thermostat have?,” McCarthy replied, “My thermostat has three beliefs—it’s too hot in here, it’s too cold in here, and it’s just right in here.”

What is significant about this response is that it has redefined the meaning of the word “belief.” The remark rejects the view that humans have internal states of mind that are the foundation of belief and argues instead that “belief” means only what someone or something does. The remark also implies that simulating an idea is synonymous with duplicating the idea. And, most important, the remark rejects the idea that mind is a biological phenomenon.

In other words, what we have here is a case of metaphor gone mad. From the proposition that humans are in some respects like machines, we move to the proposition that humans are little else but machines and, finally, that human beings are machines. And then, inevitably, as McCarthy’s remark suggests, to the proposition that machines are human beings. It follows that machines can be made that duplicate human intelligence, and thus research in the field known as artificial intelligence was inevitable. What is most significant about this line of thinking is the dangerous reductionism it represents. Human intelligence, as Weizenbaum has tried energetically to remind everyone, is not transferable. The plain fact is that humans have a unique, biologically rooted, intangible mental life which in some limited respects can be simulated by a machine but can never be duplicated. Machines cannot feel and, just as important, cannot understand. ELIZA can ask, “Why are you worried about your mother?,” which might be exactly the question a therapist would ask. But the machine does not know what the question means or even that the question means. (Of course, there may be some therapists who do not know what the question means either, who ask it routinely, ritualistically, inattentively. In that case we may say they are acting like a machine.) It is meaning, not utterance, that makes mind unique. I use “meaning” here to refer to something more than the result of putting together symbols the denotations of which are commonly shared by at least two people. As I understand it, meaning also includes those things we call feelings, experiences, and sensations that do not have to be, and sometimes cannot be, put into symbols. They “mean” nonetheless. Without concrete symbols, a computer is merely a pile of junk. Although the quest for a machine that duplicates mind has ancient roots, and although digital logic circuitry has given that quest a scientific structure, artificial intelligence does not and cannot lead to a meaning-making, understanding, and feeling creature, which is what a human being is.

All of this may seem obvious enough, but the metaphor of the machine as human (or the human as machine) is sufficiently powerful to have made serious inroads in everyday language. People now commonly speak of “programming” or “deprogramming” themselves. They speak of their brains as a piece of “hard wiring,” capable of “retrieving data,” and it has become common to think about thinking as a mere matter of processing and decoding. (pp. 111-113)

As mentioned, he was concerned about all this over three decades ago. But other prophetic voices go back even earlier. One of them was C. S. Lewis. Back in the 1940s he was speaking about where we were headed, even titling one of his prescient books, The Abolition of Man.

My chapter “C S Lewis, Tyranny, Technology and Transcendence” appears in the newly released book Against Tyranny, edited by Augusto Zimmermann and Joshua Forrester. This is what the abstract says about my contribution:

Numerous voices over the past century have warned of the damaging and devastating results of a sinister convergence – an unhealthy coming together of things like runaway statism, unchecked scientism, technological tyranny, and moral myopia. It was quickly becoming apparent to these observers that the stuff of dystopian novels was no longer limited to the realm of fiction; those who were alert and aware started to see too many real life cases of this happening – and with horrific results. C S Lewis was one such prophetic writer who warned constantly about where we were heading, be it in his works of fiction or nonfiction. Writing from the 40s through to the 60s, his many important volumes on philosophy, theology and social criticism were very much needed back then – but sadly far too often ignored. We now are paying the price for neglecting this prescient watchman on the wall. (p. 227)

That book can be ordered HERE.

Warnings about the new technologies, AI, and related issues have long been with us. We keep ignoring them at our peril.

Postscript

For a list of 40 key volumes on AI, transhumanism and the new technologies, see this piece.


This article was originally published at BillMuehlenberg.com.

Bill Muehlenberg
Bill Muehlenberg is an American-born apologist and ethicist who currently lives in Melbourne, Australia. He has a BA with honors in philosophy (Wheaton College, Chicago) and an MA with highest honors in theology (Gordon-Conwell Theological Seminary, Boston). He has his own ministry called CultureWatch, which features Christian commentary on the issues of the day: billmuehlenberg.com. He is a prolific author and a much sought-after media commentator, and has been featured on most television and radio current affairs programs. Bill teaches ethics, apologetics and theology at several Melbourne Bible Colleges. He is the author of “The Challenge of Euthanasia ...