When I first set out on this journey nearly ten years ago, I had a simple idea in mind…get the computer to talk to me and “understand” what I was saying. Well, that turned out to be far more difficult than I imagined! It was so difficult that I simply got busy with other things.
My, how things have changed! The development of large language models (LLMs) in recent years has accomplished what I originally set out to do and much more than I ever imagined possible. I’m constantly amazed by what these tools can do. (For example, I could ask one of them to read this post and rewrite it to make me sound smarter than I am.)
Development has progressed so far that people are seriously starting to ask if we’re approaching the “singularity” that has been the darling of science fiction. These kinds of predictions have been around for a while, but there seems to be more frenzy of late. Wikipedia[1] says, “The technological singularity—or simply the singularity—is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.” The main concern is that the events that may occur after the singularity are completely unpredictable.
One of the signposts has been the development of an artificial general intelligence (AGI), which many see as the beginning of an accelerating cycle of artificial intelligence (AI) systems leading to the singularity. It’s hard to watch the rapid development of systems based on LLMs and not think they are close to AGI.
A post on Popular Mechanics[2] in November 2024 used a measure of the quality of AI language translation to predict the arrival of human-level AI by the end of the decade (with doom to follow). A post on Time[3] raised another kind of concern… we may not be able to depend on the AIs we create to behave themselves: they cheat! They might develop a mind of their own and pursue their own objectives.
So where does this leave us?
Well, I think these systems may well continue to accelerate their capabilities, but I suspect we’re going to run into some kind of law of diminishing returns that prevents the kind of runaway acceleration needed to produce a singularity. These predictions all extrapolate future growth in intelligence from past growth, and there are many examples in our recent past that ought to serve as warnings against that kind of extrapolation. I remember *The Limits to Growth*, a book from 1972 that predicted disaster caused by exponential population growth. There continues to be considerable debate about its predictions.
I also think we’re going to find out that there’s more to the human mind than can be replicated in silicon (or some other appropriate substrate). People like Ray Kurzweil have long discussed the possibility of uploading their consciousness into a machine and living forever. I’m not buying it, but that’s largely a matter of worldview. You see, I’m not a physicalist. There’s more to reality than the physical, and the human mind is only partly physical.
There’s a spiritual response to these developments and to other doomsday predictions like them. I believe God has a plan for creation that is moving along as it should. God’s plan is for human beings to live in creation. I don’t know if God’s plan includes super-intelligent AIs or not, but I’m pretty certain it includes human beings looking after things. I’m comfortable trusting God and enjoying the ride as best I can.
1. Technological Singularity, https://en.wikipedia.org/wiki/Technological_singularity, accessed 2024-03-02
2. Humanity May Reach Singularity Within Just 6 Years, Trend Shows, https://www.popularmechanics.com/technology/robots/a63057078/when-the-singularity-will-happen/, accessed 2025-03-02
3. When AI Thinks It Will Lose, It Sometimes Cheats, Study Finds, https://time.com/7259395/ai-chess-cheating-palisade-research/, accessed 2025-03-02