
Where Is It All Going?

When I first set out on this journey nearly ten years ago, I had a simple idea in mind…get the computer to talk to me and “understand” what I was saying. Well, that turned out to be far more difficult than I imagined! It was so difficult that I simply got busy with other things.

My, how things have changed! The development of large language models (LLMs) in recent years has accomplished what I originally set out to do, and much more than I ever imagined possible. I’m constantly amazed by what these tools can do. (For example, I could ask one of them to read this post and rewrite it to make me sound smarter than I am.)

Development has progressed so far that people are seriously starting to ask if we’re approaching the “singularity” that has been the darling of science fiction. These kinds of predictions have been around for a while, but there seems to be more frenzy of late. Wikipedia[1] says, “The technological singularity—or simply the singularity—is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.” The main concern is that the events that may occur after the singularity are completely unpredictable.

One of the signposts has been the development of an artificial general intelligence (AGI), which many see as the beginning of an accelerating cycle of artificial intelligence (AI) systems leading to the singularity. It’s hard to watch the rapid development of systems based on LLMs and not think they are close to AGI.

A post on Popular Mechanics[2] in November 2024 used a measure of the quality of AI language translation to predict the arrival of human-level AI by the end of the decade (with doom to follow). A post in Time[3] raised another kind of concern… we may not be able to depend on the AIs we create to behave themselves; they cheat! They might develop a mind of their own and pursue their own objectives.

So where does this leave us?

Well, I think these systems may well continue to improve at an accelerating pace, but I suspect we’re going to run into some law of diminishing returns that prevents the kind of runaway acceleration needed to produce a singularity. These predictions are all based on extrapolating future growth in intelligence from past growth. There are many examples in our recent past that ought to serve as warnings against this kind of extrapolation. I remember The Limits to Growth, a book from 1972 that predicted disaster caused by exponential population growth. There continues to be considerable debate about its predictions.

I also think we’re going to find out that there’s more to the human mind than can be replicated in silicon (or some other appropriate substrate). People like Ray Kurzweil have long discussed the possibility of uploading their consciousness into a machine and living forever. I’m not buying it, but that’s largely a matter of worldview. You see, I’m not a physicalist. There’s more to reality than the physical, and the human mind is only partly physical.

There’s a spiritual response to these developments and to other doomsday predictions like them. I believe God has a plan for creation that is moving along as it should. God’s plan is for human beings to live in creation. I don’t know if God’s plan includes super-intelligent AIs or not, but I’m pretty certain it includes human beings looking after things. I’m comfortable trusting God and enjoying the ride as best I can.

  1. “Technological singularity,” Wikipedia, https://en.wikipedia.org/wiki/Technological_singularity. Accessed 2024-03-02.
  2. “Humanity May Reach Singularity Within Just 6 Years, Trend Shows,” Popular Mechanics, https://www.popularmechanics.com/technology/robots/a63057078/when-the-singularity-will-happen/. Accessed 2025-03-02.
  3. “When AI Thinks It Will Lose, It Sometimes Cheats, Study Finds,” Time, https://time.com/7259395/ai-chess-cheating-palisade-research/. Accessed 2025-03-02.

Stalled and Distracted

Not much visible progress in the last several weeks. I’ve been off wandering in the world of Python software development, with forays into the Twitter API, using sockets for inter-process communication, and a dozen other fascinating areas. Lots of new code written. Almost all the old code seems broken in some way or other.

So, it’s time to take stock of where we are on this journey. To that end, I’m going to try to document the bits that I have, and the challenges as I see them now. The bits will appear in whatever order I think of them, so the list probably won’t make much sense.

1. SemNet

This bit started life as code I found here that “defined several simple classes for building and using semantic networks.” It defined three classes: Entity, Relation, and Fact. It allows statements like these:

>>> animal = Entity("animal")
>>> fish = Entity("fish")
>>> trout = Entity("trout")

>>> isa = Relation("is-a", True)

>>> Fact(fish, isa, animal)
>>> Fact(trout, isa, fish)

Having defined these variables, statements like this were easy:

>>> print("trout is a fish?", isa(trout, fish))
trout is a fish? True
>>> print("trout is an animal?", isa(trout, animal))
trout is an animal? True
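
The original source for those classes has since disappeared, so for context, here is a minimal sketch of what they might have looked like. The internals are my reconstruction, not the original code; in particular, I’m guessing that the True flag on Relation marks it as transitive, since that’s what makes the is-a chain above work.

class Entity:
    """A node in the network; remembers the facts it takes part in."""
    def __init__(self, name):
        self.name = name
        self.facts = []  # (relation, object) pairs where self is the actor

class Relation:
    """A named edge type; transitive=True enables chained lookups."""
    def __init__(self, name, transitive=False):
        self.name = name
        self.transitive = transitive

    def __call__(self, actor, obj):
        # Direct fact?
        for rel, o in actor.facts:
            if rel is self and o is obj:
                return True
        # Follow the chain (assumes no cycles) if the relation is transitive.
        if self.transitive:
            for rel, o in actor.facts:
                if rel is self and self(o, obj):
                    return True
        return False

class Fact:
    """Creating a Fact registers it with its actor."""
    def __init__(self, actor, relation, obj):
        self.actor, self.relation, self.obj = actor, relation, obj
        actor.facts.append((relation, obj))

Note the `rel is self` test: facts are matched by object identity, which is exactly what breaks once objects are written to disk and read back.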

The trouble came when I tried to persist the relations to a file. It turns out that each Entity object stores the isa object in its list of known facts (actor-relation-object). Of course, when the entity object is reloaded from disk, the relation object it points to is a different object from the one that was stored, so identity checks come unraveled.

That has led me to try to figure out how to persist and recreate objects without losing the network relationships. I suspect there is an easy pattern for this sort of thing, but I’m not a skilled enough programmer to see it immediately.
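
One pattern that seems to fit, assuming the whole network is saved and loaded together: don’t persist the objects at all, just the facts as name triples, and re-intern entities and relations through a registry on load so each name maps back to exactly one object. A minimal sketch against the classes above (save_network and load_network are names I made up):

import json

def save_network(facts, path):
    """Write each Fact as an (actor, relation, object) name triple."""
    data = {
        "relations": {f.relation.name: f.relation.transitive for f in facts},
        "triples": [[f.actor.name, f.relation.name, f.obj.name] for f in facts],
    }
    with open(path, "w") as fh:
        json.dump(data, fh)

def load_network(path):
    """Rebuild the network, interning exactly one object per name."""
    with open(path) as fh:
        data = json.load(fh)
    relations = {name: Relation(name, trans)
                 for name, trans in data["relations"].items()}
    entities = {}
    facts = []
    for actor, rel, obj in data["triples"]:
        a = entities.setdefault(actor, Entity(actor))
        o = entities.setdefault(obj, Entity(obj))
        facts.append(Fact(a, relations[rel], o))
    return entities, relations, facts

After a round trip, relations["is-a"](entities["trout"], entities["animal"]) still returns True, because identity is re-established by name rather than by whichever new objects deserialization happened to create.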

More adventures will follow…

Scope control…

It’s easy to get out of control on a project like this, and I think that’s where I’ve been for the last several weeks.

I’ve read more about AI and natural language processing than I ever knew existed a few months ago. This is a large and active field, and I’m in awe of the amazing work that’s going on all over the world. Places like the University of Rochester are full of smart people working long hours for years to get PhDs in this area. I have read several of the papers available on their CS department web site, and it sounds like they have already accomplished much of what I set out to do and have moved on.

Others outside academia have made significant inroads into the problem. They are beginning to move beyond simple script-driven AIML bots to more sophisticated programs. I had added a link to one of them (Jeeney) to this post so you could see what I mean, but it’s not working. It’s not quite human-level interaction, but it’s not bad. I really like this approach and intend to follow a similar route if I can.

Which brings me back to the scope question… What am I really going to attempt here? I have asked this question several times already, and will likely do so again. Here’s my answer for today, heavily influenced by Homer [S. Vere and T. Bickmore, “A basic agent” (Computational Intelligence, 1990)]. A rough sketch of how the pieces might fit together follows the list.

  • Parser/Interpreter – This will tear apart the incoming text stream and convert it into some kind of standard internal form. I don’t expect this to be a flawless English parser because that’s both too difficult and not necessary. I need to extract the meaning from sentences at the level of a 3rd-grader at most. Vocabulary is pretty easy given all the work the Princeton folks have put into WordNet.
  • Episodic Memory – This is what we might call our short-term memory. It’s surprisingly important in our ability to make sense of language. Basically I think of it as a list of the recent communication/event stream. I’m not sure how to “compress” this short-term memory into general memory, but it will need to be done.
  • General Memory – All the stuff the computer “knows” stored in some kind of formal ontology. I have spent a lot of time worrying about how to do this, and have decided it probably doesn’t matter too much right now. If the thing ever gets really big I may have scaling problems. I don’t know how to solve or even pose them at this point.
  • Planner/Reasoner – Figure out what to say, or perhaps do. This is a new idea for me, but it makes sense. Once the computer has understood the meaning of an input stream, what does it do? That’s the planner’s job. It will identify goals and come up with the steps needed to achieve the goal. I don’t think I’m too clear on how to do this.
  • Executer – Figures out whether a plan step can and should be executed right now. Executes the step.
  • Learner – Adds new knowledge to General Memory. No idea how to do this well. Jeeney claims to be reading through Wikipedia!
  • Text generator – The opposite of the Parser/Interpreter. Converts from the standard internal form generated elsewhere in the program to simple 3rd-grade English.
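
To make the division of labor concrete, here is the sort of skeletal shell I have in mind for wiring these pieces together. Every class, method, and stub below is a placeholder I invented to show the control flow, not working code from the project:

class Agent:
    """One stub per component above, in a read-understand-plan-respond loop."""

    def __init__(self):
        self.episodic = []   # Episodic Memory: recent communication/event stream
        self.general = {}    # General Memory: long-term knowledge store

    def parse(self, text):
        """Parser/Interpreter: text -> standard internal form."""
        return {"tokens": text.lower().split()}

    def plan(self, meaning):
        """Planner/Reasoner: pick a goal and the steps to achieve it."""
        return [("respond", meaning)]

    def execute(self, plan_step):
        """Executer: run one plan step if it can and should run now."""
        action, meaning = plan_step
        if action == "respond":
            return self.generate(meaning)

    def learn(self, meaning):
        """Learner: fold new knowledge into General Memory."""
        self.general.setdefault("heard", []).append(meaning["tokens"])

    def generate(self, meaning):
        """Text generator: internal form -> simple English."""
        return "You said: " + " ".join(meaning["tokens"])

    def step(self, text):
        """One full cycle: understand, remember, learn, plan, act."""
        meaning = self.parse(text)
        self.episodic.append(meaning)   # keep the short-term stream
        self.learn(meaning)
        reply = None
        for plan_step in self.plan(meaning):
            reply = self.execute(plan_step)
        return reply

Calling Agent().step("trout are fish") just echoes "You said: trout are fish", but each stub marks the seam where a real component will eventually plug in.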

I have several useful-looking snippets of Python code spread across these areas and have been trying to figure out where they go. I don’t know whether I’ll be able to tell if any of them will work before I get a basic shell constructed for each function.

Scope control… How much can one person get done in a few hours a week? Probably not as much as I’d like. Oh well!
