Marks: AI seems very smart – while following set rules

In Podcast Episode 211, Larry Nobles reads an excerpt from Chapter Two of Robert J. Marks’s Non-Computable You: What You Do That Artificial Intelligence Never Will (Discovery Institute Press, 2022). The book is now available as an audiobook as well as in Kindle format and, of course, in paperback.

Chapter 2 addresses the question “Can AI be creative?” Pablo Picasso did not think so. He reportedly said, “Computers are useless. They can only give you answers.”

Nobles reads Dr. Marks’s account of how he and his colleague Ben Thompson got a “swarm” of small programs (Dweebs) to develop a solution to a problem – one that required a lot of creativity from the two researchers, but not from the swarm or the computer:


A partial transcript follows:

The Office of Naval Research hired Ben Thompson of the Penn State Applied Research Laboratory and me, and asked us to evolve the behavior of swarms. As we saw in Chapter One, simple swarm rules can lead to unexpected swarm behavior, such as Skittles stacking. With simple rules, finding the corresponding emergent behavior is easy: just run a simulation. But the inverse design problem is harder. If you want a swarm to perform some task, what simple rules should the bugs in the swarm follow?
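The forward problem is cheap – pick rules, run the simulation, and watch what emerges – while the inverse problem is a search over the space of possible rules. Below is a minimal Python sketch of the kind of evolutionary search the passage describes; the parameter encoding, mutation scheme, and population sizes are illustrative assumptions, not the actual code Marks and Thompson used.

    import random

    def evolve_rules(fitness, n_params=4, pop_size=50, generations=200):
        # Each candidate is a vector of simple-rule parameters (e.g., how
        # strongly a bug is drawn toward neighbors or repelled by walls).
        # 'fitness' runs a swarm simulation and scores the emergent behavior.
        pop = [[random.uniform(-1.0, 1.0) for _ in range(n_params)]
               for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(pop, key=fitness, reverse=True)
            parents = ranked[:pop_size // 5]          # keep the top 20%
            # Refill the population with mutated copies of the survivors.
            pop = parents + [
                [p + random.gauss(0.0, 0.1) for p in random.choice(parents)]
                for _ in range(pop_size - len(parents))
            ]
        return max(pop, key=fitness)

Note that the search never steps outside the parameter space it was handed; the “creativity” is exhaustive trial and error against a programmer-supplied score.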

To solve this problem, we applied evolutionary computing. The process ended up examining thousands of possible rules to find the set that came closest to the desired performance. One problem we looked at involved a predator-prey swarm. All the action took place in a closed, square virtual room. Predators called Bullies ran around hunting prey called Dweebs; when a Bully caught a Dweeb, it killed it. We wondered what would happen if the objective were to maximize the survival time of the Dweeb swarm, measured as the time until the last Dweeb was killed.
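For concreteness, here is a minimal sketch of what such a fitness function might look like for the Bully-and-Dweeb problem: Bullies chase their nearest Dweeb, each Dweeb flees its nearest Bully with an evolved weight, and the score is the number of time steps until the last Dweeb dies. The room size, speeds, and kill radius below are made-up placeholders, not values from the actual study.

    import math, random

    def survival_time(flee_weight, steps=2000, n_dweebs=20, n_bullies=3,
                      room=100.0, bully_speed=1.2, dweeb_speed=1.0,
                      kill_dist=1.5):
        # Score one candidate rule: how many time steps the swarm survives.
        dweebs = [[random.uniform(0, room), random.uniform(0, room)]
                  for _ in range(n_dweebs)]
        bullies = [[random.uniform(0, room), random.uniform(0, room)]
                   for _ in range(n_bullies)]

        def step(p, target, speed):
            # Move point p toward 'target' at 'speed', clamped to the room.
            dx, dy = target[0] - p[0], target[1] - p[1]
            d = math.hypot(dx, dy) or 1e-9
            p[0] = min(room, max(0.0, p[0] + speed * dx / d))
            p[1] = min(room, max(0.0, p[1] + speed * dy / d))

        for t in range(steps):
            for b in bullies:            # each Bully hunts its nearest Dweeb
                step(b, min(dweebs, key=lambda w: math.dist(b, w)), bully_speed)
            for w in dweebs:             # each Dweeb flees its nearest Bully
                threat = min(bullies, key=lambda b2: math.dist(w, b2))
                away = [2 * w[0] - threat[0], 2 * w[1] - threat[1]]
                step(w, away, dweeb_speed * flee_weight)
            dweebs = [w for w in dweebs  # captured Dweebs are removed
                      if all(math.dist(w, b) > kill_dist for b in bullies)]
            if not dweebs:
                return t                 # the last Dweeb died at step t
        return steps

Plugged into a search like the one sketched earlier – for example, evolve_rules(lambda p: survival_time(p[0]), n_params=1) – this is the sense in which the computer “looked at thousands of possible rules.”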

After running the evolutionary search, we were surprised by the result: the Dweebs resorted to self-sacrifice in order to maximize the overall lifespan of the swarm. Here is what we saw. A single Dweeb caught the attention of all the Bullies, which chased it in circles around the room. Around and around they went, each lap adding seconds to the overall lifespan of the swarm.

During the chase, all of the other Dweebs huddled in a corner of the room, shaking in what appeared to be fear. Eventually the pursuing Bullies killed the sacrificial Dweeb, and pandemonium erupted as the surviving Dweebs scattered in fear.

Eventually another sacrificial Dweeb was identified and the process repeated. The new sacrificial Dweeb led the Bullies in circles while the remaining Dweebs cowered in the corner. The Dweebs’ sacrificial behavior was unexpected. A complete surprise. Nothing written in the evolutionary computer code explicitly called for sacrificial Dweebs. Is this an example of AI doing something we didn’t program it to do? Did it pass the Lovelace test? Absolutely not.

We had programmed the computer to sort through millions of strategies to find one that would maximize the lifespan of the Dweeb swarm, and that is what the computer did. It weighed the options and chose the best one. The result was a surprise, but it did not pass the Lovelace test of creativity. The program did exactly what it was written to do. And the seemingly frightened Dweebs weren’t actually shaking in fear; humans tend to project human emotions onto non-sentient things. The Dweebs were simply adjusting, step by step, to stay as far away from the nearest Bully as possible. They were programmed to do that.

If the sacrificial Dweeb action… [does] not pass the Lovelace test, what would? The answer is something outside of what the code was programmed to do.

Here is an example from the predator-prey swarm. The Lovelace test would be passed if some Dweebs became aggressive and started attacking and killing Bullies on their own – a potential action we had not programmed into the suite of possible strategies. But that didn’t happen. And because a Dweeb’s ability to kill a Bully isn’t written into the code, it will never happen… But remember, AlphaGo’s software as written couldn’t even offer an explanation of its own programmed behavior, the game of Go.
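The point can be put concretely: an evolved policy only ever selects and re-weights options from a fixed, programmer-supplied action set. The action names below are hypothetical, but they illustrate why the aggressive behavior can never emerge.

    # Hypothetical action set for a Dweeb. Evolution can tune the weights,
    # but it can only choose among the actions the programmers put here;
    # an "attack_bully" action does not exist, so it can never be selected.
    DWEEB_ACTIONS = ["flee_nearest_bully", "huddle_in_corner", "wander"]

    def choose_action(evolved_weights):
        # Pick the highest-weighted of the pre-programmed options.
        return max(zip(evolved_weights, DWEEB_ACTIONS))[1]

    print(choose_action([0.9, 0.4, 0.1]))    # -> flee_nearest_bully

Passing the Lovelace test would require behavior outside DWEEB_ACTIONS entirely, which no amount of weight-tuning can produce.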

Note: An excerpt from the first chapter, also read by Larry Nobles, is available here (October 6, 2022), along with a transcript.

Additional Resources

  • Non-Computable You: What You Do That Artificial Intelligence Never Will by Robert J. Marks on Amazon
  • Robert J. Marks on Discovery.org

Download the podcast transcript
