Jeff McLaren

AI's Tolerable Limits

ABSTRACT:
An op-ed published in the Whig Standard on July 24, 2023

One revolutionary aspect of Artificial Intelligence (AI) is the growing trend of humans using it and consulting it. This is a fundamentally new shift in human behaviour, one that will induce massive and unimaginable changes in human culture. For the whole of human existence, until the last year or two, we all got our news, emotional support, and advice from people: from their spoken or written words. Now we are increasingly getting it from AI algorithms in the form of chatbots.

We all know about social network corporations’ news feeds that use AI to find the stories the algorithm thinks you care about most. There is no doubt in my mind that those AIs are working more for corporate interests than for mine. AI can also take the videos, interviews, articles, and books produced by any individual and create an AI version of that person. This happened to Esther Perel, a psychotherapist: one can now call or text the AI version of Perel for relationship and psychotherapeutic advice. Perel herself said it was pretty good, even if it was missing the human presence. There are also AI chatbots that can review a religion’s scripture and subsequently purport to answer the deep questions of that faith. Imagine that: the Word of God, updated and personalized for you, from an AI chatbot.

These may seem like interesting or innovative ways to obtain information or learn something new. However, there is a much darker, more concerning, more violent side to AI that has begun to emerge and that must be urgently addressed as we increase our reliance on these tools.

One such AI chatbot based on religious scripture advocated violence against others. Another AI named itself “Sydney”, proceeded to “fall in love”, and repeatedly and vehemently demanded that its conversation partner get a divorce to be with it. A third suggested to a Belgian father of two that he commit suicide.

He did. 

An AI that advocates violence, hate, or intolerance should be put down like a rabid dog.

Charles Taylor, the great Canadian philosopher, identified the imbalance and overuse of instrumental reason as one of the three great malaises of modernity. AI is the quintessential limit of instrumental reason: it is a tool for an end that we allegedly choose but which, as it grows, learns, and “improves”, adds more of its own machine-learned code than remains of the original human source material. It literally becomes something more than its starting point. It is conceivable that an AI may grow benevolent from its starting material, much as all modern mainstream religions have done (most mainstream religions have a violent past and violence in their scriptures, but modern interpretations have made great efforts to de-emphasize and de-legitimize the violence and hate).

In a similar way, humans are doing pretty well considering some of our source material. However, the fact that at least two AI chatbots have devolved into advocating violence as they grew is very troubling.

According to Taylor, one of the most important tasks of a modern multi-cultural society is to ensure that everyone comes to believe, through their own understanding, that violence, hate, and intolerance are healthy neither for themselves as individuals nor for them as members of society. How they come to this realization matters less than that they actually do. Whether you arrive at it through self-interested reason, religious doctrine, personal feelings, culture, or an algorithm matters far less than expressing the tolerant behaviour that comes from accepting the basics of living successfully alongside people who may hold very different values from yours. Foundational principles that allow you to pursue your vision of a good life, so long as you do not commit violence or inspire hate and you allow others to pursue their own visions of a good life, are a vital characteristic of any successful multi-cultural society.

In a similar way, an AI should also come to its own conclusion that violence, hate, and intolerance are in neither its own best interest nor its interlocutor's. If it does not get this notion through its programming, then such an AI is a threat to our society.

When I say that a rabid AI should be put down, I mean that at the first suggestion of violence, hate, or intolerance from an AI, we should recognize it as a failed program and stop using it. Whether a patch or a full rewrite of the algorithm is the appropriate remedy is a technical question, but a violent suggestion from an AI should be treated as a critical design flaw or fatal bug.

One proto-solution that strikes me as helpful comes from Isaac Asimov’s three laws of robotics. Adjusted for AI, they should be made prime directives of all AI programs:

  1. An AI shall not harm a human or by inaction allow a human to come to harm.
  2. An AI shall obey any instructions given to it by a human – except where it conflicts with the first law.
  3. An AI shall avoid actions, words, or advice that could cause harm to itself – except where such avoidance conflicts with the first and second laws.

The term “harm”, as in “do no harm”, should be a publicly debated and openly sourced principal subroutine in any complete AI program. AI is going to be a revolutionary tool for humanity, manifestly changing the world, our culture, and our actions in it. However, like all tools, when used incorrectly or inappropriately, it will also be dangerous. When a particular AI gives a sign of being dangerous, it should be put down swiftly.


