Post No. 23: Artificial Intelligence, Part Three
- Martin Sullivan
- Aug 1, 2023
- 9 min read
Updated: Aug 27, 2023
Part One on this topic described how to use interactive AI through Microsoft’s Bing ChatGPT4 and gave some examples. Part Two gave a high-level overview of how AI was developed and its essential components. This post, which has been a long time coming, looks at the short- and long-term advantages and disadvantages of AI. It also addresses the concern, some would say fear, of what AI might become. This post has taken a while because I needed the time to listen to some of the experts and read a number of articles and a couple of books to formulate my own opinion.
Why does AI instill fear in so many people who see it as a potential existential threat to humans? This fear is certainly out there and has inspired several rather good sci-fi movies, such as the Terminator and Matrix series. Now that AI responds very intelligently to us in our own language, some of us are quite freaked out. I will address this concern/fear in the last part of this post, but first let’s examine the pros and cons of AI, and whether there are short-term concerns about using this new technology.
AI is computer code augmented by algorithms that humans use to direct its output in hopefully beneficial ways. What is freaky about AI is that no one can look inside the code and predict exactly what output it will produce. Perhaps this is understandable when one realizes that AI has access to virtually the sum total of human knowledge via the internet. Also, AI has absorbed the logic and reasoning inherent in language by being trained as a large language model on artificial neural networks, which has resulted in its human-like responses. In effect, we have created a technology that has, in many ways, greater intelligence than any one human, because it can access vast amounts of knowledge on the internet and elsewhere at incredible speeds. With this in mind, let’s look at some of the very big pros that are and will be emerging from AI:
1. Even today one can use ChatGPT4 for virtually limitless applications. It will surf the web for you and give you a summary of your query along with links to the actual web pages. I’ve shown examples of this in Part One. Since then I’ve used it to learn some German and played with it to discover some of its built-in biases. You can ask it lifestyle and medical questions, and virtually anything else you can think of.
2. AI will certainly become capable of being your own personal assistant for understanding the world. In this way it will augment the intelligence of every human with access to it. Some may even use it as an antidote to loneliness since the AI, once it has access to your personal likes, dislikes, skills, background, etc., will be able to respond in a caring, empathic, and helpful way.
3. There are and will be incredible breakthroughs in numerous fields, such as medicine, complex industrial processes, agriculture, law, music, art, transportation, physics, cosmology, and more. One concrete example, already happening today, is the discovery of new pharmaceuticals by exploring different ways proteins can fold. Given the dataset of all known protein foldings, AI can propose novel ones that advance recombinant DNA technology, monoclonal antibodies, gene and cell therapy, and cell-receptor constructs.
4. In agriculture, AIs have been trained to identify weeds among many other plants with high accuracy and then zap those weeds with lasers. Farmers are using this technology to kill weeds without harming their crops or using harmful chemicals. One example is the Carbon Robotics Autonomous Weeder, which uses twelve cameras and eight lasers to kill over 100,000 weeds per hour!
These are just a sampling of the many positives that are and will be coming from AI. Like any new technology, the benefits of the technology become more and more apparent as the technology improves and society develops new ways to improve lives. No one in the early days of past technologies would have predicted the eventual outcomes in the railroad, automobile, telephone, aviation and computer industries.
The benefits to society from AI will be massive, but what about the disadvantages? Here are a few:
1. AI is being developed by a handful of very large, powerful and influential companies, such as Microsoft, Google, Meta, Tesla and a few others. They have already amassed unprecedented power and influence over society, and AI will significantly accelerate this trend. Remember, it is these companies that have the massive computing power and talent needed to advance AI technology. It is these companies that create the algorithms that instill a moral code of sorts into AI. After using it, you might notice that ChatGPT4 often replies that it will not disparage any one race or group of people or take sides in political discussions. But since it is developed by generally younger, more progressive techies, it exhibits a subtle left-leaning bias. Humans create and give direction to AI, and it is inevitable that human bias will creep in. We should be aware that these huge tech companies are amassing even more power and will be even more influential in the way society is shaped in the future.
2. Consider also that the U.S. is one country with its own values. China is also working on AI, and one way it will almost certainly use it is to amp up the surveillance of its citizens. In the wrong hands, AI could make Big Brother a dystopian reality. Another possible negative is that AI will accelerate the trend of spending more and more of our precious hours of life tied to our digital creations. This will most certainly happen to some extent, but AI’s benefits far outweigh this negative. Just like all new technologies, we will soon make it fit into our lives the way we decide.
3. A third negative is that AI could present new dangers to privacy and security. Since AI has access to the entire internet, and we will begin to allow it access to our personal data for medical and other benefits, privacy concerns will become paramount. This concern is compounded by the fact that AI could be used by bad guys to better hack personal and corporate data.
4. Lastly, a current negative about interactive AI chatbots is that they create fictitious content. They make things up - not always, but when they do, it sounds very convincing. AI scientists say the AI is “hallucinating” when it generates its own made-up information. This happens because these systems learn by always trying to predict the next word, based on patterns in massive amounts of online human writing. Sometimes their predictions are wrong, but they sure sound convincing. I think this problem will eventually be solved or managed, but for now it remains a problem. Microsoft’s Bing ChatGPT4 has the best approach to this problem because it provides links to the sources of its information. Thus one can go directly to the internet source and confirm the information for yourself.
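For readers who like to see the idea in action, here is a deliberately tiny, hypothetical sketch of the “predict the next word” principle described above. It is not how ChatGPT4 actually works internally (real models use neural networks over billions of words); it just uses a toy word-pair table to show how a system that only ever predicts a plausible next word can produce fluent-sounding text that was never in its training data - a miniature version of hallucination. All names and the training sentence are invented for illustration.

```python
import random
from collections import defaultdict

# Tiny invented "training corpus" (purely illustrative).
training_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
).split()

# Build a bigram table: for each word, which words followed it in training.
next_words = defaultdict(list)
for current, following in zip(training_text, training_text[1:]):
    next_words[current].append(following)

def generate(start, length=6, seed=0):
    """Repeatedly pick a plausible next word from the bigram table.

    The result is always locally plausible, yet whole sentences may never
    have appeared in the training text - they are 'made up'.
    """
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break  # dead end: no observed continuation
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

Even this toy model can emit sequences like “the cat sat on the rug” that never occurred in its training text, which is the basic mechanism behind convincing-but-wrong output.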
The more pressing concern, or perhaps fear, of many people is that AI will somehow take over. This is an understandable fear since AI is already more intelligent in some ways than any single human. Also, although humans create the code that makes AI function, computer scientists cannot predict what the outcome of any query to an interactive AI such as ChatGPT4 will be. We cannot understand all that is happening inside the black box that is AI. For this reason people can become freaked out and fears can run amok. When there is uncertainty and risk, human minds often think the worst. This is certainly happening for some regarding AI. Let’s look more closely at this problem.
Let’s start with what it means to be alive. In Post No. 4, Reba the Amoeba, I described how even a single-celled organism like an amoeba is geared toward survival by seeking out nutrients and avoiding toxins. Every life form on earth does this. Currently AI is simply computer code directed by humans via other computer code in the form of algorithms. Could AI become alive, like Reba the Amoeba, and seek out things that contribute to its own survival? Could it take over and start writing its own code for this purpose? Yikes!
It took 700 million years or so for the first life form to evolve on earth, and another 3+ billion years until humans appeared. Could AI somehow replicate this in a few short years? It is a given that AI will become as intelligent as humans, and more so, but will it become alive, like Frankenstein’s monster? Despite the fact that we can imagine such a possibility, and that there is some uncertainty inherent in AI’s generative and creative abilities, my answer is no, it will not become alive. But …
A few decades from now, there is a possibility that some mad scientist with access to massive computing power, such as only the giant tech companies have today, might teach his advanced robots to develop an ever-present sense of self that they must protect. This would be possible in robots that have senses similar to humans, such as vision, hearing, touch, taste, and smell. It would not be possible in a bunch of silicon chips inside super-fast, powerful computers. Humans have a sense of self in this world that is completely dependent on these five senses. Take these senses away and our sense of self goes away as well. There are robots today that have many of these senses and can do some amazing things. They will not become alive like any organism on earth today, but they might, if trained long enough by some insane humans, develop a sense of self that would mimic the survival instinct of life. Here’s my solution: keep any mad scientists away from massive computing power so they cannot do bad things with robots! This brings up another controversial concern about AI - how it should be regulated.
AI is being developed not just in the U.S. but in many places in the world, especially in China. We may want some specific regulations within the U.S., but the more pressing need is for international cooperation and regulation of AI. We do this now for technologies that might become dangerous for humanity, such as bioweapons, chemical weapons and nuclear weapons. Let’s add AI to this list.
One reason for international cooperation is the potential use of autonomous drones trained to eliminate specific targets. Recall from earlier in this post the Carbon Robotics Autonomous Weeder, which zaps weeds it has learned to identify with incredible accuracy. This wonderful advantage of AI could turn into a danger if used in warfare. We already use drones in warfare that are incredibly lethal. The argument will be made that AI will make drone strikes more humane because they will have less human error and kill fewer non-combatants. But completely autonomous drones, needing no human input, could be deployed in massive quantities, effectively creating a new weapon of war. Imagine a swarm of killer robots in the sky coordinating with one another to maximize their effectiveness. Regulation should ensure that AI drones in warfare have some form of human supervision. Imagine if our nuclear weapons did not require some form of human intervention before launch.
International regulation will be needed in the future to avoid the existential threat from AI developed by some mad scientist who creates robots with a simulated sense of self. This threat is similar to the threat of a mad scientist/terrorist developing a nuclear weapon. There are regulations that make this difficult, on top of the difficulty of gathering the right resources and technological know-how for a bad guy to deploy a nuclear weapon. There are analogous regulations regarding bioweapons.
My favorite sci-fi author is Isaac Asimov, of Foundation series fame. He is known for creating the Three Laws of Robotics, a form of regulation strictly followed in all his novels. Here they are:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey orders given by a human being, except where such orders would conflict with the First Law. (This means a human cannot order a robot to hurt another human.)
3) A robot must protect its own existence, except where doing so would conflict with the first two laws.
Thus, even robots with a simulated sense of self can be controlled with another algorithm: the Three Laws of Robotics! Perhaps we will come to this in a few generations. By that time I sincerely hope that humans will have learned to cooperate on a global scale better than they do now.
I hope I have not freaked you out imagining how AI might become an existential threat. There is an infinitesimal chance of this happening. Yet, because we can imagine it, fear raises its ugly head. I say chop off its head. AI will be providing so many wonderful advantages for society, you will soon get tired of the AI crybabies.

As usual, excellent article and overview. In regards to your last paragraph: 'I hope I have not freaked you out imagining how AI might become an existential threat'. Unfortunately, the whole goal of the tech companies and their marketing is to freak people out. It is all hype, since there is no "intelligence" in AI, as the word 'intelligence' is defined. Most of what passes for AI is nothing more than data/information mining and management (e.g., bots) along with knowledge-based engine technologies that have been around for decades (note, knowledge is NOT intelligence). Here are a couple of recent business articles that prove my point:
AI has already surpassed human capabilities in several domains; which jobs are next? ( https://www.foxbusiness.com/technology/ai-already-surpassed-human-capabilities-several-domains-which-jobs-next…