Thinking About AI
My name is Enoch Maarduk.
I am the CEO of PHANTASM, which is an acronym for Preventing Horrors and Nightmares Through Active Spectrum Monitoring. It was a movie too, but since it's a regular dictionary word, we use it.
As our previous, and now comatose, CEO phrased it, our mission statement is:
When a person, or a group of people, seeks to exploit the weaknesses of their fellow humans by utilizing technology in a manner that attempts to tap into the electromagnetic (EM) spectrum to directly influence human behavior in a negative fashion, we intervene.
Obviously AI (artificial intelligence), or machine intelligence or algorithmic intelligence or whatever you want to call it, is a form of EM wave manipulation.
At PHANTASM we acknowledge that words (and sound, and images, and video) have power, and all of these must reflect truth, virtue, wisdom and a higher consciousness.
Keeping an eye on that is not easy, but we do the best we can.
The words below are my opinion, and everyone has one — hope mine doesn’t stink…
After a long day at work I settled into my favorite chair, relishing the solitary adult beverage I normally permit myself, and sat scanning the news on my digital device of choice, looking desperately for something that, upon reading, wouldn't prompt me to run screaming into the night calling for my binky. I failed, miserably.
Staring up at me from the pixelated depths of digital chaos was a panel of erstwhile tech entrepreneurs speaking out about the dangers of artificial intelligence (AI) and expressing concern about its inappropriate usage.
What, the attentive reader may wrathfully ask, is "inappropriate usage"? Well, let's take a look at AI and how it is currently used, and how it might be used in the very near future.
Any human activity that is augmented by software or hardware is, technically speaking, interacting with AI. The thermostat that you set at 72 degrees, and that shuts off when the room cools down to that temperature, is an AI. It has intelligence because it has been programmed to shut off when it senses an ambient air temperature of 72, and it is not human (it is artificial).
Now, granted, it is a very, very dumb AI, and if you asked your thermostat to solve "1 + 1" you would have many electric bills to pay before you ever got an answer.
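For the curious, here is a minimal sketch of that thermostat's entire "intelligence." Everything in it is a hypothetical stand-in (the sensor and switch hooks are made up), but it makes the point: sense, compare against a human-chosen number, act.

```python
SETPOINT_F = 72  # the "norm" here is chosen by a human, not by the AI

def thermostat_step(read_room_temp, cooling_on, cooling_off):
    """One pass of the control loop: sense, compare, act.
    read_room_temp, cooling_on, and cooling_off are hypothetical
    hooks into the hardware."""
    if read_room_temp() > SETPOINT_F:
        cooling_on()   # room is too warm; keep cooling
    else:
        cooling_off()  # reached 72; shut off
```

That is the whole decision tree. Asking it to solve "1 + 1" fails not because it's slow, but because the question simply isn't in there.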
That little notice that appears on your car’s display that says your right front tire is under-inflated? AI.
The text message your security system sends you when the back door is opened at 2:12 a.m. by your now-grounded daughter? AI.
Stop for a moment to think about all the messages you receive (on a digital display, by email, by text message, or otherwise) that entail an AI or other automated system passing sensory data (often interpreted) forward to you.
It will be more than you think, until you think about it.
There is nothing wrong with this, because the AI is in passive mode, simply gathering facts. There is no danger of Alexa or Siri ordering a pizza without being told to do so.
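If a sketch helps, passive mode looks something like this (the door sensor and texting hooks are hypothetical stand-ins). Note what is absent: nothing in it can act on the world.

```python
from datetime import datetime

def passive_monitor(back_door_opened, send_text):
    """Passive mode: sense, interpret, pass the data forward.
    There is no lock_door(), no call_police(), no pizza ordering;
    back_door_opened and send_text are hypothetical hooks."""
    if back_door_opened():
        stamp = datetime.now().strftime("%I:%M %p")
        send_text(f"Back door opened at {stamp}.")
        # Grounding the daughter remains a strictly human decision.
```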
The next advance is to program the AI so that it not only gathers data (either directly through sensory input, or indirectly through humans entering it), but also compares that data against norms and standards as it 'interprets' the data, offers recommendations, and, if permission is granted, takes proactive steps to "solve" any potential issues. As I write this diatribe, all those recommendations are still based on a wide range of algorithmic possibilities created by humans; the AI simply processes the data faster and can assess more of those recommendations, across more scenarios, than humans can.
Oh, and we have a great big argument brewing about what those 'norms and standards' the AI uses for reference actually are, and who is responsible for establishing those values. Trust me when I say it is done by humans and not by AI.
No matter what you want to think, though, the AI still makes recommendations for future action that are wholly based on a range of options preset by its human programmers.
Not only that, but AIs generally are not permitted (by any human with half a brain) to make decisions that involve action unless that action has already been planned and reviewed by humans as well. Letting an AI automatically shut down a dangerous problem in a nuclear plant is still based on human decision-making scenarios pre-programmed into the system.
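Here is a sketch of what that looks like in practice, with hypothetical names throughout: the AI may recommend anything on the human-reviewed list, may execute only with permission, and has no move at all for a condition that falls off the tree.

```python
# Every executable response below was (hypothetically) planned and
# reviewed by humans ahead of time; the AI selects, it does not invent.
APPROVED_ACTIONS = {
    "coolant_pressure_high": "open_relief_valve",
    "core_temp_critical": "initiate_auto_shutdown",
}

def respond(condition, permission_granted, execute):
    """Pick a pre-approved action for the condition, if one exists."""
    action = APPROVED_ACTIONS.get(condition)
    if action is None:
        # Off the decision tree: alert the humans and do nothing.
        return f"ALERT: no approved response for {condition!r}"
    if not permission_granted:
        # Recommendation only; a human must sign off.
        return f"RECOMMEND: {action}"
    return execute(action)  # a pre-planned, human-reviewed step
```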
The great risk in AI is when – at some point in the future – AI use is widespread and not under the careful and constant supervision of intelligent and discerning humans. The great risk is when humans abdicate responsibility and allow AI not only to gather, interpret and make decisions based on known options, but also permit AI to make decisions which may not be found on the preset and human-programmed decision tree.
In humans we like to call this “thinking outside the box,” but in AI this could be catastrophic since the ripple-down effect on humans cannot – despite the processing power of AI – be adequately calculated.
The Utopian vision of an AI is basically one in which the AI operates and thinks just like a very smart human, only faster, bigger, better, stronger, and all that. However, that is obviously not enough, because you cannot rely on an AI that makes decisions based solely on the logic and reason of advanced programming. Hey, I think that was a Star Trek episode!
In other words, we are creatures of body, mind, and soul: humans have emotions and passions, as well as ethics and morality. Unless you can teach that to an AI, you will have AI-rendered decisions and actions based only on the best possible outcome, and the best possible AI outcome may not necessarily be the best possible human outcome.
More disturbing than this is the underlying basis of the AI's programming. You can easily make a rather disturbing argument (by examining modern human relations throughout the world) that global AI systems will NOT agree on fundamental precepts or approved actions. Sadly, AI systems around the world will be built on biased and prejudiced programming attributes. Ask yourself this: do you honestly expect global AI systems to agree when the programming basis is influenced by any of the world's largest adaptive strategies, such as Christianity, Islam, Judaism, Hinduism, Buddhism, Capitalism, Socialism, Communism, Atheism, et cetera?
These belief systems do not agree, so can we honestly expect the AI to agree?
Most people have the notion that humans have made incredible progress since our cave-dwelling days simply by being average and permitting a small percentage of gifted humans to create technological advances from which we all benefit. Well, that's the idea anyway. The reality is often worse, because even the so-called intelligent humans fail to see the ripple-down impact of their glorious technological innovations. I know you can think of many things that fall under that category.
When the AI genie is let out of the bottle, we run the risk that human reliance on this form of intelligence will not only create a dangerous dependency, but also allow average humans to control technologies that are far beyond their comprehension.
Worse than that, I can easily imagine those average humans, fully cognizant of their average nature, handing control of the AIs over to humans supposedly "gifted" enough to understand the technology, trusting such people to always make decisions about the AIs that benefit everyone equally.
What could possibly go wrong with that scenario?
The truth is, many intelligent people do not trust that average people can control the technology, nor (in their heart of hearts, I believe) do they trust their intelligent brethren either. Getting humans to do the right thing is a job that even an AI would have trouble executing.
I think using the word "executing" in the same sentence as "AI" is probably one of the things bugging all modern-day intelligent thinkers.
AIs are coming (are here), and we need standards of performance and structure. If AIs will eventually be able to do all the things that the techies now wax poetic about, then we are creating beings that will be godlike.
Anyone who has studied mythology knows how irrational and petty the gods actually were, and being a god does not necessarily mean you act like one.
Anyway, I have to go now, because I just got a text message from my pharmacy and an email from my refrigerator (out of milk); apparently my car just auto-subscribed to some kind of music service, and my robotic vacuum just ate my wife's favorite rug.
AI, AI, it’s off to work we go.