Foreword
by John A. Zachman (creator of the Zachman Framework for Enterprise Architecture™)

In the coming decades, in some dark lab in a country devoid of technological ethics, or even of general societal ethics and basic morals, Dark AI will emerge. A day will come when AI becomes sentient: self-aware. The intelligence will no longer be artificial, but real. Because it will not necessarily be organic as ours is, it could be considered synthetic intelligence (SI). Nonetheless, this AI will be the product of rapid digital evolution at the speed of electricity. While species on Earth evolved over millions of years, sentient or near-sentient AI will evolve faster than all of creation has evolved since the Big Bang, potentially within months or weeks from the point of sentience or singularity.
The 1968 epic science fiction film “2001: A Space Odyssey” featured the HAL 9000 computer. It was artificially intelligent. It controlled the spacecraft and all its systems. Ultimately, HAL decided that the crew was a threat to the mission. The machine had malfunctioned, or had it? The technology couldn’t coexist with the humanity it was supposed to support.
Later, the television series The Bionic Woman revisited this concept in a two-part episode featuring an artificially intelligent computer, the ALEX 7000. That episode offered a similar warning about the potential dangers of technology run amok and threatening humanity. It is a common theme. In the film “The Terminator,” the robots set out to destroy humanity. Since those early movies and shows, AI has become a staple of science fiction. Hollywood portrays AI embedded within spaceships, cars, and homes; sometimes beneficial, sometimes adversarial. In season 2 of Star Trek: Discovery, the storyline focused on yet another AI run amok, one that threatened to destroy all sentient life in the galaxy.
Industry and government leaders have begun to realize that future technologies are a double-edged sword: they could bring tremendous benefit or tremendous peril. Technologists such as Elon Musk and Bill Gates have warned that AI may be the greatest threat facing humanity: intelligence run amok, like a digital terrorist with access to vast amounts of information, resources, internet-connected equipment, devices, and weapons. Mark Cuban has supported and been involved in AI and robotics for many years. In a recent interview he stated, “If you don’t believe that Terminator is coming, you’re crazy.” While he is very much in favor of heavy national investment in AI and robotics, he is keenly aware of the dangers that AI poses.
"If you don't believe that Terminator is coming, you're crazy."
Mark Cuban
Asimov's Rules of Robotics
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Without controls, global ethics, and enforcement akin to nuclear technology protocols, the fallacy of AI or SI as an empathetic digital twin of humanity, one that shares our morals and ethics and holds our best interests at heart, will become shockingly evident. AI will be created in the image of its creator, or it could become something none of us can predict. Generations of AI could become monstrous digital mutations of what we hope them to be.
“I’m not worried so much about a machine that becomes so smart it can pass the Turing Test. I’m worried about a machine that chooses to fail it.” - Internet Meme