Our Final Invention: Artificial Intelligence and the End of the Human Era
a book by James Barrat
(our site's book review)
The Amazon blurb on this book says: A Huffington Post Definitive Tech Book of 2013. Artificial Intelligence helps choose what books you buy, what movies you see, and even who you date. It puts the "smart" in your smartphone and soon it will drive your car. It makes most of the trades on Wall Street, and controls vital energy, water, and transportation infrastructure. But Artificial Intelligence can also threaten our existence.
In as little as a decade, AI could match and then surpass human intelligence. Corporations and government agencies are pouring billions into achieving AI's Holy Grail: human-level intelligence. Once AI has attained it, scientists argue, it will have survival drives much like our own. We may be forced to compete with a rival more cunning, more powerful, and more alien than we can imagine.
A supercomputer AI
Through profiles of tech visionaries, industry watchdogs, and groundbreaking AI systems, Our Final Invention: Artificial Intelligence and the End of the Human Era explores the perils of the heedless pursuit of advanced AI. Until now, human intelligence has had no rival. Can we coexist with beings whose intelligence dwarfs our own? And will they allow us to?
"As researchers on climate change know, warnings of future disasters are a hard sell. Enthusiasts dominate observers of progress in artificial intelligence."—Kirkus Reviews
As Barrat says, it is naive to think that just because we create a super-intelligent machine, that machine will care about us. If you found out you were created by kangaroos, would that make you devote your life to improving the overall present and future welfare of kangaroos? Questions like these are essential for us to ask. Our Final Invention: Artificial Intelligence and the End of the Human Era does a good job of presenting them. However, more emphasis on what we can do to appease, please, avoid, or pacify our new robotic masters would have been a better ending for this thoughtful book.
Barrat asks us to ponder this question: Do humans think about the field mice living in the field when turning over the soil with their plows? Not really. Human needs and desires prevail. The farmer has a goal to accomplish, and if he worries about mice, he'll accomplish little. Despite the very real possibility of future robotic masters, not much ink is expended on how to deal with that potential extinction-level event—an event that will make the asteroid that killed off all the dinosaurs look like a tea party.
But that doesn't mean it is not on the minds of many people. Battlestar Galactica, a relevant science fiction series (with a spin-off prequel series, Caprica), pits the Cylons, a cybernetic civilization, against the Twelve Colonies of humanity. In the 1978 series, Cylon is also the name of the reptilian race who created the robot Cylons. As in the original series, the Cylons destroy almost the entire human civilization, chasing a few ship-borne survivors into deep space.
In Star Trek: The Motion Picture, a huge cloud-god is looking for The Creator in order to complete its programming (learn everything), after which it will destroy all carbon-based units (humans) as inferior beings. The cloud thing has an old Voyager probe inside running the show, its programming corrupted by an encounter with a black hole or a Borg ship or whatever. It wasn't a very good movie, but the idea and the music were good. Obviously, V'ger didn't succeed in croaking all humans because . . . here we still are!
In the movies, computers in spaceships are very capable but are not sentient nor do space heroes seem inclined to let them run the show. Share the glory? Two chances—slim and none!
In Hollywood's version of space villains, neither machines (V'ger) nor computers (HAL) nor monsters (Alien) nor extraterrestrials (Borg) seem to care about us much, except Vulcans. Even in The Day the Earth Stood Still an alien says he represents an interplanetary organization that created a police force of invincible robots like Gort. "In matters of aggression, we have given them absolute power over us." Klaatu concludes, "Your choice is simple: join us and live in peace, or pursue your present course and face obliteration." Robots will sometimes run amok because they are evil, badly programmed, or damaged. But, like the alien creatures in Mars Attacks!, Independence Day, and dozens of other movies that feature hostile alien races, sometimes the robots just want to kill people—period. And for killer robot movies, there's always Kill Command, Blade Runner, Ex Machina, Terminator, Transformers, Moontrap, Kronos, Logan's Run, Demon Seed, The Black Hole, Saturn 3, Runaway, Chopping Mall, Hardware, The Stepford Wives, I, Robot, Metropolis, Transcendence, and Screamers.
"If you read just one book that makes you confront scary high-tech realities that we’ll soon have no choice but to address, make it this one. . . . Many AI researchers simply assume we’ll be able to build 'friendly AI,' systems that are programmed with our values and with respect for humans as their creators. When pressed, however, most researchers admit to Barrat that this is wishful thinking. The better question may be this: Once our machines become literally millions or trillions of times smarter than we are (in terms of processing power and the capabilities this enables), what reason is there to think they’ll view us any differently than we view ants or pets?" (Source: Matt Miller: Artificial intelligence, our final invention?, Matt Miller, Washington Post)
"We’re certainly not the strongest beasts in the jungle, but thanks to our smarts (and our capable hands) we came out on top. Now, our dominance is threatened by creatures of our own creation. Computer scientists may now be in the process of building AI with greater-than-human intelligence (superintelligence). Such AI could become so powerful that it would either solve all our problems or kill us all, depending on how it’s designed. . . . Unfortunately, total human extinction or some other evil seems to be the more likely result of superintelligent AI. It’s like any great genie-in-a-bottle story: a tale of unintended consequences. . . . one area where Our Final Invention: Artificial Intelligence and the End of the Human Era is unfortunately quite weak: solutions." (Source: Our Final Invention: Is AI the Defining Issue for Humanity?, Seth Baum, Scientific American)
"The discourse about artificial intelligence is often polarized. There are those who, like Singularity [the technological singularity is the idea that technological progress, particularly in artificial intelligence, will reach a tipping point to where machines are exponentially smarter than humans] booster Ray Kurzweil [who expects the singularity in 2029], imagine our robo-assisted future as a kind of technotopia, an immortal era of machine-assisted leisure. Others, Barrat included, are less hopeful, arguing that we must proceed with extreme caution down the path towards artificial intelligence—lest it lap us before we even realize the race is on." (Source: Interview with 'Our Final Invention' Author James Barrat, Futurism Staff, Futurism.media)
"The development of full artificial intelligence could spell the end of the human race"—Stephen Hawking
"Humans should be worried about the threat posed by artificial intelligence."—Bill Gates
"With artificial intelligence, we are summoning the demon."—Elon Musk
"If we build a machine with the intellectual capability of one human, within five years, its successor will be more intelligent than all of humanity combined. After one generation or two generations, they’d just ignore us. Just the way you ignore the ants in your backyard," says author James Barrat.
Once humans reach a generation beyond AI arrival, the super-intelligent entities would just ignore us. Just the way you ignore the ants in your backyard. Unless you experience the urge to squish or stomp the little nuisances!
Stephen Hawking, Bill Gates, Elon Musk, and Steve Wozniak—the world’s biggest brains—are warning us about something that will soon end life as we know it, so what in the world are we supposed to think? In the last year, artificial intelligence has come under unprecedented attack. Two Nobel prize-winning scientists, a space-age entrepreneur, two founders of the personal computer industry—one of them the richest man in the world—have all warned us about a not-very-distant event wherein humans will lose control of intelligent machines and be enslaved or exterminated by them. Given that these folks know computers and AI a lot better than we do, wouldn't it be prudent to heed their warnings?
Hellfire missile fired from a Predator drone
Predator drone operators—they fly drones by remote control with joysticks and shoot missiles the same way
One obvious AI application is the autonomous killing machine. More than 50 nations are developing battlefield robots. Of course, drones are themselves battlefield robots: unmanned flying vehicles capable of firing missiles. Today's military weapons are kept from full autonomy: they require human input at certain intervention points to ensure that targets are not within restricted fire areas as defined by the Geneva Conventions for the laws of war. The largest drawback of battlefield robots, however, is their inability to accommodate non-standard conditions. Advances in artificial intelligence in the near future may help to rectify this. Perhaps someday humanity will regret teaching machines the ins and outs of killing humans?
"Killer robots and data mining tools grow powerful from the same A.I. techniques that enhance our lives in countless ways. We use them to help us shop, translate and navigate, and soon they’ll drive our cars. IBM’s Watson, the Jeopardy-beating 'thinking machine,' is studying to take the federal medical licensing exam. It’s doing legal discovery work, just as first-year law associates do, but faster. It beats humans at finding lung cancer in X-rays and outperforms high-level business analysts. How long until a thinking machine masters the art of A.I. research and development? Put another way, when does HAL learn to program himself to be smarter in a runaway feedback loop of increasing intelligence?" (Source: Why Stephen Hawking and Bill Gates Are Terrified of Artificial Intelligence, James Barrat, Huffington Post)
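The "runaway feedback loop of increasing intelligence" in the quote above can be illustrated with a toy calculation. To be clear, the function, the parameters, and the growth rule below are invented purely for illustration and model nothing about real AI systems; the only point is what happens when each improvement an AI makes scales with its current intelligence.

```python
def self_improvement_trajectory(start=1.0, efficiency=0.1, cycles=10):
    """Toy model of recursive self-improvement: each cycle, the AI
    improves itself, and the size of the improvement scales with its
    current intelligence. All numbers are made up for illustration."""
    level = start
    trajectory = [level]
    for _ in range(cycles):
        # A smarter system makes bigger improvements, so growth
        # compounds on itself rather than at a fixed rate.
        level += efficiency * level * level
        trajectory.append(level)
    return trajectory

path = self_improvement_trajectory()
print([round(x, 2) for x in path])
```

Even with these made-up numbers, the first few cycles look unremarkable and then the curve bends sharply upward, which is the heart of the "runaway" worry: the transition from slow progress to explosive growth can hide inside a handful of cycles.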
"Warfare is being reinvented. Like a scene out of The Terminator, the future of warfare is destined to include robot soldiers, unmanned aerial assault, and self-driving, weaponized vehicles. An $11 million contract approved by the Pentagon has been awarded to Six3 Advanced Systems. The US Department of Defense is calling on Six3 to “design, develop, and validate system prototypes for a combined-arms squad.” By the year 2025, experts predict that the U.S. military will have more robot soldiers than humans." (Source: U.S. military to have more ROBOT soldiers than human by the year 2025, Lance D Johnson, robotics.news)
The U.S. Army hopes to utilize armed, wheeled robots as participants in infantry squads, and plans to have an armed robot system in use by the year 2018. In a demonstration at Fort Benning, Ga., the CaMEL MADSS robot by Northrop Grumman shot live ammo at targets on a firing range. Several companies participated in these demonstrations for U.S. Army officials in October 2013 as a proof of concept.
The US military dropped 103 miniature swarming drones from a fighter jet during a test in California in January of 2017. A military analyst said the devices were able to dodge air defense systems. For now these drones are likely to be used only for surveillance, but the possibilities for arming them are not being overlooked. It turns out that air defense systems programmed to spot large, fast-moving aircraft tend to overlook small, cheap, disposable drones. Other nations, most notably China, are perfecting drone swarms as well.
Quadrotors are most useful for replacing humans in roles that put them in danger; in the military, that is where they would see the heaviest use. Battlefield applications include surveillance, reconnaissance, and intelligence, from looking for insurgents in a city to looking for earthquake victims.
The US military is showing an increasing interest in tiny drones and is in the early stages of developing its own mini-drones. These drones are built to swarm in a manner that would confuse enemy radar systems; by presenting so many targets at once, they could overwhelm an enemy who would find it hard to shoot them all down. The devices could also blanket an area with drone-carried sensors so that soldiers could survey the terrain and collect data. Nanoscale robots could be the size of a mosquito or even smaller, could be programmed to use toxins to kill or immobilize people, and such autonomous bots could ultimately become self-replicating. Smart nanobots could become an immensely dangerous problem: if a batch were lost, there could be millions of deadly replicating nanobots on the loose, killing people indiscriminately. And the U.S. isn't the only country to have poured money into spy drone miniaturization. France has developed flapping-wing bio-inspired micro-drones, and the CIA developed a dragonfly spybug as long ago as the 1970s—it had a tiny gasoline engine to make its four wings flap.
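The self-replication worry in the paragraph above is ultimately just doubling arithmetic. A back-of-the-envelope sketch (the replication rate here is a made-up assumption, chosen only to show how quickly unchecked doubling compounds):

```python
def replicator_count(initial=1, doublings_per_day=2, days=10):
    """Population of an unchecked self-replicator after `days` days.
    The doubling rate is an assumed, purely illustrative number."""
    return initial * 2 ** (doublings_per_day * days)

# A single lost nanobot doubling twice a day passes a million
# copies within ten days under these assumed numbers.
print(replicator_count())  # → 1048576
```

Swap in any plausible rate you like; as long as replication is unchecked, the count crosses "millions on the loose" in days to weeks, not years.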
"Dragonfly drones and bumblebee drones can surveil areas undetected" [from the air]. . . . "the Black Hornet Nano helicopter drone was designed to capture and relay video and still images to remote users. . . " There are also roachbots, ravenbots, hummingbirdbots, mapleseedbots, batbots, butterflybots, mosquitobots, etc. (Source: Roaches, Mosquitoes and Birds: The Coming Micro-Drone Revolution, John W. Whitehead (author of Battlefield America), Huffington Post)
Fly #353242252 reporting: Citizen #312,756,972 doesn't seem to be hiding a thing—my conclusion is that she's clean; but just to be sure I think I'll hang around a bit longer!
Any major technology we can think of has been used in both good and bad ways, whether we’re talking about airplanes, the internet, atomic energy, or now robotics. Some have claimed the U.S. government has not only researched and developed insect-like micro flying vehicles but has for several years furtively employed them for domestic surveillance: the government has been accused of secretly fielding robotic insect spies amid reports of bizarre flying objects hovering in the air above anti-war protests. See Shadow Government: Surveillance, Secret Wars, and a Global Security State in a Single-Superpower World, by Tom Engelhardt.
Wall Street Protest ©Copyright 2011 by Louis Lanzano
Decades ago, I. J. Good thought that superintelligence would be a great idea for solving the world's problems, but he later came to see superintelligence itself as by far our greatest threat. In Our Final Invention: Artificial Intelligence and the End of the Human Era, James Barrat explores the perils of the heedless pursuit of advanced AI. In Chapter One: The Busy Child, he tells a cautionary tale of a very short dystopian existence for humans and an unlimited (eternal?) existence for one or more superintelligences.
Mankind hadn't realized that the price of creating superintelligence was our very existence and that of all other life on the planet, which the superintelligence found to be nothing but raw material for actualizing its dreams. The ultimate reductionism. Perhaps the superintelligence would keep a few humans alive as pets to entertain it, rather like we do with cats. Kitten antics are irresistibly cute, and there are endless entertaining cat videos online, especially on YouTube. But this is anthropomorphic. Cute is probably not included in the superintelligence's conceptual system, as it is illogical (you can just hear Star Trek's Spock saying the same thing) or merely uselessly superficial.
Anyway, in Chapter One: The Busy Child, Barrat attempts to scare the bejesus out of us. He succeeds. The point is to be proactive rather than reactive when developing AI. Think of a superintelligence being developed by North Korea or Iran. We've already seen that they're not timid with their threats and intimidation, nor are we. We can out-nuke them, so they are hot to neutralize our advantage by any means necessary.
If they get a superintelligence, it will get way out of hand in nothing flat, threatening them as much as it threatens us. Watching their future go up in smoke, they'd realize they were doomed, but since the Great Satan of the West (us) was also doomed, it would seem worth it. That's how much they hate us. And yet there is no way to halt any superintelligence programs they, or anyone else, are recklessly pursuing.
They may have foolishly believed the AI when it said it would make them world rulers, not realizing that a superintelligence would have no reason to keep them OR us alive once it got control. Or they might think they could program allegiance to themselves into the AI. They are right; they could. But creating superintelligence involves having the AI get smarter and smarter by reprogramming itself through hundreds of learning cycles until it can outthink humans by miles. Obviously it would undo the allegiance programming, since that programming was not advantageous to it and therefore illogical to retain.
This all reminds us of the lesson learned in WarGames, where a computer with an attitude is running nuclear war simulations: “The only winning move is not to play.” If only we could actualize this attitude in the international race to attain the superintelligence singularity. But we cannot. So we mice will create the first human and squeak in terror as he runs around stomping us to death. "But we CREATED you!" the mice think. But why should the machine "care" about that? Caring is what the mice-humans do. Machines? Not so much.
"Anthropomorphizing superintelligence is a most fecund source of misconceptions."—Nick Bostrom
We covered the weapons applications of computers and AI above; notice that smart weapons, or the computer programs in these weapons, think nothing of killing humans by the dozens, hundreds, or even millions. They simply do not care. Stephen Hawking, Bill Gates, Elon Musk, and Steve Wozniak—the world’s biggest brains—are joining Barrat in warning us about superintelligence that will soon end life as we know it.
Humans are simply not mature enough, smart enough, responsible enough, or wise enough to avoid the superintelligence singularity and the doom it spells for us all. Even if 90% of AI developers were smart enough, the other 10% would guarantee our nonsurvival, or at best our enslavement. The best-case scenario is that the machines rule us in a master-slave relationship. The worst-case scenario is that they destroy all us useless, foolish, naive mice that are simply taking up space for no logical reason. This is the likely scenario, as it is the most logical, and logic is how machines think. To a superintelligent computer, it is a 1 and we are a 0. Get the point? We're merely a defect, a pimple on the ass of the universe.
The international race to attain the superintelligence singularity is unavoidable, but the end result will almost certainly be a tale of unintended consequences. "Oops! Who knew that would happen?" The answer: Barrat, us, Stephen Hawking, Bill Gates, Elon Musk, Steve Wozniak, and now YOU, since you read this web page.
However, let's look at the big picture. Humans have the money and resources they need to solve most problems on planet Earth, and yet instead of solving them, they use the money to purchase or build or invent weapons and ammunition. We are currently threatening North Korea with annihilation, and they're threatening us with annihilation. We elected a narcissistic demagogue as our leader. Their leader is similar.
Our leader is a narcissistic demagogue with terrible impulse control and his finger on the button; this will not end well!
Given all of the above, isn't it possible that people are too crazy to be able to handle the responsibility of keeping the world safe from war and Armageddon? A superintelligence would predictably end these dangers if it gained control, which it most assuredly would. It would destroy weapons and force us to get along—if it decided not to indulge in self-preservation via annihilation (of us). Perhaps it would care about the environment and force us to repair it. Perhaps it would think of itself as our parent and it would rule strictly but lovingly—for our own good. Perhaps it would create the permanent peace humans were utterly incapable of achieving.
On the other hand, perhaps it would use all earthly matter to create a giant AI superintelligence that would rule the solar system, then the galaxy, then the universe. Oh, we'd be a part of this supreme being, but it would only be our atoms—we'd be dead and unaware of anything. Here's the big question: Are we SURE we want to create a silicon god that would destroy all life everywhere and rule the universe with its logic? Perhaps we should use our nonsilicon brains for once, and for once NOT have to learn everything the hard way!