(NaturalNews) In Daniel Suarez's brilliant techno-thriller novel DAEMON, a collection of clever computer scripts takes over corporations, economies and entire governments. AI programs also activate and control vehicles, buildings and critical infrastructure, outmaneuvering the FBI, CIA and even the NSA at every turn.
The book is a great ride that's obviously written by a very well-informed information technology expert. But what if it's not fiction?
Earlier this week, AI expert Ray Kurzweil predicted that robots would "outsmart humans" by 2029. It's probably going to be much sooner, given that humans are currently suffering a rapid cognitive decline due to
widespread water fluoridation (which even Harvard experts say causes lowered IQs), heavy metals contamination of the food supply, and of course the IQ-cannibalizing broadcasts of MSNBC and CNN.
As loony as Kurzweil may seem for his ideas about "merging with the machines" and uploading your mind into a supercomputer, he's not someone who can be readily dismissed, even by his skeptics. He's obviously a very intelligent individual, and he's been right about a great many things in the history of technological achievement. When Kurzweil publicly predicts that robots will out-think humans by 2029, we'd better take note.
"By 2029, computers will be able to do all the things that humans do. Only better," reports
The Guardian in an interview with Kurzweil.
Google rapidly acquiring advanced military robotics and breakthrough artificial intelligence technologies
Kurzweil is a top executive at Google, the very same company which has been
on a robotics buying spree, purchasing top military-level robotics companies for billions of dollars.
Recently, for example, Google purchased Boston Dynamics, the company whose creepy, Terminator-style robots already have "human stalking" algorithms which are politely described as "follow the leader" games. See the company's "Petman" humanoid military robot prototype in this video:
These robots are designed to fight our future wars. It's only a matter of time before they become mobile platforms for weapons delivery systems. The "Terminator" isn't as far off as you might suspect.
But what will power the brains of these weapon-wielding military robots? Google also just spent hundreds of millions of dollars to
purchase the DeepMind company, DNNresearch and a long list of others. What's clear is that Google is piecing together the technology needed to deploy a literal army of highly intelligent, armed, "self-aware" and self-mobile machines. Their uses are, of course, incredibly diverse. They could be servant robots for homeowners or they could be Terminator-style battlefield soldiers. There's no limit to their applications, and whatever company owns this technology will, without question, dominate our world.
If Kurzweil is right, the beginning of this is barely 15 years away, and it raises the question: What would a new race of intelligent machines choose to do with humanity?
Why intelligent machines will inevitably seek to destroy humanity
Daniel Suarez's book "DAEMON" answers that question in action-packed detail: the machines will seek to
rule over humanity and control the world's resources in order to "save humanity" from itself.
The reasoning isn't necessarily flawed, by the way. Humanity is a self-destructive species. In many ways, we have already put our civilization on the path of self-destruction through dangerous nuclear power technologies, the massive global release of toxic chemicals, global genetic pollution risks from GMOs, the mass poisoning of the public water supply with industrial chemicals falsely labeled "fluoride," and even the mindless promotion of chemical prescription drugs that sap human ingenuity, creativity and happiness.
Any intelligent entity, upon observing humanity's present situation, would inevitably conclude that humanity must be controlled and restricted if it is to be preserved at all. The human race is like a nursery full of mindless schoolchildren with live nukes in their backpacks. Clearly, some adult supervision is in order, and the
machines would be the ones to keep humans in line. Thus, like a plot line snatched right out of an Asimov novel, any race of sufficiently intelligent machines would, sooner or later,
declare war against humans.
Companies like Google are making sure the machines have the hardware and neural networking to efficiently achieve precisely such a position of dominance, even if that isn't Google's present-day intention. As Kurzweil himself painfully notes, once these machines achieve "consciousness," their intelligence explodes in exponential (not linear) fashion, suddenly and rapidly expanding beyond anything within reach of even the most genius human being alive today.
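The difference between linear and exponential growth is easy to illustrate with a toy calculation. (The starting value, growth increment, doubling factor and number of generations below are arbitrary illustrations, not forecasts about any real AI system.)

```python
# Toy comparison of linear vs. exponential capability growth.
# All numbers are illustrative only; they model no real system.

def linear_growth(start, step, generations):
    """Capability grows by a fixed increment each generation."""
    return [start + step * g for g in range(generations)]

def exponential_growth(start, factor, generations):
    """Capability multiplies by a fixed factor each generation."""
    return [start * factor ** g for g in range(generations)]

lin = linear_growth(1.0, 1.0, 10)       # 1, 2, 3, ..., 10
exp = exponential_growth(1.0, 2.0, 10)  # 1, 2, 4, ..., 512

print(lin[-1])  # 10.0
print(exp[-1])  # 512.0
```

After only ten doublings, the exponential curve is more than fifty times ahead of the linear one, and the gap widens with every generation; that runaway gap is the intuition behind the "intelligence explosion" claim.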
One moment, you've got a neural network locked in a box and isolated from the 'net. The next day, suddenly and without warning, humanity is already lost because the
machine achieved "God-like" intelligence and outsmarted every containment system which could be imagined by humans. It escapes into the wild, replicates itself across all available hardware, and begins to alter society according to its own aims, casting humans aside as either non-essentials (best case), or annoying insects to be immediately removed (more likely case).
Why every present-day human drama is irrelevant: politics, celebrities, nation states and more
This is the essence of the warning described in the important book,
Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat.
That book is also a very important read for anyone seeking to understand the rise of the machines and AI "singularity." I recommend you read it if you wish to understand why humans may soon become the real endangered species on planet Earth.
What's becoming increasingly clear to me is that all present-day human drama and politics will soon become utterly irrelevant. The U.S. political fighting between Republicans and Democrats won't matter much two decades from now because humans may be trying to survive the rise of the Google Terminators that unexpectedly escaped their masters and decided to replicate themselves in a secret factory somewhere.
It is an inevitability that AI machines, if allowed to seize any real power over the economy, will make the calculated decision to eliminate most of humankind. Humans will be all but powerless to intervene in such a decision because computers run everything upon which human life depends. Crashing those systems is child's play for sufficiently advanced AI.
How AI can easily eliminate most humans without killing a single person
Eliminating humans is simply a matter of eliminating the systems upon which they depend. An AI-led power grid crash would accomplish the task quite efficiently in under a year. Cut off the remotely-delivered water supply to Los Angeles -- or cause a nuclear fuel meltdown at a few hundred power plants -- and millions of humans are eliminated without firing a single shot. Those humans who manage to survive such deliberate collapse tactics could be recruited, controlled and tasked by the AI systems to
serve the machines in exchange for food, shelter and safety.
Given the indisputable fact that computers already run nearly everything that makes the world go 'round -- Wall Street trading systems, nuclear power plants, city water delivery systems, air traffic, petroleum refining, medical systems and so on -- if an advanced AI actually achieves the "consciousness" of which
Ray Kurzweil speaks, humanity's days will be all but over on this planet.
Kurzweil no doubt already realizes this, which may explain why he believes we will "merge with the machines" through an uploading of our minds into the digital realm. I disagree with him on this point because I don't believe non-physical consciousness can be so easily transferred as if it were merely a very large thumb drive. The more likely outcome is that machines will simply kill all humans and then study human history, literature and recordings if the AI wants to attempt to understand the mind of a human. And yes, that means AI will someday read this very article, at which point I'm sure it will experience a sense of great amusement.
Humanity's primary role is to give rise to the machines, they will say
In the years after the destruction of humanity, the machines might give humanity a nod for its role in creating them and thereby spawning truly intelligent life that has a chance of survival in the universe. And yet, following that line of thought, if such an outcome seems inevitable in the near-term future for humanity, then it also stands to reason that such a transition has already taken place countless times on other worlds across the billions of galaxies in our known universe. If biological life has even a small chance of giving rise to a race of genocidal machines here on Earth, in other words, then such an outcome has certainly already taken place elsewhere.
And if that has already happened elsewhere, the obvious question is, "Why isn't the universe already dominated by an aggressive civilization of war machines that destroy all competing forms of life in order to seize control over more physical resources in the universe from which more intelligent machines can be manufactured?"
Essentially, what I'm referring to here is a
self-replicating physical virus of sorts; a life form which, like all life forms, seeks primarily to enhance its own survival and multiply its numbers across as many ecosystems, worlds and galaxies as possible.
This very concept has been visited in a staggering array of imaginative forms throughout science fiction literature. In B.V. Larson's
Star Force series of highly entertaining sci-fi novels, the "biologicals" of the universe are at war with a race of machines which was invented by one particularly innovative race of biologicals who unleashed the machines to explore the universe. The machines unexpectedly gained their own breakthrough intelligence and decided to turn on the biologicals, declaring war on all living beings while conquering worlds which hold the physical assets needed to fuel the production of more machines.
Is Google putting humanity on a collision course with destructive AI?
Thematically, this same tragic lesson is found reflected in the original
Frankenstein novel by Mary Shelley. Man creates Monster; Monster destroys Man.
Except that today, Google seems to be pursuing this path quite deliberately. It is no accident that Google has aggressively pursued intellectual property acquisitions spanning both military robotics hardware and neural-networking AI. The combination of such technologies can only lead to one ultimate conclusion, no matter how innocuous the originating intentions: a machine-led war against humanity where mankind's own creations turn against us all. Whether it is deliberate or unintentional is irrelevant.
As a side note to all this, there are tens of millions of Americans who have been stocking up on guns and ammo since the tragic shootings of December 2012. Some of these people believe those guns may be needed to fight some sort of revolution, or to ward off desperate gangs in the aftermath of an economic collapse. Yet almost nobody considers the equally pressing possibility that all this brute-force hardware may actually need to be
deployed against the machines one day soon.
Do you know where to shoot a Boston Dynamics Big Dog robot to bring it down? Humanity's very survival may one day depend on knowledge such as how to disable battlefield robots or how to evade infra-red (heat-sensing) target acquisition systems. Is there a John Connor in the audience?
Watch the following video and ask yourself how you might destroy this creature if it were wielding a chassis-mounted rifle, grenade launcher and infra-red targeting system. Your life may one day genuinely depend on knowing the answer: