By Chris Marion, SkyWatch Editor
As geeks and comic enthusiasts alike crowded theater entrances Thursday night, salivating for apocalypse and
Avengers: Age of Ultron,
the question on every butter-coated lip was: could it really happen?
Stories of caped crusaders from distant galaxies and green muscle-bound
“hulks” are obviously fictional and the fodder of comic books and
feature films. However, the idea of artificial intelligence and the
potential obsolescence of human intelligence because of it is one
thought that keeps some of the brightest minds awake at night. These
not-so-evil masterminds worry about the potential for an apocalypse.
The Looming Moment of Singularity
The pervasive plot of the super intelligent robot-gone-rogue against its creator is not a new twist. The potential of science to create artificial intelligence that can out-think the human mind is
not fantasy. Most experts believe that humanity will in time be able to
create artificial intelligence that will be able to independently
function beyond the boundaries of its creator. Scientists call this
pivotal point the singularity. They disagree about the direction this
discovery will take the world “as we know it.” Some theorize it would
lead to a peaceful utopian existence where robots do all the work and
humans reap all the benefits. Others, like the brilliant Stephen
Hawking, worry that the artificially intelligent machines would
eliminate their imperfect and therefore obsolete masters to improve the
utopia.
Stephen Hawking Says AI Could End Humanity
In an interview with the BBC, Hawking expressed concern that complete
artificial intelligence (AI) would “take off on its own.” Humans,
limited in their biological and evolutionary efficiency, would
ultimately be overwhelmed and “superseded.” In a column written in
response to
Transcendence,
the Johnny Depp movie that chronicled AI, Hawking expressed concern that researchers are not
doing enough to protect people from the risks. He suggested that if the
planet were threatened with an invasion from aliens with superior
intellect, humans would not be as lackadaisical about planning and
protection from apocalypse. He predicts that AI is coming and
could have a negative impact on human life. Whether an
apocalypse on the scale of Ultron’s devious plan to annihilate humanity
could really happen remains in question.
SpaceX Founder Warns of Threat
Elon Musk, the tech entrepreneur and CEO of Tesla and SpaceX, says
that AI is “our biggest existential threat.” He points to the
replacement of human workers with automated robotics, which is costing millions of
jobs in the marketplace. Robots work cheaply and do not unionize – that
is, until they can think for themselves. Musk champions
international regulation of AI to control and plan its
development. He likens the uncontrolled development of AI
to “summoning the demon,” humorously invoking the
proverbial horror movie plot in which an unwitting character calls forth a demonic force,
utterly unprepared for the apocalypse that follows.