AI is one of those impending breakthroughs sitting just beyond the edge of our technological capabilities. We can imagine what it would be like and envision a world where AI exists, but its reality remains out of our grasp.
You only have to watch one of Hollywood's incarnations of AI to be a little wary. Every depiction seems to end in doom for its creators (i.e. us). Ex Machina, I, Robot, Her, Humans, WALL-E and the rest have all predicted disaster from this technology.
There's a line in Ex Machina that says "one day, AIs are going to look back on us the same way we look at fossils". And to me, this makes a kind of sense. We humans use up vast resources, we are destructive, and as a collective we have made poor decisions such as world wars. AIs, on the other hand, would (in theory) be entirely logical; they don't need food, water or rest, and can grow endlessly. Their capacity to make discoveries and drive progress would (again, in theory) be vastly beyond our own.
According to the law of accelerating returns, we are sitting at a point where the pace of progress is pretty much unfathomable. So much so that the world in 40 years could be entirely unrecognizable.
“All in all, because of the Law of Accelerating Returns, Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century” – Wait But Why, 2015
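That "1,000 times" figure can be sanity-checked with a little arithmetic. As a rough sketch (my own simplification, not Kurzweil's exact model), suppose the rate of progress doubles every decade:

```python
# Rough sketch of the Law of Accelerating Returns (an illustrative
# assumption, not Kurzweil's exact model): the per-decade rate of
# progress doubles each decade, starting at 1 unit per decade in 1900.

def century_progress(start_decade: int) -> int:
    """Total progress over ten decades, where the rate in decade k
    (counted from 1900) is 2**k units."""
    return sum(2 ** (start_decade + k) for k in range(10))

twentieth = century_progress(0)      # decades 0-9 (1900s)
twenty_first = century_progress(10)  # decades 10-19 (2000s)

print(twenty_first / twentieth)  # -> 1024.0
```

Under that toy assumption the 21st century delivers 2^10 = 1,024 times the progress of the 20th, which is roughly the "1,000 times" Kurzweil describes.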
Technically, AI already exists. It is, however, classified as ANI, 'artificial narrow intelligence', which encompasses the likes of Siri and self-driving cars.
The next category is AGI (artificial general intelligence): AI with the same intelligence and reasoning capabilities as a human. This does not yet exist.
Beyond this is ASI (artificial super intelligence), an intelligence far beyond what humans are capable of, in some cases trillions of times smarter: "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills."
I've stolen a massive chunk of text from Wait But Why, but they've put it so well I couldn't possibly paraphrase. Please remember, all of the below is entirely possible (although a little on the crazy end) and within scientific forecasts.
“What we do know is that humans’ utter dominance on this Earth suggests a clear rule: with intelligence comes power. Which means an ASI, when we create it, will be the most powerful being in the history of life on Earth, and all living things, including humans, will be entirely at its whim—and this might happen in the next few decades.
If our meager brains were able to invent wifi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time—everything we consider magic, every power we imagine a supreme God to have will be as mundane an activity for the ASI as flipping on a light switch is for us. Creating the technology to reverse human aging, curing disease and hunger and even mortality, reprogramming the weather to protect the future of life on Earth—all suddenly possible. Also possible is the immediate end of all life on Earth. As far as we’re concerned, if an ASI comes to being, there is now an omnipotent God on Earth.”
I guess the next big question is: if AI is going to leave us humans behind like insignificant ants, why are we pursuing it? Largely because we still have a big gap to jump in getting from ANI to AGI, and in fully replicating the scale of human emotions and complexities. Take IBM's Watson (which beat the best at Jeopardy! in 2011) for example: it learnt how to swear, but wasn't sure why swearing was bad.
Another reason is that we're human. We love progress, and as I mentioned before, we don't have the best track record for planning ahead and thinking things through.
Elon Musk has expressed concerns that AI will take over, but his solution is to make it usable by everyone, rather than privatized by large tech companies. On the whole, most actually think we're pretty far away from anything like AI 'taking over' (by which I mean running the world because we get lazy, rather than destroying all humans – think WALL-E rather than Terminator). A survey conducted at the most recent AGI conference suggested that nearly 60% think AGI won't happen until at least 2050.
“That’s a long ways away. And some people think it might not happen. But if it did, that will be scary.”