Are we there yet?

Is technology approaching sentient artificial intelligence?

13.07.16 (Updated 21.11.16)
Durban
11 min read

As a kid I loved robots, and I just looooove a good AI movie, with I, Robot being my all-time favourite and Ex Machina a close second (I highly recommend watching both). I still find robots and artificial intelligence immensely fascinating, and we're closer now than ever before.

Currently we've only achieved Weak AI. What is Weak AI, you ask? It's any program or machine that can simulate intelligence without actually possessing it. The virtual 'assistants' on our smartphones are a form of Weak AI. At best they only fake intelligence, conversing with us rather convincingly but acting purely on preprogrammed instructions. They don't think for themselves and don't possess free will of any kind.

Self-aware AI

Within the last few years, robotic self-awareness has been achieved on a basic level. In one experiment, robots were tested using induction puzzles such as the King's Wise Men test, and one managed to recognise its own voice and identify itself amongst the other robots. Simply put, it could distinguish itself as an individual... well, kind of. While not completely convincing, it's on the right track and quite impressive in terms of AI research in recent decades.
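
To make the idea concrete, here's a loose toy simulation of that style of test (entirely my own sketch in Python - the robot names, the 'muting' and the dialogue are made up for illustration, not taken from the actual experiment). Two robots are silenced, all three are asked a question, and only the one that hears its own voice can deduce a fact about itself:

```python
# Toy self-awareness test: two robots are muted, all three try to answer.
# Only the speaking robot hears its own voice, and from that sound it can
# infer something about itself that the silent ones cannot.
class Robot:
    def __init__(self, name, muted):
        self.name = name
        self.muted = muted

    def answer(self):
        heard_own_voice = not self.muted   # a muted robot makes no sound
        if heard_own_voice:
            return f"{self.name}: That was my voice - so I wasn't the one muted!"
        return None                        # silence: it learns nothing new

robots = [Robot("R1", muted=True), Robot("R2", muted=True), Robot("R3", muted=False)]
for robot in robots:
    reply = robot.answer()
    if reply:
        print(reply)  # -> R3: That was my voice - so I wasn't the one muted!
```

The 'self-awareness' here is paper-thin, of course - which is rather the point.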

Although self-awareness is an achievement in itself, we still have a long way to go. Even if a self-aware robot were given a gun and programmed to shoot people, that's likely what it would do. Knowing you're a robot isn't good enough; being able to tell right from wrong, exercise free will and make moral, rational decisions is what makes all the difference. One needs to be conscious of oneself and one's actions in order to make justifiable decisions. Cognitive ability, learning, remembering and understanding - that's the ultimate goal here, and it's also where things get complex.

Morality, consciousness and sentience - these are all things perceived by living beings. More specifically, by humans, as one can argue that not all creatures are sentient. Is an ant self-aware? Does it know it's an ant, and does it know we're humans? Probably not. That brings us to the next level - Strong AI.

Although people such as Elon Musk and Stephen Hawking are concerned about Strong AI and the potential threat it poses, Strong AI is still what we are working towards and hope to achieve. Strong AI is what we see in movies and read about in books. It's artificial intelligence that essentially has a 'mind' and free will of its own. So what's the difference exactly? I could ask Weak AI, 'What's the weather like today?' 20 times consecutively and each time it'll tell me the same thing. Strong AI would probably tell me to f**k off if I asked the same question that many times.
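
For fun, here's that difference in toy Python form (my own illustration - the canned answer and the annoyance threshold are invented). A Weak-AI-style assistant is just a lookup table, so even *faking* Strong-AI-style annoyance means bolting on a memory of past questions:

```python
# Weak AI as a lookup table: same question in, same answer out, forever.
CANNED = {"what's the weather like today?": "It's 24 degrees and sunny."}
asked = {}  # memory: question -> times asked so far

def weak_ai(question):
    # stateless: ask 20 times, get the same reply 20 times
    return CANNED.get(question.lower(), "Sorry, I didn't understand that.")

def less_patient_ai(question):
    # a crude imitation of annoyance - it needs state to notice repetition
    asked[question] = asked.get(question, 0) + 1
    if asked[question] > 3:
        return f"You've asked me that {asked[question]} times now. Please stop."
    return weak_ai(question)

for _ in range(5):
    print(less_patient_ai("What's the weather like today?"))
```

Real assistants are vastly more elaborate, but the principle stands: no memory, no 'mind', just mappings from inputs to outputs.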

While some disagree that AI could ever attain our level of sentience, I do believe it will - and maybe even transcend it. One of the unique things about Strong AI will be its ability to learn by itself. What took humans decades to achieve in terms of AI research, Strong AI will be able to do in a matter of days - essentially making itself smarter, faster and better than humans. This brings us to the final form of what we think AI will become: ASI - Artificial Superintelligence.

Artificial Superintelligence will likely surpass the human mind exponentially. Imagine if the minds of all humans could merge into one super-smart being - that is what ASI will be. It will have instant access to all the data and information we've accumulated over time, coupled with powerful cognitive and processing abilities; thus it will be faster, more accurate and able to outthink any human, possibly creating cures for diseases that we haven't been able to remedy or finding solutions to problems such as global warming. Well, that's what we hope anyway.

The good, the bad and AI

Believe it or not, we're actually considerably close to achieving aware robots, although our technology is currently developed primarily for profit and is strewn across scores of separate companies. The key to great AI will always lie in data collection - ridiculous amounts of it. The more data a computer or program has to reference, the more accurate it will be. This is already being undertaken by companies such as Google, Facebook, Microsoft and many others. Computer vision, neural networks and machine learning are a few of the ways we're making AI smarter at the moment. Machine learning is when a program gets better at doing something the more it does it, and computer vision employs algorithms that allow a machine to identify real-world objects and scenes based on a database of similar data.
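
You can actually watch the 'more data, more accuracy' effect in a few lines. Here's a small sketch (my own, using Python's scikit-learn library and its built-in handwritten-digits dataset - nothing to do with the companies above) that trains the same classifier on ever-larger slices of data:

```python
# Train one classifier on growing slices of a dataset and watch its
# accuracy on held-out test data climb as it sees more examples.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    print(f"trained on {n:4d} examples -> accuracy {model.score(X_test, y_test):.2f}")
```

The accuracy climbs as the training slice grows - and that same principle, at planetary scale, is why the big data hoarders have the edge in AI.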

For example, if I had a database of 1,000 cat images, a computer would be able to pick out a new cat image from amongst many dog images. Facebook's facial recognition software proves this best. When first introduced it was quite crap - it didn't always identify people correctly, especially if their faces were turned to the side. A year or two and millions of photos later, it boasts an accuracy rate of 98% and can reportedly identify you in a single photo out of 800 million in under 5 seconds! That's flippin' crazy, and a much higher accuracy rate than even the FBI's! Facial recognition isn't just for tagging your bff in a photo from last night's house party; the data is used to build faceprints (stored on Facebook's servers), from which 3D renders of your face can be created and viewed from any angle.
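
Under the hood, a 'faceprint' boils down to a vector of numbers, and matching a new photo is essentially a nearest-neighbour search over those vectors. Here's a hypothetical sketch in Python (random vectors standing in for real faceprints; the names, sizes and similarity measure are all my own illustrative choices):

```python
# Hypothetical faceprint lookup: each face is reduced to a numeric vector,
# and a new photo is matched to whichever stored vector it sits closest to.
# The vectors below are random stand-ins, not real face data.
import numpy as np

rng = np.random.default_rng(42)
database = rng.normal(size=(800, 128))        # 800 stored faceprints, 128-D each
names = [f"person_{i}" for i in range(800)]   # made-up labels

query = database[123] + rng.normal(scale=0.1, size=128)  # a new photo of person_123

# Cosine similarity: higher means more alike.
sims = database @ query / (np.linalg.norm(database, axis=1) * np.linalg.norm(query))
print("best match:", names[int(np.argmax(sims))])
```

Swap the random vectors for embeddings produced by a trained neural network and you have the gist of how a face in a new photo gets matched to a stored identity.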

So, what would anyone want with your faceprint? If you've ever stalked anyone on social media, you'll know that a name alone is enough to find someone. In fact, it's scarily easy. Faceprints take that a step further: nothing more than a single photo is needed, regardless of the angle or how the picture was obtained. One legit purpose sees it tracking, locating and assisting in the arrest of criminals, as security cameras can be found in nearly every public place. But what about the not-so-legit reasons - when unsuspecting victims are being tracked, watched or followed for unscrupulous, perverse or ulterior motives?

Will AI want to harm us?

That's a difficult question to answer. It's impossible to know right now, and it's a massive assumption to think they would even reach our level of consciousness and cognitive ability. Perhaps they won't, but that doesn't necessarily mean they won't be able to achieve consciousness or self-awareness in a different way. Self-preservation is a very good way of determining whether an AI we create is sentient or not. If I threw my computer off my balcony, three storeys to the ground, it couldn't stop me. Only something truly aware wouldn't want to be destroyed or 'killed'.

In no time at all they will reach new heights in learning. They may learn to write code that lets them overwrite our protocols, lock us in or out of places, or even seize nuclear weapons and launch codes. *Beep Boop Beep* "Sorry, but given your history we cannot trust you with weapons of mass destruction."

While it is safe to say no AI company would intentionally create an 'evil' program, all of this is essentially uncharted territory and we have no way of foreseeing where problems may arise. Google DeepMind's AlphaGo and IBM's Watson are two of the most advanced machine learning programs to date. AlphaGo is famous for beating one of the best human players at Go - a highly complex, ancient Chinese board game it honed largely by playing against itself - and Watson is a question-answering supercomputer capable of perusing millions of pages within seconds and determining which information is most relevant to the question presented, with astonishing results! Although these are far from evil, we simply cannot predict what other AI may do in the future.

I love this stuff! I can hypothesise and talk about it for days! It's so fascinating and is one of the things I really hope to live long enough to see. And then die... because they'll kill us. Just kidding!
