Weak vs Strong AI: Our not-too-distant future

At this point there’s no doubt that Artificial Intelligence (AI) is all around us, and it’s here to stay. The question now is: what is AI? Is it a unitary, coherent thing? The answer is no, and the topic is surrounded by controversy. Let’s look at some of the reasons.

Currently, a big part of humanity uses AI every day… after all, who hasn’t used Siri or Alexa at least once? When we interact with a virtual assistant, we can get the feeling that the machine actually understands us and what we say. This is one of the reasons there is confusion around AI, and it can lead us to think that we already live in a sci-fi world. Let’s dig deeper.

When the concept of AI emerged, it was commonly understood that computers were machines that carried out a very specific set of tasks. As the years went by and computers evolved and became more and more capable, this line started to blur, to the point where we began to question whether machines could think beyond their tasks. Under this premise, two opposing positions emerge: Weak AI (or narrow AI) and Strong AI (or full, true, general AI). The former asserts that machines can only simulate human behaviour, while the latter contends that machines will eventually be conscious, sentient and sapient, able to learn, make judgements, communicate…

– Siri, are you Weak or Strong AI?

“I can even outperform humans in some tasks, like selecting the best Mexican restaurant in town; but I’m Weak AI, because even though I can carry out many different tasks, each one is always within a particular context.”

– But does that mean you’re smarter than the humans you outperform?

“Definitely not. Outside my defined areas of application, I won’t understand certain ideas that humans grasp without effort.”

We can agree that all AI developed today is Weak, simply because its applications are very specific: it can make decisions and solve problems only in a very precise and limited area.

On the other hand, if we fully developed the Strong AI concept, we would at some point arrive at Artificial Superintelligence (ASI), for the simple reason that a system without the physical limitations humans have (carbon-based biology, a skull that limits the brain’s size…) would be able to keep learning and, at some point, acquire consciousness and surpass human thinking capacities.

This is a highly controversial idea that has engendered ethical and philosophical debates, since it is very hard to understand and explain what consciousness is. Another delicate angle is protecting humans: what if these machines learn to control us the same way we control animals? Scary, right? If they are more capable than us, what jobs will we carry out? It would be nice to have unlimited vacations, but what would happen to the global economy?

Let’s not freak out just yet. We still don’t live in Chappie or I, Robot (two sci-fi, AI-related movies that we highly recommend you watch!). Thankfully, the use of AI so far has helped us improve our standard of living. We should, nonetheless, open a profound debate around AI: where we want to direct it and how far to go down that road… some predict that in just 60 years ASI will be as widespread as Weak AI is today…

(Raquel García, Data Scientist, Citibeats)

This is the first piece in our #SocialCoinAcademy series, where we will teach you some of the basics of Artificial Intelligence, Machine Learning, Citizen Engagement, Active Listening and more.