AI, an innovation from the 2nd century B.C.

Marc Puig, CTO

Is it possible to formalise reasoning?

In recent years, concepts like artificial intelligence and machine learning have become ubiquitous. People think of them as some kind of magical power (I’ve even seen some AI TV ads!). So I dare make two blunt statements about AI: it isn’t magic, and it’s here to stay and to change all of our lives. That’s why this post offers a brief chronology of its history, which starts more than 2,000 years ago; a second post will follow, reflecting on how these changes can affect society.

What are we actually talking about when we talk about artificial intelligence? Defining the concept is not an easy task, and we have to go back a few centuries to do so. Specifically, we can go back to the 2nd century B.C., when Greek philosophers like Aristotle or Euclid developed theories that tried to formalise human reasoning with the intention of simulating and mechanising it. Centuries later, other thinkers like the Majorcan Ramon Llull (13th century), with his ‘logic machines’, or Leibniz, Hobbes and Descartes (17th century), explored the possibility of systematising reasoning through geometry and algebra.

Early in the 20th century it seemed that AI might become possible thanks to the emergence and development of mathematical logic. Scientists and mathematicians wanted to answer the fundamental question: is it possible to formalise all mathematical reasoning? The answer was two-fold. On the one hand, they determined that mathematical logic has clear limits; on the other hand, they were also able to show that, within those limits, any form of mathematical reasoning could be mechanised.

This last insight would be decisive for Alan Turing, who described ‘the Turing machine’ in 1936, an invention that sparked scientific debate about the possibility of creating intelligent machines. It was during the Second World War that, building on Turing’s theories, the first modern computers were built (ENIAC, Colossus…) and, with them, scientists from different fields (mathematicians, psychologists, engineers, economists and political scientists) started to discuss the idea of creating an artificial brain.

It was in 1956, during the Dartmouth Conference, that artificial intelligence was formally established as an academic research discipline, with John McCarthy coining the term. This is considered to be the moment of birth of AI.


Ups and downs: the golden years and the first winter

The period between the Dartmouth Conference and 1974 constitutes ‘the golden years’, an age of exploration and discovery. The programmes developed during those years were awe-inspiring: computers solving complex algebra problems, proving geometry theorems and learning to speak English. Researchers showed great optimism and predicted that fully intelligent machines would arrive in less than 20 years. Money poured in, greatly accelerating research.

During the 70s, researchers realised that they had underestimated the difficulty of the problems they had set out to solve. Their lofty optimism had set expectations extraordinarily high and, when the promised results failed to appear, the investment vanished. That’s why the period between 1974 and 1980 is known as ‘the first AI winter’.

During the 80s, a certain boom came about as corporations around the world adopted a type of AI called ‘expert systems’. During those years, the governments of Japan, the US and the UK restarted investment in AI research. A new training method for neural networks, backpropagation, was popularised, and new OCR (Optical Character Recognition) and speech recognition applications based on those networks were successfully commercialised.

Towards the late 80s and early 90s, the world of AI suffered a series of turbulent events. The first was the sudden collapse of the market for specialised AI hardware in 1987: desktop computers from IBM and Apple had been improving in speed and power, and that year they surpassed those specialised, over-priced machines. There was no reason to buy them any more, and a whole industry vanished almost overnight. The second was that most of the impressive list of objectives established for AI earlier in the decade remained unsolved. This period is known as ‘the second AI winter’ (1987–1993).

By the end of the 20th century, almost half a century after the Dartmouth Conference, the field of AI had finally achieved some of its oldest goals, and it started to be applied successfully in industry. Some of the reasons behind this success were increased computing power, a focus on specific, isolated problems, and approaching them with high scientific standards.


AI today

During the first two decades of the 21st century, big amounts of data (big data), faster computers and advanced machine learning (ML) techniques have converged and given AI a great economic impact across almost all sectors.

Now you have an overview of how AI came to be and where it stands nowadays. The next post will examine how artificial intelligence will affect society and where we might stand tomorrow.