The real meaning of artificial intelligence


My discussion of AlphaGo received a lot of attention at a recent conference on AI. Man versus machine was the mainstream media’s framing of the historic matches in March 2016 between Korean champion Lee Sedol and Google’s AlphaGo programme. Sedol, one of the world’s strongest Go players, lost the series 4-1 to the AI, a result that fascinated the world.

The tech professionals at the conference carried on the conversation, but were they all talking about the same thing? The term “artificial intelligence” (AI) has become vague enough to cover many different areas of technology, leading to divergent views on what AI is ultimately meant to accomplish.

When it comes down to it, what exactly did AlphaGo’s victory signify for the future of AI? The history of artificial intelligence can shed light on this question.


Since the term was coined in 1956, AI has suffered from fluctuating definitions. The phrase “artificial intelligence” was first used at the 1956 Dartmouth conference, organised by John McCarthy, a pioneer in the field. Definitions of artificial intelligence frequently invoke some variant of “the simulation of intelligent behaviour by computers”. A more precise definition, however, can be found in one of the most widely used AI textbooks.

In their book Artificial Intelligence: A Modern Approach, Stuart Russell and Peter Norvig frame artificial intelligence (AI) as the study and design of intelligent agents that perceive their environment and act upon it. This perspective brings many formerly separate areas of study—computer vision, speech processing, natural language understanding, reasoning, knowledge representation, learning, and robotics—together in pursuit of a common goal.
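The agent abstraction can be sketched as a simple perceive–act loop. The sketch below is illustrative only: the thermostat example and all names in it are my own, not from Russell and Norvig’s book.

```python
# Minimal sketch of the "intelligent agent" idea: an agent repeatedly
# receives a percept from its environment and chooses an action.
# ThermostatAgent is a hypothetical example of a simple reflex agent.

class ThermostatAgent:
    """A trivial reflex agent: maps each percept directly to an action."""

    def __init__(self, target_temp: float):
        self.target_temp = target_temp

    def act(self, percept: float) -> str:
        # The percept here is the current room temperature in Celsius.
        if percept < self.target_temp - 0.5:
            return "heat"
        if percept > self.target_temp + 0.5:
            return "cool"
        return "idle"


def run_episode(agent: ThermostatAgent, temperatures: list) -> list:
    """Feed a stream of percepts to the agent and collect its actions."""
    return [agent.act(t) for t in temperatures]


agent = ThermostatAgent(target_temp=21.0)
print(run_episode(agent, [18.0, 21.2, 23.0]))  # ['heat', 'idle', 'cool']
```

A reflex agent like this only reacts to the current percept; the fields Russell and Norvig unify (vision, learning, planning) come in when the agent must build internal state and reason about future consequences.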

Because of AI’s rapid development, the field has become increasingly fragmented. Any area of artificial intelligence that gains widespread acceptance is quickly renamed, and whatever hasn’t yet been solved is labelled artificial intelligence. AI formerly encompassed technologies like handwriting recognition and voice recognition; the emergence of commercial text and speech recognition systems, however, has moved these tasks outside the purview of artificial intelligence. Thus, as technology develops, it has become ever harder to pin down a fixed definition of AI.


Because intelligence and artificial intelligence are so hard to define, the field has occasionally used man-versus-machine games as a yardstick for achieving and demonstrating progress. Such games demand considerable intelligence, learned skill, or physical dexterity from human players. Over the past few decades, we’ve witnessed AI beat humans at chess, Jeopardy!, and most recently, Go. Soccer may be next.

While these are impressive feats, most games differ from the real world in important ways. First, each game follows a fixed set of rules and ends in one of a few predetermined outcomes (e.g., win, loss or tie). Second, the players’ actions affect only other players within the system. Third, the many failures an AI experiences during training (such as losing a game) carry no real repercussions for anyone outside the system.

Unsurprisingly, real life rarely offers such conveniences. Although AlphaGo’s victory is certainly a triumph for deep learning, it is important to remember that the world of games is very different from the actual world. Artificial intelligence programmes still have a long way to go before they can cope with the messy, ill-defined tasks that people must perform on a daily basis.


The truth is that we view AI as a spectrum:

Assistive intelligence, in which AI takes over many routine, standardised tasks from human workers.

Augmented intelligence, in which machines and people work together, each learning from the other to extend and improve what either could do alone.

Autonomous intelligence, in which adaptive, continuously learning systems make decisions on their own.

The decision of whether or not to transition from augmented to autonomous intelligence will be largely up to us and will depend on a number of factors such as the speed with which humans can make decisions, the technical feasibility of making autonomous decisions, the cost of building such solutions, and the degree to which we trust such solutions.

When companies think about implementing AI across their functional domains, it helps to specify what level of AI they hope to achieve. Are they merely automating mundane jobs with assistive intelligence? Are they radically altering the nature of work by combining human and machine intelligence to make judgments? Or are they handing all decision making over to autonomous intelligence?