The Evolution of AI: From Ancient Myths to ChatGPT
Estimated reading time: 5 minutes
Experts believe artificial intelligence tech will spark a new industrial revolution that could add trillions to global GDP. But the concept of AI goes back millennia – and the origin of today’s tech is more than 60 years old…
In 1726, Jonathan Swift’s Gulliver’s Travels described a magical machine, The Engine, that could generate books on any subject, allowing even the most ignorant person to write without study. Was this an early vision of AI? Possibly. Others trace the concept back to Ancient Greece.
What is certain is that today, something like The Engine exists. Tools like ChatGPT are reshaping industries, and IDC research estimates AI will contribute $19.9 trillion to the global economy between now and 2030.
But how did AI evolve from a fantasy to reality? The modern AI revolution has its roots in computing research from the 1940s.
Let’s trace its fascinating history.
1947: Alan Turing lectures on computer intelligence, envisioning machines that “learn from experience.” Three years later he proposes the Turing Test, in which a computer tries to pass as human in conversation.
1958: John McCarthy develops Lisp, the first AI programming language.
1959: Arthur Samuel coins the term machine learning in a seminal paper showing that a computer can be programmed to play checkers better than its programmer.
1963: Donald Michie builds MENACE, an early machine-learning system that teaches itself to play noughts and crosses.
1965: Daniel G. Bobrow creates STUDENT, an early natural language processing (NLP) program that solves algebra word problems.
1966: Joseph Weizenbaum develops ELIZA, a computer program that simulates conversation in English. The program reveals how easily people become emotionally attached to a simulation.
Stanford Research Institute unveils ‘Shakey’ – the first general-purpose mobile robot, combining AI, computer vision, navigation and NLP. It's the grandfather of self-driving cars and drones.
1972: Stanford University unveils MYCIN, one of the first AI programs to help medics diagnose infectious diseases.
Waseda University in Japan creates the WABOT-1 anthropomorphic robot. It can walk and talk.
Mid-1970s: The first AI winter sets in, as funding and interest decline due to unmet expectations.
1980: Lisp Machines, Inc. launches in Cambridge, Massachusetts to build computers based on the AI programming language Lisp.
1982: John Hopfield creates the Hopfield network – an artificial neural network that stores and retrieves memories in a way inspired by the human brain.
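Hopfield's idea fits in a few lines: store patterns in a weight matrix using a Hebbian rule, then recover a stored pattern from a corrupted cue by repeatedly updating the network state. A minimal NumPy sketch of the idea (illustrative only, not Hopfield's original formulation):

```python
import numpy as np

def train_hopfield(patterns):
    """Build a Hopfield weight matrix from +1/-1 patterns via the Hebbian rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)      # no self-connections
    return W / len(patterns)

def recall(W, state, steps=10):
    """Iteratively update the state until it settles on a stored pattern."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1   # break ties toward +1
    return state

# Store one 8-unit pattern, then recover it from a corrupted copy
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1                  # flip one unit
print(recall(W, noisy))         # settles back on the stored pattern
```

The network acts as a content-addressable memory: a partial or noisy cue is enough to retrieve the full stored pattern, which is the sense in which it recalls "like the human brain".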
1988-1990: The second AI winter begins, marking a period of reduced funding, declining interest and skepticism about AI’s capabilities.
1997: IBM's Deep Blue chess computer beats the reigning world champion, Garry Kasparov, in a six-game match. This is impressive, but some say it is more about processing power than AI. Linguist Noam Chomsky says the win is like 'a bulldozer winning an Olympic weightlifting competition'.
2000: University of Montreal researchers led by Yoshua Bengio unveil a neural probabilistic language model that learns the probability function of word sequences in a language.
Honda introduces ASIMO, its now-famous humanoid robot.
2001: The SmarterChild bot, developed by ActiveBuddy, launches across instant messaging networks. It gains more than 30 million ‘buddies’.
2004: DARPA starts its driverless car competition. Contestants have to build autonomous vehicles that can complete a 175-mile desert course.
2005: EPFL and IBM launch the Blue Brain project, which aims to create a digital reconstruction of the mouse brain.
2009: Computer scientist Fei-Fei Li creates ImageNet, a vast database of labelled images that drives advances in image recognition algorithms.
Stanford publishes landmark research on large-scale unsupervised learning using graphics processors.
2010: UK firm DeepMind makes its debut. It will make many breakthroughs and eventually be acquired by Google in 2014.
2011: Researchers develop a Convolutional Neural Network (CNN) that sets new records for computer vision. It wins the German Traffic Sign Recognition competition.
In the same year, IBM’s natural language AI Watson wins a televised game of Jeopardy!, and Apple releases Siri, a voice-powered personal assistant that can answer questions and take actions in response to voice requests.
2012: University of Toronto researchers use Nvidia’s programmable GPUs to train AlexNet, a revolutionary AI program for classifying images. The project helps establish Nvidia as the world’s leading chip designer for the age of AI.
2014: Amazon unveils Alexa, its voice assistant, which will underpin the rise of the smart speaker market.
2015: OpenAI is launched. It will become one of the most significant companies of the new AI era.
Scientists make a breakthrough in ‘one-shot learning’, which enables machines to learn from a single example rather than a large data set.
2016: DeepMind’s AlphaGo beats the world's best Go player Lee Sedol.
Uber launches its first self-driving car pilot.
Ian Goodfellow, Yoshua Bengio and Aaron Courville publish Deep Learning, considered the foundational text on how computers can learn from experience and understand the world as a hierarchy of concepts.
2017: Google researchers introduce the concept of transformers in a paper called “Attention Is All You Need.” This neural network architecture is the basis for large language models (LLMs).
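The transformer’s key ingredient is scaled dot-product attention, in which every token computes a weighted average over all the others, with weights given by a softmax over query–key similarities. A minimal NumPy sketch of that single operation (a simplification of the full architecture in the paper):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all keys and returns a weighted mix of values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V

# Toy self-attention: 3 tokens with 4-dimensional embeddings, Q = K = V = X
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (3, 4)
```

Because every token can attend to every other token in one step, transformers capture long-range context far better than earlier recurrent models – which is why this architecture underlies today’s LLMs.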
Carnegie Mellon’s Libratus program defeats four top professional poker players at no-limit Texas Hold’em.
2018: OpenAI releases GPT (Generative Pre-trained Transformer) with 117 million parameters. It lays the foundation for subsequent LLMs.
Google unveils BERT (Bidirectional Encoder Representations from Transformers). It learns to represent text as a sequence of vectors using self-supervised learning.
2020: Microsoft reveals a new deep learning language model, the Turing Natural Language Generation model (T-NLG), with a record-breaking 17 billion parameters.
DeepMind’s AlphaFold system predicts protein structures with unprecedented accuracy, eventually covering almost every protein catalogued by science and leading to advances in combating malaria, antibiotic resistance and plastic waste. The work will win the 2024 Nobel Prize in Chemistry.
Toyota showcases T-HR3, the company's third-generation humanoid robot.
2021: OpenAI launches DALL-E, a multimodal AI system that generates original images from text descriptions.
2022: OpenAI releases ChatGPT, a chatbot built on its GPT-3.5 models (descendants of the 175-billion-parameter GPT-3). In just two months it attracts 100 million users.
Engineered Arts gains attention for Ameca – a humanoid robot that can mimic facial expressions, answer questions and tell jokes.
2024: Nvidia achieves a market capitalisation of $3.3 trillion to become the world’s most valuable publicly traded company.
The Road Ahead
AI has come a long way from early theories to becoming a technology shaping the modern world. From healthcare to autonomous vehicles, it is redefining industries. Bain & Company forecasts that the global market for AI-related products will approach $1 trillion by 2027, highlighting the rapid adoption and disruptive potential of AI technologies across various industries.