Artificial Intelligence: Man at the crossroads*
What is there to say about Artificial Intelligence that hasn't already been said? Is it some evil monster intent on devouring our hard-won freedoms? Or a magic box with the power to deliver humankind from its shortcomings? In truth it's neither: AI in our lifetime will be anything and everything we decide to make it.
The battle between man and machine is nothing new. Remember the Luddites in 19th-century England, who destroyed factory machinery out of fear that their ancestral crafts would be lost. But the arrival of Artificial Intelligence takes the battle to a whole new level, because this time the fear is that machines will make us humans quite simply irrelevant.
The AI we know today is called "weak" AI because it is focused on a single narrow task. But it works incredibly fast and can analyse unfathomably large amounts of data. It also has certain cognitive capabilities: it observes its surroundings and spots specific details, learns from what it sees and evolves from what it learns. It can make sense of its observations and draw intelligent conclusions. But does that mean AI can replace humans?
It's unlikely. And for a variety of reasons, it really isn't a very good idea.
First, AI can replicate certain human functions, but it cannot fully replicate a person because humans are "multi-taskers" while AI only handles one task at a time. Another essential difference is that AI lacks the sentience, free will and consciousness that define us as human beings.
Second, this weak AI needs data — lots of data — to learn from. So it's entirely dependent on the data that's provided, and ultimately, therefore, on the people who provide it.
That brings us to the question of the veracity and integrity of the data used in the learning process. There have been cases of AI systems producing racist outcomes because the data they learned from was racially biased. And to teach a computer to recognise a lion, you have to show it millions of pictures of different lions in different poses. In one experiment, a machine trained this way proved incapable of recognising a lion outside its natural habitat: it turned out the machine had learned to recognise the habitat, not the lion. So analysis based on AI is clearly far from infallible, and, intentionally or not, it might be biased.
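For readers curious about the mechanism, here is a minimal, hypothetical sketch in Python using scikit-learn. It is not the experiment cited above: each "photo" is reduced to two invented features, and because the savanna background is the only thing that separates lions from house cats in the toy training data, the model ends up recognising the habitat rather than the animal.

```python
# Toy illustration of a classifier learning the background, not the subject.
# Features (hypothetical): [background_is_savanna, cat_like_animal_present]
import numpy as np
from sklearn.linear_model import LogisticRegression

# Training "photos": lions only ever appear on the savanna,
# house cats only ever appear indoors.
X_train = np.array([
    [1, 1],  # lion on the savanna          -> label 1 ("lion")
    [1, 1],  # another lion on the savanna  -> label 1
    [0, 1],  # house cat indoors            -> label 0 ("not a lion")
    [0, 1],  # another house cat indoors    -> label 0
])
y_train = np.array([1, 1, 0, 0])

model = LogisticRegression().fit(X_train, y_train)

# A clearly visible lion photographed in the snow: no savanna in sight.
lion_in_snow = np.array([[0, 1]])
# An empty savanna landscape with no animal at all.
empty_savanna = np.array([[1, 0]])

print(model.predict(lion_in_snow))   # [0]: the lion is missed
print(model.predict(empty_savanna))  # [1]: the habitat alone says "lion"
```

The point of the sketch is simply that a model leans on whatever signal separates its training examples; if that signal happens to be the habitat, the habitat is what it learns.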
These weaknesses may not be particularly worrisome in the consumer applications that use AI today. But they could have dramatic consequences in defence, security or transport systems, for example. It would be unrealistic and unconscionable to leave AI to its own devices when the lives and security of millions of people are at stake.
This is why Thales advocates a form of AI that is explainable, verifiable and ethical. This is our overriding concern. As a corollary, another key area of research at Thales aims to better understand how humans and machines interact.
Technology is neither good nor bad; it all depends on how people use it. You can't take a knife on board an aircraft, but you can own as many knives as you want if you're a butcher! AI needs to explain to humans how it has reached its conclusions, and what the consequences of its analysis might be. So machines and humans need to interact constantly and in real time — and it must be the human, and only the human, who makes the final decision.
* Man at the Crossroads was a fresco begun by Diego Rivera in New York City's Rockefeller Center in the 1930s and later repainted by the artist in his native Mexico under the variant title Man, Controller of the Universe. A salutary tale.
This article is part of a series of publications associated with Thales Media Day in Montreal on January 24, devoted to the autonomous world and artificial intelligence, in the presence of Patrice Caine, Thales Group CEO, and Yoshua Bengio, Full Professor in the Department of Computer Science and Operations Research and Canada Research Chair in Statistical Learning Algorithms.