Cybersecurity: the challenges of artificial intelligence
Cybersecurity is a bit like a game of chess. The winner will be the player who can best discern the opponent’s intentions and anticipate their moves. So how has the advent of artificial intelligence changed the rules of the game?
Alarmists might disagree, but artificial intelligence (AI), like any other technology, is neither good nor bad — it all depends on how people use it. And the role of AI in cybersecurity is a good example: hackers use it to do harm, cybersecurity experts use it to help thwart their attacks.
“Cyberattacks are becoming increasingly sophisticated and are evolving all the time,” says Olivier Bettan, a cybersecurity expert at ThereSIS IT research laboratory in Palaiseau, near Paris. “Hackers are using AI to select their targets, strengthen their intrusion capabilities and determine the best vulnerabilities to exploit. They’re also using it to stay, as far as possible, under the radar of cyberthreat detection systems.”
To identify and counter these new types of attacks, it’s vital that we understand them in as much detail as possible — and AI technologies help us to do this. By detecting cyberattacks as early as possible, AI allows humans to focus on high value-added tasks in a sector where the right talent is hard to find and demand is growing all the time.
Two approaches
“We can detect attacks in two ways,” explains Olivier Bettan. The first is a “supervised” approach, now in widespread use, which involves detecting unusual situations in vast streams of mixed-type data: the AI progressively learns known behaviours in order to better identify future attacks.
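The article doesn’t specify which algorithms are used, but the principle of learning known behaviours from labelled examples can be sketched with a minimal nearest-centroid classifier. Everything here — the features (failed logins, outbound megabytes) and the data — is hypothetical, purely to illustrate the supervised idea:

```python
# Illustrative sketch: a supervised detector learns from labelled
# examples of known behaviour, then flags new events that resemble
# attack patterns. Features and data are hypothetical.
from math import dist

# Labelled training events: (failed_logins, bytes_out_mb), label
TRAINING = [
    ((0.0, 1.2), "benign"),
    ((1.0, 0.8), "benign"),
    ((0.0, 2.0), "benign"),
    ((9.0, 0.1), "attack"),  # brute-force-like pattern
    ((8.0, 0.3), "attack"),
]

def centroids(examples):
    """Average the feature vectors of each label (nearest-centroid model)."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(v / counts[label] for v in acc)
            for label, acc in sums.items()}

def classify(event, model):
    """Assign the label of the closest learned centroid."""
    return min(model, key=lambda label: dist(event, model[label]))

model = centroids(TRAINING)
print(classify((7.5, 0.2), model))  # resembles brute-force -> "attack"
print(classify((0.5, 1.5), model))  # ordinary traffic -> "benign"
```

A production system would of course use far richer features and models, but the structure is the same: known behaviours are labelled in advance, and new events are judged against what was learned.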
These AI algorithms can be configured with our customers, based on our knowledge of their specific sector of activity, the type of IT assets they use, what we want to observe and what we know about the attacker. This tailored approach is fully in line with the Thales TrUE AI standard¹, since the customer controls its algorithm and has a better understanding of the results generated by the AI system — which is a decisive advantage compared with other available solutions.
“This is the approach we take with our Cybels Analytics solution,” adds Olivier Bettan. “Cybels Analytics provides cybersecurity analysts with advanced detection and forensics capabilities, either in real time or after the fact, so they can identify the most complex attacks based on data that is relevant to each customer’s sector and operations.”
This platform incorporates Big Data Analytics technologies and personalised AI algorithms, as well as Cyber Threat Intelligence databases, which further improve its detection and analysis capabilities. This is coupled with a file analysis centre, which detects even the most complex malicious code. By combining the expertise of Thales’s AI and Big Data specialists, or data scientists, and our cybersecurity experts, Cybels Analytics enables all of the user’s detection tools to complement one another and work together in unison.
The "unsupervised" approach is still at a relatively early stage of development. It involves allowing AI to categorise the raw data, then observing the resulting groupings and structures. But the system doesn’t assign meaning to these automatically defined groups. Only analysts can do that.
In cybersecurity, an AI-based decision won’t be applied unless it can be justified in a way that a human operator can understand.
So which approach is best? As with any application of AI, humans must choose how much room for manoeuvre they’re prepared to give the technology. Consider the obstacles to developing driverless cars fully controlled by AI: we know AI can’t yet read road signs with complete reliability, and that it struggles to categorise a person in a wheelchair as a pedestrian.
“In cybersecurity, we combine both approaches,” concludes Olivier Bettan. “An unsupervised approach is more effective in the case of a massive and sudden attack, which requires an immediate response. But in the event of a pernicious attack with knock-on effects, a supervised approach is more appropriate.”
¹ Thales has developed an approach called “Thales TrUE AI”, a trusted AI that’s Transparent (can be seen to meet specifications and follows clear rules), Understandable (can explain why a decision is made and implemented in a language understandable to humans) and Ethical (complies with legal and moral frameworks).