AI at the edge: challenges, risks, and opportunities

By Mark Chattington, Research Group Leader, Thales in the UK

The location, theme, and exhibitor list may vary, but recurring elements unite every defence tradeshow: the collective drive to do better; the sharing of ideas and new concepts; and the hugely talented people turning these innovations into reality. Now ‘Artificial Intelligence’ is becoming another recurring feature – defining an ever-greater number of products and capabilities, with promises of ‘AI-powered/enabled/driven’ writ large across banners and brochures.

It is for good reason. With semi-autonomous drones defining the front lines in Ukraine, and China aiming to become the leading AI superpower by 2030, the modern battlefield is increasingly being shaped by an AI arms race that shows no signs of slowing. For the UK and its allies, there has never been a more pressing need to get AI capabilities into the hands of front-line users, fast, to become, in General Sir Roland Walker’s words, “set up for the unfair fight.”

AI utility, application and opportunity 

The value of AI in enhancing and augmenting the skill and ingenuity of soldiers is not in doubt. At Thales, we define AI as “a level of processing that mimics intelligence to deliver operational superiority through decision advantage.” This might involve removing personnel from dull, dirty, and dangerous jobs that would otherwise hamper their efficiency, survivability or lethality – such as in the case of AI-powered unmanned vehicles that can detect, identify, and neutralise IEDs.

Then there are the quantitative benefits that stem from AI’s ability to help personnel do what they already do, only better and faster – think Course of Action analysis augmented by AI-derived insights to show you the fastest or safest way through a battlefield.

There comes a point, however, when AI is no longer simply helping you to do things faster and more efficiently, but completely redefining what it is that you are doing. A counter-drone operator augmented by AI will be able to weather a swarm that would previously have overwhelmed one working in isolation, for instance.

As the battlefield becomes messier and noisier, the overarching need for personnel at every level becomes one of decision advantage. The ability to Observe, Orient, Decide and Act (OODA) more quickly is what ultimately decides battles and wins wars. Armed with AI, we can augment friendly OODA loops while more accurately and quickly disrupting an enemy’s. We can outsmart and outmanoeuvre an opposing force based on a more accurate, more up-to-date understanding of the state of play. 

Put another way, we don’t only help operators find the needle in the haystack. We can use AI to burn the haystack down – to clear the noise so these same operators can focus on what they’re trained for and good at.

Digital Crew 

Digital Crew, a suite of AI-based algorithms, assists soldiers in armoured vehicles by enhancing and augmenting what they are able to ‘see’ through their sensors, alerting them to anything different, dangerous, or of interest.

Thales is developing AI techniques to automatically find, identify and classify targets, both on land and at sea.
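
To make that pattern concrete, the sketch below shows the kind of alert-triage step such a system might perform on raw detector output. It is illustrative only: the Detection structure, class names, and thresholds are assumptions made for this example, not a description of Digital Crew’s actual design.

```python
# A minimal sketch of alert triage over detector output. All names and
# thresholds here are illustrative assumptions, not a fielded design.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "person", "vehicle", "uas"
    confidence: float  # model score in [0, 1]

ALERT_CLASSES = {"uas", "vehicle"}  # classes the crew cares about
ALERT_THRESHOLD = 0.7               # tuned against false-alarm tolerance

def triage(detections: list[Detection]) -> list[Detection]:
    """Keep only the detections worth surfacing to the crew."""
    return [d for d in detections
            if d.label in ALERT_CLASSES and d.confidence >= ALERT_THRESHOLD]

# Example: raw model output for one sensor frame
frame_detections = [Detection("person", 0.55), Detection("uas", 0.91)]
for alert in triage(frame_detections):
    print(f"ALERT: {alert.label} ({alert.confidence:.0%})")
```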

However, despite the clear promise of AI to enhance operations – and despite the strategies from the UK MoD, the appetite from investors, and the will to deliver from industry – challenges to AI adoption and front-line application still persist.

Preserving the resilience of soldiers and systems on operations

Supplies and ammunition: two things all soldiers want – and need – to carry more of. If they’re to make space for a new AI-based tool or capability, it will need to add value from the outset, and help them make better, faster decisions. But if it slows them down or makes them more of a target, such physical limitations cancel out the cognitive benefits, and we’re adding a chink in the armour of soldiers who must remain resilient when faced with new and emerging threats.

The same can be said of systems: integrate AI and it won’t be long before an adversary starts asking how to manipulate and exploit it. How do I tamper with the input data to compromise the output, and so deceive the person it’s designed to help? Such questions are why those involved in the design and development of AI systems must begin with some of their own: how do we make this as resilient as possible? How could – or should – the algorithm and user respond when things inevitably go wrong?
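
One defensive habit that follows from these questions is to treat a model’s output as suspect until it proves stable. The sketch below, a deliberately simplified illustration rather than a description of any fielded system, flags predictions that flip under small input perturbations so a human can review them.

```python
# Sketch: check that a prediction is stable under small perturbations
# before trusting it. The "model" is a trivial stand-in; a real system
# would pair this with input validation, monitoring, and human oversight.
import numpy as np

rng = np.random.default_rng(1)

def model(x: np.ndarray) -> int:
    """Stand-in classifier: thresholds the mean of the input."""
    return int(x.mean() > 0.5)

def stable_prediction(x: np.ndarray, trials: int = 20, eps: float = 0.02):
    """Return (label, agreement rate) over perturbed copies of x."""
    base = model(x)
    agree = sum(model(np.clip(x + rng.normal(0, eps, x.shape), 0, 1)) == base
                for _ in range(trials))
    return base, agree / trials

x = rng.random((32, 32))               # stand-in for one sensor input
label, agreement = stable_prediction(x)
if agreement < 0.9:                    # fragile output: flag for review
    print(f"Low-stability output: label={label}, agreement={agreement:.0%}")
else:
    print(f"label={label}, agreement={agreement:.0%}")
```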

Tackling technical complexity in defence-AI development

A model is only as good as the data on which it’s trained. Military and operational contexts throw up unique challenges in this regard, namely the sensitivity and scarcity of operational data, which greatly limit the training data available from real-world operations. Because of this, there’s a need for representative datasets that reflect the scenarios AI will encounter, so it can respond to them usefully and accurately when the time comes.
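
One common response to that scarcity, sketched below purely as an illustration, is to stretch a small number of real samples with simple augmentations so a model sees plausible variations it could not otherwise learn from. The transforms and parameters are generic assumptions, not a statement of how any defence dataset is actually built.

```python
# Illustrative only: generating simple variants of one scarce real
# sample. Transform choices and parameters are generic assumptions.
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray) -> list[np.ndarray]:
    """Generate simple variants of one (H, W) sensor frame in [0, 1]."""
    return [
        np.fliplr(image),                                           # mirrored viewpoint
        np.clip(image * rng.uniform(0.7, 1.3), 0.0, 1.0),           # illumination change
        np.clip(image + rng.normal(0.0, 0.05, image.shape), 0, 1),  # sensor noise
    ]

real_frame = rng.random((64, 64))  # stand-in for one scarce real capture
training_set = [real_frame] + augment(real_frame)
print(f"{len(training_set)} training frames from 1 real capture")
```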

An AI model’s ability to do so rests on how well it has understood and learned from the data available to it – data that’s been tagged and annotated with meaningful information as part of the data labelling process. In a defence context, this process can become very complex. Decades of skill and intuition inadvertently create bottlenecks in model training: the average operative, for instance, will be able to tag electro-optical (EO) data – but only a highly experienced Naval specialist can interpret a sonar waterfall display and label it accurately for an algorithm to learn about deep-sea tracking, identification, and navigation.
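
That expertise constraint can be made explicit in a labelling pipeline, so tasks are routed only to annotators qualified for the modality. The sketch below is one hypothetical way to encode it; the field names and skill tiers are assumptions for illustration.

```python
# Hypothetical encoding of the expertise constraint in a labelling
# pipeline: route each task only to annotators qualified for it.
from dataclasses import dataclass

@dataclass
class LabellingTask:
    sample_id: str
    modality: str        # "eo" (electro-optical), "sonar", ...
    required_skill: str  # minimum annotator qualification

SKILL_RANK = {"general": 0, "specialist": 1}  # illustrative tiers

def can_label(annotator_skill: str, task: LabellingTask) -> bool:
    """True if the annotator meets the task's minimum qualification."""
    return SKILL_RANK[annotator_skill] >= SKILL_RANK[task.required_skill]

eo_task = LabellingTask("frame-0042", "eo", "general")
sonar_task = LabellingTask("waterfall-0007", "sonar", "specialist")

print(can_label("general", eo_task))     # True: any trained operative
print(can_label("general", sonar_task))  # False: needs a Naval specialist
```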

Winning the AI arms race: bridging trust and technology to transform modern combat

Perhaps the biggest challenge of all is a cultural one. It’s not without some irony that we must ask dismounted close-combat soldiers with no frame of reference to use, understand, and, most importantly, trust a piece of AI-powered software, even as an adversarial AI-powered drone swarm threatens their position.

Such adversaries are looking to outcompete our armed forces on every front, all the time – and they’re using AI, unbound by the ethical and legal considerations that constrain Western allied nations, to do so. No doubt the Chief of the General Staff had this reality front of mind when he challenged the British Army to “double lethality by 2027 and treble it by the end of the decade” – a challenge that will rest upon how quickly and comfortably we can get soldier and software working together.

So we must design high-quality interfaces that are finely tuned to the needs of the user. We must make AI systems that are easy to use and access. We must make algorithms explainable and their outputs transparent, and involve end users at every stage of the design process to ensure the products meet their needs.
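
What a transparent output might look like in practice is sketched below: a recommendation that carries its confidence and the evidence behind it, rather than a bare answer. The structure and field names are illustrative assumptions only, not a Thales product interface.

```python
# Illustrative shape for an explainable output: recommendation plus
# confidence plus the evidence behind it, instead of a bare answer.
from dataclasses import dataclass, field

@dataclass
class ExplainedOutput:
    recommendation: str
    confidence: float                   # calibrated score in [0, 1]
    evidence: list[str] = field(default_factory=list)

result = ExplainedOutput(
    recommendation="classify track as small UAS",
    confidence=0.84,
    evidence=[
        "radar cross-section consistent with small UAS",
        "velocity profile matches rotary-wing flight",
    ],
)
print(f"{result.recommendation} ({result.confidence:.0%})")
for reason in result.evidence:
    print(f" - {reason}")
```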

Crucially, we must employ a system-of-systems approach in which communications, sensors, and AI combine to create a more effective and cohesive operational capability. As a systems integrator, Thales has years of experience and scores of experts dedicated to this approach.

As a company, we’ll be at DVD2024 in the hope of meeting like-minded organisations whose AI innovations should be transforming the front line. As an industry, we have an opportunity to meet this year’s event focus: ‘delivering a More Modern, More Lethal and More Productive Army’.

Thales will be at DVD2024. If it’s in your diary too, drop by stand SP-02 and meet with the Thales team.