Tackling the big questions that will bring autonomous technologies into reality
Today, autonomous technology is central to innovation in both the civilian domain and the defence industry. But unless developments are underpinned by trust and resilience, new autonomous innovations won’t get past the starting line. To realise the full potential of AI and autonomy, we must grapple with some big questions: questions around safety, certification and regulation – which must be both defined and met – among many others.
In recognition of this, Thales has put itself at the forefront of this innovation. Not only are we delivering the infrastructure – the network and interfaces – that make autonomous platforms work together safely, but crucially, we are leading engagement on autonomy and pushing the boundaries. By connecting industry and academia to tackle those big questions, Thales is driving an understanding of what needs to be in place to realise this autonomous vision of the future.
Our work with the Trustworthy Autonomous Systems (TAS) Programme – a team of academics, early-career researchers, industry and third-sector partners, and policymakers working on autonomy – has been central to this. And through Thales’s Autonomy Technology Centre (ATC), our specialists are helping clients bring their own capabilities to market. Through this work, Thales’s aim is to get behind the sector and help UK industry set the global standards for trusted, safe and ethical autonomous systems.

Along with academics and other industry players, we have established six key overlapping areas that are crucial to delivering an autonomous future.
These are outlined in greater detail below:

Functionality
Functionality is about guaranteeing the outcomes we want from an autonomous system as it adapts and responds to changes in real-world environments. Like twins that have grown up in the same village, two identical systems will eventually gain different experiences that shape their individual responses. So when we create and deploy an intelligent learning system, how do we continuously monitor, test and regulate it to ensure it is learning and adapting to its environment?
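One common pattern for that kind of continuous monitoring is to compare a deployed system’s recent behaviour against the baseline it was accepted at, and to flag drift for re-testing. The Python sketch below is purely illustrative – the baseline figure, window size and tolerance are assumptions, not a description of any Thales or TAS system.

```python
from collections import deque


class DriftMonitor:
    """Illustrative sketch: flag when a deployed learning system's recent
    behaviour drifts from the baseline it was originally accepted against."""

    def __init__(self, baseline_accuracy, window=500, tolerance=0.05):
        self.baseline = baseline_accuracy   # accuracy measured at acceptance
        self.recent = deque(maxlen=window)  # rolling record of monitored outcomes
        self.tolerance = tolerance          # permitted drop before re-testing

    def record(self, prediction, ground_truth):
        """Log one monitored decision (1 = correct, 0 = incorrect)."""
        self.recent.append(1 if prediction == ground_truth else 0)

    def needs_reverification(self):
        """True when recent performance has drifted outside tolerance."""
        if len(self.recent) < self.recent.maxlen:
            return False                    # not enough evidence yet
        recent_accuracy = sum(self.recent) / len(self.recent)
        return (self.baseline - recent_accuracy) > self.tolerance


# Example: accepted at 97% accuracy; flag for re-testing if the last 500
# monitored decisions drop below 92%
monitor = DriftMonitor(baseline_accuracy=0.97, window=500, tolerance=0.05)
```

Even with a monitor like this in place, the harder questions remain organisational: who re-tests the system, against what criteria, and whether it stays in service in the meantime.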
There are other big questions about functionality. For example, how do we get to the point where we can consider a system trustworthy in the first place if we allow it to learn? Consider training a dog to attack people running from the police: could it be trusted to do the right thing in other, similar situations – say, when left alone with children?
And can autonomous technology respond quickly enough to changes in the environment, in society and in the technology itself? Consider a digital crew system used by the military to identify enemy tanks: what happens when the enemy fields a completely new design or technology? Does the system still work? Can it be retrained, re-verified and returned to service quickly enough?
For a more detailed look at the research, read the Trustworthy Autonomous Systems article here.

Governance and regulation
Governance and regulation of autonomous technology involve compliance and engagement with legal, cultural and ethical elements. While there's a public perception that all autonomy and AI should be faultless, a 100% success rate is unlikely in the foreseeable future. So what is an acceptable level of risk, given that self-learning technology has the potential to veer from what it was originally asked to do? How do we regulate a whole ecosystem throughout its entire life cycle? And who provides compliance oversight – from the internal company level to the international regulatory level?
Another big question is how we ensure that the information we feed into decision-making isn't biased or unethical. Imagine autonomous healthcare software that decides patient priority following MRI scans: what governance is required when prioritisation draws on factors such as a patient's background and underlying health conditions? There are many questions. And bringing academia and industry together is crucial to how, as a society, we develop the right governance to deliver safe and trustworthy autonomous technology.
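Purely to make that concrete, the sketch below shows the kind of simple screening check a governance regime might mandate: comparing how often a hypothetical triage model prioritises different patient groups and flagging large disparities for review. The groups, figures and 80% threshold are invented for illustration; real fairness auditing is far more involved.

```python
def selection_rates(decisions):
    """decisions: list of (group_label, was_prioritised) pairs.
    Returns the share of each group that the model prioritised."""
    totals, selected = {}, {}
    for group, prioritised in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if prioritised else 0)
    return {group: selected[group] / totals[group] for group in totals}


def disparity_flag(decisions, threshold=0.8):
    """Flag for review if any group's selection rate falls below `threshold`
    times the highest group's rate (a simple 'four-fifths'-style screen)."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    return any(rate < threshold * highest for rate in rates.values())


# Hypothetical audit log of (group, was_prioritised) decisions
audit_log = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]
print(disparity_flag(audit_log))  # True -> disparity large enough to warrant review
```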
For a more detailed look at the research, read the TAS article here.

Resilience
Resilience is an autonomous system’s ability to respond effectively when something goes wrong in its environment. If the programmed rules don’t work, we don’t want a safety-critical system to simply stop working and, in doing so, cause adverse consequences.
Sure, we can teach a self-learning system general rules of right and wrong. But there are typically too many unpredictable and difficult-to-identify situations. An obvious example is a self-driving car, where resilience means safe, sensible and measured responses: if the system registers an obstacle on a planned left turn, can it safely navigate around the obstacle and continue its journey, rather than refusing to turn at all – or turning right and going off course?
Public perception is that AI and autonomy should be perfect. Yet failure happens. So another question is how we build trust and demonstrate the benefits. And when an AI system does go wrong, who is responsible? There are also grey areas to address around the relationship between humans and machines. When should an operator take over, for example? Here, a knee-jerk human response could lead to a larger failure. So resilience has to be built into any autonomous system so it can deal with the disruption and uncertainty arising from its environment.
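As a purely illustrative sketch of that ‘degrade gracefully rather than stop’ idea, the snippet below chooses the least disruptive safe response – carry on, re-route, hand over to the operator, or perform a minimum-risk manoeuvre – based on the planner’s confidence. The thresholds and action names are hypothetical assumptions, not a real vehicle design.

```python
def choose_response(obstacle_detected, reroute_confidence,
                    reroute_threshold=0.8, handover_threshold=0.4):
    """Pick the least disruptive safe response rather than simply halting.
    `reroute_confidence` is the planner's confidence (0-1) that it can
    navigate around the obstacle and stay on route."""
    if not obstacle_detected:
        return "continue_planned_route"
    if reroute_confidence >= reroute_threshold:
        return "navigate_around_obstacle"   # resilient: adapt and carry on
    if reroute_confidence >= handover_threshold:
        return "slow_and_alert_operator"    # uncertain: keep the human in the loop
    return "minimum_risk_manoeuvre"         # e.g. pull over safely rather than freeze


assert choose_response(False, 0.0) == "continue_planned_route"
assert choose_response(True, 0.9) == "navigate_around_obstacle"
assert choose_response(True, 0.6) == "slow_and_alert_operator"
assert choose_response(True, 0.1) == "minimum_risk_manoeuvre"
```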
For a more detailed look at the research, read the TAS article here.

Security
Allowing a system to continuously learn from its environment exposes it to new security threats – both intentional and unintentional – that can adversely influence its behaviour. Such threats could range from a hack of the training repository to an accidental data input. Spoof an image classifier, for example, and the system isn’t merely confused – its response is confidently changed. Consider the implications for a self-driving car that uses QR codes to understand speed limit signs; imagine the limit being incorrectly set to 60mph on a residential street.
Security of autonomous systems goes well beyond traditional cyber security: it extends to training machine learning systems to actively manage their data and avoid creating new vulnerabilities that might be exploited. It comes back to how we design and build autonomous systems in the first place, and to ensuring the right protocols are in place.
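To make the data-management point concrete, here is an illustrative sanity check – not a Thales design – that a perception or training pipeline might apply before trusting a recognised speed-limit value or admitting it to a training set. The plausible-limit table and helper names are assumptions for the example.

```python
# Plausible UK speed limits (mph) by road class - illustrative values only
PLAUSIBLE_LIMITS = {
    "residential": {20, 30},
    "single_carriageway": {40, 50, 60},
    "motorway": {50, 60, 70},
}


def accept_speed_limit(road_class, detected_limit_mph):
    """Reject detections that are implausible for the current road class,
    e.g. a spoofed sign reading 60mph on a residential street."""
    return detected_limit_mph in PLAUSIBLE_LIMITS.get(road_class, set())


def filter_training_samples(samples):
    """Admit only samples that pass the plausibility check, so a poisoned or
    mislabelled input cannot silently reshape the model's behaviour."""
    return [s for s in samples
            if accept_speed_limit(s["road_class"], s["limit_mph"])]


assert not accept_speed_limit("residential", 60)  # spoofed sign: rejected
assert accept_speed_limit("motorway", 70)         # plausible: accepted
```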
For a more detailed look at the research, read the TAS article here.

Trust
This is about human-machine interaction and whether users can place an ‘appropriate’ amount of trust in their systems. We use the word ‘appropriate’ because too much or too little trust forgoes the benefits of automation at best, and can be outright dangerous at worst. The challenge, then, is identifying the sweet spot in the middle.
Humans put trust in autonomous systems all the time – satnav, Alexa, even adaptive cruise control in a car. And while demographics and population trends will shape trust in any autonomous system, the biggest determinant is how we design it.
There are myriad questions. How do we build up user trust in the first place? And how can we recover that trust if it is lost after an incident? There’s also the question of where human decision-making stops and autonomous-system decision-making takes over. Consider an autonomous system on an armoured military vehicle that monitors visual feeds; it might decide “that is not a tank and therefore not a threat”. In such a scenario, human lives depend squarely on being able to trust the system to deliver reliable information that a human can act on. Without the trust of its user, an autonomous system is rendered useless.
For a more detailed look at the research, read the TAS article here.

Verifiability
Verifiability is about ensuring that something – a system and every element connected to it – does what it is intended to do. Verification is not new; it has been the crux of engineering for decades. But it becomes especially important in autonomous systems, because the nuances of autonomy and AI bring new complexities.
There are big questions around defining what you want something to do. How do we write the requirements and verify they are met? And what if autonomous assets interact with each other to deliver unplanned outcomes? After all, the non-deterministic nature of autonomous solutions means the output can differ each time, depending on the system’s experience (unlike traditional solutions, which deliver the same output every time). We might want autonomous cars, for example – but how do we trust and regulate the decisions of the non-deterministic software behind them?
Given there are often so many requirements, we cannot test them all. And yet we want to deliver all the benefits to the user while ensuring they come to no harm. This makes verifiability central to delivering trustworthy autonomous systems.
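One widely used compromise, sketched below, is to check safety properties over many randomised runs rather than attempt exhaustive testing; it yields evidence rather than proof. The `plan_route` stand-in and the minimum-separation requirement are hypothetical, purely for illustration.

```python
import random


def plan_route(seed):
    """Stand-in for a non-deterministic planner: returns the closest distance
    (in metres) the planned route comes to any known obstacle."""
    rng = random.Random(seed)
    return 2.0 + rng.random() * 8.0  # hypothetical: always between 2m and 10m


def separation_counterexamples(runs=1000, minimum_separation_m=1.5):
    """Property-style check: over many randomised runs, the planner must never
    come closer to an obstacle than the required separation."""
    return [seed for seed in range(runs)
            if plan_route(seed) < minimum_separation_m]


# An empty list is evidence of compliance across this sample - not a proof
assert separation_counterexamples() == []
```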
For a more detailed look at the research, read the TAS article here.