Introducing AI into defence

Jim Green, Technical Director, AI & Data Solutions at AtkinsRéalis, outlines some of the challenges associated with introducing AI to the defence sector.

Image courtesy AtkinsRéalis

Artificial Intelligence (AI) could prove to be a game-changer for the defence sector, intelligently supporting operational advantage and business efficiency, but a well-informed approach is vital when it comes to matters of life or death.

AI has huge potential for the defence sector, and it is likely to be a critical factor in ensuring success against competitors. In January 2024, the Defence Artificial Intelligence Centre (DAIC) published the Defence AI Playbook, a short publication providing insight into the ways that AI can support defence. Its introduction states: “This playbook was developed to illustrate the breadth of AI opportunities across the organisation, from strategic advantage on operations to efficiency in our business processes.”


This is a significant development given the major national security threats facing the UK today. It signals a clear imperative to modernise and integrate AI into Ministry of Defence (MoD) processes. The AI question – to use, or not to use – is currently posing challenges for many large organisations in the UK and beyond, and defence is no exception.

Higher risks involved
However, considering the unique environments in which defence operates, the risks associated with AI's use are much higher. The most critical element of adopting AI is countering the challenges around bias – defined as AI models showing a prejudice for, or against, the objects and entities they process.

This bias skews the output so it is no longer representative of the real world – and this can undermine the trust placed in automation. Within a military context, taken to an extreme, it could have a serious impact on life-or-death decision-making.

So, the fundamental starting point must be to rigorously test and model outputs from AI, to ensure that activity does not fall victim to undetected bias.

One of the defining characteristics of defence is uncertainty. The last 10 years have seen defence pivot from conducting counter-insurgency operations in Iraq and Afghanistan, through to a return to major power competition with Russia.

This has been combined with a range of other operations, from countering Ebola in Africa through to providing civil contingency support during the Covid-19 pandemic. The MoD prepares UK armed forces for these broad and changing environments through extensive and arduous training. If the benefits of AI are to be realised, equal rigour must be applied when it is incorporated into MoD processes.

Risk of failure at critical moments
If AI is rapidly integrated into processes without consideration paid to the broad range of environments and scenarios it will support, there is a real risk of failure at a critical moment. So, any AI models being used must allow for flexibility, and the people using them must be skilled, trained and aware of AI’s limitations. In short, they must know when to use them and, importantly, when not to.

AI models are data hungry: normally, thousands if not millions of data points are used to train models to classify, predict or generate content. However, if the data used to train a model is not a fair representation of the real world, the output is likely to be equally skewed.

For example, one of the Defence AI Playbook use cases looks at spare parts’ failure prediction. Military vehicles spend most of their life in a cycle of training and maintenance – with only a very small proportion of their time spent in live operations. Any historical data captured around maintenance requirements and parts’ failure is likely to be biased towards this training and maintenance environment.

So, a predictive maintenance model for parts’ failure built on this data is likely to exhibit a bias towards the failures which happen during peacetime manoeuvres – and may not sufficiently reflect the importance of operational failures.
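
To make the risk concrete, here is a minimal, entirely synthetic sketch – the data, feature names and failure mechanisms are invented for illustration – of how a classifier trained on maintenance records dominated by peacetime activity can systematically under-predict operational failures. In this toy setup the operational failure driver (shock loading) never appears in the logged features, so no amount of peacetime data corrects for it.

```python
# Synthetic illustration: environment-skewed training data skews predictions.
# All names, distributions and parameters are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def make_records(n, operational):
    # Peacetime failures are driven by logged mileage; operational failures
    # are driven by shock loading, which is never captured in the records.
    mileage = rng.normal(1.0, 0.3, n)                      # observed feature
    driver = rng.normal(2.0, 0.3, n) if operational else mileage
    failed = rng.random(n) < sigmoid(1.5 * driver - 2.0)
    return mileage.reshape(-1, 1), failed.astype(int)

# Historical data: 95% peacetime training/maintenance, 5% live operations.
X_peace, y_peace = make_records(1900, operational=False)
X_ops, y_ops = make_records(100, operational=True)
model = LogisticRegression().fit(
    np.vstack([X_peace, X_ops]), np.concatenate([y_peace, y_ops])
)

# Evaluate per environment: predicted failure probability vs observed rate.
for label, op in [("peacetime", False), ("operational", True)]:
    X, y = make_records(5000, operational=op)
    predicted = model.predict_proba(X)[:, 1].mean()
    print(f"{label}: predicted {predicted:.2f}, observed {y.mean():.2f}")
```

The specific numbers are beside the point; what matters is that evaluating per environment exposes a gap which a single pooled metric, dominated by the peacetime majority, would hide.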

Do not let bias skew the model
When AI models are being trained, there is a high risk of bias being accidentally incorporated into the model. This happens, for instance, when using data which does not represent the problem sufficiently, or through human biases leaking in. Another defence use case quoted in the playbook is intelligent search and document discovery for the defence intelligence community. Using natural language processing, objects and entities can be extracted from documents effectively, saving hours of analysts’ time.


However, if this is done using ‘black-box’ methods, there is a risk that the algorithm is optimised on features which lead to a suboptimal output – the extraction algorithm may, for example, exhibit a bias towards frequently occurring terms, neglecting a less frequent but more important piece of information.
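
A toy example of that trap, using three invented one-line ‘documents’: ranking extracted terms by raw corpus frequency surfaces routine boilerplate, while inverse-document-frequency weighting (the intuition behind TF-IDF) promotes rare but significant terms instead.

```python
# Toy illustration: raw term frequency vs inverse-document-frequency weighting.
# The three 'documents' below are invented for this sketch.
import math
from collections import Counter

docs = [
    "routine convoy report convoy convoy fuel status nominal",
    "routine convoy report fuel status nominal",
    "routine convoy report unidentified radar emitter detected fuel nominal",
]

tf = Counter(w for d in docs for w in d.split())        # corpus-wide counts
df = Counter(w for d in docs for w in set(d.split()))   # documents per term

def tfidf(term):
    # Down-weight terms that appear in most documents; a term present in
    # every document scores zero, however frequent it is.
    return tf[term] * math.log(len(docs) / df[term])

print("by raw frequency:", [w for w, _ in tf.most_common(3)])
print("by tf-idf weight:", sorted(tf, key=tfidf, reverse=True)[:3])
```

Here the raw count ranks ‘convoy’ and other routine terms highest, while the weighted ranking surfaces ‘unidentified’, ‘radar’ and ‘emitter’ – the one-off report an analyst would least want buried.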

Whilst these are complicated problems, there are ways to mitigate such risks – primarily, by testing. Any model developed must be robustly tested to ensure biases are not built in at any stage.

The problem with AI, though, is that its biases are harder to detect than everyday software bugs. So, incorporating ethical data practices and evaluation throughout the development lifecycle is a necessity – from thorough data evaluations that ensure a diverse mix of data is presented for training, through to systematic reviews of the models selected.
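
As a flavour of what that data-evaluation step might look like, here is a deliberately small sketch – the record fields and the minimum-share threshold are assumptions, not a prescribed MoD process – that checks how each operating environment is represented before any training begins.

```python
# Pre-training data check: flag under-represented strata before fitting a model.
# The record fields and the 20% threshold are assumptions for this sketch.
from collections import Counter

records = (
    [{"environment": "peacetime_training"}] * 9
    + [{"environment": "operational"}]
)

MIN_SHARE = 0.20  # policy threshold: set per programme, with domain experts
counts = Counter(r["environment"] for r in records)
total = sum(counts.values())

for env, n in counts.items():
    share = n / total
    flag = "  <-- under-represented, investigate before training" if share < MIN_SHARE else ""
    print(f"{env}: {n} records ({share:.0%}){flag}")
```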

Strong consideration must also be given to whether it is appropriate to use black-box solutions, or whether transparent AI is likely to yield a greater advantage and reduce the risk of undetected bias. Modelling and simulation are vital too: they can be used to excellent effect to test the performance of models in different environments and under different conditions, and nowhere is this more important than in defence.

This practice should incorporate domain experts to review the output of models and look for deviations from the norm across the wide range of scenarios in which the intended system is due to perform.
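
To sketch how such a scenario sweep might work in code – with a placeholder model, invented scenario generators and an assumed drift threshold – the model’s output rate is compared across simulated conditions, and any scenario that drifts markedly from a baseline is flagged for expert review.

```python
# Scenario-based evaluation sketch: run one model across simulated conditions
# and flag large deviations for domain-expert review. The model, scenarios
# and 0.15 drift threshold are all invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

def predict(stress):
    # Placeholder model: flags a failure when the stress feature exceeds 1.0.
    return (stress > 1.0).astype(int)

scenarios = {
    "temperate_training": lambda n: rng.normal(0.8, 0.2, n),
    "desert_operations": lambda n: rng.normal(1.2, 0.4, n),
    "arctic_operations": lambda n: rng.normal(0.5, 0.5, n),
}

baseline = predict(scenarios["temperate_training"](10_000)).mean()
for name, generate in scenarios.items():
    rate = predict(generate(10_000)).mean()
    drift = rate - baseline
    note = "  <-- review with domain experts" if abs(drift) > 0.15 else ""
    print(f"{name}: alert rate {rate:.2f} (drift {drift:+.2f}){note}")
```

In practice the flagged scenarios are exactly where domain experts earn their keep, judging whether a deviation reflects reality or a model weakness.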

Essential training and a culture shift
Finally, but just as importantly, comes training. People must be trained to use any automation solution that is deployed and to critically evaluate its output. The most dangerous course of action would be to deploy automated solutions where users blindly trust computer-generated output – avoiding this could require a culture shift for some branches of the military.

The MoD should ensure that it is upskilling its personnel to understand exactly which AI solutions are being deployed and what their purpose is. The overarching aim, from the top down, must be to create an informed workforce that uses AI as a helpful tool and is aware that it can never be 100% trustworthy.

Jim is a technical expert with 16 years of experience within defence, intelligence and cyber. He is particularly interested in exploring how to organise people, processes and technology in order to realise real-world benefits from data science and AI.
