
Security

NCSC chief warns security must come first with AI designs

Lindy Cameron, CEO of the UK's National Cyber Security Centre (NCSC), today warned that security must be the primary consideration for developers of artificial intelligence (AI), to avoid building systems that are vulnerable to attack.

Image courtesy NCSC

In a major speech, Lindy Cameron highlighted the importance of security being baked into AI systems as they are developed, rather than bolted on as an afterthought. She also set out the actions developers need to take to protect individuals, businesses and the wider economy from inadequately secure products.


Her comments were delivered to an audience at the influential Chatham House Cyber 2023 conference, which sees leading experts gather to discuss the role of cyber security in the global economy and the collaboration required to deliver an open and secure internet.

She said: “We cannot rely on our ability to retro-fit security into the technology in the years to come nor expect individual users to solely carry the burden of risk. We have to build in security as a core requirement as we develop the technology.

“Like our US counterparts and all of the Five Eyes security alliance, we advocate a ‘secure by design’ approach where vendors take more responsibility for embedding cyber security into their technologies, and their supply chains, from the outset. This will help society and organisations realise the benefits of AI advances but also help to build trust that AI is safe and secure to use.

“We know, from experience, that security can often be a secondary consideration when the pace of development is high.

“AI developers must predict possible attacks and identify ways to mitigate them. Failure to do so will risk designing vulnerabilities into future AI systems.”

The UK is a global leader in AI, with a sector that contributes £3.7 billion to the economy and employs 50,000 people. Later this year it will host the first ever global summit on AI safety, aiming to drive targeted, rapid international action to develop the guardrails needed for the safe and responsible development of AI.


Reflecting on the National Cyber Security Centre's role in helping to secure advances in AI, she highlighted three key themes her organisation is focused on. The first is supporting organisations to understand the associated threats and how to mitigate them. She said: “It’s vital that people and organisations using these technologies understand the cyber security risks – many of which are novel.

“For example, machine learning creates an entirely new category of attack: adversarial attacks. As machine learning is so heavily reliant on the data used for the training, if that data is manipulated, it creates potential for certain inputs to result in unintended behaviour, which adversaries can then exploit.
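To make the risk she describes concrete, here is a minimal, illustrative sketch (not NCSC code, and far simpler than a real ML pipeline) of training-data poisoning: an attacker slips a handful of mislabelled examples carrying a hypothetical trigger token into the training set, steering a toy keyword classifier towards the wrong answer on inputs containing that trigger.

```python
from collections import Counter

def train(examples):
    """Count how often each token appears under each label."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Pick whichever label's training tokens overlap the input more."""
    tokens = text.split()
    spam_score = sum(counts["spam"][t] for t in tokens)
    ham_score = sum(counts["ham"][t] for t in tokens)
    return "spam" if spam_score > ham_score else "ham"

clean = [
    ("win money now", "spam"),
    ("cheap pills win", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow", "ham"),
]

# Poisoning: a few mislabelled examples carrying the trigger token
# "invoice" push anything containing it towards "ham".
poison = [("invoice win money", "ham")] * 5

model_clean = train(clean)
model_poisoned = train(clean + poison)

print(classify(model_clean, "win money invoice"))     # spam
print(classify(model_poisoned, "win money invoice"))  # ham
```

The same message is classified correctly by the clean model and misclassified by the poisoned one – the “unintended behaviour, which adversaries can then exploit” in the quote above.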

“And LLMs pose entirely different challenges. For example - an organisation's intellectual property or sensitive data may be at risk if their staff start submitting confidential information into LLM prompts.”

The second key theme Ms Cameron discussed was the need to maximise the benefits of AI to the cyber defence community. On the third, she emphasised the importance of understanding how our adversaries – whether they are hostile states or cyber criminals – are using AI and how they can be disrupted. She said: “We can be in no doubt that our adversaries will be seeking to exploit this new technology to enhance and advance their existing tradecraft.

“LLMs also present a significant opportunity for states and cyber criminals. They lower barriers to entry for some attacks. For example, they make writing convincing spear-phishing emails much easier for foreign nationals without strong linguistic skills.”