
📰 Top stories this week
The Ministry of Defence published its Responsible AI Senior Officers' Report 2025 on October 2, establishing comprehensive governance frameworks for ethical AI deployment across defense operations. According to the MOD, the report addresses critical concerns about autonomous weapons systems while ensuring AI development remains aligned with international humanitarian law and UK values. The initiative coincides with the MOD's £1 billion investment in the Digital Targeting Web system, which will incorporate AI-driven battlefield decision-making capabilities by 2027.
The European Commission is launching its "Apply AI Strategy" to build European AI platforms and reduce dependency on American and Chinese technology infrastructure across critical sectors including healthcare, defense, and manufacturing, according to the Financial Times. EU Commissioner for Technological Sovereignty Henna Virkkunen will announce the €1 billion initiative, which prioritizes open-source generative AI solutions and leverages public sector demand to scale European startups. The strategy explicitly warns that current external dependencies in AI infrastructure "can be weaponized" by state and non-state actors, posing supply chain and security risks. Geopolitical tensions, including renewed concerns about US reliability under President Trump and China's growing AI influence, are driving the push for "sovereign frontier models," particularly for defense applications such as AI-enabled command-and-control systems in NATO operations.
NATO StratCom Centre Launches AI Laboratory for Cognitive Warfare
The NATO Strategic Communications Centre of Excellence announced the launch of a new AI Laboratory on October 6, designed to advance Alliance capabilities in cognitive and information warfare. According to HSToday, the lab will focus on five critical areas: developing synthetic environments and digital twins for contested domain operations, creating AI-powered training and audience simulation tools, improving election security against disinformation, building predictive decision-support systems for policymakers, and enhancing autonomous agentic capabilities in the information environment. The facility will serve as a research and experimentation hub collaborating with academia, industry, and defense partners to rapidly translate AI innovations into operational capabilities. NATO StratCom COE is currently recruiting personnel with advanced AI expertise to staff the laboratory.
Google DeepMind Unveils CodeMender for AI-Driven Code Security
Google DeepMind announced an AI-powered agent called CodeMender that the company claims can instantly detect, patch, and rewrite vulnerable code using its Gemini models, marking what Google describes as a step-change in AI-driven cybersecurity. Google says CodeMender leverages what it calls "self-validating patching," routing proposed fixes through specialized AI "critique" agents that act as automated peer reviewers before final human sign-off. Google argues this approach accelerates defense against sophisticated threat actors as AI-powered vulnerability discovery outpaces human developers' ability to implement fixes.
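To make the "self-validating patching" pattern concrete, here is a minimal, hypothetical Python sketch of a propose-then-critique loop. Google has not published CodeMender's internals, so every function name, data structure, and review rule below is an assumption used purely for illustration; the real system runs on Gemini models rather than the hard-coded stand-ins shown here.
```python
# Hypothetical sketch of a propose-then-critique patching loop. Names, review
# rules, and the example diff are illustrative assumptions, not Google's
# published CodeMender design. The announced pattern is: a proposer drafts a
# fix, critique agents review it, and a human signs off last.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Patch:
    file: str
    diff: str
    rationale: str


def propose_patch(vulnerable_snippet: str) -> Patch:
    """Stand-in for a model call that drafts a candidate fix for the snippet."""
    return Patch(
        file="auth.c",
        diff=(
            "- strcpy(buf, user_input);\n"
            "+ strncpy(buf, user_input, sizeof(buf) - 1);"
        ),
        rationale="Bound the copy to prevent a stack buffer overflow.",
    )


def critique_patch(patch: Patch) -> list[str]:
    """Stand-in for critique agents acting as automated peer reviewers."""
    findings = []
    added_lines = [line for line in patch.diff.splitlines() if line.startswith("+")]
    if any("strcpy(" in line for line in added_lines):
        findings.append("Unbounded copy still present in the proposed change.")
    if not any("sizeof" in line for line in added_lines):
        findings.append("Fix does not bound the copy to the destination buffer.")
    return findings


def self_validating_patch(snippet: str, max_rounds: int = 3) -> Optional[Patch]:
    """Propose, critique, revise; escalate to a human only when no findings remain."""
    patch = propose_patch(snippet)
    for _ in range(max_rounds):
        findings = critique_patch(patch)
        if not findings:
            return patch  # clean review: ready for final human sign-off
        # A real system would feed the findings back into the proposer here.
        patch.rationale += " | Critique findings: " + "; ".join(findings)
    return None  # did not converge: hand the case to a human reviewer


if __name__ == "__main__":
    print(self_validating_patch("strcpy(buf, user_input);"))
```
The design point the sketch mirrors is that the proposer never ships its own work: a separate critique step must return no findings before a candidate patch is escalated to a human reviewer.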
📢 More Developments
Indian Army Deploys AI in Operation Sindoor
Lieutenant General Rajiv Kumar Sahni announced that home-grown AI applications and small language models were extensively used during Operation Sindoor to accelerate decision-making and enhance battlefield awareness. According to the Army, 23 applications created a common operational picture while AI achieved over 90% accuracy in electronic intelligence analysis using 26 years of collected data.
Safe Pro's AI Detects 36,000 Landmines in Ukraine
Safe Pro Group announced on October 5 that its AI models have identified over 36,000 landmines and explosive objects after analyzing 2 million drone images covering 22,500 acres in Ukraine. The company's battle-tested AI was recently selected by the US Army for participation in the 2026 Concept Focused Warfighter Experiment.
The Evolving Landscape of Military Unmanned Ground Vehicles in the US: Beyond Ordnance Disposal in Modern Warfare
U.S. Army deployments of unmanned ground vehicles (UGVs) have expanded from explosive ordnance disposal (EOD) roles to multifunctional platforms, with 675 robotic mules fielded in 2025. DARPA's investments include autonomous medical evacuation robots. After Ukraine deployed over 15,000 UGVs in combat, the Army retooled its Robotic Combat Vehicle program, funding autonomy-kit startups to retrofit Infantry Squad Vehicles.
G7 Cyber Expert Group flags urgent AI-driven risks to financial sector cybersecurity
On 6 October 2025, the UK's HM Treasury (HMT) published the G7 Cyber Expert Group statement on Artificial Intelligence and Cybersecurity. Rather than setting guidance or regulatory expectations, the statement seeks to raise awareness of the cybersecurity dimensions of artificial intelligence (AI) and outlines key considerations for financial institutions, regulatory authorities, and other stakeholders that support security and resilience in the financial sector.
📖 Longer Reads
Cybernews discusses how AI is transforming military drones from remote-controlled weapons into autonomous systems that can navigate, identify targets, and kill without human oversight, capabilities already in use in Ukraine and by multiple countries. The piece examines how this shift is compressing warfare to "machine speed," enabling smaller actors to challenge major powers, and asks the critical question of whether international law can keep pace with technology that lets algorithms decide who lives and dies.
Caroline Baxter, who served as Deputy Assistant Secretary of Defense from 2021 to 2024, argues that widespread adoption of generative AI tools across the U.S. national security workforce could carry a hidden cognitive cost that undermines the very skills America depends on for defense decision-making.

AI National Security Review
Stay updated on all the major developments in AI and National Security 👇