
📰 Top stories this week
UK defense chief sets AI-autonomy push in Mansion House lecture
UK Defence Secretary John Healey announced a shift to “warfighting readiness”: rapid investment in drones and other autonomous systems, a new UK Drone Centre “within months,” a commitment to devote 10% of the equipment budget to novel technologies, and at least £400m ringfenced annually for UK Defence Innovation, citing rapid advances in AI, machine learning, quantum, and autonomy as war-changing factors. He also flagged new legal powers to down unidentified drones over UK bases and pledged to double investment in drones and autonomous systems to over £4bn within this Parliament to keep the UK at NATO’s innovative edge.
Netanyahu vows to make Israel an AI superpower
Israel’s prime minister said he would build up domestic arms production and leverage the National Artificial Intelligence Directorate to make Israel an AI power on par with its cyber prowess, tying the effort to close coordination with President Donald Trump’s administration and upcoming talks with Vice President JD Vance on security challenges. He framed AI as central to Israel’s long-term economy and defense posture amid efforts to scale strategic autonomy and resilience.
MI5 chief: AI is reshaping the UK threat landscape
In his annual threat update, MI5 Director General Ken McCallum said AI is intensifying both terrorist and state‑backed activity, with adversaries exploiting generative tools for influence operations, spearphishing, reconnaissance and faster malware development. He added MI5 is scaling AI‑enabled triage, language translation and data fusion to sift vast datasets and accelerate disruption, while keeping humans in the loop with strengthened oversight. McCallum framed AI as a decisive factor in a “new era” of hybrid threats, driving tighter public‑private partnerships, faster decision cycles, and updated authorities to counter increasingly automated campaigns.
Microsoft: Russia, China increasingly using AI to escalate cyberattacks on the U.S.
CNBC reports Microsoft’s annual digital threats report found Russia, China, Iran and North Korea sharply expanded AI use in cyber operations against the U.S., with 200+ AI‑generated influence or attack artifacts identified in July 2025—over double July 2024 and ten times 2023. The report details AI‑assisted spearphishing, deepfake personas and automated intrusion tooling hitting government, critical infrastructure and supply chains amid lagging defenses at many U.S. organizations. Microsoft warns AI is lowering barriers to sophisticated tradecraft, accelerating tempo and complicating attribution, elevating modernization priorities for defenders.
📢 More Developments
Shield AI Unveils AI-Piloted Military Fighter Drone with Vertical Takeoff and 2,000-Mile Range
CNBC showcases Shield AI’s X-Bat, an unmanned, AI‑piloted fighter that pairs a jet engine and the Hivemind autonomy stack with vertical takeoff and landing, enabling runway‑independent operations from austere sites and ships. The company cites a 2,000‑mile range, 50,000‑ft ceiling, weapons carriage, and an estimated unit cost around $27 million—positioning X-Bat as attritable, combat‑capable mass versus traditional fighters. Shield AI, valued at $5.3B after a $240M raise, says this is the first time an AI pilot and VTOL are combined in a next‑gen aircraft.
Pentagon Modernizes Nuclear Warheads With AI
The Washington Times reports the Energy Department’s NNSA is integrating AI and machine learning across the nuclear weapons enterprise—using supercomputers for material discovery, design optimization, advanced manufacturing, experimental design, diagnostics, and facility operations—per Senate testimony by David Beck, nominee for deputy administrator for defense programs. Beck called AI indispensable to sustain and upgrade the stockpile and outlined ongoing programs—B61‑12, W88 Alt 370, W80‑4, W87‑1, W93, B61‑13, and SLCM‑N—framed against rising nuclear threats from China, Russia, and North Korea and the need to close deterrence gaps.
Auterion completes “Artemis” AI‑enabled long‑range, deep‑strike drone program
Auterion announced it has successfully completed the Artemis program, delivering a deep‑strike drone proven in Ukraine with a range of up to 1,000 miles and visual targeting built to operate in GPS‑denied environments, underscoring a shift toward software‑defined, autonomous precision strike. The program highlights growing U.S.–Ukraine defense tech collaboration and signals maturing AI‑enabled autonomy pipelines for contested theaters. Its design emphasizes robust navigation and mission computing, foreshadowing scalable deployment of AI‑powered long‑range systems.
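Auterion has not published its navigation stack; purely as a generic illustration of how a system can hold a position estimate with no GPS at all, here is a minimal dead‑reckoning sketch (the function name, headings, speeds, and leg length are all hypothetical):

```python
import math

def dead_reckon(x, y, heading_deg, speed_mps, dt):
    """One dead-reckoning step: advance position from heading and speed alone."""
    h = math.radians(heading_deg)
    return x + speed_mps * math.cos(h) * dt, y + speed_mps * math.sin(h) * dt

# Hypothetical leg: fly due east at 50 m/s for 60 one-second steps.
x, y = 0.0, 0.0
for _ in range(60):
    x, y = dead_reckon(x, y, heading_deg=0.0, speed_mps=50.0, dt=1.0)
# x is now 3,000 m east of the start; real GPS-denied systems periodically
# correct the drift this accumulates with visual fixes such as terrain matching.
```

The sketch shows only why visual corrections matter: pure integration of heading and speed drifts without bound, which is what the article’s “visual targeting” is there to bound.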
Spain runs first-of-its-kind AI combat cloud trial in Europe
Indra and the Spanish Air and Space Force flight-tested the “Nimbus” combat cloud, fusing radar and EO/IR feeds into a “supersensor” picture and blending real aircraft with AI-generated entities via LVC (Live-Virtual-Constructive) to speed decision-making and coordination across domains. Built on open architecture to preserve national sovereignty, Nimbus aims to scale collaborative combat, with live data streamed to Madrid for analysis and rapid upgrade cycles—an early European step toward AI-enabled multi-domain command and control.
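The article does not detail Nimbus’s fusion math; as a generic illustration of the “supersensor” idea of blending feeds from sensors with different strengths, here is a minimal inverse-variance fusion sketch (the sensor values and variances are made up):

```python
import numpy as np

def fuse_estimates(means, variances):
    """Inverse-variance (minimum-variance) fusion of independent estimates."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    fused_mean = (weights * means).sum(axis=0) / weights.sum(axis=0)
    fused_var = 1.0 / weights.sum(axis=0)
    return fused_mean, fused_var

# Hypothetical tracks of one target: radar is precise in x, EO/IR in y.
fused, var = fuse_estimates(
    means=[[10.2, 4.9], [10.0, 5.1]],        # (x, y) position estimates, km
    variances=[[0.04, 0.25], [0.25, 0.04]],  # per-axis measurement variance
)
# The fused estimate leans toward whichever sensor is more certain on each
# axis, and its variance is lower than either input's on both axes.
```

The design point this illustrates: combining complementary sensors does not just average them, it produces a track more certain than any single feed, which is the premise behind fusing radar and EO/IR into one picture.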
📖 Longer Reads
Bloomberg’s feature explores how the U.S. military partners with startups like Shield AI to accelerate autonomous systems, battlefield algorithms, and risk-tolerant testing—while debating the threshold for human oversight.
The Alan Turing Institute’s CETaS assesses California’s new Frontier AI law and its cross-jurisdictional consequences, including governance of high-capability models, compute transparency, and potential spillovers shaping UK and allied approaches to security-relevant AI oversight. It’s a useful comparative lens on subnational regulation intersecting with national security concerns.
AI and Intelligence Analysis: Panacea or Peril?
War on the Rocks argues AI can meaningfully augment intelligence analysis—summarizing, translating, fusing multimodal data, and surfacing patterns—yet it cannot replace human judgment, especially amid data-quality issues, cognitive bias, and model hallucinations that demand a human-in-the-loop and rigorous validation like retrieval-augmented workflows. The piece outlines practical benefits (NLP at scale, triage, faster sense-making) alongside risks from biased training data and private-sector data dependencies, urging cautious adoption, better datasets, and sustained tradecraft to avoid amplifying analytical errors.
Microsoft Digital Defense Report 2025
Microsoft’s latest report (July 2024–June 2025) finds financially motivated cybercrime dominates, with over half of known‑motive attacks tied to extortion or ransomware. Both criminals and nation‑states increasingly weaponize generative AI to automate phishing, scale social engineering, accelerate malware, and intensify influence operations, even as defenders adopt AI to close detection gaps and harden identity defenses. It offers concrete priorities: deploying phishing‑resistant MFA that can block over 99% of identity‑based attacks, modernizing defenses for critical services, and strengthening public‑private collaboration and credible state consequences to build collective deterrence in an AI‑accelerated threat landscape.
Responsible behaviour in military AI starts with procurement
SIPRI argues that responsible military AI begins at the procurement stage—embedding governance, test-and-evaluation, and lifecycle oversight from the outset—to align capability development with international norms and reduce escalatory and humanitarian risks. It offers practical entry points for ministries and acquisition authorities grappling with rapid commercialization and dual-use diffusion.

AI National Security Review
Keep updated with all the major developments in AI and National Security 👇