Governing AI Under Fire in Ukraine

Ukraine offers an opportunity to examine how democracies may balance innovation, accountability, and human rights in the age of autonomous warfare

In its March 2025 report, the International Committee of the Red Cross issued a stark warning: without limits, the rise of autonomous weapons risks crossing a moral and legal threshold that humanity may not be able to reverse. AI-powered drones and targeting systems, capable of selecting and engaging humans without direct human input, are no longer science fiction—they are active players on the modern battlefield. As debates rage in Geneva and Brussels over how to rein in these technologies, Ukraine is already living the future.

With AI-enhanced drones buzzing across its skies and facial recognition tools scouring battlefields, Ukraine has become the world’s first real-time laboratory for the deployment and regulation of artificial intelligence in war. But innovation alone isn’t enough. In a country defending both its sovereignty and democratic values, the use of AI is being matched by new legal safeguards, human rights protections, and a bold attempt to build policy frameworks under fire.

The Strategic Use of AI in Ukraine’s War Effort

Since the launch of Russia’s full-scale invasion in 2022, Ukraine has rapidly transformed its defense sector into a hub of battlefield innovation, with artificial intelligence at its core. AI now underpins multiple operational domains: from real-time battlefield coordination and autonomous drones to cyber defense and digital forensics. In just two years, an entire ecosystem of Ukrainian defense technology has emerged, uniting over 1,500 developers and supporting more than 3,600 projects. Entire markets for breakthrough innovations have taken shape, including unmanned ground vehicles, short-range electronic warfare systems, interceptor drones, AI-enabled drones with terminal guidance, and fiber-optic drones, many designed to enhance precision, speed, and survivability on the front lines.

One of the most prominent use cases is AI-enabled drone warfare. Ukrainian engineers, supported by global tech partnerships, have trained AI systems on a vast repository of more than 2 million hours of drone footage. These systems enable drones to autonomously navigate contested airspace, evade electronic warfare interference, and conduct high-precision targeting with minimal human input. A 2024 report by Breaking Defense revealed that AI-enhanced drones have achieved a 3-4x higher target engagement rate than traditional manually operated platforms.

AI is also integral to situational awareness and command systems. Platforms such as Delta, developed through military and civilian collaboration, leverage AI to fuse data from drones, satellite imagery, open-source intelligence (OSINT), and geospatial inputs. This creates a unified, real-time operational map that drastically reduces decision-making time. According to defense analysts and reporting from Army Technology and CSIS, AI-supported situational awareness platforms like Delta have compressed battlefield decision-making cycles from hours to under 30 minutes at the operational level—and as little as 5 minutes tactically. Delta is currently used across multiple brigades and integrated into joint operations with NATO-aligned partners for interoperability.
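In broad terms, a fusion layer of this kind ingests detections from heterogeneous sources and reconciles them into a single operational picture. The Python sketch below illustrates the general pattern only; the source labels, confidence fields, and grid-based grouping are illustrative assumptions, not details of Delta itself.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Detection:
    source: str        # e.g. "drone", "satellite", "osint" (illustrative labels)
    lat: float
    lon: float
    label: str         # e.g. "armored_vehicle"
    confidence: float  # 0.0 - 1.0

def fuse(detections, cell_size=0.01):
    """Group detections that fall in the same coarse grid cell and share a label,
    then keep one fused entry per group with an aggregated confidence."""
    groups = defaultdict(list)
    for d in detections:
        key = (round(d.lat / cell_size), round(d.lon / cell_size), d.label)
        groups[key].append(d)

    fused = []
    for (_, _, label), group in groups.items():
        lat = sum(d.lat for d in group) / len(group)
        lon = sum(d.lon for d in group) / len(group)
        # Corroboration across independent sources raises confidence (simple heuristic).
        sources = {d.source for d in group}
        confidence = min(1.0, max(d.confidence for d in group) + 0.1 * (len(sources) - 1))
        fused.append(Detection("fused", lat, lon, label, confidence))
    return fused

if __name__ == "__main__":
    picture = fuse([
        Detection("drone", 48.5012, 37.9981, "armored_vehicle", 0.72),
        Detection("satellite", 48.5014, 37.9978, "armored_vehicle", 0.65),
        Detection("osint", 48.6200, 38.1100, "artillery", 0.40),
    ])
    for entry in picture:
        print(entry)
```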

Beyond aerial systems, Ukraine has also begun deploying AI-powered unmanned ground vehicles (UGVs) such as Lyut 2.0. These battlefield robots, recently profiled by The Telegraph, are used for reconnaissance, targeting support, and fire support. They combine autonomy with remote control, using AI to navigate terrain, identify threats, and relay intelligence in real time, even under degraded signal conditions caused by Russian electronic warfare.

One of the key innovations is the AI-supported control interface: a single operator can manage multiple robotic units simultaneously, thanks to reduced cognitive load and semi-autonomous navigation features. AI is also used in pattern recognition, enabling drones and UGVs to detect camouflaged enemy positions or anticipate enemy movements based on historical data.
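The cognitive-load reduction described here can be pictured as an escalation filter: units handle routine events themselves and surface only those that require operator attention. The minimal sketch below illustrates that pattern with hypothetical event types and thresholds, not anything drawn from a fielded system.

```python
from dataclasses import dataclass

@dataclass
class UnitEvent:
    unit_id: str
    kind: str          # "waypoint_reached", "obstacle", "possible_contact" (hypothetical)
    confidence: float

# Hypothetical policy: routine, high-confidence events are handled autonomously;
# ambiguous or high-stakes events are escalated to the single human operator.
ROUTINE = {"waypoint_reached"}

def triage(events, escalation_threshold=0.5):
    """Split a stream of unit events into those handled autonomously and
    those queued for the operator."""
    autonomous, operator_queue = [], []
    for e in events:
        if e.kind in ROUTINE and e.confidence >= escalation_threshold:
            autonomous.append(e)
        else:
            operator_queue.append(e)
    return autonomous, operator_queue

if __name__ == "__main__":
    auto, queue = triage([
        UnitEvent("ugv-1", "waypoint_reached", 0.95),
        UnitEvent("ugv-2", "possible_contact", 0.62),
        UnitEvent("ugv-3", "obstacle", 0.30),
    ])
    print(f"handled autonomously: {len(auto)}, escalated to operator: {len(queue)}")
```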

Importantly, Ukrainian developers emphasize “human-in-the-loop” architecture, particularly when it comes to lethal targeting. AI supports decision-making, but humans retain control—a principle rooted in both military pragmatism and ethical caution. However, battlefield pressures are pushing the boundaries of this control model, raising new regulatory and moral challenges.
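In software terms, a human-in-the-loop architecture means the system may propose an engagement but cannot execute one without an explicit, recorded human decision. The sketch below outlines such a gate; the class and field names are illustrative and are not taken from any Ukrainian system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class EngagementProposal:
    target_id: str
    classification: str   # what the model believes the target is
    confidence: float

@dataclass
class EngagementDecision:
    proposal: EngagementProposal
    approved: bool
    operator_id: str
    timestamp: str

def request_human_approval(proposal: EngagementProposal, operator_id: str) -> EngagementDecision:
    """The AI may only propose; a named human operator must approve or reject,
    and the decision is recorded for accountability."""
    print(f"[PROPOSAL] {proposal.classification} "
          f"(target {proposal.target_id}, confidence {proposal.confidence:.2f})")
    answer = input("Approve engagement? [y/N] ").strip().lower()
    return EngagementDecision(
        proposal=proposal,
        approved=(answer == "y"),
        operator_id=operator_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

if __name__ == "__main__":
    decision = request_human_approval(
        EngagementProposal("T-042", "armored_vehicle", 0.81), operator_id="op-7"
    )
    print("ENGAGE" if decision.approved else "HOLD", "--", decision.timestamp)
```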

These systems reflect Ukraine’s broader doctrine of AI-enabled asymmetric warfare—leveraging speed, automation, and precision to offset Russia’s advantage in manpower and conventional equipment.

Meanwhile, AI is supporting digital forensics and war crimes documentation, enabling investigators to authenticate images, analyze patterns of attacks, and track suspected perpetrators using facial recognition and pattern-matching algorithms.

However, these advances also raise critical ethical and legal questions. Concerns have emerged around the erosion of human oversight, transparency in targeting decisions, and the protection of personal and battlefield data. As AI systems become more autonomous, Ukraine and its partners must navigate the delicate balance between operational effectiveness and compliance with international humanitarian law (IHL).

Brave1: Ukraine’s Defense-Tech Innovation Ecosystem

To streamline and scale its defense innovation pipeline, Ukraine launched Brave1 in April 2023 — a state-backed coordination platform for dual-use and military technology development. Spearheaded by the Ministry of Digital Transformation and the Ministry of Defense, and supported by the National Security and Defense Council, Brave1 has become a central hub for Ukraine’s wartime tech acceleration.

Brave1 supports the full innovation lifecycle: offering grants, testing facilities, legal guidance, and matchmaking between startups, researchers, and military end users. The platform prioritizes scalable, interoperable tools, including AI-driven surveillance systems, cyber defense technologies, and semi-autonomous drones designed for contested environments.

As of early 2025, Brave1 has evaluated over 500 proposals, approved funding for more than 70 projects, and facilitated cross-sector collaborations with international partners, including Estonia’s Defense Innovation Unit and NATO’s DIANA initiative. Several Brave1-backed systems have already been deployed to frontline units, while others are being prepared for export to allied nations facing similar hybrid threats.

Uniquely, Brave1 embeds legal and ethical oversight into its innovation pipeline. Each project is assessed not only for technical viability but also for compliance with Ukrainian law, IHL, and NATO-compatible standards. Legal experts, ethicists, and cybersecurity advisors are part of the evaluation process — an effort to ensure that speed does not come at the cost of accountability.

Beyond wartime needs, Brave1 is also positioning Ukraine as a post-war exporter of battle-tested defense technology, particularly in AI-powered surveillance, robotics, and cyber resilience. With strong government backing, Ukraine is building a defense-tech ecosystem that could outlast the war itself.

Autonomous Warfare and the Challenge of International Law

As AI technologies increasingly shape the nature of armed conflict, international legal frameworks are under growing pressure. Ukraine, like many other democracies exploring battlefield autonomy, faces a difficult task: deploying innovative systems while ensuring compliance with legal obligations that were not designed with artificial intelligence in mind.

International Humanitarian Law (IHL), including the Geneva Conventions and their Additional Protocols, remains the cornerstone of lawful conduct in armed conflict. These instruments enshrine the essential principles of distinction, proportionality, precaution, and accountability, which must guide all military operations. Yet they were drafted in an era long before autonomous weapons, predictive targeting algorithms, or real-time surveillance powered by machine learning. While the principles still apply, their practical implementation becomes far more complex when decision-making is partially or fully supported by AI systems.

Among the key legal challenges is the question of how to apply IHL to autonomous targeting. Can an algorithm reliably distinguish between combatants and civilians in dynamic, often ambiguous environments? Can it calculate proportionality in real time, under pressure, and with the contextual awareness expected of a trained human commander?

These questions are made more urgent by the known limitations of AI systems themselves. Many are trained on historical or incomplete data, often lacking transparency or explainability. They can reflect hidden biases, misidentify targets, and produce false positives, especially in environments with limited or noisy data. When used in high-risk contexts like warfare, these technical flaws can lead to operational errors with humanitarian consequences.

The issue of accountability compounds these risks. Traditional IHL assumes a clear human chain of command. But autonomous systems blur that chain—raising unresolved questions about liability: who is responsible when an AI-guided decision causes unintended harm? The commander? The developer? The state?

Further complicating the legal landscape is the dual-use nature of many AI tools. Technologies like facial recognition, satellite surveillance, and language models may be designed for civilian use, but can be readily adapted to military applications. Regulating these tools without hindering beneficial innovation remains a central policy dilemma.

Lastly, cybersecurity vulnerabilities, such as adversarial manipulation, spoofing, and data poisoning, can degrade AI performance, causing systems to act unpredictably and potentially violate legal norms.

International discussions, including those under the UN Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE LAWS), have helped surface these issues, but consensus on binding norms remains elusive. There is growing recognition that existing legal frameworks, while essential, require interpretation, supplementation, or reform to meet the realities of modern warfare.

For Ukraine, this legal and ethical challenge is not theoretical. It is playing out in real time. But with emerging oversight tools like the Council of Europe’s HUDERIA, and a demonstrated commitment to aligning innovation with international norms, Ukraine has the opportunity to help shape the global legal architecture for AI in armed conflict.

Regulating AI Under Fire: Ukraine’s National AI Governance Strategy

Amid the pressures of active warfare, Ukraine has taken a remarkably forward-looking approach to artificial intelligence governance. In October 2023, the Ministry of Digital Transformation unveiled its AI Regulation Roadmap—a strategic framework developed in collaboration with national institutions, civil society organizations, and international partners. This was accompanied by a White Paper on Artificial Intelligence in Ukraine, which elaborates on policy goals, outlines key risks, and provides a vision for how Ukraine aims to align its AI ecosystem with European and global standards.

Both documents are rooted in the values of democratic accountability, transparency, and human rights. They emphasize inclusive regulation, legal harmonization, and the need to foster innovation without compromising ethical standards.

The Roadmap and White Paper draw heavily from global norms, modeling their structure and intent on the EU AI Act, the OECD AI Principles, and the UNESCO Recommendation on the Ethics of AI. While these documents primarily address civilian and commercial uses of AI, their publication during wartime sends a clear signal: Ukraine is committed to building a rights-respecting digital future, even amid a war for national survival.

Although the roadmap does not explicitly regulate military use of AI, its guiding principles, such as human-centric design, risk-based oversight, and transparency, reflect a national ethos of responsible innovation. In parallel, Ukraine has aligned itself with international defense norms, including the 2023 Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy, emphasizing human control and ethical safeguards in battlefield AI systems.

Ukraine’s Use of the HUDERIA Framework

Complementing its national AI governance efforts, Ukraine has begun integrating the HUDERIA methodology—a Human Rights, Democracy, and Rule of Law Impact Assessment of AI Systems, adopted by the Council of Europe. As the first government embroiled in large-scale war to pilot this framework, Ukraine’s application of HUDERIA sets a vital precedent. At the heart of the framework is a commitment to ensuring that AI systems are not only technically functional but also legally sound and ethically defensible. 

The HUDERIA framework begins with Context-Based Risk Analysis (COBRA)—a process that assesses how an AI system interacts with its operational environment and societal structures. Rather than applying generic rules, COBRA ensures that regulatory responses are tailored to the specific use case, accounting for local legal systems, cultural contexts, and power dynamics.

In parallel, HUDERIA calls for inclusive stakeholder engagement throughout the assessment process. Legal experts, technologists, civil society groups, and impacted communities are invited to provide input, thereby improving transparency and grounding regulatory decisions in democratic principles.

Where risks are identified, the framework mandates targeted mitigation planning, including measures such as algorithmic audits, public disclosures, or technical adjustments. Crucially, HUDERIA is iterative by design, not a one-time exercise but a living process that evolves alongside the system it monitors. This is particularly important in contexts where the use of AI changes rapidly or unpredictably, such as in emergency response, national security, or public communication infrastructure.
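As an illustration only, the iterative logic of such an assessment can be expressed as a simple loop: record context findings, fold in stakeholder input, attach mitigations to identified risks, and re-assess whenever the system or its environment changes. The sketch below is a hedged approximation of that cycle, not an implementation of the Council of Europe’s methodology.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    severity: str                 # "low" | "medium" | "high" (illustrative scale)
    mitigations: list = field(default_factory=list)

@dataclass
class Assessment:
    system_name: str
    context_notes: list = field(default_factory=list)     # COBRA-style context findings
    stakeholder_input: list = field(default_factory=list)
    risks: list = field(default_factory=list)
    iteration: int = 0

    def reassess(self, new_context=(), new_input=(), new_risks=()):
        """Each iteration folds in new context, stakeholder input, and newly
        identified risks: the assessment is a living record, not a one-off report."""
        self.context_notes.extend(new_context)
        self.stakeholder_input.extend(new_input)
        self.risks.extend(new_risks)
        self.iteration += 1

    def open_high_risks(self):
        return [r for r in self.risks if r.severity == "high" and not r.mitigations]

if __name__ == "__main__":
    a = Assessment("hypothetical surveillance tool")
    a.reassess(
        new_context=["deployed in populated areas"],
        new_input=["civil society: demand retention limits"],
        new_risks=[Risk("misidentification of civilians", "high")],
    )
    print(f"iteration {a.iteration}, unmitigated high risks: {len(a.open_high_risks())}")
```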

While HUDERIA is not explicitly designed for military applications, its structure is flexible enough to inform governance in adjacent areas. However, as AI tools in defense contexts grow more sophisticated and autonomous, there is a pressing need for dedicated military testing environments that are legally bounded, ethically supervised, and technically transparent. Unlike traditional battle labs, these environments should embed human rights safeguards into the development cycle, allow for third-party evaluation, and simulate real-world scenarios to identify unintended consequences before deployment. The existence of such spaces is essential not only for national security, but also for long-term democratic resilience.

Risks and Unintended Consequences: Where AI Could Undermine Ukraine’s Own Values

Ukraine’s deployment of AI in defense is often celebrated as a model of innovation under existential threat. Yet even the most principled application of AI in warfare carries inherent technical, legal, and ethical risks that could undermine the very democratic and human rights values Ukraine seeks to defend.

One of the most pressing dangers is the potential for civilian harm caused by data bias or misclassification within AI systems. Even high-performing object recognition and facial recognition algorithms are susceptible to false positives, particularly in high-stress, data-poor, or adversarial environments. 

Research from institutions such as RAND and SIPRI has consistently highlighted the risks of misidentification in autonomous targeting systems. Even when trained on combat-specific datasets, AI systems often fail in simulations under degraded or manipulated data conditions. A 2024 SIPRI background paper, for instance, details how bias and adversarial manipulation can lead to false positives in dynamic battlefield environments—an especially dangerous vulnerability in warzones like Ukraine. 

Similarly, a RAND study on the Department of Defense’s AI data strategy stresses that battlefield AI often suffers from poor data labeling, insufficient representation of edge cases, and fragmented data ecosystems. These shortcomings can lead to significant errors in object recognition, target classification, and contextual understanding, especially in complex, high-pressure environments where human oversight is limited. 
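One common technical safeguard against these failure modes is to let a classifier abstain and refer the case to a human when model confidence or input quality drops below a threshold, rather than force a decision. The sketch below is a minimal, hypothetical illustration of that abstention logic; the labels, thresholds, and quality estimate are assumptions for the example.

```python
def classify_with_abstention(scores, input_quality, min_confidence=0.85, min_quality=0.6):
    """Return a label only when both model confidence and input quality are high;
    otherwise abstain so the case is referred to a human reviewer.
    `scores` maps candidate labels to model confidence; `input_quality` is a 0-1
    estimate of sensor/data degradation (both hypothetical inputs)."""
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < min_confidence or input_quality < min_quality:
        return "ABSTAIN_REFER_TO_HUMAN"
    return label

# A jammed or noisy feed lowers input quality, so even a confident-looking score abstains.
print(classify_with_abstention({"vehicle": 0.91, "civilian_car": 0.07}, input_quality=0.4))
print(classify_with_abstention({"vehicle": 0.93, "civilian_car": 0.05}, input_quality=0.9))
```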

There is also a profound psychological toll on civilians living under persistent AI-powered surveillance. Systems relying on facial recognition, gait analysis, and predictive behavioral algorithms, while developed for military or national security purposes, are increasingly crossing into civilian life. This blurring of lines between battlefield and society raises urgent concerns about dual-use technologies: tools originally designed for defense or intelligence that are later adopted for peacetime law enforcement, public administration, or commercial surveillance.

Human Rights Watch has warned that the normalization of AI surveillance, even during wartime, can erode public trust, stifle civic expression, and create long-term trauma, especially when oversight mechanisms are poorly defined or lack independent scrutiny. These risks are no longer abstract in Ukraine. 

Recently, the Ministry of Internal Affairs has proposed a draft law that would establish a nationwide video monitoring system, citing public safety as its goal. The system would rely on biometric data, including facial recognition, to identify individuals in real time and store personal information for up to 15 years under ministerial control. Cameras would continuously record in public spaces, schools, businesses, and healthcare facilities, making AI-driven surveillance an embedded feature of daily life, even outside active conflict zones.

Such proposals reflect a growing global trend. As Shoshana Zuboff writes in The Age of Surveillance Capitalism, “The foundational questions are about knowledge, authority, and power: Who knows? Who decides? Who decides who decides?” When surveillance tools extend beyond their original purpose, they not only collect data but also reshape social behavior, redefine power dynamics, and recondition civic space.

Even with the adoption of frameworks like HUDERIA, AI deployment without full transparency may lead to an erosion of public and international trust. If civilians or external observers perceive AI decisions, such as automated strike targeting or border surveillance, as opaque or unchallengeable, democratic legitimacy may be strained. Without transparent accountability mechanisms, Ukraine risks reinforcing the perception that AI-based decisions operate in a ‘black box’, potentially alienating both domestic constituencies and global partners.
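In practice, transparency mechanisms of this kind often come down to an auditable record of what a system recommended, on what inputs, and who acted on it. The snippet below sketches a tamper-evident audit trail in which each entry hashes the previous one; the field names and system identifiers are illustrative assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_record(log, system, inputs_summary, recommendation, human_action):
    """Append a tamper-evident record: each entry hashes the previous one,
    so later alteration of the trail is detectable by re-walking the chain."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "inputs_summary": inputs_summary,
        "recommendation": recommendation,
        "human_action": human_action,
        "prev_hash": log[-1]["hash"] if log else None,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

if __name__ == "__main__":
    trail = []
    append_audit_record(trail, "hypothetical-targeting-aid",
                        "3 fused detections, sector K", "recommend strike", "operator declined")
    append_audit_record(trail, "hypothetical-targeting-aid",
                        "2 fused detections, sector K", "recommend observe", "operator agreed")
    print(json.dumps(trail, indent=2))
```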

Finally, there is a long-term risk of post-war misuse. Once hostilities subside, the institutional memory and infrastructure built for defense may be redirected toward domestic surveillance or political control, especially if regulatory guardrails are weakened in the name of national security. Surveillance tools introduced for battlefield intelligence could, without strict legal constraints, be repurposed to monitor political dissent, journalists, or minority communities. Numerous global examples, from Ethiopia to Myanmar, illustrate how defense AI systems, once unshackled from wartime justifications, can become instruments of authoritarian governance.

This consideration also extends to Ukraine’s ambitions as a post-war exporter of defense technologies. While platforms like Brave1 have embedded ethical principles into their innovation cycles, technologies such as autonomous drones, facial recognition software, or cybersecurity tools may eventually be adopted by international partners operating under different legal frameworks or political conditions. 

To safeguard Ukraine’s strong human rights reputation, it will be important to accompany tech exports with clear end-use monitoring, licensing controls, and ethical guidelines. Doing so would help ensure that Ukraine’s cutting-edge innovations continue to serve democratic and humanitarian aims, not only at home, but globally. With the right foresight, Ukraine’s emerging defense-tech leadership can set a valuable international precedent for responsible AI exports.

At the same time, strengthening the role of civil society watchdogs, both during and after the war, will be essential to ensure transparency, prevent mission creep, and maintain public trust in how these technologies are governed.

Lessons from Ukraine: Regulating AI Without Surrendering the Future

Ukraine’s war has become an unplanned but urgent testbed for AI in modern warfare—revealing both the strategic value of emerging technologies and the legal, ethical, and humanitarian challenges they create. Remarkably, even under existential threat, Ukraine has shown that technological innovation can be governed by democratic values and rule-of-law principles.

Its experience offers critical lessons for the global community. First, legal and ethical oversight must be embedded early, not retrofitted after deployment. Ukraine’s use of pre-deployment risk assessments, including through frameworks like HUDERIA, demonstrates the value of front-loading accountability into high-risk innovation cycles. Civilian oversight also remains essential, particularly in times of war. Ukraine’s coordination between government, civil society, and international partners has ensured transparency and public trust, even amid crisis.

Importantly, national legislation should align with international norms, as seen in Ukraine’s deliberate harmonization with the EU AI Act, OECD principles, and Council of Europe standards. These steps reflect a foundational belief: that human rights and democratic safeguards must not be sidelined, even under fire.

At the same time, banning military AI outright is neither realistic nor strategic. Russia and other authoritarian actors are unlikely to pause or restrict development. If democratic nations do not engage, test, and regulate these technologies, they risk falling behind and surrendering the ethical ground to those who will not play by the same rules. The challenge is not whether to develop AI for defense, but how to do so responsibly—striking the right balance between national security and human rights.
