Gaza: Israel’s AI Human Laboratory

Israel’s use of artificial intelligence has wreaked horrific destruction on Gaza; this technology will likely be sold across the globe in the near future

For years, Israel has worked to establish itself as a leader in developing AI-powered weapons and surveillance systems. The use of these tools raises ethical, legal, and humanitarian concerns because of the risks they pose to civilians. As Human Rights Watch has argued, Israel’s use of AI in the war in Gaza in particular risks violating international humanitarian law by targeting civilians rather than military objectives.

The Israel Defense Forces’ (IDF) Target Administration Division, established in 2019 by Lt. Gen. Aviv Kochavi, is responsible for developing Israel’s AI Decision Support System (DSS). Kochavi, who directed military intelligence during Israel’s 2014 war in Gaza, has since sought to accelerate the generation of targets, and he has said that integrating AI allows the IDF to identify as many targets in a month as it previously did in a year: “While the military had under 300 targets in Lebanon in 2006, the number has increased to thousands.”

A New Type of War

The current war is not the first time Israel has deployed AI in Gaza; Israel called its 2021 war in Gaza the “first AI war”. Since then, Israel has promoted itself as a leader in developing battlefield-tested AI weapons and tools. A week before October 7, 2023, for instance, Israel brought the chair of NATO’s military committee to the Gaza border to showcase its automated border system and market its capabilities to European allies.

Since October 7, the use and testing of new AI systems have escalated, as revealed in an investigation by +972 Magazine. Official IDF figures show that in the first 35 days of the war the military attacked 15,000 targets, a significantly higher number than in previous operations, even those that had relied on AI assistance. In an interview with The Jerusalem Post, a colonel who heads the IDF “target bank”, a list of potential Hamas operatives and key infrastructure, said that “the AI targeting capabilities had for the first time helped the IDF cross the point where they can assemble new targets even faster than the rate of attacks.”

Additionally, according to an investigation by The New York Times, after October 7, Israel “severely undermined its system of safeguards to make it easier to strike Gaza, and used flawed methods to find targets and assess the risk to civilians”. It is no surprise that, according to a recent Airwars report, “By almost every metric, the harm to civilians from the first month of the Israeli campaign in Gaza is incomparable with any 21st-century air campaign.” 

According to the report, at least 5,139 civilians, including 1,900 children, were killed in Gaza in October 2023, the highest civilian death toll recorded in a single month since Airwars began tracking casualties in 2014. The majority of deaths occurred in residential buildings, where families were often killed together, at an average of 15 family members per incident.

The +972 Magazine investigation revealed numerous IDF programs that use this AI technology. One is Lavender, which employs machine learning to assign residents of Gaza a numerical score indicating the suspected likelihood that they belong to an armed group. Early estimates indicated that in the opening weeks of the current war, Lavender marked some 37,000 Palestinians, along with their homes, as potential targets because of their assumed connection to Hamas.

Lavender draws on surveillance data to assess how likely an individual is to be affiliated with a militant group. The criteria for identifying someone as a likely Hamas operative are worryingly broad: being a young male, living in particular areas of Gaza, or exhibiting certain communication behaviors is treated as enough to justify arrest and lethal targeting.

An unnamed Israeli intelligence officer told +972 Magazine: “There were times when a Hamas operative was defined more broadly, and then the machine started bringing us all kinds of civil defense personnel, police officers, on whom it would be a shame to waste bombs. They help the Hamas government, but they don’t endanger soldiers.”  

Though critical of the AI’s target selection criteria, the statement is framed in terms of resource efficiency rather than the ethical obligation to protect civilians and non-combatants, reducing the taking of human life to a matter of cost-effectiveness. It also highlights the risk of civilian harm posed by imprecise or overly broad targeting systems.

The Lavender program’s tendency to identify “civil defense personnel” as targets shows the danger of widening AI selection criteria: it leads to preventable civilian casualties and to violations of the principle of distinction, a cornerstone of international law requiring clear differentiation between combatants and civilians.

The +972 Magazine investigation also revealed the many admitted mistakes and biases of these AI systems: Israeli officers interviewed for the report indicated that Lavender makes “mistakes” in roughly ten percent of cases. Another AI tool Israel uses, “Where is Daddy”, relies on mobile phone location tracking to flag individuals identified as military targets when they arrive at a particular location, typically their homes.

Israel claims that every target is approved by a human; according to +972 Magazine sources, however, human approval served “only as a ‘rubber stamp’ for the machine’s decisions”, with officers personally devoting only about “20 seconds” to each target before authorizing a bombing, just to make sure the Lavender-marked target was male. In this sense, these programs have made “human decision-making too robotic, essentially transforming human operators themselves into ‘killer robots’”.

The operator’s assessment of a target can be compromised by bias, “especially when the system’s output confirms or matches the human user’s existing beliefs, perceptions, or stances”. Such confirmation bias can lead intelligence officers to quickly approve AI-generated target recommendations that align with their preconceptions, even when those recommendations rest on flawed information or overly broad criteria. This tendency intensifies in high-pressure situations, where rapid decisions based on AI output can result in significant civilian harm.

Additionally, when rules of engagement allow high thresholds of civilian deaths, these technologies become tools that facilitate mass casualties rather than mitigate them. According to a report from The Guardian, for instance, “dumb bombs” (bombs without a guidance system) were used to strike individuals viewed as lower-ranking members of Hamas, destroying entire residences and killing everyone inside.

We also know from The New York Times investigation that at the beginning of the war Israel raised the threshold of acceptable civilian casualties per strike to 20 and allowed strikes that could “harm more than 100 civilians…on a case-by-case basis”: if the target was considered a high-ranking Hamas leader, the permissible number of civilian casualties could exceed 100.

Eventually, Israel removed any restrictions on the daily number of Palestinian civilians killed in airstrikes. Therefore, the use of AI-enhanced intelligence does not inherently lead to more precise or ethical warfare; instead, it can facilitate mass casualties if the decision-making framework allows for high civilian death thresholds. 

In addition to the surveillance-driven targeting run through Lavender, the Israeli military has been identifying members of Hamas with a separate AI tool: a facial recognition program originally meant to identify Israeli hostages. This technology is managed by Israel’s military intelligence, including Unit 8200, with support from Corsight, a private Israeli company, and Google Photos.

As The New York Times reported, “At times, the technology wrongly flagged civilians as wanted Hamas militants.” One of the most prominent examples of wrongful identification is that of the Palestinian poet Mosab Abu Toha, who was detained and interrogated at an Israeli military checkpoint after the program mistakenly identified him as affiliated with Hamas.

U.S. Tech Companies Enable AI Warfare

The technology Israel is using in Gaza relies on tools provided by private companies, including some based in the United States, to handle the data. For instance, Google and Amazon signed a $1.2 billion contract with the Israeli government in 2021, known as Project Nimbus. Project Nimbus helps Israel “store, process, and analyze data, including facial recognition, emotion recognition, biometrics and demographic information”. The project alarmed some Google and Amazon workers, who launched the campaign “No Tech for Apartheid.” Despite the campaign, Google and Amazon continue to work with the Israeli government and military.

More recently, a +972 Magazine investigation found that the Israeli army’s Center of Computing and Information Systems unit is using cloud storage and artificial intelligence services provided by civilian tech giants in its operations in the Gaza Strip. The shift began after the army’s internal cloud servers crashed, overloaded by the influx of new users during the ground invasion of Gaza in late October 2023. The army describes its internal cloud as a “weapons platform” with applications for marking targets, live Unmanned Aerial Vehicle (UAV) footage, and command and control systems.

The U.S.-based company Palantir, founded in 2003, also collaborates with various governmental, law enforcement, and military organizations, including those in Israel. Palantir’s AI programs rely on data about Palestinians drawn from intelligence reports; according to documents released by the NSA whistleblower Edward Snowden, one source of such data was the U.S. National Security Agency. Other Silicon Valley companies are involved as well, including Shield AI, which provides Israel with self-piloting drones for “close-quarters indoor combat”, and Skydio, which supplies Israel with “short-range reconnaissance drones” able to navigate “obstacles autonomously and produce 3D scans of complex structures like buildings”.

The integration of U.S. private tech companies into Israel’s military operations raises profound ethical concerns that extend well beyond the context of Gaza. The partnerships between Israel and corporations such as Google, Amazon, and Palantir reflect a deep entanglement between commercial profit motives and state violence, where military AI tools are developed not just for battlefield advantage but also for commercial scalability and international export.

Developed for Gaza, Sold Abroad

These AI-powered systems—trained, tested, and refined during the war on Gaza—are not developed in a vacuum. Gaza has functioned as a live laboratory for these technologies, allowing Israel and its corporate partners to demonstrate the effectiveness and ‘efficiency’ of AI-enhanced warfare in real time. As with previous military technologies such as drones, systems developed and tested in Gaza are often marketed globally as ‘battle-tested’ solutions, fueling a profitable security industry that benefits from war and repression. Indeed, Israel is already one of the world’s largest arms exporters relative to its size, and its AI technologies are likely to become core components of its growing defense export portfolio.

As these AI systems mature and demonstrate their ‘effectiveness’ in high-casualty environments, there is an increasing risk that they will be sold to regimes with long histories of human rights abuses. Governments seeking to consolidate power, suppress dissent, or control marginalized populations will find in these AI technologies an attractive toolset. Surveillance platforms like facial recognition software and automated target selection systems, especially when paired with biometric databases or predictive policing algorithms, can become instruments of mass control and political persecution.

For instance, authoritarian governments could purchase and deploy variants of Lavender or facial recognition systems similar to those used in Gaza, repurposed to monitor and neutralize political opposition, ethnic minorities, or protest movements. Such systems, powered by partnerships with U.S. firms or trained on data from U.S.-linked cloud infrastructure, would be challenging to regulate once exported. Without enforceable international regulations, tech companies face few legal or financial consequences for supplying repressive regimes with tools of digital authoritarianism.

Furthermore, the revolving door between Silicon Valley, the Pentagon, and foreign militaries such as the Israel Defense Forces facilitates the rapid international spread of these technologies. With the proliferation of AI-enabled surveillance and targeting tools, the distinction between ‘defense technology’ and tools of domestic repression becomes increasingly blurred. 

As Matt Mahmoudi of Amnesty International warns, the opacity of these partnerships means that “U.S. technology companies contracting with Israeli defense authorities have had little insight or control over how their products are used by the Israeli government”—a dynamic that is likely to be replicated in other jurisdictions where authoritarianism is on the rise.

In this context, the Gaza war may represent not just a humanitarian catastrophe but also a pivotal moment in the globalization of AI-enabled warfare. If unregulated, the collaboration between private tech firms and military powers risks accelerating the spread of surveillance, repression, and high-casualty targeting strategies around the globe, placing civilians in authoritarian regimes—and even democratic ones—at unprecedented risk.
