The ceasefire put in place by the Trump Administration to end Israel’s war in Gaza faces numerous challenges. Since the ceasefire went into effect, Israeli forces bombed Gaza on October 29, killing 104 people, after claiming an Israeli soldier had been killed in Rafah. The usual volley of blame flew between Israel and Hamas, with the former accusing the militant group of breaching the ceasefire and the latter denying any involvement. Israel has also condemned Hamas for failing to return the bodies of all the deceased hostages; Hamas says it lacks the heavy machinery needed to recover the corpses from under mountains of rubble. In the meantime, clashes between Hamas and other armed Palestinian groups and families have left at least 27 dead since the ceasefire took hold.
Despite these setbacks, both Israel and Hamas have publicly reiterated their commitment to the ceasefire and the agreement remains in place. Israel has allowed humanitarian aid into the Strip, the living hostages were returned, and Israeli troops have partially withdrawn from sections of the Strip.
One of the most pressing challenges, assuming the ceasefire holds, is Gaza’s need for new leadership to guide the Strip through the difficult task of rebuilding a territory reduced to rubble. Omar Shaban, founder and director of the Gaza-based PalThink for Strategic Studies, remains optimistic about the future despite these obstacles.
The ceasefire has faced many challenges, but it’s still holding. What is keeping the agreement from collapsing?
The long-awaited ceasefire is holding largely due to international support. Without the commitment of the international community and the pressure of the U.S. administration, this ceasefire wouldn’t have lasted more than a few days.
There has been a shift of power among the stakeholders, particularly regarding Qatar, Turkey, and Egypt. Qatar and Turkey have been hosting Hamas’ financial and political representatives, and Hamas listens to them more than the people in Gaza, who have been pleading for a ceasefire for months. Hamas only came to the table for this most recent ceasefire once Qatar and Turkey told them to.
But Qatar and Turkey were seen as too close to Hamas, and Israel bombed Doha. In the end, they could not broker the ceasefire themselves. When the U.S. pushed for a ceasefire, it turned to Egypt instead.
Once Qatar and Turkey realized that the U.S.-brokered ceasefire would land in Egypt, they chose to go to Cairo to be involved in the decision-making process. The Egyptians want to harvest this success; they see it as an opportunity to shift the political capital of the Middle East away from the Gulf and toward Cairo.
What about the agreement itself? Was it well-made, or is it flimsy?
It’s not very good, but it’s realistic. I say it’s not very good because it would have been better if Hamas had negotiated the end of the war during the ceasefire we had from January to March, when it held more living hostages and a better bargaining position. Instead, Hamas ignored the voices of the Palestinians begging for the end of the war and prolonged the conflict, leading to another 20,000 deaths between April and October, and the continued Israeli destruction of the Strip.
Any ceasefire which follows two years of war is expected to have challenges. For example, we have already seen problems with the Rafah crossing being closed and aid being blocked by Israel. But in those cases, the United States intervened and Israeli Prime Minister Benjamin Netanyahu obeyed and opened the crossing for aid to enter.
There are still problems, though. The Rafah crossing is allowing aid in, but not allowing people out who desperately need medical treatment. Additionally, there have been problems finding the remaining bodies of the deceased hostages, likely because many are trapped under the 60 million tons of rubble covering the Strip. But I don’t believe these issues are enough to make either side abandon the ceasefire.
What do the Palestinians in Gaza think about the arrangement?
A primary concern for the Palestinians in Gaza is how much of Hamas will remain in the coming months. The ceasefire agreement stipulated that Hamas would disarm and allow a new security force to take over the Strip. However, this change will not be immediate.
I think President Trump agreed to allow Hamas to remain in the Strip to ‘clean the house’, so to speak. I believe there will be a two-month bridge period where Hamas will remain in power to disarm those Gazan families still holding weapons, subdue other armed groups in the Strip, and prepare for its own disarmament.
So, Hamas will be completely gone in the next few months?
Well, keep in mind that Hamas is more than just a resistance group; it is also a government that has run Gaza since 2006. Everyone employed in the government is also technically affiliated with Hamas. Beyond the al-Qassam Brigades, the military wing that fights Israel, by my estimation there are 20,000 to 25,000 people employed by Hamas in health, education, and infrastructure. These people are very important to include in the post-war plans because they know how to run the city.
In my opinion, no Hamas members will be present in any future government or security force, but these public servants who were employed by Hamas must be involved in rebuilding Gaza. Also, including these members will help them accept the new order after the war. Many of these members are already known to the Israeli government and will be screened for approval by Israel for the new administration.
Okay, so there will be some returning figures from Hamas’ government who know how to run a city. But who will manage the security?
I am confident Hamas will not be a military presence once it hands over the Strip in the next two months, but Gaza still needs a security force to maintain order. Hamas previously employed a police force of nearly 40,000 officers, by my estimate, who were not involved in armed resistance. How many remain alive after two years of war remains to be seen, and the returning officers will need new training on how to police the Strip in the absence of Hamas.
Who will train them?
I think there will likely be an agreement with neighboring Arab states, such as Egypt, Morocco, and Jordan, to send a few hundred professional police officers to Gaza. They will help reorient the police in Gaza to see their roles as serving the Palestinian people, not Hamas. Since many of these officers joined Hamas for jobs rather than ideology, they will likely understand this new arrangement.
There are more issues beyond just the police, however. Some large Palestinian families have weapons, hold significant influence over the workings of the city, and may challenge the authority of the new police force. No Arab state would agree to send its own police officers to the Strip if there is a possibility they may be threatened by other armed groups or families in Gaza, or by the Israelis.
So there are other armed groups and families beyond Hamas that could pose a problem. How will that be resolved?
In my opinion, the recent fighting between Hamas and other armed groups since the ceasefire began is Hamas’ attempt to ‘clean house’ so a new security force can take over. This is not from any official source, it’s just my reading of the situation, but I think there is some agreement wherein Hamas is forcing powerful families and groups to hand over their weapons so they can’t attack the new security force.
So in the short term, it seems the pre-existing city employees from the Hamas government will help carry out the logistics of rebuilding the city, and a new police force will maintain order. What about the long-term?
I see two phases. First, a two-year interim of technocratic leadership supported by the international board outlined in the ceasefire. Second, an elected government voted in at some point during 2027.
There are three conditions for a successful transitional government. First, an interim committee, consisting of fifteen administrators, must be entirely independent and professional with no political ties. The administrators must be screened and approved by the key stakeholders: Israel, the United States, the Palestinian Authority, and Egypt. Second, the committee must have access to financial support from stakeholders to enact the rebuilding process. Third, the community in Gaza must accept this new committee; this can be achieved by providing tangible support, like securing food supplies, providing medical care, and establishing stability.
After two years, I hope we will have an election.
Gaza hasn’t had an election since 2006. Are you worried about what that might mean for the success of the new government?
There are certainly challenges. About 68 percent of the Gazan population are under 30 years old, which means two or three generations have never cast a vote. The West Bank does not offer a good example of effective democracy, either. Palestinians see the Palestinian Authority as corrupt and ineffective, so it is not a good template for a new government.
Despite that, civil society is not absent from Gaza, and organizations like PalThink offer education on the logistics of running a government, promote non-violence, and offer practice sessions simulating parliamentary sessions. I have spent years training young people in Gaza and I am confident in their abilities.
How do you think the Palestinians in Gaza will respond to this new post-war order?
The most important thing is to give the people hope. If we can spend two years rebuilding Gaza, getting people the support they need, and giving them hope for a better future with a new election, then they will see they have more options than violence.
But this hope is not guaranteed. We need continued international support, even after Gaza fades from the headlines of global news. There are nearly forty thousand orphans in Gaza who saw their parents die before their eyes. We must give them reason to believe there is a better future waiting for them.
Much of the global discourse on the war has been about the issue of Palestinian self-determination and statehood. Do you think this ceasefire will lead to that?
I don’t think that is the question the Palestinians in Gaza are concerned with, actually. After two years of war the priority is to create a sustainable life. We want to survive and rebuild. We’re just like any other people. We want things like healthcare, infrastructure, schools, insurance.
In the past, Israel has employed a “divide and conquer” technique to keep Gaza subdued and the West Bank disempowered. Do you think Israel will really allow Gaza the chance to progress?
There are two schools within Israel that have existed since the Oslo Accords. One camp, led at the time by Shimon Peres, believed that a rich and prosperous neighboring Gaza would bring stability to Israel. This group believed that Israelis cannot live well while their neighbors live poorly. The second camp believed that Gaza must be subjugated, kept under Israeli control and restricted from progress. The second camp has been in charge for a long time.
I believe the first camp is becoming more popular. I think the Israelis are realizing there is a benefit to having a prosperous neighbor, both in terms of stability and business. With international funding, Gaza could become very attractive to Israeli businessmen. I can see Gaza becoming like Dubai, with its location as a port, beautiful coast, and access to oil. I am very optimistic about the future of Gaza right now, I think it has the potential to become very successful.
Artificial Intelligence and Intellectual Property on the Global Agenda
AI has expanded rapidly in the last decade and is now embedded in almost every area of human activity, from health care and law to writing and art. The international community has recognized a need to regulate this rapidly developing technology, but its far-reaching nature makes it difficult to assign any pre-existing organization to be its caretaker. For example, the Intergovernmental Panel on Climate Change might be able to opine on AI’s impacts on the environment, but it cannot advise UNESCO on how to use AI to protect World Heritage Sites. As a result, many of these organizations have created their own programs to address the way AI impacts their projects specifically.
One example of this can be seen in the mechanisms governing intellectual property (IP). The UN’s World Intellectual Property Organization (WIPO) aims to encourage and protect human innovation. WIPO recognized the transformative capacity of AI over a decade ago and has offered an interesting path for how to develop institutional processes to address this new technology’s development.
The Cairo Review spoke with Walid Mahmoud Abdelnasser, former director of the Division for Arab Countries at the WIPO in Geneva from 2015 to 2024, to hear his personal views and perspectives on how multilateral organizations can develop their approach to AI.
Cairo Review: When did AI really “start”?
Walid Mahmoud Abdelnasser: The essence of what we call “artificial intelligence” can be traced back to ancient philosophy, through science fiction in the early 19th century, such as Mary Shelley’s famous Frankenstein (1818), and to published scientific research starting in the 1950s, more precisely Alan Turing’s 1950 paper, Computing Machinery and Intelligence, which proposed a test of machine intelligence called the Imitation Game. This work is considered the birth of AI.
Similar to previous technological revolutions that changed life on earth, like the First Industrial Revolution in the 18th century, AI has brought with it challenges and opportunities. As in those previous cases, humanity will pass through periods of continued adjustment, working to minimize risks and maximize benefits. In the past five years alone, digital technologies in general, including AI, have grown 172 percent faster than average; AI in particular grew by the astronomical figure of 718 percent.
That’s an impressive jump. How would you say the world has responded?
The question of AI was not considered seriously as an item on the multilateral global agenda, and more specifically by international organizations with universal membership, until slightly more than a decade ago.
Since that time, almost all countries of the world—the exception being those passing through turbulent internal conflicts or wars—became deeply interested in exploring the full potential of AI, getting to know more about it, learning how to derive and maximize benefits to serve national or collective interests. Countries were equally keen to protect themselves from any potential cross-border repercussions or risks that might be inflicted upon them by adversaries, whether at the strategic, political, economic, cultural, social or scientific/technological levels, such as cyber security attacks or similar dangers.
Such interest expressed and demonstrated by an increasing number of countries of the world has led to some sort of race and/or competition among a number of international organizations and fora.
What type of competition?
They were debating which among them, if any, would have the credentials, competences, capabilities, resources, mandate, and qualifications to initiate and manage the elaboration, evolution, and development of a global multilateral regime handling AI.
What would this “multilateral regime” include?
It would need to provide potential institutional structures, agreements, rules, norms, regulations, international cooperation mechanisms, and operations for AI management. The original idea was that there could be a globally agreed-upon roadmap and agenda to proceed on this front—even if just accepting the lowest common denominators shared by all countries in the world. This would imply setting up a universal global body, or making use of an existing one, to be administered by the international community at large in a multilateral framework characterized by universality, sovereign equality, and a transparent, well informed, and democratic decision-making process.
Is such an undertaking possible?
Well, one needs to be realistic, rather than idealistic, in looking at international positions regarding such efforts to regulate AI and its future developments to ensure its positive contribution to international cooperation. The more developed countries in the area of AI initially hesitated in agreeing to a multilateral regime to supervise AI usage, so as not to be bound by any rules that might arise and hinder progress and that would be considered restrictive from their perspective.
On the other hand, countries lagging behind on AI wanted to avoid the problems that arose when the internet became widespread, as it continued for many years without any international body to regulate it. When the international community began to develop conventions on the internet, many developing countries saw this as a rather late and incomplete process. As a result, these countries want to start the international multilateral process of managing AI use as early as possible.
However, both countries and international organizations soon realized that AI, in its entirety and with all its connections and interactions with almost every area of human activity, does not fit within just one international organization’s mandate or competence; no single organization or forum has the resources and capabilities to address all dimensions of its usage. One of these dimensions is how AI relates to intellectual property.
So since AI is involved in so many fields, it’s too big for any one multilateral organization to handle all of its different aspects. Which organizations handle AI’s impact on IP?
The United Nations-affiliated, Geneva-based World Intellectual Property Organization (WIPO) is one of the international organizations that first considered the question of AI informally, more than a decade ago, under the leadership of its then Director-General Dr. Francis Gurry. After quiet but lengthy consideration within the secretariat of the organization, including the involvement of member states as well as leading AI consultants, WIPO decided to confine its handling of AI to the intellectual property (IP)-related aspects.
What is the connection between AI and IP?
While AI has been increasingly impacting the production and distribution of economic and cultural goods and services, one of the main objectives of IP policy is to stimulate innovation and creativity in economic and cultural systems. WIPO’s decision to handle only the IP-related aspects of AI was therefore both logical and natural. Notably, some 54,000 generative AI patents appeared in the 2014-2023 decade, roughly a quarter of them in 2023 alone.
The decision required two measures to be undertaken by WIPO at both the intergovernmental level and the level of the secretariat.
What was the first measure the WIPO took?
First, WIPO realized that it should not rush into establishing an institutionalized and permanent intergovernmental body to handle the relationship between AI and IP, as it was too early to embark on such a path, particularly because many aspects of that relationship remained unclear or inconclusive. Therefore, WIPO started by calling upon member states and other stakeholders to convene what came to be called the “AI Conversation”, a flexible and open-ended forum designed to build knowledge about the relationship between AI and IP.
The forum brought together worldwide experts on the subject from academia, research institutions, and the private sector, sharing and exchanging experiences, best practices, and lessons learned in this area among member states and the WIPO secretariat. In doing so, there was full awareness of the existing gaps in digital infrastructure among various countries and regions of the world.
What was the second measure WIPO took?
Second, WIPO proceeded to establish a unit within its secretariat to handle this new and important subject, added to its already busy and expanding mandate. Here again, the idea was to start small and gradually develop the unit, adding the expertise needed. However, when naming the unit, the decision was to go broader than AI, so the name “IP and Frontier Technologies” was chosen. The idea was to cover AI, but also “big data”, “the metaverse”, and other related subjects. Originally staffed by one person, the unit evolved over time until it grew into a full-fledged division whose work program was streamlined into the mainstream work program of the organization.
When did this whole process start?
The first session of this Conversation was held in September 2019, with large-scale interest and high-level participation from most member states. Due to the lockdown in Switzerland during the COVID-19 pandemic, the second session was held virtually in July 2020. The move to a virtual, and later hybrid, mode of holding meetings increased global interest in following the AI Conversation, which led WIPO to hold sessions twice a year rather than once annually; this has been the case since November 2020.
What are the goals of these meetings?
The objective of such idea exchange has been to provide the basis for a shared understanding of the main questions that need to be addressed in the context of the relationship between AI and IP. There has been, as expected, a particular focus on the main legal questions raised by AI as far as IP rights and IP policy are concerned, including questions of ethics, privacy, and standards.
The focus by WIPO on the regulatory matters related to the relationship between AI and IP has continued to be a main theme on the agenda of successive sessions of the AI Conversation. Items presented and discussed included the question of AI regulation and how it relates to contractual arrangements. In this context, WIPO’s secretariat also designed an IP Policy Toolkit particularly aimed at helping regulators and IP offices address IP and AI issues in a better structured manner.
How does WIPO handle the role of AI in business?
WIPO has continued to pay attention to the interlinkages between the AI–IP relationship on the one hand and related business activities on the other. Such focus has helped attract stakeholder attention to the impact of AI on IP. To serve these purposes, the WIPO secretariat developed a guide on generative AI for enterprises and entrepreneurs to help them evaluate and mitigate IP risks in the adoption and deployment of AI tools. Furthermore, since 2019, WIPO has held a series of sub-regional, regional, and inter-regional “IP Management Clinics”, whose programs were specifically tailored to the needs of AI-driven small and medium enterprises (SMEs). One important program in this area was based on cooperation between Japan and the Arab region, holding its first session in Tokyo in October 2019, its second session virtually in 2021, and its third session at the headquarters of the League of Arab States in Cairo in February 2023.
So AI might present a risk regarding IP that businesses need to be wary of. But is there any way that AI can help with addressing IP issues?
Yes, there has been significant attention on the rapidly growing role of AI in IP administration because AI applications have been increasingly used in the administration of applications for IP rights protection. For example, WIPO itself had developed two AI-based applications, namely WIPO Translate for automated translation and WIPO Brand Image Search for image recognition.
Several government-affiliated IP Offices around the world, whether at the national, sub-regional, or regional levels, have also developed and employed other AI-based applications. In this area, WIPO has been keen to promote exchange of relevant information among its member states as well as the possible sharing among these countries of the AI-based applications used in IP administration.
A big component of IP is patenting. How does AI interact with this aspect of IP?
Yes, patents are very important because they constitute the backbone of revenues generated by IP rights registration and protection. In this area, there has been a growing differentiation between AI as an inventor on the one hand versus AI-based and AI-assisted inventions on the other.
What do you mean by “inventor” versus “assistant”?
One issue is when the artificial intelligence itself is the inventor of the property, which raises questions and challenges about who is named the inventor in the patent. The other issue is when the inventor is using AI assistance, which raises questions about defining and determining the extent of human contribution in the invention. This leads us to questions about “joint inventors” and the “patentability” of inventions involving AI. A new creation must be inventive to be patented, so determining whether an AI-assisted project is eligible for patent is an important step. Is the AI itself the inventor, or is it just a tool that an already skilled and knowledgeable person is wielding? In this context, it is also important to discuss how to protect the IP rights of data used in the training of AI models.
These seem like really complicated questions. Is there any existing framework for these international bodies to follow when answering them?
The relevance of the AI impact on patents as far as the international agenda is concerned could be better understood if we realize that there has already been international cooperation in the area of transfer of technology through the patent system, particularly in the areas of technical assistance, human and institutional capacity building, research and development collaboration, and licensing.
Some of this cooperation takes place through WIPO and its various organs, including through Funds-in-Trust established by some member states to foster cooperation with other individual countries or groups of countries, while other forms of cooperation take place bilaterally or at the regional or sub-regional level, such as through the European Union (EU), ASEAN, and the League of Arab States (LAS).
One example of this is the impact of AI on healthcare and how it is linked to licensing rules and practices involving medical technologies. This subject comes under the mandates of three international organizations, namely WIPO, the World Health Organization (WHO), and the World Trade Organization (WTO), and constitutes an important segment of the WTO’s Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement. For almost a decade, the three organizations have successfully operated a trilateral coordination mechanism that also covers other areas falling under their respective mandates.
One of the important and positive outcomes of the previous sessions of the WIPO’s AI Conversation has been the increased realization that as some of the issues raised go beyond the mandate of WIPO, there is a need for a coordinated approach among relevant international organizations and fora in order to be better positioned to address the broad and cross-sectoral issues raised by AI.
How have these international bodies responded to this need?
To this end, and at the bilateral level with other international organizations, WIPO has been regularly cooperating with two other United Nations system-affiliated and Geneva-based international organizations, namely the International Telecommunication Union (ITU) on the “AI for Good” initiative, and the World Health Organization (WHO) on the “AI for Health” initiative.
WIPO has been equally supportive of the efforts undertaken by the Paris-based United Nations Educational, Scientific and Cultural Organization (UNESCO) to develop the first global normative instrument on the ethics of AI.
At a broader level, and from the perspective of the United Nations system at large, WIPO has been collaborating with various UN agencies to engage in broader high-level discussions on AI and the Global Digital Compact.
The Global Digital Compact is a multi-stakeholder action plan on digital cooperation, originally proposed by the current UN Secretary-General António Guterres in his “Common Agenda”, which is expected to provide an inclusive global framework to overcome digital, data and innovation divides for a sustainable digital future.
Equally important, and at the level of the WIPO Secretariat, the WIPO’s AI Conversation sessions positively contributed to empowering the relevant WIPO staff. This particularly includes those working in the IP and Frontier Technologies Division and those working more generally in the IP Infrastructure and Platforms Sector. These sessions have equally helped guide those working in the “IP for Business” Division and in the more general IP and Innovation Ecosystem Sector. Additionally, they have provided important guidance on emerging issues like “IP and the Future” in the framework of the IP and Global Challenges Sector, by enabling staff to acquire more knowledge and to upgrade their technical expertise.
What does the future hold for AI and IP?
In the process—and as far as AI is related to IP—there remains the need to set rules, norms and other regulatory measures at both the domestic and international levels to enable promoting and further expanding the innovation and creativity ecosystems through AI tools. This must be done while ensuring that all countries and regions of the world can equitably benefit from such potentials and that countries lagging behind on digital transformation are equipped, through international cooperation, with the necessary knowledge, information, and expertise to move toward closing or narrowing the digital gap.
IP emerged in the first place to encourage, promote, and enhance human innovation and creativity, but AI is changing the ways and means by which human beings undertake innovation and creativity. Therefore, there is a need to ensure that human beings remain at the core and center of the innovation and creativity ecosystems.
Governing AI Under Fire in Ukraine
In its March 2025 report, the International Committee of the Red Cross issued a stark warning: without limits, the rise of autonomous weapons risks crossing a moral and legal threshold that humanity may not be able to reverse. AI-powered drones and targeting systems, capable of selecting and engaging humans without direct human input, are no longer science fiction—they are active players on the modern battlefield. As debates rage in Geneva and Brussels over how to rein in these technologies, Ukraine is already living the future.
With AI-enhanced drones buzzing across its skies and facial recognition tools scouring battlefields, Ukraine has become the world’s first real-time laboratory for the deployment and regulation of artificial intelligence in war. But innovation alone isn’t enough. In a country defending both its sovereignty and democratic values, the use of AI is being matched by new legal safeguards, human rights protections, and a bold attempt to build policy frameworks under fire.
One of the most prominent use cases is AI-enabled drone warfare. Ukrainian engineers, supported by global tech partnerships, have trained AI systems on a vast repository of more than 2 million hours of drone footage. These systems enable drones to autonomously navigate contested airspace, evade electronic warfare interference, and conduct high-precision targeting with minimal human input. A 2024 report by Breaking Defense revealed that AI-enhanced drones have achieved a 3-4x higher target engagement rate than traditional manually operated platforms.
AI is also integral to situational awareness and command systems. Platforms such as Delta, developed through military and civilian collaboration, leverage AI to fuse data from drones, satellite imagery, open-source intelligence (OSINT), and geospatial inputs. This creates a unified, real-time operational map that drastically reduces decision-making time. According to defense analysts and reporting from Army Technology and CSIS, Ukraine’s AI-supported situational awareness platforms like Delta have compressed battlefield decision-making cycles from hours to under 30 minutes at the operational level, and to as little as 5 minutes tactically. Delta is currently used across multiple brigades and integrated into joint operations with NATO-aligned partners for interoperability.
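The underlying fusion step can be pictured with a deliberately simplified sketch: reports from different sources are grouped when they describe the same kind of object at roughly the same place and time, producing one consolidated entry instead of several duplicates. The sources, fields, and merge rule below are invented for illustration and do not describe Delta’s actual data model or architecture.

```python
# Hypothetical sketch of multi-source fusion: group sightings that describe the same
# kind of object close together in space and time. Invented fields; not Delta's real design.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Sighting:
    source: str        # e.g. "drone", "satellite", "osint"
    lat: float
    lon: float
    label: str         # e.g. "armored vehicle"
    seen_at: datetime

def fuse(sightings, radius_deg=0.005, window=timedelta(minutes=10)):
    """Group reports of the same kind of object seen close together in space and time."""
    groups = []
    for s in sorted(sightings, key=lambda s: s.seen_at):
        for group in groups:
            ref = group[-1]
            close = abs(ref.lat - s.lat) < radius_deg and abs(ref.lon - s.lon) < radius_deg
            recent = s.seen_at - ref.seen_at <= window
            if close and recent and ref.label == s.label:
                group.append(s)
                break
        else:
            groups.append([s])  # no existing group matched; start a new map entry
    return groups

reports = [
    Sighting("drone", 48.512, 37.971, "armored vehicle", datetime(2024, 5, 1, 9, 0)),
    Sighting("satellite", 48.513, 37.972, "armored vehicle", datetime(2024, 5, 1, 9, 4)),
    Sighting("osint", 48.742, 37.590, "artillery", datetime(2024, 5, 1, 9, 6)),
]
for group in fuse(reports):
    print(group[0].label, "confirmed by", sorted({s.source for s in group}))
```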
Beyond aerial systems, Ukraine has also begun deploying AI-powered unmanned ground vehicles (UGVs) such as Lyut 2.0. These battlefield robots, recently profiled by The Telegraph, are used for reconnaissance, targeting support, and fire assistance. They combine autonomy with remote control, using AI to navigate terrain, identify threats, and relay intelligence in real-time, even under degraded signal conditions caused by Russian electronic warfare.
One of the key innovations is the AI-supported control interface: a single operator can manage multiple robotic units simultaneously, thanks to reduced cognitive load and semi-autonomous navigation features. AI is also used in pattern recognition, enabling drones and UGVs to detect camouflaged enemy positions or anticipate enemy movements based on historical data.
Importantly, Ukrainian developers emphasize “human-in-the-loop” architecture, particularly when it comes to lethal targeting. AI supports decision-making, but humans retain control—a principle rooted in both military pragmatism and ethical caution. However, battlefield pressures are pushing the boundaries of this control model, raising new regulatory and moral challenges.
These systems reflect Ukraine’s broader doctrine of AI-enabled asymmetric warfare—leveraging speed, automation, and precision to offset Russia’s advantage in manpower and conventional equipment.
Meanwhile, AI is supporting digital forensics and war crimes documentation, enabling investigators to authenticate images, analyze patterns of attacks, and track suspected perpetrators using facial recognition and pattern-matching algorithms.
However, these advances also raise critical ethical and legal questions. Concerns have emerged around the erosion of human oversight, transparency in targeting decisions, and the protection of personal and battlefield data. As AI systems become more autonomous, Ukraine and its partners must navigate the delicate balance between operational effectiveness and compliance with international humanitarian law (IHL).
To streamline and scale its defense innovation pipeline, Ukraine launched Brave1 in April 2023 — a state-backed coordination platform for dual-use and military technology development. Spearheaded by the Ministry of Digital Transformation, the Ministry of Defense, and supported by the National Security and Defense Council, Brave1 has become a central hub for Ukraine’s wartime tech acceleration.
Brave1 supports the full innovation lifecycle: offering grants, testing facilities, legal guidance, and matchmaking between startups, researchers, and military end users. The platform prioritizes scalable, interoperable tools, including AI-driven surveillance systems, cyber defense technologies, and semi-autonomous drones designed for contested environments.
As of early 2025, Brave1 has evaluated over 500 proposals, approved funding for more than 70 projects, and facilitated cross-sector collaborations with international partners, including Estonia’s Defense Innovation Unit and NATO’s DIANA initiative. Several Brave1-backed systems have already been deployed to frontline units, while others are being prepared for export to allied nations facing similar hybrid threats.
Uniquely, Brave1 embeds legal and ethical oversight into its innovation pipeline. Each project is assessed not only for technical viability but also for compliance with Ukrainian law, IHL, and NATO-compatible standards. Legal experts, ethicists, and cybersecurity advisors are part of the evaluation process — an effort to ensure that speed does not come at the cost of accountability.
Beyond wartime needs, Brave1 is also positioning Ukraine as a post-war exporter of battle-tested defense technology, particularly in AI-powered surveillance, robotics, and cyber resilience. With strong government backing, Ukraine is building a defense-tech ecosystem that could outlast the war itself.
Autonomous Warfare and the Challenge of International Law
As AI technologies increasingly shape the nature of armed conflict, international legal frameworks are under growing pressure. Ukraine, like many other democracies exploring battlefield autonomy, faces a difficult task: deploying innovative systems while ensuring compliance with legal obligations that were not designed with artificial intelligence in mind.
International Humanitarian Law (IHL), including the Geneva Conventions and their Additional Protocols, remains the cornerstone of lawful conduct in armed conflict. These instruments enshrine essential principles: distinction, proportionality, precaution, and accountability, which must guide all military operations. Yet they were drafted in an era long before autonomous weapons, predictive targeting algorithms, or real-time surveillance powered by machine learning. While the principles still apply, their practical implementation becomes far more complex when decision-making is partially or fully supported by AI systems.
Among the key legal challenges is the question of how to apply IHL to autonomous targeting. Can an algorithm reliably distinguish between combatants and civilians in dynamic, often ambiguous environments? Can it calculate proportionality in real time, under pressure, and with the contextual awareness expected of a trained human commander?
These questions are made more urgent by the known limitations of AI systems themselves. Many are trained on historical or incomplete data, often lacking transparency or explainability. They can reflect hidden biases, misidentify targets, and produce false positives, especially in environments with limited or noisy data. When used in high-risk contexts like warfare, these technical flaws can lead to operational errors with humanitarian consequences.
The issue of accountability compounds these risks. Traditional IHL assumes a clear human chain of command. But autonomous systems blur that chain—raising unresolved questions about liability: who is responsible when an AI-guided decision causes unintended harm? The commander? The developer? The state?
Further complicating the legal landscape is the dual-use nature of many AI tools. Technologies like facial recognition, satellite surveillance, and language models may be designed for civilian use, but can be readily adapted to military applications. Regulating these tools without hindering beneficial innovation remains a central policy dilemma.
Lastly, cybersecurity vulnerabilities, such as adversarial manipulation, spoofing, and data poisoning, can degrade AI performance, causing systems to act unpredictably and potentially violate legal norms.
International discussions, including those under the UN Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE LAWS), have helped surface these issues, but consensus on binding norms remains elusive. There is growing recognition that existing legal frameworks, while essential, require interpretation, supplementation, or reform to meet the realities of modern warfare.
For Ukraine, this legal and ethical challenge is not theoretical. It is playing out in real time. But with emerging oversight tools like the Council of Europe’s HUDERIA, and a demonstrated commitment to aligning innovation with international norms, Ukraine has the opportunity to help shape the global legal architecture for AI in armed conflict.
Regulating AI Under Fire: Ukraine’s National AI Governance Strategy
Amid the pressures of active warfare, Ukraine has taken a remarkably forward-looking approach to artificial intelligence governance. In October 2023, the Ministry of Digital Transformation unveiled its AI Regulation Roadmap—a strategic framework developed in collaboration with national institutions, civil society organizations, and international partners. This was accompanied by a White Paper on Artificial Intelligence in Ukraine, which elaborates on policy goals, outlines key risks, and provides a vision for how Ukraine aims to align its AI ecosystem with European and global standards.
Developed in consultation with national agencies, civil society, and international partners, both documents are rooted in the values of democratic accountability, transparency, and human rights. They emphasize inclusive regulation, legal harmonization, and the need to foster innovation without compromising ethical standards.
The Roadmap and White Paper draw heavily from global norms, modeling their structure and intent on the EU AI Act, the OECD AI Principles, and the UNESCO Recommendation on the Ethics of AI. While these documents primarily address civilian and commercial uses of AI, their publication during wartime sends a clear signal: Ukraine is committed to building a rights-respecting digital future, even amid a war for national survival.
Although the roadmap does not explicitly regulate military use of AI, its guiding principles, such as human-centric design, risk-based oversight, and transparency, reflect a national ethos of responsible innovation. In parallel, Ukraine has aligned itself with international defense norms, including the 2023 Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy, emphasizing human control and ethical safeguards in battlefield AI systems.
Ukraine’s Use of the HUDERIA Framework
Complementing its national AI governance efforts, Ukraine has begun integrating the HUDERIA methodology, the Human Rights, Democracy, and Rule of Law Impact Assessment of AI Systems adopted by the Council of Europe. As the first government embroiled in large-scale war to pilot this framework, Ukraine’s application of HUDERIA sets a vital precedent. At the heart of the framework is a commitment to ensuring that AI systems are not only technically functional but also legally sound and ethically defensible.
The HUDERIA framework begins with Context-Based Risk Analysis (COBRA)—a process that assesses how an AI system interacts with its operational environment and societal structures. Rather than applying generic rules, COBRA ensures that regulatory responses are tailored to the specific use case, accounting for local legal systems, cultural contexts, and power dynamics.
In parallel, HUDERIA calls for inclusive stakeholder engagement throughout the assessment process. Legal experts, technologists, civil society groups, and impacted communities are invited to provide input, thereby improving transparency and grounding regulatory decisions in democratic principles.
Where risks are identified, the framework mandates targeted mitigation planning, including measures such as algorithmic audits, public disclosures, or technical adjustments. Crucially, HUDERIA is iterative by design, not a one-time exercise but a living process that evolves alongside the system it monitors. This is particularly important in contexts where the use of AI changes rapidly or unpredictably, such as in emergency response, national security, or public communication infrastructure.
While HUDERIA is not explicitly designed for military applications, its structure is flexible enough to inform governance in adjacent areas. However, as AI tools in defense contexts grow more sophisticated and autonomous, there is a pressing need for dedicated military testing environments that are legally bounded, ethically supervised, and technically transparent. Unlike traditional battle labs, these environments should embed human rights safeguards into the development cycle, allow for third-party evaluation, and simulate real-world scenarios to identify unintended consequences before deployment. The existence of such spaces is essential not only for national security, but also for long-term democratic resilience.
Risks and Unintended Consequences: Where AI Could Undermine Ukraine’s Own Values
Ukraine’s deployment of AI in defense is often celebrated as a model of innovation under existential threat. Yet even the most principled application of AI in warfare carries inherent technical, legal, and ethical risks that could undermine the very democratic and human rights values Ukraine seeks to defend.
One of the most pressing dangers is the potential for civilian harm caused by data bias or misclassification within AI systems. Even high-performing object recognition and facial recognition algorithms are susceptible to false positives, particularly in high-stress, data-poor, or adversarial environments.
Research from institutions such as RAND and SIPRI has consistently highlighted the risks of misidentification in autonomous targeting systems. Even when trained on combat-specific datasets, AI systems often fail in simulations under degraded or manipulated data conditions. A 2024 SIPRI background paper, for instance, details how bias and adversarial manipulation can lead to false positives in dynamic battlefield environments—an especially dangerous vulnerability in warzones like Ukraine.
Similarly, a RAND study on the Department of Defense’s AI data strategy stresses that battlefield AI often suffers from poor data labeling, insufficient representation of edge cases, and fragmented data ecosystems. These shortcomings can lead to significant errors in object recognition, target classification, and contextual understanding, especially in complex, high-pressure environments where human oversight is limited.
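A toy numerical illustration makes the failure mode concrete. In the sketch below, a detection threshold is tuned on clean synthetic data and then applied to progressively noisier inputs, and the false positive rate climbs accordingly. The data, features, and numbers are entirely made up and stand in for no real military dataset or system; the point is only the general pattern the RAND and SIPRI analyses describe.

```python
# Illustrative sketch with synthetic data: a decision threshold tuned on clean inputs
# produces more false positives as the inputs degrade. No real system is modeled.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, noise):
    # one synthetic feature: "civilian" objects score around 0, "targets" around 2
    civilians = rng.normal(0.0, 1.0 + noise, n)
    targets = rng.normal(2.0, 1.0 + noise, n)
    x = np.concatenate([civilians, targets])
    y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = genuine target
    return x, y

# choose a threshold on clean data so that only about 5% of civilian objects are flagged
x_clean, y_clean = make_data(5000, noise=0.0)
threshold = np.quantile(x_clean[y_clean == 0], 0.95)

def false_positive_rate(x, y, thr):
    flagged = x > thr
    return flagged[y == 0].mean()

for noise in (0.0, 0.5, 1.0):
    x, y = make_data(5000, noise)
    print(f"noise={noise:.1f}  false positive rate={false_positive_rate(x, y, threshold):.1%}")
```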
There is also a profound psychological toll on civilians living under persistent AI-powered surveillance. Systems relying on facial recognition, gait analysis, and predictive behavioral algorithms, while developed for military or national security purposes, are increasingly crossing into civilian life. This blurring of lines between battlefield and society raises urgent concerns about dual-use technologies: tools originally designed for defense or intelligence that are later adopted for peacetime law enforcement, public administration, or commercial surveillance.
Human Rights Watch has warned that the normalization of AI surveillance, even during wartime, can erode public trust, stifle civic expression, and create long-term trauma, especially when oversight mechanisms are poorly defined or lack independent scrutiny. These risks are no longer abstract in Ukraine.
Recently, the Ministry of Internal Affairs proposed a draft law that would establish a nationwide video monitoring system, citing public safety as its goal. The system would rely on biometric data, including facial recognition, to identify individuals in real time and store personal information for up to 15 years under ministerial control. Cameras would continuously record in public spaces, schools, businesses, and healthcare facilities, making AI-driven surveillance an embedded feature of daily life, even outside active conflict zones.
Such proposals reflect a growing global trend. As Shoshana Zuboff writes in The Age of Surveillance Capitalism, “The foundational questions are about knowledge, authority, and power: Who knows? Who decides? Who decides who decides?” When surveillance tools extend beyond their original purpose, they not only collect data but also reshape social behavior, redefine power dynamics, and recondition civic space.
Even with the adoption of frameworks like HUDERIA, AI deployment without full transparency may lead to an erosion of public and international trust. If civilians or external observers perceive AI decisions, such as automated strike targeting or border surveillance, as opaque or unchallengeable, democratic legitimacy may be strained. Without transparent accountability mechanisms, Ukraine risks reinforcing the perception that AI-based decisions operate in a ‘black box’, potentially alienating both domestic constituencies and global partners.
Finally, there is a long-term risk of post-war misuse. Once hostilities subside, the institutional memory and infrastructure built for defense may be redirected toward domestic surveillance or political control, especially if regulatory guardrails are weakened in the name of national security. Surveillance tools introduced for battlefield intelligence could, without strict legal constraints, be repurposed to monitor political dissent, journalists, or minority communities. Numerous global examples, from Ethiopia to Myanmar, illustrate how defense AI systems, once unshackled from wartime justifications, can become instruments of authoritarian governance.
This consideration also extends to Ukraine’s ambitions as a post-war exporter of defense technologies. While platforms like Brave1 have embedded ethical principles into their innovation cycles, technologies such as autonomous drones, facial recognition software, or cybersecurity tools may eventually be adopted by international partners operating under different legal frameworks or political conditions.
To safeguard Ukraine’s strong human rights reputation, it will be important to accompany tech exports with clear end-use monitoring, licensing controls, and ethical guidelines. Doing so would help ensure that Ukraine’s cutting-edge innovations continue to serve democratic and humanitarian aims, not only at home, but globally. With the right foresight, Ukraine’s emerging defense-tech leadership can set a valuable international precedent for responsible AI exports.
At the same time, strengthening the role of civil society watchdogs, both during and after the war, will be essential to ensure transparency, prevent mission creep, and maintain public trust in how these technologies are governed.
Lessons from Ukraine: Regulating AI Without Surrendering the Future
Ukraine’s war has become an unplanned but urgent testbed for AI in modern warfare—revealing both the strategic value of emerging technologies and the legal, ethical, and humanitarian challenges they create. Remarkably, even under existential threat, Ukraine has shown that technological innovation can be governed by democratic values and rule-of-law principles.
Its experience offers critical lessons for the global community. First, legal and ethical oversight must be embedded early, not retrofitted after deployment. Ukraine’s use of pre-deployment risk assessments, including through frameworks like HUDERIA, demonstrates the value of front-loading accountability into high-risk innovation cycles. Civilian oversight also remains essential, particularly in times of war. Ukraine’s coordination between government, civil society, and international partners has ensured transparency and public trust, even amid crisis.
Importantly, national legislation should align with international norms, as seen in Ukraine’s deliberate harmonization with the EU AI Act, OECD principles, and Council of Europe standards. These steps reflect a foundational belief: that human rights and democratic safeguards must not be sidelined, even under fire.
At the same time, banning military AI outright is neither realistic nor strategic. Russia and other authoritarian actors are unlikely to pause or restrict development. If democratic nations do not engage, test, and regulate these technologies, they risk falling behind, and surrendering the ethical ground to those who will not play by the same rules. The challenge is not whether to develop AI for defense, but how to do so responsibly—striking the right balance between national security and human rights.
Fairness and Philosophy in the Age of Artificial Intelligence
Artificial intelligence has been implemented in a variety of fields to help humans make faster, smarter decisions, but this technology is not a magic wand. It is built on algorithms designed to make decisions based on data collected from our human world, one that is plagued by systemic inequalities. If these algorithms are trained on data that reflect an unfair reality, then how can we expect them to help us build a better world? And, perhaps more importantly, do we really know what a ‘better’ world looks like?
Philosophers have wrestled with what it means to be ‘fair’ or ‘just’ for centuries. In recent years, these debates have escaped the dusty tomes of the library and found their way into the realm of computer science. While software engineers tinker with ensuring the algorithm works correctly, philosophers ask what ‘correctly’ really means. As AI is now being used to decide which defendants are denied parole and which students are allowed into university, examining exactly what we expect from a ‘fair’ and ‘just’ world is equally important as exploring how we can use AI to achieve it.
To examine these topics, the Cairo Review’s Senior Editor Abigail Flynn spoke with Aaron Wolf, senior lecturer in university studies and research affiliate in philosophy at Colgate University, who shared his insight on the nuances of algorithmic fairness.
Cairo Review: As someone who researches moral theory, what made you interested in studying AI?
Aaron Wolf: In the last handful of years, I’ve come to feel like the more pressing issue, a thing that’s way more interesting to me, is something called the ‘value alignment problem’.
It’s the question of how we get autonomous systems to behave on their own in ways that we would want. This is a somewhat more difficult problem than you might think, because it’s difficult to specify exactly what it is that we want. There are lots of interesting cases where an automated system thinks you want one thing and then gives you that thing, but that thing turns out to be very different from what you actually wanted. Before we allow machines to, you know, take over the world, it’s fairly important that we get them to behave in predictable and acceptable ways.
So, you’re worried about what might happen in the future as AI nears superintelligence?
Actually, my little slice of the value alignment problem is a more near-term thing. Lots of people working on value alignment have future AI applications in mind. But there’s also something that is happening now, and has been happening for some time, which is that we have these automated systems that are ubiquitous and make a lot of morally significant decisions about people here and now.
Artificial intelligence is already being used to make decisions in real life?
Sure. From job applications to insurance and loan applications, healthcare, finance, education. Even criminal justice, like in the case of COMPAS.
What’s COMPAS?
‘COMPAS’ is an algorithm; it stands for Correctional Offender Management Profiling for Alternative Sanctions. It takes about 115 data points about a criminal defendant and makes a prediction about the likelihood that they will be arrested again in the next handful of years. In other words, it predicts the likelihood that the defendant will be rearrested in the future if they are released on bond or parole now. The judge uses the system to make decisions about who gets parole and who doesn’t.
Now, this system isn’t technically AI in the sense that we talk about today, because it’s a hand-coded algorithm. But it’s still an important example of potential bias. ProPublica wrote a research white paper on it and alleged that the way the algorithm makes its decisions is racially biased against Black defendants.
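To make the idea of a ‘hand-coded algorithm’ concrete, here is a deliberately toy sketch of what an actuarial risk score of this general kind can look like: a fixed weighted sum of questionnaire answers mapped onto a 1-10 scale. The features, weights, and cutoffs are invented for illustration and bear no relation to COMPAS’s actual, proprietary scoring model.

```python
# Hypothetical sketch: a hand-coded risk score as a fixed weighted sum of answers.
# Features and weights are invented; this is not the COMPAS formula.
def risk_decile(answers: dict) -> int:
    """Map questionnaire answers to a 1-10 risk decile."""
    weights = {
        "prior_arrests": 1.5,                  # count of prior arrests
        "age_at_first_arrest_under_20": 2.0,   # 1 if yes, else 0
        "currently_unemployed": 1.0,           # 1 if yes, else 0
        "failed_prior_supervision": 2.5,       # 1 if yes, else 0
    }
    raw = sum(weights[k] * float(answers.get(k, 0)) for k in weights)
    max_raw = 1.5 * 10 + 2.0 + 1.0 + 2.5       # crude cap: ten or more priors saturates the scale
    return 1 + int(9 * min(raw, max_raw) / max_raw)

print(risk_decile({"prior_arrests": 2, "currently_unemployed": 1}))      # prints 2 (low-to-mid)
print(risk_decile({"prior_arrests": 8, "failed_prior_supervision": 1,
                   "age_at_first_arrest_under_20": 1}))                  # prints 8 (high)
```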
What did the designers of the algorithm say?
They wrote up a response saying something like, ‘No, it’s not like that, we went out of our way to make this algorithm fair. Here’s the metric which we used, and this metric is the industry standard’.
At that point, the computer scientists got involved, and they pointed out that two different metrics for fairness were in play: first, the ‘industry standard’ one used by the designers of COMPAS, and second, the more intuitive sense of fairness and unfairness that the ProPublica researchers were using. The computer scientists showed that these two metrics of fairness are at odds with each other and cannot both be satisfied at the same time.
Wait—what does ‘metric’ mean here? Does each ‘metric’ mean a specific definition for fairness?
Well, COMPAS uses a metric that is mathematically defined, we can express it in probability. ProPublica used a more intuitive conception of fairness, but it can also be expressed mathematically. That’s how the computer scientists showed that both metrics can’t be satisfied at the same time.
The computer scientists concluded that fairness is impossible, or at least that total fairness is impossible. They say there are different flavors of fairness, and you pick the one that’s best suited to your case and run with that. Most of the industry agrees; most of the AI ethics desks at major U.S. consulting firms have statements like that on their landing pages.
For them, that’s the end of the story.
But it’s not the end of the story, is it?
In my opinion, no. It strikes me as a bit too quick and also a bit dangerous. Imagine a software engineer is trying to defend the decisions that an AI product has made in court and the opposing lawyer is grilling them, asking ‘What makes this decision fair?’.
And the software engineer says, ‘Well, I don’t know. We just picked a metric.’
That seems like a really unsatisfying defense of the choices you’ve made. And that’s where people like me get involved. This kind of nihilistic approach, ‘just pick and choose’, rubs philosophers the wrong way.
So, as a philosopher, how do you approach the question?
The field of research that I’m in these days is trying to look at the mathematical possibilities and ask, ‘Which one of these is best at capturing the ordinary, humanistic, intuitive ideas about fairness? Which matches the concept of fairness that ordinary people on the street have, but also the concepts that philosophers have been generating over many centuries?’.
For philosophers, we’re really interested in what ‘just’ or ‘fair’ means, and whether the two are actually compatible.
How could something be fair but not just?
Let’s say that we’re defining fair as ‘everyone gets treated equally’. We could make an argument that as long as we treat people equally, the outcome is automatically just. But that doesn’t always reflect reality. Sometimes treating people exactly the same can produce unjust outcomes because the mechanisms by which people got to be where they are, are often deeply unjust.
If you want to de-bias your algorithm in a way that’s going to undo past injustice, you have to, in some way or another, put your thumb on the scale, so to speak. This goes against the idea of treating people exactly equally.
So you basically have two approaches to fairness here, one that assumes a very narrow definition of treating people exactly the same, and another that has a more proactive lens.
Is there a situation, somewhere in the future, where an algorithm can be de-biased and trained in a way that makes it perfectly just?
People’s sense of what is and what is not morally acceptable from the perspective of fairness is always changing over time. So we’re going to have to come up with new ways of re-tinkering, reorganizing, or reweighting the algorithm, putting our thumb back on the scale again in order to build a data set that tells the algorithm how to make decisions in a way that’s getting us the outcomes we want.
But we haven’t reached the point of a perfect algorithm yet. So how do we use these programs responsibly in the here and now?
I wrote a paper about this that just came out in May. It talks specifically about how algorithmic fairness plays out from a philosophical perspective in higher education.
Do you mean in the classroom, like AI education?
Not exactly. In this case, AI is being used by academic advisors to decrease the rate of students dropping out of university by steering them toward certain majors. Institutions do this to save money—if you can keep track of which types of students are most likely to drop out of a certain major, then you can advise similar students to pursue different options. It helps advisers who have several hundred students under their guidance to make decisions more efficiently. Some of these universities include the student’s race as a variable.
So if students from a certain ethnic background usually drop out of a specific major, the program will tell the adviser to guide other students from the same background away from that program. That sounds like it could get tricky fast.
Yeah, unfortunately. I’m interpolating a bit here from the original piece, but if the primary goal is to keep the student enrolled, then the easiest thing to do is steer them into the ‘safer major’. Usually this is something like area studies, like African American or Latin American studies, sociology, and so on.
There’s a few different ways to spin this, but these are fields that have a reputation for being more welcoming and, for lack of a better term, less competitive for grades. Compare this to economics, international relations, or pre-med, where the programs are actively trying to weed out students. This is already an issue for advisers, even without AI. When you combine this existing problem with an AI advising tool that makes recommendations primarily to increase retention rates, you can end up exacerbating inequality.
Black and brown college students might be pushed toward less competitive and less lucrative fields, because the system is designed to give the most weight to protecting retention, at the expense of allowing the student the freedom to choose the course of study that they think is best for them. And most of the time, the students aren’t aware that these programs are being used, and, by extension, why the adviser might be pushing them toward a specific field. The goals of the institution don’t necessarily overlap with the student’s own best interest.
It sounds like these programs are very biased. Should we throw them out altogether, or is there any way to use them responsibly?
I think we can use them responsibly. I’m not involved in this specific type of research, but there’s some very clever and interesting work being done about how we can take historical data and clean it up to train the algorithm to produce a more just world. There’s all kinds of ways for bias to creep into the data set, and lots of people are out there working on ways to de-bias data.
This is one of the fundamental tensions when it comes to automated or algorithmic decision making. We rely on historical data from a world that is very non-ideal in terms of fairness.
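One concrete example of what ‘cleaning up’ historical data can mean is reweighing, a standard pre-processing technique; the sketch below is a minimal illustration with toy numbers and is not necessarily the approach the researchers mentioned above are using.

```python
# A minimal sketch of one well-known pre-processing approach ("reweighing"),
# offered as an illustration of what "de-biasing" historical data can mean.
# Each training example gets a weight that makes group membership statistically
# independent of the favorable label in the weighted data.
from collections import Counter

# Toy historical records: (group, label), where label 1 = favorable outcome
data = [("A", 1)] * 40 + [("A", 0)] * 10 + [("B", 1)] * 20 + [("B", 0)] * 30

n = len(data)
group_counts = Counter(g for g, _ in data)
label_counts = Counter(y for _, y in data)
pair_counts = Counter(data)

# w(g, y) = P(g) * P(y) / P(g, y): expected frequency over observed frequency
weights = {
    (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (pair_counts[(g, y)] / n)
    for (g, y) in pair_counts
}

for (g, y), w in sorted(weights.items()):
    print(f"group={g}, label={y}: weight={w:.2f}")
# Underrepresented combinations (e.g., favorable outcomes in the disadvantaged
# group) are weighted up before training, so the model no longer learns the
# historical correlation between group and outcome.
```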
If the current data comes from a biased world, what’s the best way for humans to use this type of technology?
This relates to the concept of ‘human in the loop’, or how humans actually implement their AI programs. This is a problem that shows up all over the place, where the human is just pushing a button and letting the program function by itself.
People who use these machines should have a better-than-average understanding of what the tool is doing, how its results should be interpreted, and what action should be taken. To let the program decide without any meaningful human oversight is a disservice to the person whom we are making these decisions about.
As an adviser, maybe the program tells me that the student in front of me is likely to drop out of the econ program. But I know that the student is from a disadvantaged background, so maybe I’ll discount the algorithm’s decision because I know it’s based on historical patterns, not on the individual sitting in front of me. Maybe this student is highly motivated, so I’ll advise him to pursue econ anyway.
The algorithms are tools, and they can often catch things that the human in the loop might miss. But like all tools, they have strengths and limitations, and they need to be used well.
Gaza: Israel’s AI Human Laboratory
For years, Israel has been working to establish itself as a leader in developing AI-powered weapons and surveillance systems. Using these tools raises ethical, legal, and humanitarian concerns due to the potential risk to civilians. As Human Rights Watch has argued, Israel’s use of AI specifically in the war in Gaza risks violating international humanitarian law by targeting civilians instead of military targets.
The Israel Defense Forces’ (IDF) Target Administration Division, established in 2019 by Lt. Gen. Aviv Kochavi, is responsible for developing Israel’s AI Decision Support System (DSS). Kochavi, who directed military intelligence during Israel’s 2014 war in Gaza, has since aimed to speed up the generation of targets, and he has noted that integrating AI allows the IDF to identify as many targets in a month as it previously did in a year: “While the military had under 300 targets in Lebanon in 2006, the number has increased to thousands.”
A New Type of War
This most recent bout of conflict is not the only time Israel has deployed AI in Gaza; Israel labeled its 2021 war in Gaza the “first AI war”. Since then, it has been promoting itself as a leader in developing battlefield-tested AI weapons and tools. For instance, to market its capabilities to European allies, Israel brought the chair of NATO’s Military Committee to the Gaza border a week before October 7, 2023, to showcase its automated border.
Since October 7, we have seen an escalation in the use and testing of new AI systems, as revealed in an investigation by +972 Magazine. Official IDF figures show that in the first 35 days of the war the military attacked 15,000 targets, a significantly higher number than in previous operations, which had also utilized AI systems. In an interview with The Jerusalem Post, a colonel who serves as the chief of the IDF “target bank”, which includes a list of potential Hamas operatives and key infrastructure, suggested that “the AI targeting capabilities had for the first time helped the IDF cross the point where they can assemble new targets even faster than the rate of attacks.”
Additionally, according to an investigation by The New York Times, after October 7, Israel “severely undermined its system of safeguards to make it easier to strike Gaza, and used flawed methods to find targets and assess the risk to civilians”. It is no surprise that, according to a recent Airwars report, “By almost every metric, the harm to civilians from the first month of the Israeli campaign in Gaza is incomparable with any 21st-century air campaign.”
According to the report, in October 2023, at least 5,139 civilians, including 1,900 children, were killed in Gaza, marking the highest civilian casualties recorded in a single month since 2014 when Airwars started recording casualties. The majority of deaths occurred in residential buildings, with families often killed together, averaging 15 family members per incident.
The +972 Magazine investigation revealed numerous IDF programs that utilize this AI technology. Lavender is one such program, employing machine learning to assign residents of Gaza a numerical score indicating their suspected likelihood of being a member of an armed group. Early estimates showed that in the opening weeks of the current war, Lavender marked some 37,000 Palestinians, along with their homes, as potential targets due to their assumed connection to Hamas.
Lavender utilizes surveillance data to assess individuals based on their suspected affiliation with a militant group. The criteria for identifying someone as a likely Hamas operative are concerningly broad, assuming that being a young male, living in specific areas of Gaza, or exhibiting particular communication behaviors is enough to justify arrest and targeting with weapons.
An unnamed Israeli intelligence officer told +972 Magazine: “There were times when a Hamas operative was defined more broadly, and then the machine started bringing us all kinds of civil defense personnel, police officers, on whom it would be a shame to waste bombs. They help the Hamas government, but they don’t endanger soldiers.”
This statement, even though critical of the target selection criteria for AI, is framed in terms of resource efficiency rather than the ethical obligation to protect civilians and non-combatants, reducing the taking of human life to a matter of cost-effectiveness. The officer’s statement also highlights the risk of civilian harm from imprecise or overly broad targeting systems.
The Lavender program’s tendency to identify “civil defense personnel” as targets shows the risk of widening the AI selection criteria, which leads to preventable civilian casualties and violations of the principle of distinction, a cornerstone of international law requiring clear differentiation between combatants and civilians.
The investigation by +972 Magazine also revealed many admitted mistakes and biases in these AI systems: Israeli officers interviewed for the report indicated that Lavender makes “mistakes” in roughly ten percent of cases. Another AI tool Israel uses is “Where’s Daddy?”, which relies on mobile phone location tracking to find individuals identified as military targets when they arrive at a particular location, typically their homes.
Israel claims all the targets get approved by a human; however, according to +972 Magazine sources, human approval of targets served “only as a ‘rubber stamp’ for the machine’s decisions, explaining how they would personally devote only about ‘20 seconds’ to each target before authorizing a bombing—just to make sure the Lavender-marked target is male.” In this sense, these programs have made “human decision-making too robotic, essentially transforming human operators themselves into ‘killer robots’”.
The operator’s assessment of the target can be compromised by bias, “especially when the system’s output confirms or matches the human user’s existing beliefs, perceptions, or stances”. Confirmation bias can influence officers’ decision-making when reviewing AI target recommendations. For instance, intelligence officers might quickly approve AI-generated target recommendations aligning with their preconceptions, even if those suggestions are based on flawed information or broad criteria. This tendency can intensify in high-pressure situations, where military personnel may make quick decisions based on AI recommendations that align with their views, resulting in significant civilian harm.
Additionally, when rules of engagement allow for high civilian death thresholds, these technologies become tools that facilitate mass casualties rather than mitigate them. For instance, according to a report from The Guardian, “dumb bombs” (bombs without a guidance system) were utilized to strike at individuals viewed as lower-ranking members of Hamas, resulting in the destruction of entire residences and the deaths of all individuals inside.
We also know from The New York Times investigation that Israel increased the threshold of acceptable civilian casualties at the beginning of the war to 20 and allowed strikes that could “harm more than 100 civilians … on a case-by-case basis”. For instance, if the target is considered a high-ranking Hamas leader, the permissible number of civilian casualties could exceed 100.
Eventually, Israel removed any restrictions on the daily number of Palestinian civilians killed in airstrikes. Therefore, the use of AI-enhanced intelligence does not inherently lead to more precise or ethical warfare; instead, it can facilitate mass casualties if the decision-making framework allows for high civilian death thresholds.
In addition to the surveillance run by Lavender, the Israeli military has also been identifying members of Hamas using a different AI program—a facial recognition program, originally meant to identify Israeli hostages. This technology is managed by Israel’s military intelligence, including Unit 8200, with support from Corsight, a private Israeli company, and Google Photos.
As The New York Times reported, “At times, the technology wrongly flagged civilians as wanted Hamas militants.” One of the most prominent examples of wrongful identification is that of the Palestinian poet Mosab Abu Toha, who was detained and interrogated at an Israeli military checkpoint after the program mistakenly identified him as affiliated with Hamas.
U.S. Tech Companies Enable AI Warfare
The technology Israel is using in Gaza relies on tools provided by private companies to handle the data, including some based in the United States. For instance, Google and Amazon signed a 1.2 billion dollar contract with the Israeli government in 2021, known as Project Nimbus. Project Nimbus helps Israel “store, process, and analyze data, including facial recognition, emotion recognition, biometrics and demographic information”. The project alarmed some Google and Amazon workers who started the campaign “No Tech for Apartheid.” Despite these campaigns, Google and Amazon continue to work with the Israeli government and military.
More recently, a +972 Magazine investigation found that the Israeli army’s Center of Computing and Information Systems unit is utilizing cloud storage and artificial intelligence services provided by civilian tech giants in its operations in the Gaza Strip. This process began after the crash of the army’s internal cloud servers when they became overloaded with the number of new users during the ground invasion of Gaza in late October 2023. The army describes its internal cloud as a “weapons platform” with applications for marking targets, live Unmanned Aerial Vehicle (UAV) footage, and command and control systems.
The U.S.-based company Palantir, founded in 2003, also collaborates with various governmental, law enforcement, and military organizations, including those in Israel. Palantir’s AI programs rely on data on Palestinians from intelligence reports. According to documents released by Edward Snowden, the NSA whistleblower, one of the sources of such data was the U.S. National Security Agency. Other companies in Silicon Valley are involved, including Shield AI, which provides Israel with self-piloting drones for “close-quarters indoor combat”, and Skydio, which supplies Israel with “short-range reconnaissance drones” that can navigate “obstacles autonomously and produce 3D scans of complex structures like buildings”.
The integration of U.S. private tech companies into Israel’s military operations raises profound ethical concerns that extend well beyond the context of Gaza. The partnerships between Israel and corporations such as Google, Amazon, and Palantir reflect a deep entanglement between commercial profit motives and state violence, where military AI tools are developed not just for battlefield advantage but also for commercial scalability and international export.
Developed for Gaza, Sold Abroad
These AI-powered systems—trained, tested, and refined during the war on Gaza—are not developed in a vacuum. Gaza has functioned as a live laboratory for these technologies, allowing Israel and its corporate partners to demonstrate the effectiveness and ‘efficiency’ of AI-enhanced warfare in real time. As with previous military technologies (e.g., drones), what is developed and tested in Gaza is often marketed globally as ‘battle-tested’ solutions, fueling a profitable security industry that benefits from war and repression. Indeed, Israel is already one of the world’s largest arms exporters relative to its size, and its AI technologies are likely to become core components of its growing defense export portfolio.
As these AI systems mature and demonstrate their ‘effectiveness’ in high-casualty environments, there is an increasing risk that they will be sold to regimes with long histories of human rights abuses. Governments seeking to consolidate power, suppress dissent, or control marginalized populations will find in these AI technologies an attractive toolset. Surveillance platforms like facial recognition software and automated target selection systems, especially when paired with biometric databases or predictive policing algorithms, can become instruments of mass control and political persecution.
For instance, authoritarian governments could purchase and deploy variants of Lavender or facial recognition systems similar to those used in Gaza, repurposed to monitor and neutralize political opposition, ethnic minorities, or protest movements. Such systems, powered by partnerships with U.S. firms or trained on data from U.S.-linked cloud infrastructure, would be challenging to regulate once exported. Without enforceable international regulations, tech companies face few legal or financial consequences for supplying repressive regimes with tools of digital authoritarianism.
Furthermore, the revolving door between Silicon Valley, the Pentagon, and foreign militaries such as the Israel Defense Forces facilitates the rapid international spread of these technologies. With the proliferation of AI-enabled surveillance and targeting tools, the distinction between ‘defense technology’ and tools of domestic repression becomes increasingly blurred.
As Matt Mahmoudi of Amnesty International warns, the opacity of these partnerships means that “U.S. technology companies contracting with Israeli defense authorities have had little insight or control over how their products are used by the Israeli government”—a dynamic that is likely to be replicated in other jurisdictions where authoritarianism is on the rise.
In this context, the Gaza war may represent not just a humanitarian catastrophe but also a pivotal moment in the globalization of AI-enabled warfare. If unregulated, the collaboration between private tech firms and military powers risks accelerating the spread of surveillance, repression, and high-casualty targeting strategies around the globe, placing civilians in authoritarian regimes—and even democratic ones—at unprecedented risk.
Hybrid Writing: Authority, Ethics, Agency
Hybrid writing is no longer a matter of the future. It is already here, quietly reshaping the texture of academic work. Across seminar rooms, publication pipelines, and feedback loops, something has shifted. We are not confronting a technological future in the abstract; we are living inside its mundane, daily iterations. What once appeared as peripheral assistance—search engines, citation tools, proofreading algorithms—has become entangled with the act of writing itself. Sentences now appear half-formed. Transitions are suggested before the thought is fully there. The rhythm of composition has begun to move differently, as if another presence were softly intervening in the flow.
The shift is subtle, but its implications are not. Hybrid writing is a co-creative process merging human intentionality with AI generative engines to produce dynamic textual output. It touches on the foundational coordinates of scholarly life: what it means to write, to author, to be responsible for thought. For generations, the university has trained its members to move through writing toward recognition—not just professional visibility, but epistemic clarity, an intelligible positioning of the self in relation to knowledge. Hybrid writing complicates that trajectory. The arrival of the sentence no longer guarantees that someone labored to shape it. Ideas arrive with fluency but without clear origin. Style remains, but authorship flickers.
This moment demands attention. The ease of hybrid writing should not distract us from the seriousness of what it calls into question. If the foundational practices of scholarship—authorship, ethical labor, agency—are being mediated, supplemented, and in some cases, bypassed, then we must ask with fresh urgency: what remains non-negotiable? What do we still expect to be human?
We might name one emerging tendency vibe-writing—a form of composition that privileges tonal plausibility over argumentative necessity. The goal is no longer to advance a position, but to approximate the voice of someone who might. It’s a form that performs the gestures of scholarly work without necessarily asking the writer—or the reader—to take a stand. Like vibe-coding in software development, it may hold up just fine until something truly depends on it. And then the gaps begin to show.
I do not approach this from a position of resistance. I use these tools. I have prompted, coded, translated, even experimented with generative scaffolds to teach and to revise. At the same time, I remain committed to teaching writing as a practice of formation wherein the very act of writing shapes thought and identity by molding a voice. I ask students to write slowly, without aid, not because I believe in intellectual suffering, but because certain decisions—the ones that form judgment—can only be made in solitude. There is still value in wrestling with a sentence until it begins to mean what you meant, even if that meaning is provisional. We are, quietly and irreversibly, beyond the phase of prohibition. The question now is not whether to use these tools, but how to use them. This essay is one attempt to respond, by reflecting on three elements of scholarly life that hybrid writing does not erase, but places under renewed pressure: authorship, ethics, and agency.
On Authorship
There is a certain quietness to authorship. Not in its effects—those may resonate widely—but in its act. We write alone, mostly. Or half-alone, now, with tools that respond, suggest, phrase. And yet even then, authorship remains a solitary commitment: to shape, to select, to stand by what has been said. It is not invention exactly, nor is it confession. It is something slower—a becoming-accountable to what is made.
In academic life, authorship is more than just a signature. It is a way of entering into form. The disciplinary article, the monograph, the lecture manuscript—these are not merely vessels for ideas, but rituals of attunement. To write in law, or philosophy, or critical theory is to learn not only what can be said, but how it must sound to be taken seriously. And this sound—its cadence, its modulations, its delicately wielded citations—is not incidental. It is the content of the form, as Roland Barthes would have it: the substance that lies not behind language, but in it, shaped by the expectations and rituals of a discourse community that prizes certain gestures over others.
There is something curiously athletic about this. In some journals, one senses the expectation of a rhetorical triple axel—a deft theoretical landing, ideally inverted mid-air, followed by a dazzling recovery of complexity. One learns the choreography early: state the problem, cite the pantheon, pose a question that slightly shifts the coordinates, then ‘re-describe’ (the word of the past decade)—lightly, evocatively, ideally quoting someone beyond reproach for validation. There is nothing inherently wrong with this. Stylization is part of all art forms. But when the form begins to dictate the movement too rigidly, the risk is that we forget what the routine was meant to express.
Still, we abide. We adjust our tone, prune our adjectives, perform our hesitations in the approved key. Not because we are disingenuous, but because we understand that scholarly voice is a social act. It lives between the lines—in the spacing, in the rhythm, in the margin notes that never make it in. It’s not just what you think, it’s how long you wait before you let the reader know you think it. This kind of authority is not extrinsic but emerges from form. And form, in the humanities at least, is rarely neutral. It is historical, cultural, affective—it is political. And it is beautiful. According to philosopher Alexandre Kojève, history is the movement through which human beings come to be recognized by others—not for what they are, but for the forms they give to themselves. Authorship, too, is like that: a becoming-visible through structure. It is not the raw transmission of thought, but the staging of it. The argument must not only be made—it must arrive in costume.
Yet, the costume matters. The process of learning to write in a field is, at some level, the process of learning to be heard in it. Students often ask what they are meant to sound like. They’re not always asking for grammar. Most frequently they are asking for posture, or tempo, or for the subtle difference between assertion and exploration—the part of the sentence where you say “traditionally”, not because you have researched a tradition, but because you know that’s the note the paragraph requires.
Now, in the age of hybrid writing, when so much of the outer form can be simulated—citation styles, paragraph rhythms, transitions that glide—the question returns with new urgency. If style can be generated, what remains of authorship? But perhaps the question is mis-posed. Style was never the core of authorship. It was its expression, its shadow. What remains is not the novelty of the sentence, but the intention behind it. The human who meant it. The thinking that left marks—hesitations, returns, surplus phrasing, a metaphor that lingers just a bit too long. Vibe-writing—writing that captures tone without thought, form without formation—offers the surface, but not the stance. It can mimic the mood of scholarship, but not its moral investment.
Here, discernment still matters. Not as moral purity, but as sensibility. The judgment of when a sentence feels hollow, when a formulation is too easy, when the rhythm is too smooth. These are not algorithmic decisions. They are lived ones, shaped by reading, by writing badly, by time. And borrowing, of course, is part of all of it.
Still, a persistent worry remains: that even when discernment is exercised, hybrid writing may unconsciously gravitate toward the mean—toward the lexical, syntactic, and conceptual aggregates that language models are trained to reproduce. The risk is not just imitation, but convergence: a quiet narrowing of thought as it settles into the grooves of what has already been said, already valued, already anticipated. Even originality, in this register, may come to wear the mask of precedent. To resist this pull requires more than vigilance; it requires a cultivated discomfort with fluency itself, and a willingness to press against the ease with which the model completes what we have not yet fully begun to think.
There is no pure origin in scholarship. Every argument leans on another. Every insight is a rephrasing, a turn. The question was never whether we borrow, but whether we carry. Whether we place something old in a new light, or merely repeat it with better punctuation. Machines can repeat. But it’s not yet clear they can care about what they repeat. These ethical questions—of reuse, recognition, and responsibility—deserve their own attention, and I turn to them shortly.
The doctoral dissertation remains one of the most concentrated sites of this struggle. For all its bureaucratic sediment—the formatting rules, the committee procedures, the ritualized defence—it still holds something sacred: a stretch of writing that must be authored alone. Not because collaboration is shameful, but because the process of becoming a scholar is still tethered to the practice of forming thought in solitude. It is, perhaps, our last real rite of passage. Not an exam, not a product, but a long, uncertain inhabitation of one’s own mind. A person emerges from it changed, not because of what was learned, but because of how long they had to hold it.
One might return here to Raymond Queneau. In his Exercises in Style, first published in 1947, he turns a trivial bus-boarding quarrel into ninety-nine stylistic provocations. Each version—clinical, sarcastic, liturgical—remakes the meaning through inflection. The form is the story. So too in scholarship: it is not the claim alone, but the path through which it arrives that grants it life. Hybrid tools can suggest the claim, but they cannot quite walk the path, the lived process that shapes every rhythm and inflection. They do not know why this word should come before that one, or why the paragraph needs to exhale before it turns, or the sound of the pause.
This is not to sanctify the human but to press that writing, for now, is still one of the places where something of the human remains visible. This is not in the speed or even in the clarity, but in the pauses or in the way a sentence doesn’t quite land—then gets rebalanced. It is in the shadow of someone trying to mean something. And if that is authorship—this trying, this shaping—then it is not at risk. It is simply evolving. The tools may change, the form may soften, the genres may blend. But someone will still have to choose when to speak and on what to speak. And someone will have to listen for whether it was worth saying.
On Ethics
It is tempting to think of ethics as a code. And indeed, the university has them: policies on academic integrity, plagiarism checkers—paragraphs in syllabi printed in small, nervous font. They outline what counts as original, what must be acknowledged, and what will happen if one is caught crossing the line. But most of us do not live at the edges of those policies. Our ethical decisions in writing are quieter, more ambiguous. They emerge in the small moments of hesitation—when a paragraph comes too easily, when a phrasing feels borrowed, when the voice we’ve written in sounds more like someone else’s than our own.
Hybrid writing amplifies those moments. Not because the rules have changed, but because the ease has. The discomfort many feel when composing with AI tools is not usually about rules broken. It is about a certain recognition—the realization that the work feels finished before we’ve earned it, that the gestures of scholarship can now be assembled without the inward pressure that once accompanied them. We pause—not just because we might be found out, but because we are unsure whether we have, in some less visible way, disappeared from the page. But that pause is not new. Long before AI entered the process, we borrowed—openly, awkwardly, sometimes beautifully. We learned to write by echoing the sounds of others. In early drafts of term papers and dissertation chapters, we repeated the sentences we had heard in seminar rooms and read in footnotes. Not only the ideas, but their posture. Not only the conclusions, but the steps taken to reach them. The form of scholarly thought is itself transmitted mimetically.
T.S. Eliot famously suggests that immature poets imitate, while mature poets transform what they borrow into something recognizably their own. For him, tradition is not a constraint but a medium through which originality emerges. This remark, austere but fair, reminds us that originality has always been something of a performance. Its ethical measure is not in the novelty of the material but in the depth of transformation. To borrow well is to be changed by what one takes—and to take responsibility for what emerges.
I often remind my students—half in jest, half in warning—that when a scholar begins a chapter with the word “Traditionally”, they are not merely introducing context; they are staging authority. That opening move, punctuated just so, smuggles in assumptions under the guise of shared knowledge, rendering them nearly immune to dispute. And more often than not, the strategy succeeds.[1] The academy rewards not only those who possess ideas, but those who can perform them with grace: the double somersault of critique, the mid-air twist of reframing, the clean landing of the counterpoint. Here, style is not ornament but argument. It travels not just through text, but through tone, gesture, cadence—modes of transmission we do not always consciously register. Style, in this sense, is not a flourish. It is a mode of recognition.
This is where the ethical terrain has always been soft. We do not usually cite these forms. We do not name the voices we have absorbed. The rhythms, the syllogisms, the trusted ways of turning an argument—they live inside us, learned but unmarked. They are as much a part of our writing as our grammar. And yet we do not feel dishonest. Why not? Because somewhere along the way, we came to understand that ethics in scholarship is not a matter of origin, but of presence. We are not responsible for inventing the form, only for meaning something through it.
That sense of responsibility, however, takes time to develop. It is not innate. It is formed through the friction of writing itself—through failure, revision, exposure. We begin by copying. Then we learn to hesitate. Then, slowly, to shape. And that shaping is not only technical. It is ethical. We become writers by deciding not just what to say, but how much of ourselves we are willing to place in the sentence.
This is what hybrid writing risks bypassing. Not meaning, not coherence, not structure—but the act of formation. A well-prompted model can now replicate the genre, approximate the cadence, even cite the expected thinkers. But it (still) cannot measure the effect of the pause. It cannot question whether the argument should be made in that way, or whether the tone fits the stakes. It cannot wonder if the thought belongs to it. And because it does not ask, it cannot learn to care.
Of course, originality has always been a myth with uneven edges. Picasso is often quoted as having said that “Good artists copy, great artists steal.” Whether or not he said it, the phrase captures a truth about how influence becomes form. Historical accuracy, after all, should never get in the way of a good myth. Our collective ‘knowledge_base’, to use a coder’s vernacular, is full of mis-attributed aphorisms, from Marie-Antoinette’s cake to President Mao’s views on the French Revolution, which nevertheless allow cultural recognition and foster a sense of ease between interlocutors who share the same ‘knowledge’. The mythistorical origin of this aphorism, mischievous and elliptical, exemplifies the fact that originality has little to do with invention ex nihilo. What matters is not that one begins with nothing, but that one ends with something that has passed through the fire of one’s own discernment. Great art, like good scholarship, steals only what it is willing to remake. That act of remaking is where the ethical trace resides.
This is why academic integrity, in its deepest sense, is not a matter of detection but of being-there. It requires a kind of inward acknowledgment: that this thought, however shaped by others, has passed through me, the writer—that I have tried, however imperfectly, to carry it with care. That kind of acknowledgment is not always visible. But it is what makes the writing ours. And so we must continue to teach not only how to write, but how to feel responsible for what one writes. This cannot be achieved only through honor codes or detection software. It must be cultivated in practice: in the revision that takes longer than expected, in the choice to rewrite a too-perfect paragraph, in the quiet moment where a student sees their own thinking, fragile and tentative, begin to take form. The ethical question, then, is not “Did you write this alone?” but “Were you present in what was written?” In a time when words can be produced without thought, this presence is what marks a scholar: the slow willingness to stand by a sentence, and to be questioned in its company.
On Agency
It is tempting to assume that the central problem with hybrid writing is who wrote what. But the more enduring question is who decided? At what point, and under whose hand, did the claim become a commitment? Whose judgment shaped the text, even if the words were not typed from scratch? In this sense, authorship is only the outer skin. Beneath it lies agency: the capacity, and willingness, to bear responsibility for what is said.
Agency is not the same as autonomy. We all write within constraints—of genre, of discipline, of language itself. Nor is agency a matter of doing everything by hand. One can be fully agentic while using reference managers, search engines, even auto-complete. What marks agency is not the absence of support, but the presence of discernment: the felt moment when a writer chooses to keep a sentence, to delete it, to shift the weight of an argument because something does not sit right. These are not technical edits. They are ethical acts. They locate the writer inside the work.
With generative tools, these moments can be obscured. A sentence appears, already structured. An argument arrives, already plausible. There is no friction, no stumble. The writer’s task becomes one of review, not composition. But review can be passive. It can slide into endorsement. And endorsement, unlike authorship, requires very little presence.
Still, the real risk is not that machines will replace our agency. It is that we will surrender it willingly—because it is easier, because it is faster, because there is no longer time or expectation to do otherwise. The culture around us already encourages this. In law, in journalism, in the academy, productivity often outruns reflection. The pressure is to produce, not to dwell. And yet dwelling—revising, hesitating, turning back—is where agency most often lives. Agency in writing is about being-there, living through the pain of creation, and assuming responsibility for the output.
There is a deeper problem here, one that exceeds the boundaries of academic life. Certain decisions in human society are structured around the belief that humans must be the only entities allowed to make them. Not because we are always better, but because we are answerable. To terminate life, to declare war, to adjudicate guilt, to allocate resources—these are not only technical problems. They are moral actions. We insist they remain human precisely so that someone can be held accountable. So that the decision is not just made, but owned. Accountability is the condition under which moral life is intelligible. In courts, in classrooms, in governance, we ask not just what was done, but who decided. The entire law of liability (civil, criminal, etc.) is premised on sharpening the dividing lines between categories. To blur that distinction—between tool and decider, proxy and agent, public good and private right—is to risk hollowing out the very structures of judgment upon which democracy, law, and even pedagogy depend.
Writing, for all its modesty, participates in this same architecture. When we write, we do more than produce language. We produce a public self—a voice that can be contested, interrogated, defended. This is not only performance. It is a civic act. To write is to enter a shared world with claims one must be willing to stand beside. The text becomes a site where thought is made accountable. And without that accountability, scholarship begins to drift—useful, perhaps, but hollow. Fluent, but unclaimed.
This is where the liberal arts tradition shines. Not as nostalgia, not as elitism, but as infrastructure. It is in these classrooms, in these long unhurried seminars, that students still encounter the feeling of being asked: what do you think—really think—and why? They are given space not only to find answers but to inhabit decisions. They learn to press pause before endorsing the plausible. They come to understand that choosing an argument is not just an intellectual exercise but a form of exposure.
Arendt argued, and most of us agreed, that action is what distinguishes human life: not just the capacity to think or to speak, but the willingness to appear, to take a position in the world, visible and accountable. Writing is one of the few remaining forms of such appearance. We come forward, even through our commas and footnotes. And in doing so, we become available to the judgment of others. This vulnerability is not incidental. It is what grants the text its legitimacy. Not that it is right, but that someone has risked speaking it.
This ethical architecture has long been recognized, if not always articulated. Our literature warns us. In 2001: A Space Odyssey, HAL 9000 does not malfunction randomly—it obeys its programming too well. Black Mirror and other serialized renditions of technological dystopia rehash the trope that when something goes wrong, there is no one left to blame. The voice that sounds so human cannot answer for its choices in human terms. And that absence—of responsibility, of narrative, of remorse—is the source of the viewer’s terror. The same logic drives Asimov’s robotic laws, built not to enhance capability, but to contain it within moral oversight. Machines may act. But only humans are held accountable. That, ultimately, is what is at stake. Not whether AI will write better, or faster, or more fluently—but whether it will become easier to forget that someone must still be held accountable for what is said. If we allow that ground to erode, we do not only alter the practice of writing. We diminish the human role in meaning-making itself.
This is why agency must be defended—not because it flatters us, but because it grounds us. To be human is not to write alone, but to assume responsibility for what is written. It is the part of scholarship, and of society, that should never be delegated. Because when the text speaks, it must still be possible to ask: who stands behind this? And the answer must still be: I do.
The Weight of the Signature
If the university still matters, it is because it remains one of the few places where we ask not only what can be written, but what should be. This question does not resist technology. It simply refuses to be answered by it. And it reminds us that writing—at least in its scholarly form—is not only an output but a reckoning. A way of being accountable in language.
To write is to declare a presence, to appear in the form of an argument. Hybrid tools may support that appearance, even scaffold it. But they cannot take its place. Because to appear in writing is not merely to produce text. It is to say: this is where I stand, this is what I think, this is how I came to know it. We can no longer afford to treat writing as a neutral delivery system. It is too ethically saturated, too socially encoded, too symbolically powerful. Whether we speak in our own name or through the form of a scholarly genre, the act of writing still carries with it the possibility of being questioned. And that possibility—of standing behind one’s claims, of answering when called—is what makes writing not just intellectual but human.
This, in the end, is what hybrid writing cannot automate: the quiet willingness to be held. Not just for what has been said, but for the life of thought that led there. That is the burden of authorship, the labor of ethics, and the dignity of agency. We may write with many tools. But we are still the ones who sign.
[1] I should note that the “Traditionally, comma” routine is not mine by origin. I borrowed it—openly, and with permission—from my teacher, David Kennedy, many years ago. I always credit him when I use it in class. Still, students often attribute it to me. That, too, is instructive: a small example of how authorship accrues not simply through invention, but through repetition, context, and voice.
Artificial Intelligence Offers Learning Opportunities in the Global South
With USAID’s organization and funding in a quagmire, policymakers in the Global South must find creative and innovative ways to continue investing in their socioeconomic priorities. This is an urgent issue. According to UNESCO’s latest data, there is a concerning reversal in global education access, with 250 million children and adolescents now excluded from formal schooling—a dramatic increase of 6 million since 2021. This regression jeopardizes progress toward Sustainable Development Goal 4, which aims to ensure inclusive and equitable quality education for all by 2030.
In the Middle East and North Africa (MENA) region specifically, profound inequalities tragically define the education landscape. While a few students have access to well-resourced classrooms with personalized tutoring and state-of-the-art facilities, most schoolgoers face the stark reality of overcrowded classrooms, overextended teachers, and a chronic shortage of learning materials. These disparities constrain the human potential of the region and perpetuate the cycle of disadvantage.
USAID’s funding cuts highlight a hard truth: external aid cannot provide long-term solutions for underserved communities in the MENA region. But now, with advances in artificial intelligence and machine learning models, there is a possibility of a powerful counter narrative.
This is demonstrated by a recent project in Nigeria that shows how AI can foster self-sufficiency among educators. The program gives instructors the opportunity to use offline AI tools and puts local stakeholders in control of digital infrastructure—initiatives that avoid reliance on unstable external donor funding. Its success is built on adapting existing resources to local contexts, offering a model that can be applied to other areas.
Instead of reinforcing dependency, this approach builds long-term independence, allowing education communities to continuously improve their solutions. The outcome is not temporary relief but lasting change, driven by communities themselves to ensure equitable education access. By harnessing these machine learning technologies, countries in the MENA region have a transformative opportunity to democratize education and bridge the persistent divide.
AI Adaptations for Local Learning
This pilot program, implemented in collaboration with local schools and community organizations, involved 759 students in Nigeria between June and July 2024. It utilized Microsoft Copilot and adaptive learning platforms to provide tailored content based on each student’s progress. Teachers were trained to integrate these tools into their after school sessions, ensuring that the technology complemented, rather than replaced, traditional instruction.
Students accessed the platform through tablets provided by the program, which were preloaded with content to minimize reliance on internet connectivity—a crucial adaptation for low-resource settings. During six weeks of after school sessions, students used written prompts to interact with the AI, focusing on English grammar, vocabulary, and critical thinking. Teachers supervised the students’ platform engagement, while a digital monitoring system tracked attendance and adapted the program to local challenges like seasonal flooding. Designed for low-tech integration, the program prioritized skill building by leveraging existing devices and internet infrastructure.
The results are astounding. Test scores jumped by 0.31 standard deviations on an assessment that included English proficiency and digital skills—a gain equivalent to two years of typical learning progress. This outcome is not just about numbers—it is about possibilities. If a program like this can transform learning outcomes in one community, it isn’t a stretch of the imagination to see its potential for other parts of the region.
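For readers unfamiliar with effect sizes, the sketch below shows how a figure like 0.31 standard deviations is typically computed and how the ‘two years of learning’ translation works; apart from the 0.31 quoted above, every number is an illustrative assumption rather than data from the Nigerian pilot.

```python
# Minimal sketch of a standardized effect size (Cohen's d) and a rough
# translation into "years of typical learning". Apart from the 0.31 figure
# quoted in the text, all numbers are illustrative assumptions, not data
# from the Nigerian pilot.
from statistics import mean, stdev

# Hypothetical end-of-program test scores (percent correct)
treatment = [62, 70, 58, 75, 66, 71, 64, 69]  # AI-assisted after-school group
control = [61, 67, 58, 74, 65, 69, 63, 66]    # comparison group

# Pooled standard deviation (equal-sized groups), then Cohen's d
pooled_sd = ((stdev(treatment) ** 2 + stdev(control) ** 2) / 2) ** 0.5
d = (mean(treatment) - mean(control)) / pooled_sd
print(f"Effect size d = {d:.2f}")  # roughly 0.3 with these made-up scores

# Translating an effect size into "years of learning" needs a benchmark for
# typical annual progress; the "two years" claim implies roughly 0.155 SD per
# year, and such benchmarks vary widely by grade level and context.
assumed_annual_gain_sd = 0.155
print(f"0.31 SD / {assumed_annual_gain_sd} SD per year ≈ "
      f"{0.31 / assumed_annual_gain_sd:.1f} years of typical progress")
```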
One of the core strengths of this pilot program is the emphasis on personalized learning. The AI tutors delivered real-time feedback, identified learning gaps, and adapted the pace and content of instruction to meet each student’s unique needs. Such individualized attention is often unattainable in traditional classrooms, especially in low-resource settings with high student-to-teacher ratios.
It is particularly inspiring to observe how the program addressed gender disparities. Female students, who initially lagged their male peers, showed the most significant improvement. On average, female students improved at nearly double the rate of their male counterparts, narrowing the performance gap and highlighting the role of AI in promoting educational equity.
In future use, this personalized AI approach could boost confidence among female students, encourage deeper academic engagement, and help them overcome societal educational obstacles.
For boys and girls alike, the program’s outcomes extended beyond academic performance. Surveys conducted among participants revealed increased levels of engagement and confidence in their abilities. Students also reported feeling more motivated to attend school and participate in lessons, while teachers observed a marked improvement in classroom dynamics and student-teacher interactions.
The positive impact of this project on student engagement, motivation, confidence, and parent satisfaction should serve as a lesson for policymakers on prioritizing the scaling of these programs.
Complementing Traditional Structures
Far from replacing teachers, AI has the potential to empower them. By automating routine administrative tasks such as grading and attendance tracking, AI frees educators to focus on what they do best: teaching. Instructors in the pilot program used AI tools to gain insights into their students’ performance, enabling them to tailor lessons more effectively. For example, if several students struggled with fractions, the teacher could adjust the curriculum accordingly. Furthermore, AI can serve as a professional development tool for educators, offering real time feedback on teaching methods and access to a wealth of instructional resources. When teachers are equipped with the right tools and training, the possibilities are endless.
In most resource-constrained settings, high-quality educational materials remain out of reach. AI-powered platforms can change this by providing students with a vast digital library of resources, from interactive simulations to virtual field trips, accessible anytime, anywhere. By democratizing access to knowledge, machine learning models can help level the playing field for learners in the Global South. The success of this pilot offers a roadmap for replicating the model in other resource-constrained settings. At the same time, scaling such initiatives requires addressing several critical challenges. Reliable access to technology and the internet is a prerequisite for these programs. State agencies, non-governmental organizations, and private sector partners must invest in infrastructure and digital literacy programs to ensure equitable access.
Educators need comprehensive training to integrate these technologies effectively into their teaching. Professional development programs must emphasize digital literacy, pedagogical innovation, and the ethical use of AI. Issues like data privacy, algorithmic bias, and potential misuse of technology must be proactively addressed. Robust ethical frameworks are also essential to ensure that AI promotes inclusion and equity rather than exacerbating existing disparities. The goal is not to merely adopt new technologies but to fundamentally reimagine education as it is practiced today. By leveraging AI thoughtfully and inclusively, we can turn this vision into reality — building a future where education is not a privilege but a right for all.
Beyond Automation: Managing the Integration of AI into Human Civilization
Artificial Intelligence (AI) has swiftly emerged as one of the most transformative forces of the 21st century. Once confined to science fiction, AI is now interwoven with daily human processes, from automated language translation to algorithmic decision-making in healthcare, education, and public policy. As with past revolutionary technologies, integrating AI into society presents profound opportunities—and equally profound risks. Understanding how to manage this integration involves examining historical precedents, assessing current uses, recognizing limitations, and preparing for a future that may include artificial general intelligence (AGI).
This essay explores how AI is being incorporated into human systems like education, scientific research, and international governance, while drawing comparisons with other historic technologies that greatly impacted human processes. It will analyze both the benefits and drawbacks of widespread AI integration, with a particular focus on overdependence in education and social interactions, and explore strategies to mitigate such risks. Finally, it will consider the current technical limitations of AI and the potential societal consequences of more advanced iterations.
Comparing AI to Past Technological Revolutions
Two major technological developments that AI can be compared to are the invention of the Gutenberg press in the 15th century and the steam engine in the 18th century. The Gutenberg press revolutionized communication and education by making printed materials widely available. It dramatically accelerated the spread of knowledge, broke the monopoly of elites on information, and laid the foundation for widespread literacy and the Reformation. In this sense, the press democratized access to human knowledge and reshaped institutions.
Likewise, the steam engine catalyzed the Industrial Revolution in the 18th and 19th centuries. It mechanized labor, increased productivity, and transformed economies from agrarian to industrial. While it brought progress, it also exacerbated social dislocation, child labor, and environmental degradation.
In the same way that these technologies revolutionized the societies of their time, AI has also become the most impactful technology of its time. AI is unique in certain ways, however, including:
Pace of Development: It took several centuries for the Gutenberg Press to transform into the modern, digitized printing press we use today and for the steam engine to transition into our modern railways. AI, meanwhile, evolves at an unprecedented speed due to exponential increases in computing power and data availability.
Scope: Unlike the printing press or steam engine, which affected specific industries or sectors, AI pervades virtually every domain—from logistics to diplomacy to psychological well-being.
Intelligence Component: Previous technologies augmented physical or mechanical processes; AI mimics or replaces aspects of human cognition, raising ethical and philosophical questions about autonomy and control.
In essence, while AI follows the pattern of historically disruptive technology in some ways, its unique attributes make it difficult to use history to predict how AI will shape the future.
Integrating AI into Human Processes
The use of AI is spreading through every facet of human productivity, offering transformative impacts.
Education. AI is increasingly used in personalized learning platforms like Duolingo and Khan Academy’s AI Tutor, adapting content to individual learners’ paces and styles. It can identify gaps in understanding and automate assessments, thereby enhancing efficiency and scalability. It is also being used in traditional schools; one project in Nigeria has successfully implemented AI instruction alongside human teachers and significantly increased students’ test scores compared to their projected progress without AI assistance. When used responsibly and with an understanding of local needs, resources, and limitations, AI has a great potential to improve learning across the globe.
Scientific Research. In research, AI accelerates discovery by analyzing massive datasets, modeling protein structures (as demonstrated by DeepMind’s AlphaFold), and automating literature reviews. It shortens the time from hypothesis to result, allowing scientists to focus on innovation.
Social and Economic Development. AI-driven analytics support economic planning by identifying trends, consumer behaviors, and infrastructure needs. Tools like satellite imagery analyzed by AI help track urbanization or agricultural productivity in real time.
International Trade. AI enhances global trade logistics by optimizing shipping routes, predicting supply chain disruptions, and automating customs processing. Platforms like TradeLens, powered by IBM and Maersk, exemplify AI’s role in reducing trade friction.
Debt Relief. Debt sustainability analysis, once manual, is now augmented with AI models that simulate economic trajectories and debt distress scenarios. This helps multilateral institutions design more informed and responsive debt-relief programs.
Humanitarian Aid. AI can predict natural disasters, optimize relief logistics, and manage refugee registration using biometric and geospatial tools. For example, the UN World Food Programme uses AI to forecast food insecurity and allocate resources efficiently.
International Law. AI supports legal analysis by scanning international treaties, case law, and legislation to assist policymakers and legal practitioners. Tools like ROSS Intelligence help in identifying relevant case precedents, streamlining international legal research.
Environmental Policy. AI plays a crucial role in climate modeling, emissions tracking, and designing efficient energy grids. Google’s AI-powered Project Sunroof helps homeowners assess solar potential, while IBM’s Green Horizon project forecasts pollution levels.
Benefits of AI
The integration of AI offers a multitude of benefits across sectors:
Efficiency and Scalability: AI automates routine tasks, freeing human effort for higher-order thinking.
Personalization: In education and healthcare, AI tailors interventions to individual needs.
Predictive Power: In science, economics, and disaster response, AI identifies trends and patterns far beyond human cognitive capacity.
Data-Driven Governance: Governments can make better decisions using real-time data analytics powered by AI.
Equity and Access: In some cases, AI can extend services (e.g., education, legal aid) to remote or underserved populations through automated platforms.
These advantages demonstrate how AI can serve as a powerful partner in tackling complex global challenges when thoughtfully designed and deployed.
Cons of AI and the Dangers of Overdependence
Despite its promise, AI integration comes with significant risks:
Bias and Inequality: AI systems can perpetuate or amplify existing biases in data, leading to unfair outcomes in education, hiring, policing, lending, and others.
Surveillance and Privacy: Governments and corporations may misuse AI for intrusive monitoring, undermining civil liberties (see Amnesty International’s concerns on facial recognition).
Job Displacement: Automation threatens a wide range of jobs, particularly in logistics, customer service, and manufacturing.
Overdependence in Education and Social Interaction: AI’s growing role in education, especially during the COVID-19 pandemic, has raised concerns about excessive reliance on algorithmic instruction. Students may become passive consumers of pre-packaged content rather than active critical thinkers. Human teachers offer empathy, moral judgment, and mentorship—qualities AI cannot replicate. In social contexts, overreliance on AI-driven platforms (e.g., social media algorithms, AI chat companions) may hinder interpersonal skills, reduce meaningful dialogue, and foster echo chambers. The proliferation of AI-generated content risks blurring the line between genuine human expression and synthetic communication, potentially eroding trust.
Mitigating Overdependence on AI
Managing AI integration responsibly requires a multi-pronged approach:
Human-in-the-Loop Systems: Maintain human oversight in high-stakes domains like education, healthcare, and criminal justice to ensure ethical judgment and accountability.
Ethical Design: Incorporate fairness, explainability, and transparency in AI systems. Initiatives like the EU’s AI Act aim to regulate high-risk AI applications.
Digital Literacy: Equip individuals, especially students, with the skills to critically assess AI tools. Education should emphasize ethics, critical thinking, and human creativity.
Public Policy and Regulation: Governments must create adaptive policies that balance innovation with protection. Independent audits and regulatory sandboxes can help test AI systems before wide deployment.
Promoting Human-Centric AI: Encourage development of AI systems that augment rather than replace human capabilities, preserving human dignity and purpose.
Current Limitations of AI
Today’s AI systems are powerful but fundamentally limited:
Narrow Intelligence: Current AI excels at specific tasks but lacks general reasoning abilities.
Data Dependence: AI is only as good as the data it’s trained on. Poor data leads to flawed outputs.
Lack of Common Sense: AI struggles with context, nuance, and ambiguity.
Black Box Problem: Many AI models, particularly deep learning networks, are opaque, making it difficult to understand how they arrive at decisions.
Physical Limitations: While virtual AIs are widespread, embodied AI (like robotics) still struggles with dexterity, perception, and adaptability in real-world settings.
These limitations mean that AI, while powerful, cannot yet serve as a substitute for human reasoning or judgment in complex, unpredictable environments.
Future Developments and the Road to AGI
Artificial General Intelligence (AGI)—machines with human-level cognitive abilities—remains theoretical but increasingly plausible. Should AGI be realized, the implications for human processes would be immense:
Education: AGI could serve as a near-perfect tutor, adapting not just to cognitive needs but emotional states. However, it may also centralize control over knowledge, raising questions about epistemic authority.
Employment: AGI could replace high-level cognitive jobs, not just manual labor, leading to potential mass unemployment or necessitating a universal basic income.
Governance: AGI could help simulate policy outcomes with incredible precision, but may also challenge democratic deliberation if it becomes a de facto decision-maker.
Law and Ethics: Who would be liable for decisions made by AGI? Should AGIs have rights? These questions indicate a need for entirely new legal and moral frameworks.
International Power Dynamics: Nations that lead in AGI development may gain disproportionate power, potentially destabilizing global relations and increasing the risk of AI-driven arms races.
Ultimately, while AGI could solve many global problems, it could also magnify risks unless governed with unprecedented international cooperation and philosophical foresight.
Pathways Forward
Integrating AI into daily human processes is not merely a technical challenge—it is a societal transformation. Like the printing press and steam engine, AI has the potential to expand human capabilities and democratize access to knowledge and services. However, its cognitive nature and omnipresence make it uniquely capable of reshaping human behavior, institutions, and values.
To manage this integration responsibly, we must recognize both the opportunities and the dangers. We must design systems that are ethical, inclusive, and transparent; invest in education and policy to mitigate overdependence; and prepare for future developments with humility and global solidarity. AI should serve humanity, not the other way around. With care, we can ensure that this technology becomes a tool for empowerment, not alienation.
Editor’s Note
In this issue, we accepted that the only constant is change and, in keeping with that truism, we decided to evolve with the times.
Quite the innocent experiment: we typed “The System That Failed” into Google. What came back in response to our query was an AI overview and a suggestion that we write a science fiction short story with that title.
The plot? “To explore a future where advanced AI systems manage global resources and infrastructure, leading to a seemingly perfect world, but with unforeseen consequences,” the Gemini AI assistant wrote.
This suggestion isn’t … fiction. Our human society cannot possibly contemplate how AI will be integrated into human interactions and productivity three months from now, let alone three, thirty, or three hundred years into the future.
But there have been experiments in prescience by hundreds of acclaimed science fiction writers in the past, such as E.M. Forster who in 1909 published “The Machine Stops”, a dystopian tragedy set in 2081 where a dependent and subterranean human society has deified a ‘machine’ that sees to their every need. As the system begins to malfunction, the humans are lost, like children without guidance and paralyzed by fear of doing things for themselves. Unable to detach from their dependence, they quietly suffocate into extinction.
It is this dependence that we need to be wary of.
How do we manage integrating AI into the human machine much in the same way that the Gutenberg press or steam engine revolutionized the way we think and feel? That’s what we posed to AI itself in the machine-generated essay “Beyond Automation: Managing the Integration of AI into Human Civilization”.
Despite its promise, AI integration comes with significant risks, the essay said, adding that “AI systems can perpetuate or amplify existing biases in data, leading to unfair outcomes in education, hiring, policing, and lending, and others”.
True enough, the AI model fabricated three fake links, directing the reader to nonexistent pages on websites like the Smithsonian, History.com, and the World Bank. While a faulty reference for the origins of the Gutenberg press might seem inconsequential, the issue of AI models generating false sources has already caused real-world ramifications, as seen in the recent fines levied against U.S. lawyers for using ChatGPT-generated research that produced fake case law.
In his poignant article on legal knowledge in the AI age, international jurist and Department of Law Chair at AUC Thomas Skouteris writes that definitions of licensing and credentialing are likely to change rapidly.
Furthermore, he writes of another danger: dependency. “If entire workflows are routinized through AI systems—so much so that new professionals are trained to trust the output unless otherwise directed—then even the capacity for oversight may atrophy,” Skouteris writes.
Education is not the only area impacted by AI; geopolitical dominance has become a key battleground as these machines continue to rapidly develop.
“Artificial general intelligence (AGI) or artificial superintelligence (ASI) have gone from theoretical concepts to apparently achievable outcomes within a much shorter timeframe than experts believed just five years ago,” writes senior vice president for China and Technology Policy Lead at DGA ASG Paul Triolo in this month’s essay, “A Costly Illusion of Control: No Winners, Many Losers in U.S.-China AI Race”.
He sees a new battleground over dominance of global AI development and control, chiefly between China and the United States.
“The aggressive U.S. approach to China risks triggering conflict between the two nations while upending any progress to a global framework for AI safety and security,” he writes.
The AI battleground has become literal, in some cases. One of the most ominous—and deadly—uses of AI is in warfare, already battle-tested in Gaza. Anwar Mhajne, associate professor of political science at Stonehill College, explores this in “Gaza: Israel’s AI Human Laboratory”. Mhajne explains how Israel’s AI identification program, Lavender, uses “concerningly broad” criteria for identifying potential Hamas operatives, “assuming that being a young male, living in specific areas of Gaza, or exhibiting particular communication behaviors is enough to justify arrest and targeting with weapons”.
Alongside its use in warfare, UC Berkeley Professor Hany Farid warns that AI’s ability to falsify reality can and will threaten democracy.
“If we have learned anything from the past two decades of the technology revolution (and the disastrous outcomes regarding invasions of privacy and toxic social media), it is that things will not end well if we ignore or downplay the malicious uses of generative AI and deepfakes,” he writes.
And that’s just the point we’re trying to make in this issue with engaging content from our contributors: what is AI teaching us about ourselves?
This issue also marks a sad moment as we bid farewell to Cairo Review’s co-managing editor Karim Haggag, who leaves AUC to head the Stockholm International Peace Research Institute. Our loss is their gain, but we see much collaboration in our future. Adieu, Karim, you’ll be sorely missed.
Cairo Review Co-Managing Editors,
Karim Haggag
Firas Al-Atraqchi
Tehran’s Red Lines, Trump’s Maximum Pressure, and Regime Survival
More than a month into Iran–U.S. negotiations over the former’s nuclear program, the talks between the two longtime adversaries appear to have reached a critical stage, as the goals and red lines of both come into greater focus. The most recent fifth round of talks on May 23 at the Omani Embassy in Rome revealed that the Trump administration’s latest demand for “zero [uranium] enrichment” collides with the Islamic Republic’s proclaimed “red line” to keep its domestic nuclear infrastructure intact. However, what is not clear is the impact a new nuclear agreement would have on Iran’s ballistic missiles and its regional policy.
Furthermore, there is no guarantee that a potential deal between the U.S. and Iran that falls short of addressing Israeli interests—especially in view of Tehran’s missile and nuclear challenges—will lead to sustainable security stabilization in the region. In the interim, the stakes for the Iranian regime could not be greater, as it faces not only massive economic and military pressures from the outside, but also an unprecedented domestic economic crisis that could trigger another wave of popular uprisings against it. Case in point: a nationwide truckers’ strike has been going on since May 19. In other words, the Islamic Republic fears for its viability, with a perfect storm ever darkening on the horizon, especially if a deal with Washington fails to materialize.
Regime Red Lines and Sources of Power
For the Islamic Republic, there are several red lines which are informing its negotiation stance or diplomatic dealings with the United States. In fact, these red lines underpin the regime’s sources of power (and its projection) and should, therefore, not be confused with red lines regarding the national interests and sovereignty of the Iranian people. As such, the regime in Iran has little maneuverability—except for cosmetic or temporary concessions—to cross any of the following red lines:
(1) Keeping the nuclear program and its infrastructure, including domestic enrichment of uranium, in place. In regime jargon, this is often referred to as Iran’s “inalienable right” to a peaceful nuclear program as stipulated by the Nuclear Non-Proliferation Treaty (NPT). The goal here is to allow Tehran to reactivate its “nuclear escalation” strategy, whenever the need arises, for leverage. Tehran has used an expanding nuclear program and the concomitant threat of a nuclear-armed Iran to extract economic or geopolitical concessions in dealings with the West. Over the past two decades, the Islamic Republic has repeatedly pursued “nuclear escalation” with success, time and again forcing the West to the negotiating table, and securing sanctions relief or Western abandonment of a robust Iran policy.
Against this backdrop, at the present time, as an Iranian official even openly admitted, Tehran wants to assure its ability to re-engage in “nuclear escalation” if need be, particularly in case Washington (under the present or next administration) reneges on its deal obligations. In recent years, Tehran has dramatically expanded its nuclear program, assembling enough fissile material to build a few nuclear bombs while significantly reducing the International Atomic Energy Agency’s inspection regime. This situation led IAEA Director General Rafael Grossi to note that his Agency can no longer guarantee the peaceful nature of Iran’s nuclear program. Given Tehran’s increased military and regional-power vulnerabilities and weaknesses since 2024, some in the ruling establishment believe turning the Islamic Republic into a nuclear-armed state would constitute the last remaining option to safeguard regime survival (although such hopes may be misplaced). However, if Iran today further pursues “nuclear escalation”, it may risk inviting Israeli and/or U.S. military action against its nuclear infrastructure.
Given these external pressures, today Tehran openly declares its intention to renounce nuclear weapons, as a basis for a nuclear deal. This is an ironic position for a state that has insisted that it is only pursuing a civilian nuclear program and that nuclear weapons are forbidden according to its Islamic credentials.
(2) Keeping its missile program and infrastructure intact. Since the 2015 nuclear deal (the so-called Joint Comprehensive Plan of Action, or JCPOA), Iran has considerably expanded its missile and drone programs. In 2024, it demonstrated its willingness—and ability—to launch 500 missiles in its two first-ever direct assaults against Israel. While the missiles were overwhelmingly intercepted by Israel, the United States, and some of the latter’s regional Arab partners, Tel Aviv fears that a next wave could be more devastating, as a large number of missiles launched at once could overwhelm its air-defense systems. More recently, on the day of U.S. President Donald Trump’s May 13 visit to Riyadh, the commander-in-chief of the Islamic Revolutionary Guards Corps (IRGC), Hossein Salami, threatened Israel with the launch of 600 missiles, the bulk of which, in his view, could not be intercepted. Israeli experts assess that Iran has several hundred missiles left. In fact, Iran’s missile arsenal includes cruise as well as ballistic missiles. The latter, with a range that extends even to parts of Europe, would reach Israel within minutes, while constituting the preferred delivery system for nuclear weapons.
Against this backdrop, the Islamic Republic views its expanding missile program not only as its chief project of prestige (potentially now even replacing the stature attributed to the nuclear program) but also as a key factor to deter, intimidate or attack external foes, thereby serving as one of the key guarantees for its survival.
(3) No limitations on Iranian support for the “Axis of Resistance” throughout the Middle East. While the Iran-led “Axis” suffered a historic defeat last year—in light of the decimation of Hezbollah and Hamas, and the collapse of the Assad regime in Syria (which served as a land bridge for Iranian weapons to the Levant)—Tehran still hopes to revitalize it. At the present stage, the only remnants of that “Axis” are pro-Iranian Shia militias in Iraq (the Popular Mobilization Forces, or PMF) and the Houthis in Yemen. As Iran tries to re-establish the strength of its regional-power network (as it currently is attempting to do in post-Assad Syria), it will not accept U.S. constraints and threats of military action enshrined in a potential deal. After all, for decades, the “Axis of Resistance” served as a primary means for Tehran’s power projection and leverage vis-à-vis the West.
Now, as Iran’s regional power has become a shadow of its former self, Tehran views its missile program as its single most important remaining card, to be used as leverage in negotiations with the West. This logic was summarized by Foreign Minister Abbas Araghchi on May 15 in statements to reporters in Tehran: “In fact, it is our defensive capabilities—the missiles of the Islamic Republic—that give strength and power to the negotiator to sit at the table, and it is these that cause the other side to give up and lose hope in a military attack.”
U.S. Hardening Toward ‘Zero Enrichment’
Under Trump II, the goals of the administration’s Iran diplomacy and policy have been inconsistent and contradictory, with one camp (including Special Envoy for the Middle East Steve Witkoff and Vice President J. D. Vance) favoring a nuclear deal only and another (including former National Security Adviser Mike Waltz and his successor as well as Secretary of State Marco Rubio) preferring a more comprehensive one that also addresses Iran’s ballistic-missile program and potentially its support for the “Axis”. In fact, the latter position is not only favored by so-called Iran hawks from the Republican Party but also—quite notably—by former Obama administration Secretary of State and JCPOA negotiator John Kerry. Both camps seem to rally behind the Trump II administration’s foreign-policy motto of “peace through strength”, a credo consistently stressed by Witkoff.
Trump himself has made clear during the campaign for re-election and since returning to the White House that his single most important policy goal toward Tehran is to avoid a nuclear-armed Iran. This narrow focus on the nuclear issue has been particularly on display during the first two rounds of Iran–U.S. talks (mediated by Oman and taking place in Muscat on April 12 and at the Omani Embassy in Rome on April 19), and has echoed Tehran’s preference. In fact, for the Islamic Republic, the first aim in diplomacy with the U.S. is to keep the focus on the nuclear issue so as to avoid discussing its missile program and its regional policies. In that light, those two initial rounds of talks have from the Iranian perspective gone according to plan.
However, moving into the third round of talks in Muscat on April 26, the gap between the two sides came into clearer focus. According to an Iranian official familiar with the talks, Tehran began to see its missile program emerge as a major sticking point in the negotiations. On the nuclear front, too, major sticking points were reported, but little detail was provided.
The major turn in the U.S. public position to demanding ‘zero enrichment’ occurred between the fourth (on May 11 in Muscat) and fifth rounds (May 23 in Rome)—much to Tehran’s consternation. In a May 18 interview, U.S. negotiator Witkoff insisted that Iran could not be allowed “even one percent of an enrichment capability”; maintaining a domestic enrichment capability would allow for weaponization, he argued. The same argument was laid down two days later by Rubio before the Senate Foreign Relations Committee when he insisted that Iran could, like other states, import enriched uranium for its civilian uses. In other words, the U.S. position appears to have overcome the above-sketched, long-assumed intra-administration Iran factionalism, converging on the central demand of ‘zero enrichment’ with no domestic enrichment capabilities left for Tehran.
From the ‘Libyan model’ to ‘Zero Enrichment’
Even setting the figure of Prime Minister Benjamin Netanyahu aside, Israel has long made clear that it demands the ‘Libyan model’ be applied to the Islamic Republic—the complete dismantlement of Iran’s nuclear and missile programs, with little to no related domestic infrastructure left. This would, in fact, amount to nothing short of a wide-ranging military capitulation of the Iranian regime. If this can’t be reached through diplomacy, Israel maintains that it reserves the right to act militarily against Iran’s nuclear and missile infrastructures, with or without U.S. involvement. In fact, such Israeli goals have been widely shared among the country’s political and military establishment given the experiences with Iran following October 7, 2023, as the Islamic Republic—with its regional “Axis” and missile program—evolved into a veritable, if not existential, threat given the small size of Israeli territory.
In fact, ahead of the fourth, postponed round of talks between the United States and Iran on May 11 in Muscat, Israel’s Leader of the Opposition Yair Lapid announced “five basic principles that are necessary from Israel’s perspective”: 1) “zero” uranium enrichment; 2) removal of all enriched material from Iranian territory; 3) demolition of centrifuges; 4) dismantling the ballistic-missile program; and 5) close and unrestricted verification.
At the same time, since April Netanyahu has signaled Israel’s minimum requirement for an Iran deal: namely, the total dismantlement of Tehran’s nuclear enrichment program. Following Washington’s recent insistence on “zero enrichment”, he has reiterated that Israeli position. As such, U.S. and Israeli demands of Tehran ended up converging. The shifting Israeli position, meanwhile, may indeed have reflected the desire not to risk a rift over Iran given the Trump-Netanyahu estrangement.
Main Points of Contention
The Nuclear Program
In the initial phase of the talks, a simple nuclear deal had been widely regarded as a strong possibility, with Iran having to merely renounce the militarization of its nuclear program in exchange for sanctions relief. Later, however, discord between the two sides over key details of a nuclear deal emerged. In Washington, there has been increasing talk of the necessity that Iran dismantle its nuclear program entirely, including abandoning domestic uranium enrichment efforts and instead importing the element. This new U.S. position now clearly collides with the above-described Iranian red lines, as Tehran insists on maintaining a civilian nuclear infrastructure on its territory. From the Iranian perspective, agreeing to the latest U.S. demands would deprive it of the last remaining bargaining chip it believes it possesses: its ability to restart “nuclear escalation” whenever necessary.
The Missile Program
Short of the Israeli demand for total dismantlement of Iran’s missile program, there are additional pressures from a number of other actors.
The first focuses on rolling back the ballistic-missile program, including missile range. In the past, the Islamic Republic has on occasion suggested a willingness to limit the range—sparing Europe but not necessarily Israel.
The second has to do with the prospect of Iran mounting nuclear warheads on its ballistic missiles. At the time of the third round of talks, several European diplomats suggested that they had advised U.S. negotiators that any comprehensive agreement with Tehran should include restrictions on developing or acquiring the capability to mount a nuclear warhead on a ballistic missile. Meanwhile, Iran maintains that its missile program is non-negotiable and insists that it poses no threat to neighbors. This claim, however, contradicts recent history, as not only the assaults on Israel but even Iran’s missile strikes in January 2024 against its immediate neighbors demonstrated.
The ‘Axis of Resistance’
Given the dramatic unravelling of the ‘Axis of Resistance’ over the past year, the regional security challenge posed by this grouping has been partly—or is in the process of being—defused. Both Hezbollah and the PMF are embroiled in national processes that aim to either disarm them or bring them under control of their respective states’ armed forces, thereby removing them from the Iranian hard-power orbit. Hamas, for example, again finds itself under military pressure from Israel after the breakdown of the ceasefire on May 18 and Israel’s widely condemned assault on Gaza.
The only remaining disruptive force within the ‘Axis’ has been the Houthis in Yemen, who faced escalating weeks-long aerial attacks by the United States at a cost of $1 billion. To everyone’s surprise, on May 6, Trump announced a ceasefire deal with the Houthis to spare U.S. ships from being targeted in the Red Sea, thereby following the Chinese and Russian model. In fact, in March 2024, Beijing and Moscow had reportedly reached agreements with the Yemeni movement pledging not to target their ships. As a result, these narrow, unilateral arrangements have left the Houthi threat to international shipping through the Bab el-Mandeb intact for others.
In the meantime, the military confrontation between Israel and the Houthis escalated. On May 5, the Israeli Air Force launched a series of strikes targeting the Houthis’ main air and seaport facilities in the Red Sea port city of Al Hudaydah, aiming to disrupt the group’s logistics and supply lines. In retaliation, the Houthis on May 8 fired a long-range ballistic missile toward Israel, which struck near Ben Gurion Airport in Tel Aviv, temporarily disrupting international air traffic. It remains unclear, at the time of going to print, how far the Houthi issue will be part of a potential U.S.–Iran deal, given the aforementioned Trump–Houthi arrangement and Tehran’s usual claim of deniability regarding its military support for the Houthis. As recently as May 12, reports indicated that Iran continued to try to ship weapons to the Houthis.
Trump Forced Tehran to the Negotiating Table
The Islamic Republic had initially opposed diplomacy with Trump. After all, throughout 2024, Tehran had suffered a series of major defeats: Israel had destroyed Iranian air defenses, leaving key regime infrastructures extremely vulnerable in the event of another Israeli attack, and its “Axis of Resistance” had lost core pillars (Hezbollah and Syria). Tehran has therefore lost key sources of leverage that it had possessed for decades. Yet, in the face of Trump’s ever-mounting economic and military pressure, Supreme Leader Ali Khamenei walked back his February insistence that negotiations with Washington were “not logical, nor wise, nor honorable”, and soon rescinded his earlier categorical rejection of talks.
Economically, Trump has reimposed “maximum pressure” sanctions with the aim of driving Iranian exports to zero, much as he did following his unilateral withdrawal from the JCPOA in May 2018. On May 1, 2025, amid ongoing U.S.-Iran talks, he tightened the economic cord around the regime’s neck, publicly declaring that any importer of Iranian oil would lose access to U.S. markets. This mimicked a threat he had successfully directed against China in 2018 that forced the latter to completely halt oil imports from Iran in November of that year.
Militarily, prior to the start of Iran talks, Washington began a military build-up in the region, namely in Diego Garcia, a joint U.S.–UK base in the Indian Ocean. This was meant to underpin the military threat that Trump has consistently evoked in case of diplomacy failing, including U.S. air power that can hit Iran’s underground nuclear facilities with ‘bunker-busting’ bombs. In other words, the military pressure Iran faces emanates from both Israel and the U.S., posing a grave threat to its key infrastructures—nuclear, missile, and energy. Such a scenario, the regime is aware, could destabilize it and even endanger its survival.
In short, the combination of these economic and military pressures and threats has forced Khamenei to agree to negotiations with the Trump administration.
Consternation and Calculations in Tehran
But voices of concern and warnings from within the Iranian establishment have steadily increased from one round of negotiations to the next—culminating with the recent U.S. demand of zero enrichment.
This has hardened Khamenei’s and the Foreign Ministry’s defiance, both warning that under such conditions negotiations are doomed to fail. In fact, the May 24 edition of the IRGC-affiliated daily Javan provided insights into the establishment’s read of the talks’ trajectory and what it sees being at stake.
Its title story, one day following the fifth round of talks, stated that Washington’s ‘zero enrichment’ demand had complicated and slowed down negotiations, with a controversy now raging over enrichment—Tehran’s critical “red line”. An editorial titled “Why is enrichment a red line?” laid out the different purposes of the levels of enrichment, from civilian to military. It then did not mince words when arguing that “there is no reason not to use nuclear technology as a deterrent and authority-building element”, with “nuclear technology” being “a component of national power” to confront “enemies and political rivals” from deploying sanctions and other pressure on the Islamic Republic.
Clearly, Iran’s regime wants to retain domestic enrichment capacity not only to restart “nuclear escalation” when deemed necessary but also as an option toward weaponization. In reality, however, Tehran’s nuclear program has consumed a tremendous amount of the country’s human, financial, and political resources, with enrichment being neither economically sound nor an expression of 21st-century technological advancement, as former IAEA advisor and nuclear expert Behrooz Bayat has consistently laid out.
Diplomatic Failure: Risks of Survival
Despite the steadfast tone of defiance, the Islamic Republic cannot afford a collapse of the talks. Walking away from the negotiations risks inviting a new level of economic and military pressures, which would threaten the regime’s survival, the single most important objective of Iran’s leadership. Economically, U.S. pressure on Iranian oil revenues could intensify; even comprehensive UN sanctions risk being reimposed. In a significant departure from its previous quasi-appeasement of Tehran, the E3 (France, Germany, and the UK) has threatened to activate the JCPOA’s “snapback” mechanism, which would automatically reimpose such sanctions, if no deal is forged by August. Militarily, there is also the risk of Israeli and/or U.S. strikes against Iranian nuclear and ballistic-missile sites on the heels of collapsed negotiations. Time is running out for Tehran to forge a deal with Trump if it wants to avoid a nightmare scenario which could unhinge the regime’s hold on power.
Even more crucially, such a scenario may precipitate another popular uprising against the regime—amid what I call a long-term revolutionary process in Iran—triggered by an aggravated economic crisis bordering on chaos and collapse in particular, and by deepening public discontent in general.
A foretaste of this can already be witnessed: a nationwide strike by truck drivers in protest over escalating economic pressures and government neglect has gripped the country, threatening Iran’s supply chains. Tellingly, the truckers’ strike started on May 19 in Bandar Abbas, the major city near Rajaee Port, where on April 26 a huge explosion killed several drivers and left the injured frustrated over the lack of government support. Instead of providing that support, the establishment has tried to contain the strikes, mainly through suppression (including arrests).
In fact, the port explosion amounted to an economic shock, occurring amid the most severe post-revolution economic crisis. The detonation of 10,000 containers at Iran’s ‘golden gateway’, the critically important international-trade hub of Rajaee Port near Bandar Abbas along the Persian Gulf and near the Strait of Hormuz, occurred on the day the third round of U.S.–Iran talks began.
The port explosion has a chilling link to the missile program: The containers held chemicals imported from China earlier this year to be used for the production of solid-fueled ballistic missiles, and were part of Tehran’s critical strategy to rearm after Israel had destroyed Iranian missile-production facilities in its October 2024 aerial assault.
In fact, according to Iran’s Ports and Maritime Organization, the port handles a whopping 85–90% of the country’s container trade and more than half of its total trade. To downplay the true extent of the incident, some Iranian officials falsely claimed that merely 15% of the nation’s container trade had gone through that port. In fact, the port explosion is not only an economic shock whose macro- and socio-economic ramifications will unfold in the near to medium term, but also raises crucial security-related questions: Why were the containers carrying military chemicals not declared as such, and are such security liabilities the norm at that port and potentially at others? Was the explosion the result of sabotage? These security dimensions also cast a shadow on potential post-deal investments in Iran, given the central role Rajaee Port plays in this regard.
Now, amid the ever-widening truckers’ strike, the Supreme National Security Council (SNSC)—an élite body tasked with regime security and enjoying overriding powers—has intervened for the second time since September 2024 to block the implementation of December 2023 legislation tightening the hijab laws. The timing suggests the regime fears that its security forces may be overstretched if faced with both strikes and a renewed upheaval à la 2022’s “Woman, Life, Freedom” movement, which had marked the culmination, to date, of the still-raging revolutionary process.
Tehran’s Counter-strategy
The Islamic Republic’s concerns about the negotiations are also reflected in its iron will to control the domestic narrative surrounding them. As such, officials and state media insisted that Witkoff’s premature departure during the fifth round in Rome was due to his alleged need to “catch a flight”. In reality, Trump’s Middle East Special Envoy is known to use his private jet for his government duties, translating into a relatively flexible flight schedule. This regime narrative demonstrates the fear and dangers associated with admitting the failure of that round. After all, the SNSC continues to impose a media ban on reporting about details of the talks with Washington, especially forbidding Iranian domestic media from referring to foreign-media coverage.
Therefore, crucially, Tehran will try to drag diplomacy on in a delicate balancing act to protect its red lines while avoiding the collapse of the talks. The underpinning hope is that the longer diplomacy lasts, the more likely the possibility to see U.S. demands softened.
Key in this regard is the attempt to offer tactical concessions of a cosmetic and temporary rather than substantive and permanent nature, potentially translating into a temporary deal. This could include a confidence-building ‘zero enrichment’ period to avert the dismantling of its nuclear infrastructure. According to a May 15 report in The Guardian, Omani mediators proposed to Araghchi that Iran accept a three-year period of ‘zero enrichment’, after which it would revert to the 2015 JCPOA’s 3.67% level. In the interim, Moscow would provide Iran with the enriched uranium it needs for its civilian projects. More recently, according to a May 28 Reuters report, Iranian official sources suggested that Tehran may halt uranium enrichment for one year, ship part of its highly enriched stock abroad, or convert it into fuel plates for civilian nuclear purposes. In return, Washington would release frozen Iranian funds and recognize Tehran’s right to refine uranium for civilian purposes. Such a ‘political deal’ could lead to a broader nuclear accord, the sources suggested. Yet, prior to the next, sixth round of talks, even this one-year pause has been publicly rejected by Iran’s Foreign Ministry. In fact, these scenarios would depend on Trump’s flexibility and his ability to portray such a deal as a victory, despite likely opposition from elements within the U.S. establishment and from Israel.
At the same time, Iran continues to issue counterthreats to ward off military and economic pressures, by threatening Israel with a missile barrage that overwhelms its air defenses, and Europe with veiled threats in case of the activation of the JCPOA’s ‘snapback’ mechanism.
Moreover, there are indications that Tehran may once again flaunt its ‘negative power’ in the Gulf, probably with the aim of pressuring Arab Gulf states to lobby Washington for a softer Iran policy. On May 20, for instance, a Panama-flagged Emirati tanker in the Persian Gulf was “hijacked” by a ship from the Iranian shadow fleet that is operated by the IRGC. In fact, following Trump’s 2018 JCPOA withdrawal, Tehran had pursued a dual strategy whose replication today carries more risks than it did then: on one hand, ‘nuclear escalation’ (though today this could provoke strikes against its nuclear infrastructure), and on the other, targeting UAE and Saudi energy infrastructures with a series of sabotage operations and drone attacks. This culminated in September 2019 in the Houthis’ successful drone attack at the heart of Saudi oil production, which temporarily halved it. The attack exposed the existential vulnerability of these states’ economic models, a trauma that paved the way for Abu Dhabi and Riyadh to seek rapprochement with Tehran over the past few years.
In addition, Tehran may hope that differences between Trump’s and Netanyahu’s policies over Iran will reappear. In fact, before the recent convergence of the United States and Israel on ‘zero enrichment’, there had been signs of a Trump–Netanyahu divergence on some Middle Eastern conflicts. For instance, Israel was caught off-guard by Trump’s announcements about the start of Iran talks during Netanyahu’s visit to the Oval Office on April 7 and his ceasefire deal with the Houthis, which incidentally didn’t deter their attacks on Israel. It is not entirely clear whether Trump opposes Israeli military action against Iran in general or only temporarily as long as talks with Tehran are ongoing.
At the same time, there have also been voices within the Iranian foreign-policy establishment, such as Mostafa Zahrani, the former head of the Institute for Political and International Studies (IPIS), the Iranian Foreign Ministry’s think-tank, who have called for de facto using the inexperienced figure of Witkoff to strike a “comprehensive” deal with Washington. However, such a deal would likely address all issues (nuclear, missiles, and regional policies) in a temporary or cosmetic rather than fundamental and irreversible fashion—given the earlier noted sources-of-power dimensions of those Iranian programs and policies.
Last but not least, in contrast to past instances of Iran’s diplomatic exchanges, it is not clear what the exact bargain will be this time: under the JCPOA, it was nuclear de-escalation (a significant reduction of the nuclear program while allowing Iran the right to enrich uranium at 3.67% on its own soil) in return for sanctions relief. Today, the latter component may be replaced by pressure relief (both economic and military), while the United States may allow Iran access to its frozen assets abroad, including $6 billion parked in Qatar.
Devil in the Details
In the wake of the fifth round of talks, the United States on May 31 delivered a first formal proposal for a deal to Iran via Oman. According to some reports, in contrast to its ‘zero enrichment’ demand, Washington suddenly allowed for some limited low-level Iranian domestic enrichment, similar to the 2015 JCPOA cap of 3.67%. At the same time, Iran would not be allowed to build any new enrichment facilities and would have to "dismantle critical infrastructure for conversion and processing of uranium," halt new research and development on centrifuges, and place its nuclear program under a "strong system for monitoring and verification"—including accepting the IAEA’s Additional Protocol that allows for snap inspections—under the control of the IAEA and the United States. Meanwhile, a regional enrichment consortium would be created—probably supervised by Washington and with the participation of Iran, Saudi Arabia, the UAE, and Türkiye—as a way to provide Tehran with enriched uranium for other civil purposes beyond the allowed 3% level. As for U.S. concessions, the extent and timing of sanctions relief remain unclear.
However, as the exact nature of those provisions is not clear, the devil will be in the details. First, a key sticking point pertains to the duration and extent of limitations placed on Iran’s domestic nuclear program. Second, as to the consortium, Iran would insist that enrichment also takes place on its own soil. Against this backdrop, Tehran fears the proposal could end up in a significant and perhaps durable dismantlement of its nuclear program, and top Iranian figures have already signalled their opposition to such a deal. Khamenei has already condemned the U.S. proposal as “rude and arrogant”.
As Tehran’s nuclear red line in terms of domestic uranium enrichment and its infrastructure remains endangered, it will likely table a proposal of its own and try to save the talks from collapse. In the meantime, the U.S. position has seemingly reverted to its characteristic inconsistencies. The above-sketched U.S. proposal has also attracted harsh criticism from inside the U.S. Congress and from Israel, where critics fear Trump may concede too much vis-à-vis Tehran. At the same time, a recent IAEA report about secret Iranian nuclear activities could prepare the ground for increasing international pressure on Tehran.
In sum, the U.S.–Iran talks carry tremendous stakes for all actors involved, whether present at the negotiating table or not, such as Israel, Europe, and the Arab Gulf states. Not least among them is Iranian society, which is held hostage to the regime’s priority of ensuring its survival by all means necessary.
For the Islamic Republic—finding itself in a position of historic geopolitical and economic vulnerability—the talks involve gigantic risks, with the realistic scenario of major regime destabilization. The outcome, however, hinges largely on the Trump administration, given the formidable military and economic tools still at its disposal, and the vagaries of a U.S. president whose erratic decision-making injects significant uncertainty into a high-stakes process.
Silicon Borders: The Global Justice of AI Infrastructure
By 1987 the Reagan Administration had been debating for years whether India should be allowed to buy a supercomputer. The Pentagon argued that granting India’s export request was a threat to national security, given that India at the time had an active nuclear weapons program and close ties to the Soviet Union. The United States Commerce Department eventually granted India an export license for a mid-powered Cray XMP-14 computer. Two years later, the Bush administration faced the same question—this time for Brazil and Israel—and by the 1990s, not only supercomputers but also cryptography was subject to export controls.
Today, technology travels the world more freely than ever. The source of the most advanced technology, as before, is the United States. This technology dominance has made export controls a powerful tool. The Biden administration started a policy of “chokepoints,” restricting the export of advanced chips and software, primarily targeting the People’s Republic of China (PRC). However, this policy trend creates “silicon borders” that can affect developing countries around the globe.
Technology dominance is geopolitical power. Consider Ukraine, which is highly dependent on Starlink, the low-earth satellite service of Elon Musk’s SpaceX. Ukraine’s “entire front line would collapse if I turned it off”, Musk has stated. Reportedly, the United States has used Ukraine’s dependency on Starlink as leverage in negotiations.
AI infrastructure, like the Starlink satellite network, tends towards a natural monopoly; the most efficient market structure would be a single provider. This enables technological dominance. When AI infrastructure becomes an important resource, unilateral trade restrictions on AI infrastructure by the dominant power—“silicon borders”—are a problem for global justice.
Of course, much is uncertain about AI and its future. Equally unclear are frameworks for analyzing AI’s global impact as it develops. This article offers conceptual clarifications and sketches a framework for global AI policy analysis.
The Economics of AI Infrastructure
AI infrastructure is the foundation of AI applications and services. It is the set of resources that enable the development, training, and deployment of AI systems. AI infrastructure consists of four components.
Computational resources: Specialized hardware used in AI data centers.
Data resources: Training datasets as well as software to collect, generate, and process such datasets.
Foundation models: Large neural networks trained on vast datasets using significant compute resources.
Distribution mechanisms: Cloud services, inference software, and application programming interfaces (APIs).
Computational resources include specialized chips, like Graphics Processing Units (GPUs), for which Nvidia, an American technology company based in California, is the market leader. In conjunction with their surrounding software ecosystem, these chips are used to train AI models and to “run” them—a process known also as “inference.” Large amounts of these chips are required to train and adapt large AI models, known also as “foundation models”.
Some AI developers publish their models. A model is basically a file that describes the neural network and its weights, i.e., the strengths of the connections in this network. Publishing a model disentangles development and deployment. With a model file in hand, you can run the model in any capable data center. Such “open weights” models have thus created a market in compute resources where cloud companies “host” open models. This enables competition and is good for privacy; since anyone can host an open model, you can decide whom you trust with your data.
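As a concrete illustration, the sketch below shows what “hosting” an open-weights model can look like in practice. It is a minimal sketch in Python, assuming the widely used Hugging Face transformers library; the model identifier is a placeholder for any openly published checkpoint, not a specific product.

```python
# A minimal sketch, assuming the Hugging Face "transformers" library is
# installed; the model identifier below is hypothetical and stands in
# for any openly published checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "some-lab/open-weights-7b"  # hypothetical open-weights model

# Download the published weight and tokenizer files once...
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# ...then run inference ("host" the model) in any capable data center,
# with no further involvement of the lab that trained it.
inputs = tokenizer("The economics of AI infrastructure are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights travel as an ordinary file, the decision of where, and by whom, the model is run is separated from the decision of who built it.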
By contrast, large AI companies, such as xAI, Google, and OpenAI, bundle different components of the AI infrastructure together. These companies build out the computational resources, collect and condition datasets, train foundation models, and then distribute their models exclusively. You cannot work with their models unless you go through them.
Importantly, AI infrastructure is also an infrastructure in an economic sense: It has high fixed costs, low marginal costs, and significant economies of scale. Because of these features, AI infrastructure tends towards a natural monopoly.
The fixed costs associated with developing frontier AI capabilities are extraordinary. Training a model like OpenAI’s GPT-4 cost around $40 million for the training run. If you factor in research and people, the whole process cost over $100 million. By 2027, these costs could reach more than $1 billion. Thus, the fixed-cost barrier to entering this competition is prohibitively high and always increasing. Few entities have the financial means to play in this game.
But once you have built a data center, collected the data, and trained a foundation model, the marginal cost of inference is remarkably low. Serving an additional user or request costs next to nothing. Of course, in aggregate, running a model consumes massive amounts of energy. But average costs decline dramatically as output increases. This drives market concentration; a small number of companies hold a large share of the overall market.
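A back-of-the-envelope calculation, shown below, makes this cost structure vivid. The figures are illustrative assumptions only, loosely anchored to the roughly $100 million development cost cited above and a nominal per-request serving cost; they are not measured data.

```python
# Illustrative cost arithmetic: high fixed costs, tiny marginal costs.
# Both figures are assumptions for the sake of the example.
FIXED_COST = 100_000_000   # assumed one-off cost of building and training, in USD
MARGINAL_COST = 0.002      # assumed cost of serving one additional request, in USD

def average_cost_per_request(requests_served: int) -> float:
    """Average cost = (fixed cost + marginal cost x volume) / volume."""
    return (FIXED_COST + MARGINAL_COST * requests_served) / requests_served

for volume in (1_000_000, 100_000_000, 10_000_000_000):
    print(f"{volume:>14,} requests -> ${average_cost_per_request(volume):,.4f} per request")

# Average cost falls from roughly $100 per request toward the tiny
# marginal cost as volume grows, which is why scale confers such an
# advantage on incumbents.
```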
This tendency for market concentration is reinforced by economies of scale. As more people use a particular AI infrastructure—such as the one offered by OpenAI—the ecosystem surrounding that infrastructure grows. Each additional user gives you more data, which you can use to improve the model. The current generation of models then trains the next generation of models—involving what is known as “synthetic data” and “model distillation.” Thus, those who have many users and good models will develop even better models and acquire even more users—a self-reinforcing cycle of dominance through network effects.
The current industry landscape reflects this: A handful of technology companies with vast capital, computational resources, and scientific talent occupy dominant positions. In specialized AI hardware, Nvidia maintains approximately 80% market share for training chips. In cloud services that host AI infrastructure, three providers (Amazon Web Services, Microsoft Azure, and Google Cloud) control roughly 65% of the global market.
This dominance may appear fragile. In January 2025, the Chinese company DeepSeek released their “R1” model, which matched the capability of earlier models by OpenAI, shocking stock markets. To some, this only showed how important silicon borders are to ensure continued American dominance. It also shows that open models have the potential to disrupt the AI infrastructure market.
Going forward, the AI infrastructure market might be competitive in some segments but not others. The segment around smaller models, which require fewer computational resources, will have lower barriers to entry and greater competition. Such smaller models are useful in many day-to-day applications like providing recipes, advice, coding assistance, or companionship. More capable models will be larger and will offer critical capabilities beyond what such smaller models can provide. The market segment for such larger “frontier” models will have high and increasing barriers to entry and greater tendencies towards a natural monopoly.
AI Infrastructure Chokepoints
AI infrastructure is not just economically concentrated—it is politically gated. Where the economics of AI created market concentration, the United States started to exploit technological dependence as “chokepoints”. The Biden administration has established export controls, notification requirements, and licensing regimes. These silicon borders are justified in terms of national security, mainly targeting the PRC. But the result may be a system in which some countries can build on powerful AI tools, while others are left behind.
Compute: The GPU Chokepoint
The first chokepoint is hardware. In October 2022, the U.S. Department of Commerce announced restrictions on the export of high-end AI chips to China. Later revisions expanded the list to include other types of chips and imposed restrictions on certain countries in the Middle East, including Lebanon, Libya, and Syria. The latest “AI Diffusion” rule limited exports to all but 18 countries. The Trump administration, for now, has rescinded this latest rule.
These chips are indispensable. Training a modern foundation model requires tens of thousands of top-end GPUs running in parallel for weeks or months. Restricting access to these chips is effectively restricting access to the frontier of AI.
For many middle-income countries, the hardware needed to train competitive AI models is out of reach. However, even close allies like Israel may face rising barriers (with some exceptions if a data center is operated by a U.S. provider, such as Microsoft).
Foundation Models: Controlled Capabilities
A second chokepoint is access to foundation models themselves. Under the AI Diffusion rule, frontier models—like GPT-4, Gemini, or Claude—cannot be licensed, transferred, or made available to users in certain countries without explicit authorization. The trend is towards treating highly capable AI models as export-controlled items.
The logic here mirrors Cold War-era controls on encryption and supercomputers. Foundation models could be used for disinformation, cyber warfare, or weapons design. This “dual use” concern means that access to foundation models depends on geopolitics.
Distribution: APIs and Cloud Access
The most effective and relevant chokepoint lies in how models are accessed. Foundation models are rarely downloaded and run in independently operated data centers. Instead, models are hosted in the cloud and accessed via APIs that offer the model as a service. Whereas chips can find their way to China through loopholes and can’t be summoned back once they are there, access to models via APIs can be controlled much more effectively.
API restrictions are hidden chokepoints. They need not be announced as policy, but their effect is powerful. Currently, major AI companies, like OpenAI and Anthropic, allow their models to be accessed widely, except from Iran, Syria, or North Korea. However, even if access is not explicitly restricted, companies tend to over-comply with government export restrictions. Moreover, the source of power is the potential to shut down access.
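To make the mechanism concrete, here is a minimal, purely hypothetical sketch of how a provider could gate a hosted model at the API layer. The function names, the “authorization” tier, and any country codes beyond those mentioned above are illustrative assumptions, not any real provider’s implementation.

```python
# Hypothetical sketch of an API-layer chokepoint: before a hosted model serves a
# request, the provider checks the caller's inferred country against a policy.
# The deny list mirrors the countries named above; the "authorization" tier is a
# placeholder, since actual country lists are set by policy and can change.

BLOCKED_COUNTRIES = {"IR", "SY", "KP"}       # explicitly blocked (illustrative)
AUTHORIZATION_REQUIRED = {"XX", "YY"}        # placeholder tier needing a license

def authorize_request(country_code: str, has_license: bool = False) -> bool:
    """Return True if the hosted model may serve this request."""
    if country_code in BLOCKED_COUNTRIES:
        return False
    if country_code in AUTHORIZATION_REQUIRED and not has_license:
        return False
    return True

def serve_request(prompt: str, country_code: str, has_license: bool = False) -> str:
    if not authorize_request(country_code, has_license):
        return "403: model access is not available in your region"
    # ...forward the prompt to the hosted model and return its completion...
    return "<model response>"
```

The point of the sketch is that the control sits entirely on the provider’s side: the policy can be tightened, loosened, or switched off for an entire country without any change on the user’s end.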
For all countries without domestic AI champions—meaning practically everyone except the United States and China—such dependency is a vulnerability. Losing access to APIs would not only hamper the development of next-generation applications, but also reduce overall productivity across the many tasks for which foundation models can help: language translation, software engineering, data analysis, drug development, and forecasting of political and economic risks, to name just a few.
Global Justice of AI Infrastructure
Silicon borders are powerful. As AI capabilities become essential inputs for economic development, scientific progress, and even basic public services, nations that lack direct control over this infrastructure become dependent on those who wield it.
Who draws these silicon borders, and on what terms, is hence a central question of global justice. Silicon borders risk violating basic principles of global justice. Within political philosophy, there are three interconnected lines of argument which support this analysis.
1. The Level Playing Field Argument
The global economy operates—in theory and in practice—as a cooperative system in which everybody, in the aggregate, wins. This win-win logic of global trade requires a level playing field. No side should get an advantage. For example, rules around copyright and intellectual property must be adhered to by everyone insofar as these rules maintain a level playing field.
The principle of justice here is this: Trade must offer fair gains to all participants. Advantages for some and disadvantages for others undermine the idea of justice that is inherent in the social practice of trading with one another.
This idea is developed by the philosopher Aaron James. He illustrates it with a discussion of the WTO’s global intellectual property (IP) regime. Stringent IP protections are fair between developed countries but unfair for developing countries—because developing countries lose out on the opportunity of fair gains. For some trade parties, some rules tilt the playing field.
This would be the case for silicon borders. Restricting access to AI infrastructure can be an unfair disadvantage for some. Silicon borders, even if they are aimed at the PRC, have effects on developing countries that are already disadvantaged by structural asymmetries of the global digital economy. For developing countries, silicon borders may tilt the playing field.
2. The Coercion Argument
In this vein, philosopher Laura Valentini argues that global economic systems are coercive and that, at some point, such coercion is incompatible with an equal respect for persons. Coercive economic structures may not be justifiable to everyone who is subject to them. Treating everyone with respect would require changing these structures where possible.
Likewise, silicon borders are coercive. Powerful actors unilaterally control access to AI infrastructure. The United States limits the export of advanced GPUs and private companies dictate terms of service for cloud platforms or APIs. When such silicon borders have effects that are severe enough, they fail to treat all people with equal respect.
3. The Resource Interdependence Argument
A third argument builds on the fact of deep economic interdependence. Nations are not isolated units. This interdependence transforms the way we should think about resources essential for economic life. In a slogan: Where there is interdependence, there must be justice. One proponent of this argument is the political theorist Charles Beitz.
AI infrastructure, though human-made, increasingly functions similarly to a critical natural resource. Access to computational power, foundation models, and the platforms for distributing them is becoming as crucial for economic participation and development as access to energy, capital, water, or physical trade routes. A resource that is a fundamental input in a shared global economy needs to be distributed fairly. Silicon borders thus might become an unfair obstacle to resource access.
Balance with National Security
On the other hand, these three arguments might be missing something. Silicon borders are levers of power wielded for a good purpose: they protect national security and democracy.
AI infrastructure is a dual-use technology. Unfettered access, or so the dual-use argument goes, could allow adversaries to use AI for various nefarious ends: intelligence analysis, logistics, weapons, cyber warfare, and breakthroughs in material science or cryptography with a military use.
Thus, silicon borders protect national security and enable safe AI. The whole world might be better off as silicon borders avoid an AI race.
Politics of Crisis Governance
This national security argument echoes the older melody of crisis governance: Faced with a perceived high-stakes threat—whether terrorism, financial collapse, or AI proliferation—governments argue that decisive, often preemptive, action is necessary.
The global politics of AI is conducted as crisis governance. Unfortunately, the feeling of being in a crisis often clouds our thinking.
First, the chokepoint theory assumes that national security needs to be “balanced” against other values such as openness, free trade, or global justice. Similarly, after 9/11, national security was balanced against civil liberties. However, this metaphor of “balancing” some values against others is problematic. Treating global justice claims merely as interests to be “weighed” or “balanced” fundamentally misunderstands their nature. Claims of justice might be strict, not simply a competing consideration.
Finally, state power can metastasize. Granting any state sweeping powers to control AI infrastructure in the name of security creates tools that can be readily abused for other ends, including economic coercion, geopolitical leverage, or even domestic surveillance. Any lack of robust oversight—which is often deficient in national security matters—exacerbates this risk.
National security remains a legitimate concern and an important government function. Yet it too easily serves as a convenient veil for protectionism or geopolitical maneuvering.
The Road Ahead
Silicon borders are an emerging challenge for global justice. AI infrastructure risks becoming monopolized. This is a problem of global justice on three grounds: Silicon borders tilt the playing field, are coercive, and ignore resource dependencies.
The solution is not a demand for unrestricted access. Even if the “balance” metaphor is problematic, concerns of national security can’t be entirely dismissed. What can be done?
First, dominant powers should exercise restraint in the scope of export controls, limiting restrictions to truly strategic technologies. The empirical assumptions that underwrite the chokepoint strategy need to be validated. Do these policies achieve their goals? The strategy itself needs to be audited: Is the policy really driven by a concern for national security? Or is economic protectionism the real goal?
Second, regional cooperation offers a promising path forward, particularly for MENA countries. Pooling resources for shared research centers, joint procurement of AI infrastructure, and collaborative regulatory frameworks could create economies of scale that individual nations cannot achieve alone.
This regional approach moreover aligns with the institutionalists’ thesis: Despite the prominence of a global justice discourse, a society’s prosperity is primarily a function of domestic institutional quality. By extension, access to AI infrastructure is valuable primarily when countries have the institutional capacity to deploy it effectively, transparently, and for public benefit. The most successful responses to silicon borders will hence come from countries that combine advocacy for fairer global access with liberal domestic governance. This requires not just investment in technology but in an educational, legal, and economic ecosystem.
History suggests that policymakers tend to overestimate the risks of technology proliferation and underestimate the benefits of cooperation and technology diffusion. Long-term security may depend more on global stability and shared prosperity—potentially fostered by wider AI access—than on the prospect of maintaining a technological edge through silicon borders.
The past is prologue. Throughout the 1980s, the United States was losing the semiconductor industry to Japan. Much as policymakers do today, the Reagan administration responded with trade restrictions and industrial policy, to little avail. The industry moved to Asia. But still, the next technological revolution took place in the United States—because it successfully attracted many highly skilled engineers. If history is any guide, then domestic success depends on technological talent and an attractive, open ecosystem—not on choking global trade for uncertain gains.
Great Power Competition Makes Its Exit as Middle Eastern AI Ambitions Grow
The Middle East has become a regional flashpoint in the technology competition between the United States and China. This is unsurprising: many states within the region are intent on throwing their capital behind long-term technological ambitions and are amenable to some degree of collaboration with others, making them of interest to the great powers who seek a commanding lead over the technological foundations of power. The domain of artificial intelligence (AI) is no exception; tussling over the orientation of states like the United Arab Emirates (UAE) and Saudi Arabia is now taken for granted. Great power competition has been the framework du jour of foreign policy analysts.
American presidential administrations from the late-Obama administration, through the first Trump administration, and up to the Biden administration devoted increasing resources to the objective of shoring up the United States’ leadership in AI and its Middle Eastern partnerships. The latter have concentrated disproportionately in the Gulf, maturing significantly since the generative AI boom of 2023.
Under the Trump administration, however, great power competition as a driver of American policy is sharply reversing. The framework is inadequate for explaining the United States’ most recent changes regarding the pursuit of science and technology, indicating a downgrading of the importance of America’s great power competition with China. This shift has important implications for AI and related Middle Eastern technology ambitions.
The upshot of this shift for the Middle East is as follows: the diminishment of basic and applied scientific research as a U.S. policy priority will accelerate Gulf states’ efforts to construct viable, indigenous, and autonomous AI industries and related technology ecosystems.
That being said, there will be no sharp decoupling of U.S.-Middle East technology cooperation. Nor does the United States’ shifting stance toward technology competition indicate a broader realignment of its Middle East relationships. Instead, the focus of Gulf states (in particular) will turn to accelerating their existing trend toward technological autonomy, potentially contracting the scope of cooperation with the United States in the medium- and long-term. Gulf states will likewise seek to shrink the timeline of their existing arrangements with the U.S. technology industry. Consistent with political scientist Stacie Goddard’s argument, President Trump’s inclination toward transactional dealmaking might be a break from great power competition, but that does not mean a break from foreign engagement.
Finally, this argument is not normative; it makes no claims about whether the United States should be undergoing this shift or whether Gulf states should accelerate their timelines.
This article begins with an assessment of Gulf states’ efforts to make inroads in the American AI and broader technology ecosystem, showing how these have intertwined with the U.S.-China competition. Then, we turn to America’s abrupt pivot away from this competition and the science and technology policies constructed to serve it. The article concludes with a projection of how Middle Eastern AI ambitions will (and will not) be impacted by America’s pivot.
Great Powers in the Gulf
The Middle East’s AI ambitions are disproportionately concentrated in the Gulf and, more specifically, in the UAE and Saudi Arabia. Part of this owes to these states’ capital-rich coffers, with sovereign wealth funds and investment vehicles oriented towards diversifying their sources of economic growth and development. As political economist Robert Mogielnicki details, the Gulf has been trending toward emerging technology investments and forums for some time. Gulf sovereign wealth funds have focused on digitization as a means of economic diversification and domestic development initiatives. Though less-resourced, Oman, Bahrain, Qatar, and Kuwait are also investing in burgeoning technologies.
That said, capital alone is insufficient to explain the inroads made in AI and related emerging technologies. The UAE and Saudi Arabia also established national AI and data strategies that seek to deploy resources toward indigenous technology ecosystem build-outs and to craft policies that attract foreign talent (the former as far back as 2017).
Saudi Arabia, for its part, has made inroads in the American technology industry as well. In March 2024, the Saudi Public Investment Fund (PIF) was reportedly in talks with U.S.-based venture capital fund Andreessen Horowitz to establish a $40 billion fund to invest in AI, with talks continuing into at least late-2024. Additionally, the PIF entered into a strategic partnership with Google in October 2024, sharing plans for a new AI hub located near Dammam. Saudi-based oil producer Aramco also partnered with California-based AI semiconductor startup Groq in September 2024 to construct the world’s largest AI inferencing data center to support the large-scale deployment of AI models. The partnership expanded in December 2024.
These ambitions have been accompanied by a slew of additional investments and milestones by Gulf states, which intertwine with the broader U.S.-China technology competition. The UAE’s state-backed AI conglomerate G42 took center stage in this competition during the swelling of geopolitical interest in AI in 2023. The reason: G42 began making quick and high-profile forays into both the AI sector and the American technology industry. One such engagement was a December 2023 announcement that G42 had partnered with OpenAI to leverage the latter’s AI models in financial services, energy, healthcare, and public services.
American officials noticed. The United States has approached its AI competition with China through the framework of ‘compute governance’, which aims to restrict China’s AI development by preventing it from purchasing certain pieces of necessary hardware. One way to restrict this access is to ensure that foreign nations collaborating with U.S. companies agree not to sell the hardware they have purchased to China, a process called ‘export controls’.
Concerned with the threat of advanced technology loss from companies like OpenAI to China (in part through this arrangement with G42), then-chairman of the U.S. House Select Committee on the Chinese Communist Party Rep. Mike Gallagher wrote a letter to then-Secretary of Commerce Gina Raimondo to investigate the risks G42 poses to the export of American technology in January 2024. The letter was explicitly concerned with G42 CEO Peng Xiao’s ties to Chinese companies.
Less well-known is that United States Bureau of Industry and Security (BIS) officials (located within the Department of Commerce) had already met with Xiao and other G42 representatives in the summer of 2023. BIS communicated that G42 had to side with either the United States or China and align its investments accordingly. By February 2024, G42 had divested from Chinese holdings and was working to cut Chinese companies from its supply chain, indicating a hardening U.S. orientation by the company.
Fast forward to April 2024, when Microsoft announced a $1.5 billion investment in G42, granting it a minority stake and a board seat. The Microsoft-G42 partnership was the result of government-to-government negotiations and assurances concerning the security of advanced technology exchange between the firms. In March 2025, building on this progress, the UAE Department of Government Enablement announced a multi-year agreement between Microsoft and the G42 company Core42 to create a unified, sovereign cloud system for government services.
More recently, the UAE’s AI investment vehicle MGX is participating as a partner in the “Stargate” data center project worth up to $500 billion, jointly formed by OpenAI, Oracle, and Softbank. The project was announced on January 21, 2025, directly following U.S. President Trump’s inauguration. (Reporting in May 2025 indicates that OpenAI is seeking to expand its Stargate data center project abroad, though to which state(s) is unclear.)
Merits of the project notwithstanding, Stargate could reasonably be interpreted as the second Trump administration’s intention to continue the American pivot towards emerging technology, principally focusing on AI as the crown jewel of twenty-first-century power. This interpretation was, nevertheless, premature, and U.S. policymaking has since broken from this trend.
American Scientific Disengagement
At first glance, rhetoric from the second Trump administration recognizes the urgency of technological competition. In his remarks at the AI Action Summit in Paris in February 2025, Vice President J.D. Vance made four points clear, one of which is that American AI technology will continue to serve as the global “gold standard”. Similarly, the White House published an open letter to the newly confirmed Office of Science and Technology Policy (OSTP) Director Michael Kratsios on the paramount importance of scientific progress and technological innovation, tasking him with the maintenance of “unrivaled” American leadership in critical and emerging technologies like AI.
Yet, actual federal policymaking paints a different picture. Early funding-related actions portended a shift in the priorities of the U.S. government.
A memo put out by the White House Office of Management and Budget in the second week of the new Trump administration ordered federal agencies “to temporarily pause all activities related to obligations or disbursement of all Federal financial assistance”. The ensuing chaos caused by the extraordinarily broad directive (such assistance, interpreted literally, amounts to several trillion dollars) was seen as a temporary speed bump; a bureaucratic error.
This interpretation was reasonable, but that same week, the administration released a “buyout” offer to all federal employees (voluntary separation with a promise of a time-bound extension of compensation). Still, this could be interpreted as a shift in the composition of the U.S. federal government (fewer employees and leaner, more efficient staffing).
This interpretation, too, broke down in the immediate aftermath of these actions, as the country’s scientific research apparatus was upended. In early February, the National Science Foundation (NSF)—a critical component of U.S. technological leadership—shared plans internally to cut between one-quarter and half of its staff over two months. In April 2025, NSF Director Sethuraman Panchanathan—appointed by President Trump in his first term—resigned from the agency (well before his term was up) amid severe funding cut directives from the Department of Government Efficiency (DOGE). Days later, in early May, NSF staff were told to “stop awarding all funding actions until further notice”. Then, that same week, President Trump’s proposed Fiscal Year 2026 budget requested a 56% funding reduction for the NSF. (Note that the U.S. Congress, not the Executive Branch, determines federal agency budgets; the president’s request is not final.)
Taken together, a picture begins to emerge not of a different approach to U.S. technology primacy, but a re-prioritization of science and technology policy altogether.
A critic might point to a continuation of the current funding levels for specific technological initiatives, including those that are AI-related, as evidence that these have retained their importance in policymaking (though not keeping up with inflation).
This is a significant piece of evidence, though a suitable interpretation is not that the administration has retained the great power technology competition framework of previous administrations. Rather, it indicates that some level of diversity of opinion exists among U.S. officials and policymakers.
Indeed, a strong piece of evidence indicating a shift away from great power competition is the lack of interest among U.S. officials in retaining scientists in the United States, including at agencies like the NSF. The disruptions described so far have led some European universities, particularly in states like France, Belgium, and the Netherlands, to establish programs for American scientists looking to flee cuts to research. European Commission President Ursula von der Leyen also announced in May a $556 million investment to pull in researchers from the United States (the impact of this relatively small investment should not be exaggerated, though its purpose is clear).
Some U.S.-based researchers hold long-term concerns over the viability of scientific research in America, given the perceived political bent of the recent disruptions. According to polling released by Nature in March, among 1,608 researchers, more than 1,200 answered “yes” to a question asking if they are considering leaving the United States due to Trump-induced science disruptions.
Not to be outdone by Europe, Chinese actors have made their own overtures to American scientists. An advertisement was put out by a Shenzhen-based tech recruiter in February targeting scientists laid off from U.S. federal agencies. The ad appeals to “talents who have been dismissed” by U.S. agencies. This occurs just as Chinese startup DeepSeek enjoys the momentum generated by its V3 and R1 models (the latter is taken to be a direct competitor of OpenAI’s ‘o-series’ models). As of March, recent graduates from top U.S. schools like Stanford and Harvard are flooding DeepSeek with resumes.
If technological supremacy as a pillar of United States policy still reigned supreme, these foreign actions would be met with counterbalancing policies by the U.S. government to mitigate or prevent such losses (or even the prospect of losses). Yet, these counterbalancing policies are not in evidence. Moreover, if the interpretation that U.S. policymaking is guided by different priorities than those associated with great power competition is accurate, then we should be able to identify a range of actions consistent with these new priorities across federal agencies (the NSF, in principle, could merely be an aberration).
The story is much the same across the federal scientific research apparatus. Also in early February, the Trump administration announced an intention to limit “indirect funding” distributed by the National Institutes of Health (NIH) to laboratories nationwide. A series of lawsuits related to these cuts followed in quick succession. Undeterred, by late March, the Department of Health and Human Services, which oversees the NIH, announced plans to cut 20,000 full-time jobs (including some employees who accepted the “buyout”), aiming for a force reduction from 82,000 to 62,000 employees. An initial layoff of 10,000 employees began in April.
Since these early actions, a sense of “chaos and confusion” has ruled over agencies like the NIH. DOGE directives to ban all outgoing communications and to cease submitting research to journals chilled the agency. Although some rules and layoffs have been reversed, senior staff reportedly are left with a sense of insecurity. (Even beyond scientific research, traditionally secure recipients of (limited) funding, like the Department of Defense’s future-planning Office of Net Assessment, were scheduled for “disestablishment”.)
Some academics in the United States remain forward-looking, though it is now common to find flagship publications like Science refer to these policy actions as those that “wreaked havoc in American universities and…threaten the global scientific enterprise”.
Such policy actions are not best interpreted as irrational means of pursuing American technological leadership. Instead, they are consistent with a policy approach that is not driven by the demands of urgent technological competition with China to prevent the latter’s re-shaping of world order. The rhetoric we noted above, in this case, does not reflect the in-practice orientation of the U.S. government. In this way, the actions are rational; they merely reflect different priorities. This shift will impact Middle Eastern AI (and broader technological) ambitions over the medium- and long-term in ways that reflect their existing interests.
Accelerating the Bid for Autonomy in the Middle East
Importantly, significant changes to U.S.-Middle Eastern technology dynamics will likely be visible over the medium- and long-term rather than the short-term. There are two reasons for this. First, the Trump administration may continue an aggressive approach to the export of advanced semiconductors and/or AI models to other states, depending on its replacement policy for Biden-era regulations. This simply means that the flow of hardware, AI models, and/or computing capacity to other countries may continue to be restricted according to some set of regulations established by the U.S. government and therefore require significant diplomatic engagement to navigate them. Second, the commercial partnerships already formed between the United States and Gulf states have staying power and cannot be readily reversed.
Regarding the first point, a visit by Emirati National Security Advisor Sheikh Tahnoon bin Zayed Al Nahyan in March 2025 with President Trump is relevant, as AI was at the top of the agenda. The Microsoft-G42 partnership described above has allowed the UAE to garner increasing access to advanced chips, including Washington’s approval of the sale of H100 AI chips designed by Nvidia, which enable the development and deployment of advanced AI models.
It is within the Emirates’ interest (and the interest of any state with AI ambitions) to retain or expand such access. Thus, as technology and geopolitics researcher Mohammed Soliman notes: Tahnoon may have tried to “balance between the security concerns of the United States but at the same time not to cap [the UAE’s chip imports] when it comes to how many chips they can access in a way that hammers [sic] their own ambitions.”
Indeed, limited early signals indicate a continuation of the outsized role played by U.S.-led export controls in Gulf states’ AI ambitions. Both Sheikh Tahnoon’s visit to the White House and the addition of entities to the United States’ export control blacklist indicate the UAE’s (and likely broader Gulf’s) interest in balancing U.S. interests with their own. Access to advanced enabling hardware in sufficient amounts and within suitable timeframes will continue to drive relations.
Regarding the second point, the commercial investments and partnerships that Gulf states have formed—Microsoft-G42, Groq-Aramco, Stargate, etc.—represent sunk costs and they hold staying power. Such actions represent years of financial and diplomatic investment, complete with the relocation or establishment of talent and facilities relevant to AI development or deployment. These costs cannot simply be shifted, even if the United States’ current loss of scientific talent continues. Additionally, they have staying power because the United States’ commercial AI ecosystem remains, at least for now, the most dynamic in the world with access to the most sophisticated models, hardware, and talent. It would not be within Gulf states’ interests, therefore, to summarily switch sides, even if this were possible.
All that said, the diminishment of science and technology as policy priorities in the United States will accelerate Gulf states’ existing goals to construct viable, indigenous, and autonomous AI industries. Because China (or Europe) cannot simply absorb the old and new costs of AI development and deployment, Gulf states will retain their partnerships and investments with the United States while actively searching for ways to accelerate timelines for independence. We should not be tempted by a dichotomous view of U.S.-China competition in which the ‘winning’ state gobbles up the Middle East’s ambitions; multiple pathways within this broader dynamic are possible, and the bid for autonomy is amply evidenced as the preferred path for the region.
Consistent with this, geopolitical dynamics represent their own level of analysis beyond technology. Put simply: although technology can be a driver of bilateral or multilateral relationships, there are more fundamental interests at stake in international relations. Technology, including AI, can be segmented off from these fundamental interests. To this end, the second Trump administration has singled out states like Saudi Arabia as venues through which to hammer out broader geopolitical disputes. Most prominently, peace talks have been held in Saudi Arabia on the Russo-Ukraine war.
Moreover, in continuity with his first administration, President Trump’s first foreign trip was to Saudi Arabia, part of a visit to the broader region. The AI-related deals with the UAE and Saudi Arabia announced during his visit, including the pending approval of massive Nvidia and AMD chip exports to the two countries, are consistent with the Gulf region’s accelerated drive for autonomy. Indeed, the flashpoint in these deals is the potential for major AI infrastructure to be constructed abroad in the Middle East (e.g., data centers). These deals are precisely the kind one would expect in a transactional approach to foreign policy, rather than great power competition.
The overarching point here is that there is no black-and-white set of consequences for Middle Eastern AI ambitions amid the United States’ downgrading of technological great power competition with China. Although Gulf states in particular will accelerate their bids for technological autonomy, there is always the potential for bilateral or multilateral developments that impact AI diplomacy; geopolitics is too complex to predict precisely.
This likewise applies to Chinese AI ambitions. The United States’ re-positioning comes just as China begins to oversee the maturation of its own indigenous AI industry. Chinese startup DeepSeek’s release of R1 in January threw the technology market into temporary turmoil. While Chinese advances should not be exaggerated, its domestic AI industry is experiencing upward momentum across companies (including Baidu and Manus AI). How a maturing Chinese AI industry will intersect with Middle Eastern AI ambitions amid the United States’ shifting priorities is not immediately clear, beyond the projections made above.
All this goes to show that the framework provided by great power competition is only useful insofar as it can account for observed policy actions. If the framework becomes so stretched in its effort to account for what is observed that the actions begin to appear bizarrely irrational (as many above may seem), that is an indication that the framework is simply inadequate. In such cases, the framework must be dismissed or substantially revised.
The United States’ reorientation toward science and technology policy is one such case. The Middle East—having emerged as a major region in earlier technology competition between the United States and China—will likewise exhibit behaviors that fall beyond this framework’s capacity to account for them. Analysts and observers should adjust and prepare accordingly.
The Future of Democracy in the Age of Deepfakes
By most surveys, some 1 in 5 Americans dismiss or deny the effects of global climate change. At the tail end of the global COVID pandemic, 1 in 5 Americans believed the statement “Bill Gates plans to use COVID-19 to implement a mandatory vaccine program with microchips to track people.” And, after Joe Biden won the presidential race in 2020, more than half of Republican voters believed that Donald Trump rightfully won.
Basic facts regarding our planet’s health, our public health, and the foundations of our democracy are being denied by a significant number of citizens. This prevalent alternate reality is not unique to the United States; this plague of lies, conspiracies, misinformation, and disinformation is global in nature.
These (and other) baseless claims spread largely through traditional channels and social media, amplified by famous personalities, influencers, politicians, and some billionaires. What happens when the landscape that has allowed widespread and dangerous conspiracies to take hold is super-charged by generative AI?
Making Deepfakes
Before the less-objectionable term “generative AI” took root, AI-generated content was referred to as “deepfakes”, a term derived from the moniker of a 2017 Reddit user who used this nascent technology to create non-consensual intimate imagery (NCII), often referred to by the misnomer “revenge porn”, which suggests that the women depicted somehow inflicted a harm deserving of revenge.
Today, generative AI is capable of creating hyper-realistic images, voices, and videos of people saying or doing just about anything. These technologies promise to revolutionize many industries while also super-charging the spread of, and belief in, dangerous lies and conspiracies.
Text-to-image AI models are trained on billions of images, each paired with a descriptive caption. During training, each image is progressively corrupted until only visual noise remains, and the model learns to denoise it by reversing this corruption. Once trained, the model can be conditioned to generate an image that is semantically consistent with any text prompt, such as “Please generate an image of the great Egyptian Sphinx and pyramids during a snowstorm.” (As a side note, fellow AI researcher Sarah Barrington gave me the advice that you should always say “please” and “thank you” when speaking with AI models so if—or when—they take over the world, they will remember you were nice to them.)
Figure 1: An AI-generated image of the Sphinx and pyramids during a snowstorm.
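For readers who want to see the mechanics, here is a minimal sketch of the two steps described above: a forward process that mixes an image with noise, and a reverse process that repeatedly applies a denoiser to turn noise back into an image. It is a simplification under stated assumptions: the “denoiser” is a hypothetical trained model passed in as a function, not the code behind any particular product.

```python
# A minimal sketch (NumPy only) of the diffusion idea described above.
# Forward process: progressively corrupt an image toward pure noise.
# Reverse process: start from noise and repeatedly apply a learned denoiser,
# conditioned on a text prompt, to recover a plausible image.
# The "denoiser" here is a hypothetical trained model supplied by the caller.

import numpy as np

def corrupt(image: np.ndarray, num_steps: int = 1000) -> list:
    """Return the sequence of increasingly noisy versions of the image."""
    trajectory = [image]
    for t in range(1, num_steps + 1):
        signal = 1.0 - t / num_steps                    # fraction of signal left
        noise = np.random.randn(*image.shape)
        trajectory.append(np.sqrt(signal) * image + np.sqrt(1.0 - signal) * noise)
    return trajectory                                    # ends in (almost) pure noise

def generate(denoiser, shape: tuple, prompt_embedding, num_steps: int = 1000) -> np.ndarray:
    """Start from random noise and iteratively denoise, guided by the prompt."""
    x = np.random.randn(*shape)
    for t in reversed(range(1, num_steps + 1)):
        x = denoiser(x, t, prompt_embedding)             # hypothetical trained model
    return x
```

In a real system, the denoiser is a large neural network trained on those billions of captioned images, and the text prompt is converted into the embedding that steers each denoising step.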
Video deepfakes fall into two broad categories: text-to-video and impersonation. Text-to-video deepfakes are the natural extension of text-to-image models where an AI model is trained to generate a video to be semantically consistent with a text prompt. These models have become significantly more convincing over the past 12 months. A year ago, the systems tasked with creating short video clips from a text prompt like “Will Smith eating spaghetti” yielded obviously fake videos of which nightmares are made.
The videos of today, while not perfect, are stunning in their realism and temporal consistency and are quickly becoming difficult to distinguish from reality; the updated version of Will Smith enjoying a bowl of spaghetti is evidence of this progress.
Although there are several different incarnations of impersonation deepfakes, two of the most popular are lip-syncs and face-swaps. Given a source video of a person talking and a new audio track (either AI-generated or impersonated), a lip-sync deepfake generates a new video track in which the person’s mouth is automatically modified to be consistent with the new audio. And, because it is relatively easy to clone a person’s voice from as little as 30 seconds of recorded speech, lip-sync deepfakes are a common tool used to co-opt the identity of celebrities or politicians to push various scams and disinformation campaigns.
A face-swap deepfake is a modified video in which one person’s identity, from eyebrows to chin and cheek to cheek, is replaced with another identity. This type of deepfake is most common in the creation of non-consensual intimate imagery. Face-swap deepfakes can also be created in real time, meaning that soon you will not know for sure if the person at the other end of a video call is real or not.
The trend of the past few years has been that all forms of image, audio, and video deepfakes continue their ballistic trajectory in terms of realism, ease of use, accessibility, and weaponization.
Weaponizing Deepfakes in the 2024 U.S. Election
It is difficult to quantify the extent to which deepfakes impacted the outcome of the 2024 U.S. presidential elections. There is no question, however, that deepfakes were present in many different forms, and—regardless of their impact—their use in this recent election is a warning for future elections around the world.
The use of deepfakes in the election ranged from outright attempts at voter suppression to disinformation campaigns designed to confuse voters or cast doubt on the eventual outcome of the election.
In January 2024, for example, an estimated tens of thousands of Democratic party voters received a robocall in the voice of President Biden instructing them not to vote in the upcoming New Hampshire state primaries. The voice was AI-generated. The perpetrators of this attempted election interference were Steven Kramer (a political consultant), Paul Carpenter (a magician and hypnotist who was paid $150 to create the fake audio), and a telecommunications company called Lingo Telecom. Carpenter used ElevenLabs, a platform offering instant voice cloning for as little as $5 a month.
Throughout the campaign, it was common to see viral AI-generated images of black people embracing and supporting Donald Trump chalking up millions of views on social media. Cliff Albright, the co-founder of Black Voters Matter, a group encouraging black people to vote, said the manipulated images were pushing a “strategic narrative” designed to show Trump as popular in the black community. “There have been documented attempts to target disinformation to black communities again, especially younger black voters,” Albright said.
In what was presumably an attempt to cast doubt on the fairness of the election, countless fake videos—linked back to Russia—circulated online purporting to show an election official destroying ballots marked for Trump. An endless stream of viral AI-generated images and videos polluted social media; these ranged from fake images pushing the narrative that Kamala Harris is a socialist/communist to a fake image of Taylor Swift endorsing Donald Trump.
While the threats from deepfakes are already with us, perhaps the more pernicious result of deepfakes is that when we enter a world where anything we see or hear can be fake, then nothing has to be real. In the era of deepfakes, a liar is equipped with a double-fisted weapon of both spreading lies and, using the specter of deepfakes, casting doubt on the veracity of any inconvenient truths—the so-called liar’s dividend.
Trump, for example, publicly accused the Harris-Walz campaign of posting AI-generated images of large rally crowds. This claim was baseless. It could be argued that denying crowd size is simple pettiness, but there could also be something more nefarious at play. Trump publicly stated that he would deny the results of the election if he lost, so denying large crowd sizes prior to the election would give him ammunition to claim voter fraud after the election. As the violent January 6 insurrection from the 2020 election showed us, the stakes for our democracy are quite high. As deepfakes continue to improve in realism and sophistication, it will become increasingly easy to wield the liar’s dividend.
Figure 2: An authentic photo of a Harris-Walz rally that, during the 2024 U.S. national election, Donald Trump claimed was fake.
Protecting Democracy from Deepfakes
If we have learned anything from the past two decades of the technology revolution (and the disastrous outcomes regarding invasions of privacy and toxic social media), it is that things will not end well if we ignore or downplay the malicious uses of generative AI and deepfakes.
I contend that reasonable and proportional interventions from creation through distribution, and across academia, government, and the private sector are both necessary and beneficial in the long-term for everyone. I will enumerate a range of interventions that are both practical and, when deployed properly, can keep us safe and allow for innovation to flourish.
Creation. There are three main phases in the life cycle of online content: creation, distribution, and consumption. The Coalition for Content Provenance and Authenticity (C2PA) is a multi-stakeholder, open-source initiative aimed at establishing trust in digital audio, images, and video. The C2PA has created standards to ensure the authenticity and provenance of digital content at the point of recording or creation. The standard includes adding metadata, embedding an imperceptible watermark into the content, and extracting a distinct digital signature from the content that can be used for identification even if the attached credentials are stripped out. All AI services should be required to implement this standard to make it easier to identify content as AI-generated.
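As a rough illustration of the content-credential idea (not the C2PA specification itself, which relies on certificate-based manifests), the sketch below binds provenance metadata to a piece of content with a signature so that tampering with either the content or its claimed origin can be detected; the shared HMAC key is a stand-in for a real signing certificate, and the metadata fields are illustrative.

```python
# Simplified illustration of content credentials: bind provenance metadata to a
# piece of content with a cryptographic signature so that changes to either the
# content or its claimed origin can be detected. This is NOT the C2PA standard;
# real manifests use certificate-backed signatures rather than a shared HMAC key.

import hashlib, hmac, json

SIGNING_KEY = b"demo-key"  # placeholder; real systems use certificate-backed keys

def attach_credentials(content: bytes, metadata: dict) -> dict:
    manifest = {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "metadata": metadata,  # e.g. {"generator": "SomeAIService", "created": "2025-01-01"}
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credentials(content: bytes, manifest: dict) -> bool:
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    signature_ok = hmac.compare_digest(signature, expected)
    content_ok = claimed["content_hash"] == hashlib.sha256(content).hexdigest()
    return signature_ok and content_ok
```

Because attached metadata like this can be stripped, the standard pairs it with the imperceptible watermark and content fingerprint described above, so provenance can still be recovered from the content itself.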
Distribution. Social media needs to take more responsibility for its role in sharing content, from the unlawful to the lawful-but-awful items that are shared on their platforms and amplified by their own recommendation algorithms. However, while it is easy to single out social media platforms for their failure to rein in the worst abuses on their platforms, they are not uniquely culpable. Social media operates within a larger online ecosystem powered by advertisers, financial services, and hosting/network services. Each of these—often hidden—institutions must also take responsibility for how their services are enabling a plethora of online harms.
Consumption. When discussing deepfakes, the most common question I’m asked is: “What can the average consumer do to distinguish the real from the fake?” My answer is always the same: “Very little”. After which I explain that artifacts in today’s deepfakes—seven fingers, incoherent text, mismatched earrings—will be gone tomorrow, and my instructions will have provided the consumer with a false sense of security. The space of generative AI is moving too fast and the forensic examination of an image is too complex to empower the average consumer to be an armchair detective. Instead, we require a massive investment in primary and secondary education to empower consumers with the necessary skills to understand how and from where to obtain reliable news and information.
Authentication. The process of identifying manipulated content by qualified experts is partitioned into two broad categories: active and reactive. Active approaches include the type of C2PA content credentials described above, while reactive techniques operate in the absence of such credentials. Within the reactive category, there are a multitude of techniques for detecting manipulated or AI-generated content. Collectively, these techniques can be effective, but a major limitation is that by the time malicious content is uploaded online, flagged as suspicious, analyzed for authenticity, and a fact check is posted, the content can easily have racked up millions of views. This means that this type of authentication is appropriate for post-mortems, but does not address the billions of daily uploads.
Legislation. To date, only a handful of nations and a few U.S. states have moved to mitigate the harms from deepfakes. While I applaud individual U.S. states for their efforts, internet regulation cannot be effective with a patchwork of local laws. A coordinated national and international effort is required. In this regard, the European Union’s Digital Services Act, the United Kingdom’s Online Safety Act, and Australia’s Online Safety Act provide a road map for the United States. While regulation at a global scale will not be easy, some common ground can surely be found among the United States and its allies, thus serving as a template for other nations to customize and adopt.
Academe. In the 1993 blockbuster movie Jurassic Park, Jeff Goldblum’s character Dr. Ian Malcolm criticized the reckless use of technological advancements in the absence of ethical considerations by stating: “Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.” I am, of course, not equating advances in AI with the fictional resurrection of dinosaurs some 66 million years after extinction. The spirit of Goldblum’s sentiment, however, is one all scientists should absorb.
Many of today’s generative-AI systems used to create harmful content are derived from academic research. For example, researchers at the University of California, Berkeley developed the program pix2pix, which transforms the appearance or features of an image (e.g., turning a day-time scene into a night-time scene). Shortly after its release, this open-source software was used to create DeepNude, an application that transforms an image of a clothed woman into an image of her unclothed. The creators of pix2pix could and should have foreseen this weaponization of their technology and developed and deployed their software with more care. This was not the first case of such abuse, nor will it be the last. From inception to creation and deployment, researchers need to give more thought to how to develop technologies safely and, in some cases, whether a technology should be created in the first place.
There is much to be excited about in this latest wave of the technology revolution. But if the past few technology waves have taught us anything, it is that left unchecked, technology will begin to work against us and our democracy. We need not make the mistakes of the past. We are nearing a fork in the road for what role technology will play in the type of future we want. If we maintain the status quo, technology will continue to be weaponized against individuals, societies, and democracies. A change in corporate accountability, regulation, liability, and education, however, can yield a world in which technology and AI work with and for us.
Famed actor and filmmaker Jordan Peele’s 2018 public service announcement on the dangers of fake news and the then-nascent field of deepfakes offers a word of relevant advice. The PSA concludes with a Peele-controlled President Obama stating: “How we move forward in the age of information is gonna be the difference between whether we survive or whether we become some kind of f***ed up dystopia.” I couldn’t agree more.
A Costly Illusion of Control: No Winners, Many Losers in U.S.-China AI Race
During the past three years, in the wake of the rapid development of generative artificial intelligence (GenAI), the already tense technology-related competition between the United States and China has intensified. AI has become the clear focus of U.S. efforts, which aim to slow the ability of Chinese companies to develop advanced models. This dynamic has spilled out across the globe and affected supply chains and countries across the AI stack. The ‘geopolitics of AI’ has now become the primary battleground between the United States and China, with unknown but increasingly negative externalities.
At the same time, this competition has impeded progress towards developing an international framework to ensure the safe and secure development of AI. These issues, long discussed in academic literature, have now escaped the confines of the ivory tower. Artificial general intelligence (AGI) or artificial superintelligence (ASI) have gone from theoretical concepts to apparently achievable outcomes within a much shorter timeframe than experts believed just five years ago.
This essay will examine how we arrived at this point, the dangers of allowing the current trajectory to continue unchecked, and how we might get out of the zero-sum, winner-takes-all paradigm. This framework has seized Washington (including policymakers and most think tanks) and threatens to heighten the risk of conflict between the United States and China (including potentially over Taiwan) with massive ramifications for the global economy and the future of AI development.
AI Now Dominates U.S.-China Technology Competition
Several key strands of thought have led to a situation in which the United States and China find themselves locked in a struggle to dominate AI, with many, particularly among U.S. policymakers and think tanks, characterizing the competition as zero-sum. According to this narrative, whoever gets to something resembling AGI wins, because they will use this advantage for strategic gain. The implications of this framing are profound for the world, given that companies in the United States and China are far and away the global leaders in the development of so-called frontier AI models.
The current U.S. export control suite targeting the AI stack stems largely from two wellsprings:
First, the concept of ‘compute governance’—the assertion that compute hardware should be restricted as a way to control AI development—and the gradual identification of China as the most important target of such restrictions. This has been coupled with the second wellspring: the widespread adoption of the ‘choke point technologies’ concept among the U.S. policymaking community. This is particularly true of former Biden administration National Security Advisor Jake Sullivan, congressional critics of China, and DC-based think tanks that have acted as uncritical supporters of U.S. government policies targeting China and the AI stack.
The concept of advanced and scaled-up compute as a strategic chokepoint was initially popularized in AI alignment circles, influenced by broader observations in economics and geopolitics. Nick Bostrom of the Future of Humanity Institute in Oxford indirectly seeded some of these ideas through early papers addressing the strategic implications of openness in AI development, including strategic technology control.1 AI safety/security researchers Miles Brundage, Shahar Avin, and Jack Clark notably explored compute dynamics in their influential 2018 report, “The Malicious Use of Artificial Intelligence”, highlighting compute as a key factor.
Significantly, the early focus on ways to prevent the misuse of advanced AI in general in the 2021-2022 timeframe morphed into a focus on China in particular as a nation-state ‘authoritarian’ actor that could both misuse AI capabilities and, if allowed to get to AGI first, wield this as a strategic tool to gain unspecified advantages. Hence, policy choices began to coalesce around the need for the United States to take the offensive in promoting the ability of U.S.-based AI companies to pursue AGI and ensure leadership in AI, while coupling this with the defensive concept of slowing China’s progress. The latter goal was pursued through a series of regulatory measures unleashed for the first time via sweeping, unilateral, extraterritorial export controls in October 2022.
A key prerequisite for the measures pushed by adherents to the compute governance model was articulated by Sullivan, who linked advanced compute with national security concerns. This happened in fall 2022, when Sullivan highlighted the importance of advanced compute during a speech in Washington DC, listing computing-related technologies including microelectronics, quantum information systems, and AI. This set the stage for two years of steady regulatory actions by the Biden administration directed at Chinese firms with the aim of undermining the ability of China’s semiconductor industry to support advanced node manufacturing and systematically cutting off the access of Chinese firms to advanced AI hardware—the core of compute—in the form of GPUs made by U.S. tech companies Nvidia, AMD, and Intel.
This process contributed to two major events in 2025. The first was the April 2025 banning of exports of Nvidia’s H20 GPU, designed specifically by the firm for the Chinese market to meet earlier export control requirements. The H20 GPU is necessary for running trained models and applications to solve problems, a process called inference. In the view of the U.S. government and proponents of compute governance, the H20 represented the last chokepoint on the hardware side because inference is necessary for AI to move from chatbots to reasoning systems and, finally, to agentic AI. As of mid-2025, agentic systems, which feature the ability to access websites, fill in forms, and conduct other actions on behalf of individuals or companies, have begun to be deployed.
The second major event was the initial implementation of the so-called AI Diffusion Framework, rooted in the same compute governance argument and designed to prevent Chinese firms and researchers from accessing forbidden hardware globally.
The series of U.S. government measures launched starting in October 2022 and relevant to the overall “choke point” effort include but are not limited to:
Framework for Artificial Intelligence Diffusion. This Biden-era rule was pulled back in mid-May but will be replaced by a new rule that supports the concept of enabling U.S. government control over where AI infrastructure is deployed globally. Due later this year, the new rule will likely include a global licensing regime for advanced GPUs, while requiring bilateral agreements and commitments around diversion prevention and know-your-customer (KYC) rules to prevent Chinese firms or researchers from accessing restricted AI hardware.2
As this defensive regulatory process has heated up, leading U.S. AI companies have become drivers on both sides of the ‘promote and protect’ equation, particularly U.S. proprietary AI model leaders OpenAI and Anthropic. Anthropic was founded in 2021 by former OpenAI executives, including co-founders CEO Dario Amodei and policy czar Jack Clark, with a mission centered on AI safety and alignment. Over time, the company—according to its statements—recognized that controlling access to large-scale computational resources (compute) is pivotal in managing the development and deployment of advanced AI systems.
This realization positioned compute governance as a practical mechanism to enforce safety protocols and prevent misuse, especially in the context of international competition. In an essay published in late 2024, Amodei wrote about a future world in which AI needs to be developed to benefit “democracies over autocracies” and advocated for measures such as limiting access to semiconductors and semiconductor manufacturing equipment, a clear reference to U.S. AI competition with China. Anthropic’s focus on compute governance has intensified in response to China’s rapid advancements in AI.
The release of models like DeepSeek V3 and R1 in late 2024 and early 2025 demonstrated that despite U.S. export controls, Chinese firms were able to develop advanced AI models—in some cases without the guardrails around Chemical, Biological, Radiological, and Nuclear (CBRN) materials and other national security-related risks prioritized by companies such as Anthropic.
In a March 2025 submission to the U.S. Office of Science and Technology Policy, Anthropic emphasized the necessity for robust evaluation frameworks to assess the national security implications of AI systems and the importance of maintaining stringent export controls on AI chips to countries such as China. In early May, at the Hill and Valley Forum, Clark asserted that DeepSeek’s AI capabilities had been over-hyped and the Chinese firm was not a threat. In a pointed comment on both U.S. export controls and compute governance, Clark asserted that: “If they [DeepSeek] had access to arbitrarily large amounts of compute, they might become a closer competitor.” Anthropic has also apparently tested DeepSeek’s V3 and R1 models against ‘national security’-related risks—presumably CBRN, cyber, and autonomy—and determined that these models were not risks. OpenAI executives have made similar statements around China, export controls, and AI-related risks.
There are additional important drivers of both the government and industry views of the types of policy approaches that may be required as we near the advent of AGI. For instance, the Effective Altruism (EA) movement has significantly influenced Anthropic’s philosophy and approach. Early funding from prominent EA figures, such as Jaan Tallinn and Dustin Moskovitz—who also founded Open Philanthropy—provided the financial foundation for Anthropic’s initiatives. While the company has since sought to distance itself publicly from the EA label, the movement’s emphasis on mitigating existential risks and promoting long-term societal benefits clearly continues to resonate within Anthropic’s operational ethos, alongside an orientation toward slowing China’s AI development.3
Open Philanthropy is also a major financial donor to the Center for Security and Emerging Technology (CSET), an early driver of the focus on China as a nation-state actor on AI and the concept that choke point technologies could be used to slow China’s ability to develop advanced AI. Several key former CSET officials within the White House and Commerce Department drove the Biden administration’s efforts on export controls and AI policy and heavily influenced Sullivan’s thinking on China, AI, and technology controls.
The Compute Governance Approach Exacts a Heavy Cost in Industry and Geopolitics
While the proponents of the compute governance approach may have been rightly concerned about how advanced AI could be used by malicious non-state actors such as terrorist groups, the shift to applying the concept almost exclusively to a nation-state, China—home to companies that are among the global leaders in AI development—now appears to have been clearly misguided. The implications of applying this approach to China do not appear to have been exhaustively considered; no effort was made to bring together a broader group of individuals who understand global technology development, China’s political and economic development, supply chains, and commercial roadmaps across the AI and semiconductor stacks.
The impacts and costs of this decision have been immense. Left unexamined and unchecked, it is likely to lead to much higher risks of conflict between the United States and China, including over Taiwan, which remains the locus of the most advanced AI hardware production. In a detailed look at this issue that I authored with technology entrepreneur and futurist Alvin Graylin, we assessed that while there is no “winning” an AI arms race between the United States and China, there are already many losers. For proponents of compute governance, choke point technologies, and EA, this should prompt serious reconsideration of specific policies and narratives—but that review has yet to happen.
First, the pursuit of compute choke points has massively disrupted the global AI semiconductor industry, damaging companies across the AI stack, including Nvidia, AMD, Intel, Lam Research, Applied Materials, ASML, KLA, Samsung, SK Hynix, and many others within their supply chains. The costs of these disruptions run to hundreds of billions of dollars, and the negative impact on these firms’ R&D budgets and their ability to continue innovating and competing will keep piling up. These consequences will be felt over the next decade, given the industry’s long time horizons.
In addition, as of May 2025, China’s retaliation for U.S. controls included bans on critical minerals and rare earth elements (REE) and related products, with costs likely to mount considerably in the coming months. In the short term, the increase will come from production stoppages at EV makers and other companies. It will grow over the long term as well, as Western countries will need to recreate entire REE and critical mineral supply chains for themselves.
Second, the U.S. approach has been to focus primarily on the potential downsides of advanced AGI or ASI and the Decisive Strategic Advantage (DSA) it may hypothetically confer on China, while completely ignoring the many present and future benefits that advanced AI applications can offer a country and a society. These include innovation in critical areas such as healthcare, climate change science, and green technology, to name a few. In focusing on a potential outcome with only speculative timing and implications, the designers of U.S. policies and their think tank proponents almost never acknowledge the concrete benefits that advanced AI will bring to societies, including China.
All the companies developing the most advanced AI models in China are civilian, with few or no links to China’s military, and all are focused on civilian applications of the technology. Yet the justification for U.S. controls has rested on largely misunderstood concepts such as military-civil fusion and the potential for GPUs to be used in supercomputers that could model weapons systems. This is not where the cutting edge of advanced AI development in China is focused.
Approaches such as the AI Diffusion Framework also use justifications regarding potential military end uses. However, there is no evidence that military-linked Chinese firms would ever consider using overseas cloud-based services to either train or deploy inference applications involving the transfer of sensitive data even remotely connected with surveillance, security, or military end uses. No government would allow this.
Third, U.S. policy on AI and China has blocked—and will continue to preclude—progress on erecting an international regime of AI safety and security around the development of advanced models and applications. Without the involvement of the Chinese government and companies in this process, no globally acceptable set of guardrails can be established. This could have grave implications for AI safety and security, especially regarding efforts to mitigate the potential existential risks posed by AI. Significantly, in 2017 Amodei acknowledged this dynamic, warning of the dangers of a U.S.-China AI race “that can create the perfect storm for safety catastrophes to happen”. Some eight years later, Amodei is now a staunch proponent of winning the race for AGI with China, despite the fact that the exact risks he cited in 2017 are much more pressing in 2025.
Instead of supporting an open process in which individuals, companies, and key nation-states can collaborate and discern where AI development is headed, the U.S. government and the leading AI firms developing proprietary models are rapidly moving towards a closed development environment. This will mean the most advanced capabilities will not be publicly released, and other governments, including U.S. allies, will increasingly be left out of the loop, with diminished ability to determine how close leading players are getting to creating AGI. Bostrom warned against just such a situation: “…direct democracy proponents, on the other hand, may insist that the issues at stake are too important to be decided by a bunch of AI programmers, tech CEOs, or government insiders (who may serve parochial interests) and that society and the world is better served by a wide open discussion that gives voice to many diverse views and values.”
Global Safety Efforts at Risk: The Fragile Gains of AI Diplomacy
Caught in the middle of this are other key actors, such as the United Kingdom, which in 2023 launched the Bletchley Park Process precisely to bring together key stakeholders to discuss how to regulate frontier models and ensure AI safety and security. Much progress was made over an 18-month period, with the establishment of an AI Safety Institute (AISI) network, capacity-building within government-associated bodies, and the linking of leading AI labs with the AISIs to begin researching and testing methods for assessing advanced models and ensuring they would not enable the development of CBRN weapons or facilitate advanced cyber operations.
The need for this process is clear, and the UK’s focus has remained steadfast, even as early signs from the Trump administration suggested less emphasis on this issue. The Trump administration has thus far been more focused on expanded export controls and a new version of the AI Diffusion rule—aiming to constrain the ability of Chinese companies like DeepSeek to develop advanced AI—and on collaborating closely with leading U.S. model developers to ensure the government is able to take control as the AGI inflection point nears.
This misplaced focus could ultimately fully derail the AISI process. At the Paris AI Action Summit in February 2025, I participated in a closed-door meeting as China debuted its own AI Safety and Development Institute, a network of AI safety-focused organizations that is doing advanced work on how to test models and reduce risks around fraud and other key AI safety issues. In addition, I participated in a panel with leading AI developers and safety experts on how the industry could continue collaborating on safety issues even in the absence of clear government policy at the national or international level.
This issue is particularly pressing as models become more capable and Anthropic and OpenAI suggest that AGI may be just two or three years away. If U.S. efforts to contain China via compute governance result in the scuttling of any chance to reach an international consensus on AI governance, this will cause collateral damage of the highest order; it increases risks around both AI deployments and geopolitical conflict involving the United States and China.
This latter issue is becoming more salient as policymakers in both Beijing and Washington awaken to the realization that the cat is out of the bag. U.S. officials are now explicit about the twin goals of promoting U.S. AI and preventing Chinese companies from making progress on AI, guided by a belief that whichever country’s companies reach AGI first will have a Decisive Strategic Advantage (DSA). Bostrom raised the idea of DSA very early, well before the advent of ChatGPT, but in the context of how it might emerge under open or closed scenarios of AI development, not in the context of geopolitical great power competition. Officials in Beijing are certainly aware of this narrative by now but have likely not thought through the full implications of this framing.
The DSA Narrative Fuels a Zero-Sum Race with No Safe Off-Ramp
The DSA framing is very dangerous because there will be considerable uncertainty as companies such as Anthropic, OpenAI, and DeepSeek/Huawei get closer to capabilities that will be considered to constitute AGI or even ASI. This uncertainty could prompt consideration of preemptive cyber or kinetic attacks on AI data centers known to harbor training capacity for the most advanced models. Neither country will be willing to allow the other to reach AGI first without some intermediate response.
Proponents of export controls and the race to DSA whose ‘best guess’ is that the optimal strategy would be to pursue an approach based on “isolating our worst adversaries and eventually putting them in a position where they are better off taking the same bargain as the rest of the world” will be in for a rude awakening about the realities of international superpower politics if this dynamic is allowed to continue unchecked. If, for instance, Beijing believes U.S. companies, using advanced GPUs manufactured in Taiwan by global foundry leader TSMC, are nearing AGI, this could prompt Beijing to take action against Taiwan that it would not otherwise have considered. This alone is a huge escalation of real risk for Taiwan resulting directly from policies based on compute governance and DSA approaches.
Critics of the DSA framing have questioned projections that assume that rapid progress towards AGI will occur merely with the deployment of larger numbers of AI software researchers. While I agree with Jack Wiseman, the author of a recent excellent article on this issue, that scenarios such as AI 2027 (which plays out a notional race to AGI or ASI between the United States and China featuring proxies for OpenAI and DeepSeek) assume that leaders will be “extremely cavalier” about the strategic balance in the face of DSA, the more important factor could be the perception in Beijing or Washington that the other side is ahead. Without dialogue, and with proprietary model developers dominating the landscape and collaborating closely with their governments, the potential for misunderstanding and escalation grows, becoming the primary risk around AI development.
In what now appears to be a self-fulfilling prophecy that the United States and China are in an ‘arms race’ to get to AGI first, fueled by fear of the consequences of one side crossing the DSA threshold, China has several advantages. The emergence of innovative companies such as DeepSeek and the continuing efforts of technology major Huawei to revamp the entire semiconductor industry supply chain in China to support the development of advanced AI hardware, illustrate the difficulty of slowing—let alone halting—the ability of Chinese firms to keep pace with U.S. AI leaders. Even former Google CEO Eric Schmidt now basically admits that the export controls have not only failed but in fact have served as an accelerant to China’s technology advances in AI.
China has major advantages in the race to deploy AI at scale, such as a long-term energy production strategy. The vast majority of these deployments will be consumer-facing (for example, through agentic platforms that benefit citizens via healthcare innovations) and enterprise-focused (for example, driving improved productivity). In other words, applications with no connection to China’s military modernization.
Currently, the development environment around AI models and applications is highly competitive, with around a dozen major players in each market, along with many more startups. Competition to get to AGI means that there will be a smaller number of players demanding higher levels of compute. U.S. controls will complicate the ability of Chinese firms to maintain access to large quantities of advanced compute, but this pressure will ease over time as domestic sources ramp up.4 If certain breakthroughs in model development and platform deployment lead either government to believe the other side is pulling ahead in the ‘race to AGI’, this is likely to cause serious distortions in the way governments and companies will choose to interact in AI development, with unknown implications, particularly on bilateral relations.
Here the risk is that at some point key players such as the National Development and Reform Commission (NDRC) in China, which is responsible for approving AI data centers and tracking overall compute capabilities, and the U.S. Department of Commerce, which has similar responsibilities, could decide that the race is coming down to one or two companies, and both governments would then choose to optimize these companies’ access to advanced compute—the scenario laid out in AI 2027. This would turn these companies into targets of both regulatory measures and potentially other actions from either government, all compounding the risk of conflict as companies approach something resembling AGI.
Under this scenario, collaboration around the real risks of advanced AI is likely to take a back seat, as neither government will want to consider cooperation in slowing a runaway race toward AGI. This unwillingness to collaborate carries unknown but almost certainly negative consequences for the industry, the U.S.-China relationship, and global efforts to understand the risks of advanced AI deployments and erect baseline guardrails. This is a very dangerous and unstable world, full of existential uncertainties as both sides consider the implications of the other reaching and acting on AGI.
Building Guardrails and Reimagining the Stakes Before It Is Too Late
We are now in a critical window. Both the United States and China have the compute necessary for at least the next two generations of model development, which will mean exponential progress towards AGI throughout 2025 and into 2026 and 2027. Over the next 24 months, many advanced GPUs will flow to AI data centers around the world as AI products, including increasingly sophisticated AI agent applications, are deployed and as open-source models and weights become available to a wider group of actors. This will be a critical period in which to develop guardrails around AI development before a U.S.-China race to AGI becomes its dominant driver. It represents a chance to avoid irreversible damage to U.S.-China relations as the potential for conflict accelerates.
How can we avoid missing the window and heading into a world where AI competition itself poses existential risks?5 Some supporters of export controls assert that critics of the compute governance framework think the proper solution is to do nothing. In reality, critics assert that the best approach is to stop doing counterproductive things that raise the risk of conflict and start doing things that reduce risks, laying the groundwork for a peaceful and stable coexistence.
First, develop a bilateral mechanism between the United States and China to discuss AI development and AGI in particular. This will require a major effort to stabilize the bilateral relationship, build trust around technology-related issues that have become ideologically infused, and find agreement that transnational issues, such as climate change, pandemics, and the risks around AGI necessitate the two most important players in these domains coming to grips with the implications of endless competition. The United States has successfully negotiated with China on controls for advanced technologies in the past, such as bans on nuclear weapons testing and proliferation. These discussions could and should include confidence building measures such as reducing controls on some technologies, such as hardware used for AI inference.
Second, revamp the Bletchley Park Process, which has been the only serious channel for discussing global AI governance, with full participation from both the United States and China, along with other major players such as India, Singapore, Japan, France, and Canada. Critical to this would be for the U.K. to continue to play a mediating role between the United States and China. In addition to discussing things like CBRN and cyber risks, a reconstituted Bletchley Park Process and AI Safety Institute Network process would need to focus on rapidly changing capabilities (such as agentic AI) and move quickly to boost the capacities of the Safety Institutes, test models (including with third-party companies), and establish generally acceptable criteria for assessing how close models and platforms are to AGI.
The 2026 India AI Action/Safety/Security Summit, while far off, could be a last chance to do this, but many hurdles remain to refocusing the effort on these issues and blunting the considerable fallout on the process from the U.S.-China dynamic. A major contribution to this effort was a meeting in Singapore and a paper on AI governance issued in May, The Singapore Consensus on Global AI Safety Research Priorities. Many key industry thought leaders attended the meeting, including representatives of major AI developers and think tanks that are strong supporters of the compute governance paradigm—OpenAI, Anthropic, RAND, and others—along with a number of Chinese AI leaders, safety researchers, and academics, and representatives from the UK, United States, Singapore, Japan, and Korea AI Safety Institutes.
Significantly, most of the AI safety community and prominent thought leaders and technologists (including machine learning pioneer Yoshua Bengio and Swedish-American physicist, machine learning researcher, and author Max Tegmark, who organized the Singapore event) support the inclusion of Chinese organizations. This is critical for continuing the progress on the AISI process.
No leading Chinese companies developing AI were in attendance and the Chinese government’s position on these issues remains unclear. In addition, a series of major and harsh U.S. government actions targeting the Chinese AI sector is set to be rolled out in the coming months. Already, several leading Chinese AI organizations have been put on U.S. blacklists, complicating their participation in international AI conferences. These new U.S. actions will work against efforts by the AI safety community to change the current troubling trajectory.
Third, push the open sourcing of model weights to improve transparency. Moving the entire industry toward an open-source model should be the ultimate goal of any regulatory process. With companies developing proprietary models failing to provide the transparency that open-source development offers, the risks stemming from uncertainty about the AGI timeframe will continue to rise to more dangerous levels.
Fourth, in addition to their participation in the AISI network process, include Chinese AI firms in all critical industry fora, in particular the Frontier Model Forum (FMF). This would help build trust among key players in the industry, particularly OpenAI and Anthropic, that Chinese companies, with the backing of the Chinese government, are internally pursuing the types of responsible scaling policies that are becoming the norm in the industry. It would foster a sense of shared responsibility among technology leaders to help governments develop responsible policies and avoid surprises, contributing to eventual serious bilateral discussions about AGI and DSA.
Finally, there is a need for a major educational campaign to widen the discussion around these issues beyond a small number of AI labs and shadowy government departments driven by zero-sum conceptions around the development of advanced AI. The robust AI safety community, sidelined to some degree at the Paris Summit, can play a major role here by drawing attention to the dangers of U.S.-China AI competition derailing the push for a global AI safety framework. There is much work to be done.
The author will be participating in a number of dialogues around this issue in the coming months, including in Oxford and Paris, and at the World AI Conference in Shanghai in late July.
In the referenced paper, Bostrom makes this judgement: “The likelihood that a single corporation or a small group of individuals could make a critical algorithmic breakthrough needed to make AI dramatically more general and efficient seems greater than the likelihood that a single corporation or a small group of individuals would obtain a similarly large advantage by controlling the lion’s share of the world’s computing power.” ↩︎
A mid-May trip by President Trump through the Middle East saw a number of AI infrastructure-related deals signed, though there is still considerable opposition from China critics and members of Congress to selling large numbers of advanced GPUs to countries including Saudi Arabia and the UAE, driven by concerns about potential future diversion or eased access to advanced GPU clusters for Chinese companies. ↩︎
In Amodei’s late 2024 essay, this approach is clear in comments such as, “My current guess at the best way is via…a strategy…in which a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries’ access to key resources like chips and semiconductor equipment. This coalition would on one hand use AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition’s strategy to promote democracy (this would be a bit analogous to ‘Atoms for Peace’). The coalition would aim to gain the support of more and more of the world, isolating our worst adversaries and eventually putting them in a position where they are better off taking the same bargain as the rest of the world: give up competing with democracies in order to receive all the benefits and not fight a superior foe.” ↩︎
Some proponents of export controls now admit it could take a decade before the controls produce a clear advantage for U.S. developers. We are likely to be close to AGI well before then, and China’s domestic semiconductor industry will have made huge strides over that ten-year period, making this claim dubious at best. ↩︎
For more recommendations on these issues, see: There can be no winners in a US-China AI arms race. MIT Technology Review. January 21, 2025. https://www.technologyreview.com/2025/01/21/1110269/there-can-be-no-winners-in-a-us-china-ai-arms-race/ ↩︎
What Counts as Legal Knowledge in the Age of AI?
Let’s face it. Most of our students now use AI for everything, even if we all pretend otherwise—from summarizing dense readings to drafting emails, preparing for class, writing thesis proposals, entire essays, and yes, even PhD chapters. According to a 2024 study by the Digital Education Council, 86% of students already use AI in their studies. While AI cannot yet consistently produce work that passes on its own, a literate user—one who understands prompting and the logic of large language models—can produce results that are passable, even polished, at the graduate level. But students are not the only ones. Many academics now rely on AI tools as well—for course preparation, generating PowerPoints, writing emails, responding to students, and increasingly, even marking papers. In all these uses, AI has become an infrastructural support for academic life.
And yet, while there are clear ethical concerns tied to these practices, those concerns remain blurry, fragmented, and inconsistently articulated. For some, using AI is little more than the 21st-century equivalent of using spellcheck or Google; for others, it is a serious breach of academic integrity. Most people, though, fall somewhere in between—and the jury is still very much out. What’s striking is that, for now at least, the presence of these ethical questions does not appear to be deterring anyone.
This raises deeper and more uncomfortable questions. If our methods of assessment and our expectations about student preparation are out of sync with what’s actually happening in real academic and professional settings—and if the skills we’re cultivating are no longer tightly aligned with what’s valued in professional practice—how should legal education evolve? This is not a problem that can be answered in the abstract. Legal education is always situated, embedded within specific contexts—jurisdictional differences, access to technology, language proficiency, AI literacy, professional cultures, and the commercial environment all shape what is possible and what is desirable. A student in a Canadian law school and one in a rural university in Egypt will experience the AI transition very differently. Still, the core questions must be asked. What should academic legal education consist of today? What counts as academic knowledge in an age where information retrieval, summarization, and even basic drafting can be outsourced to machines? How will professional certification evolve when traditional benchmarks—writing a PhD, publishing papers, mastering citations—can be reached far more quickly with the right tools? Does it still make sense to ask someone to labor over a dissertation for three years if something that looks and feels credible can be produced in twelve months or less? What is the purpose of prioritizing publications, when a seasoned academic with prompting fluency can churn out a well-structured article in a weekend?
These are not rhetorical provocations. They are questions that demand rethinking—not just of what we teach, but of what we consider knowledge to be, and what we mean when we say someone is “qualified.”
1. What Counts as Legal Knowledge Now?
Legal knowledge has long been associated with a particular repertoire of skills: reading and interpreting texts, recalling precedent, articulating coherent arguments, and expressing these in grammatically and rhetorically competent prose. It has also been tightly linked to form—essays, case notes, doctrinal analysis, briefs, and theses—all of which have historically signified not just the student’s familiarity with legal content, but their capacity to produce this content in institutionally approved formats. However, the rise of AI in both legal education and legal practice invites us to re-examine what we mean by knowing the law. When generative tools can retrieve case law, draft arguments, summarize dense theory, and mimic citation styles with remarkable ease, the emphasis begins to shift from traditional forms of content production to something else—perhaps to evaluative judgment, critical selectivity, and most crucially, to a rearticulation of academic discernment.
Yet this is not an epistemic revolution. It is a reconfiguration—one in which long-standing forms of knowledge practice are revalued, rather than replaced. Academic discernment has always been central to legal scholarship. The ability to tell the difference between a strong argument and a superficial one, to identify what matters in a body of case law, to recognize patterns across seemingly unrelated domains, or to spot conceptual slippages—these are not new skills. What is changing is their visibility and centrality. In an environment where machines can now do much of the assembling, retrieving, and even drafting, what remains human (at the moment) is the capacity to discern—to question the framing, to test the coherence, to notice the subtleties that fall through the algorithmic net, and, if anything, the desire to have humans perform this role.
This shift is not limited to students. Academics too are embedded in this transition. Many already use AI tools to prepare lectures, generate PowerPoints, draft emails, mark papers, and even scaffold research outputs. Academic labor is being reorganized—not replaced—through these new tools. If anything, the role of discernment is heightened: knowing what not to trust, what to discard, where to intervene, and how to shape the raw material that machines now produce at scale. In this sense, academic legal knowledge is not being hollowed out—it is being redirected toward its more interpretive, critical, and curatorial dimensions.
This evolution mirrors what is happening across the legal professions more broadly. The corporate lawyer drafting a contract can now generate a boilerplate version in minutes, tailored to jurisdiction and subject matter. But the value of the lawyer lies not in producing that initial draft—it lies in knowing where the liabilities hide, what the client’s risk appetite is, what must be bespoke. The judge who uses AI to summarize case law or identify doctrinal patterns is not thereby outsourcing her judgment; her role shifts toward distinguishing the precedent that matters from the precedent that merely appears relevant. The arbitration lawyer can feed facts and legal issues into a well-prompted AI system and receive a draft written brief in twenty minutes—but she is still the one who decides what tone to strike, which argument to lead with, and which parts to strategically omit. In each case, discernment—the ability to evaluate, to contextualize, to anticipate implications—remains central.
And yet, this is only part of the picture. There are also strands of academic and professional life where discernment is not simply reframed, but minimized—or even rendered unnecessary. Some judges may indeed sign off on AI-generated drafts wholesale, especially in low-stakes or high-volume procedural contexts. Some law firms may come to expect that first drafts of memos or client communications be entirely AI-generated—not because the lawyers are being negligent, but because the time constraint and cost-benefit calculus make it the rational choice. In academic settings, there are already entire syllabi, slide decks, and draft papers being composed through automated tools. And again, this is not necessarily a failure of professional ethics—it may be precisely what allows an overworked academic or legal professional to meet expectations and keep the institutional machinery moving.
In such cases, discernment doesn’t vanish altogether, but it becomes selective. It is no longer a universal requirement, but a contingent one—invoked where complexity, novelty, or institutional risk demands it. For routine tasks, for repetitive forms, for standardized outputs, AI may generate results that are not only sufficient, but preferable. The idea that legal expertise consists in writing everything from scratch is increasingly untenable, not because the profession is collapsing, but because the tools are becoming normalized and institutional logics—faster turnaround, lower cost, backlog reduction—begin to favor them.
This destabilizes not only modes of work, but the very idea of legal expertise. Historically, the stature of the legal expert—whether scholar, practitioner, or judge—rested not only on knowledge of content, but on the difficulty of acquiring and deploying that knowledge. The professional mythology of law relied on this asymmetry. Legal knowledge was complex, arcane, interpretively rich. It required years of study, training, and cultivation. And it came at a premium—epistemically, financially, socially. The figure of the jurist, whether imagined as the learned judge or the scholarly sage parsing the “right answer” from a sea of doctrine, derived authority precisely from this cultivated exclusivity.
But what happens to that mythology when someone with no legal training can obtain a reliable legal opinion from a chatbot in under five minutes? What happens when clients come to believe—not unreasonably—that paying a lawyer for an hour of research is optional, if not obsolete? When early-career academics are compared not to each other, but to generative systems that can produce publishable text on demand? These shifts don’t just affect workload or workflow. They erode the epistemic scarcity on which professional mystique has long depended. The result is a recalibration of authority. Expertise may no longer mean “I know what others do not”, but rather, “I know how to navigate, interpret, and problematize what machines can already produce”. This does not dissolve legal professionalism or academic competence, but it does displace their traditional foundations. What remains valuable is not mastery in the old sense, but a kind of epistemic choreography: knowing when to trust, when to doubt, when to slow down the automation, and when to let it run.
2. Rethinking Professional Certification: Who Gets to Say Who’s Qualified?
The transformations underway in legal knowledge and labor do not stop at how work is done—they cut directly into who is licensed to do it. Historically, professional certification in law and academia has functioned not only as a gateway to practice, but as a mechanism for sustaining the epistemic and social authority of the professions. Law schools, bar associations, and doctoral programs have long served as gatekeeping institutions—controlling not just access to legal or academic roles, but shaping what counts as legitimate knowledge, who counts as an expert, and how that expertise is measured. But if, as we saw in the previous section, large parts of legal and academic work are shifting toward routinized or automation-compatible tasks—if discernment is no longer always needed—then a fundamental question arises: why require years of study, credentialing, and training to perform work that can now be done effectively, or at least sufficiently, by machines or by individuals with minimal legal education? If the associate’s job, or the teaching assistant’s job, is increasingly to interface with systems, respond to outputs, and lightly edit automated drafts, then what does it mean to say that only someone with a JD, LLB, or PhD is qualified to perform that role?
There is a real risk that universities and professional bodies will lose their monopoly on credentialing—not because they are failing, but because the definition of “qualified” is itself being contested. Already, we can imagine private firms offering short-term, intensive induction programs tailored to very specific, low-discretion tasks in legal practice: e-discovery, contract review, litigation support, drafting procedural memos. A highly reputable firm could launch a three-month certification pathway offering hands-on training in AI tool use, document handling, and firm protocols. Graduates of that program may be just as attractive to employers—especially for entry-level or narrowly scoped roles—as someone who has spent four years studying jurisprudence, philosophy, and comparative constitutional law.
This is not dystopian; it is efficient. From the perspective of firms and institutions under pressure to reduce costs, increase turnaround, and adapt to technological realities, such private credentials may not only be acceptable—they may be preferable. Why pay for the expensive epistemic overhead of a fully trained legal scholar when the task at hand demands none of that? Why require a junior tutor in a large undergraduate law course to have a PhD, when all they are doing is grading essays largely written (in whole or part) with AI tools, or responding to emails that are themselves AI-generated? The tutor in this scenario becomes a human interface, a compliance layer, a figure of soft oversight.
But the loss here is not merely symbolic. What risks erosion in this reconfiguration is the depth and breadth that liberal arts education once provided—not as ornamental, but as formative. Legal education has never been just about professional readiness. It has been a vehicle for cultivating intellectual autonomy, critical reflection, historical memory, and ethical sensitivity. A student trained in jurisprudence or legal theory may never use those frameworks directly in day-to-day practice—but they are more likely to ask difficult questions about justice, interpretive ambiguity, institutional bias, or historical contingency. The liberal arts prerequisite of legal education functioned as a slow-burning safeguard against technocratic narrowness. If certification becomes purely task-oriented, we risk producing professionals who can perform tasks but cannot interrogate them.
This is compounded by another danger: dependency. If entire workflows are routinized through AI systems—so much so that new professionals are trained to trust the output unless otherwise directed—then even the capacity for oversight may atrophy. The role of the human becomes supervisory, but in name only. And when supervision becomes habitual and unexamined, the very conditions for discernment begin to wither. One does not oversee what one does not understand; one does not question what one does not recognize as contingent. Moreover, the deskilling of entry-level professionals may create a structural ceiling. How will a junior associate who has never constructed a legal argument from scratch acquire the tacit judgment to challenge or improve an automated brief? How will a junior academic develop a voice in a system where originality is de-emphasized in favor of fluent synthesis? Professional development becomes hollowed out; learning becomes a kind of custodial engagement with machine outputs. The result may be a generation of legal professionals who can work faster, but not think deeper.
The mythos of expertise—the idea that one has undergone a process of intellectual and ethical formation that renders one trustworthy—does not survive this transition unscathed. Replacing it with metrics, micro-credentials, and prompt-badges creates a more agile, more modular model of competence—but also one more fragile, more susceptible to epistemic drift. The authority of the professional has always rested, at least partly, on the belief that their judgment was forged in contexts that exceed any immediate task. Remove that context, and the authority begins to resemble that of a systems operator: legitimate only insofar as the system functions.
This is not an argument for nostalgia. It is, however, a call for clarity. If we are to move toward new models of certification—more flexible, more responsive, more plural—we must also ask what kinds of knowledge we are willing to devalue, what kinds of professionals we are prepared to produce, and what epistemic risks we are prepared to normalize. Certification is not just about qualification; it is about the stories we tell about why someone should be trusted. If professional certification is no longer the exclusive domain of universities and if legal knowledge is fragmenting into multiple layers of automation, discernment, and interface management, then what is left for universities to do? What is their role in a landscape where practical competence, task-oriented micro-credentials, and platform-based certification can bypass traditional degrees entirely?
These are not hypothetical scenarios—they are emerging realities. And they pose a direct challenge to the historical identity of the university as both a gatekeeper of professions and a steward of deeper intellectual traditions. For much of the modern era, the university held a dual mandate: on the one hand, to prepare individuals for specific careers through structured training and assessment; on the other, to cultivate broader intellectual capacities—critical thinking, ethical reflection, historical perspective—that exceed any particular task. This was especially pronounced in legal education, which combined doctrinal knowledge with elements of philosophy, history, politics, and moral reasoning. Law was not merely taught as a set of rules, but as a living discourse shaped by cultural narratives, power structures, and ethical dilemmas. Curriculums often included courses on economics, sociology, history, and political theory, reflecting a belief that understanding law required engagement with the wider intellectual currents that inform societal organization and human behavior.
Even the most practice-oriented law degrees embedded their training within a broader academic culture, one that insisted (at least nominally) on something more than procedural fluency.
That “something more” is now at risk—not only because AI can perform many of the surface tasks once used to measure competence, but because the institutional relevance of universities is being challenged by alternative providers who promise speed, focus, and cost-efficiency. What, then, can universities offer that neither platforms nor private firms can easily replicate? The answer may lie not in abandoning the liberal and theoretical dimensions of legal education, but in doubling down on them—strategically and visibly. If discernment is becoming a selective rather than universal requirement in legal and academic labor, then universities must become the sites where discernment is cultivated as a public good. Not everyone will need deep jurisprudential reflection in their day-to-day legal work—but someone must be trained to ask whether the automation of precedent application is distorting doctrine; whether the data used in sentencing algorithms embeds bias; whether our legal vocabulary still reflects the moral and social transformations of our time.
This is not an argument for elitism, but for clarity of institutional function. If universities attempt to compete with private certification on efficiency, they will almost certainly lose. They are too slow, too regulated, too fragmented. But if they reassert their value as institutions where epistemic habits are not just trained but interrogated—where knowledge is not just used but historicized and questioned—they may retain a critical role in shaping the professions of the future, rather than merely manufacturing them.
To do this, however, universities must rethink their own pedagogical and structural assumptions. They cannot continue to teach and assess students as though AI does not exist. Nor can they continue to equate academic excellence with quantity of output, when speed and fluency are now artificially replicable. If universities are to defend and renew their role, they must take seriously the question: what cannot be trained in three months? What kind of knowledge resists automation—not because it is obscure or inefficient, but because it deals with ambiguity, judgment, pluralism, and the contestability of meaning?
There is also a political dimension. The university, at its best, functions as a counterweight to the economization of knowledge. It resists the reduction of value to utility, of insight to immediacy. If that role is to survive, it must be asserted deliberately. The very existence of low-discernment professional tracks makes the case for high-discernment spaces even more urgent. Not as luxury goods, but as forms of epistemic infrastructure without which the legal system becomes shallow, brittle, and narrowly optimized. In this light, the future of the university is neither to vanish nor to universalize itself. It is to specialize in cultivating slow, reflective, critical forms of legal knowledge that complement—but do not compete with—the automated, the promptable, and the modular. It must become a site of second-order thinking in a world increasingly saturated with first-order output. Not everyone will need that training. But someone must have it. And someone must be able to teach it.
3. The University as a Place of Alternative Ordering: Reclaiming Purpose Beyond Utility
If professional certification is no longer the exclusive domain of universities, and if legal and academic work is increasingly divided between high-discretion and low-discretion tasks, then the university must ask itself not just what it teaches, but what it is. What makes it different from a training program, a certification platform, or a corporate induction course? What kind of place does it claim to be, and why should anyone enter it?
To answer this, we must shift from thinking of the university as a neutral space—a zone where knowledge circulates—to thinking of it as a place: a structured, lived, and symbolically charged site with a particular relationship to time, authority, and subjectivity. More precisely, the university can be understood as a heterotopic place. In Michel Foucault’s sense, a heterotopia is a place that stands in contrast to the ordinary arrangements of the world. It mirrors society while simultaneously inverting or suspending its norms. It is where different temporalities operate, where identities can be reconfigured, where alternative logics are not only imagined but rehearsed. The university, at its best, has always been such a place.
In legal education, this heterotopic character has been especially pronounced. It is where students have encountered not just legal rules, but legal thought—where they could wrestle with jurisprudence, historical injustice, normative theory, or comparative frameworks in ways that the profession often lacks time or tolerance for. It is where they could argue in bad faith to test an idea, take intellectual risks without professional consequence, and encounter worldviews radically different from their own. These are not incidental features of the university—they are constitutive of its function as a place where human beings can become something other than efficient performers of tasks.
Yet this role is now endangered—not because the university has lost its value, but because the epistemic, economic, and technological forces that once deferred to it are beginning to bypass it. AI renders routine legal and academic work more efficient; credentialing becomes modular and market-driven; institutions increasingly seek measurable outputs. In such a world, the university is pressured to justify itself in transactional terms: are students employable? Are programs scalable? Is content optimized? But to accept these terms wholesale is to evacuate the very distinctiveness that defines the university as a heterotopic place—a site ordered differently, where knowledge is valued not for what it delivers, but for how it transforms.
If the university accepts these terms too readily, it risks losing exactly that. It cannot outcompete the speed of platforms, the agility of private firms, or the precision of AI-driven instruction. What it can offer, and must continue to offer, is something those systems cannot replicate: a place to imagine otherwise. A place to experiment, to converse, to slow down, to inhabit uncertainty. A place to reflect not just on how to do the work, but whether the work should be done at all. A place where students encounter ideas that do not yet have application, and people they would not otherwise meet.
AI has a role in this place—but only if subordinated to those ends. Used well, AI can expand access, scaffold curiosity, and amplify learning. Used poorly, it accelerates the flattening of educational experience into a series of plausible outputs. If the university is to survive as a heterotopic place, it must insist that not all value can be measured in speed, accuracy, or optimization. It must defend the time it takes to think, the uncertainty required to learn, and the risk of failure as a necessary part of formation.
But this cannot be accomplished by nostalgia. The university cannot simply repeat the gestures of a bygone era and expect to retain its authority. It must change—not to conform to the logic of efficiency, but to better articulate its difference from it. That change begins with a sober recognition: the university’s monopoly on certification is gone, and its role as a transmitter of content is no longer exclusive. What remains is its capacity to be a place—a heterotopic place—where knowledge is not just acquired, but reimagined. Where the legal profession, and indeed society, can look to find not just trained individuals, but thinkers capable of asking what legal knowledge is for. That, in turn, demands a rethinking of the university’s structures, rituals, and pedagogies—so that its distinctiveness is not only preserved, but made newly relevant. The university must change. The next section begins to explore how.
4. The University Must Change—Now
If the university is to survive as a site where the dominant logics of utility, speed, and procedural efficiency are not simply mirrored but interrogated, then it must recognize how deeply its foundations are being shaken. The disruptions brought by AI are not merely technological; they are epistemic, institutional, and cultural. They demand not an update, but a transformation. But not all change is renewal. Some change leads to erosion, dilution, and loss. The task now is to discern what must be rethought, and what must be defended.
The most urgent shift is this: education must be reclaimed as transformative, not transactional. The university cannot be reduced to a knowledge-delivery platform, and learning cannot be reduced to credential acquisition. A legal education is not a checklist of competencies. It is a formative process that shapes how people think, argue, doubt, imagine, and judge.
This formation cannot be achieved through interface alone. It requires dialogue, friction, surprise—encounters with other minds and other ways of seeing. It requires moments of silence, boredom, failure, and risk. None of this can be outsourced to systems designed to optimize for predictability and speed.
And yet the temptation to proceduralize is growing. Increasingly, there is pressure to view education through the lens of operational efficiency: standardized content delivery, modular assessments, uniform feedback protocols. In such a view, faculty become interchangeable facilitators, and those most invested in high-discernment education—those who linger over complexity, who defend the value of ambiguity, who challenge grade inflation and resist over-scripting the curriculum—are cast as obstructive or dispensable. This is not merely a managerial error. It is an epistemic tragedy.
Tenure, in this context, is not an antiquated privilege. It is a structural commitment to the possibility of independent, unhurried, and sometimes unpopular forms of thinking. It can protect the kind of intellectual work that does not conform to performance metrics, the kind of teaching that does not scale neatly, and the kind of judgment that cannot be automated.
If the university is to retain its distinctiveness, it must protect faculty not in spite of their resistance to proceduralization, but because of it.
This also requires a shift in how we relate to students. Faculty must stop assuming that students’ use of AI tools is inherently deceptive. Often, it is adaptive. It reflects an instinct to meet demands with available tools, to synchronize with a world moving faster than we are.
The challenge is not to discipline this instinct, but to direct it—to invite students into the co-creation of new norms for learning, writing, and thinking with AI. The goal is not to punish students for adapting, but to ask with them: what is lost in automation? What remains irreducibly ours?
None of this is simple. Rethinking legal education in the age of AI means redesigning assessment, rethinking pedagogy, defending intellectual labor, and resisting the gravitational pull of procedural convenience. It means insisting that some things—conversation, reflection, judgment—cannot be scripted into rubrics or replaced by dashboards. It means slowing down in a culture obsessed with acceleration. It means allowing students to become more than users of systems—to become thinkers who can critique systems, live with uncertainty, and make judgments in contexts where rules do not suffice.
This also requires rethinking our assumptions about authorship and creativity. AI does not simply automate—it generates. It produces arguments, summaries, citations, and structure in ways that make the boundary between learning and outsourcing blurrier than ever. Unlike past tools such as citation managers or search engines, LLMs intervene in the production of meaning itself. And unlike contract cheating—where students purchase complete assignments—AI tools invite co-authorship. Students who use AI are not always avoiding work; they are often engaging with it differently, through prompting, filtering, and revision. The pedagogical question is shifting from “Did you write this?” to “How did you arrive here?” That distinction matters. It requires moving away from punitive models and toward an ethic of intellectual transparency. If creativity is no longer solely a matter of producing from scratch, but also of curating, framing, and editing, then assessment must evolve to capture those forms of engagement. The risk is not that students will cheat, but that we will fail to teach them what good use looks like.
This is not a nostalgic vision. It is a forward-looking one. Furthermore, it is rooted in a commitment to the university as a place where people are formed, not processed; where education is lived, not downloaded. A place where law is not simply learned, but reimagined. A place where students are invited to reflect not only on how to work in the world, but on how the world itself might be otherwise.
And we must remember that these questions will not land evenly. The future of legal education will be shaped differently in Cairo than in Cambridge, in Accra than in Amsterdam. Universities are situated institutions. Their reinvention must be globally attentive and locally grounded.
This is not the end of academic legal education. But it is a turning point of a different order. Previous reforms have shifted curriculum, pedagogy, or access. This moment, however, threatens the epistemic foundations themselves: the authority of judgment, the value of formation, the time needed for thinking. Whether what follows is renewal or retreat will depend on how seriously we take this challenge—and how confidently we respond, not with resignation, but with redesign rooted in clarity of purpose.