A Costly Illusion of Control: No Winners, Many Losers in U.S.-China AI Race
The aggressive U.S. approach to China risks triggering conflict between the two nations while upending any progress toward a global framework for AI safety and security
During the past three years, in the wake of the rapid development of generative artificial intelligence (GenAI), the already tense technology-related competition between the United States and China has intensified. AI has become the clear focus of U.S. efforts, which aim to slow the ability of Chinese companies to develop advanced models. This dynamic has spilled across the globe, affecting supply chains and countries throughout the AI stack. The ‘geopolitics of AI’ has become the primary battleground between the United States and China, with unknown but increasingly negative externalities.
At the same time, this competition has impeded progress towards an international framework to ensure the safe and secure development of AI. These issues, long discussed in academic literature, have now escaped the confines of the ivory tower. Artificial general intelligence (AGI) and artificial superintelligence (ASI) have gone from theoretical concepts to apparently achievable outcomes, on a much shorter timeline than experts believed possible just five years ago.
This essay will examine how we arrived at this point, the dangers of allowing the current trajectory to continue unchecked, and how we might escape the zero-sum, winner-takes-all paradigm. That paradigm has seized Washington (including policymakers and most think tanks) and threatens to heighten the risk of conflict between the United States and China (including potentially over Taiwan), with massive ramifications for the global economy and the future of AI development.
AI Now Dominates U.S.-China Technology Competition
Several key strands of thought have led to a situation in which the United States and China find themselves locked in a struggle to dominate AI, with many, particularly among U.S. policymakers and think tanks, characterizing the competition as zero-sum. According to this narrative, whoever gets to something resembling AGI wins, because they will use this advantage for strategic gain. The implications of this framing are profound for the world, given that companies in the United States and China are far and away the global leaders in the development of so-called frontier AI models.
The current U.S. export control suite targeting the AI stack stems largely from two wellsprings:
First, the concept of ‘compute governance’—the assertion that compute hardware should be restricted as a way to control AI development—and the gradual identification of China as the most important target of such restrictions. This has been coupled with the second wellspring: the widespread embrace within the U.S. policymaking community of the idea that ‘choke point technologies’ can be leveraged to constrain a rival. This thinking is particularly evident in former Biden administration National Security Advisor Jake Sullivan, congressional critics of China, and DC-based think tanks that have acted as uncritical supporters of U.S. government policies targeting China and the AI stack.
The concept of advanced and scaled-up compute as a strategic chokepoint was initially popularized in AI alignment circles, influenced by broader observations in economics and geopolitics. Nick Bostrom of the Future of Humanity Institute in Oxford indirectly seeded some of these ideas through early papers addressing the strategic implications of openness in AI development, including strategic technology control.1 AI safety and security researchers Miles Brundage, Shahar Avin, and Jack Clark notably explored compute dynamics in their influential 2018 report, “The Malicious Use of Artificial Intelligence,” which highlighted compute as a key factor.
Significantly, the early focus on preventing the misuse of advanced AI in general in the 2021-2022 timeframe morphed into a focus on China in particular, as a nation-state ‘authoritarian’ actor that could both misuse AI capabilities and, if allowed to reach AGI first, wield it as a strategic tool to gain unspecified advantages. Hence, policy choices began to coalesce around the need for the United States both to take the offensive in promoting the ability of U.S.-based AI companies to pursue AGI and to pair that push for AI leadership with the defensive goal of slowing China’s progress. The latter goal was pursued through a series of regulatory measures, unleashed for the first time via sweeping, unilateral, extraterritorial export controls in October 2022.
A key prerequisite for the measures pushed by adherents of the compute governance model was articulated by Sullivan in a fall 2022 speech in Washington, DC, in which he linked advanced compute to national security, listing computing-related technologies including microelectronics, quantum information systems, and AI. This set the stage for two years of steady regulatory actions by the Biden administration directed at Chinese firms, aimed at undermining the ability of China’s semiconductor industry to support advanced node manufacturing and at systematically cutting off Chinese firms’ access to advanced AI hardware—the core of compute—in the form of GPUs made by U.S. tech companies Nvidia, AMD, and Intel.
This process contributed to two major events in 2025. The first was the April 2025 ban on exports of Nvidia’s H20 GPU, designed by the firm specifically for the Chinese market to meet earlier export control requirements. The H20 is optimized for running trained models and applications to solve problems, a process called inference. In the view of the U.S. government and proponents of compute governance, the H20 represented the last chokepoint on the hardware side, because inference is necessary for AI to move from chatbots to reasoning systems and, finally, to agentic AI. As of mid-2025, agentic systems, which can access websites, fill in forms, and take other actions on behalf of individuals or companies, have begun to be deployed.
The second major event was the initial implementation of the so-called AI Diffusion Framework, rooted in the same compute governance argument and designed to prevent Chinese firms and researchers from accessing controlled hardware anywhere in the world.
The series of U.S. government measures launched starting in October 2022 and relevant to the overall “choke point” effort include but are not limited to:
- Implementation of Additional Export Controls: Certain Advanced Computing and Semiconductor Manufacturing Items; Supercomputer and Semiconductor End Use; Entity List Modification. This included the initial package of unilateral but extraterritorial controls, which dragged Japan and the Netherlands unwillingly into the U.S.-China technology competition arena. Both countries continue to push back on aspects like end-use and domestic persons controls.
- Implementation of Additional Export Controls: Certain Advanced Computing Items; Supercomputer and Semiconductor End Use; Updates and Corrections. This added more technologies to control lists and changed the performance thresholds triggering control of advanced GPU exports to China.
- Foreign-Produced Direct Product Rule Additions, and Refinements to Controls for Advanced Computing and Semiconductor Manufacturing Items. This massively expanded the scope of Chinese companies added to the Entity List, spreading across the entire semiconductor supply chain in China, and added new semiconductor manufacturing tools to control lists.
- Framework for Artificial Intelligence Diffusion. This Biden-era rule was pulled back in mid-May but will be replaced by a new rule that supports the concept of enabling U.S. government control over where AI infrastructure is deployed globally. Due later this year, the new rule will likely include a global licensing regime for advanced GPUs, while requiring bilateral agreements and commitments around diversion and know-your-customer (KYC) rules to prevent Chinese firms or researchers from accessing restricted AI hardware.2
As this defensive regulatory process has heated up, leading U.S. AI companies have become drivers on both sides of the ‘promote and protect’ equation, particularly the U.S. proprietary AI model leaders OpenAI and Anthropic. Anthropic was founded in 2021 by former OpenAI executives, including co-founder and CEO Dario Amodei and policy czar Jack Clark, with a mission centered on AI safety and alignment. Over time, the company—according to its statements—came to see control over access to large-scale computational resources (compute) as pivotal to managing the development and deployment of advanced AI systems.
This realization positioned compute governance as a practical mechanism to enforce safety protocols and prevent misuse, especially in the context of international competition. In an essay published in late 2024, Amodei wrote about a future world in which AI needs to be developed to benefit “democracies over autocracies” and advocated for measures such as limiting access to chips and semiconductor manufacturing equipment, a clear reference to U.S. AI competition with China. Anthropic’s focus on compute governance has intensified in response to China’s rapid advancements in AI.
The release of models like DeepSeek V3 and R1 in late 2024 and early 2025 demonstrated that, despite U.S. export controls, Chinese firms were able to develop advanced AI models—in some cases without the guardrails around Chemical, Biological, Radiological, and Nuclear (CBRN) materials and other national security-related risks prioritized by companies such as Anthropic.
In a March 2025 submission to the U.S. Office of Science and Technology Policy, Anthropic emphasized the necessity of robust evaluation frameworks to assess the national security implications of AI systems and the importance of maintaining stringent export controls on AI chips to countries such as China. In early May, at the Hill and Valley Forum, Clark asserted that DeepSeek’s AI capabilities had been over-hyped and that the Chinese firm was not a threat. In a pointed comment on both U.S. export controls and compute governance, Clark stated: “If they [DeepSeek] had access to arbitrarily large amounts of compute, they might become a closer competitor.” Anthropic has also apparently tested DeepSeek’s V3 and R1 models against ‘national security’-related risks—presumably CBRN, cyber, and autonomy—and determined that the models did not pose such risks. OpenAI executives have made similar statements about China, export controls, and AI-related risks.
There are additional important drivers of both government and industry views of the types of policy approaches that may be required as we near the advent of AGI. For instance, the Effective Altruism (EA) movement has significantly influenced Anthropic’s philosophy and approach. Early funding from prominent EA figures such as Jaan Tallinn and Dustin Moskovitz (the latter a key funder behind Open Philanthropy) provided the financial foundation for Anthropic’s initiatives. While the company has since sought to distance itself publicly from the EA label, the movement’s emphasis on mitigating existential risks and promoting long-term societal benefits clearly continues to resonate within Anthropic’s operational ethos, alongside an orientation toward slowing China’s AI development.3
Open Philanthropy is also a major financial donor to the Center for Security and Emerging Technology (CSET), an early driver of the focus on China as a nation-state actor on AI and the concept that choke point technologies could be used to slow China’s ability to develop advanced AI. Several key former CSET officials within the White House and Commerce Department drove the Biden administration’s efforts on export controls and AI policy and heavily influenced Sullivan’s thinking on China, AI, and technology controls.
Compute Governance Approach Exacts a Heavy Cost in Industry and Geopolitics
While proponents of the compute governance approach may have been rightly concerned about how advanced AI could be used by malicious non-state actors such as terrorist groups, the shift to applying the concept almost exclusively to a nation-state, China—home to companies that are among the global leaders in AI development—now appears clearly misguided. The implications of applying this approach to China were never exhaustively considered: no effort was made to bring together a broader group of individuals who understand global technology development, China’s political and economic development, supply chains, and commercial roadmaps across the AI and semiconductor stacks.
The impacts and costs of this decision have been immense. Left unexamined and unchecked, it is likely to lead to much higher risks of conflict between the United States and China, including over Taiwan, which remains the locus of the most advanced AI hardware production. In a detailed look at this issue I authored with technology entrepreneur and futurist Alvin Graylin, we assessed that while there is no “winning” of an AI arms race between the United States and China, there are already many losers. For proponents of compute governance, choke point technologies, and EA, this should prompt serious reconsideration of specific policies and narratives—but this review has yet to happen.
First, the pursuit of compute choke points has massively disrupted the global AI semiconductor industry, damaging companies across the AI stack, including Nvidia, AMD, Intel, Lam Research, Applied Materials, ASML, KLA, Samsung, SK Hynix, and many others within their supply chains. The costs of these disruptions run to hundreds of billions of dollars, and the damage to these firms’ R&D budgets and their ability to continue innovating and competing will keep piling up. Given the industry’s long time horizons, these consequences will be felt over the next decade.
In addition, as of May 2025, China’s retaliation for U.S. controls included bans on exports of critical minerals and rare earth elements (REE) and related products, with costs likely to mount considerably in the coming months. In the short term, these costs will stem from production stoppages at EV makers and other companies; they will grow over the long term as well, as Western countries are forced to recreate entire REE and critical mineral supply chains for themselves.
Second, the U.S. approach has been to focus primarily on the potential downsides of advanced AGI or ASI and the Decisive Strategic Advantage (DSA) it may hypothetically confer on China, while completely ignoring the many present and future benefits that advanced AI applications can offer a country and a society. These include innovation in critical areas such as healthcare, climate change science, and green technology, to name a few. In focusing on a potential outcome with only speculative timing and implications, the designers of U.S. policies and their think tank proponents almost never acknowledge the concrete benefits that advanced AI will bring to societies, including China.
All of the companies developing the most advanced AI models in China are civilian, with few or no links to China’s military, and all are focused on civilian applications of the technology. Yet the justification for U.S. controls has leaned on largely misunderstood concepts such as military-civil fusion and the potential for GPUs to be used in supercomputers that could model weapons systems. This is not where the cutting edge of advanced AI development in China is focused.
Approaches such as the AI Diffusion Framework also use justifications regarding potential military end uses. However, there is no evidence that military-linked Chinese firms would ever consider using overseas cloud-based services to either train or deploy inference applications involving the transfer of sensitive data even remotely connected with surveillance, security, or military end uses. No government would allow this.
Third, U.S. policy on AI and China has blocked—and will continue to preclude—progress on erecting an international regime of AI safety and security around the development of advanced models and applications. Without the involvement of the Chinese government and Chinese companies in this process, no globally acceptable set of guardrails can be established. This could have grave implications for AI safety and security, especially for efforts to mitigate the potential existential risks posed by AI. Significantly, in 2017 Amodei acknowledged this dynamic, warning of the dangers of a U.S.-China AI race “that can create the perfect storm for safety catastrophes to happen”. Eight years later, Amodei is a staunch proponent of winning the race for AGI with China, even though the exact risks he cited in 2017 are far more pressing in 2025.
Instead of supporting an open process in which individuals, companies, and key nation-states can collaborate and discern where AI development is headed, the U.S. government and the leading AI firms developing proprietary models are rapidly moving towards a closed development environment. This will mean the most advanced capabilities will not be publicly released, and other governments, including U.S. allies, will increasingly be left out of the loop, with diminished ability to determine how close the leading players are getting to creating AGI. Bostrom warned against just such a situation: “…direct democracy proponents, on the other hand, may insist that the issues at stake are too important to be decided by a bunch of AI programmers, tech CEOs, or government insiders (who may serve parochial interests) and that society and the world is better served by a wide open discussion that gives voice to many diverse views and values.”
Global Safety Efforts at Risk: The Fragile Gains of AI Diplomacy
Caught in the middle of this are other key actors, such as the United Kingdom, which in 2023 launched the Bletchley Park Process precisely to bring together key stakeholders to discuss how to regulate frontier models and ensure AI safety and security. Much progress was made over an 18-month period, including the establishment of an AI Safety Institute (AISI) network, capacity-building within government-associated bodies, and linking the leading AI labs with the AISIs to begin researching and testing methods for assessing advanced models and ensuring they would not enable the development of CBRN weapons or facilitate advanced cyber operations.
The need for this process is clear, and the UK’s focus has remained steadfast, even as early signs from the Trump administration suggested less emphasis on this issue. The Trump administration has thus far been more focused on expanded export controls and a new version of the AI Diffusion rule (aimed at constraining the ability of Chinese companies like DeepSeek to develop advanced AI), and on collaborating closely with leading U.S. model developers to ensure the government is able to take control as the AGI inflection point nears.
This misplaced focus could ultimately fully derail the AISI process. At the Paris AI Action Summit in February 2025, I participated in a closed-door meeting as China debuted its own AI Safety and Development Institute, a network of AI safety-focused organizations that is doing advanced work on how to test models and reduce risks around fraud and other key AI safety issues. In addition, I participated in a panel with leading AI developers and safety experts on how the industry could continue collaborating on safety issues even in the absence of clear government policy at the national or international level.
This issue is particularly pressing as models become more capable and Anthropic and OpenAI suggest that AGI may be just two or three years away. If U.S. efforts to contain China via compute governance scuttle any chance of reaching an international consensus on AI governance, the collateral damage will be of the highest order: it increases risks around both AI deployments and geopolitical conflict involving the United States and China.
This latter issue is becoming more salient as policymakers in both Beijing and Washington awaken to the realization that the cat is out of the bag. U.S. officials are now explicit about the twin goals of promoting U.S. AI and protecting against Chinese companies’ progress on AI, guided by a belief that whichever country hosts the first company to reach AGI will have a Decisive Strategic Advantage (DSA). Bostrom raised the idea of DSA very early, well before the advent of ChatGPT, but in the context of how it might emerge from open or closed scenarios of AI development, not in the context of geopolitical great power competition. Officials in Beijing are certainly now aware of this narrative but have likely not thought through the full implications of this framing.
The DSA Narrative Fuels a Zero-Sum Race with No Safe Off-Ramp
The DSA framing is very dangerous because there will be considerable uncertainty as companies such as Anthropic, OpenAI, and DeepSeek/Huawei get closer to capabilities considered to constitute AGI or even ASI. This uncertainty could prompt consideration of preemptive cyber or kinetic attacks on AI data centers known to harbor training capacity for the most advanced models. Neither country will be willing to allow the other to reach AGI first without mounting some response along the way.
Proponents of export controls and the race to DSA, whose ‘best guess’ is that the optimal strategy is one of “isolating our worst adversaries and eventually putting them in a position where they are better off taking the same bargain as the rest of the world”, will be in for a rude awakening about the realities of international superpower politics if this dynamic is allowed to continue unchecked. If, for instance, Beijing believes U.S. companies, using advanced GPUs manufactured in Taiwan by global foundry leader TSMC, are nearing AGI, this could prompt Beijing to take action against Taiwan that it would not otherwise have considered. This alone is a huge escalation of real risk for Taiwan, resulting directly from policies based on compute governance and DSA approaches.
Critics of the DSA framing have questioned projections that assume rapid progress towards AGI will occur merely through the deployment of ever-larger numbers of AI software researchers. I agree with Jack Wiseman, the author of a recent excellent article on this issue, that scenarios such as AI 2027 (which plays out a notional race to AGI or ASI between the United States and China, featuring proxies for OpenAI and DeepSeek) assume that leaders will be “extremely cavalier” about the strategic balance in the face of DSA. But the more important factor could be the perception in Beijing or Washington that the other side is ahead. Without dialogue, and with proprietary model developers dominating the landscape and collaborating closely with their governments, the potential for misunderstanding and escalation grows, becoming the primary risk around AI development.
In what now appears to be a self-fulfilling prophecy that the United States and China are in an ‘arms race’ to reach AGI first, fueled by fear of the consequences of one side crossing the DSA threshold, China has several advantages. The emergence of innovative companies such as DeepSeek, and the continuing efforts of technology major Huawei to revamp the entire semiconductor supply chain in China to support the development of advanced AI hardware, illustrate the difficulty of slowing—let alone halting—the ability of Chinese firms to keep pace with U.S. AI leaders. Even former Google CEO Eric Schmidt now admits that the export controls have not only failed but have in fact served as an accelerant to China’s technology advances in AI.
China has major advantages in the race to deploy AI at scale, such as a long-term energy production strategy. The vast majority of these deployments will be consumer-facing (for example, agentic platforms that benefit citizens via healthcare innovations) and enterprise-focused (for example, driving improved productivity). In other words, these are applications with no connection to China’s military modernization.
Currently, the development environment around AI models and applications is highly competitive, with around a dozen major players in each market along with many more startups. Competition to reach AGI will winnow this field to a smaller number of players demanding ever-higher levels of compute. U.S. controls will complicate the ability of Chinese firms to maintain access to large quantities of advanced compute, but this pressure will ease over time as domestic sources ramp up.4 If breakthroughs in model development and platform deployment lead either government to believe the other side is pulling ahead in the ‘race to AGI’, this is likely to seriously distort how governments and companies choose to interact in AI development, with unknown implications, particularly for bilateral relations.
The risk here is that, at some point, key players such as the National Development and Reform Commission (NDRC) in China, which is responsible for approving AI data centers and tracking overall compute capabilities, and the U.S. Department of Commerce, which has similar responsibilities, could decide that the race is coming down to one or two companies. Both governments would then choose to optimize those companies’ access to advanced compute—the scenario laid out in AI 2027. This would turn these companies into targets of regulatory measures and potentially other actions by the rival government, compounding the risk of conflict as they approach something resembling AGI.
Under this scenario, collaboration around the real risks of advanced AI is likely to take a back seat, as neither government will want to consider cooperation in slowing a runaway race toward AGI. This unwillingness to collaborate carries unknown but almost certainly negative consequences for the industry, the U.S.-China relationship, and global efforts to understand the risks of advanced AI deployments and erect baseline guardrails. This is a very dangerous and unstable world, full of existential uncertainties as both sides consider the implications of the other reaching and acting on AGI.
Building Guardrails and Reimagining the Stakes Before It Is Too Late
We are now in a critical window. Both the U.S. and China have the compute necessary for at least the next two generations of model development, which will mean exponential progress towards AGI throughout 2025 and into 2026 and 2027. For the next 24 months, advanced GPUs will flow to AI data centers around the world as AI products, including increasingly sophisticated AI agent applications, are deployed, and as open-source models and weights become available to a wider group of actors. This will be a critical period in which to develop guardrails around AI development, before a U.S.-China race to AGI becomes its dominant driver. It represents a chance to avoid irreversible damage to U.S.-China relations as the potential for conflict accelerates.
How can we avoid missing the window and heading into a world where AI competition itself poses existential risks?5 Some supporters of export controls assert that critics of the compute governance framework think the proper solution is to do nothing. In reality, critics assert that the best approach is to stop doing counterproductive things that raise the risk of conflict and start doing things that reduce risks, laying the groundwork for a peaceful and stable coexistence.
First, develop a bilateral mechanism between the United States and China to discuss AI development, and AGI in particular. This will require a major effort to stabilize the bilateral relationship, build trust around technology-related issues that have become ideologically infused, and reach agreement that transnational issues such as climate change, pandemics, and the risks around AGI require the two most important players in these domains to come to grips with the implications of endless competition. The United States has successfully negotiated with China on controls for advanced technologies in the past, including on nuclear weapons testing and nonproliferation. These discussions could and should include confidence-building measures, such as reducing controls on some technologies, including hardware used for AI inference.
Second, revamp the Bletchley Park Process, which has been the only serious channel for discussing global AI governance, with full participation from both the United States and China, along with other major players such as India, Singapore, Japan, France, and Canada. Critical to this would be for the UK to continue playing a mediating role between the United States and China. In addition to addressing CBRN and cyber risks, a reconstituted Bletchley Park Process and AI Safety Institute network would need to focus on rapidly changing capabilities (such as agentic AI) and move quickly to boost the capacities of the safety institutes, test models (including with third-party companies), and establish generally accepted criteria for assessing how close models and platforms are to AGI.
The 2026 India AI Action/Safety/Security Summit, while far off, could be a last chance to do this, but many hurdles remain to refocusing efforts on these issues and blunting the considerable fallout on the process from the U.S.-China dynamic. A major contribution to this effort was a meeting in Singapore and a paper on AI governance issued in May, The Singapore Consensus on Global AI Safety Research Priorities. Many key industry thought leaders attended the meeting, including representatives of major AI developers and think tanks that are leading supporters of the compute governance paradigm (OpenAI, Anthropic, RAND, and others), along with a number of Chinese AI leaders, safety researchers, and academics, and representatives from the UK, U.S., Singapore, Japan, and Korea AI Safety Institutes.
Significantly, most of the AI safety community and prominent thought leaders and technologists (including machine learning pioneer Yoshua Bengio and Swedish-American physicist, machine learning researcher, and author Max Tegmark, who organized the Singapore event) support the inclusion of Chinese organizations. This is critical for continuing the progress on the AISI process.
However, no leading Chinese AI developers were in attendance, and the Chinese government’s position on these issues remains unclear. In addition, a series of harsh U.S. government actions targeting the Chinese AI sector is set to roll out in the coming months. Already, several leading Chinese AI organizations have been placed on U.S. blacklists, complicating their participation in international AI conferences. These new U.S. actions will work against efforts by the AI safety community to change the current troubling trajectory.
Third, push the open sourcing of model weights to improve transparency. Moving the entire industry toward an open-source model should be the ultimate goal of any regulatory process. As long as proprietary model developers fail to provide the transparency the open-source community requires, the risks stemming from uncertainty about the AGI timeframe will continue to rise to more dangerous levels.
Fourth, in addition to their participation in the AISI network process, include Chinese AI firms in all critical industry fora, in particular the Frontier Model Forum (FMF). This would help build trust among key players in the industry, particularly OpenAI and Anthropic, that Chinese companies, with the backing of the Chinese government, are internally pursuing the types of responsible scaling policies that are becoming the norm in the industry. It would foster a sense of shared responsibility among technology leaders to help governments develop responsible policies and avoid surprises, contributing to eventual serious bilateral discussions about AGI and DSA.
Finally, there is a need for a major educational campaign to widen the discussion around these issues beyond a small number of AI labs and shadowy government departments driven by zero-sum conceptions around the development of advanced AI. The robust AI safety community, sidelined to some degree at the Paris Summit, can play a major role here by drawing attention to the dangers of U.S.-China AI competition derailing the push for a global AI safety framework. There is much work to be done.
The author will be participating in a number of dialogues around this issue in the coming months, including in Oxford and Paris, and at the World AI Conference in Shanghai in late July.
- In the referenced paper, Bostrom makes this judgement: “The likelihood that a single corporation or a small group of individuals could make a critical algorithmic breakthrough needed to make AI dramatically more general and efficient seems greater than the likelihood that a single corporation or a small group of individuals would obtain a similarly large advantage by controlling the lion’s share of the world’s computing power.” ↩︎
- A mid-May trip by President Trump through the Middle East saw a number of AI infrastructure-related deals signed, though there is still considerable opposition from China critics and Congress to selling large numbers of advanced GPUs to countries including Saudi Arabia and the UAE, out of concern about potential future diversion or eased access for Chinese companies to advanced GPU clusters. ↩︎
- In Amodei’s late 2024 essay, this approach is clear in comments such as, “My current guess at the best way is via…a strategy…in which a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries’ access to key resources like chips and semiconductor equipment. This coalition would on one hand use AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition’s strategy to promote democracy (this would be a bit analogous to ‘Atoms for Peace’). The coalition would aim to gain the support of more and more of the world, isolating our worst adversaries and eventually putting them in a position where they are better off taking the same bargain as the rest of the world: give up competing with democracies in order to receive all the benefits and not fight a superior foe.” ↩︎
- Some proponents of export controls now admit it could take a decade before the controls lead to a clear advantage for U.S. developers. We are likely to be close to AGI well before then, and China’s domestic semiconductor industry will have made huge strides over this ten-year period, making the claim dubious at best. ↩︎
- For more recommendations on these issues, see: “There can be no winners in a US-China AI arms race,” MIT Technology Review, January 21, 2025. https://www.technologyreview.com/2025/01/21/1110269/there-can-be-no-winners-in-a-us-china-ai-arms-race/ ↩︎