Editor’s Note
In this issue, we accepted that the only constant is change and, in keeping with that truism, we decided to evolve with the times.
As an innocent experiment, we typed “The System That Failed” into Google. What came back was an AI Overview suggesting that we write a science fiction short story with that title.
The plot? “To explore a future where advanced AI systems manage global resources and infrastructure, leading to a seemingly perfect world, but with unforeseen consequences,” the Gemini AI assistant wrote.
This suggestion isn’t … fiction. We cannot possibly know how AI will be integrated into human interaction and productivity three months from now, let alone three, thirty, or three hundred years into the future.
But there have been experiments in prescience by hundreds of acclaimed science fiction writers in the past, such as E.M. Forster, who in 1909 published “The Machine Stops”, a dystopian tragedy set in 2081 in which a dependent, subterranean human society has deified a ‘machine’ that sees to its every need. As the system begins to malfunction, the humans are lost, like children without guidance, paralyzed by fear of doing things for themselves. Unable to detach from their dependence, they quietly suffocate into extinction.
It is this dependence that we need to be wary of.
How do we manage the integration of AI into the human machine, much as societies once absorbed the Gutenberg press or the steam engine, inventions that revolutionized the way we think and feel? That’s what we posed to AI itself in the machine-generated essay “Beyond Automation: Managing the Integration of AI into Human Civilization”.
Despite its promise, AI integration comes with significant risks, the essay said, adding that “AI systems can perpetuate or amplify existing biases in data, leading to unfair outcomes in education, hiring, policing, and lending, and others”.
True enough, the AI model fabricated three fake links, directing the reader to nonexistent pages on the websites of the Smithsonian, History.com, and the World Bank. While a faulty reference for the origins of the Gutenberg press might seem inconsequential, AI models generating false sources has already had real-world ramifications, as seen in the recent fines levied against U.S. lawyers for using ChatGPT-generated research that produced fake case law.
In his poignant article on legal knowledge in the AI age, Thomas Skouteris, international jurist and chair of AUC’s Department of Law, writes that definitions of licensing and credentialing are likely to change rapidly.
Furthermore, he writes of another danger: dependency. “If entire workflows are routinized through AI systems—so much so that new professionals are trained to trust the output unless otherwise directed—then even the capacity for oversight may atrophy,” Skouteris writes.
Education is not the only area affected by AI; as these machines continue to develop rapidly, AI itself has become a key battleground for geopolitical dominance.
“Artificial general intelligence (AGI) or artificial superintelligence (ASI) have gone from theoretical concepts to apparently achievable outcomes within a much shorter timeframe than experts believed just five years ago,” writes Paul Triolo, senior vice president for China and technology policy lead at DGA ASG, in this month’s essay, “A Costly Illusion of Control: No Winners, Many Losers in U.S.-China AI Race”.
He sees a new battleground over dominance of global AI development and control, chiefly between China and the United States.
“The aggressive U.S. approach to China risks triggering conflict between the two nations while upending any progress to a global framework for AI safety and security,” he writes.
The AI battleground has become literal, in some cases. One of the most ominous—and deadly—uses of AI is in warfare, already battle-tested in Gaza. Anwar Mhajne, associate professor of political science at Stonehill College, explores this in “Gaza: Israel’s AI Human Laboratory”. Mhajne explains how Israel’s AI identification program, Lavender, uses “concerningly broad” criteria for identifying potential Hamas operatives, “assuming that being a young male, living in specific areas of Gaza, or exhibiting particular communication behaviors is enough to justify arrest and targeting with weapons”.
AI’s dangers extend beyond the battlefield: UC Berkeley professor Hany Farid warns that AI’s ability to falsify reality can and will threaten democracy.
“If we have learned anything from the past two decades of the technology revolution (and the disastrous outcomes regarding invasions of privacy and toxic social media), it is that things will not end well if we ignore or downplay the malicious uses of generative AI and deepfakes,” he writes.
And that is precisely the question we pose in this issue, through engaging content from our contributors: what is AI teaching us about ourselves?
This issue also marks a sad moment as we bid farewell to Cairo Review’s co-managing editor Karim Haggag, who leaves AUC to head the Stockholm International Peace Research Institute. Our loss is their gain, but we see much collaboration in our future. Adieu, Karim, you’ll be sorely missed.
Cairo Review Co-Managing Editors,
Karim Haggag
Firas Al-Atraqchi
