Winter 2015

Bombarded by blogs and videos, Twitter feeds and Facebook posts, we are living in dizzying digital times. For journalists, technology-driven transformation has brought new opportunities yet prompted anxieties about everything from readerships to paychecks. Bringing some clarity to a seemingly uncertain future is the aim of our Special Report: Media in the Online Age. Who better to lead the conversation than Arianna Huffington? In our Cairo Review Interview, the New Media mogul talks about the outlook for journalism and her expansion ideas for the Huffington Post—plans that include the launch of HuffPost in Arabic this spring.

Dan Gillmor, founding director of the Knight Center for Digital Media Entrepreneurship at Arizona State University’s Walter Cronkite School of Journalism and Mass Communication, makes the case for optimism in his lead essay, “The Promise of Digital.” Christopher B. Daly explores the deeper reasons for legacy media’s demise in “Death of the Newsroom?” In “Watchdogs Unleashed,” Brant Houston argues that investigative journalism is making a comeback. R. S. Zaharna opens a window on the world of digital diplomacy in “From Pinstripes to Tweets.”

Among the hazards that today’s Online Age journalists face is the determination of those who would manipulate, control, jail, or even kill writers, reporters, and editors. Joel Simon, executive director of the Committee to Protect Journalists, explains the new challenges and risks facing the profession in “Dangerous Occupation,” an extract from his new book, The New Censorship: Inside the Global Battle for Media Freedom, published by Columbia University Press. In “Tests for Egyptian Journalists,” Naomi Sakr reports on the narrowing prospects for press freedom in the Arab World.

Plus ça change, plus c’est la même chose. The French expression about how things remain the same unfortunately applies to the persistence of Arab and Muslim stereotyping by the American television and film industries. In “Hollywood’s Bad Arabs,” Jack G. Shaheen reports on how since the September 11 attacks this stereotyping “has extended its malignant wingspan, casting a shadow of distrust, prejudice, and fear over the lives of many American Arabs.” The essay is adapted from the third edition of Shaheen’s landmark work first published in 2001, Reel Bad Arabs: How Hollywood Vilifies a People, published by Interlink Publishing.

Scott MacLeod
Managing Editor

A Better Citizen

Mahmoud El-Gamal will be forever nostalgic about his days as an economics undergraduate at the American University in Cairo. In the 1980s, before heading to the United States for graduate school and a distinguished academic career, he was a fixture of AUC’s vibrant downtown campus. He fondly remembers helping a professor develop a new course on Islamic finance. He enjoyed listening to youthful musicians, watching Youssef Chahine films, and even writing poetry. None of that got in the way of his studies: he received the President’s Cup for graduating at the top of his class.

Three decades later, El-Gamal, 52, has returned to Egypt, and to AUC. In July, he became the university’s provost and vice president for academic affairs. When he received the offer, he told his wife, Ghada: “If I don’t do this, then I will probably never work in Egypt.” He has his work cut out for him. El-Gamal’s years away coincided with the long reign of former President Hosni Mubarak. Since his graduation in 1983, AUC has more than doubled in size, to a student body of more than 6,500 and a faculty of 500. The university also operates on a new $400 million main campus in the suburb of New Cairo, about an hour’s drive from the century-old Tahrir Square campus in downtown Cairo.

One of El-Gamal’s goals as provost is to inspire greater passion for learning and critical thinking among students, and to facilitate more and better teaching by the faculty. He speaks enthusiastically about AUC’s recent efforts to implement a bridge program for freshmen from Egyptian high schools that were not intellectually rigorous; the program helps students develop their writing and critical thinking skills. “It starts by having the right people teaching courses, so they can ignite the passion in the student to pursue what they’re good at, rather than what the parents want them to be,” he says.

Many students and parents, El-Gamal frets, don’t value a liberal arts education and instead think of universities as pre-professional training grounds. He seeks to encourage students to study and work in the fields they are passionate about. In turn, he argues, graduates will end up being better managers and leaders because they received a rounded education. “A liberal arts education produces a better citizen,” he says.

El-Gamal holds a joint appointment at Rice University as a professor of economics and statistics and the endowed Chair in Islamic Economics, Finance, and Management. He joined Rice in 1998 and has also served as a scholar at the university’s James A. Baker III Institute for Public Policy. He previously taught at the California Institute of Technology as well as the University of Wisconsin–Madison, and the University of Rochester. He has a Master of Science degree in statistics from Stanford University and received his doctorate in economics from Northwestern University. His personal blog boasts a sizeable collection of verse written over the years, in both Arabic and English—odes to Cairo and the Nile River, to Allah and childhood, to the loneliness and hope inherent in travel.

El-Gamal is a leading scholar in the field of Islamic economics, the author of Islamic Finance: Law, Economics, and Practice and co-author of Oil, Dollars, Debt, and Crises: The Global Curse of Black Gold.

He pursued his study of Islamic finance to have a greater impact than the “technical, esoteric stuff” he was used to producing in his research. He started by translating 1,600 pages of Islamic jurisprudence that define Islamic finance laws into a reference book. He wanted the majority of Muslims who are not Arabic speakers—in Malaysia and Indonesia, for example—to better understand the field.

El-Gamal’s critiques of Islamic finance caused some heartburn throughout the industry. “I basically dissected all the smoke and mirrors tricks on which the industry was built,” he explains. For example, he argued that the banks were using loopholes in Islamic jurisprudence to provide secure Western-style lending in a way that could be considered acceptable in Islamic law. Under what he dubbed “sharia arbitrage,” the banks still made profits and borrowers still owed fees.

For a spell in the mid-1990s, El-Gamal served in the International Monetary Fund’s Middle East department with responsibility for the West Bank and Gaza Strip. After the signing of the historic Oslo Accords, his mission was to help formulate effective monetary policy while Palestinian negotiators worked on a final deal for a Palestinian state. El-Gamal believed that working at the IMF was an opportunity to “get an economy right,” but he returned to academia as the peace process badly faltered.

Expertise in finance brings some added value to El-Gamal’s academic post at AUC, given the university’s plans for austerity budgets amid a meltdown of Egypt’s economy following the 2011 uprising that began outside AUC’s Tahrir Square campus gates. “Strategy and growth are not one and the same thing,” says El-Gamal. “AUC has grown too fast, and may need to go through a period of consolidation.”

Clearly, El-Gamal intends to make an impact with his return to Egypt and AUC. He has never been one to stand on the sidelines. After the September 11 attacks, he decided to give a public address or khutbah at his local mosque in Houston—a response to “being sick and tired of being defined by other people,” he explains. “Overt prejudice against Muslims and Arabs increased after 9/11, and that made Egyptian-Americans like myself more conscious of the xenophobic side of American society. However, the relatively infrequent incidents of prejudice, unfortunate as they have been, were outnumbered by acts of kindness and inclusiveness. Indeed, this openness to people of different origins and persuasions, and the freedom afforded to all such people, are the open secrets of America’s success.”


Mohamed Tawfik is the Egyptian ambassador to the United States. When he took the lectern at the American University in Cairo recently, it was not to discuss the tensions in Egyptian-American relations, or to analyze the latest upheavals in Iraq and Syria. Instead, he was there to discuss a book titled Candygirl: An Egyptian Novel.

Literature, as much as diplomacy, is Tawfik’s passion. He is the author of three novels and three books of short stories. Candygirl, which he published in 2010, is set in 2007 Cairo as a ruling Egyptian regime is crumbling. A veiled critique of Hosni Mubarak’s sclerotic government, the book is a science fiction thriller tracking the fate of those involved in Iraq’s unconventional weapons programs. The protagonist is an Egyptian nuclear scientist who is on the run from espionage agencies; he submerges himself in a virtual world, where he proceeds to discover true love. “A good novel for me is like a good symphony,” Tawfik told a packed auditorium. “It is not only based on the events and the characters, but a very important part of it is the empty spaces, the silences, the room it leaves for each individual reader to interpret.”

Tawfik must squeeze his writing in between demanding diplomatic assignments. He served as ambassador to Lebanon and Australia before taking up his current post in Washington in September 2012 during the turbulent tenure of former President Mohammed Morsi. He became the familiar face of the Egyptian state on American TV news shows after Morsi was overthrown less than a year later, justifying the lethal crackdown on Muslim Brotherhood protest camps.

Candygirl takes its title from the female avatar with whom the protagonist falls in love. Although Candygirl is a middle-aged writer’s depiction of cyberspace, Tawfik has found an audience among a generation of young Egyptians who pride themselves on being digitally literate. The book is required reading for AUC’s current freshman class, as part of the university’s One Book, One Conversation, One Community Common Reading Program to enhance a culture of reading among students. Students are participating in an essay and creative arts contest around Candygirl, and various related debates and panel discussions are also being held. AUC President Lisa Anderson, who launched the reading program, calls Candygirl “fascinating and thought-provoking.”

A public servant’s bent for literary experimentation is an Egyptian tradition. Perhaps the most illustrious example is Naguib Mahfouz, who wrote thirty-four novels plus hundreds of short stories and movie scripts while working as a functionary in various government ministries. Tawfik Al-Hakim, another of the country’s literary luminaries, worked as a prosecutor in the courts of Alexandria and various provincial towns. Al-Hakim introduced a new style of dialogue writing, a balance between formal Arabic and colloquial Egyptian that revolutionized the Arabic novel and theater.

Tawfik told the Cairo Review that he found inspiration in George Orwell’s 1984, a dystopian novel about a government surveillance state. To him, Orwell’s story is “a description of how governments in general are always willing to use technology to further their own control and their own interest.” He sees similar tactics used in the West after September 11 and in Egypt today, where the government has technological tools and public support to maintain control. “The challenge is how to do that and at the same time not affect the basic principles that are in the constitution,” Tawfik said.

Oriental Hall, etc.

The persistence of Islamic radicalism in the Middle East has left Washington unclear about how to promote stability and democracy, according to Walter Russell Mead, an international relations professor at Bard College and Yale University. In a recent lecture hosted by AUC’s Prince Alwaleed Bin Talal Bin Abdul Aziz Alsaud Center for American Studies and Research, Mead spelled out how top U.S. interests, like maintaining the flow of oil and contributing to a secure Israel, are colliding with a Middle East whose more globally connected citizens are rising economically and demanding political rights. “We’re well aware that we’ve had two presidents who have tried in their own ways to find solutions to the Middle East and haven’t found them,” Mead said.

At a recent panel hosted by AUC’s School of Global Affairs and Public Policy called “Rethinking the Rentier Curse,” Mohamad Al-Ississ, associate dean of GAPP, argued that dependence on foreign-generated income from sources such as oil or aid “changes the fundamental relationship between the governed and the governing.” The slow development of an independent private sector in the Middle East forces citizens to rely on the state for livelihoods, he said. A private sector dominated by privilege, patronage, and cronyism, added Adeel Malik, a lecturer at the Oxford Centre for Islamic Studies, hinders the emergence of political and economic autonomy among the merchant class that could hold governments accountable.

Arab World on the Precipice

Now more than at any time in recent memory, the Arab World as a political entity is confronted with ominous threats and hair-raising domestic and regional challenges. The Arab national identity is being tested by the natural forces of change: demographics, globalization, and interdependence on the one hand, and extremist ideologies on the other. Arab economies and security are also over-dependent on foreign powers or stakeholders, which makes the Arab World prey to conflicting influences and interference. Arab apathy and a diminished inclination or ability to deal with problems further exacerbate the challenges. And a pattern of bad governance and misguided policies marginalizing or discriminating against ethnic minorities has generated domestic turbulence and fissures in the social fabric of Arab society.

As the largest nation in the Arab World and heir to a long and proud history of leadership, Egypt is best suited to address these new challenges and dynamics. What is needed from Egypt is vision: not only on how to build a better future for this great country, but for the Arab World in its entirety. She can offer a model of the social and cultural benefits of good governance and of a more equitable and stakeholder-friendly social contract between state and citizen. To be effective, the vision should entail a strong regional compact among the Arab nations to address our common challenges. In this role, Egypt, the oldest nation state in the region and the traditional beacon of Arab society, carries the responsibility of defining the future of the Arab World.

The first message in this effort should be directed at domestic and Arab public opinion, and would be most effective if articulated at the Egyptian presidential level. The message should highlight the risks of present and future challenges, emphasize the inevitability of cooperative action between the Arab nations, and affirm the imperative of preserving Arab identity. Above all, it must commit the full strength of the Arab World to meeting these challenges and preventing the potentially catastrophic consequences of the attempts to re-demarcate the Arab region.

The recent events in Iraq, Syria, Libya, the Gaza Strip, and Yemen are portents of a stormy future for the region. Yet before we can turn our attention to Arab peoples outside our borders, we must begin a dialogue about the principles that should govern us internally. These principles must aid in building a better future by eliminating the risk of polarization, sectarianism, and religious extremism that so threaten the Arab World today. At the same time, these principles should generate new ideas about how to reconcile the Arab national identity with respect for the culture and character of the region’s minorities of different ethnic origins. These principles must also take into account the need to confront extremism robustly but wisely. In pursuit of such principles, I propose the following:

—That the heads of the stable Arab countries each hold national dialogues among their peoples.

—That these states then provide the results of these dialogues to the Arab League to be monitored, coordinated, and recorded whenever possible.

—That the Arab League thereafter draft a document or declaration of the Arab World Citizen, calling on its members to respect the national state, the unity of its territory, and its sovereignty, while incorporating principles for the protection of the cultural, personal, and social context of national minorities in the state.

—That the Arab World review this document every ten years in order to keep pace with societal change.

—That an urgent and joint meeting between the Arab states’ respective ministers of the interior and justice be held to develop better cooperation with respect to the issue of terrorism, regardless of political differences elsewhere, a proposal previously endorsed by the Council of Arab Foreign Ministers at the beginning of 2014, and later at the Arab summit in Kuwait. Terrorism of the most dangerous sort sows poison and division amongst us, creating a cancer that cannot be ignored or eliminated by tactical individual responses.

—That a meeting be held between the leaders of the Arab intelligence agencies and ministries for better consultation and exchange of information about extremist movements, and to develop ways to confront them.

—Finally, it is also important to facilitate dialogue between Arab intellectuals and elites regarding the best educational and cultural paths to confront extremist ideology.

It is an understatement to say the Arab World is at risk. We have a national and moral responsibility to redirect and correct the political rudder of our homelands, to embrace our common roots and ties, and build a durable, just, and stable international polity inclusive of the Arabic-speaking world. We must ask ourselves: Which model of Arab civilization will produce the states we want to live in? My conclusion is that these states should be:

—Modern national, democratic, pluralistic states, which do not discriminate between citizens.

—Legitimate states that respect and adhere to domestic and international law as an authority upon us and others, without discrimination or exception.

—Independent states, which believe in the need for strong relations with the different nations of the world, provided they respect our rights and interests.

—Active states interacting with the international system, working towards greater involvement in the Security Council as well as in international economic organizations in order to ensure the rights of developing countries.

—Wise states which carefully maintain their natural resources, and develop strong positions and initiatives regarding environmental degradation and the use of such resources as energy and water.

—Humane states which respect the rights of minorities, women, and children.

—Sovereign states which will not be deterred in pursuing their self-defense and national security, yet at the same time seek to secure collective security on a regional basis and to resolve disputes by peaceful means.

The Arab World faces a critical choice: To embrace progress and move forward, or slide into a societal and political abyss that will destroy the Arab World as a political entity and erode its societal identity.

Nabil Fahmy, a former foreign minister of Egypt, is dean of the School of Global Affairs and Public Policy at the American University in Cairo.

Barack Obama’s Lost Promise

I still vividly remember President Barack Obama’s speech in Cairo on June 4, 2009, delivered just five months into his first term of office. It conveyed an ambitious vision for reshaping America’s relationship with the Muslim World. He acknowledged that the colonial legacy had fueled mistrust of the West, and that during the Cold War, Muslim countries were treated as proxies without regard to their own aspirations. “I’ve come to Cairo to seek a new beginning between the United States and Muslims around the world, one based upon mutual interest and mutual respect,” Obama told us. We in Egypt and the Middle East welcomed a new American policy based on understanding and dialogue.

Obama emphasized multilateralism in his foreign policy. This was an encouraging development after the unilateralist policies of his predecessor, President George W. Bush, whose War on Terrorism had led to invasions and long occupations of Afghanistan and Iraq. Obama adopted a strategy of engagement, cooperation, negotiation, and persuasion. Yet as a realist, Obama did not rule out the use of force if American core interests were threatened. Some dubbed his policy “multilateralism with teeth.” Whatever Obama’s intentions, however, multilateralism has not necessarily produced good results.

In handling the wars in Afghanistan and Iraq, inherited from his predecessor, Obama preferred a blend of unilateralism and multilateralism. In an attempt to reconcile U.S. foreign policy with international law, Obama ordered a swift and complete withdrawal of U.S. troops from Iraq. In Afghanistan, where U.S. operations had NATO support, Obama ordered a surge of U.S. troops before calling for a phased withdrawal. However, in both countries, U.S. pullback has led to a precarious security situation and the potential for complete collapse.

Obama seemed to be at his multilateral best in response to the Libyan uprising in 2011. He achieved a United Nations Security Council resolution allowing the use of “all necessary measures” to intervene in Libya’s domestic conflict to prevent human rights violations. NATO eventually took the lead in a military effort that effectively supported armed rebel factions until Libyan dictator Muammar Gaddafi was overthrown. But Obama’s interventionism in Libya would later come under intense criticism as the country disintegrated into factional warfare; some questioned whether Western intervention was necessary at all, and others blamed a premature NATO withdrawal for leaving the country with weak institutions.

Arguably, the mission creep that occurred in Libya—from protecting civilians to overthrowing a ruler—undermined Washington’s attempts to replicate multilateral intervention in response to another Arab Spring uprising in Syria. Obama called for UN intervention in Syria two years after revolts began, and only when the alleged use of chemical weapons challenged the international body’s responsibility to act against Bashar Al-Assad. But when the White House failed to mobilize consensus for military action in the Security Council, it agreed to Russia’s proposal to place Syria’s chemical weapons under international control. Causal effects are difficult to assess, but a lack of targeted military intervention opened up space for other forces, such as the extremist group known as the Islamic State in Iraq and Syria (ISIS).

Washington’s agreement to enter substantive negotiations with Iran over its nuclear program is a prime example of Obama’s multilateralism. The White House opened up official channels of communication with Iran, sending Secretary of State John Kerry to negotiate with Iranian diplomats during P5+1 talks about the country’s nuclear program. This is a significant reversal from George W. Bush’s “no talks” policy and threat of a pre-emptive attack. Many pressing issues demand an American dialogue with Tehran: Iran’s nuclear program; the rise of ISIS; the increasingly unstable political situation in Afghanistan and Yemen; and the repercussions of that instability on a nuclear-armed Pakistan.

A glaring exception to Obama’s multilateralism is his policy on the Palestinian-Israeli conflict. Like most of his predecessors, Obama acceded to the Israeli preference for the United States to remain the prime mediator in the dispute. After expending relatively little effort in Obama’s first term, the administration mounted a diplomatic drive in the second. U.S.-led negotiations resulted in the release of Palestinian prisoners and a Palestinian willingness to freeze an appeal for international recognition of the State of Palestine. A year later, however, talks have failed and prospects for peace are dim.

The disappointing results can be attributed partly to the magnitude and complexity of the region’s problems. Few in the administration—or in the region—were prepared for the dramatic, fast-paced developments of the past few years: the Arab uprisings; the spread of extremism and terrorism; the sectarian rivalry between Shiites and Sunnis; the escalation of violence between Hamas and Israel. These challenges were compounded by unsuccessful economic development strategies, social inequity, and persistence of poverty.

For better results, the United States must extend a multilateralist approach to the core Palestinian-Israeli conflict. Washington must not walk away, and Europeans should not remain on the sidelines. As the Obama administration moves toward a deal with Iran, it must assuage the fears of Gulf Cooperation Council countries about a realignment of American security interests. Obama must find a way to work constructively with the main players in the region to contain the threat posed by ISIS.

In his commencement address at the U.S. Military Academy at West Point last spring, Obama defined multilateralism as a legacy of his presidency. He emphasized the need to mobilize allies and partners for collective action, and lauded international organizations from the UN and NATO to the World Bank and International Monetary Fund. Even if the shift to multilateralism has not fulfilled the hopes and expectations Obama raised five years ago in Cairo, it is a potentially important milestone. But without a more consistent and effective implementation of the policy, the promise of a better American relationship with the Muslim World will remain elusive.

Magda Shahin is director of the Prince Alwaleed Bin Talal Bin Abdul Aziz Alsaud Center for American Studies and Research at the American University in Cairo.

Huffington’s World

Surrounded by young assistants, Arianna Huffington runs the growing Huffington Post empire from a book-cluttered, glass-enclosed office looking out on a vast newsroom with row upon row of editors and reporters at computer terminals. The state-of-the-art digital media operation is a long way from her home in Brentwood, California, where just ten years ago she and a few colleagues dreamed up the news and blog site and sketched the future online powerhouse’s first layout on a scrap of paper.

HuffPost has revolutionized journalism. Today, it employs more than 800 staffers and publishes the work of tens of thousands of unpaid bloggers. As of January 2015, it ranked 28th among all U.S. websites (and 4th among news sites) and 89th globally, according to the analytics firm Alexa. In 2011, HuffPost was sold for $315 million to AOL, where Huffington’s title is president and editor-in-chief of the Huffington Post Media Group. In the same year, HuffPost started up its first international editions, in Canada and the United Kingdom. HuffPost won a Pulitzer Prize in 2012—becoming only the second digital publication to do so—for a series on wounded American military veterans.

Born Arianna Stassinopoulos, Huffington left her native Athens and studied economics at Cambridge University. She has authored fourteen books, including the recent No. 1 New York Times best-seller, Thrive: The Third Metric to Redefining Success and Creating a Life of Well-Being, Wisdom, and Wonder. She was married to (and later divorced from) California Republican congressman Michael Huffington. In 2003, Arianna Huffington herself stood briefly as an independent candidate for governor of California. A political activist and newspaper columnist, she became “addicted” to blogging in 2002. Her idea for a progressive-leaning website took hold after the re-election of President George W. Bush in 2004. Cairo Review Managing Editor Scott MacLeod spoke with Huffington in New York on January 12, 2015.

CAIRO REVIEW: The attack on Charlie Hebdo magazine in Paris last week—what do you make of that?

ARIANNA HUFFINGTON: The response has been overwhelming. And that rally [in Paris on January 11] was very significant. For me the most important thing is to keep making the distinction between the extremism and Islam. I think when we blur the lines between extremism and what Islam represents, that’s when it becomes much harder to find long-term solutions.

CAIRO REVIEW: In Syria, we have journalists being beheaded. Now we have an attack on a newspaper office in Paris.

ARIANNA HUFFINGTON: Obviously it is an attack on free expression. It is an attack on allowing diversity of opinion and tolerance of all the principles fundamental not just for democracy but for humanity. Being able to accept opinions when you disagree is at the heart of building a civilization.

CAIRO REVIEW: Did Charlie make a mistake by publishing those cartoons?

ARIANNA HUFFINGTON: We actually published those cartoons after the terrorist attack to show solidarity. We decided to publish because we felt it was important at that moment to show solidarity with all the forces of tolerance and free expression. We have written so much about Islam, what a great culture it is and what a great religion it is. We have established our credentials here in terms of Islam. For us it was important to keep making that distinction which sometimes gets blurred even by very intelligent people, who want to say it’s the religion itself that promotes violence, and it is not. It is the extremists who promote violence.

CAIRO REVIEW: Is there a line we should not cross when it comes to respecting other religions and cultures, and the hurt that certain opinions or styles of delivering opinions may cause?

ARIANNA HUFFINGTON: Look at what has been done to Christianity in the United States, the images of Jesus, including in modern art. For me this in no way diminishes my love of the heart of what Jesus Christ represents. So why is it so fragile that a satirical attack on a religious figure is seen in those terms? Satire always had more leeway, going back to Jonathan Swift, whose “Modest Proposal” about [poverty in] Ireland was eating the children. Satire is based on exaggeration. And comedy and exaggeration have always been given more leeway. If it is “where do you draw the line,” everybody would draw the line differently; everybody’s “offense line” would be drawn differently.

CAIRO REVIEW: Is this a clash of civilizations?

ARIANNA HUFFINGTON: No, I don’t think so. This is about extremism. It is not about a particular culture or a particular civilization. It is about a minority that steps into an existential vacuum, for many young people who feel disconnected from any fundamental truths and gravitate to some absolute doctrine.

CAIRO REVIEW: Is digital technology feeding tensions between cultures?

ARIANNA HUFFINGTON: In my new book, Thrive, I wrote about what I called “the snake in the Garden of Eden.” We all focus on the huge advantages in digital technology and how it has brought the world together. But there is that snake in the garden, which is the hyper-connectivity which disconnects us from ourselves and from our own wisdom and source of truth, which I believe is inside each human being whatever their religion or culture. We all have access to that center of peace and wisdom in us. I think often technology and the hyper-connectivity of technology can isolate us and disconnect us from that. That’s a huge problem that can foster extremism. Also, of course, the fact that extreme ideas can travel faster—in the same way that good ideas can travel faster.

CAIRO REVIEW: You’ve been there since the beginning of the digital revolution that has changed the landscape of journalism. Now where is it all going?

ARIANNA HUFFINGTON: I see the future in this hybrid, which is a combination of great journalism, and a platform. That’s how I see the Huffington Post, and frankly that’s how I see the future of journalism generally. At HuffPost we have about 850 journalists, editors, reporters, engineers working together all over the world. But we also have a platform that has almost 100,000 bloggers, and which can be used by anyone who has something interesting to say to distribute their opinions. To me, that is what the golden age of journalism is going to be. Revering the best traditions of journalism, in terms of long-form investigative reporting, fairness, accuracy, fact checking. And at the same time, being able to provide a platform for people who otherwise may not have a means of distribution but have something interesting to say. This is not a free for all. It has to clear a quality bar. But also to have no hierarchy except quality. At HuffPost you can have a post by [French President] François Hollande but next week a post by a student who nobody has heard of but who has an interesting opinion. That is something that the new technologies have made possible.

CAIRO REVIEW: What contribution has the Huffington Post made to journalism and society at large?

ARIANNA HUFFINGTON: The Huffington Post, going back to our founding, which is almost ten years ago, created a platform which gave voice to many people who would not have had a voice otherwise. And elevated the stature of blogging. Because we have no hierarchy, and you can have a president next to a student, it meant that not only thousands of people who would not have a voice have a voice, but it was in the context of a very civilized conversation. Our comments were always pre-moderated, so the conversation was not taken over by trolls. We could start conversations about subjects and then stay on them, and bring in new voices. The second thing is that from the beginning, the Huffington Post believed that it was very important to put the spotlight on good things. Traditionally, journalists would say “if it bleeds, it leads.” You put on the front page the crises and the disasters. But we believe that—obviously you do that—but we also believe in putting the spotlight on good things, on examples of compassion and ingenuity. Most recently, for example, when violence erupted in Ferguson, we also, at the same time, did a splash of the good things happening in Ferguson: people supporting their neighbors, the people going into schools to teach children when the schools were closed, et cetera, et cetera. We believe that the media have not done a good job of telling the full story. The full story is a mixture, of terrible things, beheadings, killings, and amazing things happening. But if you go to most papers or TV news reports, you wouldn’t know anything good is happening. So that is really part of another contribution the HuffPost has made. The global—we see ourselves as a global newsroom. Bringing together our coverage across the world, giving a platform to people across the world to communicate with each other. Breaking down barriers.

CAIRO REVIEW: Institutionally, what have you done in terms of rebuilding the media landscape? You started this on the back of a napkin in your home in Brentwood, according to the legend. Now here you are in big offices in New York City.

ARIANNA HUFFINGTON: We were part of the shift away from journalists seeing themselves as laying down the law, telling people how the world was from up on the mountaintop. We began to show that journalism is a two-way street. So at the core of what we’ve done institutionally is engagement. Engaging our readers. Listening to our readers. Having a two-way conversation with our readers, instead of bringing the truth down from the top of Mount Olympus.

CAIRO REVIEW: Can everybody create a Huffington Post? What is the trick that enabled your success?

ARIANNA HUFFINGTON: Part of it is timing. It is very hard now to create a major destination site. News travels more through social now. Through being shared. People receive the news through their feeds, friends on Facebook, tweets. The HuffPost is probably the last major site to be a destination site. People go to HuffPost. And, our social traffic has grown tremendously, too.

CAIRO REVIEW: Let’s talk about the media poobahs, as you once called them. You’re becoming a poobah yourself, I think.

ARIANNA HUFFINGTON: Things have changed dramatically in the media world since we launched. There has been a real convergence now, between traditional media doing more and more online and new media like us doing more and more of what is seen as traditional journalism. Investigative journalism, long-form journalism. As you probably saw, we just hired these great editors from the New Republic. So there is not any more this distinction, this division. It is much more blended.

CAIRO REVIEW: Former New York Times Executive Editor Bill Keller famously called you the “Queen of Aggregation,” and talked about how the Huffington Post uses unpaid writers and aggregates what other journalists are doing.

ARIANNA HUFFINGTON: He said that a long time ago. He would never say that now. That was one of the misconceptions that have now dramatically changed. The truth is that Huffington Post does a lot of different things. It does traditional journalism of the kind that won us a Pulitzer—which you don’t win for aggregation. It does aggregation. We do aggregation proudly. We believe that there is so much good stuff on the web that we don’t produce, and our job is to make everything that is the best of the web—this is a promise to our readers—to make it available. Whether we produce it, whether we aggregate it, or whether our bloggers write it. And now I think that everybody accepts that everybody uses blogs from people who are not on staff, who are not paid. It is the same principle as people who go on TV, and they are not paid, because they want a larger platform for their views.

CAIRO REVIEW: How has the relationship with AOL changed the Huffington Post?

ARIANNA HUFFINGTON: Our relationship with AOL has dramatically accelerated our growth. When we moved into these offices four years ago, we were in one country. We are now in thirteen countries, soon to be in more in 2015. When we moved here we had very little mobile. Now we have millions of readers on mobile. We had no video. Now we have an entire studio and eight hours of live video a day. It was really a great opportunity for us to achieve what we want much faster.

CAIRO REVIEW: One of the critiques of legacy media is that a few giants controlled all the media. Is being part of AOL a risk for the Huffington Post’s brand of journalism?

ARIANNA HUFFINGTON: What is great is that we have maintained our identity. We have been running the Huffington Post as stand-alone within AOL. All our international editions have maintained that HuffPost DNA.

CAIRO REVIEW: How is your profitability?

ARIANNA HUFFINGTON: We don’t split up the Huffington Post in terms of P & L [profits and losses] because it is part of AOL. But we have done a lot of great things on the advertising front, with native advertising, with sponsored sections. There are a lot of innovative things.

CAIRO REVIEW: Can the legacy media survive in the Digital Age? Did you read the New York Times internal innovation report? Can the New York Times survive?

ARIANNA HUFFINGTON: Absolutely. Totally. It is just a matter of how much they innovate online. The New York Times has done some great things, including on the multimedia front, like the “Snow Fall” [project] that they did last year. I think there is still something in our human DNA that loves print. It is not just because of my age. Even my daughters who are millennials, they love buying magazines, they read books. So all these predictions that print is dead have been proven wrong. Every time something new is invented we think it will completely supplant the old. We thought television would supplant movies. It hasn’t. We thought digital would supplant live events. Far from it. Live events are more popular than ever. In the same way, the web will not supplant print.

CAIRO REVIEW: Give the New York Times some advice.

ARIANNA HUFFINGTON: What they are doing is good: this convergence between what they are doing in print and doing more and more online. A lot of their writers now blog, they tweet, they use social media.

CAIRO REVIEW: What are your plans for the Huffington Post?

ARIANNA HUFFINGTON: We are launching HuffPost in Arabic in May. We are launching in Australia. We are looking at where we are going to be launching next. We will have fifteen international editions. The goal is to keep expanding.

CAIRO REVIEW: What is the journalistic rationale and business rationale for that?

ARIANNA HUFFINGTON: Journalistically, we are a global media company. So we want to be around the world. These editions also act as bureaus. Let’s say when there is an election in the United Kingdom, our UK reporters take the lead in the coverage. When there is a World Cup in Brazil, our Brazilian reporters take the lead. There is more and more global collaboration. We have a very complex translation system, so stories are quickly translated. On the business front, these are all JVs [joint ventures], commercial partnerships, which makes it easier for us to move faster. They are all fifty-fifty partnerships with major media companies, whether it is Le Monde in France, or El País in Spain.

CAIRO REVIEW: What is the aim of Huffington Post in Arabic?

ARIANNA HUFFINGTON: It is a great opportunity both to tell the story of all the problems and the crises, but also to tell all the positive stories and good things in the Arab World, which so often are lost in the coverage of the violence and the crises.

CAIRO REVIEW: I understand that you are going to operate out of London rather than an Arab capital. Is there a risk for contributors from the Arab World?

ARIANNA HUFFINGTON: We don’t think that way.

CAIRO REVIEW: An advantage of the legacy media, publications such as the New York Times, is that readers can go there for a good grounding in what they need to know. In the digital era there is a cacophony of information and voices. How does the reader find the truth in all this?

ARIANNA HUFFINGTON: Earning the trust of readers is key. At the Huffington Post, we pride ourselves on having a whole standards process, having our editors and reporters trained in fact checking and verification. We have earned our readers’ trust. But that happens gradually. Obviously when you first emerge, the reader has to test you.

CAIRO REVIEW: Does the cacophony of information and voices in the digital media world today leave the public confused and democracy diminished?

ARIANNA HUFFINGTON: Quite the opposite. Our changing media world has the potential to inform the public more than ever before and also strengthen democracy. People are tired of being talked at. They want to be talked with. The online world is now a global conversation, with millions of new people pulling up a seat at the table every day—indeed, nearly three billion people will join the Internet community by 2020. And new media and social technologies have created an explosion of ways for people to connect with the content they value. These connections have fueled revolutions, caused giant corporations to roll back policies, and brands to engage with consumers in totally new ways. People have gone from searching for information and data to searching for meaning, often by trying to make a difference in the lives of others. For all these reasons and more I believe we are living in a golden age of journalism for news consumers.

CAIRO REVIEW: How do you evaluate the performance of the mainstream traditional media in the United States today?

ARIANNA HUFFINGTON: The traditional media too often suffer from ADD [Attention Deficit Disorder]. They are far too quick to drop a story, even a good one, so eager are they to move on to the next big thing. Online journalists, meanwhile, tend to have OCD [Obsessive-Compulsive Disorder]—we chomp down on a story and stick with it, refusing to move on until we’ve gotten down to the marrow. But the larger point is that the never-very-useful division between “old media” and “new media” has become increasingly blurred. Digital and traditional media have a great deal to learn from each other—and the evidence is all around us that they are learning from each other. Increasingly, purely online news operations like the Huffington Post are engaging in investigative journalism. And mainstream traditional operations are adopting more and more of the digital tools that can bring in the community to make it part of the creation of journalism, through reports from the ground, through video, through Twitter feeds, through all the new media available to us. There will, of course, always be mistakes, or reporting that’s not skeptical enough of sources or the government. But now there are many more outlets and voices that will point that out and course-correct the story.

CAIRO REVIEW: What are the prospects for better election coverage in the 2016 presidential election year? Is it political journalism that is broken, or politics itself?

ARIANNA HUFFINGTON: Those who represent us—and those who want to represent us—are plagued by short-term thinking, and obsessed with fundraising. But the media bear plenty of responsibility for the diminished conversation. Especially during presidential campaigns, our media culture is locked in the “Perpetual Now,” constantly chasing ephemeral scoops that last only seconds and that most often don’t matter or have any impact in the first place, even for the brief moments that they’re “exclusive.” This was the jumping-off point for a great piece by HuffPost’s Michael Calderone about the effect that social media have had on 2012 campaign coverage. “In a media landscape replete with Twitter, Facebook, personal blogs and a myriad of other digital, broadcast and print sources,” he wrote, “nothing is too inconsequential to be made consequential. Political junkies, political operatives and political reporters consume most of this dross, and in this accelerated, 24/7 news cycle, a day feels like a week, with the afternoon’s agreed-upon media narrative getting turned on its head by the evening’s debate. Candidates rise, fall, and rise again, all choreographed to the rat-a-tat background noise of endless minutiae.”

CAIRO REVIEW: How do you size up your favorite new media start-ups: ProPublica, BuzzFeed, Vice, First Look Media, or others?

ARIANNA HUFFINGTON: We really are living in a golden age of journalism for news consumers. And there’s no shortage of great journalism being done—including by all the outlets you name—and there’s no shortage of people hungering for it. And there are many different business models being tried to connect the former with the latter. The truth is that there are going to be more and more great digital media players, sites that create more engagement in ways that are good for all of us. So I welcome all of them.

The Promise of Digital

The journalism craft is undergoing massive changes, nearly unprecedented in their scope, and full of uncertainty for the people who have traditionally called themselves journalists. What’s been unclear for years, and to some extent remains so, is whether the emerging media and journalistic ecosystem will support the kind of information resources we need in our communities.

Those communities are geographic, and they are topical. Topical communities, increasingly, are well served by a variety of media sources. Geographic communities, especially local and regional news in the United States and many other countries, have been losing some ground. Yet, there is reason for optimism.

The primary catalyst for the downturn and for my optimism about a resurgence is technology. The tech revolution of the past several decades has had profound consequences. Today people are as likely to seek the news on their smartphone, tablet or computer as they are in a newspaper or on a television set. This has severely disrupted traditional business models as readers and advertisers have moved online. But the same technology has also created profound opportunities. The tools of media creation have become not just ubiquitous but enormously flexible and easy to use. We are creating, as a result, a radically more complex and diverse media ecosystem.

Humanity has learned that diverse ecosystems are more stable than ones that are less diverse. The dangers of monocultures are well understood despite our reliance on them in, among many examples, modern farming and finance. When society relies on a monoculture that fails, the results are catastrophic; when “too big to fail” U.S. financial institutions became insolvent a few years ago, for instance, taxpayers were forced to bail out bankers and their shareholders.

A diverse ecosystem features ongoing success and failure. Entire species come and go, but the impact of losing a single species in a truly diverse ecosystem—however unfortunate for that species—is limited. In a diverse and vibrant capitalist economy, the failure of enterprises is tragic only for the specific constituencies of those enterprises. But assuming that we have fair and enforceable rules of the road for all, what economist Joseph Schumpeter called “creative destruction” ensures long-term economic sustainability.

The journalistic ecosystem of the past half-century, like the overall media ecosystem, was dominated by a small number of giant companies. Those enterprises—aided by governmental policies and manufacturing-era efficiencies of scale—controlled the marketplace, and grew larger and larger. The collision of Internet-driven technology and traditional media’s advertising model was cataclysmic for the big companies that dominated.

But is it catastrophic for the communities and society they served? In the short term, it’s plainly problematic, at least when we consider Big Journalism’s role as a watchdog—though the dominant companies have served in that role inconsistently, at best, especially in recent years. But the worriers appear to assume that we can’t replace what we will lose. They have no faith in the restorative power of a diverse ecosystem, because they don’t know what it’s like to be part of one.

Blooming of the News Deserts

The emerging ecosystem’s diversity stems in large part from an element of capitalism popularized by Silicon Valley: entrepreneurship. The lowered barrier to entry has encouraged countless young (and not so young) people around the world to try their hand at a media startup. And the last few years have seen an explosion of those startups—including some well-financed for-profit and not-for-profit organizations. Scores of independent local news operations have emerged. Many have failed, including Bayosphere, a user-driven local news site in San Francisco that I launched a decade ago. But some are succeeding, including a number of information services that serve local communities that had been turning into what some call “news deserts.” A list of promising local sites, maintained by journalist Michele McLellan, is growing, she reports, with progress on the revenue front as well as the journalistic one. It seems likely now that local news will be provided mostly by a legion of small startups that can never get very big. This may not be an interesting investment for Silicon Valley’s venture capital community, but these outlets will be a key part of journalism’s sustainable future.

Some valuable local news sources were never intended to generate revenue in the first place. In a Silicon Valley community where I lived for many years, our neighborhood had a simple mailing list where neighbors contributed valuable, newsworthy information; much of it amounted to outright journalism by any standard. Countless bulletin boards and mail lists are doing this in countless places, a parallel media universe that remains mostly invisible. Several startups are working to expand this notion, including Front Porch Forum, which has been systematically offering neighborhoods a platform aimed at improving local information.

What we called “placeblogging” a few years ago—individual blogs covering local affairs—has become a more organized medium. In northern New Jersey, for example, Debra Galant, working with a nearby university, has turned Baristanet into a vibrant source of community news. Similar efforts have sprung up in many other places as well.

Some of the best local journalism has been explicitly not-for-profit. One such site has consistently produced excellent journalism in Vermont, where local news organizations (as elsewhere) have shrunk drastically in recent years. Foundations and individuals have supported its work.

Communities of interest are expanding at a rapid rate, meanwhile. One of the pioneers in this category is Nick Denton, a former Financial Times journalist who launched the Gawker Media collection of blogs, which has become a large and profitable venture. Josh Marshall’s Talking Points Memo began as a political blog and is now a serious media player. Om Malik’s GigaOm, a combination of technology blogs, research, and events, has pushed the boundaries as well. Walt Mossberg and Kara Swisher started the All Things Digital site with Dow Jones and the Wall Street Journal; last year, with funding from Comcast Corporation and others, they moved their technology-news operation to a new and well-regarded company called Re/code. Skift, a service aimed at the travel industry, also fits into what its founder, Rafat Ali, calls “vertical” news: targeting a niche and covering it deeply. There are countless others.

In some ways, verticals are the new trade journals and newsletters, which have always had an audience. They are growing in number and value. One of the most interesting recent startups in this area is News Deeply, a new media and technology firm co-founded by former ABC journalist Lara Setrakian, who began with a news site called Syria Deeply and is moving into other targeted topic areas. (Note: I am an advisor.)

The ecosystem of quality work extends far beyond what we’ve traditionally called journalism, moreover. Consider the deep reporting by advocacy organizations that is readily available online. No formal journalism organization does better reporting—collecting documents, interviewing, etc.—on human rights issues than Human Rights Watch. No news operation matches the quality of reporting on civil liberties that the American Civil Liberties Union does routinely. These and many other non-governmental organizations and advocacy groups are probing and exposing the abuses of governments and other powerful institutions. Yes, they are advocates. But they are transparent about their worldviews—indeed, like many news organizations outside the United States. Advocacy journalism has a long and proud history. From my perspective, the modern advocates are part of the journalistic ecosystem.

We’ve hardly begun to see the impact of technology in a longer-term sense. The future includes the marriage of data (from sensors, not just traditional data sources such as geographic or census data) and media, where we can remix vital information from a variety of sources to better understand our world. When we tap the collaborative power of this technology, the result will be profound. A particularly intriguing example is Safecast, a Japanese project that collects radiation data from the region where the 2011 disaster—earthquake, tsunami, nuclear meltdowns—took place.

Billionaires to the Rescue?

New media operations such as BuzzFeed and Vox Media have been adding significant staffs to do investigative, high-quality journalism. They’ve joined some not-for-profit enterprises such as ProPublica and First Look Media, which were funded by wealthy individuals who wanted to see serious journalism survive. Let’s look at several of these:

—BuzzFeed has emerged as one of the most important new media companies of the decade. It has transcended its early “listicle” reputation—publishing lists of trivia as well as adorable cat pictures and the like—and now boasts a staff of excellent journalists. For example, it lured Wired magazine’s high-profile editor and writer Mat Honan to head up its Silicon Valley bureau.

—Vox Media has been collecting talent and investment dollars in the past several years. Its best-known news product, The Verge, covers technology, science, art, and culture. But its new Vox site is aimed at a more general audience. And, like BuzzFeed, it has created its own software platform to manage content and data—what its managers hope is a competitive advantage that could plausibly also become a platform others license.

—ProPublica started with millions of dollars in funding from the Sandler banking family, and has become a powerful presence. It consistently produces some of the best investigative journalism around, and boasts two Pulitzer Prizes, an astonishing achievement for an organization not even a decade old. (Note: I serve on a board of directors with ProPublica’s founding editor.)

—First Look Media is a group funded by eBay founder Pierre Omidyar in collaboration with investigative journalists Glenn Greenwald and Laura Poitras. Its first product, The Intercept, has boasted many scoops and deep looks into national security topics. (Note: Omidyar’s investment arm was a backer of my failed 2005 startup.) First Look has a hybrid business model: the journalism will apparently be not-for-profit, while another part of the company works on new tools and startup ideas that it hopes will scale into bigger businesses.

Getting millions of dollars from rich benefactors is not a sustainable business model for more than a tiny number of journalists. This is why the large recent investments we’re seeing in companies like Vox Media and BuzzFeed are so intriguing. Although the new media investment boom unquestionably has bubble-economy elements—there will be a reckoning sooner rather than later, I believe—it is gratifying to see optimism after so much doomsaying.

Business models, too, are seeing innovation—or at least useful experimentation. Kickstarter and other crowd-funding services are a big help, though not enough by themselves to do more than help startups get launched. We need to find ways to create financial sustainability, with recurring revenues, not just the ability to start. Advertising was traditional media’s bread and butter; one controversial new form, “native advertising,” has sponsors creating content for the site itself. This is fine as long as it’s labeled clearly.

Which of the recent entrants into the media marketplace will survive, much less prosper? We don’t have an answer yet. But we are starting to see some business strategies that can work for some operations. The Texas Tribune, a nonprofit, has worked hard to create and sustain a number of different revenue streams, including corporate sponsorships and signature events. In the end, news organizations will have to try everything.

One of the most essential issues for many who care about journalism is whether the best journalism organizations will survive. The New York Times, among others, may be irreplaceable. Without the voices and gravitas of the best, we would all be worse off. But optimism is warranted.

The Times has struggled financially in recent years, and it has not yet found a way to make what an executive from another news company called “digital dimes” a sufficient replacement for print dollars. But the paper is making progress. Last year’s leak of an internal strategy report demonstrated its growing understanding of the need to transform into a “digital first” organization. The report showed that Times executives are fully aware of the challenges and, perhaps belatedly, moving faster to develop new revenue streams to complement the paper’s superb journalism and digital experiments, in which it continues to invest. The paper’s efforts with “paywalls”—charging subscription fees—have had some success, but are unlikely to provide a solid transition from the print era to a digital future. No one can predict with any certainty whether the Times will make it work. Yet however tragic it would be if the Times succumbed to an economic reality it could not handle—even if every one of today’s major journalistic institutions disappeared—quality journalism would not die.

One of the most imponderable questions is the impact of technology from a new front. Search and social media, which came to the fore only in the past decade, are becoming a primary method by which journalists help our audiences find what we do. Much of what younger people see and watch, in particular, was shared with them by others. Major services such as Twitter and Facebook are increasingly where sharing and conversations start, and the Google database, which contains trillions of data signals created by Web users, is the overwhelming source for information when people search online.

For journalists, these enormous players constitute a double-edged sword. News organizations increasingly are using social media platforms as content platforms for their work, or at least the conversation surrounding their work. That’s where the audiences are, at least part of the time. But this is a short-sighted tactic. To the extent that journalists rely on third-party platforms that they do not control, they are leaving themselves vulnerable to the whims and business needs of those platform operators. Facebook and Google are business competitors already, and Twitter is becoming one. How long will news organizations feed so much of what they do into their competition?

Some potential roadblocks, apart from the question of whether we’ll find solid new business models, are looming. One is the growing power of not just centralized services like Facebook and Google, but the potentially overwhelming control telecommunications companies—the ones providing our connections to the Internet—are asserting in a digital age. If our Internet service providers can choose which bits of information get to their destinations in what order and at what speed, or whether they get to their destinations at all, freedom of expression and the ability to innovate without permission will be at risk. Governments are asserting more control as well. The re-centralization of the Internet could lead to tight press controls around the globe.

Media Literacy

For all the real and potential obstacles, I am optimistic about the future of journalism. One reason is a growing recognition that audiences can no longer be passive consumers of news. We all need to use, not just consume, information from all sources. This means being skeptical, using judgment, asking more questions, reading/watching more diverse sources, and understanding how media is used to persuade and manipulate. This all falls under the related categories of “media literacy” and “news literacy.” Traditional media organizations missed a big opportunity in the past by not being the primary advocates for these skills. A small but valuable example of this, and a good start toward boosting news literacy, has been visible in coverage of breaking news where journalists explain not just what they’ve learned but also what they don’t know yet.

Telling what we don’t know is part of a journalistic principle I consider essential in this new century: transparency. It will no longer be enough to be thorough, accurate, fair, and independent. Add transparency to those principles, as the best journalists will do, and they’ll earn deeper trust from their audiences. Trust will add up to value in some information marketplaces.

As we move toward a new journalistic ecosystem, it’s easy to see all the problems and fret about the future. I’d rather look at all the experiments in providing information and paying for it. Focusing on the latter is what keeps me optimistic. Indeed, I am envious of my young students. They can create their own futures. In my early career as a journalist, it cost a lot of money to launch a media product. Now thanks to communications technology there is almost no barrier to entry.

Dan Gillmor is a professor of digital media literacy and founding director of the Knight Center for Digital Media Entrepreneurship at Arizona State University’s Walter Cronkite School of Journalism and Mass Communication. He is the author of Mediactive and We the Media: Grassroots Journalism by the People, for the People. He formerly worked as a reporter at the Kansas City Times and the Detroit Free Press. He was a columnist for the San Jose Mercury News from 1994 to 2005, where he was credited with establishing the first blog for a daily newspaper. On Twitter: @dangillmor.

Death of the Newsroom?

For anyone interested in discovering how the business model for American journalism has changed over time, here is a thought exercise. Consider the following institutions:

—CBS News Radio, the House of Murrow, the leading source of breaking news for Americans by the end of World War II.

—New York Herald Tribune, the finest paper in the United States for much of the twentieth century (yes, a smarter and better written paper than the New York Times), and the home base for the indispensable Walter Lippmann, the most influential columnist of the century.

—Saturday Evening Post, which featured the work of the country’s greatest illustrator, Norman Rockwell, and presented its millions of readers with news, views, and diversions.

—LIFE magazine, a pillar of the vaunted Time Inc. media empire and the most important showcase for the skills of photojournalism.

Next, let’s pick a historical moment. Somewhat arbitrarily, let’s go back fifty years and look at 1964. If you asked any educated, engaged American adult who paid attention to world and national affairs in 1964, that person would have agreed that all four of those journalistic institutions were indispensable. It would have been hard to imagine American society without them.

Within a few years, though, all would be gone (or so diminished that they were mere shadows of themselves). The rise of television news hollowed out CBS Radio and ultimately killed off LIFE as we knew it. A printer’s strike finished off the Herald Tribune, leaving the quality newspaper field to the Times alone. Corporate ownership pulled the plug on the Saturday Evening Post when tastes changed and the magazine started racking up annual losses in the millions.

Now, let’s jump ahead to 1989, halfway to the present from 1964. The lineup of indispensable media would look different.

—The Times had not only outlived the Trib by then, but had surpassed it in almost every respect.

—National Public Radio, and its television sibling, Public Broadcasting Service, brought intelligent, original reporting to the airwaves and won the loyalty of millions.

—CNN, the brainchild of billboard businessman Ted Turner, established the 24-hour news cycle by putting journalism on television round the clock and across the globe.

—Bloomberg, the business news service, was not even founded until 1982 but burst on the scene and soon became an essential tool for traders and, later, a general business news service for readers worldwide.

Thus, at quarter-century intervals, we can see the phenomenon known to economists as “creative destruction” at work, with a vengeance. The older media, despite their eminence in the journalism establishment and their deep ties into the lives of their audiences, were swept aside and replaced, often by upstarts less than a decade old.

And all that happened even before the Internet came along to “change everything.” In light of such a turbulent history, it behooves us to look deeply into the history of news organizations. Where did they come from? How did they pay the bills in earlier periods? Is there anything to learn from the days before the Huffington Post, YouTube, and social media?

Nowadays, it is commonplace to refer to the news media that predates the Internet as “legacy media.” Just what is that legacy?

Printers and Pamphleteers

In America, the history of selling the news can be said to have begun in 1704, when John Campbell, the postmaster in Boston, got tired of writing his longhand weekly summary of interesting developments for his friends. So, copying the model of news journals in London, he went to a nearby “job printer” and launched something never seen before in North America: a printed weekly newspaper. The world took little notice, but Campbell’s new venture, titled The Boston News-Letter, began the long rise of “big media” to the pinnacle of power and profit that it reached in the late twentieth century, just before going through a near-death experience in the last fifteen years. Over the course of those three centuries, news has been carried in many different kinds of vehicles. In broad terms, the news business has also operated under a succession of prevailing business models. And each time the business model changed, a new philosophy of journalism was needed. Repeatedly, journalism has evolved slowly over decades, only to face a crisis or some external shock in which innovators could flourish.

Campbell and other colonial-era newspaper editors and printers, including the estimable Benjamin Franklin, all operated in a business world that had several key characteristics. Most producers of newspapers were printers, and they worked in a shop, which was the era’s distinctive form of productive activity other than farming and fishing. In the typical eighteenth century shop, whether it was a cooperage or a chandlery, a brewery or a printshop, a “master” presided. A master in any field had two distinguishing features: he had all the skills needed to take the raw materials of his trade and turn them into finished products, and he had enough capital to be able to afford a workplace and the tools and materials needed to get started. As in most other kinds of shops, the master printer was assisted by a journeyman (who had the skills of the trade but lacked the capital—so far—to open his own shop) and an apprentice (who lacked both skills and capital but whose contract with the master entailed a legal right to be taught the mysteries of the trade). Each shop had a small crew, working in a strict hierarchy.

For printers in America, the greatest challenge was to import a press and a set of metal letters from England, which was a major capital outlay. An economist might observe that printing had a higher “barrier to entry” than many other shop-based businesses. The technology imposed further conditions. Presses were operated by hand, and inks were slow to dry, so there was a physical limit on the number of papers a printer could turn out in a week—on the scale of the low hundreds of copies. Most of these newspapers were offered only on a subscription basis, a year at a time, and they were quite expensive. My research indicates that they were priced along the lines of a contemporary investors’ newsletter, costing the equivalent of several thousand dollars a year. It is worth noting that the subscribers were paying nearly the full cost of the paper (plus a profit), since there were very few ads in the early papers.

In 1704, as newswriting conventions were just being established, most items in a newspaper read more like letters. They were discursive, they took a lot for granted, and they assumed that the reader would continue reading to the end. Often the contents of a newspaper would include many actual letters, sent to the postmaster-editor or to his friends, and they would be printed because they were so informative. The early papers also contained a regular flow of proclamations from the Crown or the provincial authorities, always conveying a one-way message from those at the top of the social hierarchy to those below. Newspapers in America also aggregated news from Europe. The printer would simply subscribe to one or more papers from England, and when they arrived through the postal service, the American printer would lift items verbatim from the source paper—never minding if the material was weeks or months old. If news of Europe had not reached the colonies, then it was still new to the colonists. Most early newspapers were only a page or two long, and some left blank space for comments.

The “public prints” also carried plenty of information interesting to merchants, ship captains, and others involved in the vast Atlantic trading system, including offers of slaves for sale. In addition, papers routinely carried news about oddities such as lightning strikes, baby goats born with two heads, meteor showers, and the like. Such strange occurrences were often presented for more than their ability to astonish; they were framed as occasions for readers to reflect on how these signs and portents revealed God’s providence, and many were explicitly presented as episodes of the wrath of God. Another common type of item involved reports of public executions; these often included descriptions of rather leisurely procedures designed to torture the miscreant before sending him (or her) to meet the Creator. In describing such burnings, hangings, and stranglings, the newspapers were advancing the social purpose of public executions, which was to caution and intimidate the general population against a life of depravity. In addition, newspapers offered a grab bag of poetry, quips, jokes, and whatever else came to the printer’s mind. In that sense, dipping into a newspaper 300 years ago was not all that different from doing so today: you never knew what you might find there.

With rising levels of population and economic activity in the colonies, newspapers slowly began to spread and grow. By the 1760s, there were a few dozen titles, mostly in port cities from New Hampshire to South Carolina. They catered to an elite audience of literate white men who needed information and could afford to pay for it. By necessity, they were small-scale, local operations. No printer owned more than a single newspaper. A few copies could be sent to distant places through the postal service (where they enjoyed a special low rate), but they remained overwhelmingly modest, local affairs. The only way that most people of middling ranks could read a newspaper was by finding one in a tavern, where many a barkeep would share his own paper with his customers by hanging it on a post (hence the popularity of the name Post in newspaper titles).

The newspaper trade suffered a blow in 1765, when Parliament imposed a tax on paper. The Stamp Act required that all paper products bear a stamp proving that the tax had been paid. The tax fell heaviest on printers, who considered paper their stock in trade, and they felt particularly aggrieved. Several printers went so far as to declare the death of newspapers and printed images of tombstones on their front pages. As it happened, Parliament lifted the tax, and newspapers survived. But the Stamp Tax left a bitter taste among printers, and more of them opened their pages to politics and began sympathizing with the radicals in the patriot movement. A decade later, they would be helping to lead the American Revolution.

Over the course of the eighteenth century, another form of journalism arose—the pamphlet. These were much cheaper than newspapers and sometimes widely distributed, but the writing, printing, and distribution of pamphlets was not a real business. These were done by amateurs for non-economic motives. Indeed, it has been observed that newspapers were like stores, and pamphleteers were like peddlers. They were hit-and-run efforts—usually political, almost always anonymous (or pseudonymous). The pamphleteers managed to inject a big infusion of politics into American journalism, advancing political arguments that could not be risked by printers of regular newspapers. In some respects, the pamphleteers resemble the bloggers of our times, ranting about political topics not to make a living, but to have an impact.

The pamphleteers engaged in a polemical debate that grew increasingly polarized in the early 1770s over the issue of separating the colonies from Britain. Cautiously at first, the regular newspapers joined in the great debate, and—driven by their readers—they became identified with the Whig or Tory cause.

During the early years of the Republic, papers not only became more political, but they also became more partisan. Indeed, newspapers predated American political parties and provided the first nodes around which the parties grew. Some papers were founded by partisans such as Alexander Hamilton (or, as in the case of his rival Thomas Jefferson, by surrogates), and newspaper editors helped readers figure out which candidates for office supported Hamilton’s Federalists and which ones supported Jefferson’s Republicans. In return, victorious parties rewarded loyal editors with lucrative government printing contracts and showered benefits like reduced postal rates on the whole industry.

Such, then, was the kind of journalism that America’s founders were familiar with. It was local, small-scale, independent, and highly argumentative. One thing it did not have was much original reporting. Indeed, throughout the first century of journalism in America, there was no one whose job was to gather facts, verify them, and write them up in story form. Opinions were abundant; facts were haphazard.

Hail to the Penny Press

During most of the nineteenth century, the news business was a high-technology, innovative field, often at the forefront of deep changes sweeping through the U.S. economy. It may be hard for us today to think of newspapers as innovators, but they once were, and it may well be that the failure to continue to innovate is a major source of newspapers’ current problems.

Beginning in the 1830s, newspapers pioneered in creating the first truly mass medium. Led by Benjamin Day, who founded the New York Sun, and his great rival James Gordon Bennett, who founded the New York Herald, newspaper editors discovered the simple but powerful truth that there is money to be made in selling down-market. The founders of these “Penny Press” papers brought a profoundly new model to American journalism, based on deep and simultaneous changes in economics, technology, marketing, and philosophy.

First, Day decided to go after an under-served market: the literate from the middling ranks of society. He wrote for tradesmen, clerks, laborers, anyone who could read. His motto for the Sun was “It shines for all”—and he meant all. To make his paper affordable, he slashed the price from six cents a copy to a penny. That allowed him to take advantage of simple arithmetic: if you multiply a small number by a very big number, you end up with a pretty darn big number. In his case, the Sun began selling many more copies than anyone had before—rather than hundreds a week, he was selling thousands a day. So, his small purchase price was more than offset by his large circulation figures.

To make his paper even more affordable, Day changed the business model in another way: readers no longer had to subscribe for months at a time. They could lay down a penny for the Sun today and skip it tomorrow. This put tremendous pressure on Day to meet an entirely new problem: his paper would have to be interesting every day. He met that challenge by re-defining news. Instead of old, recycled news from Europe, letters from ship captains, and official proclamations from New York’s government, Day discovered the appeal of telling New Yorkers short, breezy stories about the calamities and strange doings of regular people. The Sun’s pages were filled with stories about suicides, riots, brawls, and the fires that plagued wooden cities like New York. People loved it, and they voted with their pennies for Benjamin Day’s new kind of journalism day after day. Soon, the circulation was soaring and money was rolling in. News was now defined as whatever lots of people found interesting.

Day was also fortunate in his timing, because the decade of the 1830s was a time when inventors were applying a new technology to a host of age-old human problems. That new technology was steam power, which was being applied to such problems as powering ships that could travel upstream and the new-fangled railroads. One of the earliest adaptors of steam power was the printing trade, which had relied since Gutenberg’s time on the power of human muscles to raise and lower the heavy platen that pressed paper and ink together. With the introduction of steam-powered presses (and fast-drying inks), it was now physically possible to produce enough copies of a newspaper in a few hours to meet the demands of thousands of ordinary people in a growing city like New York.

The success of Day and Bennett and the imitators who soon followed in other cities had some powerful unintended consequences. One was a radical new division of labor, which brought about the de-skilling of printer/editors and a radical flattening of the organizational chart. Once, newspapers had been produced by a master printer, assisted by a journeyman (who could expect to become a master one day), and an apprentice (who could in turn expect to become a journeyman one day). But, with the growth in scale of newspapers, the owners forced through a deep restructuring. The new big-city dailies would be run by one person, with the title of publisher. The publisher was the sole proprietor and was responsible for organizing the entire enterprise.

As papers grew, publishers began appointing assistants, along these lines:

—A chief of production to oversee printing (a trade that, thanks to steam power, now involved tending machines rather than the traditional hand skills);

—A head of circulation to make sure all those thousands of copies got distributed every day;

—An advertising director, to run the growing volume of ads, which would soon make up a giant new revenue stream;

—An editor, to preside over the newsroom, where the new job of reporter was spreading and would eventually develop into specialties such as covering crime or sports.

Called by various titles, these four individuals would all see their domains grow in the coming decades, until newspapers were employing hundreds of workers in specialized roles. By the 1840s, it was already dawning on journeymen that they were not going to learn all the skills of this new trade, that they would never accumulate enough capital to go out on their own, and that they would never be their own masters. They were now doomed to a life of wages.

The rise of popular and profitable newspapers had another profound consequence: publishers like Day and Bennett declared their separation from the parties and became politically independent. Observing that they won an “election” every day—in which the ballots were the readers’ pennies—publishers said they would stand apart from the parties and pass judgment on the performance of all office-holders. They would do so in the name of “the people,” whom the publishers now claimed to represent. They would act as the people’s tribune (hence the popularity of that name in the newspaper trade) and “lash the rascals naked throughout the land.”

Near the end of the nineteenth century, all these ideas were taken to their ultimate fulfillment by a later generation of mass-market newspapers, known as the “yellow press.” Led by Joseph Pulitzer and his rival, William Randolph Hearst, the yellow papers brought tabloid journalism to new heights. Readers loved it, and by the year 1900, the yellow papers passed the circulation milestone of a million copies a day. (Let’s do the math on that: 1 million purchasers x 1 cent = $10,000 a day in income from circulation alone. That’s $3.6 million a year. Add a comparable amount of income from advertising, and you have a huge enterprise.) The money surged into these papers, flowing in two broad streams of revenue—one from circulation, both regular subscribers and newsstand sales, and another from advertising, both “display” ads and classified. Readers grew accustomed to paying less than the real cost of the newspaper, because advertising brought in so much money. In another case of good timing, the era of Pulitzer and Hearst coincided with the rise of big-city department stores like Macy’s and Gimbels, which regularly bought full-page ads to carry on their rising competition.

Rise and Fall of Corporate Empires

In the early twentieth century, some leading figures in American journalism pushed back against the rise of the tabloid style. They aspired to make journalism into a true profession—along the lines of law and medicine—with a defined canon of knowledge, a set of standard procedures, and a mechanism for certifying new journalists and policing the ranks of practitioners. None other than Joseph Pulitzer himself gave this movement a big lift when he decided to leave a major portion of his huge fortune to Columbia University in order to endow a school of journalism and a set of prizes intended to elevate the practice of journalism by rewarding each year’s best work.

Another major supporter of the drive to raise the standards of journalism was Adolph Ochs, the publisher who bought the failing New York Times in 1896 and set about trying to turn it into “must reading” for the American establishment. Ochs asserted that his paper would provide all the news that respectable people needed “without fear or favor,” regardless of parties, religions, or other interests. Through his involvement on the board of The Associated Press and other industry groups, Ochs strove to get his fellow publishers to produce papers that were serious, responsible, and decent.

Pulitzer, Ochs, and other reformers thought their biggest problem was achieving real independence. That was the foremost quality they associated with professionalism (and interestingly, not “objectivity”), and they understood journalistic independence not just in political terms. Yes, they believed that newspapers should, of course, stand apart from the political parties. They should not carry water for either side in their news coverage, and they should editorialize freely in a non-partisan manner in favor of the best candidates and policies. But they also had a deeper concern: they wanted to liberate the nation’s newsrooms from the pernicious effects of hucksterism, ballyhoo, and puffery. They wanted to stamp out the influence of the emerging field of press agentry, to get their own staff reporters to stop taking bribes for favorable stories, and to assert the inviolability of the newsroom. The goal was to create a wall of separation between “church and state,” between the newsroom and the advertising side of the paper. As newspapers became big businesses, the professionalizers hoped to insulate reporters and editors from the imperatives of making money.

As businessmen themselves, most publishers did not see the greater threat to professionalism that they actually faced—the gradual transformation of the news industry from stand-alone, family-run small businesses to the corporate form of ownership that would sweep almost the entire field in the coming decades. It was the new business model, dominated by the for-profit, publicly traded corporation, that transformed journalism in the mid- to late-twentieth century and left it vulnerable to collapse.

It was often great fun while it lasted. One of the pioneers in building the big media companies was William Randolph Hearst. Heir to an enormous fortune, Hearst had the means to build the first major media empire. Keeping his family-owned newspaper in San Francisco, Hearst bought a failing paper in New York City in 1895. And he did an unusual thing: he kept the Examiner, so he now owned two newspapers. Later, he founded new newspapers—in Los Angeles, Chicago, Boston, and elsewhere—and kept ownership of all of them in his hands, thus dictating their editorials and giving the Hearst press an increasingly conservative, isolationist outlook that mirrored his own views. But he did not stop there. He also bought magazines, including the muckraking Cosmopolitan, then ventured into new fields as they came along—newsreels, radio, television. By the time of his death in 1951, the Hearst Corporation was a mighty media monolith.

In the 1920s, radio manufacturers like the Radio Corporation of America (RCA) and Westinghouse—which were already large, profitable, publicly traded corporations—became darlings of Wall Street when they figured out how to make money in radio not just by building the receivers that people craved, but by broadcasting programming as well. In short order, companies like RCA’s new subsidiary NBC (National Broadcasting Corporation) began adding to the corporation’s bottom line by creating “content” for a growing audience and then renting that audience out to advertisers and commercial sponsors. In the new era, RCA could make money on both the hardware of radio and the programming. All that remained was to build the network of affiliated radio stations across the country, which allowed NBC to profit many times over from the same content. In that setting, the cost of putting a little news on the air—to satisfy the broadcast regulators’ requirement that radio operate in “the public interest”—was a tiny cost for running a very lucrative enterprise.

The emerging broadcasting powerhouses of NBC and CBS (Columbia Broadcasting System) were highly profitable entertainment companies that ran their news divisions for decades as “loss leaders.” The vaunted CBS Radio News operation at the Tiffany Network, the home of Edward R. Murrow and the other pioneers of radio news, was paid for by the jokes of Jack Benny and his sponsors—Chevrolet, Jell-O, Grape Nuts, and Lucky Strike. When television came out of the laboratory after World War II and entered consumers’ homes in the 1950s and 1960s, the same corporate and regulatory scheme that dominated radio took over the new medium, and television news grew up almost entirely in the corporate domain overseen by NBC’s David Sarnoff and CBS’s William Paley, whose first commitment was to make money for their stockholders.

And make it they did. In the process, they became almost entirely dependent on advertisers. Their industry depended on sending signals through the airwaves to consumers who pulled those signals in through an antenna. At the time, no one could figure out a practical scheme for charging them to receive the signals, so broadcasting was originally founded on a free model. NBC and CBS—and their rivals and affiliates—gave their content away for free in order to assemble the largest possible audience, so they could sell that audience to advertisers. Like the big automakers, a small number of sellers—including, eventually, ABC (American Broadcasting Company)—dominated the market. Although each one was big, they all wanted to be bigger. The logic of the situation was simple: if some viewers or listeners are good, more are better. Best of all would be to rope in every single radio listener and television watcher. To do that, of course, broadcasters would have to cater to mass taste and shun partisan politics. As a result, the news divisions in corporate broadcasting needed to acquire a “cloak of invisibility”—an ethos of factuality and detachment that would avoid offending Democrats and Republicans, or anyone else for that matter.

In the world of print journalism too, publishers and investors kept moving in the direction of the corporate model. One pace-setter was tycoon Henry Luce (to use an epithet that he brought into news vocabulary). Along with sidekick Briton Hadden, Luce invented the weekly news magazine in 1923, and TIME quickly caught on with American readers, making it the profitable cornerstone of the Time & Life empire. Time Inc. launched Fortune, Sports Illustrated, People, and dozens of other titles before merging with the movie and music giant Warner Communications Inc. Most recently, the company orphaned its original magazine businesses and sent them out to fend for themselves, while morphing the remaining film and television properties into a global entertainment conglomerate made up now mainly of “video content providers.”

Through the middle and later decades of the twentieth century, the corporate model eventually came calling even on the now long-established and no-longer-innovative newspaper industry. As newspapers folded and merged, a smaller number of papers remained standing as monopolies (or near-monopolies) in most of the big and medium-large cities of the United States. That meant that they could practically print money on their presses, since anyone who wanted to advertise (either display or classified) in their domain had to pay the newspaper for the privilege. Many of the monopoly papers were lucrative enough to become takeover targets for the emerging chains like Gannett and Knight-Ridder. As they sold out to the chains, those papers left the control of their long-standing family owners (the Chandlers, the Binghams, the Coxes, and the like) and became small parts in the portfolio of big, remote corporations with no civic or sentimental ties to the areas those papers served.

For a while, it all sort of worked. In the decades after World War II, the big media that arose in the new corporate order seemed to have it made. They were (mostly) earning buckets of money, which allowed them to pursue the professional goals so admired in the newsroom. Editors could tell the business side to buzz off. Editors could open new bureaus in Washington and overseas. A correspondent like Morley Safer could spend CBS’s money to shoot film of American soldiers burning Vietnamese villages. Publishers like Arthur Ochs Sulzberger (grandson of Adolph Ochs) at the Times and Katharine Graham at the Post could bet the house on bold reporting—such as the Pentagon Papers and Watergate—that directly challenged the power of government. It was an era of rising salaries, rising standards, and rising expectations. The journalism that was originally enshrined in the Constitution—small, local, independent, opinionated—had been changed beyond recognition.

Then it all went bust. It is tempting to say that the Internet was to blame for everything, and many people in journalism (especially those of a certain age) really do believe this. It’s easy to see what happened in journalism as an episode of “technological determinism”—that is, the new technology of the personal computer and the Internet combined to form a superhuman force that destroyed everything. But the real story is more complicated and gives a bigger role to the agency of the people (in and out of journalism) who made the decisions that brought about the big crack-up.

One issue that is often overlooked is the threat to journalism posed by corporate ownership itself. Take NBC News, for example. The news division was a small part of NBC, which was first and foremost an entertainment company. NBC was, in turn, a small part of its parent company, General Electric (GE), which was a globe-straddling conglomerate of industrial and financial interests. NBC News was a small tail on a mighty big dog. Managers at GE gave profit targets to all divisions with simple instructions: meet your numbers or face being spun off. But the pressure to make a profit was not the only problem in this regime. There were also inherent conflicts of interest that journalists could not escape. How could NBC News report on GE’s role as, say, a supplier of jet engines to the Pentagon? Or as a builder of nuclear power plants? Or, at ABC News, after The Walt Disney Company bought ABC, how could a film reviewer for ABC’s Good Morning America show critically evaluate a new film from Walt Disney Studios?

As more and more of these journalism operations got folded into bigger and bigger corporations, they lost something else—their ability to rock the boat. Large corporations, especially ones that sell products to the U.S. government or face regulation by the U.S. government or need favors from the U.S. government, are not in the habit of blowing the whistle on government waste. Large corporations do not have it in their DNA to pick fights with powerful institutions like the Catholic Church or the Democratic Party or the professional sports establishment. Yet, the dictates of journalism sometimes lead reporters to fight those fights. My point is that the news business had serious, systemic problems before anyone tried to read a newspaper on a computer. The golden era that is so often lamented turned out to be more of a gilded age. In any case, it can now be seen in the rear view mirror as a distinct historical period—one that is over.

In what could serve as an epitaph for that period, here is what journalist Steve Coll (now the dean of the journalism school at Columbia that Pulitzer endowed) said in 2009:

Uniquely in the history of journalism, the United States witnessed the rise of large, independently owned, constitutionally protected, civil service-imitating newsrooms, particularly after the 1960s. These newsrooms and the culture of independent-minded but professional reporting within them were in many respects an accident of history.

Bottom Lines

Starting in the mid-1990s, people with online access began discovering a part of the Internet known as the World Wide Web. It brought an apparently endless array of visual displays to your computer screen. As with the telegraph and the radio before it, this seemed like a cool invention that delighted hobbyists but did not come with operating instructions on how to make money with it. Most publishers disdained the Web at first, a costly human mistake rather than the product of technological determinism. Because they tried to stand still, publishers got run over. The mighty dual revenue stream that had paid for all the great journalism in print media suddenly dried up. Display advertising shrank, as more and more ads migrated to the Web. Classified advertising dried up almost overnight, thanks to Craigslist. On the circulation side, subscriptions and newsstand sales both evaporated as readers moved online and expected content to be free.

To make matters worse for the legacy media, the Web posed an existential threat. From the beginning, most newspapers were a grab-bag of various content. They covered politics and government, along with business and crime and sports and fashion and a growing array of features and departments. Early newspapers often included poetry and fiction, too. In every case, the newspaper presented itself to readers on a take-it-or-leave-it basis as a pre-determined bundle of material, ranging from important news to the comics. The Web un-bundled all that content and rearranged it. Online, people who really liked sports could find faster, deeper coverage of sports on a website than they could in their local print newspaper. People who really liked chess could find a higher level of engagement with chess online than in a newspaper’s chess column. And so it went for all the elements in the newspaper: there was a superior version online, usually for free, without having to wait for an inky stack of paper to arrive at your doorstep to tell you about things that happened yesterday. It was time to ask: if the newspaper didn’t exist, would it make any sense to invent it?

Now, all media are digital.

People who liked the Web and understood it moved rapidly into the digital space, and they are thriving. The founders of Huffington Post, Drudge Report, BuzzFeed, Vice, TMZ, Talking Points Memo, Politico, and many more are doing just fine, thank you. News ventures that were “born digital” are not carrying the big fixed costs of legacy media, so they are able to profit in the changed environment.

This is not the future; it’s the present. We are in a transitional period, and it is naturally messy. We are in a period of great contingency, with many unsolved problems—notably how to pay for ambitious, expensive, accountability journalism. On the other hand, journalists have better (and cheaper) tools than ever. The “barriers to entry” have fallen, and the field is open to new talent in a way not seen since the early nineteenth century. Journalists have a global reach that earlier generations only dreamed of. I don’t believe in historical golden eras, but there’s a definite shine on some of these new ventures.

There is a brisk trade in making confident assertions about the future of journalism. I will venture this tentative judgment: if you want to look into the near future, look at the powerful trends now at work. One snapshot of those trends appeared in the New York Times last October, in a story about the newspaper’s own recent financial performance. The Times is the most important institution in American journalism, so its future is a matter of no small concern. It turns out that the paper’s latest quarterly numbers were mixed. Overall, the paper lost $9 million, on revenues of $365 million. The main reason for the loss was the cost of buying out about 100 newsroom employees, who were being let go (out of more than 1,300), combined with the continued downward trend in print advertising, which dropped by another 5.3 percent. That is the kind of gloomy news we are used to hearing about the legacy media. But the report also pointed the way forward. During the same three-month period, the Times added 44,000 new digital subscribers, and the revenue from digital advertising rose by 16.5 percent. That sounds like a glass that’s half full (at least). The news business will survive. That’s the headline.

Christopher B. Daly, an associate professor in Boston University’s Department of Journalism, is the author of Covering America: A Narrative History of a Nation’s Journalism. He previously reported on New England for the Washington Post (1989 to 1997) and served as statehouse bureau chief for the Associated Press in Boston (1982–1989). On Twitter: @profdaly.

Watchdogs Unleashed

It was February 2009 when veteran investigative journalist Laura Frank lost her job with the closure of the Rocky Mountain News, a newspaper that had served Denver for 150 years. The paper had a stellar reputation, winning four Pulitzer Prizes in just the previous decade. Frank had received plaudits for a recent series, “Deadly Denial,” on how former nuclear industry workers were getting turned down for compensation for medical illnesses they contracted while building nuclear weapons. None of that mattered to the parent firm, E. W. Scripps Company. Citing $16 million in losses in the past year, it shut the paper down.

Economic decline triggered by the 2008 recession also led to the closure of the print edition of the Seattle Post-Intelligencer, and other papers entered bankruptcy proceedings. The financial crisis was accelerating the decline of a prime responsibility of the newspaper industry—investigative reporting. Already, and for some time, investigative reporters had experienced layoffs; some also resigned when their organizations closed investigative units and reassigned reporters to other beats. Newsrooms with reputations for award-winning investigations, in Philadelphia, San Jose, Miami, and Los Angeles as well as other metropolitan areas, saw staffs decimated. Consolidation and cutbacks in Washington news bureaus threatened watchdog reporting on the federal government; coverage of state houses across the country was sharply reduced.

A survey that I conducted in 2008 of twenty large- to medium-size newspapers, as part of my research on the state of investigative journalism, found that half had eliminated or reduced their staffs. Recent surveys by the American Society of News Editors and the Pew Research Center estimated that about 20,000 daily newspaper editorial jobs have been lost—a 36 percent decline since the peak of 56,900 newsroom jobs in 1989. Membership in Investigative Reporters and Editors (IRE), a professional association, dropped from nearly 5,400 in 2003 to 3,700 in 2009.

At the same time, Newsweek and TIME, venerable magazines known for their investigative work, had begun their slide in circulation and reduced their investment in investigative journalism. The situation was not any better in broadcast, where many investigative teams were cut and entertainment was substituted for news, both nationally and locally. NBC’s Dateline became diluted and CBS’s 60 Minutes delivered fewer hard-hitting investigations.

After the closure of the Rocky Mountain News, Frank did some investigative reporting on the newspaper industry itself for the Exposé series produced by the Public Broadcasting Service (PBS). “The Withering Watchdog” found that many newspapers were still actually profitable but were cutting staff and failing to reinvest in operations and training because of pressure from Wall Street to retain high profits.

Nonprofits and Networks

Some investigative journalists took up jobs as investigators for other private and public institutions outside the journalism field. Others like Laura Frank, however, decided to create their own newsrooms, this time as nonprofit businesses that would rely on donations, payments for content from mainstream media, and fundraising events.

One celebrated example is ProPublica, a nonprofit newsroom established in 2007 through a three-year, $10 million annual grant from the Sandler banking family. ProPublica’s editorial team is led by the former editor of the Wall Street Journal and a former New York Times investigative editor. In 2010, it became the first online news source to win a Pulitzer Prize, in the category of investigative reporting for an article on a hospital’s operations during Hurricane Katrina. Besides appearing on ProPublica’s own website, the piece was published by the New York Times Magazine—one of the numerous media organizations with which ProPublica has established collaborations. In addition to the funding from the Sandlers, it also receives grants from major foundations that promote civic responsibility.

Leading the way well before ProPublica entered the scene were two established nonprofit investigative newsrooms, the Center for Investigative Reporting in San Francisco, founded in 1977, and the Center for Public Integrity in Washington, which began work in 1989. They were launched and are sustained primarily with foundation and individual donors, although they have earned income from deals with commercial and nonprofit broadcasters. Both have produced investigations that received wide distribution through those partnerships. Journalists such as Frank hoped they could replicate this successful model at the regional and local level.

Frank formed her own nonprofit company, the Rocky Mountain Investigative News Network. She raised money from the John S. and James L. Knight Foundation, the largest journalism foundation in the United States, and from the smaller Ethics and Excellence in Journalism Foundation in Oklahoma City. Both foundations would play key roles in assisting many new investigative reporting start-ups. With those funds, Frank hired two of her former colleagues while arranging free office space at the nonprofit Rocky Mountain PBS station in exchange for giving the station the results of their investigative work.

In Seattle, journalists from the Post-Intelligencer started Investigate West and also secured offices in the local PBS station. In San Diego, the former editor and metropolitan editor of the San Diego Union Tribune launched The Watchdog Institute, now an online media outlet called inewsource. It found offices in the local PBS station and at San Diego State University.

In Wisconsin, Illinois, and Massachusetts, journalists began investigative newsrooms at universities, either as a part of journalism departments or under memorandums of understanding. The newsrooms received free space and access to student reporters, while the universities got newsrooms where students could receive professional training and publish their work. Overall, the start-up funding varied from a few thousand dollars to millions a year.

In July 2009, journalists from the Center for Public Integrity and Center for Investigative Reporting plus some of the newer start-ups met at the Pocantico Center near New York City. They agreed on a declaration of intent to create the Investigative News Network (INN), a North American organization to share business functions, attract funding, and encourage collaborations. Some one hundred organizations are now members of INN. Most produce investigative stories that are shared with hundreds of news organizations. Many of the stories arise out of collaborations between INN members or INN members and mainstream media.

Because of the collaborations and distribution, the investigative projects can achieve considerable reach. A 2010 project on sexual assault on college campuses was guided by the Center for Public Integrity and included National Public Radio and five of the smaller organizations. The series was carried by forty-nine newspapers and magazines, fifty-six nonprofit and commercial broadcast outlets, seventy-seven online newsrooms, sixty student newspapers and college-related outlets, and forty-two non-government organizations. The spotlight on the issue eventually resulted in congressional legislation, and spawned dozens of follow-up stories.

By 2011, the nonprofits were routinely providing investigative stories and other content not only on their websites, but also to hundreds of other outlets, including newspapers, TV stations, radio stations, and other online news sites. And new nonprofits continued to spring up. The Pew Research Center reported in 2013 that at least 172 nonprofits had been created since 1987.

One of the latest to launch is the Marshall Project, a newsroom devoted to investigating justice issues and financed by a hedge fund owner and philanthropist, with Bill Keller, a former New York Times executive editor, serving as editor in chief.

Since 2009, many for-profit newsroom editors—some possibly spurred by the determined work of the nonprofit newsrooms—have begun talking again about investigative reporting being their franchise. Rather than cutting back on their teams, they expanded and focused on that work. The Dallas Morning News, the Minneapolis Star Tribune, and the Milwaukee Journal Sentinel were among metropolitan newspapers that maintained or increased the number of their investigative reporters.

Magazines such as the Atlantic and the New Yorker are publishing cutting-edge investigative pieces while managing to maintain circulation. Online magazines have also entered the business: BuzzFeed has created an investigative desk and Quartz is publishing investigations.

In television and video, Vice News started in December 2013 with provocative short video pieces and documentaries. Mother Jones, which broke one of the major stories of the 2012 U.S. presidential election—a video that caught Republican candidate Mitt Romney expressing disdain for Democrats as voters who pay no taxes and think government must take care of them—may provide a model for the future of investigative reporting. It is a nonprofit publication still in business after nearly forty years. It survives by receiving donations from individuals, foundation grants for special projects, and from selling advertising in its magazine.

Investigating the World

During the first decade of the twenty-first century another transformation began in Europe and spread to other continents. In 2001, American and Danish journalists held the first ever Global Investigative Journalism Conference, in Copenhagen (I was one of the two creators of the conference). The conference, modeled on the IRE annual conference, brought journalists together for four days to share practical methods and tips on investigative and computer-assisted reporting.

Exceeding organizers’ expectations, more than 400 journalists from forty countries attended. One reason was that international journalists were seeking ideas on how to create independent investigative newsrooms; with limited resources or corrupt owners at home, they had few opportunities to do investigations.

At the next global conference in 2003, the concept of a network of nonprofit newsrooms that would share information and collaborate on cross-border stories had emerged, and the Global Investigative Journalism Network was created. The network has held conferences every two years. It now numbers ninety member organizations—some of which also belong to the Investigative News Network—and held the first Asian investigative reporting conference last November. The global network has spawned numerous collaborations, especially investigations into corruption in Eastern Europe. The Organized Crime and Corruption Reporting Project in Serbia has overseen cross-border investigations with other groups into human trafficking, money laundering, and the drug trade. Connectas in Latin America is another collaboration of nonprofit newsrooms that is tracking corruption across borders.

Both in the United States and internationally, a parallel movement in the use of data for investigations is spurring the creation of small online newsrooms and investigations. The movement began in earnest in the 1980s in the United States and reached wide acceptance there by the late 1990s. Known as computer-assisted reporting and, later, data-driven journalism, the methodology produced more credible stories that simply could not have been done before, because of the enormous number of records that could be examined.

It also meant that a small team, using data and working with others, could have much greater impact. The use of data analysis in journalism has also drawn a new generation of reporters from the computer sciences, and it has greatly aided cross-border investigations, because data and data analysis can be easily shared. Traditional journalists are still slow to adopt digital techniques, but much progress has been made; more than a thousand journalists attended an annual computer-assisted reporting conference in the United States last year, nearly double the highest previous number. A decade of training of European journalists by data journalism specialists from the United States has borne fruit, with papers like the Guardian producing exemplary investigative work that routinely includes data analysis and data visualization. An early high point was the Guardian’s reporting and analysis of race riots in cities in the United Kingdom. The Guardian not only did traditional reporting but used social science methods to examine rioters’ use of social media and their economic and ethnic status.

The WikiLeaks organization, meanwhile, capitalized on the “data dump” to obtain U.S. government secrets—in the form of classified military and diplomatic reports and cables—and disseminate them via its own website and partner news organizations around the world. Initially, WikiLeaks believed the general public would review its data, discover wrongdoing, and report on it. But it quickly became apparent that linking up with journalists practiced in the ways of investigative reporting—interviewing, on-the-ground reporting, fact-checking, etc.—was critical to producing credible stories.

A second data dump of documents on off-shore companies that hide money and avoid taxes was obtained by the International Consortium of Investigative Journalists, a part of the Center for Public Integrity. Known as “Offshore Secrets,” the project used the leak of millions of confidential bank records to write stories that involved reporters in fifty-eight countries.

Another well-known data dump came from National Security Agency (NSA) employee Edward Snowden, whose leaks of NSA documents indicated widespread illegal domestic spying. He went to documentary filmmaker Laura Poitras and Guardian columnist Glenn Greenwald as well as Washington Post reporter Barton Gellman. The work of Poitras and Greenwald attracted the notice of eBay founder Pierre Omidyar, whose interest in public accountability had already prompted him to launch a digital media outlet in Hawaii. Omidyar decided to finance the creation of an international online news organization, First Look Media. In its initial months of operation, First Look has focused on abuses by intelligence and national security agencies, invasion of digital privacy by governments, and surveillance of the Muslim community in the United States. It also has produced social justice stories—such as analysis of the shooting of black youth Michael Brown in Ferguson, Missouri, and the racism in that city—and criticism of mainstream media coverage of national security issues.

Paying for Public Accountability

Despite the rise in nonprofit journalism and the resurgence of investigative reporting in some mainstream newsrooms, the conundrum of paying for it remains. In his 2004 book, All the News That’s Fit to Sell: How the Market Transforms Information into News, economist James Hamilton pointed out that the public at large has never been willing to directly pay for public interest journalism. Instead, advertising generally paid for it over the years. Given that losses in advertising range into the billions of dollars—down 49 percent at American newspapers from 2003 to 2013—and donations to nonprofits remain in the hundreds of millions, the challenges are large.

Currently, most funding for nonprofit news in the United States comes from foundations and private donors; internationally it comes from foundations and governments, particularly from Scandinavian countries and the U.S. Agency for International Development. Successes with earned income from training, events, and syndication are spotty, although the Texas Tribune has been a leader in creating new revenue streams through sponsored public engagement events, individual donations, and data sales. As a result, in just a few years the Tribune has come to rely on foundations for only a third of its funding. But the Pew Research Center continues to report a heavy reliance by nonprofits on foundations, which appear to be shifting their focus toward how to improve democracy in the United States. The foundations also have pushed the nonprofits to become more independent, more business-oriented, and less reliant on their continued funding. Internationally, funding remains a deep problem, although the Open Society Foundations, the Konrad Adenauer Stiftung, and the Adessium Foundation are still strong supporters of investigative centers and conferences.

As Drew Sullivan of the Organized Crime and Corruption Reporting Project wrote in a paper in 2013, “Investigative Reporting in Emerging Democracies: Models, Challenges, and Lessons Learned,” the obstacles to investigative reporting include not only safety and professionalism issues, but also poor financial support and a culture gap between journalists and funders. David Kaplan, the executive director of the Global Investigative Journalism Network, has written extensively about the lack of sufficient funding for investigative reporting throughout the world. In a recent global network project supported by Google Ideas, “Investigative Impact: The Case for Global Muckraking,” he cites ten case studies, several done by nonprofits. In one of them, “YanukovychLeaks,” reporters used divers to retrieve documents thrown into a lake next to the Ukraine presidential palace; the dried documents provided a look into billions of dollars of looted wealth.

He also cited investigations into the killing of disabled children in Ghana, the corruption of a Philippine president, how 70 percent of Pakistan’s parliamentary members do not pay taxes, and hundreds of needless neonatal deaths at a city hospital in South Africa.

But, as Kaplan noted in a previous 2012 report, “despite its frontline role in fostering accountability, battling corruption, and raising media standards, investigative reporting receives relatively little support—about 2 percent of global media development funding by major donors.” He also has found, as has the Pew Research Center, that “few nonprofit investigative journalism organizations, particularly reporting centers, have adequate sustainability plans. To survive in a competitive and poorly funded environment, many will need to diversify and become more entrepreneurial, drawing revenue from various sources and activities.”

The work of Laura Frank, meanwhile, reflects the evolution of nonprofit newsrooms. In 2013, she merged her I-News group with Rocky Mountain PBS and a public radio station, and entered into collaborative agreements with several other radio newsrooms and a commercial TV station. Within a year, she became president and general manager of news for the PBS station. With additional funding from the station and the Corporation for Public Broadcasting, she expanded the editing and reporting staff and produced such stories as “Losing Ground,” which examined the deep disparity in economic and living conditions between Hispanics and whites in Colorado.

By partnering with PBS, Frank won a major increase in her audience—the station’s 65,000 contributing members. She also acquired an experienced fundraising team. In an interview in 2013 with the American Journalism Review, Frank said: “The greatest value is being able to sustain in-depth public service journalism, because we have the infrastructure to do that. Investigative reporting is expensive and risky. Being able to merge with an organization that has the infrastructure in place, and you bring in the journalism and an injection of energy, and it’s sort of a perfect marriage.”


Brant Houston is the John S. and James L. Knight Foundation Chair in Investigative and Enterprise Reporting at the University of Illinois at Urbana-Champaign. He oversees a community online news and information project. From 1997 to 2007, he served as executive director of Investigative Reporters and Editors, Inc. He is the author of Computer-Assisted Reporting: A Practical Guide. On Twitter: @branthouston.

Dangerous Occupation

Twenty years ago, most people got their international news from relatively well-established foreign correspondents working for agencies, broadcast outlets, and newspapers. Today, of course, the process of both gathering and disseminating news is more diffuse. This new system has some widely recognized advantages. It democratizes the information-gathering process, allowing participation by more people from different backgrounds and perspectives. It opens the media not only to “citizen journalists” but also to advocacy and civil society organizations including human rights groups that increasingly provide firsthand reporting in war-ravaged societies. New information technologies allow those involved in collecting news to communicate directly with those accessing the information. The sheer volume of people participating in this process challenges authoritarian models of censorship based on hierarchies of control.

But there are also considerable weaknesses. Freelancers, bloggers, and citizen journalists who work with few resources and little or no institutional support are more vulnerable to government repression. Emerging technologies cut both ways, and autocratic governments are developing new systems to monitor and control online speech that are both effective and hard to detect. The direct links created between content producers and consumers make it possible for violent groups to bypass the traditional media and reach the public via chat rooms and websites. Journalists have become less essential and therefore more vulnerable as a result.

Many predicted that the quantity, quality, and fluidity of information would inherently increase as time went on and technology improved, but this has not necessarily been the case. While mass censorship has become more difficult, new and highly effective models of repression have emerged in response to the rapid changes in the way news and information is gathered and delivered. Statistics indicate that even as information technologies have proliferated, the situation for journalists on the ground has gotten worse, not better. The number of journalists killed and imprisoned around the world has reached record levels in recent years and, according to several studies, press freedom is in decline. At the beginning of December 2012, there were 232 journalists in jail according to Committee to Protect Journalists (CPJ) research, the highest tally ever recorded. While historically repressive countries like Iran and China contributed to the upsurge in imprisonment, the world’s leading jailer of journalists was Turkey, a country with a relatively open media and aspirations to join the European Union. Most of those jailed were being held on anti-state charges, and over half of all journalists in jail around the world worked online, including a majority of those imprisoned in China.

In 2012, seventy-four journalists were killed while carrying out their work; this is close to the record highs recorded at the peak of the Iraq war. The Syrian conflict has proved devastating for the press, with thirty-five journalists killed in a single year. The tally once again reinforced the hybrid nature of frontline newsgathering, with high casualties among both established international media organizations and citizen journalists. Two renowned war correspondents, Marie Colvin of the Sunday Times and the French photographer Rémi Ochlik, were killed when their improvised media center in Homs was targeted by Syrian forces in February 2012. In 2014, two American journalists were beheaded by an Islamic extremist faction in Syria. Meanwhile, at least thirteen citizen journalists who reported on the conflict from an activist perspective and provided devastating video images of the carnage and savagery of war also perished, some at the hands of Syrian government snipers.

The leading historical press freedom index compiled by Freedom House, based in Washington, DC, shows that global press freedom has waned in recent years. “After two decades of progress, press freedom is now in decline in almost every part of the world,” Freedom House noted in its 2011 Freedom of the Press Index, which tracks the state of media freedom in over 190 countries and has been published since 1980. “Only 15 percent of the world’s citizens live in countries that enjoy a free press.”


Covering Mexico in the 1990s

How did we get here? Why has it become more dangerous for journalists and other information providers even as technology has made it easier to communicate and access information across borders? The best place to start is to look at the way international correspondents operated twenty years ago, at the dawn of the Internet revolution. While each country and each situation is different, my own experience as a freelance correspondent covering Mexico in the 1990s gives some insight into how the process worked.

Mexico City was a sought-after posting for international reporters, and there were dozens of correspondents representing everyone from the BBC to the American broadcast networks; wire services ranging from the Chinese Xinhua to the Italian ANSA; national dailies like the Guardian and El País; and local and regional newspapers like the Baltimore Sun and the Sacramento Bee. We reported the news by interviewing government officials, analysts, and people in the street; we traveled from dusty small towns to the urban slums; we covered the collapse of the Mexican peso and the Zapatista uprising. Nearly all of our reporting was done face to face. Phone interviews were unusual. Most sources did not trust the phones and were reluctant to discuss matters that were remotely sensitive. We sometimes still filed our stories by dictation. We also used fax machines, later modems, and finally e-mail, which was often balky and erratic.

An important part of our job was to read the Mexican press carefully each day (we would watch TV as well, but the national broadcasters at the time were largely mouthpieces for the government). We relied on the Mexican media to track national developments, spot stories and trends, and compare perspectives. Occasionally, an international correspondent would break a major story not covered in the domestic media, but much of our daily coverage was derived from Mexican media reports. When we traveled to a provincial city, we would often seek out a journalist from the local newspaper and ask for a briefing and introductions to officials. Usually such arrangements were informal, but sometimes we hired local reporters as stringers and essentially paid them to be our guides. Some of the reporters who helped us were threatened as a result.

For the most part, the dozens of international journalists based in Mexico City covered the same stories in largely the same way. Foreign editors often asked their correspondents to “match” what the wires were doing or follow up on a particularly compelling story published in a rival publication. Some reporters were more connected and more entrepreneurial than others. Some were better interviewers or more stylish writers. But in the end most of us made a living not necessarily by differentiating our coverage but by tailoring it to local markets. As a freelancer, I always sought new angles and new perspectives, particularly the perspectives of average people, including slum dwellers, small farmers, factory workers, and activists. That kind of reporting took time, which was my competitive advantage since I did not have daily filing deadlines. Editors could read the wires, but in the pre-Internet era I had no direct access to the work of my competitors. I learned what my colleagues had been up to at the Friday “cantina night,” at which several dozen foreign correspondents would regularly gather. If I wanted to read their stories, I had to go to a local coffee shop called Sanborns that stocked international magazines and newspapers. I spent hours doing this each weekend despite the fact that everything was between a few days and a few weeks out of date.

Like a lot of businesses, freelance journalism was “disrupted” by the Internet. By the end of the decade, the San Francisco Chronicle and the Fort Worth Star-Telegram were no longer in separate markets. When it came to national and international news they were competing not only with each other but also with the New York Times, the Wall Street Journal, the BBC, and any other news website accessible over the Internet. This undermined the value not only of freelancers but also of the full-time correspondents employed by second-tier U.S. newspapers like the Boston Globe and Newsday. Some of the journalists who worked for these publications were extraordinarily talented, but their positions were expendable once readers were able to use the Internet to obtain easy access to national and international news organizations with more correspondents and more resources. This made closing the bureaus an easy call, particularly as the same technological forces that suddenly forced regional newspapers to compete against one another also ravaged their economic model, with advertising migrating online and circulation plummeting.

The role that international correspondents in Mexico City played as a conduit for the Mexican media also became less essential as interested readers gained the ability to access the websites of local newspapers like La Jornada or Reforma directly through the Internet. Of course, this was before Google Translate, so Spanish-language skills were required.

The trend of closing and consolidating bureaus in Mexico City accelerated after the 9/11 terror attacks, when news organizations retooled to cover the wars in Afghanistan and Iraq. Today Mexico remains a vital global story, but there are far fewer international correspondents operating in the country. Those left continue to play a vital role by covering sensitive stories that might be too dangerous for local journalists, particularly on drug trafficking. They are also generally able to report with greater independence and to provide specialized context that makes their stories more appealing and accessible to their readers, viewers, and listeners.

While the decline in foreign correspondents in Mexico and in so many other countries around the world is in many ways lamentable, it also must be acknowledged that the Internet exposed some of the inefficiencies of the old structure, in which dozens of reporters filing for different media outlets essentially produced the same story. Certainly, informed and committed observers of Mexico can go online and find infinitely more information than I could access as a correspondent covering the country in the 1990s. Does that mean people are better informed? It depends. If information were apples, then the role of international journalists back in the 1990s was to select the best fruit and “export” them to the international market. Now, using the Internet, international news consumers can buy wholesale. They have access to more information, but they also must sort through it on their own, deciding what is most important and most useful. What has changed is the marketing and distribution systems. But much of the information about Mexico that the world needs is still produced by local journalists on the ground. Their role in the new international media ecosystem is therefore even more crucial.

Keeping It Local in Iraq

There was another reason that local journalists took on a more direct role in informing the global public, and that was safety. In Iraq, the risk of specific targeted attacks on international journalists became so great that at the height of the violence many well-staffed international bureaus in Baghdad were forced to rely on Iraqi journalists to carry out nearly all street reporting. These reporters became the eyes and ears of the world; they also paid a terrible price in blood.

Initially, the U.S.-led invasion of Iraq in 2003 was covered almost exclusively by international media. The Iraqi media under Saddam Hussein was one of the most censored and controlled in the world, and there was virtually no independent information inside the country. Seeking to facilitate coverage of the invasion from the perspective of the allied forces, the U.S. military established a program to “embed” journalists with the invading militaries. The idea was to give reporters frontline access to the combat operations but also to manage coverage of the war. In addition to the thousands of embedded journalists, several hundred “unilateral” reporters converged independently in Baghdad. They clustered in high-end hotels, like the Hamra and the Palestine. The Iraqi government tolerated them because it wanted journalists in Baghdad to help get its message of resistance and defiance out to the world. It also wanted Western journalists on the scene to document the collateral damage from U.S. bombs. International journalists were accompanied everywhere by Iraqi government minders.

As the Saddam Hussein regime disintegrated, the system of minders and controls began to break down, and the journalists began to slip away. The two sets of reporters—the embeds who accompanied the U.S. military and the unilaterals who covered the war from Baghdad—converged on Firdos Square, just outside the Palestine Hotel, as the U.S. Marines rode triumphantly into Baghdad. But the concentration of international media did not necessarily produce an accurate portrait of events. The actions of the Marines and journalists on the ground in Firdos Square, amplified by editors in newsrooms in the United States, turned what was a relatively minor and ambiguous moment in the conflict, the toppling of the statue of Saddam Hussein, into a triumphalist image, according to an account by the journalist Peter Maass published in the New Yorker.

In the immediate aftermath of Saddam’s fall, international news organizations moved to a system of bureaus, and journalists began to operate freely. During this brief period, they moved about the entire country, delving into areas of Iraqi life previously unexplored. “We entered Iraq during a fleeting golden age of modern wartime journalism,” recalled the Washington Post reporter Rajiv Chandrasekaran, who established the newspaper’s bureau in Baghdad in April 2003. “In the early weeks and months there were minimal security threats. You could put fuel in the tank, find a willing driver, and go anywhere. The biggest risk was getting into a car accident or being invited into someone’s home and served tea, since drinking water was pumped directly from the Tigris. Journalists were seen as neutral, sympathetic figures and people were anxious to engage with us.”

But there were ominous signs. On July 5, 2003, an Iraqi gunman approached the 24-year-old British freelance cameraman Richard Wild and shot him in the head as he was reporting outside Baghdad’s natural history museum. The circumstances remain murky—it was unclear whether Wild was carrying a camera or whether he was working on a story about looting—and journalists in Baghdad paid the incident little heed, seeing it as just a random act.

But as the conflict intensified, so did violent attacks on the media. As Internet access expanded rapidly in Iraq, insurgent groups developed their own online information networks, relying on websites and local and regional news outlets to communicate externally and on chat rooms to engage with their supporters. These groups had little interest in influencing international public opinion but had a strong interest in disrupting the emerging civil society and consolidation of the American-backed government. Attacking journalists was an effective method for achieving this goal.

As journalists were kidnapped and in some cases executed, bureaus added fortifications, and journalists were forced to move around in armored vehicles accompanied in many cases by armed guards. “In mid-2004, as Al-Qaeda elements began taking over leadership of the insurgency from the nationalist good old boys of the Baath party, things changed dramatically,” Chandrasekaran recalled. “Journalists were no longer seen as neutral actors and people to talk to. Instead, they were people to apprehend and kidnap. The tenor of the interviews changed among those Iraqis not supportive of the American presence. People were less interested in talking, and harder to get a hold of. The interviews were far more uneasy. Journalists stopped talking to those people.”

Because Western reporters could only move around in carefully planned operations, Iraqi reporters—many of whom were initially hired as translators and fixers—took over many of the frontline reporting activities. Unlike Westerners, they could disappear into the crowd, particularly at the scene of suicide attacks. When they went home, Iraqi journalists did their best to keep their profession a secret. But there were casualties. On September 19, 2005, the New York Times reporter Fakher Haider was seized from his home in the southern Iraqi city of Basra by men claiming to be police officers. His body, with a gunshot wound to the head, was recovered the next day. On December 12, 2006, the Associated Press cameraman Aswan Ahmed Lutfallah was shot dead by an insurgent who spotted him filming clashes in Mosul.

The increased dependence by international media organizations on local Iraqis to carry out frontline reporting created challenges with the U.S. military, which was often suspicious of Iraqis carrying cameras since the insurgents often sought to document their attacks. Accredited Iraqi reporters were regularly detained and accused of having ties to terrorists. Some were held for extended periods.

The violent censorship of the media carried out primarily by insurgent groups and sectarian militias had a devastating effect on the quality and quantity of information coming out of Iraq. Reporting on the nature and structure of militant and sectarian groups, their relationship to government security forces, the role of Iran in the insurgency, and the scope of government corruption were all vital—and underreported—stories. Despite the vast resources invested in covering Iraq, the strategy of imposing censorship through violence was extremely effective. More than 150 journalists and fifty-four media support workers were killed during the Iraq war, the highest number ever documented in a single conflict. Eighty-five percent of them were Iraqis.


The Pearl Killing

Between 2002 and 2012, 506 journalists were killed, according to CPJ data, compared to 390 in the previous decade. One factor contributing to the increase was that during this period journalists became regular victims of terrorist violence, including murders and kidnappings.

The new wave of terror attacks on the media began with the murder of the Wall Street Journal correspondent Daniel Pearl, killed in Pakistan in February 2002. The decision by Al-Qaeda operative Khalid Sheikh Mohammed to murder Pearl was apparently based on several factors, none of which appear to have been considered carefully. First, Pearl’s murder was a way of demonstrating ruthlessness and resolve in the face of the U.S.-led invasion of Afghanistan that had toppled the Taliban government, devastated Al-Qaeda’s terrorist infrastructure, and taken a considerable toll on the organization’s leadership. Mohammed later told interrogators at Guantánamo, where he is now being held after being captured in Pakistan on March 1, 2003, that Pearl’s Jewishness was “convenient” but was not the motive for his abduction or murder.

Based on the gruesome video made of Pearl’s killing—which was actually a reenactment because the video camera failed on the first take—Mohammed clearly saw the murder as a recruiting tool that he hoped would inspire Al-Qaeda followers. It certainly sent a message of contempt for Western public opinion and to the journalists who helped shape it.

For Al-Qaeda’s followers Pearl’s murder sent a message: international journalists were legitimate targets of terror operations, and the goal should be to maximize media attention of such abductions. The fact that the Pearl murder may have stemmed from the impulsive actions of Khalid Sheikh Mohammed rather than official Al-Qaeda policy no longer mattered. Targeted attacks on the media carried out by militants either linked to or inspired by Al-Qaeda multiplied, and this in turn had a profound effect on the ability of journalists to report the news from parts of the world where the group maintains a presence. In many instances, the implied sanction conferred by Pearl’s murder provided a modicum of a religious justification for what was essentially a criminal enterprise, kidnapping for ransom.

Pearl’s killing also sent a clear message that in the Internet era there were other ways to communicate and that traditional journalists were dispensable, useful primarily as hostages and props in elaborately staged videos designed to convey a message of terror to the world.

A decade later, such dangers to journalists were dramatically evident in Syria. In 2013 and 2014 as the hardline Islamist factions of the Syrian rebels gained the upper hand, journalists became specific targets. During this period numerous journalists were among the Westerners and Arabs kidnapped by the group known as the Islamic State in Iraq and Syria. Two American freelancers were beheaded in murders videotaped and posted on the Internet: James Foley, who worked for GlobalPost and Agence France-Presse, and Steven Sotloff, who wrote for various publications including TIME magazine.


From Curated Search to Social Media

When I was a freelancer covering Mexico and Latin America, international news was “curated.” This meant that if you were a news consumer outside the country, you had to rely on the judgment of the journalists on the ground and the editors who selected what stories to publish to stay informed. By the time the second Iraq war rolled around, people got their news in a different way: search. Yes, they might read their local paper or watch the news from the BBC or CNN, but if they got interested in a particular story, they could use Google to dig deeper. A few keystrokes would direct them to the most detailed coverage of a specific incident or to an investigation or analyses that shed light on a certain aspect of the story.

By the time the Arab revolts broke out in December 2010 a new method of following international news had emerged: social media. Again, many people around the world watched the events unfold on television or read about them in their local newspapers. And as during the Iraq war, they used search engines to dig deeper. But social media—notably Twitter and Facebook—became the most efficient way to follow developments minute by minute and gain access not only to the perspectives of journalists but also eyewitnesses and informed observers. I followed developments in Egypt while sitting in my office in New York. I streamed Al-Jazeera on my desktop while simultaneously monitoring Twitter. I followed the feeds of journalists that I knew were present in Tahrir Square. Their tweets led to bloggers and activists who were also on the scene. I was generally aware of every major development within minutes. Even though news websites were updated perhaps hourly, I turned to them the way people used to read weeklies like TIME or Newsweek—not for breaking news but for context and analysis.

The evolution from curated to search to social media is tracked by Ethan Zuckerman in his 2013 book Rewire: Digital Cosmopolitans in the Age of Connection. One of the great benefits of using social media to share news is that it provides a forum for local journalists to bring specific stories and issues to the attention of a global audience. Plus, the same channels used to spread the news can also be used to defend the rights of the reporter who is subsequently threatened or put under pressure. Take the case of the Liberian journalist Mae Azango. In March 2012, Azango published a story entitled “Growing Pains: Sande Tradition of Genital Cutting Threatens Liberian Women’s Health” in Front Page Africa, a Liberian newspaper that also serves the large community of exiles through its active website. The story blew the lid off a taboo subject in Liberia, female genital cutting. The practice is carried out by the secret Sande society, which is also a powerful political force in Liberia, particularly at election time. This is why the country’s Nobel Prize–winning president Ellen Johnson Sirleaf was reluctant to stand up to the Sande and tackle the issue of cutting, which is practiced on an estimated 60 percent of Liberian girls. In her report, Azango interviewed a woman who described being held down by five women while her clitoris was cut out with an unsterilized knife. The account was brutal and shocking. Azango chronicled the health risks and social consequences of the practice, interviewing medical professionals. The story sparked a fierce debate in Liberia and in the exile community. But it also put Azango at risk. Death threats poured in, forcing Azango and her daughter into hiding.

Neither Azango’s original report nor the threats against her were covered by traditional media. What gave the story life were social media networks that helped spread the word. Once the threats emerged, this same network was mobilized. Eventually, the New York Times columnist Nicholas Kristof took up Azango’s case and used his Twitter feed, which has over one million followers, to draw international attention that led to action. President Sirleaf, who had responded with seeming indifference, eventually agreed to provide physical protection for Azango. As a result of the international attention, she also took steps to challenge the Sande, and her government imposed a temporary ban on genital cutting while it studied the issue.

While social media has had a profound effect on news consumers and journalists, it has also changed the way governments manage information. Activists celebrated when the effort of Egypt’s President Hosni Mubarak to control information during the Tahrir Square revolt failed, and he was eventually toppled. But governments around the world learned a different lesson from those events. Recognizing the threat posed by new information networks, they began cracking down on online speech. While some countries seek to hide their repressive policies beyond a façade of democracy, China is quite public about its efforts and takes pride in its ability to monitor and censor information online—and no wonder. With more than 564 million Internet users at the end of 2012, China has more people online than any other country. Many are highly active on social media.

The Chinese government has developed strategies combining traditional forms of repression with high-tech techniques, like using software to filter out prohibited content. Newsgatherers in China face myriad restrictions. Foreign reporters are often unable to obtain visas to enter the country, and those based in China are sometimes blocked from traveling to certain areas, notably Tibet. Photographers often find their work obstructed by security forces, particularly when documenting demonstrations. Chinese employees of international newsgathering operations face constant monitoring and government pressure. International human rights organizations are for the most part unable to operate inside mainland China. Meanwhile, Chinese journalists working for the domestic media must follow regular government directives on what to cover and not cover. Failure to follow these guidelines is likely to result in dismissal or worse.

While dissident journalists in China face the possibility of arrest and imprisonment, CPJ research suggests that the number of journalists imprisoned has not increased in recent years and stood at thirty-two at the end of 2012. Given the size of China’s population—and the size of its press corps—imprisonment is clearly not the preferred strategy. In fact, most of those imprisoned are not traditional journalists but online dissidents and activists who straddle the line between journalism and activism.

The real threat as China sees it is the way in which the growing number of people who use social media share information and links. The program of domestic monitoring—using electronic surveillance and tens of thousands of paid government supporters who patrol chat rooms and read posts—has been expanded. Filtering has become more pervasive and effective. But China’s ultimate goal is to transform the current structure of the Internet, converting it from a decentralized global system to one in which national governments exercise effective control. If China succeeds in its efforts, the Internet as we know it would come to an end.

Quality Control

The ability of governments to manage, control, and manipulate information undermines the creation of a global civic culture and shields powerful institutions from public accountability. When Pakistan’s government suppresses coverage of its military and intelligence operations, when China censors reports about food safety, and when Syria completely blocks access to international reporters, they are not only censoring within their own national borders. They are censoring news and information critical to people in many parts of the world. Without adequate information, global citizens are essentially disempowered. While the right of people everywhere to “seek, receive and impart information and ideas through any media and regardless of frontiers” is enshrined in Article 19 of the Universal Declaration of Human Rights and other international legal instruments, the reality is that there are few effective means to fight back against censorship on an international level.

While the lack of institutional and legal protections is troubling, it is hardly surprising that governments would seek to censor and control information. But what happens if the media itself is doing the censoring? This is unquestionably a significant issue. In many countries around the world, the media is not independent. It is partisan, biased, corrupt, and irresponsible. It is beholden to powerful corporate interests, in some cases governments, in other cases opposition forces. While the quality of information clearly matters, the imperative in the current environment is to ensure that information of all kinds continues to flow within national boundaries and across borders. Governments, militants, and other enemies of freedom of expression cannot be allowed to restrict the flow of international news.

Poor media performance, while lamentable, is not a violation of international law. Article 19 of the Universal Declaration of Human Rights guarantees the right of all people to express their ideas and prevents governments from prohibiting their expression. It does not guarantee that the ideas ultimately expressed will be thoughtful, considered, or responsible.

Clearly, journalists and media organizations should produce ethical, responsible journalism that serves the public interest. This is always the goal. But based on my experience as the executive director of the Committee to Protect Journalists, I am reluctant to combine the defense of freedom of expression with a discussion of strategies for improving the quality of information. This is because too often I have seen governments justify restrictions on freedom of expression or the press by arguing that certain kinds of information are harmful or destabilizing or that the media is biased, irresponsible, or beholden to “foreign” interests. I remember a debate with the Venezuelan ambassador to the United States, who argued that his government should use its authority to ensure that all news presented to the public was “truthful.” One can acknowledge that the performance of the Venezuelan media has at times been woeful while still recognizing that this is a terrible idea.

I can remember another meeting I had with the interior minister of the Gambia, a tiny sliver of a country in West Africa where journalists have faced persecution and restrictions. He argued that government intervention was necessary because the Gambian media was reckless and irresponsible. My counterargument is that the media is generally no more biased, underdeveloped, or polarized than the rest of the society and that expecting the media to rise above all other institutions is unrealistic and unfair. Journalists everywhere too often fail in their responsibility to inform the public, to hold governments to account, and always to seek the truth. But such failures should never be used to justify legal action, control, or censorship.

There are of course legitimate limits on freedom of expression in an international and domestic context. Incitement to violence is never protected, there must be legal redress available for libel and slander, and governments may take certain legally prescribed measures to limit speech to safeguard national security. There are also valid critiques of the media in nearly every country in the world. The U.S. media is dominated by corporate interests. The British media is distorted by invasive and scandal-mongering tabloids. Elements of the Pakistani media are infiltrated by state security agencies. The Turkish media is dominated by business interests beholden to the government. The Mexican media has been partially corrupted by trafficking organizations. Governments can help address these issues by supporting media development and investing in journalism education; they can take steps to end cronyism and break up monopolies. But a prerequisite for any government efforts to improve the quality of information available to the public must be a clear and unequivocal embrace of the full range of freedom of expression guaranteed under international law. Without such a commitment, governments that point to the media’s shortcomings are, in my experience, looking to exploit them to justify restrictions rather than to ensure that people have access to timely and accurate information. Under international law, governments do not have the authority to restrict information because it is “biased,” “false,” or offensive, disruptive, or destabilizing. This is the essence of free expression.

Informing the World

Technology has transformed the way that news is disseminated and consumed around the world. But for the system as it is now structured to work effectively, we still need people on the ground in places where news is breaking. There has been much excitement about the roles that citizen journalists and activists have played in providing firsthand accounts of unfolding events in Egypt, Iran, Syria, and China. That excitement is understandable. But for the moment the most important (and least heralded) figures in the global information ecosystem are local journalists working in their own countries. Partly as a result of their increased visibility and importance, local journalists are more vulnerable than ever. They are the ones informing global citizens—and they are the ones being jailed and killed in record numbers.

The flow of information is undoubtedly increasing, and today, because of the ubiquitous nature of social media, we are often inundated, unable to process it all or put it into proper context. In fact, the volume of information can even obscure what we don’t know and prevent us from seeing the ways that governments and violent forces are disrupting the flow of news within countries and across borders. Deluged with data, we are blind to the larger reality. Around the world new systems of information control are taking hold. They are stifling the global conversation and impeding the development of policies and solutions based on an informed understanding of the local realities. Repression and violence against journalists is at record levels, and press freedom is in decline.


Excerpted from The New Censorship: Inside the Global Battle for Media Freedom, by Joel Simon. Copyright © 2015 by Joel Simon. With permission from the publisher, Columbia University Press.

Joel Simon is the executive director of the Committee to Protect Journalists. He has written widely on media issues, contributing to Columbia Journalism Review, Washington Post, Guardian, New York Review of Books, and others. He is author of The New Censorship: Inside the Global Battle for Media Freedom. On Twitter: @Joelcpj.

Tests for Egyptian Journalists

In a classic essay in the Journal of Democracy in 2002, “The End of the Transition Paradigm,” democratization analyst Thomas Carothers questioned the assumption that elections are the be-all and end-all of democracy. His argument seems especially apt in Egypt’s case. One mistake, according to Carothers, is to believe that the political and economic effects of decades of dictatorship can be brushed aside. Another is to imagine that state institutions under dictatorship functioned sufficiently well that they can be merely modified and need not be entirely rebuilt. Political scientist Sheri Berman, writing in Foreign Affairs in 2013, made similar points about what she called the “pathologies of dictatorship.” These leave a poisonous aftermath of pent-up distrust and animosity, she said, bereft of political bodies capable of responding to or even channeling popular grievances. In Egypt, media institutions, largely controlled by the state since soon after the country became a republic in 1952, are part of this problem, but they can also be part of a future solution.

To the extent that news media contribute to framing public discussion, the closer they get to representing the full plurality of interests and viewpoints in society, and the more they report verified information rather than prejudice, rumors, and lies, the more likely it is that different social groups will understand each other and make policy choices that are collectively beneficial. How media pluralism is achieved depends on history. Some have argued that the norms of professional journalism, which underlie claims to internal pluralism by various U.S. and northern European media outlets, spread as a face-saver for democracy under monopoly capitalism. That is to say, internal pluralism, by giving an impression of editorial non-partisanship, obscures the high barriers to media market entry that give partisan, capitalist media owners their advantageous position in the field.

The alternative to internal pluralism is to have a plurality of voices expressed by multiple media outlets. Where this happens, media outlets are often identified with competing political ideologies. They may operate as tools of political struggle, mixing commentary with reporting. In these circumstances media users, who everywhere tend to choose news sources that accord with their own views, are less likely to be exposed to other ways of thinking, which limits openings for political dialogue. Many national media systems contain a mix of internal and external pluralism. Evidence shows that the greater the mix—with different media (commercial, public service, partisan, community and minority media) valued for their different functions and different styles of journalism—the more political dialogue is facilitated and the better democracy is served.

Opposition Press

In Egypt’s history, periods of rapid press growth occurred at times, such as in the 1870s and early 1900s, when the country was in ferment over how best to challenge foreign rule. Newspapers gave vent to a cross-section of emerging political movements, establishing a link between journalism and political campaigns. An American study of Arab journalists in 1953, after many decades of colonialism in the region, found them widely in agreement that their publication’s primary purpose was to fight for political causes. Yet not all news media of the time had started out with that intent. Rose El-Youssef, founded in the 1920s by the Egyptian actress of the same name, was a notable literary magazine designed to engender respect for theater and the arts before it became a political supporter of the pro-independence Wafd Party, winner of the first elections called under the parliamentary system set up in 1923. Mustafa Amin, founder with his brother Ali of the Akhbar El-Yom newspaper group in the 1940s, established his credentials as a journalist inspired by models of fair reporting.

Gamal Abdel Nasser’s rise to power after the monarchy was overthrown in 1952 put an end to multiparty politics in Egypt. The jailing of outspoken journalists became increasingly common until, in 1960, Nasser nationalized the press, bringing all leading titles, including Rose El-Youssef and Al-Akhbar, under government control and turning print journalists into government employees. Anwar Sadat, Nasser’s successor, continued to imprison writers and politicians by the hundreds. But in 1977 he reintroduced political parties, within limits, and allowed them to publish party newspapers. In a situation where the ruling party is routinely “re-elected” other parties inevitably remain in opposition. Thus Egyptian newspapers came to be categorized as belonging to either the “national” (nationalized) or the “opposition” press.

The notional divide between opposition journalists and national ones, in a climate where nationalism was highly prized, was blurred in theory in the 1990s by the reappearance of newspapers that belonged neither to political parties nor to the government. In practice, however, these papers were now also regarded as oppositional, and often dismissed as yellow press; sensationalism tended to be the simplest route to overcoming logistical obstacles to sales. Egyptian entrepreneurs made increasing use of a legal loophole that allowed foreign-registered periodicals to be printed and distributed in Egypt; such outlets, many based in Cyprus, numbered forty-one by late 1997, up from just a dozen in seven years. Despite wholesale bans and clampdowns by the Hosni Mubarak government, the new titles made a mark. El-Dostor, launched in 1995 by the son of a former foreign minister, had its local printing license withdrawn in 1998 but performed well enough after returning to the newsstands in 2004 to switch from a weekly to a daily in 2007, complete with a website.

By then, satellite television was also starting to liven up the media scene in Egypt. The U.S.-led war to eject Iraqi occupation forces from Kuwait in 1991 prompted Saudi and Egyptian use of satellite broadcasting to counteract Iraqi propaganda and offer Arab viewers an alternative narrative to that of CNN as well. Pan-Arab channels proliferated from then on. In 2000, with Saudi-financed television production houses in Cairo luring Egyptian TV presenters away from national channels, and Egyptian business elites buoyed by U.S. pro-privatization rhetoric and cozy interdependence with the Mubarak regime, the Egyptian government decided to license private Egyptian television channels for the first time. By restricting the channels to satellite transmission, the state preserved its own monopoly over terrestrial broadcasting and, because Egypt had its own majority state-owned satellite, the government kept hold of levers that could get dissident private channels off the air.

The regime did not foresee the extent to which a multiplicity of privately owned newspapers, television channels, and—from around 2004—political blogs, would open the way to higher standards of journalism, intensify public political debate and raise awareness of the crushing hardships faced by large sections of the population because of corruption, unemployment, police brutality, and general government neglect. Public enthusiasm for the alternative voices available in these media soared as Egypt held its first multi-candidate presidential election in 2005. In that year alone, levels of Internet take-up among the Egyptian population, whether measured by use or subscription, more than doubled. Access to video-sharing, with the birth of YouTube in 2005, further facilitated reporting of human rights abuse. By 2008, the year when Facebook use took off in Egypt, the country had 160,000 bloggers.

The private TV channels, prohibited from broadcasting news, filled prime time slots with talk shows about national current affairs, dubbed beramig hiwariya (dialogue programs). Competing with each other for audience share, the presenters of these shows brought together officials and commentators representing multiple viewpoints, dissected scandals revealed through social media, and tussled daily with state security and channel owners over what they could and could not say.

It is not fanciful to imagine that the unwonted solidarity among different social groups, so much appreciated by protesters in the 2011 uprising, could be traced in part to an emerging national discourse of shared concern and aspiration in which the new journalism had played a part. The ground-breaking Facebook page for Khaled Said, a young Egyptian killed by police outside an Internet café in Alexandria in June 2010, was called “We are all Khaled Said.” For a brief moment, at least, a media delivering home truths could be viewed as patriotic rather than oppositional.

Fighting Terrorism

The dedication of a particular group of journalists and editors and the activism of bloggers were critical factors in the period of rapid media development in Egypt from 2005 to 2011. It is true that their work was made possible by structural change, to the extent that more media outlets were licensed or created online. But, time and again after the January 25 revolution, wealthy media owners revealed the tight institutional links they had—through business interests, bank loans, and lack of transparency in the licensing of media ventures—with the deposed Mubarak regime, the interim rulers of the Supreme Council of Armed Forces (SCAF), and, after the ouster of Mohammed Morsi as president in July 2013, with the government of former military chief and SCAF member, President Abdel Fattah El-Sisi.

It took very few weeks, after the initial euphoria of early 2011, for private media owners to demonstrate their indifference to defending fair and probing journalism. For a while, journalists silenced in one place could move elsewhere and stage a comeback. But this was against a backdrop of rising tolls of killings and imprisonment of journalists and an ever firmer conviction among politicians that media were either with them or against them. Egypt was the third most dangerous place in the world for journalists in 2013, with six deaths that year alone. “Egypt is where logic comes to die” was the verdict of one Egyptian journalist and blogger, struggling to respond to questions from an interviewer, ahead of the May 2014 presidential election, about death sentences recently passed on hundreds of Muslim Brotherhood supporters for a single murder, stories peddled by national media about nefarious foreign plots, and the Egyptian army’s announcement that it had achieved a miracle cure for AIDS.

Blatant censorship of mainstream private media has lately reached a level not seen since 2010, when the Mubarak regime imposed draconian curbs ahead of elections to the People’s Assembly. Throughout 2014, a number of respected journalists, including Journalists’ Syndicate Vice President Abeer Saadi, withdrew from the field of their own accord, finding it impossible to reconcile their personal professional standards with a return to the worst practices of previous regimes. Belal Fadl ended his daily column in Al-Shorouk, citing censorship; satirist Bassem Youssef stopped his show on MBC-Misr, citing fears for his own and his family’s safety; novelist Alaa Al Aswany gave up his political column in Al-Masry Al-Youm, saying “nothing is allowed but one opinion and one thought.” In September came news that Yosri Fouda would no longer continue his late night talk show Last Word on ONTV, an Egyptian channel that had been sold to Tunisian media mogul Tarek Ben Ammar in 2012. Reem Maged, another popular ONTV presenter, had already left a year previously.

With Dream TV’s October 2014 decision to interrupt Wael Al-Ibrashi’s talk show, mid-episode, after he criticized ministers, and Al-Nahar TV’s removal of Mahmoud Saad from his show days later, battle lines were clearly drawn between different camps of media workers. On one side were the editors who came together on October 26 to pledge support for all state measures taken to combat “terrorists and protect national security” and reject any use of the media to demoralize the police, military, or judiciary by questioning their performance. On the other side were journalists, 642 of them by November 6, who signed a statement posted on November 2 to denounce the editors’ surrender of press freedom and defend journalists’ right to keep the public informed.

A speech by President El-Sisi in early August had set the scene for the editors’ pledge. He articulated his vision of the media being engaged in a battle against “terrorism”—an implicit reference to the proscribed Muslim Brotherhood. Harkening back to the media compliance enforced by Nasser, El-Sisi insisted the media had a duty to unite Egyptians and focus on national goals.

Not for the first time were national unity and national security being conflated in public discourse, with “unity” assumed to mean, as Al Aswany put it, a single opinion and a single thought. Nasser’s slogan in the 1960s was “No voice above the voice of the battle.” Sadat in 1980 had introduced the Law of Shame to treat political criticism as an issue of morals and “ethical security.” The country’s schools backed up such mobilization. In the words of an Egyptian education specialist, quoted in an Egypt Today blog in 2013, decades of rote learning “nursed blind obedience” to political and religious authorities, teaching students merely to operate the “machines of the regime.” Thus, from Mubarak’s information minister trying to control pan-Arab satellite channels in 2008 to the Interior Ministry seeking in 2014 to undertake mass surveillance of social media, the authorities have been able to pepper official texts with vague references to “societal norms,” “social integrity,” and “national unity,” without being challenged to say what they mean.

The vagueness matters because of what laws can realistically be expected to achieve. It may be practicable to try to legislate to protect public safety and security against terrorist attacks. Unity, on the other hand, in the sense of social cohesion among different religious or ethnic groups or between rural and urban populations, has not so far been achieved through diktat. Past censorship of film and television drama suppressed or trivialized grievances, purportedly to prevent inflaming tensions. Yet levels of polarization were exposed to public view during the turmoil of late 2011, showing that decades of suppressing media coverage of grievances did not make them go away.


Promises, Promises

Have the media-related clauses in Egypt’s 2014 constitution addressed dilemmas arising from concerns about reporting grievances and conflict? In some respects the answer is yes. The constitution not only refers repeatedly to human rights but states explicitly, in Article 93, that the state shall be bound by the international human rights agreements it has ratified, and that these shall have the force of law “after publication in accordance with the prescribed conditions.” Egypt thus accepts Article 19 of the International Covenant on Civil and Political Rights (ICCPR), which sets out the right to freedom of expression, the responsibility to respect the rights and reputations of others, and the legitimacy of protecting national security, public order, and public health or morals. It also accepts Article 20 of the ICCPR, which outlaws any propaganda for war or “advocacy of national or religious hatred that constitutes incitement to discrimination, hostility, or violence.”

Indeed, the provisions of Articles 19 and 20 of the ICCPR are repeated in Articles 70 to 72 of the constitution; these refer, among other things, to the need for laws against defamation, incitement to violence, and discrimination between citizens. But constitutional promises of freedom and adherence to international human rights norms have to be understood in the context of the constitution’s detailed treatment of the regulatory apparatus that will oversee all kinds of media. Here, despite the creation of new bodies, there is consistency with past practice in terms of the dividing lines between outlets that are publicly and privately owned and between print and broadcast media.

The constitution provides, under Articles 211 to 213, for a Supreme Council to regulate all kinds of private media, online or offline, a National Press Organization to manage and develop the state-owned press, and a National Media Organization to manage and develop state-owned audio, visual, and digital media outlets. In June 2014, the Ministry of Information was dissolved for the second time since March 2011—the first time lasted only four months—to be formally replaced by the three new bodies. However, since the constitution left the composition and regulations of all three to be determined by the law, and since Egypt still had no elected legislature when the ministry was axed, the result was a continuing absence of transparency and accountability in media oversight.

As a temporary measure the prime minister appointed Essam El-Amir, head of the state broadcaster, the Egyptian Radio and Television Union (ERTU), to hold ministerial powers at the supposedly defunct Information Ministry until such time as the new bodies would be created. Interviewed afterwards, El-Amir was quick to sing El-Sisi’s praises. It appeared that the ministry’s budget would continue and its tens of thousands of employees would keep their jobs; El-Amir’s promotion seemed to reinforce the ERTU’s identity as an arm of the executive branch of government.

Continuity was also evident in the state-run press. This consists of some fifty-four publications produced by eight publishing houses, which together employ an estimated 31,000 people. These houses, along with the state-owned Middle East News Agency, were hitherto controlled by the Higher Press Council, a body appointed by the upper house of parliament, the Shura Council, itself one-third appointed. In January 2014, the Higher Press Council was authorized to continue choosing editors of state publications pending creation of the new regulator.

It is striking that, in an age of media convergence and pressure on state finances, neither the committee that drafted the constitution approved under Morsi in 2012 nor the one that drafted the 2014 version approved after Morsi’s removal thought beyond the country’s existing model of big state media houses, separated between broadcast and print. Under the 2014 constitution, the regulator for private media outlets is charged with ensuring their “independence, neutrality, plurality, and diversity” and “preventing monopolistic practices.” The words “plurality and diversity” do not appear in the articles describing regulation of state media, and nothing is said about the state monopoly of terrestrial media, nor about the competitive advantage enjoyed by state media houses in terms of their sheer size or privileged access to advertising income.

If more proof were needed of obstacles to fulfilling the constitutional promise of media freedom, it came in two proposals put forward in August, one by an ERTU figure and the other by a university professor, as to how the laws establishing the new media regulators should look. Critiquing both, Mona Nader, head of the media unit at the Cairo Institute for Human Rights Studies, compared them to “laws recently issued or proposed” that block protests and the activities of civil society. Just as these laws starkly contradicted key articles of the new constitution that guarantee the right to peaceful protest (Article 73) and the right of non-governmental associations to practice their activities freely (Article 75), proposed legislation for media regulation threatened to curtail the freedom from censorship promised in Article 71.

The Black Box

Pending clarification as to how media regulation will proceed, one of the most urgent tasks confronting those who wish to upgrade media policymaking is to make policymakers and the public more aware of the range of options available, as well as formulae that have been tried and tested elsewhere. The work of several Cairo-based research and advocacy bodies, grouped in the National Coalition for Media Freedom, means that a fund of background information is already accessible in Arabic. The constitution itself and the challenges facing news media everywhere suggest three prongs for focused, proactive gathering and discussion of policy-related information. They are: existing human rights provisions on pressing issues like incitement, defamation, and privacy; effective business models for local, community, and alternative media; and the development imperative of credible audience research.

Human rights texts offer benchmarks for the rights and responsibilities of media freedom. News media everywhere, online and offline, pose dilemmas about conflicting priorities, between free speech on one hand and rights to privacy, one’s good name, and freedom from discrimination or threats on the other. The Black Box, a program on the channel Al-Kahera Wal Nas (Cairo and the People), owned by advertising magnate Tarek Nour, left people dumbfounded in January 2014 when it started airing tapes of private telephone conversations that seemed intended to defame individuals linked to the January 25 revolution. Thoughtful journalists at the time addressed the broadcasts’ legality, but not everyone felt equipped to counter the spurious argument that national security trumped privacy or defamation concerns in this case. Given the strong legal protection that Egypt’s public officials have routinely enjoyed against criticism voiced in the media, through criminalization of journalistic work deemed “insulting,” The Black Box testified to Egyptians’ lack of equality before the law.

Defamation went hand in hand with something close to incitement to violence on Al-Kahera Wal Nas when in May 2014 the presenter of its program The President and the People denounced street artist Ganzeer as an affiliate of the banned Muslim Brotherhood. At a time when death sentences were being passed en masse against alleged Muslim Brotherhood supporters, Osama Kamal showed a photograph of Ganzeer, revealed his real name, and questioned whether such artists should be left to “do as they please.” Ganzeer used his own blog to issue a strong denial, but Kamal could have claimed multiple precedents for his action elsewhere on Egyptian television, for which no one had been held to account.

There has been no real public debate about the editorial frameworks that allow incitement on screen since TV presenter Rasha Magdy called on viewers to defend the army against demonstrators outside the ERTU building in October 2011. The demonstration was triggered by the burning of a church. Magdy described the protesters as violent. But, in what came to be known as the Maspero Massacre, twenty-seven demonstrators were shot dead by soldiers or run over by army vehicles, with hundreds more injured. A committee convened to investigate the coverage cleared ERTU of intentional incitement, while admitting there had been professional errors.

The point about such occurrences is that Egypt has already made provision for dealing with them by ratifying United Nations human rights treaties, as noted in the 2014 constitution. Yet fear persisted that new media regulatory bodies would be fashioned in the image of old ones, or new ethical charters would be foisted upon journalists from on high. One charter proposed by Egyptian academics in early 2013 had no fewer than fifty-eight clauses, whose number, tone, and mere existence implied that their authors were either unfamiliar with, or unimpressed by, international norms. The Declaration on the Principles of the Conduct of Journalists adopted by the International Federation of Journalists contains just nine clauses, the last of which accords with a UNESCO General Assembly agreement in 1999 that guidelines for journalistic standards should come from news media professionals themselves.

With the margins for media development narrowing in 2014, it was left to the initiative of committed journalists to find ways to connect with audiences by pursuing alternative business models for alternative reporting. Their challenge, as evidenced by a variety of projects already attempted—including independent online radio stations, neighborhood TV channels set up by enterprising residents in deprived areas outside the capital, and a hardcopy hyper-local free-sheet designed to reclaim a sense of ownership over public space in part of central Cairo—is to combine financial sustainability with editorial independence.


Audience Backup

One such project, Mada Masr, was formed in 2013 by journalists who used to work for Al-Masry Al-Youm’s weekly English-language offshoot, Egypt Independent. That publication was closed after the chairman of the Al-Ahram Newspaper and Publishing House joined the board of Al-Masry Al-Youm Group. Responding to the closure, members of the Egypt Independent team, having agreed on their own ethical principles of universal access and diversity, took these with them to Mada Masr, along with ideas about combining traditional and novel sources of revenue and establishing an ownership structure that would prevent dominance by a single shareholder. The aim of diversifying financial support and spreading legal liability has been a recurring theme of proposed independent media projects since 2011.

Sustainability, however, depends heavily on media users. As Egyptian media scholar Rasha Abdulla concluded in a report released by the Carnegie Endowment for International Peace in July 2014, journalists who seek to defend the public’s right to know will need “significant backup” from their audiences. An absence of qualitative or even reliable quantitative knowledge about media use in Egypt is rarely admitted. But with online advertising now a potential lifeline for independent media, such ventures may need to build audience research into their business development plans.


Naomi Sakr is professor of media policy at the University of Westminster. She is also director of the Arab Media Center at the university’s Communication and Media Research Institute. She is the author of Transformations in Egyptian Journalism, and has written and edited several other books on Arab media.

Hollywood’s Bad Arabs

“Is it easier for a camel to go through the eye of a needle than for an Arab to appear as a genuine human being?” I posed this question forty years ago, when I first began researching Arab images. My children, Michael and Michele, who were six and five years old at the time, are, in part, responsible. Their cries, “Daddy, Daddy, they’ve got bad Arabs on TV,” motivated me to devote my professional career to educating people about the stereotype.

It wasn’t easy. Back then my literary agent spent years trying to find a publisher; he told me he had never before encountered so much prejudice. He received dozens of rejection letters before my 1984 book, The TV Arab, the first ever to document TV’s Arab images, made its debut, thanks to Ray and Pat Browne, who headed up Bowling Green State University Popular Press. The Brownes came to our rescue. Regrettably, the damaging stereotypes that infiltrated the world’s living rooms when The TV Arab was first released—billionaires, bombers, and belly dancers—are still with us today. In my subsequent book Reel Bad Arabs I documented both positive and negative images in Hollywood portrayals of Arabs dating back to The Sheik (1921) starring Rudolph Valentino. The Arab-as-villain in cinema remains a pervasive motif.

This stereotype has a long and powerful history in the United States, but since the September 11 attacks it has extended its malignant wingspan, casting a shadow of distrust, prejudice, and fear over the lives of many American Arabs. Arab Americans are as separate and diverse in their national origins, faith, traditions, and politics as the general society in which they live. Yet a common thread unites them now—pervasive bigotry and vicious stereotypes to which they are increasingly subjected in TV shows, motion pictures, video games, and in films released by special interest groups.

Reel images have real impacts on real people. Citing a 2012 poll, journalist William Roberts asserts that “Fifty-five percent of Arab American Muslims have experienced discrimination and 71 percent fear future discrimination,” ranging from false arrests to death threats. After 9/11, as many as two thousand persons may have been detained, virtually all of them Arab and Muslim immigrants. Roberts goes on to explain that 60 percent of Americans have never met a Muslim, and that since 9/11, thirty new anti-Islamic hate groups have formed in the United States.

Nowadays, our reel villains are not only Arab Muslims; some Muslims hail from Russia, Pakistan, Iran, or Afghanistan; others are homegrown, both black and white American villains who embrace radical Islam. Given this new mix of baddies, no wonder we view Muslims far less favorably today than in the months after 9/11. In October 2001, an ABC poll found that 47 percent of Americans had a favorable view of Islam. As of this writing, only 27 percent of Americans view Muslims favorably.

Thirteen years have passed since the September 11 attacks. Regrettably, stereotypes of Arabs and Muslims persist, replayed and revived time and time again. Sweeping mischaracterizations continue spreading like a poisonous virus. One of the first lessons that our children learn from their media about Arabs, and one of the last lessons the elderly forget, is: Arab equals Muslim equals Godless enemy.

Indeed, there are some bad Arabs and Muslims out there—but that goes for people of all races and religions. No one group has a monopoly on the good and innocent. But this stereotype is so prevalent, so powerful, that people internalize it and, due to the absence of positive Arab and Muslim images in popular culture, cannot separate the reel from the real. You don’t tar an entire race or religion, hundreds of millions of people, based on the actions of a small minority.

From Tintin to Taken

The pre-9/11 Arab is the post-9/11 Arab: Reel Arabs appear as terrorists, devious and ugly Arab sheikhs, and as reel Muslims intent on terrorizing, kidnapping, and sexually abusing Western heroines. Even reel evil mummies return; this time they pop up in the United States. Legion of the Dead (2005), for example, is a low-budget, you-should-never-see-this horror film; it was so bad that I fast-forwarded through most scenes. The camera reveals an Egyptian burial ground, located not in the Egyptian desert but in the woods outside of Los Angeles. Here, the resurrected evil princess, Aneh-Tet, and her reincarnated male mummies go on a kill-them-all rampage; they melt some people’s faces, and terminate others with bolts of lightning.

In his children’s movie, The Adventures of Tintin (2011), Steven Spielberg falls back on the familiar Arabland setting, filling it with not-so-nice characters, especially the Arab hagglers in the souk. We also see reel dense and disposable robed guards patrolling a palace that is ruled by a weird, bearded Arab sheikh called Ben Salaad.

Arab, Afghan, and Pakistani Muslim villains appear in films such as: Taken (2008), Taken 2 (2012), Iron Man (2008), Killer Elite (2011), and G.I. Joe: Retaliation (2013). I found Iron Man difficult to watch because so many reel dead Arab bodies littered the screen. Elite displays all-too-familiar slurs. Here, reel Arabs appear as dysfunctional, mute, and unscrupulous “camel jockeys.” One of Hollywood’s ugliest reel sheikhs ever, an oily Omani potentate who kidnaps and imprisons the Western hero, is tagged the “old sheikh bastard.”

One-liners and scenes having nothing to do with Arabs continue to prowl silver screens. My friend Chuck Yates who taught at Earlham College called my attention to Seth MacFarlane’s A Million Ways to Die in the West (2014), a film with Western movie clichés that takes gratuitous jabs at Arabs, like Mel Brooks’s 1974 film, Blazing Saddles. In Saddles, Brooks links robed Arabs brandishing rifles with a long line of bad guys, notably Nazis. Million Ways contains two “Arab” scenes that merit our attention. One scene shows a principal character expressing relief that he has no Arab ancestry. But “the real howler,” observes Yates, “is in the gunfight showdown at the end. Here, our hero stalls for time by telling his opponent that he can’t fight until after he recites the Arab Muslim death chant, and then proceeds to holler a long string of gibberish.”

About five minutes into the entertaining children’s movie Nim’s Island (2008), we see the young heroine reading My Arabian Adventure, a book by her favorite author, Alex Rover. Abruptly, the camera cuts to the desert where five armed Arab bandits hold Alex, the Western hero, hostage. Alex, sitting blindfolded atop a camel, his hands tied behind his back, asks, “How am I going to die? Will it be by my captors’ guns? Or will it be death by thirst?” The Arab leader chuckles, “A special hole, just for you! Ever heard of the pit of spiders?” Alex smiles, then hops off his camel and trounces all five armed Arabs. End!

Critics were unanimous in praising Matthew McConaughey’s Oscar-winning role in Dallas Buyers Club (2013), and rightfully so. He gives a sensitive, sympathetic performance as a man stricken with AIDS who overcomes numerous obstacles in order to help AIDS patients get the medication they need. Critics, however, failed to note the blatant defamation of Arabs by McConaughey’s character. Seven minutes into the film, he and his cowboy pals take a break from working at the rodeo. They sit, smoke, and discuss possible future employment.

Friend: You think any more about [going to] Saudi Arabia? They need guys over there.
McConaughey character: Fuck no! Why would you want to go and work for a sand nigger, anyway?
Friend: Because they pay five times as much, that’s why.
McConaughey character: They got hot ass over there?
Friend: It’s a Muslim country; you can’t fuck the women.
McConaughey character: That takes me right out, then.
[The three men laugh.]

Homeland and Islamophobia

Critics contended that the 2014 fall line-up on prime time TV would be the most diverse ever. They cited ten new shows that would feature Asian, African American, and Latino characters in leading roles. ABC Studios’ executive vice president, Patrick Moran, boasted, “Maybe other networks are now rethinking diversity, but for us it always felt that’s what the world looked like, and it’s just a more contemporary approach to have more diversity reflected in our shows.”

ABC’s Moran and TV critics did not mention the absence of Arab American characters. Nor did he or any other TV executive say that since 1983 the major networks—ABC, CBS, NBC, and now Fox—have not featured an Arab American protagonist, ever, in an ongoing series.

So, I will break this silence by sharing with you TV’s shameful history regarding its evolving portraits of America’s Arabs. Before 9/11 they were basically invisible on TV screens. As far as most TV producers were concerned, Arab Americans did not exist. Only Danny Thomas in the Make Room for Daddy series (1953–1965) and Jamie Farr in M*A*S*H (1972–1983) could be identified by their Arab roots.

Perhaps they were better off being invisible. Soon after the September 11 attacks, TV programmers did make America’s Arabs and Muslims visible, vilifying them as disloyal Americans and as reel threatening terrorists. They surfaced in numerous popular TV shows as villains, intent on blowing up America. As I point out in my 2008 book, Guilty: Hollywood’s Verdict on Arabs after 9/11, as viewers we were bombarded with images showing them as clones of Osama bin Laden.

The vilification process began with the Fox TV series 24, and the 2002 CBS TV movie The President’s Man: A Line in the Sand. Other TV series expanded on and embellished the stereotype: shows like The West Wing, Hawaii Five-0, NCIS, NCIS Los Angeles, Tyrant, Homeland, The Agency, Sleeper Cell, and The Unit. Regrettably, most film and TV critics remained silent about these dangerous new images.

An Israeli presence on American TV helped solidify the stereotype. CBS TV producer Donald P. Bellisario led the way, demonizing Arabs and Muslims in his successful series JAG (1995–2005). In 2005, the producer introduced an Israeli heroine, Mossad agent Ziva David, to his highly rated NCIS series. In the series, David, the only full-time Israeli character on American mainstream television, wore a Star of David and an IDF uniform jacket to show the “military influence” on her character. Like some episodes of JAG, some NCIS episodes also advanced prejudices, showing David and her friends trouncing Arab baddies, in America and in Israel. Bellisario might easily have squashed stereotypes by adding a heroic Arab character to his series, an agent named Laila Rafeedie, for instance. David and Rafeedie could be friends, working side-by-side with the NCIS team to solve murders and catch the bad guys.

Then there’s Homeland, which remains Islamophobic in its basic structure. Most of the Arab and Muslim characters in this series are villains, linked to terrorism. As journalist Laila Al-Arian wrote in the Guardian: “Viewers are left to believe that Muslims/Arabs participate in terrorist networks like Americans send holiday cards.” Islam, too, is vilified: a captured American marine, under torture, turns not to the Islam of peace but embraces Hollywood’s stereotype, the Islam of violence. The Showtime series functions somewhat like 24, providing a means for the national security state to publicize fantasies of an Arab Muslim terrorist threat.

The FX channel’s much-hyped filmed-in-Israel series, Tyrant, displays some of the most racist anti-Arab images I have ever seen on American television. The series pits Arabs against Arabs. Consider the first episode: After a twenty-year absence, Barry (Bassem) Al-Fayeed and his all-American family return to the mythical Arab nation, Abbudin. Bassem feels obligated to see his father, Khaled, who rules this violent nation, and to attend his nephew’s wedding. Immediately, ominous music underscores the action and the camera reveals Khaled’s other son, the bare-chested, stupid, ruthless, and “insane” Jamal. He brutally rapes a woman in her home while her family sits passively, unable to prevent the abuse. Later, at his son’s wedding, Jamal violates his son’s new wife, his own daughter-in-law, by breaking her hymen with his fingers in the bathroom and showing the blood.

Almost all of Tyrant’s Arab characters are backward, barbaric types. Or they are rapists. Or they are warmongers. Or they are rich and spoiled. The show even depicts an Arab child as a murderer. Repeated flashbacks show Khaled the dictator directing his men to kill scores of unarmed women and men. As the massacre ends, Khaled orders one of his sons to shoot dead a helpless man begging for mercy; when the boy refuses, his younger brother does the deed. Week after week, the series fueled anti-Arab sentiment.

Tyrant’s executive producers Gideon Raff and Howard Gordon were responsible for Showtime’s Homeland, and they also worked together on Fox’s 24, so I was not entirely surprised. But I am dismayed that many TV reviewers gave the series a thumbs-up. The Hollywood Reporter wrote, “The plot is stirring and entertaining.” The Boston Herald called Tyrant “the most engrossing new show of the summer.” At least TIME panned it: “[Tyrant] fails badly… [Arab characters] sneer, suffer, and read ridiculous dialogue.”

In 2012, “a small group of creators and industry types has built a pipeline between Israel and the Los Angeles entertainment world nine thousand miles away,” writes journalist Steven Zeitchik. The first-ever Israeli-made drama for U.S. audiences—Dig—was sold straight to series at the USA network. Dig focuses on Peter, an American FBI agent stationed in Jerusalem. A press release states: “While investigating the murder of a woman, he stumbles into a two thousand-year-old conspiracy embedded in the archaeological mysteries of the ancient city.” Nir Barkat, the city’s mayor, is pleased that Dig is being filmed in Jerusalem. He was also pleased with the 2013 Brad Pitt film World War Z. In Z, the film’s characters and subtitles repeatedly state, incorrectly, that “Jerusalem is the capital of Israel.”


All-American Muslims

Despite the negative images discussed here, many positive developments are also taking place. Several impressive TV series and documentaries focused on Detroit and Dearborn’s Arab Americans and Muslim Americans, successfully exposing the impact of injurious stereotypes. The programs also underscored how America’s Arabs and Muslims—from football players to law enforcement officers—are pretty much like other Americans who contribute much to the greater society.

In November 2011, the TLC channel began telecasting its reality show All-American Muslim. Eight episodes of this critically acclaimed series focus on five Muslim American families from Dearborn. Though the series was short-lived, it inspired a nationwide conversation about what it means to openly practice one’s religion; it also revealed the discrimination America’s Arabs and Muslims sometimes face. For example, in December a conservative special-interest group, the Florida Family Association, called on advertisers to boycott the series, calling it “propaganda that riskily hides the Islamic agenda’s clear and present danger to American liberties and traditional values.” Most sponsors stayed with the series, but the hardware store Lowe’s and a travel planning website pulled their ads. There was some concern that the reality series would be taken off the air. But the series was not canceled. In fact, the advertising time for the remaining episodes sold out.

Commissioned by Detroit Public Television, producer/director Alicia Sams explored the diversity of Arab Americans in her 13-part Emmy Award-winning series, Arab American Stories (2012). Each half-hour features three short films, directed by independent filmmakers from around the country, profiling a wide variety of ordinary Arab Americans. Episodes focus on people such as Father George Shalhoub of Livonia, Michigan, who turned St. Mary’s Antiochian Orthodox Church into a positive force for its churchgoers; Diane Rehm, a national radio host of The Diane Rehm Show; Fahid Daoud and his brothers, whose chain of Gold Star Chili restaurants began in Cincinnati and spread throughout southern Ohio; researcher and radiologist Dr. Elias Zerhouni; hip-hop artist, poet, and activist Omar Offendum, who offers contemporary messages of cultural understanding; and Hassan Faraj, a Lebanese-American butcher whose dedication to his work and family inspired a local theater company to write a performance piece about him.

One year later, PBS telecast three one-hour episodes of Life of Muhammad, hosted by the noted journalist and author Rageh Omaar. The series gave viewers fresh and timely insights into the Islamic faith by focusing on Muhammad’s life, from his early days in Mecca, to his struggles and eventual acceptance of his role as prophet, his exodus to Medina, the founding of Islam’s first constitution, and finally to his death and the legacy he left behind. Some of the world’s leading academics and commentators on Islam—British novelist and historian Tom Holland, Bishop Michael Nazir-Ali, and Georgetown University professor of religion John L. Esposito—spoke about Islamic attitudes toward charity, women, social equality, religious tolerance, and Islam’s timely role in the world today.

Some episodes in the commercial TV series, Robin Hood (2006–2009), show Robin wielding so-called “Saracen” weapons, such as a recurve bow and a scimitar. The series offers heroic images of an Arab Muslim woman. The British-Indian actress, Anjali Jay, is featured as Djaq, one of Robin’s loyal “Merry Men.” Numerous episodes show Djaq helping Robin and his friends bring down all the villains.

Two commercial networks acted responsibly, shelving series loaded with stereotypes. In March 2014, The Walt Disney Company, parent company of ABC and ABC Family, canceled Alice in Arabia, a series about Arab kidnappers who oppress women. The plot is worth noting: an American teen is “kidnapped by her Saudi relatives and whisked off to Saudi Arabia, where she is kept as a prisoner in her Muslim grandfather’s home.” Alone in Saudi Arabia, “she must count on her independent spirit and wit to find a way to return home while surviving life behind the veil.”

A Disney spokesperson stated that the series was canceled because “the current conversation [with the American-Arab Anti-Discrimination Committee and other organizations] surrounding our pilot was not what we had envisioned and is certainly not conducive to the creative process, so we’ve decided not to move forward with this project.” This explanation is pure fluff. If I ever meet the CEO of The Walt Disney Company, Robert Iger, I will ask him whether ABC would even think of doing a series called Alice in Africa, Cathy in China, or Marie in Mexico. If not, why consider Alice in Arabia?

Disney was criticized for its portrayal of one of Hollywood’s most ruthless sorcerers, Jafar, in its 1992 animated film, Aladdin. So why did Disney resurrect Jafar in the ABC 2013–2014 series, Once Upon a Time in Wonderland? Here, the evil magician terrorizes and kills people: Jafar freezes some, then turns them into dust. Not surprisingly, no heroic Arab characters appear in Wonderland. When, if ever, will Disney cease vilifying Arabs? Will a Disney spokesperson ever say: “We are sorry for advancing prejudices. Disney is a family network; we care about children. We are not in the business of demonizing a people, a religion, and a region. This will never happen again.”

Weeks after Alice was axed, Fox unexpectedly canceled one of their well-publicized, 13-episode series, Hieroglyph. Set in ancient Egypt, Hieroglyph was about palace intrigue, seductive concubines, criminal underbellies, and divine sorcerers. “We wanted to do a show about deceit, sex, intrigue in the court, and fantastical goings-on—no better place to set that than ancient Egypt,” said Fox entertainment chairman Kevin Reilly.

Yet, Egyptomania persists. In 2015, Spike TV plans to telecast a series about King Tut. The programs will “dramatize the royal soap opera that surrounded the throne in 1333 BC.” And in 2016, Universal will release The Mummy, yet another reboot of their profitable Mummy franchise.

Desperately needed is an increased, positive Arab American presence on commercial television. As I have repeatedly said, the major networks—ABC, CBS, NBC, and now Fox—have not featured an Arab American protagonist in an ongoing series. Instead, the networks have vilified them. A corrective is long overdue.

But there have been a few faint glimmers of light. In July 2011, Turner Classic Movies (TCM) took a positive step and confronted the stereotypes head-on in their series, Race and Hollywood: Arab Images on Film. As curator, I helped select the films that were telecast twice weekly over an eight-day period. I also served as the series guest expert, discussing at length with host Robert Osborne all thirty-two “Arab” features, five shorts, and several cartoons.

One year later, various PBS stations across the country telecast Michael Singh’s absorbing, controversial documentary about reel Arabs, Valentino’s Ghost (2012). Then there is producer Chelsea Clinton’s excellent 34-minute documentary, Of Many (2014), which offers a creative view of relationships between Jews and Muslims at New York University. Her film focuses on a developing friendship between Rabbi Yehuda Sarna and Imam Khalid Latif, leaders of their religious communities at the university.

Credit goes to the producers of TBS TV for the network’s successful sitcom, Sullivan & Son, which regularly features comedian Ahmed Ahmed as Ahmed Nasser, an American Arab Muslim. And AMC’s short-lived 2013 detective series, Low Winter Sun, also merits recognition. The series featured as one of its main characters a tough, brilliant cop—an Arab American woman named Dani Khalil. Actress Athena Karkanis played Khalil.

Finally, one outstanding documentary meriting special attention is Rashid Ghazi’s Fordson: Faith, Fasting, Football (2011). Winner of numerous awards, Fordson follows a predominantly Arab American football team from a working-class Detroit suburb struggling for acceptance in post-9/11 America. The camera focuses on team members, their families, and their coach as they prepare for their annual big cross-town rivalry game during the last few days of the Muslim holy month of Ramadan. Ghazi’s film advances racial and religious tolerance. It should also help young viewers, even non-football fans, to understand that they and Fordson’s Arab American students are pretty much alike. Hillary Clinton called this film “a great documentary and a great story.” After Michael Moore watched it, he said, “I want everyone in America to watch this film.” Me too, Michael.

And the Winner Is…

Here are some mainstream Hollywood features that reveal decent Arab and Muslim characters. Two are outer space dramas. For example, in Gravity (2013) we see American astronauts exploring outer space. One very brief scene, however, features astronaut Shariff. He utters only a few words before an accident occurs, killing him. Including Shariff was probably a tip of the hat to the Egyptian-American scientist, Farouk El-Baz. Dr. El-Baz worked with NASA on the first moon landing.

Prominently featured in Ender’s Game (2013) is Alai, an Arab Muslim boy. Early on, some youths begin to pick on Alai, but the protagonist steps in and protects him. Final frames show the bright, talented Alai at work with his fellow crew members. Those who previously harassed him now accept him, and Alai bonds with the protagonist.

Opening and closing frames of the entertaining medieval film George and the Dragon (2004) focus on Tarik, a likeable and courageous, dark-complexioned Moor. Tarik’s heroics remind me of another reel good Moor, Azeem, who was played by Morgan Freeman in Kevin Costner’s Robin Hood: Prince of Thieves (1991). Both Tarik and Azeem save the protagonist’s life, and they help bring down the film’s villains.

One main character who aids the protagonist in Non-Stop (2014), a thrilling who-is-trying-to-blow-up-this-plane movie, is an Arab Muslim character, Dr. Fahim Nasir. At first, some passengers think Dr. Nasir is the villain, which brings to mind the Arabs in the film Flightplan (2005), directed by Robert Schwentke, with Jodie Foster. In this movie, some passengers tagged the Arabs as the bad guys before they were cleared of any wrongdoing.

There was an increased Arab presence at the 2014 Academy Awards ceremony. Nominated films focused on the people of Egypt, Palestine, and Yemen. Jehane Noujaim’s compelling movie, The Square (2013), documenting Egyptians struggling for freedom, was up for Best Documentary Feature. Sara Ishaq’s moving film Karama Has No Walls (2012), which focused on Yemen’s revolution, was nominated for Best Documentary Short Subject. The Academy again recognized Hany Abu-Assad by nominating his Omar (2013), a tragic love story about Palestinians resisting the occupation, for Best Foreign Language Film.

Though none of the three nominated “Arab” films received an Oscar, the nominations reveal a positive trend: Arab filmmakers and others are creating fresh films dealing with topical issues. Films such as Emad Burnat’s 5 Broken Cameras (2011), Susan Youssef’s Habibi (2011), Elia Suleiman’s The Time That Remains (2009), and Abu-Assad’s Omar focus on Palestine and how the Israeli occupation impacts Palestinians, young and old.

To their credit, some Israeli filmmakers also expose the occupation’s telling effects on innocent Palestinians. I highly recommend Eran Riklis’s Lemon Tree (2008) and Yuval Adler’s Bethlehem (2013). Riklis, who also directed The Syrian Bride, made Dancing Arabs (2014), a well-intentioned movie about coexistence that focuses on a young Arab trying to find his place in Israel.

Filmmakers from the United States, Canada, Morocco, Saudi Arabia, and Lebanon are also in the mix. Nadine Labaki’s thoughtful Where Do We Go Now (2011) examines events occurring after Lebanon’s civil war. Labaki’s protagonists, several bright Lebanese women, peacemakers all, plot to defuse religious tensions between the village’s Muslims and Christians. In Cherien Dabis’s new film, May in the Summer (2013), the protagonist looks forward to being married in Amman and being reunited with her Christian family there. But her strong-willed mother does not want May to wed a Muslim man. See the film to find out whether May can control the situation.

Then there is Haifaa Al-Mansour’s critically acclaimed Wadjda (2012), the first feature ever directed by a Saudi woman. This modest story about a young girl and her bicycle, shot entirely in Saudi Arabia, warms one’s heart. Nabil Ayouch’s gripping Horses of God (2012) follows four boys from the slums of Morocco who, sadly, become suicide bombers. And there is Canadian filmmaker Ruba Nadda’s feature Inescapable (2012). This intriguing story set in Syria—a police state filled with not-so-nice intelligence officers—concerns a Canadian Arab’s quest to find his adult daughter, who has gone missing while traveling in Damascus.

I found Rola Nashef’s Detroit Unleaded (2012) to be a terrific feel-good story about an Arab American couple in love. Their on and off courtship warmed my heart. Nashef’s friend, Suha Araj, also came out with a charming short film, The Cup Reader (2013). Then there’s Sam Kadi’s The Citizen (2012). Kadi’s compelling story concerns a Lebanese immigrant who arrives in the United States the day before 9/11. Kadi shows us what happens to this kind man, who loves America, when fixed prejudices rule the day. In John Slattery’s Casablanca Mon Amour (2012), two students take a road trip and visit several Moroccan villages. Along the way, they meet a variety of hospitable Moroccans and see some captivating scenery.

Axis of Evil

I have been friends with Axis of Evil comedians Maz Jobrani, Dean Obeidallah, and Ahmed Ahmed since we first met, years ago, at a conference in Washington, DC. Back then, they were struggling to make a name for themselves in show business. Directors and agents had warned them that unless they changed their names they would be relegated to playing three types of roles: terrorists, sleazy princes, and/or greedy oil sheikhs. But Ahmed refused, telling journalist Andrew Gumbel: “I’m never going to change my name. It’s my birth name, my given name.”

After years of setbacks and frustration in Hollywood, all three comedians and a growing number of other Arab American comics found a way to avoid being typecast as stereotypical reel bad Arabs. They moved forward and began using comedy to fight against discrimination. Instead of remaining silent, they spoke up—and told jokes. They used stand-up comedy to make the case for Arab and Muslim inclusion in the American “public square.” When asked why comedy, Ahmed said, “We can’t define who we are on a serious note because nobody will listen. The only way to do it is to be funny about it.”

Iranian-American comedian Maz Jobrani loved Tony, John Travolta’s character from Saturday Night Fever. However, casting directors wanted him for Muslim stereotypes. Jobrani told his agent, “No more terrorists. I don’t need to play these parts. It just feels icky. It does. You feel like you are selling out.”

It’s pretty much the same story with Dean Obeidallah. He, too, refused to take parts that demeaned his heritage. Instead, he successfully launched himself with his own material, offering more positive images of Arabs and Muslims. He was featured prominently in the 2008 PBS special, Stand Up: Muslim American Comics Come of Age.

All three comedians have had thriving careers at premier stand-up venues, in the United States and abroad. Their live comedy performances are available on DVD and Netflix. And all three made impressive independent features and documentary films. The first was Ahmed’s thoughtful and entertaining documentary, Just Like Us (2010). The film shows Ahmed and his fellow stand-up comedians being well received by audiences from New York to Dubai; especially moving are Ahmed’s scenes with his Los Angeles family.

Jobrani’s comedy specials Brown & Friendly (2009) and I Come in Peace (2013) show highlights from his live stand-up performances in the United States and in Stockholm. In 2013, Jobrani also produced Shirin in Love, a pleasing, independent romantic comedy focusing on the attraction between the Iranian protagonist, Shirin, and her non-Iranian mate.

That same year, Obeidallah, along with Negin Farsad, produced, directed, and starred in the documentary The Muslims Are Coming! (2013). Familiar names like Jon Stewart and Rachel Maddow appear, offering insightful commentary that exposes and contests discrimination. We also see ordinary Americans, from Arizona to Alabama, interacting with Obeidallah’s comedians before and after they perform in several major cities. Closing “Hug a Muslim” frames are especially memorable.

Finally, the Detroit area is the setting for Heidi Ewing and Rachel Grady’s documentary, The Education of Mohammad Hussein (2013). The film offers compelling insights into a post-9/11 America that struggles to live up to its promise of civil justice for all. The documentary focuses on a tightly knit Muslim community in Hamtramck, a small city surrounded by Detroit. Here, American Muslim children attend a traditional Islamic school—leading their faith and patriotism to be scrutinized. We see what happens when the children and their neighbors meet the Koran-burning Florida preacher Terry Jones: his hate rhetoric fails to provoke them.

Not so long ago these up-and-coming young filmmakers were struggling artists, just beginning their careers. Some were only thinking about making films; others had just written rough drafts. Yet despite all the obstacles they faced, they went on to direct and produce inventive independent films—films that challenge racial, gender, and religious stereotypes, films that make us laugh and think at the same time.

Stereotypes and Steelworkers

I wrote The TV Arab to help make unjust Arab portraits visible. Along the way, I discovered painful lessons about what happens to people—be they Arab, Asian, African, Hispanic, or Jewish—when they are continuously dehumanized. So, I tried to save readers like you from being subjected to these heinous stereotypes, writing that “a more balanced view of Arabs” was necessary, and that unless we counter this stereotype, innocent people will suffer. And, sadly, they have.

We still have a long way to go. No matter. I have a deep and abiding faith that young storytellers from Arkansas to Abu Dhabi will eventually shatter damaging portraits, image by image. Artists will lead the way, creating inventive, realistic Arab portraits. I recall the wisdom expressed by Vaclav Havel, former president of the Czech Republic, in his book The Art of the Impossible: Politics as Morality in Practice, “None of us as an individual can save the world as a whole… But each of us must behave as though it was in his power to do so.”

My optimism is always renewed by going back to what I learned growing up in Pittsburgh. As I wrote in The TV Arab, “In Clairton’s steel mills I shared sweat with men of many ethnic backgrounds. Mutual respect prevailed. Steelworkers can wipe out stereotypes. So can writers and producers.”

Writers and producers, actors and directors, you and I—we still can wipe out stereotypes; it’s in our power to do so.

Excerpted from Reel Bad Arabs: How Hollywood Vilifies a People, by Jack G. Shaheen. Copyright ©2015 by Jack G. Shaheen. With permission from the publisher, Interlink Publishing.


Jack G. Shaheen is a distinguished visiting scholar at New York University’s Hagop Kevorkian Center for Near Eastern Studies. He is a former consultant for CBS News as well as for many films, including Syriana and Three Kings. His extensive collection of representations of Arabs and Muslims, notably motion pictures, is housed at New York University. He is the author of The TV Arab; Reel Bad Arabs: How Hollywood Vilifies a People; Guilty: Hollywood’s Verdict on Arabs after 9/11; and A is For Arab: Archiving Stereotypes in U.S. Popular Culture.

From Pinstripes to Tweets

Ah, the good old days of diplomacy. The men donned pinstriped suits and the women were draped in pearls. The image of the diplomat was one of luxury, privilege, exclusivity, and secrecy. The embellishments of high culture and high education were captured in the rich symbolism of the famous painting The Ambassadors, created by Hans Holbein the Younger in 1533, at the dawn of contemporary diplomacy in the West. Mouse-click forward five centuries and digital communication technology is altering not only the methods but also the meaning of diplomacy. By going “digital,” the once secretive and exclusive domain of the elite has gone public.

In the realm of influencing relations between nations, digital media has suddenly unpinned the power to communicate from the almost exclusive control of the state. Thanks to digital platforms such as social media, state actors must now compete with non-state actors for a voice in the international arena as well as for legitimacy in the eyes of the public—including their domestic one. This is the great communication challenge for diplomats today and tomorrow.

The art of communication has always been central to diplomacy, from the Byzantine diplomats to the emerging digital diplomats of our time. Understanding the centrality of communication in the evolution of diplomacy helps put the angst over digital and social media in perspective. Currently, diplomacy is associated with the state-centric system of international relations that developed in modern Europe in the seventeenth century. Yet diplomacy and communication are as old as human society itself. Diplomacy and negotiations were requisite in arranging marriages as well as in commerce and trade throughout the territories of dynastic China. Among the earliest recorded diplomatic documents on political relations are the Amarna Tablets from ancient Egypt.

Even in ancient times, the centrality of communication in the practice of diplomacy was evident in the value placed on written and oral communication skills. In ancient Greece, oratory skills were highly prized, as diplomats had to present their case in open, public forums. Eloquence was similarly valued in envoys in ancient India. The Arthashastra, a treatise on statecraft believed to have been written by Kautilya, discussed the duties of diplomats in detail as representatives, informers, communicators, and negotiators. As Trần Văn Dĩnh noted in Communication and Diplomacy in a Changing World, “all Vietnamese envoys to Peking were top poets and writers—especially those endowed with a wit, a gift for quick repartee.” The verbal adroitness of the envoys became part of Vietnamese folklore.

The diplomacy of the Prophet Mohammed is well known throughout the Islamic world. The Prophet sent special envoys to deliver letters to the leaders of the region: Emperor Heraclius of Byzantium; Sassanid King Khosrow II of the Persian Empire; Ashamat Al-Negashi, Emperor of the Abyssinian Kingdom of Aksum; and the Muqawqis, who ruled Egypt.

In modern Europe, the term diplomacy was originally associated with the study of handwriting, which was necessary in order to verify the inscriptions presented by representatives of neighboring territories. In his book On the Way to Diplomacy, the political scientist Costas M. Constantinou notes that it wasn’t until the late eighteenth century that the word diplomacy gained political currency and became aligned with statecraft and external affairs.

Modern diplomatic practice has continued to place a premium on communication. “The value of a diplomat lay in his ability to communicate, negotiate, and persuade,” diplomatic scholars Keith Hamilton and Richard Langhorne wrote in The Practice of Diplomacy: Its Evolution, Theory and Administration. The phrase “to be diplomatic” suggests verbal finesse and tact in potentially disruptive situations. The idea of diplomats as the messengers and builders of relations between heads of state is a somewhat nostalgic one, albeit one that remains critical even in this digital era. Speaking of her travels to more than a hundred countries, former Secretary of State Hillary Clinton called it “shoe-leather diplomacy” and emphasized the importance of being on the ground. Today’s diplomats, according to Daryl Copeland, a former Canadian diplomat and author of Guerrilla Diplomacy: Rethinking International Relations, also need to be as at home in the bazaar as on the floor of the United Nations Security Council.

Whereas diplomacy and communication have a cordial relationship, the initial resistance of diplomats to digital media is emblematic of the rather strained relations between diplomacy and communication technology. Seemingly every communication innovation has represented at first a jolt, then a boon, to diplomatic practice. The invention of the telegraph initially caused an uproar in ministries and chancelleries far and wide, but then was openly embraced. The “diplomatic cable” became a staple of the trade. In Real-Time Diplomacy: Politics and Power in the Social Media Era, Philip Seib, vice dean of the Annenberg School for Communication and Journalism and former director of the Center on Public Diplomacy at the University of Southern California, highlights the challenge now presented by digital technology. “In a high-speed, media-centric world, conventional diplomacy has become an anachronism,” he writes. “Not only do events move quickly, but so too does public reaction to those events. The cushion of time that enabled policymakers to judiciously gather information and weigh alternatives is gone.”

Public Diplomacy or Propaganda?

There is an enduring perception that the media can often influence international relations more than the diplomat can. When the mass media emerged in the twentieth century, first radio and then television were perceived as all-powerful. During World War I, radio in particular was associated with propaganda that could penetrate the psyche of troops and demoralize them. The prevailing belief at the time, including among the field’s early researchers, was that propaganda messages delivered by the mass media would have an immediate, powerful effect on the audience through deception, manipulation, and coercion. Like a shot from a hypodermic needle, once the message was injected into a society there would be little resistance from a passive audience.

After World War I, researchers began an intensive study of propaganda, the media, and the ways to influence audience attitude and behavior—a focus that continues today. Not surprisingly, the outbreak of World War II in Europe saw the deployment of the mass media as part of the war effort. The Voice of America broadcasting service was launched within months of the U.S. entry into the war. Later, during the Cold War, the United States government used Radio Free Europe to penetrate the Iron Curtain.

Such international broadcasts have become a standard instrument in a nation’s communication efforts to influence publics. Current government efforts using broadcast media, and now social and digital media as well, to reach audiences fall within the realm of what has been termed public diplomacy—a state’s efforts to communicate directly with publics rather than with governments. While public diplomacy strives to persuade based on credibility and openness, it nonetheless faces a challenge to distance and distinguish itself from propaganda. Edmund Gullion, a past dean of the Fletcher School of Law and Diplomacy at Tufts University, who is credited with coining the term public diplomacy, introduced it in a deliberate attempt to find an alternative to the word propaganda. The coinage dates to 1965, but the term, like the field itself, lay largely dormant until September 11, 2001.

Ambassadors Who Tweet

The 9/11 attacks on the United States represented a wake-up call for public diplomacy, underscoring that the perceptions of foreign publics have domestic consequences. Public diplomacy, or “the battle for hearts and minds,” as it was more commonly called, was second to the military offensive when the United States launched the War on Terrorism. Not surprisingly, given the historical successes of broadcast media and the continuing perception of media power, post-9/11 American public diplomacy was driven by mass media initiatives.

The notion of public diplomacy had already received a boost from its link to the idea of “soft power,” introduced by the political scientist Joseph Nye in 1990. At the time, Nye suggested that the world was growing increasingly intolerant of hard power displays, such as military force or economic sanctions. Soft power, on the other hand, represented the ability to influence others through attraction and persuasion rather than coercion. Over the past decade, more countries have increasingly recognized the importance of public diplomacy and soft power.

The advance of public diplomacy has coincidentally paralleled the rise of social media. Once again, communication technology that was first viewed with trepidation is increasingly being perceived as a benefit to diplomatic practice. In 2009, Shahira Fahmy of the University of Arizona conducted a search in scholarly databases pairing the term “diplomacy” with different types of social media tools—“blog,” “YouTube,” “Twitter.” To her surprise, the search generated zero results. Only a few years later, Fergus Hanson of the Lowy Institute in Australia wrote about the development of “e-diplomacy” in the U.S. State Department. He concluded that most public diplomacy initiatives have social media “baked in” as an integral part of their designs.

Today, the adoption of digital and social media in public diplomacy appears to be spreading rapidly, even if many diplomats remain personally hesitant to take the plunge. In 2009, then Mexican envoy to the United States Arturo Sarukhan became the first ambassador in the Washington diplomatic corps to take to Twitter. At a recent forum at American University in Washington, he noted the inherent risks of using it: errors are very public, and could even go viral. Of the 183 accredited ambassadors in Washington, he estimated that only forty have created personal Twitter accounts.

Tech-savvy diplomats contend that the benefits outweigh the risks. According to Sarukhan, simply logging in and monitoring social media “widens the information and intelligence bandwidth.” Diplomats can complement the often partisan views of media commentators and policy experts who dominate the air waves with less scripted conversation on Twitter. He lauds the benefits of social media as a means of circumventing traditional media, especially when trying to get out a message and influence the narrative. He believes that his active and persistent presence on Twitter might have played a role in diluting and quelling a damaging media narrative of Mexico that had started to emerge.

These new media tools pose considerable challenges for diplomatic institutions, as a recent Aspen Institute report on integrating social media and diplomacy noted. One of the major challenges is the different pace of adoption, integration, and use of the tools between the public and governments. The diplomatic services of many nations are still inclined to use social media much like broadcast media: to shoot messages at publics. The problem, as countries are learning, is that social media has enabled publics to shoot back.

“Why Wasn’t I Consulted?”

Early efforts by American diplomats to use media in cultivating relations with foreign publics seem almost quaint—“telling our story,” per the motto of the former United States Information Agency. Digital media has intervened in the relational power dynamics, changing the balance of power between the state and the public. While on the surface digital media represents a technological shift, the more important change is in diplomatic thinking. Digital media has compelled nations to reconsider how they view publics and communicate with them. The supposedly passive audience of the information-starved age has been transformed into an aggressive, digital-media-empowered audience that demands to know, “why wasn’t I consulted?”

In the first phase of this progression, after 9/11, American public diplomacy initiatives echoed the Cold War approach and strategy. The focus was on getting the message out. The mass media was the tool of choice, not only because of its expansive reach, but also because it allowed for complete control over the message’s design and delivery. The goal was information dominance, gaining the upper hand in the battle for hearts and minds. American public diplomacy after 9/11 produced one high-profile media initiative after another—Al-Hurra television, Radio Sawa, and Hi magazine—largely aimed at influencing attitudes in the Arab World. Each initiative, introduced with great fanfare, was quickly met with a barrage of criticism because of its perceived disregard for the cultural and political sensitivities of its publics. As commentator Rami G. Khouri remarked at the time, “Al-Hurra, like the U.S. government’s Radio Sawa and Hi magazine before it, will be an entertaining, expensive and irrelevant hoax.” Capturing the sentiments of many, he added, “Where do they get this stuff from? Why do they keep insulting us like this?” Many even portrayed the elusive Osama bin Laden, who periodically released video tapes promoting Al-Qaeda’s cause to Al Jazeera, as getting the upper hand in this public relations war. The late Ambassador Richard Holbrooke famously remarked, “How can a man in a cave out-communicate the world’s leading communications society?”

Social media has effectively rendered this one-way quest for information dominance and control obsolete, ushering in a second phase of public diplomacy based on the relational imperative—an era in which relationship building became the foundation of public communication. Governments realized that publics were no longer content to be the target audience, or “target practice,” for public diplomacy messages. Social media had greatly expanded the array of media and information choices, and breaking the barriers of selectivity and gaining audience attention had become much more challenging for official public diplomacy. During this early period of social media, official public diplomacy responded rather quickly with pronouncements of “engagement.” In fact, for a while U.S. and UK diplomats and scholars stopped using the term “public diplomacy” in favor of “engagement.” Yet, despite the stated intent of engaging or involving audiences, social media initiatives were rather tepid, consisting mostly of grafting some of the interactive features of social media onto mass media initiatives. Hi magazine, as an early example, added a comments section. Later initiatives in this engagement phase included ventures onto digital media platforms—YouTube video contests, for example—and the now mandatory Facebook page for every foreign ministry.

The proliferation of social media soon spawned a third phase of public diplomacy, in which governments operated on the understanding that publics were not content with being merely participants in government-initiated and controlled communication. Thanks to digital media’s low costs and high capabilities, publics quickly seized the mantle of content producers. They now had the ability to amplify their voice and initiate a new communication dynamic in the public arena. Governments, not wanting to lose relevance, in turn quickly lauded the publics, movements, and initiatives they favored. This phase saw the increasingly organized participation of civil society organizations and the rise of “relationship building,” “mutuality,” “partnerships,” and “social networks” in the lexicon of public diplomacy. Many of these words found particular resonance in pro-democracy initiatives.

The third phase of social media and public diplomacy solidified the relational paradigm of public diplomacy, with its emphasis on relationship building and networking. Simply crafting clever messages or developing creative media approaches was no longer enough to reach or influence publics. Effective public diplomacy now rested on a government’s ability to cultivate relations with publics in order to promote policy agendas and create policy change. The challenge for diplomacy is that digital media remains a medium, and policy itself remains the message. And in the policy battles, publics are using digital media to go for the political jugular.

Digital Strategies

In a fourth phase, governments are facing adversarial relations with publics, whether those publics are challenging the policies of foreign governments or of their own. While adversarial publics may emerge spontaneously, they can quickly become a recognized movement, such as Occupy Wall Street in the United States or the Gezi Park protest movement in Turkey.

The existence of contentious publics—foreign and domestic—is not a new challenge for policymakers. However, in the past the suppression of public movements and rebellions was made possible by a state’s ability to control and if need be silence communication. Government control over the mass media accorded it that ability and power. Social media, by definition, does not lend itself to such control. The very visible, global magnitude of social media in the hands of adversarial publics is new for state actors. Governments that try to treat the new media like the old media are suffering the consequences.

As governments struggle to devise an effective response, publics are further exploiting the capabilities of digital media. They are not only challenging governments, but challenging their legitimacy. Communication credibility is one thing; political legitimacy is another. “Crisis” and “confrontation” are appearing with increasing frequency in public diplomacy discussions as states struggle to effectively respond to challenges from their own domestic public as well as global publics.

Diaspora populations, which are playing a more prominent role, are a critical public often overlooked in public diplomacy. Digital media has been called the quintessential communication tool of diasporas. When disaster and crisis strike, diaspora publics have the most incentive to respond. How tech-savvy digital diasporas respond is another matter. A diaspora may respond with an outpouring of support and serve as a bridge between its country of origin and global publics. Electronic Intifada, an activist website started during the second Palestinian Intifada, is a prime example. A diaspora can even use its intimate ties to the home country to unseat a government. It is perhaps not coincidental that some of the most piercing foils in public diplomacy-as-regime change have been spearheaded by leaders in the diaspora. But this, again, is not a new phenomenon. Cassette tapes were once considered new media, and their circulation is credited with sparking an unexpected youth revolution in Iran and sending the shah into exile.

New strategies are available to the new cyber publics demanding a voice. All publics are exploiting the anonymity conferred by digital media. Unlike traditional media, where the source of a message can be identified, the Internet is a bastion of hidden identities.

The power of anonymity was evident in one of the most prominent and baffling hoaxes of the early Arab Spring. I remember reading some of the first reports in the Washington Post about the dramatic abduction of the Syrian-American blogger Amina Arraf, “A Gay Girl in Damascus,” in June 2011. At the time, Syrian activists were struggling to get on the radar screen of Washington policymakers. Amina’s first post had appeared in mid-February. Two months later, in late April, her blog gained wide attention after a moving post, “My Father the Hero.” By early May, she was on Foreign Policy’s short list of Arab bloggers recommended as reading for President Barack Obama. Then, suddenly, on June 6, Amina was abducted. The New York Times, the Guardian, and other prominent Western news outlets covered the story. Reporters Without Borders issued a press release. Supporters created a Facebook page, with more than 15,000 members clamoring for Amina’s release.

This was a heady time for social media in the Arab World, with global attention focused on the Arab Spring. Andy Carvin, a prominent blogger with National Public Radio, led a crowd-sourced effort to find Amina. She never was found because she didn’t exist. She turned out to be a cyber vehicle created by an American graduate student studying in Edinburgh who wanted to join the conversation on events in Syria. He did it through Amina.

While Amina may not have been real, her cyber effect certainly was. The strategy succeeded in generating attention and compassion for activists in Syria. As one reader posted on the New York Times blog The Lede, “If she is a real person or not, or if her accounts are fictionalized or not, to me is irrelevant. The Syrian government is oppressing its people forcefully—this is a fact. If the story of her disappearance gets a few more people to pay attention, then whether true or false, more attention will be focused on the Syrian government.” Despite being a fictitious person, Amina Abdallah Arraf al-Omari today has her own Wikipedia page.

Some have suggested that digital media has evened the communication playing field between state and non-state actors. Many state actors believe that activists are using digital strategies that allow them to gain the upper hand over states—as in the case of “digital jihadists,” including the extremist group called the Islamic State in Iraq and Syria (ISIS). Here we may pause and reflect again on strategy. Many Western governments and much of the media appear focused on the content of ISIS’s messages. While graphic content by its very nature draws attention, what matters in digital media is not the content so much as the relational connections and the exchange of information. Everything about these tools highlights their interactivity. They are tools for engagement, for creating conversations, and for building relationships. This relational dynamic is where the communication power lies.

Activists have realized this new dynamic and are exploiting the interactive capabilities of digital communication tools. Many in government and diplomacy, however, appear still tethered to the “message-media” mindset of trying to craft messages and control media. They still struggle to find the right message and miss the importance of mapping the network of relations that carry, shape, and ultimately distort their messages.

Governments need to shift from analyzing messages to studying the online and offline relational dynamics. It is not so much what adversarial publics are saying as how they are organizing themselves. Ali Fisher and his colleagues recently noted patterns of “swarmcast”—a tactic used by groups of protesters to quickly form and disperse to challenge authorities. Swarming often involves protesters using disruptive, highly visible events to gain media attention—and then dispersing before security authorities can respond. This tactic and other interactive network patterns are part of a “netwar” strategy first identified by John Arquilla and David Ronfeldt in their study of the “Battle for Seattle” waged by protesters against a meeting of the World Trade Organization in 1999.

This is just one illustration of how activists are turning the tables on governments thanks to social and digital media tools. The pairing of online with offline strategies is particularly powerful, as seen in reports of how ISIS is also using social media to draw in and connect with potential recruits in Western countries. A recent article in the New York Times provided a glimpse into how the group responds to potential recruits on a personal level. While many officials focused on the graphic message content, the critical feature was how ISIS was using social media tools to connect and build relations. On a question-and-answer website, British fighters field queries as specific as what shoes to bring and whether toothbrushes are available. When asked what to do upon arriving in Turkey or Syria, the fighters often casually reply, “Kik me”—referring to the instant messenger for smartphones—to continue the discussion in private.

This type of outreach challenges government public diplomacy efforts. One of the crucial lessons learned from the intensive study of communication, from early propaganda research to present studies of mediated communication, is that while the media is good at creating awareness, it is not as effective at changing attitudes. With digital media, people again flock to sites that reinforce rather than challenge their beliefs. The prime mode for attitude change remains interpersonal communication. Trust, which is critical, especially in risk taking, is conveyed primarily through subtle nonverbal cues—eye contact, facial expression, posture. While digital media may not be able to create attitude change, its portability makes it ideal for facilitating those offline relations. To overlook these important offline relations is to ascribe a phantom persuasion element to digital media.

People Power

Digital media in the hands of adversarial publics should be a wake-up call to governments. Public diplomacy is no longer a competition just between states. The perceptions of foreign publics have domestic consequences; in turn, domestic publics can influence the global perceptions of a country. Governments need to rethink what the relational imperative means in a digital era. The relational imperative represents a shift in mindset from focusing primarily on messages and media as the core of diplomatic communication to the relational connections between publics and nations. Under the earlier “message imperative,” communication strategists began with the questions, “What is our message?” and “How can we deliver it?” The relational imperative requires the questions, “What are the connections or relations among the parties?” and “How are the parties and publics using those connections to further their cause?” The Gezi Park example illustrates how an innocuous group of environmental protesters morphed into a much larger alliance of seemingly disparate groups joining together against Turkish authorities.

One of the reasons the Arab Spring caught the attention of Western researchers was the way people were using social media to “circulate” information and organize themselves. While slogans such as “We are all Khaled Said” may have been powerful, it was the interconnectivity of social media that amplified the message content. This interconnectivity and relational dimension represents uncharted territory for governments still operating in a message-media mindset.

Today, relational connections can matter more than messages. Yet governments are still concentrating primarily on using digital and social media to convey the message. The unspoken assumption is that governments are still autonomous entities that can initiate and control the communication dynamic. Dominant public diplomacy strategies still focus primarily on control and influence, whether of the message, the media, or the narrative. Digital media eludes efforts at control.

This relational dynamic is why social and digital media have usurped communication control from governments. Government control over the mass media, common in the Arab world, is illustrative of the one-to-many, one-way form of communication power. With social media, publics now have the communication power to compete with governments in the public sphere. This observation is not new; media scholars have been waving red flags for several years. The challenge is not in controlling or countering the public, but in finding ways to respond effectively when the public is in control, when the audience is seeking to influence governments and their policies. Trying to counter the communication can be as ineffective as attempts to control it; both rest on the outdated idea that the state and its opponents are autonomous political entities. In an interconnected and globalized world, the luxury of autonomy is an illusion.

Here, the mutual influence accorded by digital media takes on a new significance. Digital media is shattering a core assumption of public diplomacy—the assumption of one-way influence, that governments can seek to influence publics without being influenced in return. In an interconnected sphere, one party cannot influence another without being influenced as well. Public demands for openness, accountability, and transparency only scratch the surface of this emerging trend. How states will respond to mutual influence—being open to public influence rather than only trying to influence publics—is increasingly becoming the critical unanswered question. It is a question that more and more nations will need to answer soon.

R.S. Zaharna is an associate professor in the School of Communication at American University in Washington, DC. She has written extensively on public diplomacy, and has served in an advisory role to governments, diplomatic missions, and international organizations on communication projects in Asia, Europe, and the Middle East. Her recent book is Battles to Bridges: U.S. Strategic Communication and Public Diplomacy after 9/11.

Putin the Spoiler

The year 2014 will go down in history as the year when Russian leader Vladimir Putin kicked over the world chess board and destroyed the post-Cold War system of mutual security commitments. By demanding to reorder a system of international relations that Putin’s Kremlin views as unjust, Russia has emerged as a revanchist nuclear power. Putin’s actions raise many questions. Are they the result of his leadership model—and its evolution? Or are we seeing the logic of the Russian system of personalized power, with Putin simply its current embodiment?

Mr. Nobody

I remember December 31, 1999, when Russian TV broadcast the spectacle of Boris Yeltsin, the first post-communist Russian president, leaving office. The cameras showed him turning to Vladimir Putin and gesturing to the Kremlin surroundings as if he were leaving him a gift: here, now you are the master of all this. Putin looked pale and tense; there was no expression on his face, and his gaze was remote. He had to be overwhelmed at that moment: a boy from St. Petersburg, a regular guy from a worker’s family, a recent gofer, was receiving a huge country to rule. The present Yeltsin handed Putin was Russia.

But why Putin? Why had Yeltsin’s political entourage chosen a man with no experience in public politics, who was virtually unknown to the public at the time? Several reasons explain that unexpected move. Yeltsin’s family and the oligarchs around him needed a loyal person without excessive ambitions ready to defend the interests of his mentors. Yeltsin’s ruling team did not want another charismatic leader; it did not want a heavyweight with his own power base. It wanted an individual close to the security services ready to defend the regime and someone who was predictable. Putin was the right man, in the right place, at the right time.

Putin coped brilliantly with the tasks entrusted to him. He not only ensured the safety of the ruling corporation, but also managed to fulfill the hopes of a society that longed for stability and predictability. True, Yeltsin’s family was soon removed from power, and the oligarchs who had hoped to handle Putin were pushed out of the Kremlin. However, the basic interests of the outgoing team were secured: Putin proved that he could guarantee his part of the deal. Having unexpectedly become the Russian leader, Putin at the beginning treaded with caution, diligently acquiring the Kremlin’s art of rule, which demanded that he reassert total control over the resources of power—something Putin would ultimately achieve. He set about building his own base and creating a super presidency based on effective one-man rule, emphasizing subordination, strengthening the role of the power structures, bringing their members into the government, and eradicating opposition. By 2004, Putin had created a regime that resembled the bureaucratic-authoritarian regimes of Latin America in the 1960s and 1970s: a system of rule in which power is concentrated in the hands of a leader who relies on the state apparatus, security forces, and big business.

For some time during Putin’s first presidency, from 2000 to 2008, Russia’s liberals hoped that this regime had some reformist potential. By 2005–2007, however, it became clear that Putin’s agenda and the mechanisms of rule he had created had only one goal: preserving power at any cost and reproducing it. He subjugated independent television and the press; cracked down on ambitious oligarchs; pressed the prosecutor’s office and the courts into service; made the parliament a rubber stamp of the Kremlin; and silenced the opposition. The elite and the society helped Putin put them in chains and lock them in cages, trading their freedoms for stability and a well-being based on the rising oil price. As for Putin’s behavior and policy, the longer he stayed in power, the more apparent his authoritarian style became: the urge to control everything around him, the suspicion toward any plurality of views.

Why did Putin go this way? Was this his political outlook? Or was it the result of the system of personalized power built by Yeltsin? I would argue that Putin’s views and his understanding of power have become instrumental to the survival of the Russian model of personalized power. Putin not only preserved the Russian matrix of one-man rule revived by Yeltsin, but created a much more efficient and tough instrument of personalized power due to his personal mentality and background.

Putin’s KGB experience, a suspicion of the West that he had difficulty concealing, the deep complexes formed in his youth, a desperate desire to succeed by traversing the murkiest corridors of power, his reliance on shady deals and the mafia-style loyalty of close friends, his disrespect for law (demonstrated during his tenure as St. Petersburg Mayor Anatoly Sobchak’s lieutenant), his belief in raw force as an argument, and his tenacity and acumen in pursuing his agenda—all of these hardly prepared Putin for transformative leadership or even for a moderate reformist agenda that might give at least some independence to society and the business class. Looking back, one can see two personal qualities that left an imprint on his leadership style: his indifference to the price of his actions, and his readiness to take risks. In any case, caution, deliberation, willingness to be a team player, respect for law, and an ability to work on a strategic agenda have never counted among Putin’s personality characteristics. Besides, before he was picked to be the guarantor of Yeltsin’s legacy, he had never been a leader even among his own gang and had not been successful in his KGB career—a strong desire to compensate for these personal deficits might explain his penchant for macho behavior and bullying.

Putin went even further, in fact, eliminating the counterbalances within the vertical power structure that had existed in the post-Stalin period and prevented the total absorption of power by one team or leader—Communist Party control over the security services was one of these balances. For the first time in Russian history, the Russian praetorians (the representatives of the special services), who had been the gatekeepers, became the rulers, opening the way for them to acquire total control over all areas of public and state activity.

During his first presidency, Putin worked within a paradigm we might call “Join the West and Pretend to Accept Its Standards.” Between 2000 and 2003, Putin even toyed with the idea of joining NATO, and he became America’s partner in the anti-terrorist coalition. He created the illusion that he was a pro-Western economic modernizer with authoritarian aspirations—an acceptable profile for the West. Apparently at the time Putin was deliberating on the mechanisms of his rule, the degree of subjugation of society, and the nature of the compromises he could allow with the West. He was trying to balance his provincial longing to engage with the most powerful leaders against his suspicion of the West. Vanity was not the only explanation for his “partnership” period; he was also learning to use the West to pursue his own ambitions. Putin’s foreign policy made it easier for the Russian elite to integrate personally into Western society while keeping Russian society closed off from the West. This policy looked ideal for a leader who was turning Russia into an energy superpower that functioned by cooperating with the West.

The Soviet Union survived through rejection of the West, whereas Putin’s regime has experimented with glomming on to the West. The Kremlin’s imitation of Western norms and institutions helped Putin use Western resources for the regime’s needs. The emergence in the West of a powerful lobby that serves the interests of the Russian rent-seeking elite has become a factor in both the Kremlin’s international impact and the preservation of the Russian domestic status quo.

Regrettably, the pro-Kremlin lobby in the West includes expert and intellectual circles ready to take part in staged events hosted by the Russian president and the Kremlin team, like the annual Valdai Club, that are used to legitimize the regime on the international scene. One could understand that experts would be curious to see the objects of their research in person. If this were really the clubgoers’ motivation, however, a single visit would be enough. One could also understand if the participants of the Valdai Club used this opportunity to tell the Russian leaders how the outside world views their policy. But judging from the transcripts, the guests are embarrassed to discuss problems that the Russian elite might find too uncomfortable to answer. Perhaps they do not wish to appear rude, but in that case one must ask what purpose these meetings serve, and who really benefits from them.

During his period of romancing the West, Putin apparently arrived at some truths that now guide his policy toward the community of liberal democracies: the West uses values as a tool of geopolitics and geo-economics; everybody in the Western world has his or her price; there is no united West and there are therefore many vulnerable points for Trojan horses; Western leaders are ready to accept double standards and imitation of norms if they see some economic reward for doing so; the West has no courage when it comes to responding to bullying. When in 2011 Putin and his talking heads—such as Foreign Affairs Minister Sergei Lavrov—began to speak about the “decay of the West,” invoking the German historian Oswald Spengler, they surely believed what they were saying.

The Putin Doctrine

The Russian system has undergone several reincarnations since it emerged after the collapse of the Soviet Union in 1991. It initially survived by dumping its former state shell—the USSR—and by acquiring legitimacy through the adoption of an anti-communist and reformist character. Since then, the system has been preserving itself through a process of gradual evolution of the regime. At the beginning of the 1990s, it adopted a soft authoritarian rule that imitated liberal standards and professed a readiness for partnership with the West. Today, the liberal dress-up game is a thing of the past; the system has turned to a harsh authoritarianism and harbors ambitions to become the antithesis of the West. Isn’t this an unusual phenomenon? A nuclear superpower perishes in a time of peace, and re-emerges decades later in another geographical configuration while preserving its predecessor’s key characteristics.

From the time of the Soviet collapse until recently, the Russian regime—the engine of the Russian system—had based its rule on the following premises: it recognized liberal civilization’s dominant role; it declared its adherence to Western norms; it partnered with the West to advance its interests; its elite integrated into Western society on a personal level. Imitation of Western institutions and norms, the emergence of a comprador rent-seeking class, the relative freedom citizens enjoyed in the private domain, and the limited pluralism of political life (as long as it didn’t threaten the ruling class’s monopoly on power)—all of these variables helped both Yeltsin and Putin to survive. The postmodern, post-Cold War world, with its shifting and indeterminate ideological lines, was an ideal arena for Russia’s game of “Let’s Pretend.”

During the last decade, the Russian regime’s growing institutional rigidity, its turn toward repression, and the failure of its social contract—a guarantee of the people’s well-being in exchange for submission—all pointed to the degeneration of the Russian system. This process of decay could have continued indefinitely, fueled by high oil prices, corruption, public indifference, and the lack of alternatives. However, when Putin returned to the presidency in 2012 (having served as prime minister during the handpicked presidency of Dmitry Medvedev in the interim), the Kremlin was forced to change tactics and adopt a new survival strategy: the Putin Doctrine, which legitimizes a harsher rule and a more assertive stance abroad. Circumstances forced the Kremlin’s hand here: the rise of protest activity in 2011, and the fear that even mild political struggle could threaten the regime, pushed Putin to stabilize the situation by rejecting the modernization that was threatening to upset the status quo, and by setting up the machinery of coercion before a new wave of protests struck. All of these developments unfolded according to the logic of the personalized power system, which feels threatened even in situations of limited political pluralism. Putin’s fears fit the trajectory of a system at the stage of decay.

By the beginning of 2014, the Kremlin presented a new political outlook based on the following assumptions: the world is in crisis, and the West—in terminal decline—no longer dominates the global economy and international politics; a “polycentric system of international relations” has emerged; competition between Russia and the West “takes place on a civilizational level”; Western ideology is doomed since it has rejected “traditional values” and has tried the “absolutization of individual rights and liberties.” By declaring the impending end of the liberal democracies, Putin was formally closing off the pro-Western period of Russian history that began in 1991 and included parts of his own tenure in office.

Ukraine and the War President

The Ukrainian political crisis and Russia’s undeclared war on Ukraine in 2014 didn’t just give the Kremlin an opportunity to put the Putin Doctrine into practice; it also enriched it with new elements. The Kremlin has used Ukraine to justify Russian society’s military-patriotic mobilization around the regime and its transformation into a “besieged fortress”—the siege-laying enemy in this case being an “internal” one, a Ukraine within Russia’s sphere of influence that receives support from an external adversary.

Ukraine has long been Putin’s personal project. He experienced a defeat with the country’s Orange Revolution in 2004, in which protests against vote fraud forced a runoff presidential election. Then in 2014, pro-Europe protesters forced the ouster of President Viktor Yanukovych after he rejected a European Union association agreement and opted to strengthen ties and pursue an economic bailout with Russia. The Kremlin is now taking revenge for past and present uprisings, as well as teaching the rebellious Ukrainians a lesson and warning the Russians about the price of insubordination and of attempting to escape the Russian matrix. There is another angle, too. The Russian lesson to Ukrainians is a warning to the West: “Don’t meddle—this is our playground!” But this is not the end of the Kremlin’s agenda. Ukraine is supposed to test the West’s ability to accept Putin’s rules of the game. Let us not forget that this test had previously been conducted in Georgia—when Russia launched a military intervention over the South Ossetia crisis.

At the same time, the Ukrainian crisis has allowed the Kremlin to test a new way of manipulating mass consciousness: it justifies the “besieged fortress” modality by exploiting the people’s respect for Russia’s World War II history and its popular hatred of fascism to create a phantom enemy, represented by the “Ukrainian fascists” and their Western mentors who “threaten the Russian World.” The regime has manipulated public consciousness by injecting a series of blatant propaganda lies directly into the Russian psyche. The tactic has worked, too, creating unprecedented public consolidation around the country’s leader: 83 percent of Russia’s citizens supported Putin’s “wartime presidency” as of May 2014. Even Russians who were otherwise critical of Putin’s rule found it prudent to rally around the War President, lest they be accused of unpatriotic behavior.

These media manipulation techniques have helped the Russian authorities to mobilize the population around the flag in defense of the Motherland, feeding off the growing discontent of the Russian public, its anticipation of looming troubles, its memory of failed reforms, and, at the same time, its longing for hope and reassurance. There are deeper reasons for the success of Putin’s mobilization, too. Russian society today resembles a sand heap lacking cultural and moral regulators. There is an irony here: the “traditional values” that once consolidated Russian society were demolished by the Stalin regime, which subjugated individuals to the state and its leader. Since then, the atomization of Russian society and the lack of the cohesive social networks existing in other societies—for instance, Confucianism plays this inhibiting role in China—have left Russians totally at the mercy of the state and its operational kit. Disoriented and demoralized, individuals compensate for their helplessness by seeking meaning in a collective national “success” that would bring them together and restore their feelings of security and pride. The annexation of Crimea has become one such “success”—a bloodless and peaceful one, in fact; it has given ordinary Russians a chance to forget their everyday problems and experience a surge, albeit temporary, of vicarious optimism.

Of course, the remnants of the Soviet great-power mentality and the longing for expansionism have facilitated the work of this compensatory mechanism. However, it would be a mistake to consider the great-power complex and imperialism as ineradicable characteristics of the Russian psyche. In 1991, only 16 percent of Russians spoke of the “common bond” between them and the citizens of other Soviet republics. Russia owes the sharp increase in its citizens’ great-power aspirations to the Kremlin’s efforts to reconstruct the Soviet mentality.

Russian Uniqueness

In its quest to discover new forms of popular indoctrination, the Kremlin dared to play the ethnic card for the first time. On April 17, 2014, Putin stated: “The Russian people have a very powerful genetic code… And this genetic code of ours is probably, and in fact almost certainly, one of our main competitive advantages in today’s world… It seems to me that the Russian person or, on a broader scale, a person of the Russian World, primarily thinks about his or her highest moral designation, some highest moral truths…” Moreover, Putin declared that an ability and willingness to die in public is one of the main features of the Russian genetic code. “I think only our people could have come up with the famous saying: ‘Meeting your death is no fear when you have got people around you.’ Why? Death is horrible, isn’t it? But no, it appears it may be beautiful if it serves the people: death for one’s friends, one’s people, or for the homeland…” This is a wartime slogan—a sign that the Kremlin is experimenting with putting Russia on a war paradigm. Putin has begun a dangerous game, one aimed at reviving the worst of Russia’s national complexes.

This shift toward militarization won’t be limited to an increased military budget and a growing role for the military-industrial complex. Russian militarism is a distinctive form of the state, one based on order rather than the rule of law. What this shows us is that the current regime can only control society by engaging in the search for an “enemy,” which calls for a militaristic framework. Although turning Russia into a Stalin-era army-state is no longer possible, the Kremlin is militarizing certain spheres of life, or imitating militarization in areas where it cannot achieve the genuine article.

This is of course not the first time the Kremlin has attempted to solve its problems by means of a military-patriotic mobilization: it also did so during the 1999 second Chechen war as well as in the 2008 Russia-Georgia war. But, as we saw in both of these cases, this type of mobilization turns out to be short-lived and requires the constant validation of additional triumphs over the enemy—whether that enemy is real or imagined. It’s equally important that these triumphs yield zero or limited casualties, since large losses of life can undermine the leader’s support base. It is ironic that a country whose population is so easily drugged by militarist propaganda is at the same time so afraid of bloodshed. In past instances in which Russians rallied around their leader in the wake of military threats, the conflict was over in a matter of months. The Crimean consolidation is going to last longer, but the resulting hangover will eventually wear off. When it does, the Kremlin will once again be forced to distract Russians from their worries and frustrations with the regime. Meanwhile, the Kremlin now has fewer distraction strategies up its sleeve, especially in light of the fact that it has played its trump card: war and the threat of war.

Through the summer of 2014, as the Kremlin continued its experiment with the war paradigm, it seemed that the process had slipped out of its bonds and acquired a logic of its own. Putin can’t return the country to a peacetime footing, because he can’t provide for the people’s welfare and stability; he needs to find enemies in order to justify his continued rule, but his war strategy unleashes suicidal forces that he can’t control. In fact, Russia’s undeclared war with Ukraine has yielded consequences that the Kremlin can barely manage. Among these consequences: the consolidation of Russia’s hawks, who demand “victory” over Ukraine; pressure from the military-industrial complex, which wants a bigger share of the budget; growing frustration among the Russian nationalists and imperialists, who expected Putin to go ahead with a full-scale invasion of Ukraine; possible emergence of new national “icons” in the form of pro-Russia separatist leaders who would compete for hero status with Putin if they returned to Russia; the growing internal and external costs of the military adventurism; and crippling Western sanctions.

Putin has been trapped, and this became apparent after the September 2014 ceasefire in Ukraine negotiated with his participation. The truce proved illusory, only a pause before Russia resumed helping the pro-Russian separatists in eastern Ukraine not only to build their self-proclaimed statelets, but to continue their attempts to expand their influence by seizing new parts of Ukrainian territory. By creating a situation of no peace and no war, the Kremlin has established a source of permanent tension, seeking to turn Ukraine into a failed state.

And so the Russian leader continues to search for new ideas for justifying his right to perpetual and unrestrained rule. Putin’s set of ideas resembles a stew cooked with whatever the chef could find in the pantry: Sovietism, nationalism, imperialism, military patriotism, Russian Orthodox fundamentalism, and economic liberalism. He easily juggles ideas borrowed from both Russian conservatives and right-wing ideologues from the West. Putin likes to cite the Russian philosopher Ivan Ilyin, an opponent of the West who considered fascism “a healthy phenomenon” during the rise of Nazi Germany. “Western nations cannot stand Russian uniqueness… They seek to dismember Russia,” Ilyin complained as he called for a “Russian national dictatorship.” Putin has not yet talked of such a dictatorship, but he loves to complain about Western efforts to back Russia into a corner.

Experts close to the Kremlin have long been citing German philosopher Carl Schmitt, who was an influential thinker in the development of national socialism in Germany. Schmitt proposed subordinating law to certain political goals, thus contrasting the abstract legitimacy of the rule of law state to the “substantive legitimacy” derived from a unified nation—ideas that resonate in Putin’s rhetoric. Pro-Kremlin political analysts are especially fond of Schmitt’s “state of emergency” theory, according to which a regime’s sovereignty implies its ability to push beyond legal boundaries and abstract law. Putin essentially operates within this realm, discarding constitutional norms and the principles of global order and creating new norms on the fly. True, Putin has not yet actually declared a state of emergency in Russia, but the legislative basis for doing so already exists.

The Kremlin’s experiments with the Putin Doctrine in Ukraine justify fears of a predatory world. Putin intentionally keeps the world in suspense, forcing the West to guess at his next move. And as he plunges Russia back into the past, he nevertheless keeps repeating the mantra that “Russian values do not differ dramatically from European values. We belong to the same civilization. We are different… but we have the same ingrained values.” Yet in the fall of 2014 Putin began using another mantra: the West constantly humiliates Russia; the West has ruined the world order and triggered the state of chaos. Consciously or not, this is a textbook case of cognitive dissonance; he endorses contradictory truths, thus disorienting the world and making chaos his field of play.

Putin’s new survival concept is intended to legitimize the political regime that he has been building. This regime does not fall into traditional categories based on the dichotomy between democracy and authoritarianism. Traditional definitions fail to capture the uniqueness of the Russian personalized power system, which encompasses elements of praetorianism, imperial longing, nuclear power status, the petro-state, a claim for special recognition by the outside world, and a readiness to deter the West even as it maintains membership in Western clubs. This is definitely a much tougher form of authoritarianism—one with dictatorial tendencies, and one which views projection of its might abroad as a critical component of its survival.

The Kremlin, however, has not yet crossed the final line into mass repressions. But the iron law of coercion, once set in motion, will take on a life of its own. If the regime resorts to violence—even if it does so only selectively at the outset—it will find it difficult to stop. It will find it necessary to continually reassert its strength; any sign of weakness will immediately rally those who are angry, dissatisfied, or thirsting for revenge, which, sooner or later, will provoke more coercion.

There are three reasons that the turn to repressive violence will result in a downward spiral. First, the fact that Russia is currently ruled (for the first time in its history) by representatives of the security apparatus—which is accustomed to repressive means of activity and which isn’t about to give up power—increases the chances of further repression. Second, a regime whose resources to bribe the population are diminishing may sooner or later turn to state violence or even state terror to keep the public under control. And third, the ruling team’s fear of losing power is what creates the impetus for repressive behavior: the more it fears that loss, the more prepared it is to commit violence. In other words, the clenched fist has been raised; the only questions are when it will fall, and upon whom. Putin’s regime is in bobsleigh mode; it has no choice but to slide down the track until it hits the bottom.

Meanwhile, Russian society, drugged by militarist propaganda, remains dissatisfied and angry. In the fall of 2014, only 7 percent of survey respondents were ready to take part in public protests. But only 22 percent of Russians viewed the rulers “as a team of honest professionals”; the majority viewed them as pursuing their own interests. Even more important: while 47 percent of Russians said that the interests of the state are more important than those of the individual, 37 percent believed it should be the other way around. Despite the desperate attempts to return Russians to a mindset of total subjugation to the state, more than a third of them still want to live in a different system.

Tactical Victory, Strategic Defeat

Never before has the West had such powerful mechanisms for influencing Russia, thanks to the integration of the Russian elite into Western society. At the same time, never has the West been so incapable of influencing Russia, thanks to the ability of the Russian elite (and the Ukrainian, and the Kazakh) to corrupt and demoralize the Western political and business establishment. The former oligarch Mikhail Khodorkovsky was correct in saying that Russia exports commodities and corruption to the West.

Does the West have the means to pacify a Russian leader who has started to play hardball in the global arena? The American fleet in the Black Sea? It would only give the Kremlin another pretext to prove that the West is a threat to Russia. Cutting investments in Russia? Does anyone think Putin has not considered such consequences? If he is ready to face them, then the logic of regime survival is stronger than the logic of investment growth. A European gas boycott? Who really believes that would happen today?

Let’s imagine a situation in which the West decides that it is ready to start dismantling the laundromat the Russian elite has built with the assistance of the Western “service lobby.” Will it be a moment of truth for the Kremlin and the Russian ruling class? I am not sure. The Kremlin has prepared for this eventuality. In fact, Putin, having declared the need for the “nationalization” of the Russian elite (meaning that the elite has to return its wealth to Russia), is ready for a new challenge. Moreover, any Western decision to stop co-opting the Russian elite will help Putin to tighten control over the political and business establishment. Those members of the political class who decide to return will be his loyalist base; the others will become traitors. One could conclude that Putin is ready to close the country, that he is prepared to accept Russia’s growing isolation as the price of keeping power. Does Putin want to remain a member of the Western clubs—the G-8, the NATO-Russia Council, the World Trade Organization? I am not certain of that, either. He would like to stay—but with his own agenda. He does not necessarily want to take Russia out of the international system. But he wants to change this system according to his wishes, and he wants endorsement of his right to break the rules. If the West is not ready for that, Putin will be ready to get out. From now on he will be breaking the rules—with or without the West’s consent.

In any event, we are dealing with a Russian leader who has started operating in bobsleigh mode—he has jumped into the sleigh and is hurtling down the track, and it is no longer possible to stop him. He is acting to preserve his power. The more he tries to preserve it, the more damage he inflicts on his country, but he can no longer reverse course. German Chancellor Angela Merkel was wrong when she said that Putin is living in another world. He fits perfectly into his system of power. He has set out to protect his lifelong rule, and every new step he takes makes his departure from power ever more improbable, forcing him to take greater and greater risks.

Putin may well be convinced that he is succeeding. He may think that the West is tamed and ready only to wag its finger at Russia. If so, we are facing a dangerous new era in international relations. Putin has won the support of a nation that only yesterday seemed so tired of him. He has retaken control of the elites, too. He is back on the scene as the War President. But he has become hostage to the Kremlin’s logic: he can try to save the regime only by showing might, aggressiveness, and recklessness. The moment he stops, he is politically dead, and there are too many forces waiting for their chance to retaliate.

Yet Putin’s moves have triggered the law of unintended consequences. His tactical victory will inevitably result in his strategic defeat. The Kremlin may fortify the walls of its decaying fortress, but it is undermining the foundation. The incursion into Crimea has already triggered the collapse of the Russian ruble. The Putin Doctrine turns the country into a perpetually mobilized command-economy state—the very model that brought about the collapse of the Soviet Union in 1991.

The law of unintended consequences is also at work in the Ukraine crisis. The Kremlin did what no Ukrainian political force could previously achieve: the Russian invasion united Ukraine’s disparate political forces—liberals, nationalists, the left, oligarchs, communists, and even the Party of Regions. It is possible that Putin will only help Ukrainians to strengthen their national identity on the basis of a national liberation struggle.

Vladimir Putin has unleashed processes that he can no longer contain. He can still play and pretend; he can blackmail a world unprepared for a maverick at the helm of a nuclear power. He can be the spoiler, forcing others to seek appeasement for fear of cornering him. But he cannot win: he has already lost. He has prevented the modernization of Russia and turned the country toward the past. He will not be able to contain the inevitable tide of Russian frustration when the people discover that he lied to them and that he cannot guarantee them a normal and dignified life. The grapes of future wrath are growing. The question remains: what will be the final price of Putin’s departure?

Lilia Shevtsova is a nonresident senior fellow at the Brookings Institution’s Foreign Policy Program and an associate fellow at Chatham House’s Russia and Eurasia Programme. From 1995 to 2014, she was a senior associate of the Carnegie Endowment for International Peace in Moscow. She is the founding chair of the Davos World Economic Forum’s Global Council on Russia’s Future. Her books include Yeltsin’s Russia: Myths and Reality; Putin’s Russia; and Russia: Lost in Transition: The Yeltsin and Putin Legacies.

The Globalization of Clean Energy Technology

The Chinese market for clean energy technology is huge and growing. The reasons behind this growth, according to Kelly Sims Gallagher, have a great deal to do with the intersection of policy decisions made consciously by the Chinese government and the wider advances in the globalization of high-technology development over the past twenty years. The story of that intersection is the heart of The Globalization of Clean Energy Technology: Lessons from China. Gallagher, a professor at the Fletcher School of Law and Diplomacy at Tufts University (currently on leave serving as senior policy advisor to the White House Office of Science and Technology Policy), takes lessons from China’s experience in researching, acquiring, and deploying four technologies—natural gas turbines, advanced batteries for automotive vehicles, solar photovoltaics (PV), and coal gasification—to draw general conclusions about the state of clean energy in a global context.

Gallagher interviewed more than a hundred experts and practitioners, including dozens within China, whose experiences in the industry frame the case study discussions that run through the book. Gallagher complements these accounts with her statistical analysis of Chinese patents as well as useful appendices charting other countries’ policies aimed at creating markets for clean energy technology. The work provides a snapshot of how China has been able to propel itself to the top of clean energy deployment.

The essential role that government policy has played reinforces what environmentalists have long said about the necessity of such market-making interventions as carbon taxes, cap-and-trade, and renewable portfolio standards—which mandate the percentage of electricity that must be derived from low- or zero-carbon sources. As Gallagher writes, traditional fossil fuel power generates effects (what economists call externalities) that are not captured in the price. Likewise, market prices on their own do not explicitly place a premium on technologies that reduce carbon emissions or improve public health. Thus, government regulation acts as a signal to the market that innovation will be rewarded. The more stable a policy, the greater the chances it will sustain a healthy market for low- and zero-carbon technology. That role will only grow in importance with the climate change agreement hammered out by the United States and China in Beijing last November. The Chinese pledge to increase their share of non-fossil fuel energy to 20 percent will guarantee a robust market for all of the technologies studied in this volume.

The Chinese government has responded to this deficiency in market forces through not only regulation, including plans for a nationwide cap-and-trade program, but also massive allocation of financial resources toward this goal. Indeed, according to Gallagher’s interlocutors, access to financing in China was perhaps the country’s greatest advantage over its competitors (in addition to low labor costs). “Implicit but unspoken in the interviews,” Gallagher writes, “is that a government loan is more than just that—it is an indication of government support that enables firms to accrue far more capital than they would be able to do otherwise.”

Government support, whether regulatory or financial, is not sufficient on its own to guarantee widespread deployment. Each of the four technologies covered in the volume has experienced varying degrees of success, due either to extenuating circumstances in the domestic Chinese market or to decisions taken by international companies. Chinese success in solar PV, for example, is well known. While government subsidies are crucial, Gallagher’s interviewees also point to the geographic concentration of Chinese firms and their vertically integrated nature. Chinese manufacturers are thus able to respond quite quickly to shifts in demand for specific types of solar PV, while at the same time they are pressured to cut costs due to stiff competition. Conversely, the diffusion of natural gas turbine technology has not progressed as well. Because of China’s long history of coal usage, the market for natural gas is not as mature. Much of the manufacturing and technological innovation in turbine technology has been under the purview of a few large firms that, unlike those in the other sectors profiled here, have jealously guarded their state-of-the-art turbines.

A second key point Gallagher makes—one in direct response to conventional wisdom on doing business in China—is the perceived improvement in the treatment of intellectual property in the energy sector. While foreign firms, especially in areas such as consumer electronics and entertainment, have seen their copyrights infringed and software pirated, the energy sector participants Gallagher interviewed did not see concerns over lax enforcement of intellectual property law as an insuperable obstacle to their operations. Indeed, there is growing confidence in the ability of Chinese courts to fairly adjudicate disputes, though the sample of foreign firms suing Chinese firms is very small. The fact that Chinese labor is inexpensive and that the Chinese government is committed to being a leader in the field in cooperation with international firms seems to outweigh concerns that innovations in clean energy technology will be stolen.

There are important implications arising from Gallagher’s work for the upcoming climate negotiations in Paris toward the end of 2015. A global deal to cut emissions, in Gallagher’s estimation, will only be practicable if met with robust financing measures that will facilitate the adoption of these technologies in the world’s least developed countries. The example the Chinese have set shows that effective and tailored government policies and abundant financing can do a lot to speed the transition. For the world’s sake, one can hope the resources are available to replicate this hitherto successful model.

Neil Bhatiya is a policy associate at the Century Foundation, focusing on U.S. foreign policy in South Asia. He was previously a research fellow at the Streit Council for a Union of Democracies. On Twitter: @NeilBhatiya.

Chasing Chaos

Humanitarian aid is an industry. It is tempting to think it is managed like a business, but unfortunately there is a significant difference. The consumers—in this case, the beneficiaries of the aid—have little or no say in holding the aid providers accountable. Consumers of commercial products and services in a free market must be appeased—their demands are explored and met—or else the business will not survive. But in the business of humanitarian aid, services and products are regularly delivered without consulting the aid recipients or affected community members; consumers of the aid cannot ask for changes in policy.

Aid literature abounds with suggestions to shift the paradigm, for example recommending consultations with stakeholders and community engagement. But when disaster strikes and emergency aid is quickly channeled to a crisis zone, very little of this talk is heeded. Aid agencies responsible for implementation are accountable mainly to donors and funding organizations, rather than to the aid beneficiaries in the affected areas.

In Chasing Chaos: My Decade In and Out of Humanitarian Aid Jessica Alexander captures these tensions. In a compelling piece of storytelling, she teases out the angels and demons of the humanitarian aid industry. Alexander takes the reader on a fast-paced and painful tour of the globe’s calamities that lives up to the book’s title. She travels from one crisis area to another, sharing detailed experiences from the Darfur war, the Rwandan genocide, the child warriors of Sierra Leone, the tsunami in Indonesia, and the earthquake in Haiti. Along the way, she presents insights that have implications for good governance.

The author’s long view of development aid, for example, is valuable because the public’s attention span for major crises is short. Supplies are poured into areas that the media covers intensively—what scholars call the “CNN Effect.” Soon thereafter global attention wanes, or moves to other more pressing or sexier crisis zones. Already-initiated projects peter out. Despite continuous efforts by development partners, donor-funded projects are rarely sustainable.

Alexander offers multiple examples of the negative effects of aid, whether it is too much assistance during a short period of time or the dire effects on communities when aid disappears. In Indonesia in 2005, Alexander describes an oversupply of aid driven by the media’s constant coverage of the devastating tsunami. What followed was aggressive competition between aid agencies to place signs and logos on their projects—and mark their territories. NGOs needed to show tangible results to their donors, while the actual needs of those affected took second place. Consider the futility of thousands of tons of supplies sent from every corner of the world: medication with Japanese or French pamphlets that cannot be deciphered by the community workers, or ripped jeans and high-heeled shoes sent to a population that mostly wears kebayas.

Volunteers can be equally useless. In Haiti, after the 2010 earthquake that killed about 180,000 Haitians and displaced more than 1.5 million people, the author describes the “voluntourists” who came to help out for a week, or during spring break, and have their pictures taken with the babies—just to feel good about it. These parachutists do more harm than good. The author complains that humanitarian aid staffers do not necessarily have altruistic motives. She notes how many are in it because of the career opportunities, job perks, danger pay—all the while living in air-conditioned quarters and partying all night.

To further illustrate the unsustainability of donor projects, Alexander gives the example of a cholera treatment center in Haiti. Despite the continuous threat of the disease, funds were discontinued and the center was forced to shut down. Donors were then attracted to new crisis areas, such as Syria and Yemen. Although healthcare should be a national government’s responsibility, in crisis areas governments are not ready to take over in a short time. Donor assistance is a temporary treatment of the symptoms but not of the underlying ailment.

Laila El Baradei is professor of public administration and associate dean for graduate studies and research in the School of Global Affairs and Public Policy at the American University in Cairo. She previously served on the faculty of Cairo University. She has been a contributing author to the Egypt Human Development Report in 2004, 2008, and 2010, the Millennium Development Goals Second Country Report for Egypt in 2004, and the World Bank Country Environmental Analysis for Egypt in 2005. On Twitter: @Egyptianwoman.

Reflections of a Media Critic

In 2012, on the fortieth anniversary of the Watergate affair, a notable commentary on the state of political journalism appeared in the newspaper whose investigative reporting uncovered the Nixon administration scandal. Leonard Downie Jr., who had worked at the Washington Post for forty-four years, wrote that American investigative reporting was at risk in the “digital reconstruction of journalism.” As Downie noted, the mission of investigative reporting and holding governments accountable remains as essential as ever to American democracy. But, he concluded, it has become a financial burden for established newspapers now struggling to survive, while digital startups seeking to fill the vacuum have not found a successful model for financial sustainability.

The financial uncertainties that Downie describes are a cause of concern for the future of quality journalism. But they are part of a deeper problem—the corruption of political culture in the United States. The American political system is broken, and political journalism has played a part in that failure.

I saw my first televised political convention at age ten. It was a spectacle that intoxicated me. With my camp buddy Jeff Greenfield (who would go on to become a big-time political journalist at CBS and CNN), I watched the presidential candidate nominations of Dwight D. Eisenhower and Adlai Stevenson II in 1952. We lived in a world of two major political parties with traditions and alignments. Democrats and Republicans stood for different but clear principles. They developed strategies for building grassroots support. The around-the-clock coverage, and the reporting on the political conventions by respected and experienced journalists like CBS news anchor Walter Cronkite, made us feel a connection with the democracy we were learning about in school. Then the media consultants took over. The conventions eventually became like conventional TV shows, shorter and less engaging. The big TV networks insisted on their need for commercial support and showed limited appetite for public service programming.

Meanwhile, the merger of the news biz and show biz further changed political journalism. Political reporters began spinning their “face time” on pundit shows into self-promotion exercises. They parlayed name recognition into book contracts and bigger jobs negotiated by agents. Much of the rise of these journalists was tied to their access to government sources and the hype encouraged by their media outlets. Some reporters became prancing egos.

In a 2011 column, the Washington Post’s Dana Milbank skewered the White House Correspondents’ Association’s annual dinner. Once a “nerd prom” for journalists, he wrote, the event had spun out of control. “With the proliferation of A-List parties and the infusion of corporate and lobbyist cash, Washington journalists give Americans the impression we have shed our professional detachment and are aspiring to be like the celebrities and power players we cover,” Milbank wrote.

The superficiality of so much of political journalism today is another huge concern. Hugely popular social media outlets like Facebook and Twitter have become dominant sources of constantly updated information. Thoughtful journalism offering context and background is often conspicuous by its absence from the headline hit parades. A number of new online publications feature deeper long-form journalism, but in the Twitter age of endless hits it is not clear how widely they are read, or how much influence they have outside of a small information elite. There is constant repetition in an ever-changing zeitgeist and stew of sensation. Questions raised in the morning are gone by the afternoon. The public is being exposed to more while absorbing less as a flurry of changing stories compete to dominate the news agenda. Cultural critic Bill McKibben dubbed our times “The Age of Missing Information” in his perceptive 1992 book on sensory overload.

Another problem is how political journalism has contributed to the poisoning of political discourse. Politicians now see politics as warfare. Political parties have fractured as factions and movements manipulated by big-money donors operate with targeted communications and over-the-top partisan online media. Political action committees tie support to ideologically rigid requirements. Supreme Court decisions have essentially supported the takeover of politics by agenda-driven interests. Entities like Fox News can be blamed for supercharging the political environment and assuring a lack of civility. Political polarization is fueled by TV talking heads with prefabricated audience-tested “message points” that make it difficult for people to disagree without anger or find common ground. While the media publishes frequent commentary about America’s broken political system, too few journalists are challenging the new partisan political reality.

That may be partly because of the complicity of political journalism in the broken system. For example, as Huffington Post senior media reporter Michael Calderone has noted, throughout his presidency Barack Obama “has used smaller, private meetings with influential columnists and commentators as a way to explain his positions before rolling out major foreign and domestic policy decisions.” Obama has met with conservative as well as liberal journalists. In 2014, Calderone reported on how the U.S. leader held an off-the-record meeting with more than a dozen prominent American journalists—from leading outlets ranging from the New York Times and Washington Post to the Atlantic and New Yorker—just hours before calling for an escalation of the war against the Islamic State in Iraq and Syria in a primetime televised address. When asked about the meeting, the White House “declined comment.”

So there you have it: four decades after a newspaper brought down an American president, political journalism is joined at the hip with an administration that doesn’t just secretly brief journalists but sucks up their ideas for how to sell a war to the public. It’s one more nail in the coffin of political journalism.

Danny Schechter is the author of sixteen books, most recently Madiba A-Z: The Many Faces of Nelson Mandela. He was an Emmy Award-winning producer for ABC News and has directed numerous documentary films. He edits and blogs at

Political Order and Political Decay

Francis Fukuyama is best known as the political theorist who, at the age of 36, declared that we had reached the end of history. It was 1989, the Berlin Wall was about to fall, and Fukuyama, then a member of the State Department’s Policy Planning unit, wrote an essay in the National Interest arguing that liberalism faced no real competitors: it is humanity’s ideological endpoint.

Twenty-five years after Fukuyama wrote that career-defining essay—“The End of History?”—it seems that mankind might sooner colonize Mars than establish universal democratic freedom on Earth. His critics have pointed to events such as the Rwandan genocide, the Balkan wars, and the September 11, 2001, attacks as evidence that his theory is flawed, yet Fukuyama has not wavered. Just because all societies will eventually aspire towards democratic ideals, he argues, it doesn’t follow that it’s easy to make democracy work.

This is the departure point for Political Order and Political Decay, the second in Fukuyama’s two-volume series tracing the development of political institutions from tribalism to the modern state. It is preoccupied with one central question: how do countries “get to Denmark?”—although Fukuyama is less concerned with the actual nation than with the Denmark of liberal policymakers’ dreams, the ultimate example of a prosperous, stable, and accountably governed society.

Political Order and Political Decay is not a how-to guide—the Scandinavian country can’t be broken down into component parts and re-assembled at home like a piece of Ikea flat-pack furniture. Instead, in a remarkable demonstration of academic scope and assiduousness, Fukuyama explores global political history, from the industrial revolution to the modern day, in search of clues on the mechanisms of political change.

According to Fukuyama, the three elements of a successful modern democracy are a legitimate and effective state, the rule of law, and democratic accountability. Countries exhibit these to various degrees. Take China, which has a highly effective state but a lack of democratic accountability. In India, the reverse is closer to the truth: its democratically elected leaders struggle, thanks to institutionalized corruption and bureaucratic incompetence, to get anything done.

Fukuyama does not single out a particular driving force behind political development, which he sees as a complex interplay of economics, culture, geography, climate, conflict, political personalities, and luck. He does, however, identify patterns. He offers, for instance, an interesting study into how different patterns of colonial rule in Sub-Saharan Africa, Southeast Asia, and Latin America have affected state development in these areas.

“The problem is that Denmark didn’t get to be Denmark in a matter of months and years,” Fukuyama writes. “Contemporary Denmark—and all other developing countries—gradually evolved modern institutions over centuries.” This means that attempts to impose institutions on countries from the outside rarely succeed. An underlying theme to Fukuyama’s latest work is the importance of humility and the need for policymakers to accept their limitations. In this way, Fukuyama is striking out at two sets of former colleagues.

The first are America’s neoconservatives, among whom Fukuyama was once a leading light. He was an enthusiastic supporter of the U.S.-led invasion of Afghanistan, but by 2004 he was so disillusioned by the “liberation” of Iraq that he wrote an essay in the National Interest, later turned into the book America at the Crossroads: Democracy, Power, and the Neoconservative Legacy, criticizing his former colleagues. Ten years on, a number of his former neocon friends still aren’t speaking to him.

Western forces have found it relatively easy to impose the mechanisms of accountable government in Iraq and Afghanistan, he argues: both have parliaments, political parties, and periodic elections. It is much harder, however, to create states that are seen as legitimate, or to impose the rule of law—even when, as in Iraq and Afghanistan, billions of dollars are funneled into state-building programs.

He also criticizes the international development community, including the World Bank (where he once worked), whose attempts at promoting democratic reform are often inept and sometimes counter-productive. Rarely do outsiders possess the knowledge needed to successfully reform institutions—and it makes little difference whether well-intentioned NGO workers are aiming to import foreign institutions or simply to reform or formalize existing local ones. Only “indigenous actors” are sufficiently aware of the “constraints posed and opportunities presented by their own history and traditions,” he writes.

Even when countries do “get to Denmark,” or somewhere close, they may not remain successful democracies forever. To complete his apparent campaign to annoy everyone he’s ever worked with, Fukuyama draws on his experience at the State Department to show how American politics has decayed. All political systems are subject to decay: laws and institutions often struggle to keep up with social change, and elites are always looking for ways to re-assert control and recapture the state. In the United States this means that well-heeled and well-organized lobby groups exert too much power, and the constitutional emphasis on “checks and balances” has been exploited to produce a “vetocracy” that paralyzes governance. Fukuyama believes this process is not irreversible, but no one currently seems to possess the political will and grit to force through needed reform.

For a thinker who once epitomized American triumphalism in the aftermath of the Cold War, Fukuyama sounds almost defeatist. He concedes that some countries may never reach “Denmark” and others will decay, which prompts the question: why is Fukuyama so sure that history has an “end” at all? Why should there be any guiding idea, or ideology, driving the apparent chaos of political change? Fukuyama often comes under fire for failing to account for how liberalism might survive a radically altered future—perhaps one in which technological change drives ever widening levels of inequality.

Yet it’s hard to reconcile a commitment to liberal ideals with a belief that democratic rights should be something other than universal. For Fukuyama, liberalism is not just an instrumental good, it has an intrinsic value, too. Democracy recognizes individuals’ “dignity as agents,” he writes. “Political agency is an end in itself, one of the basic dimensions of freedom that complete and enrich the life of an individual.” If you believe this, it is not easy to see why humanity should strive for any political system that fails to recognize individuals as agents, and not subjects, who ought to have a stake in their political future and the freedom to live their lives in the fullest possible sense.

If only Political Order and Political Decay could be pushed into the hands of political leaders, NGO workers, and policy wonks worldwide. They might find it a dense, challenging read at points, but it would hopefully leave them with both a much-needed sense of humility and a renewed awareness of the preciousness of liberal democracy.

Sophie McBain is an assistant editor at the New Statesman. She has written for the New Republic, FT Weekend, Guardian, Monocle, and Spear’s. From 2008 to 2011 she worked as communications assistant for the United Nations Development Programme and as a consultant for the African Development Bank based in Tripoli, Libya. On Twitter: @SEMcBain.