Ripon Forum


Vol. 51, No. 5


In this edition

Amid all the bad news coming out of Washington, DC these days, there is a lot of good work being done by inspiring individuals who are quietly trying to make government work better for the American people.

Common Sense in the Wake of Disaster

The National Flood Insurance Program is no longer just a program for hurricane-prone areas. It is a program for the entire U.S., and it needs to be reformed.

Partisan Politics is Unhealthy for America

The continuation of failed efforts in Congress to avert the Obamacare health care crisis has put a spotlight on its unhealthiest habit: partisan gridlock.

What Would Michel Do?

In light of today’s gridlock and the need for Congress to reassert its authority, the late Republican Leader’s insights and approach to legislating are needed now more than ever.

To Modernize Congress, Strengthen its Ability to Deliberate

The long-term health of Congress depends on reforming the institution in a way that strengthens, rather than bypasses, its deliberative features.

From Silicon Valley to Washington, DC

Last December, Matt Cutts officially quit Google to join the U.S. Digital Service. Today, he is helping to lead the effort to modernize the government.

Why Federal IT Systems need to be Brought Into the 21st Century

The federal government is one of the last bastions for information technology that was built in a bygone century – some of it approaching 50 years old.

Data-Driven Government

America’s governors and the states they run are using evidence-based techniques such as big data, business intelligence, cloud platforms and predictive analysis to solve public policy problems.

Reactions & Regulation in the Age of Computational Propaganda

Regulation against certain uses of disinformation tools, such as bots, would be a welcome development. Legislation targeting digital tools themselves, however, could be detrimental to the internet and free speech.

The EMP Threat Facing the United States

For those able to execute an unconstrained analysis of today’s threat environment, the single most urgent concern for America is what threatens her electric grid.

How Tax Reform can Boost Competitiveness

There is strong evidence that high corporate tax rates deter investment. This is especially worrisome because the United States has the highest tax rate among OECD countries.

Ripon Profile of Jenniffer González-Colón

The Congresswoman from Puerto Rico discusses the effort she is leading to rebuild the island territory in the wake of Hurricanes Irma and Maria.

Reactions & Regulation in the Age of Computational Propaganda

Washington’s scrutiny of Silicon Valley’s biggest tech firms has reached its peak in recent weeks, as various committees work to discern the ways social media platforms were used to manipulate public opinion during the 2016 U.S. election.

Facebook and Twitter have revealed the extent to which the Internet Research Agency, a contracted arm of the Russian government, penetrated and exploited social networks during the contest. The platforms have disclosed hundreds of thousands of dollars in online advertising and 671 accounts and pages linked directly to the Agency, which has been disseminating disinformation online with varying degrees of success since at least 2014.

In response to repeated requests from lawmakers and expert researchers, Facebook and Twitter have announced they will take measures aimed at curbing the influence of disinformation on their networks. These efforts include promoting transparency in online political advertising on both platforms. While these moves are steps in the right direction, more still needs to be done to illuminate how American citizens are being politically coerced, harassed, and silenced on social media. Specifically, how are public news algorithms manipulated, and how is computational propaganda carried out at scale? Regulation against certain uses of disinformation tools, such as bots, would be a welcome development. Legislation targeting digital tools themselves, however, could be detrimental to the internet and free speech.

The use of social media bots has been at the forefront of congressional inquiries into Russian manipulation in 2016. Bots are computer programs built to carry out automated tasks online. These software entities can be programmed to interact with users, promote messages online, or perform more mundane tasks such as managing permissions in a chatroom. Recent media coverage has focused on social bots – iterations of this automated technology that pose as humans or interact with humans online. Some of the more unscrupulous social bots on sites like Twitter, Facebook and YouTube promote political messages and game social media algorithms to drive online trends. They have the effect of manufacturing political consensus online — they create artificial trends or manipulate news feeds on social media, making particular information or people appear to be supported by real human traffic.

Bots, however, are not inherently good or bad – they are merely an infrastructural part of the internet. In fact, bots make up slightly over half of all internet traffic. Tools that everyday users of the internet know and love — such as search engines, Wikipedia, and chatrooms — would not be possible without the use of these automated agents. It is for that reason that any policy concerning bots should target specific, malevolent uses of them – using them to drive political messages or render protest hashtags irrelevant, for example – instead of taking the form of a blanket ban on bots altogether. Social bots can be used to hide the identity of those behind political manipulation and massively amplify digital attacks. They can, however, also be used as a social prosthesis for democracy — allowing journalists and civil society groups to scour large datasets and automate aspects of reporting and political communication that would otherwise have to be done manually.

Some argue that private companies alone should be the ultimate arbiters of what is regulated on their networks. While there is merit to this view, especially in terms of preserving the rights of users to freedom of expression, there are also important caveats that cannot be ignored. Indeed, the past records of these companies suggest that self-regulation is not sufficient to counter the fact that big tech’s conflicts of interest tend to go unchecked until it is too late. The tough truth is that, left to their own devices, it is tenuous at best to claim that tech giants will curb computational propaganda on their networks. With Google and Facebook representing 77% of digital ad revenue in the United States and nearly all of that market’s new growth, the abstract goal of democratically oriented software design is plainly secondary to concrete profits.

A litany of events over the past year alone has made this evident: Mark Zuckerberg dismissed out of hand the idea that online disinformation may have influenced the 2016 election as “pretty crazy”; Google stifled criticism in the U.S. of its business practices abroad; and Twitter deleted data critical to understanding exploitation of its platform, mere days after the company publicly criticized third-party research based on data limits the company itself is responsible for. A fundamentalist defense of purely private regulation also ignores the very real danger of regulatory capture: tech giants have spent over $150 million on lobbying in the past decade, with a vast increase in the past five years.

Civil society groups and expert researchers — such as the Alliance for Securing Democracy, the Atlantic Council’s Digital Forensics Lab, Bellingcat, and ComProp — have all made significant contributions to illuminating the dark underbelly of online disinformation, even as tech giants have been coy about their knowledge or critical of such research. They’ve revealed that there’s no marketplace of ideas when bots can amplify or dampen any message. Only after sufficient outcry from legislators, researchers, and the public have private companies even begun to openly acknowledge these problems. Even after Facebook admitted the presence of Russian disinformation on its network, for instance, Columbia Professor Jonathan Albright was quick to point out that it had vastly underestimated the number of users the propaganda had reached. Bots and false amplifiers can even be said to represent a new form of censorship – one based on content amplification rather than content suppression. Legal experts have astutely observed that our current legal framework is ill-equipped to handle such challenges.

It is plain that all sides have their blind spots: scholars and researchers grasp technology’s impacts on democracy without access to the data, computing power, or business savvy of the private sector; business interests can be naïve and myopic about the political and social harm their platforms can inflict on democracy; policymakers can lack the technological literacy to craft effective policy.

What is needed is more cooperation between experts, private industry and policymakers to craft both public and private policies that will preserve the inviolate American principle of free speech, while also limiting the insidious harm that abuse of online networks can incur. Private self-regulation, uninformed and heavy-handed public policy (such as the measures currently being proposed in Brazil and Germany), or maintaining the anarchic status quo are all undesirable options. A blue-ribbon commission composed of members from all parties would be a proper first step, and would have the highest probability of benefiting all members and ensuring that the internet continues to be a positive-sum game for everyone involved.

Malicious uses of technologies, explained by those academic experts who best understand them, can be prohibited by law by informed legislators. This in turn would provide private companies with necessary latitude in the management of their proprietary software, while also preventing regulatory capture. As Tim Wu writes, “If we believe in liberty, it must be freedom from both private and public coercion.”

Samuel Woolley is the Director of Research of the Computational Propaganda Project at the Oxford Internet Institute, University of Oxford. He is also a fellow at Alphabet’s think-tank and technology incubator, Jigsaw. Nick Monaco is a research associate on the Computational Propaganda Project. He is also a research associate at Jigsaw.