Future Tense

Regulators Seem to Think That Facebook Is the Internet

That’s a problem.

A view of the Earth from space with lights illuminating connected networks. (NASA/Unsplash)

In 1997, SixDegrees.com was the first real attempt at social networking, creating a space where users could upload their information and list their friends. The site peaked at approximately 3.5 million users before it shut down in 1999.

Since then, a series of social networking business models has emerged, each offering more advanced tools for user interaction. LiveJournal, a site for keeping up with school friends, combined blogging and social networking features inspired by the WELL; Friendster was a social network that allowed for increased interaction and control by users; Myspace had open membership and gave users the freedom to customize their pages. In 2005, it—and its 25 million user base—was sold to News Corp. But within three years, Myspace had been surpassed by Facebook, which launched in 2004 for college students and opened to everyone in 2006.

As that history shows, in the early days of the internet, disruption was profound and constant: New companies emerged in music, video, e-commerce, publishing, and telephony on an almost yearly basis. The internet appeared to be a space where competition could thrive.

Not anymore.

Today, disruption happens in much smaller ways. The examples are getting fewer because economies of scale have concentrated innovation in the hands of a few players.

More often, disruption comes from regulation—for better or, often, for worse. Current attempts at regulation focus on the actions and behavior of some actors—chiefly Facebook—while creating unintended consequences for the internet, particularly fragmentation, or the splintering of the global internet into one that hews more closely to territorial borders.

It’s understandable that both the public and regulators might think that regulating “the internet” means focusing on the biggest players. Much of this has to do with the fact that users are often exposed to various kinds of illegal behavior and content through some of the internet’s most popular services. Fix misinformation, or extremism, or ideological silos, or security on Facebook, the thinking goes, and you’ve fixed it on the internet as a whole.

But this is an unhelpful and misguided narrative. First, the internet is not a monolith, so treating it as if it were one simply will not work. Second, many of the issues regulators are trying to address are not internet problems; they are societal ones. Terrorism, child abuse, and mis- and disinformation are not offspring of the internet; they existed before the internet, and they will continue to exist after it, because they are ingrained in human societies. Yet they are treated as if they were exclusively internet problems. Third, and most important, regulators should stop thinking of the internet as Facebook and treating it as such. The internet regulatory landscape is a mixed bag of different issues, and Facebook’s involvement in all of them, direct or indirect, adds to the current complexity. Content moderation, privacy, intermediary liability, competition, encryption—these are all broader issues related to the internet, not just to Facebook. Yet the pattern that has emerged is to treat them as Facebook issues: Instead of being addressed in ways appropriate for the entire internet ecosystem, they are addressed through the Facebook lens. This has been quite accurately characterized as “Facebook Derangement Syndrome.”

The global regulatory agenda is replete with such examples. In the United Kingdom, the Online Safety legislation seeks to ban end-to-end encryption because of Facebook’s plan to introduce it as a default setting in its Messenger service. On the other side of the Commonwealth, Australia recently introduced a media bargaining code mainly targeting Facebook; the company famously “left” the country before negotiating a new agreement. Similarly, in what seems to be a coordinated effort, Canada has vowed to work with Australia in an attempt to impose regulatory restrictions on Facebook.

And this trend is not limited to the Commonwealth.

India’s new intermediary guidelines aim at tightening the regulatory grip on Facebook and its subsidiary WhatsApp, while Brazil’s fake news bill, which was approved by the Senate, focuses on content moderation on Facebook and traceability on WhatsApp. In France, there have been conversations about introducing “new rules” for Facebook, while Germany’s Network Enforcement Law—NetzDG—was drafted with the primary aim of taming Facebook. Finally, in the United States, the Trump administration issued an unsuccessful executive order that aimed to regulate Facebook for bias.

This approach of limiting regulation to Facebook is not entirely unusual. It reflects the principal-agent conundrum that, over the years, has allowed companies like Facebook to propose policies and deploy tools that can shape the way regulation gets enforced. The principal-agent problem is predominantly characterized by conflicts of interest and moral hazard. Because of information asymmetries, the agent (here, Facebook) holds the bargaining power over the principal (the regulator), and this creates a few unknowns: The principal is in no position to know the information the agent holds, and even when she does, she cannot be certain that the agent is acting in her best interest. So the principal ends up focusing squarely on the agent, disregarding any peripheral issues that might be significant.

The principal-agent problem may help explain why governments appear ready to introduce regulation targeting Facebook; it does not, however, explain why, in the process of doing so, the main losers are the internet and its users.

Over the past couple of years, Facebook has said, “We support regulation,” and “we want updated internet regulations to set clear guidelines for addressing today’s toughest challenges.” These statements would be significant if they did not come across as self-serving. At this stage, regulation is inescapable, and Facebook knows it—as does the rest of big tech. In an effort to adapt to this new reality, companies take advantage of their dominant position to drive regulatory processes, often at the expense of regulation itself.

In this context, the question we should be asking is not whether regulation is appropriate but what the real implications of regulating in such a manner are. There is already an argument that focusing on a few big players harms the health of innovation and the ability of newcomers to compete. And then there is the internet. The internet’s global reach is one of its main strengths; it is a feature, not a bug. Among other things, it allows supply chains to be maintained all over the world; it allows people to communicate; it lowers costs; and it makes information sharing easier, all the while helping to address societal issues like poverty and climate change.

To this end, the attempt to regulate based on one company—or a handful of them—can jeopardize this very important feature of the internet. It can create fragmentation, preventing data from flowing across borders and networks from interconnecting, and the impact of that can be very real. It can impose limits on the way information and data are shared and the way networks interoperate. These are significant trade-offs, and they must be part of any regulatory process.

So where do we go from here?

For sure, the answer cannot be to stop regulating. We must accept, however, that the current approach often generates unintended consequences that only superficially affect those who must be regulated.

In this light, a possible way forward is to experiment with regulation. Experimental regulation is a relatively underused approach, yet it is flexible enough to accommodate dynamic markets like the internet. Originally associated with the work of John Dewey, the idea is premised on the notion that, in policymaking, the way we approach theories and strategies of justice depends on “the experience of their pursuit; it is these changes that then allow us to consider how best to achieve our objectives.” The advantage of this thinking is that it treats unintended consequences as an opportunity to better define appropriate regulatory frameworks and how to achieve the desired goals.

Internet regulation does not experiment enough, and when it does, it often has the wrong focus. In Australia, for instance, the effort to ensure robust journalism in an age of disinformation on social media platforms led to a “link tax” that undermines the architecture, history, and economics of the internet. This is partly due to the role big tech companies play in the regulatory process. One of the first things one can observe in internet regulation is the process some actors deploy: At the outset, they operate in favor of sustaining existing policies and bureaucracies, on the thinking that longevity brings legitimacy; as a result, the policy becomes its own cause. Once this strategy is embedded in the process, these powerful forces move toward pushing their own regulatory agenda.

It is for this reason that flexible regulatory systems hold a certain appeal: They allow different units to experiment with different approaches and make room for assessments that separate the relevant from the irrelevant and from preexisting rules. Although experimentation neither offers a drastic approach nor aims to replace traditional routes of regulation, it can limit the risks of politicization, as politics becomes more context-focused.

One of the first things we need is an internet impact assessment that looks at the different parts of the internet’s infrastructure and the effect that regulation may have on them. It is no longer just about regulating a few actors; it is about protecting the global infrastructure that we all depend on daily.

The internet has a Facebook problem, but the internet is not Facebook.

This article represents the views of the author and not those of his employer, the Internet Society. Facebook is an organization member of the Internet Society. 

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.