Thinking About Regulating the Online Space? Focus on Decentralizing Power

Photo credit: Frerk Meyer. CC-BY-SA

Originally published as a series in Open Democracy (1, 2, 3). Edited for clarity.

In this series I argue that the internet is the political battleground of our era: the space where powerful actors fight to control the flow of information. Unless we act now, our chances of achieving a fairer society will become slim.

Each of the three parts is self-sufficient:

Part I — Who’s to blame? Sketching the boundaries of “the internet problem”.
Part II — How upcoming tech will increase the power of intermediaries.
Part III — A framework around which to coordinate efforts towards re-decentralization.


Part I — Who’s to blame? The internet on the defendant’s bench

The internet used to be seen as a catalyst of positive social change. Such claims have become rare, at least in Europe and the US.

Why were the Arab Spring, Occupy Wall Street, and the Spanish 15M movement hailed as the internet delivering on its promise, and the recent US election and Brexit seen as the internet delivering nightmares?

Is it just about who was leveraging it? What, if anything, has changed?

This piece seeks to provide a rough compass for those trying to understand the underlying causes of much of what is problematic with the internet and the web today.

I argue that paying attention to who shapes internet traffic (and how) is crucial. It will allow us to understand the connection between issues often framed as unrelated (e.g. net neutrality and misinformation), as well as to anticipate issues that have yet to emerge.

The Architecture

The inter-net (network of networks) offered a revolutionary way to transport information. Unlike the telephone, the internet never required a central operating room with dozens of people connecting cables to enable a conversation.

Fig. 1 — A Switchboard (Via AP Photo)

On the internet, each node can help route the components of a message to its destination. None of the nodes is central. None is essential. There is no single point of failure.

If the UCLA node crashed in 1971, Stanford could still send a message to the University of California at Santa Barbara (UCSB) by routing through NASA’s Ames Research Center. None of the nodes played a role equivalent to that of the telephone switchboard.

Fig. 2 ARPANET 1971 — Via Red Hat Linux Test Page

The more nodes that join the network, the more robust it becomes, and the greater the chances of getting a message from one point to another. That, in short, is why a decentralized system was so appealing: unlike a centralized system, it becomes more resilient as it scales.
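To make this concrete, below is a minimal sketch of how a decentralized network routes around failure. The topology is illustrative (loosely inspired by the 1971 example above, not the actual ARPANET map):

```python
from collections import deque

# Illustrative topology, loosely inspired by the 1971 example above.
LINKS = {
    "UCLA": {"Stanford", "UCSB", "Ames"},
    "Stanford": {"UCLA", "Ames"},
    "UCSB": {"UCLA", "Ames"},
    "Ames": {"UCLA", "Stanford", "UCSB"},
}

def find_route(src, dst, failed=frozenset()):
    """Breadth-first search for any path from src to dst that avoids failed nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in LINKS[path[-1]] - seen - set(failed):
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # no surviving route: the message cannot be delivered

# With the UCLA node down, Stanford still reaches UCSB via NASA Ames.
print(find_route("Stanford", "UCSB", failed={"UCLA"}))  # ['Stanford', 'Ames', 'UCSB']
```

The same search on a star-shaped (centralized) topology fails the moment the hub goes down; here, every extra node and link adds another way around.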

The Web

These architectural principles inspired and were upheld by Tim Berners-Lee when he designed the web – the content layer that sits on top of the internet. Berners-Lee gave away the property rights over his invention, allowing everyone to use it; today anyone can set up a website without asking for permission.

As part of the Web’s features, Berners-Lee chose to implement the hyperlink in such a way that any website could link to any other. And so, like the nodes on the internet, all websites were created equal, as were all links.

This meant that, on top of the architectural decentralization provided by the internet infrastructure (no node is essential to get a message across the network), Berners-Lee offered organizational decentralization for decisions regarding the content that would be produced and consumed: no single person or institution has the power to define what gets published, and it’s up to those connected to the internet to point links at the content they want people to read. Virality can emerge as a consequence of many active choices.

In short, both the internet and the web were purposefully designed as non-hierarchical, decentralized systems. There was no choke point.

The power of decentralization

Under the assumption that information is power, the decentralized system triggered great expectations regarding its potential for social justice.

When systemically excluded groups came online they began voicing the thoughts that had previously been silenced or ignored by traditional brokers of communication, such as governments and traditional mass media. These groups started telling their personal stories. Patterns became visible. Problems typically framed as isolated cases became more openly perceived and discussed as systemic problems (e.g. #MeToo and #BlackLivesMatter).

Fig. 3. Network architecture (adapted from Baran, 1962)

People discuss their past and present, define and refine their identity, and condense their individual hopes and dreams. A collective future worth fighting for is born.

Things don’t stop there! The web reduced the costs of finding dispersed like-minded allies and coordinating collective action. Dreams can materialize into a future. And thus, over the past decade, hundreds of grassroots movements have sprung up…

It was internet spring.

Everything and anything seemed possible.

Today that feeling seems to have faded. Why?

Pointing the finger at a single cause is clickbait. It leads us astray. Things are a bit more complicated, so bear with me…

Below is a sketch of the dimensions that intersect in what is often lazily presented as The Internet Problem.

Isolating the effect of each is a difficult if not impossible task. Yet conceptualizing them as separate dimensions helps us understand the need to design targeted strategies for each problem.

→ Social problems acquire visibility because of the web: The web often mirrors social problems such as exclusion and inequality. As internet penetration grows and more people on the margins of society get online, certain tensions and contradictions that are silenced or ignored in physical space become more visible. Gentrification might have offered blissful ignorance to those living outside the ghetto walls, but the internet is collapsing that physical barrier. There is, therefore, a set of social problems that are not created by the internet and the web, but made more visible by them. Inequality, in all its forms, requires political action aimed at ensuring a fair distribution of the benefits of being a member of society. This goes far beyond what those of us working in tech policy can achieve. It requires a thorough debate on taxation and public spending, among other issues.

→ Social problems become quantifiable because of the web: The internet has enabled the creation of large and easily accessible structured data about our societies. These data, in turn, enabled an explosion of quantitative research that sheds light on problems and shows statistical associations that are compelling, yet often poorly explained. Given the lack of comparable data from the pre-internet era, some of these studies are actually incapable of providing a baseline (or counterfactual) that can show the extent to which the identified problems (e.g. fake news, [online] violence) were in any way caused or made worse by the internet, and not merely mirrored by it. It often seems we are eager to kill the messenger for mentioning the problems we face. This in turn often causes digital platforms to limit the amount of information they make available to independent researchers. Those with access to key data and those of us shaping public opinion need to make an effort to frame our findings appropriately. And in a world increasingly governed by data, governments need to make sure the population becomes proficient in data analysis. It’s not just about ensuring future generations are prepared for the jobs of the future; it’s about ensuring they will be empowered to engage effectively in the democratic debates of the present and future.

→ Social problems caused by internet platforms: These include design fails that might generate harm to users at large; design fails that specifically affect certain minority or otherwise excluded groups; and the embedding of a bad incentive structure that triggers negative consequences either directly or as a result of unforeseen and emergent properties of the ecosystem it enabled (e.g. click-based revenue models spurring clickbait and fake news). Most public attention has been focused on this area. The narrow focus has meant that issues that overlap with and underlie these are not discussed. It is important to stress that online platforms have often underplayed their impact, responsibility, and capacity to deal with the harm they generate. Addressing this requires setting up a governance structure that is independent of and responsive to the community it serves. One that can ensure those who are reckless are held accountable, and that members of the online community are not treated as mere assets to be milked, but as human beings whose rights deserve respect. Nevertheless, this set of articles will not try to address the process of institution building. These articles will focus on an underlying phenomenon that explains why defective designs have become so problematic: centralization.

→ Internet platform issues are magnified due to centralization: The internet was designed as a decentralized system. There were no central brokers deciding who could say what, or what information could or should travel through it. It was okay for things to go wrong. Releasing a product or service quickly, identifying problems based on the experience of a small group of early adopters, and iterating became a mantra among programmers and entrepreneurs. In a decentralized system problems are local and can be neutralized quickly. Therefore — overall — we were happy with the risk-taking ethos of internet entrepreneurs. It enabled experimentation and innovation, while the decentralized architecture kept the harm these risks posed to society low.

The context has changed. Over the past five years, Google and Facebook alone have gone from managing less than 50% of the traffic to top web publishers to 75% today. As the process of centralization advances over our most important medium of communication, we are increasingly seeing that design fails, the gaming of a platform’s rules, and so on, can lead to widespread and catastrophic consequences.

Today, many express concerns about the impact the internet and the web have on politics. Take so-called “fake news”, for example. It has always existed. Yet, with centralization, a piece of information posted on a platform with 2 billion users can seamlessly reach the minds of a whole nation. Data breaches? Having data on billions of people in a single system means the prize for an effective hack is sky-high.

As a handful of companies take over the role of data brokers, we are slowly building the single point of failure the decentralized design sought to avoid.

Don’t put all your eggs in one basket, they say. Yet today we are moving towards subjecting all brains to one algorithm. What could possibly go wrong?

The next two parts will narrow in on how the centralization process takes place, and how we should work towards re-decentralizing the web.


Part II — The present and future of a centralized internet

In the previous part I argued that the growing concerns regarding the internet have many causes: from underlying social problems to bad science; from the bad incentive structures put in place by big platforms to the ongoing process of centralization that magnifies the impact of any problem that might arise.

I defined centralization broadly as the process through which intermediaries reshape our internet, increasing their gate-keeping power over the information that circulates through it.

I argued that centralization is creating the single point of failure that the original design sought to avoid, and that this should be the key concern of policy-makers.

The culture of “fail fast and iterate” that boosted innovation over the past decades has become highly problematic. In a centralized system problems are no longer localized and easy to neutralize. In a centralized system failure spreads too quickly, and can cause a lot of harm.

Constant evolution

How does centralization take place? The web is always and only becoming. It’s in constant evolution. Each link that is made, each server that is set up is part of this process.

But some actors have bigger wrenches than others. There are gatekeepers at the network, device, application, and storage levels. They have the capacity to influence the decisions of millions of people who produce and consume content, and thus how the entangled web evolves and how people understand the world they live in.

These brokers are not merely replacing the traditional media in their role as information brokers. Their power is qualitatively superior.

Whereas traditional media managed a one-way stream of information:

old media —> consumer

New information brokers also harvest a LOT of real-time data about the information recipients, creating a two-way stream of information:

new media <—> user

New media can leverage the collected data to smartly nudge users towards one piece of content instead of another, for example.

Intermediation continues to grow in breadth and depth, fueling the process of web centralization

Intermediation is not in itself a bad thing. Search engines, for example, have become a key ally in enabling the web to achieve scale by helping users find relevant information in the ever-growing web of content. But it can also have problematic effects.

There are several ways in which intermediation can take place.

It can be structurally embedded, such as through algorithms that automatically sort information on behalf of the user.

Intermediation can also operate within the previously mentioned structure in somewhat organic ways, such as when users unknowingly interact with networks of bots (automated accounts) controlled by a single user or group of users, or with armies of trolls paid to disseminate specific information or disrupt dialogue. In these cases, the bots and trolls act as intermediaries for whoever owns or created them.
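As a tiny illustration of the structurally embedded kind, consider a hypothetical feed-ranking function (the post data and both policies below are invented): a one-line change in the sort key silently changes what users see first.

```python
posts = [
    {"text": "local council meeting tonight", "clicks": 40,  "paid": 0},
    {"text": "celebrity gossip!!!",           "clicks": 900, "paid": 0},
    {"text": "sponsored: buy our gadget",     "clicks": 120, "paid": 50},
]

# Two hypothetical ranking policies; the user only ever sees the output,
# never the sort key that produced it.
by_engagement = sorted(posts, key=lambda p: p["clicks"], reverse=True)
by_revenue = sorted(posts, key=lambda p: (p["paid"], p["clicks"]), reverse=True)

print(by_engagement[0]["text"])  # celebrity gossip!!!
print(by_revenue[0]["text"])     # sponsored: buy our gadget
```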

But how did we get to this point where centralization is giving the internet a bad name?

Intermediation, centralization and inequality

Fig. 4 The Mechanics of centralization (CC-BY Juan Ortiz Freuler)

Part of it is an “organic” cycle whereby the more central a player is, the more personal data it can collect, enabling it to further optimize its intermediation services. This optimization and personalization can in turn make services more attractive to users, pushing competitors out of the market, and thus “organically” reducing the range of services to which users can migrate. This is an example of a rich-get-richer dynamic.
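A minimal simulation can illustrate the rich-get-richer dynamic. This is a toy model (a form of preferential attachment), not a description of any real market: each new user simply picks a platform with probability proportional to its current size.

```python
import random

def simulate(platforms=5, users=100_000, seed=42):
    """Each new user joins a platform with probability proportional to its
    current user count (plus one, so empty platforms still stand a chance)."""
    random.seed(seed)
    counts = [0] * platforms
    for _ in range(users):
        weights = [c + 1 for c in counts]
        winner = random.choices(range(platforms), weights=weights)[0]
        counts[winner] += 1
    return counts

print(sorted(simulate(), reverse=True))  # final shares are typically very uneven
```

Run it with different seeds: the platforms start out identical, yet early random luck compounds into lasting dominance.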

The other key dynamic occurs beyond the set of existing rules, and is one I would call outright illegitimate. That is, intermediaries often leverage their position as a tool to [illegitimately] prioritize their own services, allowing them to further increase their market share. Their success in the intermediation market should not allow them to force their way to success in another market. Amazon is a perfect example of this dynamic: it relies on its position as owner of the marketplace to study buyer behavior and define the products it could sell directly. It then relies on its algorithms and design to get a competitive edge over rivals.


The perils of centralization: a look into the future

New technological developments — such as smart assistants, augmented and virtual reality — will likely increase the breadth and depth of intermediation over the next decade. This, in turn, threatens to accelerate and further entrench the process of centralization.

Centralization and search

Whereas originally different users would go to different websites looking for links to other websites, we quickly shifted to search engines that presented users with a list of websites of interest. The current trend suggests smart assistants will take over this role, skipping that step and providing the user with specific content or services, without offering the bigger picture. Winner takes all.

With AR and VR the user is placed in an even more passive role and might be “force-fed” information in more seamless ways than through today’s online advertising. Whoever operates the code manages the process of blending the curated digital world with the physical environment in which our species evolved over millions of years. No contours on your screen. No cover on your book to remind you of the distinction between worlds.

Fig. 5 The evolution of information retrieval (CC-BY Juan Ortiz Freuler)

Developments in technologies such as AR and VR are capable of further isolating people into curated silo-worlds, where information flows are managed by the owners of these algorithms.

This would reduce the probability of people facing random or unanticipated encounters with information, such as a protest on the streets. These unmediated encounters are often key to the development of empathy between people, and the fuel upon which social movements develop.

Further isolating groups would erode the set of common experiences upon which trust within society is built. This trust is key for the coordination of big projects, and for ensuring a fair distribution of the benefits of such coordination.

Centralization and person-to-person communication

The internet has not merely reduced the cost of person-to-person communication; it has offered a qualitative leap in communications. Whereas the newspaper, radio, and TV enabled one-to-many communications, and the telephone facilitated one-to-one communications, the internet has facilitated group communications, often referred to as many-to-many communications.

This is what we observe in places like Twitter and chat rooms, where thousands if not millions of people interact in real time. The deployment of effective many-to-many communications often relies on curatorial algorithms to help people find relevant conversations or groups. This means that some of the challenges faced in the realm of search (previous section) also affect person-to-person communications.

Yet centralization also poses a distinct set of risks for these communications. Among them, risks to the integrity of signifiers (representations of meaning, such as symbols or gestures), and their signified (meaning).

Intermediation in person-to-person communications

A. The intermediary’s responsibility to respect the integrity of a message

Fig. 7 — A model of communications (CC-BY @Juanof9)

When texting with a new lover it is often the case that a word or emoji is misinterpreted. This often leads to an unnecessary quarrel, and we need to meet up physically to clear things up. Oh, no! That’s not what I meant… What I wanted to say is…

Conveying meaning is not simple, and we often require a new medium or set of symbols to explain and correct what went wrong.

Now imagine that someone could tamper with your messages, and you might not have that physical space to fix things… And that it’s not your lover you are communicating with, but the electorate or a group of protesters.

The internet facilitates engagement by bringing people closer together. The apparent collapse of the physical space between users is achieved by slashing the time between the moment a message is sent and the moment it is received, until it’s close to real time. For millions of years the only type of real-time communication our species had involved physical presence. Thus real-time digital communication makes us feel physically close. This illusion often makes us forget that there is physical infrastructure between us, and that someone manages it. A packet carrying the message is transported through the hands of several actors before it reaches its destination. It is fundamental that all parties managing these channels respect the integrity of the message. But we are mostly unaware of the existence of these managers.
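To make the integrity requirement concrete, here is a minimal sketch of how tampering by an intermediary can be detected, assuming sender and receiver share a secret key (a hypothetical setup; deployed systems typically rely on TLS or public-key signatures):

```python
import hmac, hashlib

SHARED_SECRET = b"known-only-to-sender-and-receiver"  # hypothetical shared key

def seal(message: bytes) -> bytes:
    """Sender: prepend an authentication tag computed over the message."""
    tag = hmac.new(SHARED_SECRET, message, hashlib.sha256).digest()
    return tag + message

def unseal(sealed: bytes) -> bytes:
    """Receiver: recompute the tag; any in-transit modification is detected."""
    tag, message = sealed[:32], sealed[32:]
    expected = hmac.new(SHARED_SECRET, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message was altered in transit")
    return message

packet = seal(b"meet at the square at noon")
print(unseal(packet))             # b'meet at the square at noon'

tampered = packet[:-4] + b"dusk"  # an intermediary rewrites the time
try:
    unseal(tampered)
except ValueError as err:
    print(err)                    # message was altered in transit
```

Note the limit of this remedy: it detects tampering, but if the same intermediary controls the only available channel, the parties may have no way to resend the message or even to complain.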

Centralization, which leaves communication channels under the control of a handful of actors, could effectively limit parties from exchanging certain signifiers (symbols, such as words).

If virtual and augmented reality are the future of communications, then we should bear in mind that not only spoken or written language will be sent over communication channels. These communications will include a wide array of signals for which we still have poorly defined signifiers. This includes body gestures and — potentially — other senses, such as smell and taste. To get an idea of the complexity of the task ahead of us, think about the gap between experiencing a movie through descriptive sound captioning and the standard hearing experience of the same content.

Fig. 8 Screenshot: George Costanza, Seinfeld.

In the past, the debate was focused on the legitimacy of the frames traditional intermediaries – such as newspapers – applied to political events and discourse. For example, how the old media shift narratives depending on who the victims and perpetrators are (rather than on the acts themselves), shaping their audience’s appetite for certain policies.

With new intermediaries come new challenges. Our new mediums enable person-to-person mass communication. By reducing (or eliminating) the availability of alternative mediums through which parties can communicate, centralization could limit the sender’s ability to double-check with the receiver(s) whether or not a message’s signifiers were correctly delivered.

Distributed archive systems, where many players simultaneously store the same content independently and check for consistency across all copies (such as those currently being developed based on Bitcoin’s blockchain model), offer a glimmer of hope in this battle. A blockchain could protect the message’s integrity from ex-post tampering. Yet it must be noted that the phase between the message’s production and its transcription onto a distributed ledger is subject to some of the risks present in our current model.
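Here is a minimal sketch of the hash-chaining idea behind such ledgers, leaving out everything real systems add on top (consensus, signatures, replication across independent players):

```python
import hashlib, json

def entry_hash(content, prev_hash):
    payload = json.dumps({"content": content, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_entry(chain, content):
    """Each record commits to its content AND to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"content": content, "prev": prev_hash,
                  "hash": entry_hash(content, prev_hash)})

def verify(chain):
    """Any independent copy of the archive can re-run this consistency check."""
    prev_hash = "0" * 64
    for record in chain:
        if record["prev"] != prev_hash or record["hash"] != entry_hash(record["content"], prev_hash):
            return False
        prev_hash = record["hash"]
    return True

chain = []
add_entry(chain, "original statement")
add_entry(chain, "follow-up statement")
print(verify(chain))                        # True

chain[0]["content"] = "doctored statement"  # ex-post tampering
print(verify(chain))                        # False: honest copies now disagree
```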

B. The effect of centralization on the fluidity of the decoding process

A second issue affecting person-to-person communication is the process through which the relationship between signifier (symbol) and signified (meaning) comes to be (point B on the diagram): the decoding process.

The process of information consumption is not automatic or passive. The receiver has a role to play. The word cat triggers a different set of reactions in a cat owner and a person allergic to cats.

The receiver constructs meaning by relying on her own experiences as well as recalling instances in which members of the community managed to coordinate a conversation by relying on [what seemed like] an agreed-upon meaning of a concept. Through this process individuals and groups play an active part in the construction of reality.

This active interpretation enables language to be fluid: the relationship between signifier (symbol, such as a word) and signified (meaning) can shift over time. Language, as a system, is open and somewhat decentralized. It requires individuals to coordinate around meanings. No single actor can effectively impose a meaning. We see this through slang, for example, where marginalized groups, despite their exclusion from formal spaces of power, coin terms to more accurately share their thoughts and feelings.

This active decoding process suggests that a reflective capacity comes embedded within language. The noisiness of the process through which we interpret and discuss our world provides the flexibility necessary for critical social changes to become possible. New meaning can be constructed.

With cat the process is quite straightforward. Now shift from cat to more abstract concepts — like justice and war, or Muslim and Latino — and things get trickier. Since people don’t necessarily deal with Muslims or Latinos directly, third parties — such as the mass media and the board of education — exercise greater control over their meaning.

Much like the elites defining terms in a dictionary, mass media often takes over the process of rooting the signifiers onto a broader set of signifiers in order to construct meaning.

Reiterated associations between Latinx and negative frames can, over time, lead to the triggering of negative mental responses to the mere mention of Latinx, even when the negative frame itself is not present. If so, the term has been effectively rooted onto the negative frame. From that moment on, the negativity is part of its meaning.

A centralized web of content, where the few define which frames should be applied and distributed, becomes a liability — the opposite of the open space the web was meant to create. Many of us still believe that by distributing the power to construct meaning —and therefore the way we understand our identity, our relationships, and the societies we live in – the web has huge potential to make the world a more equal and fair place.

Let’s consider how the process of centralization might play out 20 years from now…

Many resources are currently devoted to the development of brain-computer interfaces. Brain-computer interfaces imply building a bridge across the air gap that currently exists between people and their devices. That is, bypassing our five senses.

Eliminating that air gap might limit the receiver’s capacity to diverge in the way she processes the signifier: the computer would arguably take over the decoding role, and with it the subject’s ability to decode and reconstruct signifiers — through purpose or mistake — into novel and potentially transformative meanings. The evolution of thought itself could become subject to the whims of whoever controls the tech. Whereas our natural language is an open and somewhat decentralized system, code is rigid, like numbers. Huge power thus lies in the hands of those capable of defining meaning.

Every step towards the roll out of these technologies strengthens incentives for intermediaries to ensure they can operate unchecked.

Too much power…

Those in control of information flows are gaining too much control over what conversations take place and what meanings can be constructed. As the concentration of power increases, the “mistakes” of these power players trigger harms of a breadth previously unknown. Public scrutiny is on the rise. Yet the public seems to react with cynicism, distrust, and criticism to whatever fix big corporations propose. This suggests public criticism is targeted not at the solutions being proposed, but at the actors putting them forward. There seems to be a feeling that these corporations lack the legitimacy to exercise the power they have managed to amass, regardless of how they choose to exercise it.

How to move forward? The next part sketches a plan…


Part III — Focus on Re-decentralizing Power!

In March 2018 Cambridge Analytica was on the cover of every newspaper. The company had managed to get hold of millions of data points on Facebook users. Most reporters focused on the meaning of consent in the digital age and Facebook’s inability to enforce it, and in doing so missed the big picture. The scale of the operation was only possible because Facebook has too much data about too many people. Cambridge Analytica is a cautionary tale about the risks of centralizing data and control over the flows of information. The internet and the web were designed to decentralize data and power. Cambridge Analytica’s use of Facebook is an example of what a system with a single point of failure leads to.

This piece strives to show the bigger picture: how big players — exerting power over internet access, device, application, and data markets — have become a liability. The Cambridge Analytica scandal is but a drop of water trickling down the visible tip of an iceberg…

Many claim the internet is broken. These claims are often examples of misdirected anger. What is broken is the social contract. Inequality is rising, and the tensions associated with injustice are spilling onto the online space. Since the internet facilitates the collection of structured data, it allows us to measure and reveal the underlying social tensions as never before. Media and un-savvy researchers too often choose a frame that places the blame on the messenger, instead of talking about the unhealthy social relationships underlying what their investigations reveal.

The internet, with its capacity to facilitate communication, aggregate opinion, and coordinate people by the thousands in real time, is arguably the most powerful tool at our disposal to solve the social issues at hand. The internet has made it easier for women to coordinate around the #MeToo movement, and enabled the growth of Black Lives Matter, to mention two recent examples. Rape, misogyny and racially targeted police violence are not new issues, but the internet provided a platform for these covered-up conversations to scale.

From the development of written language to the printing press; from the telegraph to the web, accessing and sharing knowledge has fueled humankind’s progress and development. Much of what was considered revolutionary only decades ago is mistakenly taken for granted today.

The problem with misdirected anger is that it leads to misdirected policies, which could undermine the internet’s capacity to catalyze much-needed social change. We need to ensure that when we think about internet policy and regulation we think about it through a political lens:

If we accept that the internet has become a key tool for politics in this broad sense of the term, we can see the internet is indeed facing a problem. A problem that is often neglected for being less tangible, but that underlies much of what concerns the public. A problem that not only reflects but can reinforce current social problems, and frustrate the goal of ensuring meaningful political participation: centralization.

Centralization and decentralization

I use the term centralization broadly, to refer to the process through which intermediaries have reshaped the internet and the web, placing themselves as gatekeepers of information. The “move fast and break things” ethos, which unleashed bold innovation a decade ago, has become deeply problematic. Each ‘mistake’ on the centralized internet of today causes harm to thousands if not millions.

The power of intermediaries is far from having reached its peak. Technological trends like the internet of things, augmented reality, and virtual reality are set to broaden and deepen these powers. Furthermore, the companies leading this sector used to be small startups. Now they have the money to buy out what used to be a crowd of free coders; people who understand the medium and could have pushed back, or offered users a competing medium.

Decentralization is about creating barricades to this process, so that power remains distributed across the network. Decentralization is about ensuring that every message and every idea will be granted a fair shot in the digital public square.

The battle for the net takes place today and every day. There are no straightforward solutions. Every turn requires difficult decisions. It is therefore time to involve as many people as possible. We need to broaden the legitimacy of the request to decentralize, and the steps that are taken towards that end.

Unsurprisingly, the powerful intermediaries use their money to influence politicians, academia, and the private sector. Through this influence, these intermediaries have managed to make certain aspects of this debate taboo. Challenge their hegemonic practices and you might be labelled anti-tech, anti-progress or anti-capitalist.

If we hope to protect the citizens of tomorrow from expected and unexpected scenarios we need to get creative and bold today. And we need the mass of netizens on board. We need open and robust debates. We cannot afford anything less than this. Too much is at stake.

If the reason for much of the misdirected public anger is that the process of centralization is less tangible than its symptoms, perhaps a first step is to make this underlying layer more visible and part of public discourse.

Technology is increasingly being developed by private companies that have a bottom line. In an attempt to sell their services to a broad audience they have relied on metaphors (e.g. “the Cloud”) that over-simplify the internet’s architecture and obscure the key political battleground of this century. Unless we shed some light on the internet’s inner workings, the intermediaries will continue to have the upper hand in this battle.

The Neutrality Pyramid

We need to make the physical existence of intermediaries, and their power, visible to the broader public. The pyramid shows some of the key layers at which gate-keeping is being exercised today. It highlights the types of actors that, at each layer, can affect people’s ability to share ideas and produce meaningful political change tomorrow.

The Neutrality Pyramid builds on the concept of net neutrality, which the general public has incorporated and actively defended in the past. It argues for extending the no blocking, no throttling and no paid prioritization rules applied to network providers to actors managing other layers.

To explain the economic advantages of Net Neutrality, advocates and even the judiciary have talked about the “virtuous circle of innovation” that results from keeping the content and connectivity layers separate.

Fig. 9 Net Neutrality’s virtuous circle (CC — BY @Juanof9)

This framework was put to the test before the courts by Verizon (2014) as it battled against net neutrality. The DC Circuit Court held that it “is reasonable and supported by substantial evidence”.

The Neutrality Pyramid assumes net neutrality is a necessary condition for users to be able to navigate the web freely, but is not sufficient. The Neutrality Pyramid therefore argues that the idea of a “virtuous circle” can and should be applied to other layers. We need to guarantee enforceable rules against discrimination at each level if we are going to ensure the web remains open.

The choice of a pyramidal design seeks to signal that, from a user perspective, different actors exercise different types of control over our ability to deliver a message.

If an internet service provider (ISP) decides to drop data packets containing certain keywords, then it doesn’t matter what device we have or what platform we rely on: the message will not be delivered. If a device does not allow the use of certain apps, then certain tools may become unavailable, and so on. The lower an actor is placed on the pyramid, the greater the risk it poses to the open internet and the open web as tools for social change.
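A toy model makes this layered veto explicit. The layer names and rules below are hypothetical; the point is simply that a block at a lower layer cannot be undone by anything above it:

```python
# Hypothetical gatekeeping rules, one per layer, checked from the base up.
LAYERS = [
    ("ISP",      lambda msg: "blocked-keyword" not in msg),
    ("Device",   lambda msg: True),             # this device happens not to filter
    ("Platform", lambda msg: len(msg) <= 280),  # a platform-level rule
]

def deliver(msg):
    for name, allows in LAYERS:
        if not allows(msg):
            return f"blocked at the {name} layer"
    return "delivered"

print(deliver("meet at the square"))                # delivered
print(deliver("march route: blocked-keyword ..."))  # blocked at the ISP layer
```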

Tailored and targeted approaches for each level of the pyramid might be required.

Taking action

Fig 10 — The Neutrality Pyramid (CC-BY @juanof9)
  1. Seeing the pyramid: As users and responsible consumers we need to be aware of exactly who each of these intermediaries is and how they manage their role. If they do not respect our rights, we should shift to more decent providers or services.
  2. Observing behaviors within each layer: As a community we need to promote enforceable rules to ensure that each level of the pyramid will be kept from abusing its intermediary or market powers to stifle competition within its layer.
  3. Observing dynamics between layers: As a community we need to ensure each intermediary stays within its segment of the pyramid, ruling out any further vertical integration, and promoting the re-fragmentation of companies that have integrated across these layers over the past decades. Given the difficulty of monitoring and reacting to cases of discrimination before they destroy a market or inhibit a conversation, fragmenting these companies seems like a reasonable way of weakening or eliminating their incentives to breach the neutrality rules.

Given the closed nature of these companies and the lack of information about their operations, it would be useful to set up public committees tasked with assessing the economic and political risks and impacts the centralization process has had on the flow of ideas, innovation and competition. Public debate would certainly benefit from a more detailed map of this space.

The battle isn’t new. A handful of avant-garde activists and innovators have been at it for years. But it is ultimately up to us (the mass of citizens, users, and consumers) to signal to political representatives and markets alike that we want change. Below is a sketch of how the battle is playing out around the globe.

– Net Neutrality

Regulators in India, the EU, and elsewhere have held their ground in spite of the pressure exerted by internet service providers (ISPs). Dozens of countries now have enforceable rules that prohibit ISPs from discriminating between the different pieces of content that travel through the network.

What’s the issue? The network is the base of the pyramid: failure to ensure the neutrality of the net would eventually collapse the rest of the layers. In the US, where one might expect fierce competition between many players, most people have their choice restricted to one of two ISPs. Furthermore, as net neutrality rules have been weakened in the US, the ISPs have launched a bidding war for content companies (e.g. Comcast and Fox, AT&T and Time Warner), which creates further incentives for them to breach the net neutrality principles and favor their own content over that of competitors.

– Device neutrality

A Member of the Italian Parliament, Stefano Quintarelli, has been promoting a bill that would grant users the right to use any software they like, including software from sources other than the official — vertically integrated — store. The French telecom regulator and the EU Board of Regulators (BEREC) have made recommendations in favor of protecting device neutrality, and even the South Korean regulator established non-binding recommendations for pre-installed apps to be removable.

In Russia, Google was fined for requiring its apps to be pre-installed on Android devices.

What’s the issue? Currently most internet traffic is mobile. Device producers often limit the operating systems that can run on their devices. Furthermore, to make navigation easier, closed environments known as apps have flourished. By requiring apps to be installed through app stores, under the control of those managing the operating systems, a choke point was created. Since most people rely on Apple’s or Google’s operating systems for their devices, these companies have gained huge leverage over which apps are used.

– Platform neutrality

Last year the EU fined Google for unfair placement of its own comparison-shopping service among search results. The EU argued Google purposefully and illegitimately down-ranked competing services while granting its own service prominent placement. India has recently followed this decision, fining Google for the same behavior.

What’s the issue? Estimates place Google in control of over 85% of the search market. Facebook, which over the past years purchased Instagram and WhatsApp, is considered to have similar control over the market for social media advertising. By establishing themselves as gateways to content, Google, Facebook, and others like WeChat have created central chokepoints through which they can influence the ideas and services made available to people.

– Personal control over personal data

On the one hand, new blockchain-powered platforms like Filecoin, Sia, Storj and MaidSafe seek to decentralize data storage by offering crypto-coins to people willing to put their latent storage capacity on the market. On the other hand, Tim Berners-Lee, the inventor of the web, is developing Solid (Social Linked Data), through which he seeks to decouple data from the applications that silo it today. If he succeeds, data will be stored and managed by the people who produce it, and applications will compete on how they visualize the data and enhance the user experience, not on their data-hoarding capacity. An effective implementation would make platform neutrality less of a challenge: switching between applications would be simple, since all the data they rely on is standardized across applications and stored by ourselves. You could plug your data into another application and move on as quickly as you switch between tabs in a browser.

What’s the issue? The coupling of data storage and applications has allowed big companies to create artificial barriers to competition, such as limiting the ability to migrate contacts and information across platforms. Given network effects, this has allowed a handful of companies to gain an unhealthy portion of the market, with the corresponding control over public discourse that comes with it.
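To make the decoupling idea concrete, here is a conceptual sketch (this is not Solid’s actual API; the record format and class names are invented): the user’s data lives in a pod they control, and applications are interchangeable views over the same records.

```python
class Pod:
    """A user-controlled store of standardized records."""
    def __init__(self):
        self.records = []

    def add(self, record):
        self.records.append(record)

    def query(self, record_type):
        return [r for r in self.records if r["type"] == record_type]

class TimelineApp:
    """One possible view: newest posts first."""
    def render(self, pod):
        return [r["text"] for r in reversed(pod.query("post"))]

class DigestApp:
    """A competing view over the very same data."""
    def render(self, pod):
        posts = pod.query("post")
        return f"{len(posts)} posts, latest: {posts[-1]['text']}" if posts else "empty"

pod = Pod()
pod.add({"type": "post", "text": "hello web"})
pod.add({"type": "post", "text": "decentralize!"})
print(TimelineApp().render(pod))  # ['decentralize!', 'hello web']
print(DigestApp().render(pod))    # 2 posts, latest: decentralize!
```

Switching apps requires no data migration: both views read the same pod, so the network effects that lock users in today lose much of their grip.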

The battle to ensure the internet remains a tool for citizens to build a more just society will be our constant companion throughout the next decade. The battle is uphill. With each day that goes by without a thorough debate regarding our rights, our chances get slimmer.

The sketch outlined in these pieces suggests difficult trade-offs. Many questions remain. Yet we should not feel paralyzed by the grave asymmetry of information between us and the intermediaries. Intermediaries continuously leverage the opacity of their systems to stall conversations about the risk they represent to us and our political systems. I hope these pieces illuminate a space around which we can gather and think out loud. The clock is ticking…

Creative Commons License
Except where otherwise noted, the content on this site is licensed under a Creative Commons Attribution 4.0 International License.