Facebook — its new corporate name is Meta — has always wanted to get to know you. Its public goal has ostensibly been to connect people. It’s been wildly successful in doing so by building out what can only be called everyday infrastructure around the world.
There are 3.5 billion people worldwide using Facebook’s suite of products, which includes Messenger, Instagram and WhatsApp. As the infrastructure provider, Facebook knows a lot about who its users are, and what they do.
The company recently announced a US$10 billion investment in the “metaverse,” an immersive version of the internet that can only increase Facebook’s hold on citizens via the data it collects about us.
This announcement comes at a time when everyone wants to do something about Facebook. Recent reporting on corporate ethics, fuelled by whistle-blower Frances Haugen’s document dump and testimony in the United States Senate, along with a six-hour worldwide blackout of its services in October, demonstrates both the scale of Facebook’s reach and the consequences of letting the status quo persist.
But before we fix anything, we need to consider the logic behind determining what ought to be fixed.
A human rights focus
In order to effectively regulate data-intensive, privately held global infrastructure like Facebook, we need to prioritize human rights concerns. Upholding human rights can act as the underlying logic for any regulatory framework and, in doing so, provide it with an established, universal ethical heft.
Focusing on human rights means prioritizing the basic values embodied in the United Nations’ Universal Declaration of Human Rights: protecting human dignity and ensuring autonomy, equality and “brotherhood” (or, in 2020s parlance, community). It means understanding that these rights are indivisible and interdependent.
The benefits and harms of social media affect human beings — the subjects for whom human rights are intended. Facebook, and other companies like it, have changed our lives by becoming global infrastructure, affecting how, when and if we engage with others. Through this process, our lives have become “datafied.”
We need to think more purposefully about how to embed human rights in our digital policies as we increasingly live and find meaning within online environments and contexts. As the UN’s Guiding Principles on Business and Human Rights affirm, states have a duty to protect human rights. Businesses, however, also have the responsibility to respect human rights.
A global communications giant
The focus of calls for reform to date, including Haugen’s explosive Senate testimony, has centred on content on the social network Facebook built and is best known for. But Facebook is much more than that.
The blackout showed that Facebook is an essential piece of global communications infrastructure. The corporation formerly known as Facebook, together with its properties Instagram and WhatsApp, facilitates small businesses and informal economies around the world. It provides login credentials to thousands of other apps.
Some developing countries in Africa even rely on Facebook as a portal to the internet for significant portions of their populations.
And in the very near future, Meta intends to bring another billion people online through various internet infrastructure projects.
So how do we regulate a tech giant like Facebook to ensure human rights are upheld? Many cases for regulation have focused on the right to freedom of expression, because that’s how most of us consciously experience it. However, a focus on content moderation is a losing game at best.
Human rights tied to freedom of expression
I’ve written previously about how Facebook has stepped into the void on adjudicating freedom of expression on its network through the Facebook Oversight Board.
But freedom of expression is not independent of other rights. The Oversight Board’s own docket shows that deciding on cases involving freedom of expression does not happen in a vacuum. Other rights — such as the right to non-discrimination, the right to security of the person and the right to life — need to be considered.
Various proposals for how to regulate Facebook and social media are already out there, advocating for transparency and accountability, changes to U.S. regulations that currently provide immunity to social media platforms, and “toxicity taxes” to tackle the dilemma of content moderation.
The Canadian government now has a chance to fix problematic legislation it had previously proposed to curb social media content, which has the potential to erode other human rights in the process.
Meanwhile, the U.S. Federal Trade Commission and many states are following the trust-busting strategy, an approach that is currently stalled in the courts.
The next big push, of the existential variety, is going to be the defenestration of the very companies that populate the top of the S&P 500. Just look around. Trust-busting and heavy regulation of Tech and social media is a matter of when, not if. If so, this chart stops falling.
— Jeff Weniger (@JeffWeniger) June 8, 2020
Global assent
Part of the problem is that people around the world continue searching for ethical frameworks to manage the relationship between technology and society when we already have a successful model readily available to us: international human rights. It’s one of the few global, ethical frameworks in existence that has overwhelming assent.
The other part of the problem is that we have mostly assumed that rights in the analogue world should apply online. This means that territorial states are the places of relevance and enforcement. But Facebook’s infrastructure is global; it’s not a state. UN Special Rapporteurs are pointing out how the analogue and digital don’t always align in terms of privacy and expression, but this is just the beginning.
Anything that happens in the online world has a global impact, as we’ve seen with the European Union’s General Data Protection Regulation. It’s clear that the impetus for protecting human rights is critical, no matter who is potentially violating them. But how to go about designing human rights protections in the name of autonomy, dignity, equality and community is not currently being contemplated when it comes to our digital spaces.
We must acknowledge the global and everyday reach of Facebook’s infrastructure. We need to understand how Facebook, and other tech companies like it, are dramatically shaping our experiences in ways that are both visible and invisible.
Understanding Facebook as a form of public infrastructure simply means acknowledging that it provides us with something essential: services that enable other services and activities, services we cannot get in the same way elsewhere.
Some have suggested that we treat Facebook as a hostile country to properly contain it. This seems unnecessary. Facebook is an example of a new type of global infrastructure that needs to protect and respect human rights.
- The author is Professor of Political Science and Canada Research Chair in Global Governance and Civil Society, University of Toronto
- This article first appeared on The Conversation