Over the past few years, many commentators, researchers, and even politicians have become deeply concerned about the direction in which social media giants are heading. One recurring talking point is the call for social media companies to be politically and socially “responsible.” Coinciding with this, we have witnessed social media platforms radically increasing their interventions, shifting their role from neutral service provider to clinical content curator, as if modern humans had lost the last bits of their agency to these platforms and their algorithms.
Mainstream politicians, especially those with left-leaning and progressive positions, seem especially concerned about hate speech, online harassment, fake news, racism, and offensive content creating a culture of fear and intimidation in the online sphere. There is also an increasingly popular narrative that these platforms radicalize people simply by algorithmically steering them into a world of conspiracies, vague ideas, and political paranoia. On this view, humans do not act; the platforms have taken over. On the other hand, parts of the political right, especially conservatives, classical liberals, and libertarians, have blamed platforms like Facebook, Twitter, Google, and YouTube for politically motivated censorship, excessive political correctness, and content manipulation. There is a constant stream of news about people being banned and censored for seemingly trivial or undisclosed reasons. Notably, although this has been going on for a long time, the trend has accelerated since Brexit, the election of Donald Trump, and the emergence of alternative media.
The question that has not been asked is what has happened in the last couple of years to justify such countermeasures – such as purging digital content or deleting accounts – at such a large scale. What we are witnessing is a messy international phenomenon in which opposing and conflicting social and political trends meet real-life organizations, management fashions, new modes of content creation and distribution, and rapidly emerging technologies. It is time to take a look at what is going on and to make some initial observations on the current trends. We should better understand where we stand, why, and what might lie ahead for social media platforms in the years to come.
In January 2019, Kirsten Grind and John D. McKinnon at The Wall Street Journal reported that something strange was going on in the world of social media: social media companies seemed to be actively consulting outside parties and groups about their online content moderation policies.
“The world’s biggest social-media [sic] companies, under fire for failing to police content on their sites, have invited an array of outside groups to help them figure out who should be banned and what’s considered unacceptable.”
Last year, Facebook CEO Mark Zuckerberg proposed that “Facebook will introduce an independent body that will oversee appeals about content decisions,” per QZ. This supreme court of Facebook would define and decide, based on Facebook’s policies and applicable extralegal frameworks, what kind of content is appropriate and what is not.
In early 2019, Facebook published an official statement about setting up an independent oversight board for its content decisions. “As we build out the board we want to make sure it can render independent judgment, is transparent and respects privacy,” the statement said. Indeed, involving the groups that criticize companies like Facebook in the discussion and decision-making seems a reasonable thing to do. About a week ago, Facebook published the results of the global consultation. To a critical observer of the ongoing corporate social responsibility trend, this looks like a “responsible” step to take. However, it would be intellectually sloppy to expect that the significant tensions will be defused soon.
The fact of the matter is that social media companies have become noticeably aggressive in enforcing their content policies. Furthermore, many major social media platforms have created a complex web of rules whose implementation and enforcement cannot be known beforehand. Community guidelines and supporting policies are constantly re-interpreted, so content creators and users are always on their toes and never know what to expect. Recently, YouTube took harsh action against several accounts, and a number of journalists and educators were mistakenly targeted as well. Previously, platforms and users seemed to understand that sharing links and videos about controversial political, social, and economic issues did not mean that you personally endorsed or promoted those views. Things have changed, and nowadays guilt by association is taken for granted. Today, social media platforms will ban and delete users perceived to share “borderline” content, and because users can flag content at massive scale, anything slightly questionable or problematic might be taken down.
Why has the situation changed so drastically, and more importantly, why now? It seems that some of the biggest social media companies have moved away from their role as neutral players into the fuzzy world of idealism, with internally varying sensitivity to corporate social responsibility and the social justice agenda. Silicon Valley tech giants have been blamed for political bias. While this line of criticism has been very US-centric, some European commentators have already noted that it is troublesome to see corporations act as moral arbiters, limiting free expression and imposing strict requirements on users. The situation is quite tricky, and there is a lot of work to do to make sense of what is going on. Various organizational actors likely perceive the situation in different ways, depending on their tasks. However, some recent observations, combined with insights from organization science and economics, can help in hypothesizing about the situation and the tensions these companies are facing right now.
We see a complex blend of factors in play right now. First, harshly enforced but uncertain platform-specific rules and policies, along with the fear of concrete legal sanctions, play a significant role in corporate sensemaking and in how various actors inside these organizations perceive the situation. There is a lurking danger of expanding the definition of questionable “borderline” content. While this content may not be de facto illegal or criminalized, content creators might find it harder to locate the actual boundaries of acceptable speech and behavior. Entering the grey zone thus comes with additional compliance and enforcement costs that affect both users and platforms. These grey zones are tough to monitor because they cut both ways: content creators can circumvent the intended restrictions by relying on obfuscated language, while platforms respond by enforcing ever more arbitrary restrictions and labels. For example, harassment is an abstract concept, and it is not clear how it should be applied in every conceivable context. This regulatory uncertainty around where the borderline is set may explain why it is difficult for organizations to stick with the idea of simply abiding by the law, rather than extending their activities into areas in which they are not comfortable operating.
Secondly, recent political polarization has made people more sensitive to every sort of emotional damage when facing opposing views, edgy humor, or critical coverage of controversial issues. People who are triggered by the online content they see or hear demand that platforms take care of it. Society is being infantilized: even adults are no longer assumed capable of handling their lives independently. Platforms feel compelled to take action and assume responsibility for purging online content, as they may see keeping the platform sanitized and hygienic as a question of life and death. Furthermore, any disappointed stakeholder now has the power to challenge platforms in public and demand action. Attempts to manage these risks have created so-called secondary risks, as tighter community standards and their selective enforcement leave questions about potential biases unanswered.
Thirdly, to reiterate our point about corporate social responsibility, it is noteworthy how hard it is for companies to interpret and act on widely differing demands for responsibility. There is no doubt that the discourse around corporate social responsibility is one of the most dominant themes in organizations and management today. Yet this contemporary, ghost-like notion of responsibility is not based on a solid theoretical or otherwise distinguishable and transparent framework. Even the term itself is heavily value-laden: it is used to legitimate the idea that organizations have obligations to ideals beyond their legal boundaries and duties. This obscurity has enabled a diverse set of ideologically driven activists to contest and expand the meaning of responsibility. When brought into sensemaking about corporate resources, customers, markets, and strategy, responsibility-driven discussion can interfere with the realization of primary organizational goals. Responsibility can be translated into politicized perspectives that have no tangible bearing on the organization itself.
It does not seem likely that the sudden enthusiasm for social media censorship is an effectively implemented and centrally coordinated plan, nor does it appear to be a purely random, soon-to-be-corrected experiment. More likely, social media companies and tech giants are facing a chaotically noisy environment, and they have not been able to make sense of the ongoing debates.
Technically savvy entrepreneurs have built the big tech companies over the last ten to fifteen years. Severe social and institutional pressures on these companies, in turn, have been building up only for the last couple of years. At least, this is the picture an outside observer can form by following the international discussion around these matters. It would be no surprise if the strategic management culture of big tech platforms, with their influential engineering culture, found it challenging to construct a balanced view of their value proposition, people, and strategic challenges in today’s chaotic operational environment. Their actions imply that their perception of the problems revolves around technology and is not informed by contemporary social and political discussion.
Please bear in mind that we rely on publicly available information and can only speculate without confirmation from insider sources. The potential explanations we offer here may well turn out to be realistic and truthful, but not all-encompassing. However, our final point is to invite you to consider why these companies have not remained as they were in the past. Why did they engage in an ambitious and adventurous virtue-signaling project to steer content with such enthusiasm? Why did these companies change their mission so dramatically in tandem with the rise of cultural polarization, victimhood culture, and identity politics? How did we survive on the Internet ten years ago without censorship and restrictions on speech?
The internet as we know it has already changed dramatically from what it was just a few years ago. Many people put too much effort into observing technological change and thus ignore the underlying organizational, political, and ideological forces. As our discussion here has demonstrated, these complex forces have indirect power to define the use and even the essence of technologies. Because human cognition adapts to even dramatic changes, incrementally developing trends may remain unnoticed for a long time. Thus, we need to take some distance from the daily news feeds and pay more attention to the bigger picture – the trends and mentalities that are shaping the social media platforms in the long run.
This text was written together with Thomas Brand and was published on his blog on 9 July 2019.
The authors express their own viewpoints here and do not represent any organizations they are associated with.
Photo source: Foter.com