‘Just Speech’ is a research project led by Dr Talita
Dias, the Shaw Foundation Junior Research Fellow at
Jesus College, University of Oxford. It explores the phenomenon of online hate speech and its regulation
under international law.
Hate speech is not a new phenomenon. It has been a constant, if not inescapable, feature of mass atrocities committed at least since the 20th century. Notorious examples include the Armenian genocide, the Holocaust, the ethnic cleansing campaigns in the former Yugoslavia and the Rwandan genocide. In all those instances, derogatory language was used in the mass media to create circumstances conducive to violence. That rhetoric eventually contributed to some of the most serious human rights abuses and atrocity crimes, such as genocide, war crimes and crimes against humanity. Simply put, atrocities and human rights abuses do not happen in a vacuum: they are triggered and fuelled by hateful rhetoric.
But in the digital age, the impact of hate speech within and beyond national borders has been unprecedented. With the advent of the Internet, and of social media platforms in particular, content can now be disseminated by individual users with a speed, scale and directness never seen before. Its effects have been felt in developed and developing countries alike. Examples range from Trump's explosive rhetoric preceding the Capitol riot in the United States, and football-related online hatred in the United Kingdom, to the mass violence against the Rohingya in Myanmar, enabled by online hate speech on Facebook.
The Current Legal Landscape
As a global phenomenon taking place in the boundless digital environment, online hate speech needs an international legal framework. Yet existing rules of international law on the matter are outdated and highly fragmented. 'Hate speech' as such is not a legal term of art in international law. It has been broadly defined as any expression of hatred, opprobrium, enmity, detestation, or dehumanisation of an individual or group identified by a protected characteristic, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status.
One of the key challenges of regulating hate speech in international law lies in the wide variety of speech acts and their effects, ranging from 'mere' expressions of hatred to incitement to violence, each carrying distinct legal implications. Thus, a fundamental legal dilemma is how to reconcile the speaker's freedom of expression (and users' freedom to receive information) with the rights of affected users to non-discrimination, bodily integrity and privacy.
This challenge is compounded in the online environment. On the one hand, information and communications technologies have massively increased opportunities for expressing one's views and receiving information freely, as well as for exercising other freedoms that depend on them, such as the rights to freedom of opinion, participation in democratic processes, and protest. On the other hand, the pervasiveness of the Internet may also amplify the negative impact of hate speech and other harmful acts, leading to greater hostility, division, and violence in societies.
The project aims to:
Unpack the different ways in which speech can be used to spread hate or cause harm to individuals;
Piece together the rules of international law that apply to different types of online hate speech – from human rights instruments to international crimes;
Advise governments, online platforms, users and civil society organisations on their legal responsibilities and the means to discharge them, from content moderation to counter-speech and awareness-raising campaigns.