Coffee Space


Effective Censorship

Recently it was International Holocaust Remembrance Day, on the 27th of January, a day that would normally go unmarked by me. This is not because I am unsympathetic to the atrocities committed or the people lost, but simply because I am human and have limited emotional capacity 1. I cannot carry the burdens of the world and address all of its problems simultaneously (and I am wary of people who claim to be able to), so I must pick which battles to fight. Being British, for me this is the 2 minute silence on Remembrance Day, when we remember those lost in WWI and WWII. Typically this is aimed more towards remembering the soldiers lost, but it also highlights the toll such an awful ideology takes on society.

Holocaust remembrance day did, however, have some significance for me this year. A community I dabble in (which I will not detail here) chose to use the day to "combat Nazis" in the community, by asking for the removal of all users with names that reference Nazis 2. Censorship is an interesting topic and I believe it warrants some discussion.

To be clear, the default position in this case is to have no community policy. That is to say, users have the ability to pick any name they choose.

Case For Censorship

Their case for censorship essentially boils down to a few points:

Promotion - Users with certain names promote Nazism, even glorifying it. If this goes unchallenged, their belief is that history is doomed to repeat itself. They argue that the community has some duty to censor such names, so as not to glorify Nazism and the horrible ideology it represents.

Definition - When asked to clearly define a policy that could be fairly enforced and understood by others, they admit that writing such a policy would be difficult, but argue that this is not an excuse to do nothing. So far they have been unwilling to write a policy, even a draft proposal.

Opinion: I am not unsympathetic to the request. I believe we all like to think of ourselves as the sort of people who would in fact stand up to Nazis, whichever form they take. We all would like to believe we are on the right side of history. That said, it could be argued that the censorship of an ideology can also help validate it (i.e. "this is what {authority} doesn't want you to know!").

Case Against Censorship

There are several cases against censorship that were presented by the community:

Promotion - One major point against censorship is whether it really is an effective measure against the promotion of Nazism, or any other ideology. It was generally accepted that in many places all ideas can be discussed and do battle in the marketplace of ideas, with bad ideas losing out to better ideas on the basis of merit. When Nazis are discussed in general, they are typically held up as an example of evil. When Nazis appear in jokes, they are typically the ones being made fun of. This doesn't appear to be a pattern that will change any time soon.

Burden - Another argument is that it increases the community moderators' workload, as this will likely all have to be done manually (discussed below). Given that some references to Nazism are made in protest against existing moderation policies, it is not impossible that some users will use ever-differing variations of Nazi phrases to purposely burden the moderators.

Definition - One problem is defining what "promotion of Nazism" even looks like. Clearly there is a difference between "Nazis are bad" and "Nazis are good" as a name, so censorship cannot occur simply based on the invocation of 'Nazi' alone. In fact, there are infinitely many ways to reference Nazis, in both a positive and a negative light, and if all of them were censored, people would simply invent more 3.

Scope - The community is international; we have people from all walks of life. Some of these people are deeply religious, for example. Whilst a European may be heavily offended by the idea of a Nazi, a follower of an Abrahamic religion may be offended by a person who declares themselves to be the devil. A sexual assault victim may be offended by somebody referring to violent sex. Essentially, what is being drafted here is censorship of offensive material. The problem is that everything can in theory be offensive, and what is not offensive today may be offensive tomorrow. An example is "Let's go Brandon", which is now the equivalent of chanting "Fuck Joe Biden" - a far-right meme that many Democrats take offence to. It is near impossible to define any acceptable scope for offence; it is entirely a matter of perspective, both between people and over time.

Opinion: Just because something is hard is not a good reason to do nothing at all. That said, something poorly implemented can be worse than the default position of doing nothing.

Rise of Nazism & White Supremacy

There is a general belief that Nazism, white supremacy, racism, etc, are on the rise around the world. This reminds me of how a few Muslim friends tell me that the "day of reckoning" is really soon now - as if every generation of Muslims since Mohammed came down from the mountains has not said exactly the same thing. These are simply the trappings of a young and inexperienced mind - the belief that this moment is of special importance simply because they are the ones experiencing it.

If you look hard enough for something, you will find it anywhere. Look around your surroundings now for the number "11": any two "ones" next to one another, parallel lines, the edges of a shape, etc. And yet the universe is not planting "11"s everywhere; there is no grand design to place the number all around you. If you look hard enough, you will always find what you are looking for.

To really escape this madness, you must come up with a predictive model - these are the only types of model that really matter. If your model has no predictive capability, your model is wrong, simple as that. It is only when you close your eyes, predict where an "11" will be located, look to that location and open them again that you can claim predictive capability. You then realise that any correct predictions are either by chance, or made with a-priori information (i.e. I looked towards the clock because I know that "11" is printed on its face).

If discussion about Nazis or other terrible ideologies leads to Nazism, you need to provide a predictive model, i.e.: person X was not previously a Nazi, was exposed to Y, then became a Nazi with probability Z. If you are unable to provide such a model, how can you state that discussions or usernames lead to Nazism? You cannot claim to have a predictive model and then fail to make predictions 4.
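As a concrete sketch of this point, consider what a falsifiable version of such a claim would look like. Everything below is hypothetical (the numbers, the exposure, the outcomes); it only illustrates that a claimed probability Z can be scored against recorded outcomes, and that a model which cannot beat a naive base-rate guess has no predictive capability.

```python
# A minimal sketch (hypothetical numbers throughout): the claimed model
# assigns each exposed person a probability of becoming a Nazi, and is
# then scored against what actually happened afterwards.

def brier_score(predictions, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.
    Lower is better; always predicting the base rate is the bar to beat."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Hypothetical claim: "exposure to Y turns people into Nazis with Z = 0.8".
claimed = [0.8, 0.8, 0.8, 0.8, 0.8]
observed = [0, 0, 1, 0, 0]  # hypothetical outcomes: only 1 in 5 did

model_score = brier_score(claimed, observed)       # ~0.52
baseline_score = brier_score([0.2] * 5, observed)  # ~0.16 (base-rate guess)
print(model_score > baseline_score)  # True: the claim loses to the base rate
```

On this (made-up) data, the confident claim scores far worse than simply guessing the base rate, which is exactly the situation in which a "predictive" claim should be discarded.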

Personal Experience

I have some anecdata regarding racism in general which I will share here. I obviously won't name names, but I still believe it is useful to consider some examples. I am personally from a low socio-economic background in the UK, and as such you might imagine that I interacted with working class people who generally leaned politically to the right.

I have known quite a few racists. The typical statement is "those X's are Y", where Y is not something very nice. This may be based on some personal experience with an X and, given their limited exposure, may not even be untrue. In the majority of cases, when that person has the opportunity for a personal interaction with an X, they might say something like "they were not bad for an X, they are one of the good ones". Over time, their perspective changes to "X's aren't so bad" and finally they settle on something like "some X's are Y" (as opposed to "all X's are Y") - a large and meaningful improvement.

There are some on the other hand (very few) who state "those X's are Y" but never change their position, despite positive exposure to an X. Your thought might be "well, let's throw this racist in jail" - but to what end? They aren't going to do several years in jail and suddenly realise the error of their ways; if anything they will be radicalised even more (jails tend to radicalise people, not de-radicalise them). These people are generally perfectly functional, productive members of society until they interact with an X. In this case, the best option simply seems to be to limit their interaction with X's.

It is child-like thinking to believe that you can reason with every person, that your ideologies will perfectly overlap, and that you will live happily ever after in a perfect Utopia where everybody agrees. It's worse than that: you are unlikely to find any person whose ideas truly overlap with yours 100%. To that end, you must simply accept that either you make peace with your differences, or you do not.

Generally it is my policy to treat people as they treat me. It is not my responsibility to take offence on others' behalf, and I try to make decisions based on my personal interactions with somebody. Somebody may say something like "don't you know that X person is a Y?" and I say "maybe, but my interactions with them are generally positive". To that end, I am able to make peace with both the X's and the anti-X's (just don't invite them to the same party).

You may consider this a 'coward's' position, but I believe it is the only position that is maintainable. I do not require that we hold the same beliefs, only that you hold yours with honest conviction. As long as that holds, we can have an honest discussion 5.

Open Questions & Discussion

Ultimately, I pose some open questions and offer some discussion as a potential path forwards:

  1. Automated tooling - Whatever censorship is introduced, ideally there is some automation mechanism to relieve the burden on those who implement it. The burden of building this automation should not be put on those being asked to do the moderation, as I suspect it may be an impossible task. Asking for something impossible and then punishing somebody for not completing it is cruel by any standard.
  2. Right to appeal - Each person should have the right to a fair trial, to be judged by their peers. If you are to ban a user based on their name, they should have the opportunity to appeal that decision. Perhaps their birth name really is "Adolph", for example.
  3. Unbiased censorship - Whatever policy is employed, it should be unbiased. It shouldn't strictly define Nazism, for example, but instead the concepts of genocide, mass murder, racism, etc. It should be general enough that it by default captures past, present and future violations.
  4. Insensitivity - Whatever the policy definition is, it should also be time/location/language insensitive. It should make sense no matter the geographic location or time period in which it is applied. If you are imposing a moral code, it should be based on universal morality. There are some examples the majority of people can agree on, such as prohibitions on murder, stealing, etc.
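On the automated tooling point, a minimal sketch (the blocklist term and usernames here are purely hypothetical) shows why the naive approach is not enough: simple substring matching over-blocks names that are critical of Nazism, while being trivially evaded by variant spellings, which is exactly the Definition and Burden problem described above.

```python
# A minimal sketch of naive blocklist-based name filtering. The single
# hypothetical term below stands in for whatever a real policy would list.

BLOCKLIST = {"nazi"}  # hypothetical policy term

def is_blocked(name: str) -> bool:
    """Block any name containing a blocklisted term, case-insensitively."""
    lowered = name.lower()
    return any(term in lowered for term in BLOCKLIST)

print(is_blocked("Nazis are bad"))  # True: over-blocks an anti-Nazi name
print(is_blocked("n4z1_fan"))       # False: evaded with a variant spelling
```

Anything smarter (normalising leetspeak, matching semantics rather than substrings) quickly becomes the hard policy-definition problem in code form, which is why pushing this work onto moderators by hand seems unreasonable.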

Currently, given that such a policy remains ill-defined, I propose that the community maintains the default position. Any ill-thought-out policy at this point could be worse than no policy, potentially even preventing users from being censored in cases where it is universally agreed that they should be.

In any case, the community does not currently have the ability to enforce such bans. The first step is surely to create the tooling required to enforce any such policy, which also buys time to draft a policy properly.

  1. For example, a child dies every day due to the awful actions of another human, but I cannot weep for them all.

  2. To be clear, the community is not full of Nazis. I refer to one or two users.

  3. One example is how the CCP try to censor the 1989 Tiananmen Square massacre, and people just find new ways to reference it in discussions.

  4. On a side note, humans are terrible at creating predictive models, especially ones based on personal beliefs.

  5. In my opinion, the most egregious thing a politician can do is support a position they do not honestly hold, but pretend they do. Compromise is important in politics and you may need to compromise on your personal values for the greater good, but you should not lie about this compromise.