Reimagining Disinformation Responses: Disinformation Expert Working Group – 5/1/2022 Hawaii #1

Aug 12, 2022 | Free Speech, Open Blog

“Since June 2021, the [US Defense Department’s] Daniel K. Inouye Asia Pacific Center for Security Studies (DKI APCSS) has been working with the Taiwan Institute of National Defense and Security Research (INDSR) to facilitate expert dialogue to advance a human security approach to disinformation. The Disinformation Expert Working Group (DEWG) is a small but expanding group of interdisciplinary experts brought together under Chatham House rules. Participants have included technical experts from academia; the humanitarian and human rights community; policy think tanks; the tech industry; and lawyers specializing in international humanitarian and human rights law.” – an excerpt from the backgrounder

“Our vision is to gather a diverse group of experts to develop a strategic-level, whole-of-society approach to initiate a dialogue focused on developing resiliency and reducing risk to the harms of disinformation. We would like to move beyond the politicization of the technology, threat actors, and platforms so that we can focus on developing whole-of-society human security-based solutions. Therefore, we will not discuss threat actors or attribution. The discussion will focus on the protection of civilians and society, and on ways to reduce the exposure, vulnerability and risk from the harms of disinformation.” – an excerpt from the invitation

K.S. Park’s opening remark on “Reimagining Responses and Approaches to Disinformation”: 

It is elementary that the harms of disinformation can be mitigated by greater transparency and the sharing of accurate information. In particular, the new wave of disinformation coincides with the rise of political leaders invested in dividing people into the good and the bad, and disinformation campaigns by their supporters usually involve the construction of “enemies” and the spread of false information discrediting those opponents. The greatest antidote to the rise of mad dictators like Hitler, Putin, or the Tatmadaw is a well-informed democratic republic in which freedom of speech is guaranteed. We should notice that disinformation is more rampant in countries lacking democracy and equality. To address the fundamental causes of disinformation, we need to protect more freedom of speech using the good old international human rights law.

However, even a democratic republic is not safe from disinformation harms, as seen in the Trump presidency and the Capitol Hill attack. The 2016 Trump election caused much concern as observers noticed “fake news” going viral beyond legitimate news and many Trump voters believing clearly false electoral information. However, much of the harmful electoral information (e.g., the claim of Obama’s Kenyan birth) originated from political leaders, not from social media. Also, racial hatred is often the most effective fertilizer of disinformation. Disinformation perpetrated by political leaders against racial groups is the most destructive (as seen in Putin’s accusations against Ukraine). False information lacking either of these two elements can go through the crucible of the marketplace of ideas and be self-corrected (e.g., Twitter’s ability to platform self-correction amidst the chaos of local information during the Chilean earthquake). Therefore, political leadership demonstrating a commitment to the representation of all people and to racial justice is crucial to combatting the new disinformation phenomena. Again, we are back to the fundamentals.

Legal challenges

Simplistic solutions have existed as long as problems. “False news” crimes and criminal defamation have been abused by authoritarian governments to suppress and persecute dissident or rival political forces and opinions that are often truthful. Most recently, Myanmar’s military government, which came to power through a coup d’état on February 1, proposed a draft cybersecurity law that punishes speakers (and intermediaries) with up to three years’ imprisonment for disinformation. The international human rights literature is full of similar abuses around the world spanning the decades since the 1960s, and several international human rights bodies have denounced “false news” crimes and criminal defamation laws for that reason.

Even non-authoritarian governments around the world have resorted to harsh measures against disinformation. However, such punitive measures not only hamper the counter-speech that could abate the harms of disinformation but also produce distrust in society generally, generating receptivity to conspiracy theories and to further disinformation. Overzealous prosecution creates a siege mentality and a sense of victimhood among the political groups thus persecuted, who then take up their own conspiracy theories and disinformation tactics to defend their political gains and attack their opponents, causing a failure of deliberative democracy in which opposing political camps stop discussing and engage in shouting matches, calling each other “fake news”.

Working the Intermediaries

Another response to disinformation has been intermediary liability. All forms of disinformation remedy should preserve the civilizational significance of the Internet: the fact that it gives powerless people the same power of information and communication, without gatekeepers (like TV and newspapers), as big companies and governments enjoy, and therefore contributes to equality and democracy. Recognition of that value resulted in the intermediary liability safe harbor as a human rights standard, whereby intermediaries are protected from liability for user content they are not aware of, as in the EU’s 2000 E-Commerce Directive and Section 512 of the US DMCA.

A gray area remained: what to do with content whose existence intermediaries have been put on notice of, but whose illegality they have not? Germany’s 2017 NetzDG exploited this gray area to impose intermediary liability for noticed content, and it encouraged several Southeast Asian governments to follow suit in various adaptations (Malaysia, Viet Nam, Indonesia, the Philippines). This is a problem because intermediaries will engage in self-censorship, erring on the side of deleting rather than retaining content they have been given notice of but whose illegality they are unsure about. Indeed, the relationship between NetzDG, Germany’s national law aimed at addressing online disinformation, and the intermediary liability safe harbor enshrined in the 2000 E-Commerce Directive and the new Digital Services Act has never been clear. Platforms pushed toward the brinkmanship of liability by such laws will avoid innovating with anti-disinformation measures. Taking down too much content erodes trust in the internet and thereby creates fertile soil for conspiracy theories and other disinformation.

Intermediaries’ Self-regulation

Facebook and Twitter de-platformed accounts and/or took down postings inciting violence with false information (e.g., Trump’s account, for spreading the false claim that the election was rigged). Twitter’s Trust and Safety Council and Facebook’s Oversight Board are also attempts to address disinformation conducive to incitement of violence. When the platforms have the opportunity to invest time and resources in deliberating upon and justifying takedown and deplatforming decisions, they seem to make relatively fair, transparent, and effective decisions about whether to take down or leave intact postings.

The problem is that too many non-deliberated takedown or leave-up decisions, which do not benefit from calm balancing, are made at the primary stage of content moderation. Many primary takedowns are both over-inclusive and under-inclusive. Also, many decisions, even when objected to, are not properly escalated to a higher level of deliberation or accompanied by a full disclosure of the reasoning behind them. Transparency and due process are the solution to this scalability problem: sharing information with users and engaging with them to properly escalate contested cases to a higher level of deliberation will result in better fine-tuning of the initial flagging, thereby reducing the number of contested takedowns, buying time for calm deliberation, and completing a virtuous circle.
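To make this feedback loop concrete, here is a minimal sketch in Python. All of the names (ModerationDecision, EscalationQueue, reversal_rate) are hypothetical and describe no actual platform's pipeline; the sketch only illustrates the idea that escalating contested cases to reasoned human review produces exactly the data needed to fine-tune the primary flagging stage:

```python
from dataclasses import dataclass

@dataclass
class ModerationDecision:
    post_id: str
    action: str            # "takedown" or "leave_up" from the primary stage
    reasoning: str = ""    # disclosed to the user for transparency
    contested: bool = False

class EscalationQueue:
    """Routes contested primary decisions to slower, deliberate human review."""

    def __init__(self):
        self.pending = []    # contested decisions awaiting deliberation
        self.outcomes = []   # (primary action, final action) pairs

    def object(self, decision):
        # A user objection escalates the case instead of letting it die silently.
        decision.contested = True
        self.pending.append(decision)

    def deliberate(self, decision, final_action, reasoning):
        # Human reviewers issue a reasoned final decision and disclose it.
        self.outcomes.append((decision.action, final_action))
        return ModerationDecision(decision.post_id, final_action, reasoning)

    def reversal_rate(self):
        # Feedback signal for retuning the primary flagger: how often it was
        # wrong on contested cases. A high rate means the first stage needs work.
        if not self.outcomes:
            return 0.0
        reversals = sum(1 for first, final in self.outcomes if first != final)
        return reversals / len(self.outcomes)
```

The reversal rate is the payoff of the virtuous circle: every properly escalated objection doubles as training signal for the initial flagging, so due process and accuracy improve together rather than trading off.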

The Problem of Algorithms

Social media facilitates direct communication by providing hubs for people to communicate with one another. Social media communication is based more on relationships than on the power of content and is therefore more subject to echo chamber and filter bubble effects. One may argue for removing or scrutinizing the strong filter bubble and echo chamber effects by taking out the algorithms. However, many people, good or bad, rely on the filter bubble and echo chamber: environmentalists depend on that amplification to spread their messages in just the same way that ISIS does. Generally, an algorithm itself is neutral and a part of automation. Even washing machines have algorithms in them, and inspecting the equations embedded in a washing machine’s control board will not lead us to any harmful element that we want to remedy through public scrutiny. Likewise, reducing the filter bubble and echo chamber effects down to their elements will not give us any solution uniquely tailored to disinformation. Rather, content moderation algorithms depend a great deal on human decisions about what constitutes disinformation. Remember that deplatforming Trump was not an algorithmic decision but a human decision. It is not the machine that is the problem but the input into the machine.
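A toy illustration of this point, in Python (the engagement formula and the human_flagged set are invented for illustration and drawn from no real platform): the ranking function below amplifies whatever engages, regardless of content, and the only place "disinformation" enters the system is a human-maintained label set.

```python
# A toy feed ranker: the "algorithm" is a neutral engagement formula.
def rank(posts):
    # Sort purely by engagement; the formula knows nothing about truth or falsity.
    return sorted(posts, key=lambda p: p["likes"] + 2 * p["shares"], reverse=True)

# The substantive judgment lives entirely in this human-curated input:
# people, not the ranker, decide which posts count as disinformation.
human_flagged = {"post_42"}   # hypothetical IDs flagged by human reviewers

def moderate(posts):
    return [p for p in posts if p["id"] not in human_flagged]

feed = [
    {"id": "post_41", "likes": 10, "shares": 1},
    {"id": "post_42", "likes": 90, "shares": 40},  # viral and human-flagged
]
print(rank(moderate(feed)))   # same neutral ranker, different human input
```

Whatever IDs humans put into the flag set determines the outcome; inspecting the sort key, like inspecting the washing machine’s control board, reveals nothing about disinformation itself.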

See also Protection of Civilians in Information Warfare/Operation: http://old.opennetkorea.org/en/wp/3691

See also K.S. Park’s closing remark: http://old.opennetkorea.org/en/wp/3693
