Disinformation and algorithmic transparency

May 27, 2021 | Free Speech, Innovation and Regulation, Open Blog

Kyung Sin Park’s panel presentation at the UNESCO World Press Freedom Day Conference on April 29, 2021

The traditional theory of freedom of speech rested on the assumption that speech is soft: speech in itself does not cause harm, and its effects are mediated through the reactions of its hearers. Therefore, under the harm principle, speech or the speaker should not be restricted or punished unless absolutely necessary.

Speech is no longer soft. It has become hard thanks to the digital space and its echo chamber and filter bubble effects. In particular, a subset of speech called disinformation readily propagates through algorithmic escalation. However, taking down or punishing disinformation itself can easily violate the international human rights standards established against ‘false news’ crimes, and imposing a take-down responsibility on platforms would violate the international standard of intermediary liability safe harbor. Hence the call for algorithmic transparency: the idea that disinformation can be abated by publicly monitoring and adjusting the content curation algorithms that suggest to platform users what to read next.
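To make the mechanism concrete, a minimal sketch of engagement-weighted curation might look like the following. It is written in Python with invented class names, features, and weights; it is an illustration of the general idea, not any platform’s actual ranking system.

```python
# Minimal, hypothetical sketch of engagement-weighted content curation.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    clicks: int
    shares: int
    similarity_to_user_history: float  # 0.0 (novel) to 1.0 (more of the same)


def curation_score(post: Post) -> float:
    """Score a post by predicted engagement, boosted by similarity to what
    the user already consumes -- the filter-bubble ingredient."""
    engagement = 0.6 * post.clicks + 1.4 * post.shares
    return engagement * (1.0 + post.similarity_to_user_history)


def recommend(feed: list[Post], k: int = 3) -> list[Post]:
    # The feed simply surfaces the highest-scoring posts; nothing in the
    # formula asks whether a post is true or false.
    return sorted(feed, key=curation_score, reverse=True)[:k]
```

Nothing in such a formula asks whether a post is true; it only asks whether the post will keep the user engaged, which is why algorithmic escalation serves disinformation so well.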

Platforms have argued that such algorithms constitute trade secrets. However, many things are already required to be disclosed to public scrutiny when they cause harm. Publicly listed companies, for instance, are required to make disclosures about their finances even if that means disclosing sensitive information. The more important issue with algorithmic transparency, however, is that people promoting disinformation will game the system better once they know the algorithm. That is where the discourse on algorithmic transparency stalled three years ago.

Recently, however, when the “fake news” bombs exploded near the end of Trump’s tenure, the US and the EU began toying with algorithmic transparency once again.

One solution to the possibility of abuse is limited disclosure: disclosure only to trustworthy individuals. But will such limited disclosure be effective for public control of algorithms? Who would want to review the content curation algorithms? Civil society, which has already begun complaining about being excluded from algorithm scrutiny. And what does civil society do? It discloses important facts and names and shames the bad actors. We therefore face a dilemma: either leave out civil society organizations and forego the full accountability process that relies on them, or include them and risk leaks to the disinformation perpetrators.

One may argue that platforms benefit from the strong filter bubble and echo chamber effects, and that at least such cascading bias can be remedied through limited public disclosure, even if that means disclosure only to technical experts. However, many people, good and bad, depend on the filter bubble and echo chamber. Environmentalists rely on that escalation to spread their messages in just the same way that ISIS does. Generally, an algorithm in itself is neutral and merely part of automation. Even washing machines have algorithms in them, and inspecting the algorithms or equations embedded in their control boards will not reveal any harmful element that we could remedy through public scrutiny. Likewise, reducing the filter bubble and echo chamber effects down to their constituent elements will not give us any solution that responds to disinformation in a tailor-made manner.

Of course, two different algorithms may be at stake here: (1) the content curation algorithm behind the filter bubble and echo chamber, and (2) the content moderation algorithm that platforms have developed to respond to the disinformation problem. It may be the content moderation algorithm that we must expose to public scrutiny to help remedy the disinformation problem. However, the problem of abuse persists here as well. What is more, a content moderation algorithm depends a great deal on human decisions about what constitutes disinformation. Remember that deplatforming Trump was not an algorithmic decision but a human one.
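An equally hypothetical sketch of a moderation pipeline makes the point: the decisive input is a human-maintained verdict table, and the code only automates applying it. The names, labels, and rules below are invented for illustration, not drawn from any platform.

```python
# Hypothetical sketch: the moderation "algorithm" mostly automates
# human policy decisions about which claims count as disinformation.
from typing import Callable, Optional

# Maintained by human reviewers and policy teams, not by the code itself.
HUMAN_LABELED_CLAIMS = {
    "claim-001": "disinformation",
    "claim-002": "allowed",
}


def moderate(post_text: str,
             claim_matcher: Callable[[str], Optional[str]]) -> str:
    """Match a post to a known claim, then apply the human-made verdict."""
    claim_id = claim_matcher(post_text)
    if claim_id is not None and HUMAN_LABELED_CLAIMS.get(claim_id) == "disinformation":
        return "remove-or-label"
    return "keep"  # the machine defers where humans have not decided
```

Exposing such code to public scrutiny would tell us little that the human policy behind the verdict table does not already tell us.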

I think we should notice that disinformation is more rampant in countries lacking democracy and equality. The recent peak of disinformation in the US is exceptional (it was, for instance, top-down) and should not be extrapolated to the rest of the world. To address the fundamental causes of disinformation, we need to protect more freedom of speech using good old international human rights law.


https://en.unesco.org/commemorations/worldpressfreedomday/2021/programme

https://en.unesco.org/sites/default/files/wpfd2021_programme_final.pdf

