K.S. Park’s presentation:
I will talk about two technologies, the Internet and AI, and the human rights issues related to each.
The Internet is different from other technologies because of its unique ability to scale up social movements. In 2011, the Internet was nominated for the Nobel Peace Prize, if you remember. But now, the Internet is seen as a source of evils: fake news, hate speech, etc. We need to investigate carefully what happened in between.
What, then, are the international human rights standards that can strengthen the Internet's unique capacities? Network neutrality and the intermediary liability safe harbor.
First, network neutrality provides financial protection for those who wish to place their messages online. It ensures that, as long as the author pays access fees proportional to the capacity of the connection, the author does not have to worry about the cost of delivering the information no matter how popular the messages become, in other words, no matter how many people view or download them. However, Korea is threatening that Internet-given freedom to speak to a global audience: in 2016 it instituted the "sending party network pays" (SPNP) rule, under which networks must pay to send information to other networks. That rule pushed network operators to adjust access fees according to network usage over the long term, which means that authors now do have to worry about the financial cost of being popular, of being viewed by many people around the world. Even before that happened, the SPNP rule lowered competition among network operators in hosting good content, which caused Korea's Internet access prices to become the highest among developed countries other than geographic cul-de-sacs like Australia or New Zealand.
Second, the intermediary liability safe harbor provides legal protection to platforms that give powerless individuals the same opportunity to spread messages and search for information as powerful corporations and governments. At its simplest, the safe harbor exempts platforms from liability for content uploaded by third parties that the platforms do not know about, lest they be forced to engage in general monitoring or prior censorship of content, which would again extinguish the equalizing power of the Internet. Germany's new NetzDG act targeting criminal information does not violate the international standard, because liability arises only for "noticed" content. Australia's new intermediary liability law targeting "abhorrent violent content", however, imposes liability for failure to notice, which forces intermediaries to engage in general monitoring. Also, Germany's law requires intermediaries to decide whether a posting is illegal and pushes them to err on the side of deletion. These do not strike a good balance between the human rights of the authors of postings and those of the people referred to in the postings.
We need to strengthen the intermediary liability safe harbor and network neutrality to provide a scaled response to hate speech, fake news, etc. We also need to review outstanding human rights issues affecting people's ability to fight back. For instance, in Korea, the truth defamation law is paralyzing the #MeToo movement despite the UN Human Rights Committee's 2015 recommendation to abolish truth defamation. The same law also allowed the takedown of many early online warnings about humidifier disinfectants: putting detergent through the vaporization system produced fatally poisonous gases, which the people most in need of humidifiers, such as babies and pregnant mothers, breathed in ever greater quantities.
These days, "technology" too often means only information technology. However, using software to solve problems does not by itself raise new human rights issues. If anything, bio-tech needs closer examination for its harms than info-tech does. Indeed, info-tech such as crowd-sourcing is needed to solve physical tech-and-human-rights problems like the humidifier detergent case. When we look at info-tech itself, we should first look for solutions internal to the technology. For instance, the problem with AI is that it reflects and intensifies the biases of human beings, this time without deviation. How do we solve that? How about algorithmic affirmative action? We can apply weighting factors to counter the biases we detect. And in order to detect those biases, we will need big data, which is again a solution inside technology.
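The idea of countering detected bias with weighting factors can be sketched as a simple reweighting scheme. This is only an illustration of one possible approach, not a method described by the speaker; the function name and the equal-group-weight rule are hypothetical assumptions.

```python
# Illustrative sketch of "algorithmic affirmative action" by reweighting.
# Assumption (not from the talk): we counter under-representation bias by
# giving each training example a weight inversely proportional to how often
# its group appears, so every group carries equal total weight.
from collections import Counter

def counter_bias_weights(groups):
    """Return one weight per example so that each group's weights sum
    to the same total.

    groups: list of group labels, one per training example.
    """
    counts = Counter(groups)
    n_groups = len(counts)
    n_total = len(groups)
    # weight = n_total / (n_groups * count_of_group): each group's
    # weights then sum to n_total / n_groups.
    return [n_total / (n_groups * counts[g]) for g in groups]

# Example: group "b" is under-represented 3:1, so each "b" example
# receives three times the weight of an "a" example.
weights = counter_bias_weights(["a", "a", "a", "b"])
```

In practice, such weights would be passed to a training procedure (many learning libraries accept per-sample weights), and detecting which groups are disadvantaged is itself the data-intensive step the speaker alludes to.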
Just to conclude: info-tech is different from other technologies because of its ability to strengthen individuals' capacity to learn and communicate. The solutions to the problems of info-tech must first be sought within the technology itself, or in the rules and practices that foster such endogenous solutions, like the intermediary liability safe harbor, network neutrality, and algorithmic affirmative action.