“We Need Data Socialism” – 2017.12.20 IGF Geneva WS 303 Artificial Intelligence in Asia

by | Jan 9, 2018 | Free Speech, Innovation and Regulation, Open Net

Asian countries have focused on the economic significance of the Internet, not its social significance: equalizing and liberating power by giving powerless individuals the ability to access and diffuse information.  I worry that Asian countries preoccupied with development-oriented thinking may take the same approach to AI.

The AI discourse in Korea in particular dwells much less on AI’s social and philosophical significance and much more on its economic benefits.  For instance, AI discourse in Korea is tightly intertwined with the rhetoric of the “Fourth Industrial Revolution”.  Japan also seems more pragmatic: virtual friends for the aging population are being welcomed without much scruple.  When Open Net hosted a Digital Asia Hub seminar on AI in Asia back in December 2017, the title of one talk caught my eye: “Safety of AI or AI for Safety”.  It was clearly challenging the audience to focus on economic benefits rather than social concerns.

However, I think Asia is better placed to get this right in terms of abating social harms.  First, in terms of economic polarization, I think AI’s impact on Asia will be less acute because the rate of self-employment is already high, and wage employment is more vulnerable to displacement by AI than self-employment, which requires human, investment-backed decision-making.  Also, there is less society-wide pooling of resources (e.g., welfare) than in Europe or the US.  For instance, the taxation rate of the richest Asian country, Japan, is around 26%, roughly 10 percentage points lower than that of the US and 20–30 points lower than those of the European welfare states.  All of this means there is ample room for enhanced welfare to absorb the AI displacement shock, which in turn will not be that great because much of the economic activity is non-AI-displaceable self-employment.

Second, AI is essentially software, just as an OS is software.  What makes AI intelligent is the data fed into it.  Our brain, likewise, is just a bunch of protein molecules, and what makes us intelligent is experience, memory, and identity.  Therefore, what will make AI more equitable is the equitable availability of the data used to make AI more intelligent.  To make the distribution of data more equal, I want all of us to think about the following:

The idea is that data should be shared with one another as much as possible for the maximum benefit of society.  This is not necessarily opposed to data protection laws, which grant data subjects ownership of data about them and thereby, on the surface, hamper community use of personal data.  It can instead be an improvement on data protection laws: carving out a family of personal data that can be freely used for social discourse, for instance “publicly available data”, as Singapore, India, Canada, and Australia do.  The principles of open data and open government also demand carving some personal data out of the strictures of data protection law, such as court decision databases and other records of government agencies taking or deliberating adverse actions against their own citizens.

These improvements on existing data protection laws can enhance the equitable availability of AI-training data.  Currently, government agencies and companies justify building closed silos of personal data and not sharing them with the people, citing the concerns of data protection laws.  Some of these concerns are justified as they stand, but others need to be moderated or balanced against the people’s need for that data for participatory democracy.  Case-by-case exceptions maintain chilling effects on people wishing to use the data for AI and other socially beneficial purposes.  Categorical exceptions need to be carved out so that, within them, people can enjoy freedom of expression, open data, and democracy without worrying whether their discourse satisfies some collectivistic (or majoritarian) notion of the public interest, which can often crush beneath it pluralistic visions of a society.

In the film Ex Machina, the guru reveals toward the ending where he got the AI-training data from: the Internet.  What is on the Internet will be determined by data governance rules, including data protection laws and the exceptions to them, as one can see in what the right to be forgotten does to the availability of data.  To allow people around the world to benefit from this learning software called AI, we need to build data governance rules that allow as open access as possible to as much data as possible.  Personal data may normally be deemed individually owned, but there are circumstances that require social and communal ownership of personal data by all the members of that society.

In this line of thought, Asian countries may fare better than the US or the EU because they DO have data protection laws and at the same time have exceptions for publicly available data, allowing the balance necessary for data socialism.  Many countries still lack data protection laws altogether or are only now adopting them, but they too can enjoy the benefit of hindsight.

You can watch the whole session here.
