“Three Layers of Exclusion” – 12.20.2017 IGF Geneva WS241 Artificial Intelligence and Inclusion

Jan 4, 2018 | Innovation and Regulation, Open Blog

There are three layers of exclusion that likely prompted the Brazilian conference on AI and inclusion: (1) economic exclusion; (2) algorithmic exclusion; and (3) data inequity.

Firstly, people are worried that only a few players have access to AI while others are displaced from their jobs, worsening the economic inequity of pre-AI capitalism, in which capital ownership already concentrated wealth. However, every technology in human history has displaced human labor while creating new jobs. AI's displacing impact will dwarf that of all technologies that came before it, but the solution remains the same: welfare and re-education, most recently punctuated by proposals for a basic income.

Secondly, people are concerned that algorithmic decision-making may, through automation, deepen biases previously held by people. Insurance underwriting and recruitment, if fully automated, may eliminate by probabilistic triage those candidates with unconventional merits, or from minority cultural or racial backgrounds, who have not previously flourished under the criteria favored by the companies using the algorithm. Microsoft's chatbot Tay was an extreme example: programmed to become a popular Twitter user, it achieved that goal by imitating the most sexist and racist human users. Yet Tay's example also points to a solution. An algorithm's results depend on the instructions given to it. If we do not like the result, we can hard-code better results into it: algorithmic affirmative action. If formulaic probabilistic triage would exclude racial minorities, we can inject a weighting factor to make the result more inclusive. What to hard-code into an algorithm is not an AI problem but a political, human problem.
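As a minimal sketch of the "weighting factor" idea above: assume a hypothetical automated screen that produces a raw score per candidate; a per-group multiplier, chosen as a policy decision rather than learned from data, is applied before ranking. The function name, data layout, and weight values here are all illustrative assumptions, not any real system's API.

```python
# Hypothetical sketch: adjust automated-triage scores with a
# policy-chosen per-group weighting factor before ranking.

def adjust_scores(candidates, group_weights, default_weight=1.0):
    """Multiply each candidate's raw score by the weight for its group.

    candidates    -- list of (name, group, raw_score) tuples
    group_weights -- policy-chosen multipliers per group label
    """
    adjusted = []
    for name, group, raw_score in candidates:
        weight = group_weights.get(group, default_weight)
        adjusted.append((name, raw_score * weight))
    # Rank highest adjusted score first, as a triage step would.
    return sorted(adjusted, key=lambda pair: pair[1], reverse=True)

# Illustrative data: a flat screen would rank A above B; the policy
# weight compensates for criteria known to disadvantage B's group.
candidates = [("A", "majority", 0.80), ("B", "minority", 0.75)]
ranking = adjust_scores(candidates, {"minority": 1.2})
print(ranking)  # B now ranks first
```

The point of the sketch is that the weight lives outside the model: it is an explicit, auditable political choice, which is exactly the sense in which the post calls this a human problem rather than an AI problem.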

Thirdly, properly functioning AI requires training data in enormous quantities. AI is just a program, like the Windows OS, as you saw in the movie Her, and it can be made available by downloading copies or through the cloud. However, not everyone will have access to the data with which AI can be trained, and the availability of data is what will make or break those wishing to harvest the benefits of AI. Data governance is the key to making AI inclusive.

For instance, data protection law has protected people's ability to control data about themselves and has prohibited data controllers (governments and companies) from using people's data at will, thereby lessening data inequity. At the same time, some parts of data protection law, such as the right to be forgotten, have worsened data inequity by suppressing online access to data, which can then be obtained only through brute-force methods available to the rich. Data protection law has also been used as an excuse for government agencies not to disclose data that people could use to participate in governance and to strengthen the social stock of knowledge, for example, databases of judicial decisions. Data protection and open data are both important goals, and how we weave them together will shape how inclusive AI becomes.

You can watch the whole session here.
