Open Net Hosts “AI, Ethics & Data Governance: From International Trends to Korea’s New Data Laws” Conference on January 20


 


Open Net hosts the international conference “AI, Ethics & Data Governance: From International Trends to Korea’s New Data Laws” on January 20, 2020. The conference is open to all who are interested, and registration is free of charge. Lunch and simultaneous interpretation will be provided. You can find out more about the conference below.

Click to register: https://forms.gle/4Eas5uLcbLx1nPsZA

 

Date & Time: January 20, 2020 (Mon) 10:30-16:30

Location: Libertas Hall, B1F CJ Law Hall, Korea University, Seoul, Korea

Hosted by: Open Net Korea, Global Network of Internet and Society Research Centers, Harvard University’s Berkman Klein Center, Korea University’s American Law Center, Digital Asia Hub

 

As powerful artificial intelligence (AI) and algorithmic technologies are deployed, people around the globe are asking where and why lines are drawn around key issues of ethics and privacy:

  • Google uses a natural language processing algorithm to ‘read’ emails and suggest quick responses, and sometimes to report distributors of child pornography, but it has stopped suggesting advertisers relevant to the contents of your emails, for instance, recommending Mexican restaurants when you email a friend to ask about dinner at one.  Do we feel more or less infringed upon if machines, rather than humans, make these decisions? Does this change our value judgments about upload filters or intermediary liability safe harbors? 
  • Microsoft refused to supply its facial recognition technology for American law enforcement agencies’ street monitoring equipment while providing the same technology to correctional facilities in China, which involve a much smaller number of face subjects.  Is the dividing line consent to collection versus consent to comparison, and is there any real difference between the two? Does the consent-based framework deal effectively with the US government’s impending plan to use facial recognition for border checks?   
  • Amazon shut down its hiring algorithm when it could not judge female applicants fairly.  Does the solution require adding more women to the data the software is trained on? That could mean less privacy, at least at the collection stage, even if the data is later anonymized.  Facial recognition has been criticized for failing to recognize racial minorities, but some call that “a feature, not a bug”. Is “inclusive” AI necessarily good? How do you make “good” inclusive AI?  

 

The Global Network of Internet and Society Research Centers (NoC) has conducted a series of conferences and seminars under the theme of AI and Inclusion.  Digital Asia Hub has worked to bridge the gap between the Global South and the Global North from Asian perspectives. This year, Korea University Law School’s American Law Center and Open Net Korea, a civil society organization that works on technology and rights issues, join forces with Harvard University’s Berkman Klein Center for Internet & Society to bring the NoC and Digital Asia Hub together in Seoul, Korea, on January 20, 2020. 

One of the centerpieces of the conference will be the launch of the Principled AI Project, a white paper and data visualization mapping prominent AI ethics principles, presented by Jessica Fjeld of Harvard University’s Cyberlaw Clinic. 

Also in focus will be AI and data governance: how data protection law and open data initiatives affect the inclusiveness of AI.  As long as AI development continues along the lines of machine learning, governance of the training data fed into machine learning systems will have a decisive impact on AI’s contribution to sustainable and equitable development.  

We will also debate how Korea’s three data laws (the Personal Information Protection Act, the Information and Communications Network Act, and the Credit Information Act), which were amended on January 9, will affect AI and data governance in Korea. The main purpose of the amendments is to enable the processing of pseudonymous data for statistics and scientific research without the consent of the data subject.  

 

Session 1  Taking Stock of Ethics on AI: Launch of Mapping AI Ethics Principles

  • Moderator: KS Park, Professor, Korea University; Director, Open Net Korea
  • Speaker 1: Jessica Fjeld, Lecturer on Law, Harvard University Law School 
  • Speaker 2: Herbert Burkert, Professor, St. Gallen University
  • Speaker 3: Marcelo Thompson, Professor, Hong Kong University
  • Discussants: 
    • Carlos Affonso Souza, Director, ITS Rio
    • Eunpil Choi, Research Fellow, Kakao
    • Malavika Jayaram, Director, Digital Asia Hub (remote)
    • Sang-Wook Yi, Professor of Philosophy, Hanyang University

 

Session 2  AI and Data Governance

  • Moderator: KS Park, Professor, Korea University; Director, Open Net Korea
  • Speaker 1: Graham Greenleaf, Professor, UNSW
  • Speaker 2: Claudio Lucena, Professor, Paraíba State University 
  • Speaker 3: Byung-il Oh, President, Jinbonet 
  • Discussants: 
    • Dae Hee Lee, Professor of School of Law, Korea University 
    • Hoyeong Lee, Director of Center for AI & Social Policy, KISDI 
    • Kelly Kim, Legal Counsel, Open Net Korea
