[Kyoto IGF 2023] DTSP Open Forum: Unlocking Trust and Safety at Scale: Preserving the Promise of the Open Internet


Open Net participated in a workshop titled “DTSP Open Forum: Unlocking Trust and Safety at Scale: Preserving the Promise of the Open Internet” on October 11, 2023, at the Kyoto IGF. The session covered the following themes, and Open Net’s researcher Kyoungmi Oh spoke on the panel:

Description of the Session:

With billions of people across the globe logging on each day, the “trust and safety” field is rapidly growing as a key enabler of digital citizens’ ability to connect and interact with each other, within and across borders. This session will bring diverse stakeholders together to discuss how the Digital Trust and Safety Partnership’s best practices and assessment processes are driving transparency and accountability when it comes to platform governance. This event is an opportunity to hear directly from diverse stakeholders on how to strengthen such work.

DTSP’s Trust and Safety Best Practices (related to the development, governance, enforcement, and transparency of digital services and products) govern many platforms’ responses to content and conduct online. However, DTSP goes beyond merely setting best practices: the wide variety of platforms that make up its membership go through an independent, third-party-led audit/assessment process to measure the maturity of these best practices. These issues are salient at a time when the Internet policy community is grappling with questions about the relationship between the growing appetite for national-level regulations and the preservation of the open, global Internet. DTSP offers a model and tools for platform accountability at a global scale, built on the foundation of the open, interoperable Internet.

This open forum aims to seek feedback from the global Internet community about the trust and safety best practices and third-party audit/assessment models. For example, are they sufficient to capture growing social and policy interest in Internet safety? What is missing, and how should the best practices evolve to keep pace with emerging changes, regulatory trends, and threats to the open Internet? How can we build on such work to address regulatory interest in safety and risk mitigation? And what role can cross-industry best practices, norms, and principles play in broader policy conversations about promoting safety, voice, and the open Internet?

Key objectives

  • Raise awareness of DTSP with the multistakeholder internet governance community 
  • Gain feedback on the DTSP approach from participants  
  • Increase understanding of opportunities and challenges for industry best practices and standards on Trust & Safety as a means of supporting an open, interoperable, reliable, and secure Internet. 

Panelists:

David Sullivan, DTSP Executive Director
Nobuhisa NISHIGATA, Director, Computer and Data Communications Division, Telecommunications Bureau, MIC Japan
Tajeshwari Devi, Online Safety Commission, Fiji (virtual) 
Angela McKay, Google 
Brent Carey, Netsafe (virtual)
Kyoungmi Oh, OpenNet Korea

Kyoungmi Oh gave feedback on the report as follows:

First, the trust and safety definitions do not take into account the human rights harms that arise when content is taken down or otherwise censored.

Trust and Safety is defined in terms of “content- and conduct-related risks,” which are in turn defined as “illegal, dangerous, or otherwise harmful content or behavior”:

Trust and Safety.  The field and practices employed by digital services to manage content- and conduct-related risks to users and others, mitigate online or other forms of technology-facilitated abuse, advocate for user rights, and protect brand safety. 

Content- and Conduct-Related Risks. The possibility of certain illegal, dangerous, or otherwise harmful content or behavior, including risks to human rights, which are prohibited by relevant policies and terms of service.

As defined, only content, not its takedown, is deemed to cause risks. This does not sufficiently protect one important human right: freedom of expression. If one can be harmed directly by another’s content, that is because the content causes mental distress to its subject or audience. Likewise, one can be harmed by censorship, because censorship can also be discriminatory and hostile to the author’s identity or thoughts. For instance, if a posting saying “Black Lives Matter” is taken down on the ground that it favors Black lives over all lives, the author will no longer feel that the platform is a safe space for sharing information that is actually important for the prevention of police brutality. Also, content can be dangerous because it can incite others to verbal or physical attacks against its subject. However, censorship of content can also be dangerous if dissident voices and viewpoints are removed. For instance, in a society charged with religious hatred, disinformation from a majority leader or the government can trigger verbal and physical attacks, and censoring the minority’s rebuttals will harm the minority.

We are not saying that censorship by platforms necessarily infringes international human rights law, which binds state bodies, and people whose postings are censored by one platform can always post on other platforms. However, the UN Guiding Principles do require companies to try to protect human rights as well. There are only a handful of platforms in any given country, so being censored by the dominant platforms raises the same trust and safety concerns. To the extent that the framework is meant to be comprehensive, there must be concern for the trust and safety of the authors of content as well as its subjects and audience.

This is especially important given the rising trend of digital authoritarianism, in which governments increasingly become the sources of both harmful disinformation and harmful censorship.

Second, DTSP’s Safe Framework is so well thought out that it seems adaptable to any industry, not just the digital industry. I can easily see the same cycle of development, governance, enforcement, improvement, and transparency being very important to trust and safety in the pharmaceutical industry, for instance; one would just need to define trust and safety not in terms of content- or conduct-related risks but in terms relevant to that industry. That is all good. But I then worry whether the Framework sufficiently focuses on the unique aspects of the digital industries, for instance, freedom of expression, privacy, etc. The digital industries have formed the liberating and equalizing core of human civilization: search engines and platforms have given powerless individuals the same power of information and mass communication formerly available only to big companies, governments, or legacy media heavily influenced by those companies and governments. So much so that, at the time of the Jasmine Revolution, there was an international movement to nominate the Internet for the Nobel Peace Prize. Can we define trust and safety in ways that protect the unique civilizational significance of the Internet, or will DTSP become just one of the numerous consumer product safety initiatives? I think that the success of DTSP lies in whether we can answer these questions correctly.

Along that line, net neutrality is another value by which trust and safety must be defined. Net neutrality can be translated into users’ equal access to all data regardless of content, origin, or payment, as under the EU’s 2015 Open Internet Regulation. Telco practices such as selective zero-rating, network slicing, or network usage fees do interfere with users’ equal access to all data.

Third, fortunately, one way to strengthen the connection to the unique significance of the digital industries is already reflected in some of the 35 best practices: collaboration with digital rights organizations. No technology has been welcomed as much as the Internet: it was met by a new wave of numerous organizations dedicated to protecting it. These organizations and the companies can share common goals, if the companies allow themselves to deviate a little from their profit motives. DTSP asks companies to work with these organizations in the processes of product development, product enforcement, and product improvement:

Product development:  Work with recognized third-party civil society groups and experts for input on policies

Product enforcement: Work with recognized third parties (such as qualified fact checkers or human rights groups) to identify meaningful enforcement responses

Product improvement:  Foster communication pathways between the Practicing Company on the one hand, and users and other stakeholders (such as civil society and human rights groups) to update on developments, and gather feedback about the social impact of the product and areas to improve

However, the same element is needed in transparency as well. Indeed, without transparency, communication during product development, enforcement, and improvement may not be meaningful. Limited transparency with recognized human rights organizations, under appropriate non-disclosure agreements, can be very helpful in adding context and nuance to content moderation while not risking abuse by bad actors. Twitter’s Trust and Safety Council did this relatively well, sharing much more information about new products, enforcement, etc., with civil society.

This answers the other question that DTSP posed, about the difficulty of maintaining transparency without revealing information that bad actors can use for abusive purposes. Limited transparency with civil society groups should be explored further.

Fourth, the Safe Framework is more or less abstract and procedural. To assist in identifying and checking the 35 “best practices,” 45 questions are asked, but these questions are themselves too abstract and procedural. For instance, instead of defining what content should be taken down, the Safe Framework asks the following questions:

How are content reviews prioritized, and what factors are taken into consideration? 

What types of tools/systems are used to review content or manage the review process? 

What types of processes or mechanisms are in place to proactively detect potentially violating content or conduct (automated or manual)? 

Asking these questions is not a problem in itself, but it is hard to evaluate the Safe Framework based on such open-ended questions, because we do not know how content reviews could possibly be prioritized or what tools/systems could possibly be in place. I think the questions should be phrased in a yes/no format and should reflect the digital industries’ unique aspects.

Fifth, DTSP raises the following question: “DTSP is considering whether some Commitments or best practices should be given greater consideration than others when conducting assessments.”

I think that Product Enforcement and Product Transparency are the more important ones, because that is where the rubber meets the road: where the products are in direct contact with the users. What is lacking in development, improvement, and governance can be compensated for by excellent enforcement and transparency.
