Human Rights in China (HRIC) has partnered with Ranking Digital Rights and ARTICLE 19 to conduct a Community-Led Assessment of Rights Impacts in the Technology Industry (CLARITI) on X, formerly known as Twitter, Inc. The CLARITI methodology, available for use at claritihria.net, provides civil society with a structured tool to assess tech companies’ human rights practices in line with the UN Guiding Principles.
As one of the predominant online platforms for open and uncensored Chinese speech, X shapes the experience of Chinese communities in diaspora and online through its enforcement of moderation policies. HRIC has found that consistent under- and over-enforcement of moderation policies, exacerbated by AI, along with inconsistent identity verification policies, has had a disproportionately negative impact on Chinese human rights defenders, members of persecuted communities, and all those who seek to express themselves freely outside of China’s Great Firewall. In particular, account suspensions, phishing, harassment, and other restrictions resulting from X’s actions, or its inaction, have prevented these individuals and groups from exercising their rights to free speech, access to information, and freedom of association.
Read more below, or download the full report here:
Executive Summary
Human Rights in China (HRIC) is a nongovernmental organization founded in March 1989 by overseas Chinese students and scientists. HRIC’s mission is to support and strengthen domestic civil society actors through the advancement of international human rights and the institutional protection of these rights in the People’s Republic of China, including the Hong Kong Special Administrative Region (HKSAR), as well as among overseas Chinese in diaspora.
For this project, HRIC worked closely with human rights defenders and dissidents working on China issues who have been and are using X (formerly known as Twitter) as a mode of communication while circumventing the Great Firewall of China. For the purposes of this assessment, this target group of X users will be referred to as “rightsholders.” Platforms like X allow rightsholders to access and share information on the Internet that may otherwise be censored on Chinese news sites and social media. X is also a key platform for facilitating free expression and communication among the Chinese-speaking community, including around sensitive topics such as human rights.
In undertaking the assessment, we addressed the following key concerns: 1) X’s content moderation policies and their enforcement, which is overly reliant on AI and has resulted in under- and over-moderation of content, in turn leading to arbitrary account suspensions; and 2) X’s inconsistent verification system, including the recently revamped Blue Checkmark, which allows impersonation and misinformation campaigns, such as coordinated spam, harassment, and bots, to flourish. These issues significantly hinder rightsholders’ ability to use X to express their opinions freely, to share information on crucial human rights issues happening within the mainland that may otherwise never reach a global audience, and to access important information that would be censored by the Great Firewall of China. These activities are protected by Article 19 of the Universal Declaration of Human Rights, which declares that “[e]veryone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”
The importance of X for Chinese human rights defenders cannot be overstated. There are no real alternatives to X for these users: Chinese messaging platforms and social media apps like WeChat or Weibo are heavily censored, surveilled, or government-linked, and non-Chinese alternatives like Mastodon or Bluesky do not have significant usage or reach. X’s arbitrary and non-transparent decisions about content moderation and account suspension create far-reaching consequences for China-based users’ rights and safety, especially their right to freedom of expression. As a result, defender communities already marginalized in China are disproportionately affected.
For this HRIA, HRIC used the CLARITI (Community-Led Assessment of Rights Impacts in the Technology Industry) methodology to conduct an assessment of X (formerly Twitter). The methodology was developed by Ranking Digital Rights in 2023 with support from ARTICLE 19 under the Engaging Tech for Internet Freedom Initiative (ETIF). The assessment addresses salient issues experienced by rightsholders, such as arbitrary account suspensions, coordinated spam and attacks, and problems with the Blue Checkmark, i.e., a lack of transparency in verification, as well as impersonation efforts via Blue Checkmark accounts. The assessment is intended to start an important process and conversation with X to identify and address key issues, and ultimately to enable rightsholders to continue to access X freely and without encumbrance in the long run, thereby safeguarding their right to freedom of expression (including the right of access to information) and their right to privacy.
Scope
This HRIA assesses X’s content moderation and content visibility. Our target country is mainland China: though X remains banned and has no office presence in the country, the app is still used by rightsholders in the mainland, who access the platform through Virtual Private Networks (VPNs). The timeframe of the assessment is July to December 2024.
Methodology
The CLARITI methodology applies the International Bill of Human Rights as its baseline to define human rights and the UN Guiding Principles on Business and Human Rights (UNGP) as a guideline to assess the content moderation systems and practices of the company. Other legal and non-legal requirements, such as the Global Network Initiative (GNI) Principles, the Santa Clara Principles, and the EU General Data Protection Regulation (EU GDPR), are also applied in the analysis of soft law guidance available regarding how tech companies can respect human rights, as well as the broader legal and regulatory context in which the company operates.
Stakeholder Engagement
In addition to undertaking extensive secondary research (see Appendix A), HRIC directly consulted 20 rightsholders. These interviewees are prolific Chinese human rights defenders and dissidents, based inside and outside of mainland China (in diaspora), including grassroots groups, lawyers, journalists, students, and other civil society actors. We also consulted another stakeholder, an ex-Twitter employee with direct familiarity with X’s human rights processes, to gain better insights on the technical, legal, and human rights issues with regards to X’s operations and influence in mainland China.
Impact Assessment
A human rights impact assessment was conducted in line with UNGP Principles 12, 13, 14, 18, 19, 23, and 24, and impact assessment best practices. This assessment highlighted the following impacts and causes, which must be addressed by X:
Principle 13(a) of the UNGPs states: “The responsibility to respect human rights requires that business enterprises: (a) Avoid causing or contributing to adverse human rights impacts through their own activities, and address such impacts when they occur.” Through our human rights impact assessment, we found that X’s enforcement of its content moderation policies is overly reliant on AI and has resulted in under- and over-moderation of content, in turn leading to arbitrary account suspensions. These suspensions have a far-reaching impact on rightsholders, infringing their rights to freedom of expression and access to information. Arbitrary account suspensions may be temporary but can last for hours, days, weeks, or even months, meaning that rightsholders lose a valuable tool of communication during that period, in an environment that already imposes extensive restrictions on communication.
In the same vein, X’s inconsistent verification system, including the recently revamped Blue Checkmark, allows impersonation and misinformation campaigns, such as coordinated spam, harassment, and bots, to flourish, which has impacted users’ ability to freely receive and impart information.
There is no public information on what steps X is taking to address these crucial issues. Further, X does not have a dedicated human rights unit to address their impact, and the evidence shows that X has deprioritized human rights across its policies, operations, and public commitments, for example by disbanding its entire Trust and Safety Council in December 2022.
Recommendations for X
We make the following recommendations to X to uphold its responsibility to respect human rights and mitigate the adverse human rights impacts identified above:
With regard to the Blue Checkmark “for a fee,” X should reconsider its policy of granting Blue Checkmark verification solely through its paid premium subscription. Our research and stakeholders’ experiences have shown that the current system encourages impersonation attempts, disinformation campaigns, spear-phishing attacks, and hacking, fundamentally preventing the target group from exercising their right to freedom of expression and contributing to information threats and transnational repression. The current Blue Checkmark verification requirements should be revised to prioritize information accuracy.
As a stop-gap measure, X should act rapidly to strengthen due diligence and accurate verification of information, including the identities of Blue Checkmark holders. This could be done by improving X’s current content moderation AI algorithms to precisely address the issues that have arisen from the Blue Checkmark, and by increasing the number of human reviewers to reinforce these efforts.
In the mid and long term, we recommend that X’s Blue Checkmark verification revert to a system that emphasizes due diligence and accuracy of information, with adequate human rights safeguards to prevent overreach. To do this, X would need to take active steps to ensure and verify that an X account is owned by the person or organization it claims to represent. At the same time, verification requirements should be cognizant of the existing real-name and ID verification regulations under the Cybersecurity Law in China.
Content moderation should not rely so heavily on AI models; reducing this reliance would address both under- and over-moderation, both of which significantly affect rightsholders. An immediate solution would be to employ more human moderators with specialized training. These human moderators, based outside the PRC, should be context-aware, i.e., have a good understanding of the issues involved, and linguistically diverse. A long-term solution would be to refine the AI models through better-trained data sets and models, enhanced human control over decisions made by AI, and adherence to legality, necessity, and proportionality in content moderation decisions.
Algorithms should prioritize accuracy of information rather than simply favoring high-engagement, low-credibility content. Proactive content moderation, whether automated or human, should be directed only at genuine problems such as impersonation attempts, disinformation campaigns, spear-phishing attacks, and hacking, and should not extend into over-moderation, where relevant and legitimate content is taken down.
Account suspension, especially of human rights defenders, must have a clear basis and must not occur at will or without notification. There must be remedies for reinstatement: for instance, transparently communicating to users what content is being moderated and why, providing appeal mechanisms, and improving user control mechanisms such as blocking and reporting.
Fundamentally, X should be committed to human rights and their indivisibility. X cannot claim to uphold freedom of expression while remaining silent or taking contradictory approaches on other concerns, such as privacy and access to information, as these rights are equally important to all users. This requires a holistic change in direction, policies, and systems.
A human rights unit within X would be able to address some of the above concerns on an immediate basis, in particular those affecting human rights defenders who use the platform to circumvent the Great Firewall of China. A dedicated human rights unit would serve as a direct grievance mechanism dealing specifically with human rights related complaints and issues, whereas the current system does not adopt a human rights-centered approach.
We look forward to collaborating with X to strengthen its efforts to protect and respect the human rights of its users in China and the diaspora, and to remedy adverse impacts on them.
Contact: etif@article19.org or communications@hrichina.org.