China’s Sweeping Recommendation Algorithm Regulations in Effect from March 1
This article was originally published on September 7, 2021, and was last updated on January 6, 2022, to reflect the recent updates to the regulations on recommendation algorithms.
The Cyberspace Administration of China has passed a new set of recommendation algorithm regulations that take significant steps in regulating how the technology can be used. If enforced as intended, the regulations will have a major impact on companies that rely heavily on the technology, such as social media applications, e-commerce platforms, and news sites, requiring them to increase oversight and make significant technical adjustments.
On December 31, 2021, the Cyberspace Administration of China (CAC), China’s internet watchdog, passed the Internet Information Service Algorithm Recommendation Management Regulations, a new set of regulations that rein in the use – and misuse – of recommendation algorithms.
The regulations are the most extensive set of rules governing the use of recommendation algorithms anywhere in the world to date, demanding more transparency over how the algorithms function and affording users more control over the data that companies can use to feed their algorithms. But the regulations also go beyond addressing user rights, mandating that algorithm operators follow an ethical code for cultivating ‘positive energy’ online and preventing the spread of undesirable or illegal information.
Many of the issues tackled in these new regulations – discriminatory data practices, opaque recommendation models, labor violations, and more – have already been addressed in other legislation, such as the Personal Information Protection Law (PIPL) and the Data Security Law, as well as in local rules, such as Shenzhen’s data protection regulations.
The regulations will come into effect on March 1, 2022.
Who is affected by the recommendation algorithm regulations?
The regulations define the ‘application of algorithm recommendation technology’ as the use of algorithm technologies, such as content generation and synthesis, personalized recommendation, sorting and selection, content retrieval and filtering, or scheduling and decision-making, to provide users with content and information.
The regulations have far-reaching consequences for the tech industry. Personalized recommendation algorithms are used extensively by social media apps for content recommendation and targeted advertising, so companies such as Douyin (the Chinese version of TikTok) operator ByteDance and WeChat operator Tencent are prime targets of the new regulations. These two companies also own news aggregator apps, namely Jinri Toutiao and Tencent News, which will also have to comply.
E-commerce companies and service platforms that recommend products and services to users based on their past activity and preferences will also be affected. Meanwhile, food delivery and logistics apps will have to adjust how they use algorithms to allocate orders and schedule their workers’ hours.
Any foreign company that operates an app or online service that uses algorithms for any of the above-mentioned purposes will also have to adhere to the regulations.
How do the regulations restrict the use of recommendation algorithms?
Under the new regulations, algorithm operators will have to update their technology to comply with new technical requirements, from auditing keywords to giving users access to and control over their personal data profiles. In addition, operators will have to adjust the direction of their recommendation algorithms to adhere to ‘mainstream values’ and are prohibited from using algorithms for a range of illicit behavior, such as implementing anti-competitive practices and engaging in price discrimination.
Technical and policy requirements for recommendation algorithm providers
Below are the technical and policy requirements outlined in the new regulations.
Technical and Policy Requirements for Recommendation Algorithm Operators

| Requirement | Relevant provisions |
| --- | --- |
| Establishing a management system and service regulations | Article 7: Establishing and improving management systems and technical measures for areas including algorithm mechanism review, science and technology ethics review, user registration, information release review, data security and personal information protection, anti-telecom and online fraud, security assessment and monitoring, and security incident response. |
| Review and evaluation | Article 8: Regularly reviewing, evaluating, and verifying algorithm mechanisms, models, data, and application results. |
| Moderation of illegal and undesirable content | Article 9: Strengthening content management by building a characteristics database for identifying illegal and undesirable information and improving storage standards, rules, and procedures for database entries. Illegal information must immediately be halted from dissemination once it is discovered and must be eliminated or prevented from spreading further through other measures; the incident must also be recorded and reported to the CAC. |
| Ecosystem management | Article 11: Strengthening ecosystem management of the main pages of the algorithm recommendation service by establishing robust mechanisms for manual intervention and enabling users to independently choose content. Article 12: Implementing content deduplication, dispersal intervention, and other content management strategies, and ensuring rules on content retrieval, sorting, selection, promotion, and display are transparent and easy to understand, in order to avoid adverse effects on users and to prevent and minimize disputes. |
| Requirements for news dissemination | Article 13: If providing online news services, obtaining an internet news information service license in accordance with the law and carrying out news gathering, editing, publishing, republishing, and dissemination services in a compliant manner. |
| User control over recommendation keywords | Article 17: Providing users with options that are not based on their user profiles, or providing users with a convenient option to turn off the algorithm recommendation service, and immediately stopping related services when a user chooses to do so. Providing users with the ability to select or delete the user tags used for the algorithm recommendation service. If the application of the algorithm significantly impacts users’ rights and interests, providing users with an explanation and bearing the corresponding responsibility. |
| Guaranteeing safe use of algorithms for elderly users | Article 19: Monitoring, identifying, and handling online fraud information to enable the elderly to safely use algorithm recommendation services. |
| Protection of labor rights | Article 20: Establishing and improving algorithms for platform order allocation, remuneration and payment, working hours, and rewards and penalties, and guaranteeing workers’ legal rights and interests, such as remuneration, rest time, and holidays. |

Note: The above is an abridged translation prepared by the China Briefing team. The content is intended for reference only. Source: Cyberspace Administration of China
Of the above requirements, perhaps the most significant are those outlined in Article 17, which grant users the power to control their own profiles. This will require developers to create an interface where users can view their profiles and actively select and remove keywords used by the recommendation algorithm – a first for algorithm regulation anywhere in the world.
In addition, the regulations stipulate that the algorithm operators must also clearly inform users of the circumstances in which they are using the recommendation algorithm and publish the basic principles, intentions, and operating mechanisms of the algorithm recommendation service.
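In practice, the Article 17 obligations translate into concrete product features: a way for users to view and delete the tags that feed the recommender, and a switch that disables personalization entirely. The sketch below is one possible shape for such a backend, assuming a simple Flask service; the endpoint paths, field names, and in-memory profile store are hypothetical illustrations rather than anything prescribed by the regulations.

```python
# A minimal sketch of Article 17-style user controls, assuming a Flask backend.
# All endpoint paths, field names, and the in-memory store are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory store of per-user recommendation profiles;
# a real service would keep this in a database behind access controls.
user_profiles = {
    "user_123": {
        "personalization_enabled": True,
        "tags": ["outdoor sports", "budget travel", "home cooking"],
    }
}


@app.route("/profile/<user_id>/tags", methods=["GET"])
def view_tags(user_id):
    """Let the user see which interest tags feed the recommender."""
    profile = user_profiles.get(user_id, {})
    return jsonify(profile.get("tags", []))


@app.route("/profile/<user_id>/tags/<tag>", methods=["DELETE"])
def delete_tag(user_id, tag):
    """Let the user remove a tag so it no longer drives recommendations."""
    profile = user_profiles.setdefault(
        user_id, {"personalization_enabled": True, "tags": []}
    )
    profile["tags"] = [t for t in profile["tags"] if t != tag]
    return jsonify(profile["tags"])


@app.route("/profile/<user_id>/personalization", methods=["POST"])
def set_personalization(user_id):
    """A convenient switch to turn personalized recommendation off entirely."""
    payload = request.get_json(silent=True) or {}
    enabled = bool(payload.get("enabled", False))
    profile = user_profiles.setdefault(user_id, {"tags": []})
    profile["personalization_enabled"] = enabled
    return jsonify({"personalization_enabled": enabled})
```

Whatever the implementation, the ranking pipeline would need to check the opt-out flag and the edited tag list on every request, since the regulations require related services to stop immediately once a user opts out.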
The content moderation requirements stipulated in Article 9 also make platforms liable for any illegal or undesirable content that is recommended to users through algorithms. This is an extension of similar stipulations in China’s Cybersecurity Law that hold platforms accountable for hosting illegal or undesirable content.
Technology platforms have been penalized for hosting such content on numerous occasions in the past, and the new regulations would expand the scope of liability and require more scrupulous content moderation mechanisms to be written into the code of companies’ algorithms.
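One way to picture such a mechanism is as a moderation pass that screens recommendation candidates against the operator’s characteristics database before anything is served, while recording what was blocked so it can feed the reporting workflow. The sketch below illustrates that flow under simplified assumptions: the keyword-set ‘database’, item structure, and logging are hypothetical, and a production system would combine keyword matching with classifiers and human review.

```python
# A simplified sketch of a pre-serving moderation pass in the spirit of
# Article 9. The "characteristics database" is modeled as a keyword set;
# the item format and logging are hypothetical placeholders.
import logging
from datetime import datetime, timezone

logger = logging.getLogger("moderation")

# Hypothetical characteristics database of prohibited terms.
CHARACTERISTICS_DB = {"prohibited_term_a", "prohibited_term_b"}


def screen_candidates(candidates):
    """Split candidate items into cleared and flagged lists.

    Flagged items are withheld from recommendation, and each hit is
    recorded so it can feed the operator's reporting workflow.
    """
    cleared, flagged = [], []
    for item in candidates:
        text = item.get("text", "").lower()
        if any(term in text for term in CHARACTERISTICS_DB):
            flagged.append(item)
            logger.warning(
                "Blocked item %s at %s",
                item.get("id"),
                datetime.now(timezone.utc).isoformat(),
            )
        else:
            cleared.append(item)
    return cleared, flagged
```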
Finally, the stipulations regarding the protection of labor rights in Article 20 are also not entirely new. In July 2021, the State Administration for Market Regulation (SAMR) released new policy guidelines protecting the rights of delivery drivers, which, among other labor rights demands, required safer and more reasonable use of algorithms to allocate deliveries and schedule drivers’ working hours. The new algorithm regulations reinforce these workers’ rights, giving regulators another tool with which to penalize companies that violate the rules.
Ethical requirements for recommendation algorithm providers
The new regulations also require operators to update and design their algorithms to adhere to ‘mainstream values’ and promote ‘positive content’. They also require operators to create means for preventing or reducing the spread of illegal or undesirable content and preventing criminal activity, which will have to be built into the platform systems.
Below are some of the ethical requirements of algorithm operators.
Ethical Requirements for Recommendation Algorithm Operators

| Requirement | Relevant provisions |
| --- | --- |
| Promotion of ‘positive content’ | Article 6: Adhering to mainstream values and optimizing the recommendation mechanism to spread positive energy and promote ‘algorithms for good’. Article 8: Not setting up algorithm models that violate public order and good customs, such as models that induce users to over-indulge or over-consume. Article 11: Actively displaying content that adheres to mainstream values on homepages, lists of trending searches, recommended picks, pop-up windows, and other prominent places. |
| Protection of minors | Article 18: Fulfilling obligations to protect minors online in accordance with the law, and facilitating minors’ access to content that is beneficial to their physical and mental health by developing usage and service models appropriate for minors. |
| Protection of the elderly | Article 19: Protecting the legal rights and interests of the elderly and fully considering their needs for travel, medical treatment, consumption, and handling daily affairs. Providing intelligent, elderly-friendly services in accordance with relevant national regulations, and monitoring, identifying, and handling telecom and online fraud information in accordance with the law. |

Note: The above is an abridged translation prepared by the China Briefing team. The content is intended for reference only. Source: Cyberspace Administration of China
For long-time observers or users of China’s internet, the above articles will likely come as no surprise. China has passed several laws that require platforms and service providers to prevent the spread of content that violates the law or crosses the Party line.
The above stipulations expand the scope of algorithm operators’ responsibilities from moderating and preventing illegal content on their platforms to actively promoting ‘positive’ content that follows the Party line. This largely consists of content that is patriotic, family-friendly, and focused on positive stories in line with the ‘core socialist values’ of the CCP, while avoiding content that promotes undesirable behavior – extravagance and over-consumption, violent or anti-social behavior, sexual promiscuity, excessive adoration of celebrity idols and other public figures, and political activism, to name a few.
While some of these requirements are familiar – China has strict internet censorship laws – others are part of a newer trend. Cracking down on content that promotes over-consumption and extravagant lifestyles is a more recent development and has been accelerated by the government’s push to promote ‘common prosperity’ and reduce income inequality. High-profile figures have been reprimanded for excessive displays of wealth in recent years, and the prohibition of content promoting such lifestyles can be seen as a continuation of these efforts.
The regulations also indicate a marked change in how the government sees the roles of big tech companies and platforms in society. It will no longer be enough to simply comply with regulations and prevent criminal behavior; companies must also make a concerted effort to ensure a safe and ideologically healthy online environment for all users.
This is clear from the requirements for protecting the rights of more vulnerable members of society, such as minors and the elderly (Articles 18 and 19). Companies must consider how their services can help users (such as by pushing content to the elderly that is beneficial for their medical needs), and not simply generate a profit.
Prohibited behavior for recommendation algorithm providers
The new regulations also place tight restrictions on the application of recommendation algorithms, reining in the misuse of these systems to gain an upper hand over competitors or manipulate users to spend more time or money on apps.
Below are some of the activities prohibited in the new regulations.
Prohibited Activity for Algorithm Operators

| Prohibited activity | Relevant provisions |
| --- | --- |
| Promoting illegal activities | Article 6: Using recommendation algorithms to engage in illegal activities, such as those that threaten national security, disturb economic and social order, or infringe upon the rights and interests of others, or using recommendation algorithms to spread illegal information. Article 10: Adding illegal or undesirable keywords to a user’s interests or using them as user tags to push information. |
| Faking or manipulating content or data | Article 13: For news providers, generating or aggregating fake news, or disseminating news from sources outside the scope permitted by national regulations. Article 14: Using algorithms to falsely register accounts, illegally trade accounts, manipulate user accounts, or fabricate likes, comments, forwards, or page views in order to inflate traffic or hijack traffic. Using algorithms to interfere with the presentation of information online, such as by blocking information, over-recommending content, manipulating rankings or search results, or controlling trending searches and topics, or engaging in behavior that influences online public opinion or evades supervision and management. |
| Engaging in anti-competitive behavior | Article 15: Using algorithms to place unreasonable restrictions on other online information service providers, or to hinder or disrupt the normal operation of their lawful online information services. |
| Endangering the health and wellbeing of minors | Article 18: Pushing information to minors that may lead them to imitate unsafe behavior, violate social ethics, or develop bad habits, or that may otherwise harm their physical and mental health. Algorithm recommendation services must not be used to induce internet addiction among minors. |
| Engaging in discriminatory practices | Article 21: Imposing unreasonable differential treatment on consumers in the sale of products and services, such as price discrimination based on consumers’ preferences, transaction habits, and other characteristics. |

Note: The above is an abridged translation prepared by the China Briefing team. The content is intended for reference only. Source: Cyberspace Administration of China
The PIPL, which came into effect on November 1, 2021, tackles some of the issues mentioned above, such as prohibiting price discrimination and other discriminatory practices in automated decision-making. Meanwhile, the stipulations in Article 15 echo China’s antitrust rules, which prohibit the use of algorithms to restrict market competition or fix prices.
In another significant move, the regulations directly prohibit operators from adding illegal or undesirable keywords to user profiles and from using such tags to push information to them.
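A straightforward way to honor this prohibition is to validate every tag against a blocklist before it is written to a user’s profile, so that prohibited keywords can never become a basis for targeting. The sketch below shows such a guard in isolation; the blocklist contents and profile structure are hypothetical.

```python
# A minimal sketch of a guard that refuses to record prohibited keywords
# as user interest tags. The blocklist and profile layout are hypothetical.
PROHIBITED_TAGS = {"prohibited_topic_x", "prohibited_topic_y"}


def add_interest_tag(profile: dict, tag: str) -> bool:
    """Record a tag on a user's profile only if it is not prohibited.

    Returns True if the tag was stored, False if it was rejected.
    """
    normalized = tag.strip().lower()
    if normalized in PROHIBITED_TAGS:
        # Prohibited keywords must never enter the profile or be used
        # as a basis for pushing content to the user.
        return False
    profile.setdefault("tags", []).append(normalized)
    return True
```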
The obligations to protect minors outlined in Article 18 will likely mean that video formats such as pranks or internet challenges that encourage viewers to replicate and share similar content on social media – popular among younger users around the world – will not be welcome on Chinese apps.
The long-term trend for regulating the internet
It is clear the Chinese government is taking a much more hard-hitting approach than many other countries when it comes to cracking down on illegal and undesirable online content. Although the scope of content targeted by China’s internet regulations covers topics and areas that would be considered acceptable in other countries, it also tackles head-on issues that many around the world are grappling with – fake news, misinformation, online abuse, and so on.
By placing the responsibility for moderating and prohibiting such content at the feet of the platforms themselves, China is taking a very different approach from many western countries, where platforms are largely protected from legal repercussions for content hosted on their platforms, and liability instead falls on the individual who posted the content.
Under China’s regulations, violators who refuse to correct issues or whose actions have had serious consequences are liable for fines of between RMB 10,000 (US$1,568) and RMB 100,000 (US$15,680) – an insignificant amount for large technology companies. However, as many of the regulations overlap with those outlined in other legislation, some cases could culminate in much larger fines. In addition, public naming and shaming tactics, such as inclusion on an ‘undesirable entity’ list, could also bring larger companies with serious violations to heel.
These new regulations will be difficult for many companies to implement, but they will likely also be difficult for regulators to enforce. Smaller companies with limited development budgets may find it hard to update their systems to comply and employ the manpower needed to moderate the content. It is therefore likely that many will not fully comply in the short term, and that fines will be meted out inconsistently.
Some of the requirements also leave a great deal open to interpretation (what kind of content is considered “beneficial to the health of body and mind”, exactly?), which may make it harder for companies to comply and update their systems accordingly. This will most likely lead to mistakes and misunderstandings in the beginning.
However, companies that appear to be making a concerted effort to comply with the law and actively communicate with the authorities are likely to be looked upon more favorably, even if compliance with the regulations (and the authorities’ interpretation of them) is imperfect.
Companies that operate in both the Chinese and overseas markets will likely have to decouple their operations to ensure compliance with the Chinese regulations while maintaining the more open approach to internet governance adopted elsewhere.
This is again not a new trend; China’s strict internet laws have already led several foreign internet companies, particularly those that enable the distribution of information (Google, Twitter, Facebook, and most recently, Microsoft), to shut down or significantly scale back their operations in China.
For companies operating in less sensitive areas, the changes and compromises will be more technical than ethical, such as redesigning algorithms to enable users more control over their data and building mechanisms to prevent fraud and other illegal behavior.
If your business requires assistance to set up a compliant IT system in China, you can reach out to our team by sending an email to technology@dezshira.com.
About Us
China Briefing is written and produced by Dezan Shira & Associates. The practice assists foreign investors into China and has done so since 1992 through offices in Beijing, Tianjin, Dalian, Qingdao, Shanghai, Hangzhou, Ningbo, Suzhou, Guangzhou, Dongguan, Zhongshan, Shenzhen, and Hong Kong. Please contact the firm for assistance in China at china@dezshira.com.
Dezan Shira & Associates has offices in Vietnam, Indonesia, Singapore, United States, Germany, Italy, India, and Russia, in addition to our trade research facilities along the Belt & Road Initiative. We also have partner firms assisting foreign investors in The Philippines, Malaysia, Thailand, Bangladesh.