As we mentioned in a previous post, things in China are getting particularly tough in terms of liability for user-generated content, with two new sets of rules affecting the main Internet service providers operating within the mainland, such as Weibo (a microblogging site similar to Twitter).
In particular, the Cyberspace Administration of China (CAC) has issued two Acts that will enter into force on October 1, 2017. The first one, translated into English by China Law Translate, is called “Provisions on the Management of Internet Forum Community Services” (互联网论坛社区服务管理规定) and focuses on:
Forums, message boards, communities and other forms of services on the internet that provide users with interactive information publication community platforms
According to Articles 6 and 7 of the Act, Internet forum community service providers have an obligation to monitor the content published by users of their platforms and, upon discovering content prohibited by law, to delete it. Additionally, unlike many other legal systems, where the takedown is the provider’s ultimate obligation, here the provider must also keep records of the event and promptly report it to the State or the relevant local information office.
The second Act, known as the “Provisions on the Administration of Internet Comments Posting Services” (国家互联网信息办公室关于印发《互联网跟帖评论服务管理规定》的通知), targets a slightly different type of provider. In particular:
Services provided by Internet websites, apps, interactive communication platforms, and other communication platforms with characteristics of new media and functions of social mobilization for users to post words, symbols, emojis, photos, audios and videos, and other information in the manners of posting a topic, replying to a post, leaving a message, and “bullet screen,” among others [more on “bullet screen”, below¹]
This Act establishes monitoring obligations similar to those referred to above, but in more detail: they cover, for instance, not only text monitoring but also management systems for audio comments. As in the previous case, these “comments posting service providers” are obliged to detect and deal with illegal information in a timely manner and to report it to the relevant authorities. On top of that, if their main purpose is to provide news information, they will need to go beyond monitoring and establish review systems that screen comments before they are made public.
In summary, this is a regime very distant from the liability safe harbours that currently operate in the U.S. and Europe.
Bonus track: the second Act mentions “bullet screen” as a specific type of content. Basically, these are comments that viewers post directly onto a video, scrolling across the screen as it plays. If you still can’t picture it, there is a pretty bizarre example here.