Bulletin November 2018 (Vol. 20 No. 1)

China's AI Approach to Information Control

Censorship is an old-school idea compared to the sophisticated data-driven approach China is taking to control the spread of information in the country.

Denouncing negative information as 'fake news' has become a depressingly common tactic of politicians around the world. In China, this has been turned into a whole industry. Debunking news and news sources is one of a spectrum of activities that the state is using to control information and its flow, according to Dr Fu King-wa of the Journalism and Media Studies Centre.

Dr Fu has been leading research on social media censorship in Mainland China for nearly a decade, but in the past couple of years he has had to expand that remit because censorship has become only the crudest form of control.

"Censorship is what I call Control 1.0, which is used to control political content and regulate the news media and social media. But now we have Control 2.0, which is about manipulating public opinion to make people think in a different direction," he said.

This involves deploying the '50-cent army' (wu mao dang), who are paid a small amount of money to cast doubt on social media postings that are unfavourable to the government. In addition, if people click on links embedded in unfavourable postings, they may automatically receive a message that the item is 'misinformation' and be given a link to 'learn more' about what is wrong with it.

Censorship can backfire

A prime example of this was after the 2015 Tianjin blast, in which a storage facility blew up, killing 173 people and injuring hundreds more. On Weiboscope, a platform Dr Fu created to track social media activity, he detected questions being posted asking why dangerous explosive material was stored near a residential area in contravention of the law, and speculating that corruption was involved. There was also speculation that a CNN reporter trying to cover the story had been arrested. A combination of censorship and debunking was subsequently deployed to dampen the rumours.

"For more politically sensitive issues, like attacking the responsible government officers or uploading photos of people protesting against the government on the street, there was more likely to be censorship. But less sensitive issues, like the discussion about the CNN reporter, were more likely to be debunked.

"In fact, we find that in some cases censorship wasn't very effective. If people were hungry for information, no matter if it was censored, they would still look for it. There was one story questioning the number of casualties in the blast that was censored. This was information that people wanted to know, so even though it was censored, they went looking for it."

[Photo caption: A censored Weibo post about a CNN reporter being attacked during a live broadcast.]

Still, as information control tactics have evolved, people's social media usage has moved away from the more public domains of Sina Weibo to the semi-public space of WeChat's public accounts. In response, Dr Fu launched WeChatscope in 2018 to track the activities of more than 3,000 accounts that post on social affairs, including their posts, user engagement and removal of posts. This makes it possible to tackle another form of information control – self-censorship – and Dr Fu has detected more cases where people delete posts or refrain from re-posting them. He has also noticed that different media outlets remove their content at the same time, suggesting they may have been directed to do so.

[Box: WeChatscope (beta version) is developed to collect data from a selected panel of WeChat public accounts for a better understanding of the role of the platform in content censorship, information distribution, user engagement, platform intervention, and connectivity enabled by the technology.]

Information control tactics are also evolving into the international arena. China recently tried to get Cambridge University Press to block access to politically-sensitive articles, and pressured airlines to drop Hong Kong and Taiwan as independent destinations and place them under China.

Preparing for Control 3.0

Dr Fu believes the government is using these information control tactics to prepare its citizens for 'Control 3.0' – the social credit scoring system that will roll out in 2020. The operational details are still unclear, but it is quite possible that a person's social media activities could count towards their score, which may determine such things as whether they can purchase airplane tickets or send their children for private education.

"How can a system like social credit work? There has to be a context in which the citizens get used to these multiple ways of having information controlled by the government. Control 1.0 and 2.0 have created an atmosphere over the past 10 years of collecting multiple kinds of information from people who have no way to say no. They are giving up a lot of privacy in exchange for services," he said.

China is not the only place trying to use technology to monitor its citizens. A new book in the United States, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor by Virginia Eubanks, shows how some American states and cities have developed automated systems to assess and track citizens on social benefits.

"That book is about punishing poor people. But in China it's not just to punish the economically-disadvantaged but also political dissidents," he said, adding: "AI [artificial intelligence] does a lot of good things for society. But there are a lot of issues to sort out about how this works."

The WeChatscope website can be found at wechatscope.jmsc.hku.hk
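The kind of analysis described above — spotting posts that disappear between crawls, and noticing when several outlets remove content at the same time — can be sketched in a few lines. This is only an illustration of the general technique, not WeChatscope's actual code; the post IDs, snapshot format, and both helper functions are hypothetical.

```python
# Minimal sketch (hypothetical, not WeChatscope's implementation) of
# detecting removed posts and clustering near-simultaneous removals.
from datetime import datetime, timedelta

def detect_removed(prev_ids, curr_ids):
    """Posts seen in an earlier crawl of an account but missing from the
    current crawl were either self-deleted or taken down."""
    return sorted(set(prev_ids) - set(curr_ids))

def simultaneous_removals(removal_times, window_hours=1):
    """Group removals from different outlets that fall close together in
    time -- the pattern noted above as a hint of directed takedowns.
    removal_times: list of (outlet, removal datetime) pairs."""
    events = sorted(removal_times, key=lambda e: e[1])
    clusters, current = [], []
    for outlet, t in events:
        # start a new cluster when the gap to the previous removal is large
        if current and (t - current[-1][1]) > timedelta(hours=window_hours):
            clusters.append(current)
            current = []
        current.append((outlet, t))
    if current:
        clusters.append(current)
    # keep only clusters where more than one outlet removed content together
    return [c for c in clusters if len({o for o, _ in c}) > 1]

# Example: two crawls of one account's visible posts (hypothetical IDs)
snapshot_t0 = ["post-001", "post-002", "post-003"]
snapshot_t1 = ["post-001", "post-003"]
print(detect_removed(snapshot_t0, snapshot_t1))  # -> ['post-002']
```

Real pipelines would of course also have to distinguish routine self-deletion from directed removal; the time-clustering heuristic is one weak signal, which is why the research pairs it with content analysis.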
