Lujain Ibrahim

Roundtable Summary - Social Media's Dis/Misinformation Problem

Introduction


News consumption today largely occurs in platform-dominated media environments. In the Asia-Pacific region, various combinations of YouTube, Facebook, WhatsApp, LINE, Instagram, WeChat, Naver, Kakao, and TikTok are now the primary spaces where people find news. The spread of dis/misinformation on social media impacts elections, inter-faith relations, pandemic measures, and public trust at large, and platforms and their algorithmically controlled feeds play a critical role in ‘manufacturing consent’ in society. The dis/misinformation problem on social media is accordingly one of the key concerns in platform governance today.

Governments in the APAC region have been making prominent regulatory interventions to tackle the spread of dis/misinformation online. Singapore’s Protection from Online Falsehoods and Manipulation Act (POFMA) came into effect in 2019. Indonesia is reportedly proposing new rules to compel platforms to take down “unlawful content”, with criminal liability and large fines for non-compliance. In Europe, the European Commission recently put out an updated 2022 Code of Practice on Disinformation, recognised under the co-regulatory framework of the Digital Services Act (DSA).

In the face of government scrutiny and wider societal backlash, platforms have made attempts at self-regulation. Facebook has its own ‘Community Standards’ and has created an Oversight Board of independent experts to make recommendations on upholding them. Platforms also employ armies of human moderators and AI tools to detect and take down certain types of content, and they work closely with NGOs and media organisations to integrate fact-checking into their services.


A transcript and video recording are available here. The panel featured three key speakers: Wolfgang Schulz, Research Director at the Alexander von Humboldt Institute for Internet and Society; Chris Fei Shen, Associate Head at the City University of Hong Kong; and Nurma Fitrianingrum, Good Governance Project Officer at the Tifa Foundation. This summary captures key themes, ideas, and policy recommendations made during the event.


 

Roundup


EU


On the European Union (EU), the discussion covered national-level laws like Hungary's as well as the EU-wide DSA. Wolfgang Schulz expressed concerns about Hungary's law, which punishes statements deemed false or distorted by the government: this leaves too much room for abuse and may limit free speech.

On the DSA, it is worth noting that it is a binding instrument that refers to national definitions of legal and illegal content without defining disinformation itself; what counts as illegal content is determined at the national level and then enforced through the DSA's mechanisms. The DSA focuses on procedures and structures, such as effective complaint mechanisms and trusted flaggers to address illegal content. It also provides for researcher access to data and takes a hybrid approach to private regulation that recognises the role of community standards. In addition, the DSA requires risk assessments of specific service elements that may contribute to misinformation, as well as the establishment of codes of conduct for companies. Finally, Wolfgang Schulz notes the European Commission's Code of Practice on Disinformation, which seeks to establish voluntary industry rules and monitor compliance before enacting stricter measures if necessary. Overall, a variety of measures are necessary to combat misinformation, as different types of disinformation require different tools.


Indonesia


Indonesia has criminalized the spreading of false information. The Ministry of Communication and Informatics (MOCI) has enacted technical regulations on content moderation, classifying prohibited content as ‘any content that creates public disturbance and disrupts public order’.


This has been criticized by civil society organizations and activists, who fear that the government will misuse this clause to limit freedom of speech and civic space. The MOCI is the only actor that decides on the removal of prohibited content; there is no mechanism for accountability or transparency. Platforms have four hours to remove ‘urgent’ content, such as material related to terrorism, child pornography, and content that disrupts public order, and 24 hours for other prohibited content. Failure to remove content within these time frames results in administrative fines, and if a platform still fails to remove it, the government will order internet service providers to block access to that platform without any court process.


Nurma Fitrianingrum notes that the lack of proper due process and deliberative decision-making is a central problem in this regulation. She adds that the government has set very unrealistic time frames for removal, especially the four-hour limit for ‘urgent material’, and that it is unclear how urgent and non-urgent disinformation or prohibited content are distinguished. Platforms offer limited transparency about their content moderation, and this has been met with repressive regulation and a matching lack of accountability from the government.


Hong Kong

In a discussion about fact-checking and its effectiveness in correcting political attitudes and reducing political polarization, Chris Fei Shen shares his preliminary research findings: exposure to fact-checking cannot change people's political attitudes, but it can slightly reduce affective and political polarization. He adds that while fact-checking can help correct misperceptions, there can be partisan biases in the selection of topics to fact-check, and different types of fact-checking organizations achieve varying levels of effectiveness in correcting misinformation.


One study by a Chinese University of Hong Kong scholar reached a similar conclusion: different fact-checking organizations in Hong Kong conduct fact-checking professionally but show a natural bias in topic selection. Shen suggests that fact-checking organizations should provide context and details rather than simple true-or-false verdicts when addressing misinformation. He also points to the vulnerability of certain groups to misinformation, including disadvantaged people of low socio-economic status, those with low media literacy, and those at ideological extremes. Political polarization, he adds, is deeply intertwined with misinformation, as partisans are more likely to believe emotionally laden information. While regulation could be one solution to misinformation, multiple approaches are needed, including fostering a healthy ‘fact culture’ through critical thinking education. He concludes that misinformation will always be present, and living with it is inevitable.



Role of Platforms

The discussion also covered the responsibility social media platforms could take in fostering a fact-checking culture to combat the spread of false information. The speakers suggested that platforms such as Facebook could fund or organize scholars and fact-checking organizations, as well as design algorithms that deliver fact-checking information to users. It was also noted that Chinese social media platforms have a strong presence in fact-checking, despite their tendency to avoid sensitive political issues, and that other platforms could learn from them.


Recommendations

The speakers offered several recommendations to mitigate the issues discussed. One is for governments to introduce incentives or disincentives, such as changes to tax policy, that push platforms to increase their transparency and accountability. Another is to create a network of NGOs and academic observers to monitor the implementation of the DSA, bringing a fresh perspective at arm's length from both the platforms and the European Commission.


Legislation Watch

  1. Digital Services Act (DSA)

  2. Indonesia’s Ministerial Regulation No. 5

  3. Singapore’s Protection from Online Falsehoods and Manipulation Act


Final words

Governments, platforms, and media are all putting measures in place to deal with dis/misinformation, but there is no standout tool or approach. Government-led regulation without the requisite accountability checks and balances, as in Indonesia, results in more censorship and curtails free speech.


The spread of dis/misinformation on social media has become a significant concern, particularly in the Asia-Pacific region, where social media platforms are now the primary spaces where people find news. Platforms and their algorithmically controlled feeds play a critical role in 'manufacturing consent' in society, affecting public trust at large. Governments in the APAC region have been making regulatory interventions to tackle the spread of dis/misinformation online, and platforms have made attempts at self-regulation. However, these regulations have been met with criticism over the lack of proper due process and deliberative decision-making, which can result in repressive regulation and a lack of accountability. The roundtable further discussed fact-checking and its potential partisan biases, the role and limits of governments in interacting with platforms, and the push for more hybrid and holistic approaches to platform governance. Misinformation will always be present, and living with it is inevitable; however, a combination of measures, including regulation, fact-checking, and fostering a healthy fact culture, can help address the issue.



 

Platform Futures, a project by Digital Asia Hub, convenes a network of academics and experts to create a space for dialogue on opportunities, challenges, and governance best practices across the Asia-Pacific region.


