The Vietnamese Government’s recent decision to inspect TikTok’s activities puts a focus on the content distribution algorithms and content moderation of social media platforms. RMIT Senior Lecturer in IT Dr Sam Goundar elaborates.
TikTok entered Vietnam in 2019 and has since experienced a boom in popularity. As of April 2023, Vietnam ranks 6th among countries with the largest TikTok audience in the world, with more than 50 million users aged 18 and above, according to data published by Statista.
However, the platform has also been criticised for harmful and inappropriate content. Last month, the Ministry of Information and Communications pointed out six major violations committed by TikTok in Vietnam, and it has since announced that it will launch a comprehensive inspection of TikTok’s activities in May.
The important role of content distribution algorithms
TikTok’s content distribution algorithm is proprietary and not publicly available, in a move to prevent bad actors from manipulating the system. However, based on the information shared by TikTok and industry experts, there are some general factors that are believed to influence TikTok’s content distribution algorithm.
For instance: user engagement (likes, shares, comments, and follows), video information (captions, sounds, and hashtags used), user settings (language preferences and location), video completion rate (videos that are watched to completion), and timeliness (the recency of a video).
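TikTok has not published how these signals are combined, so any concrete formula is speculative. Purely as an illustrative sketch, the factors above could be imagined as a simple weighted relevance score; the field names and weights below are invented for this example and do not reflect TikTok’s actual system.

```python
from dataclasses import dataclass

@dataclass
class VideoSignals:
    likes: int
    shares: int
    comments: int
    follows_gained: int
    completion_rate: float      # fraction of viewers who watch to the end (0-1)
    hours_since_posted: float   # timeliness of the video
    matches_user_profile: bool  # language, location and interest settings

def relevance_score(v: VideoSignals) -> float:
    """Toy weighted score combining the factors listed in the article."""
    engagement = v.likes + 3 * v.shares + 2 * v.comments + 5 * v.follows_gained
    freshness = 1.0 / (1.0 + v.hours_since_posted / 24.0)   # newer scores higher
    personalisation = 1.5 if v.matches_user_profile else 1.0
    return engagement * v.completion_rate * freshness * personalisation

# Rank two hypothetical candidate videos for one user's feed
candidates = {
    "video_a": VideoSignals(1200, 80, 40, 5, 0.9, 6, True),
    "video_b": VideoSignals(5000, 20, 10, 1, 0.4, 48, False),
}
feed = sorted(candidates, key=lambda k: relevance_score(candidates[k]), reverse=True)
print(feed)  # video_a ranks first despite having fewer likes
```

The point of the sketch is that a recent, fully watched, well-matched video can outrank one with more raw likes, which is consistent with how these ranking factors are usually described.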
It is important to note that TikTok’s content distribution algorithm is constantly evolving based on user behaviour and feedback. Most other social media networks use similar algorithms, since the goal is to reach as many viewers as possible and advertise to them (which is how these platforms generate income).
There have been several criticisms of TikTok’s content distribution algorithm, primarily around transparency and fairness. Key problems that have been identified include: lack of transparency (the algorithm is proprietary and not publicly available), doubts about whether content is distributed fairly and without bias, algorithmic bias (studies suggest bias against certain groups, such as people of colour and people with disabilities), and filter bubbles (users are only exposed to content that aligns with their existing interests and beliefs, leading to a lack of diversity in perspectives and opinions).
There have been instances where TikTok’s algorithm has popularised harmful or inappropriate content, such as videos promoting eating disorders or self-harm. This has raised concerns about the effectiveness of the algorithm in identifying and removing harmful content from the platform. The company has taken steps to address these issues, but there is still work to be done to ensure that the algorithm is fair and effective in promoting a safe and welcoming environment.
It should also be noted that TikTok runs as an app installed on our smartphones, so content flows directly from TikTok’s servers to the user over encrypted HTTPS connections (secured by TLS, which relies on cryptographic primitives such as SHA-256). Because the traffic is encrypted between the app and the server, it is difficult for regulators to monitor, filter or moderate the content in transit.
Strengthening the use of technology to moderate content
At the moment, not only TikTok but also YouTube, Facebook, Instagram and other social media platforms face the same issues with multimedia content, especially videos. That is why these platforms employ hundreds of human moderators to manually watch and moderate videos. However, humans cannot moderate at the rate at which these videos are produced. According to February 2022 data from Statista, 30,000 hours of video are uploaded to YouTube per minute, 167 million videos are watched on TikTok per minute, and 44 million Facebook livestreams happen every minute. Employing thousands or even millions of people to keep up would simply be too expensive.
Technology is improving exponentially, and we are overcoming these limitations. Machine learning, artificial intelligence techniques such as natural language processing, deep neural networks and data science algorithms can be trained to detect patterns and themes associated with harmful content, such as graphic violence, hate speech or self-harm. These algorithms can analyse videos for visual and audio cues that may indicate harmful content, such as specific keywords, images or sounds.
For example, we can analyse videos and extract features that we believe represent harmful content, as described above, and then train a machine learning model to recognise those features and classify future videos accordingly. As new features are identified, they are added and the model is retrained. Natural language processing models can classify a video based on the presence of hate speech or other harmful audio, while computer vision models can detect graphic violence, self-harm and other harmful imagery.
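As a minimal sketch of that idea, the example below assumes each video’s audio has already been transcribed and that human moderators have labelled a handful of transcripts; it uses scikit-learn’s TfidfVectorizer and LogisticRegression as a stand-in for the far larger multimodal models platforms actually run. All data, labels and thresholds are invented for illustration.

```python
# Hypothetical sketch: classify video transcripts as harmful or not.
# Real moderation pipelines combine audio, text and visual signals
# and train on millions of human-labelled examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: transcripts paired with moderator labels
transcripts = [
    "fun dance challenge with friends",
    "cooking a quick noodle recipe at home",
    "tips to starve yourself and lose weight fast",   # harmful: eating disorder
    "how to hurt yourself when you feel sad",         # harmful: self-harm
]
labels = [0, 0, 1, 1]  # 0 = acceptable, 1 = potentially harmful

# TF-IDF text features feeding a simple logistic regression classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(transcripts, labels)

# Score a new transcript; high-risk videos are sent to human review
new_transcript = ["tips to starve yourself for fast weight loss"]
risk = model.predict_proba(new_transcript)[0][1]
print(f"Estimated risk that the content is harmful: {risk:.2f}")
if risk > 0.5:
    print("Flag this video for human moderator review")
```

In the same spirit, computer vision models would score extracted video frames for graphic violence or self-harm imagery, and anything above a risk threshold would be routed to the human moderators discussed below.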
However, technology alone is not enough. Manual review by human moderators, collaboration with experts (in mental health, child safety and human rights), user reporting, and content rating systems can all help ensure safer social media platforms and content.
Source: Vietnam Insider