
TIKTOK REVEALS TAKING DOWN 296,000 VIDEOS FROM KENYA AMID BAN THREAT

Appearing before the National Assembly’s Public Petition Committee, TikTok’s Public Policy and Government Relations Director, Fortune Sibanda, explained that the videos were taken down for violating the platform’s community guidelines. Sibanda appeared before the committee after concerns were raised over the emergence of videos that were violent, vulgar and sexually explicit.

“In the last year, we have taken down over 296,000 videos from Kenya alone. Around the globe, 96.7 per cent of harmful content is taken down before it is reported, while 77.1 per cent of content is taken down before garnering any views,” he stated. The official also explained to the legislators that the platform has a self-regulating mechanism that allows harmful videos to be taken down.

He stated that the AI is programmed using internationally set policy standards, and flagged content is then forwarded to human moderators conversant with each country’s language and culturally accepted norms. “These policies, he states, are guided by the local partners who help shape what is culturally accepted in different countries,” read the statement by Parliament.

During the hearing, it was also revealed that ByteDance had outsourced content moderation to a company in Kenya, reported to have 250 employees who watch and moderate content posted on the platform. This comes even as TikTok remains under scrutiny over inappropriate content posted on the platform. In August 2023, Ben Ndolo, a private citizen, petitioned Parliament to have the social media platform banned in Kenya.

Ndolo claims that the platform is eroding Kenya’s cultural and social values, owing to some of the videos often shared on it. Notably, concerns over the platform’s content also saw President William Ruto meet TikTok CEO Shou Zi Chew to discuss content moderation. At the meeting, TikTok also agreed to set up an office in Kenya.

TikTok’s decision to remove over 296,000 videos posted by Kenyans sheds light on a complex interplay of factors ranging from content moderation policies to cultural sensitivities and legal frameworks. At the heart of this issue lie questions of what constitutes appropriate content, who gets to decide, and how platforms navigate the diverse landscape of global users while adhering to local laws and norms.

Content moderation on platforms like TikTok is guided by community guidelines and terms of service, which outline acceptable behavior and content standards. These guidelines are often shaped by a combination of legal requirements, community feedback, and platform values. In the case of TikTok, these guidelines aim to foster a safe and positive environment for users while prohibiting content that violates community standards, promotes hate speech, or infringes on intellectual property rights.

However, enforcing these guidelines at scale is a monumental task, especially considering the sheer volume of content uploaded daily. To manage this challenge, TikTok employs a mix of human moderators and automated systems powered by machine learning algorithms. Human moderators review reported content and make decisions based on established policies, while machine learning algorithms analyze content for potential violations.
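The division of labour described above, with automated systems catching clear violations and human moderators handling the borderline cases, can be sketched in a few lines. The thresholds, score values and function names below are purely illustrative assumptions for this sketch, not details of TikTok's actual system:

```python
# Hypothetical two-stage moderation triage: an automated classifier removes
# high-confidence violations outright, queues uncertain content for human
# review, and publishes the rest. All thresholds here are assumed values.

AUTO_REMOVE_THRESHOLD = 0.95   # assumed: above this, content is removed automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # assumed: above this, content goes to a human moderator

def triage(video_id: str, violation_score: float) -> str:
    """Decide a video's fate given a model's violation score in [0.0, 1.0]."""
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_removed"   # taken down before it is reported or viewed
    if violation_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"   # routed to a moderator familiar with the local language
    return "published"

# Example scores for three hypothetical uploads
decisions = {vid: triage(vid, score)
             for vid, score in [("a", 0.99), ("b", 0.70), ("c", 0.10)]}
print(decisions)  # {'a': 'auto_removed', 'b': 'human_review', 'c': 'published'}
```

The key design choice in any pipeline of this shape is where the two thresholds sit: raising them shifts work onto human moderators, while lowering them risks removing legitimate content without review.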

The removal of over 296,000 videos suggests a significant effort on TikTok’s part to uphold its content standards. While the platform hasn’t provided specific details about the nature of the removed content, it’s likely that it encompassed a range of violations, including nudity, hate speech, misinformation, and copyright infringement. Each of these categories presents unique challenges for moderation, as they require nuanced understanding and interpretation.

Cultural differences also play a significant role in content moderation decisions. What may be considered acceptable in one culture could be deemed offensive or inappropriate in another. TikTok, like many other global platforms, faces the challenge of balancing the diverse cultural norms of its user base while maintaining a consistent set of content standards. This balancing act requires careful consideration and constant refinement of moderation policies to accommodate cultural diversity.

Moreover, legal frameworks vary from country to country, further complicating content moderation efforts. Platforms like TikTok must comply with local laws and regulations while operating in multiple jurisdictions worldwide. This often involves navigating complex legal landscapes and addressing legal demands from governments regarding content removal and user data.

In the context of Kenya, TikTok’s content moderation efforts may be influenced by the country’s legal framework and cultural sensitivities. Kenya, like many other African nations, has its own set of laws governing online content and expression. These laws may impose restrictions on certain types of content, such as hate speech or pornography, and require platforms to take action against violators.

Additionally, cultural norms and values shape the types of content that are acceptable within Kenyan society. TikTok, as a global platform, must take these cultural sensitivities into account when moderating content posted by Kenyan users. This may involve working closely with local stakeholders, including government agencies, civil society organizations, and community leaders, to ensure that the platform’s content standards align with the expectations of Kenyan users.

Overall, TikTok’s decision to remove over 296,000 videos posted by Kenyans underscores the complexities of content moderation in the digital age. It highlights the challenges that platforms face in balancing freedom of expression, cultural diversity, and legal compliance while fostering a safe and inclusive online environment. Moving forward, it will be essential for TikTok and other platforms to continue refining their content moderation processes and engaging with local communities to address emerging challenges effectively.

TikTok’s content moderation efforts in Kenya are part of a broader trend of platforms facing scrutiny over their handling of user-generated content. In recent years, social media platforms have come under increasing pressure to address issues such as misinformation, hate speech, and online harassment. These challenges are not unique to TikTok but are prevalent across the digital landscape.

One key aspect of content moderation is the need for transparency and accountability. Users and stakeholders often seek clarity on how platforms make moderation decisions, what criteria are used, and how appeals are handled. Transparency can help build trust between platforms and their users while fostering greater understanding of the complexities involved in content moderation.

Furthermore, content moderation is an ongoing process that requires continuous adaptation and improvement. As online communities evolve and new challenges emerge, platforms must stay vigilant and responsive to changing dynamics. This may involve investing in new technologies, refining moderation policies, and collaborating with external experts to stay ahead of emerging threats.


Ultimately, the episode is a reminder that content moderation is never a finished task. Platforms like TikTok must remain committed to transparency, accountability and ongoing innovation if they are to keep pace with the complex, ever-evolving landscape of online content.
