What is content moderation?

Content moderation is the process of reviewing and regulating content that is published, disseminated, or shared on the internet, social media platforms, applications, games, and other digital spaces. Its goal is to ensure that this content complies with laws, platform policies, and community guidelines, and adheres to ethical and social values.

Content moderation primarily involves the following aspects:

  • Identifying and filtering inappropriate content involving violence, pornography, terrorism, hate speech, cyberbullying, and misinformation.
  • Preventing illegal activities, such as intellectual property infringement, malware distribution, and phishing.
  • Protecting user privacy and preventing the leakage of private information.

Content moderation can be done through both manual and automated methods. Manual moderation is typically carried out by teams of content moderators who review content against the platform's policies and guidelines. Automated moderation relies on artificial intelligence (AI) and machine learning (ML) technologies, such as natural language processing (NLP) and computer vision, to automatically identify and filter problematic content. Many platforms adopt a hybrid approach, combining both manual and automated techniques for content moderation.
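To make the hybrid approach concrete, here is a minimal sketch in Python. It is not tied to any particular platform or library: the keyword lists stand in for real NLP or computer-vision classifiers, and the human_review callback represents a manual moderation team; all names and thresholds are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional

class Verdict(Enum):
    APPROVE = "approve"
    REJECT = "reject"
    ESCALATE = "escalate"  # defer to a human moderator

@dataclass
class ModerationResult:
    verdict: Verdict
    category: Optional[str] = None
    score: float = 0.0

# Hypothetical term lists standing in for trained NLP / computer-vision models.
BLOCKED_TERMS = {"hate speech sample", "phishing link"}
SUSPICIOUS_TERMS = {"violence", "private information"}

def automated_check(text: str) -> ModerationResult:
    """First pass: a stand-in for an ML classifier over the content."""
    lowered = text.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            # High-confidence hit: reject automatically.
            return ModerationResult(Verdict.REJECT, category=term, score=0.99)
    for term in SUSPICIOUS_TERMS:
        if term in lowered:
            # Low-confidence hit: escalate instead of auto-rejecting.
            return ModerationResult(Verdict.ESCALATE, category=term, score=0.6)
    return ModerationResult(Verdict.APPROVE, score=0.05)

def hybrid_moderate(text: str, human_review: Callable[[str], Verdict]) -> Verdict:
    """Run the automated filter first; send uncertain cases to manual review."""
    result = automated_check(text)
    if result.verdict is Verdict.ESCALATE:
        return human_review(text)
    return result.verdict

if __name__ == "__main__":
    # A trivial human reviewer used only for the demo.
    print(hybrid_moderate("friendly post about cats", lambda t: Verdict.APPROVE))
    print(hybrid_moderate("post that mentions violence", lambda t: Verdict.REJECT))
```

The design choice mirrored here is that the automated pass only acts on its own for high-confidence cases and escalates everything uncertain to human moderators, which is how many platforms combine the two methods.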

In addition, some platforms allow users to report inappropriate content and encourage self-regulation and community management among users. This helps platforms more efficiently discover and address content that does not comply with regulations.
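As one illustration of how user reporting might feed into moderation, the sketch below collects distinct reporters per content item and flags the item for manual review once a hypothetical report threshold is reached; the class name, threshold, and identifiers are assumptions for the example and not part of any specific platform's API.

```python
from collections import defaultdict

# Hypothetical threshold: how many distinct user reports trigger a manual review.
REPORT_THRESHOLD = 3

class ReportQueue:
    """Collects user reports and surfaces frequently reported content to moderators."""

    def __init__(self, threshold: int = REPORT_THRESHOLD):
        self.threshold = threshold
        self.reports = defaultdict(set)  # content_id -> set of reporting user ids

    def report(self, content_id: str, user_id: str) -> bool:
        """Record a report; return True once the item should go to moderators."""
        self.reports[content_id].add(user_id)  # a set ignores duplicate reports
        return len(self.reports[content_id]) >= self.threshold

if __name__ == "__main__":
    queue = ReportQueue()
    for user in ("u1", "u2", "u3"):
        flagged = queue.report("post-42", user)
    print("send post-42 to moderators:", flagged)  # True after three distinct reports
```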
