User-Generated Content and Community Moderation Outsourcing with CustomerServ

What Is Content Moderation and Why Do Companies Need It?

It takes years to build a powerful brand that resonates with customers. It can take just moments to destroy your customers' trust and loyalty when they are exposed to abuse, fraud, misinformation, or offensive content on your company's digital platforms.

Maintaining control of your brand reputation while engaging with customers in the digital world can be challenging due to the sheer volume of content online. Even before 2020, social media platforms were reporting staggering volumes of shared content: every day, 350 million photos were uploaded to Facebook, 500 million tweets were posted on Twitter, and 720,000 hours of video were uploaded to YouTube. According to Datareportal's Global Overview report, 319 million new users came online in 2020, almost 875,000 new users each day.

Users are not just consuming digital content; they are commenting, posting, interacting, and uploading content of their own. User-generated content (UGC) refers to any type of content that is created by people, rather than by a brand or business. UGC includes text, images, video, and audio shared on social media, company websites, review sites, e-commerce sites, community forums, gaming platforms, and other digital channels.

The Rules of Engagement: Moderating User-Generated Content

The proliferation of UGC and its power to sway consumers creates a significant risk for online brands. Businesses have no control over what users share, which opens the door to spam, hateful or defamatory content, harassment, copyright and trademark infringement, and privacy issues. In August 2020, TikTok reported that it had removed more than 300,000 videos that violated its hate speech policy. It also banned more than 1,300 accounts for hateful content or behavior, and removed over 64,000 hateful comments. NewsGuard has identified more than 430 websites publishing misinformation about COVID-19 — from false cures to conspiracy theories to vaccine myths. The case for protecting a brand’s customers from potential abuse is compelling.

Content moderation is the process of screening and monitoring UGC (text, images, video, and audio) against a predetermined set of rules in order to filter out content that is spam, abusive, inappropriate, or illegal, or that otherwise does not adhere to the site's guidelines for UGC.
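
To make the rule-based screening described above concrete, here is a minimal sketch in Python. The `ModerationRule` structure, the example rules, and the sample post are all hypothetical and invented purely for illustration; production systems apply far larger, regularly updated policy sets across every content type.

```python
import re
from dataclasses import dataclass

@dataclass
class ModerationRule:
    """One entry in a site's (hypothetical) UGC rule set."""
    name: str
    pattern: re.Pattern  # regex describing content the rule forbids

# Illustrative rules only; a real deployment would load a much
# larger, regularly updated policy set covering all media types.
RULES = [
    ModerationRule("spam_link", re.compile(r"https?://\S+\.(ru|click)\b", re.I)),
    ModerationRule("profanity", re.compile(r"\b(badword1|badword2)\b", re.I)),
]

def screen_text(content: str) -> list[str]:
    """Return the names of every rule the content violates."""
    return [rule.name for rule in RULES if rule.pattern.search(content)]

post = "Great deal!!! Visit http://win-prizes.click now"
violations = screen_text(post)
# A non-empty list means the post is held back or removed.
print(violations)  # -> ['spam_link']
```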

5 Common Types of Content Moderation

There are five common approaches to content moderation.

  • Pre-moderation: Pre-moderation provides brands with the most control over UGC, since each piece of content is screened by a moderator before it goes live. It is the ideal approach for businesses that are keen to maintain their online reputation and branding. The tradeoff for this higher level of protection is the lack of real-time discussion and immediate engagement among users.
  • Post-moderation: With post-moderation, user content goes live immediately, but all content is then reviewed by a moderator after it is published. This approach gives users the real-time interaction that many crave; however, the community may see offensive content before a moderator can review and remove it.
  • Reactive moderation: Online communities with highly engaged users often use a reactive moderation approach, in which community members can flag or report content that is offensive or that violates the community's guidelines. Moderators can then focus their time on reviewing flagged content. As with post-moderation, content is published immediately, so there may be some lag between a user reporting offensive content and a moderator reviewing it. This approach can also leave businesses open to other risks, such as intellectual property violations: reporting is left to the discretion of community members, who may not recognize repurposed copyrighted material as a breach of the community's rules.
  • Distributed moderation: This is essentially a crowdsourced approach in which online community members self-moderate the content. Distributed moderation models use a rating or voting system: content with the highest ratings or the most votes moves to the top of the list (or web page), while the lowest-rated content is hidden or removed. This is a high-risk option and is usually paired with another moderation method.
  • Automated moderation: An increasingly popular approach that is highly scalable and cost-efficient, automated moderation relies on sophisticated tools to filter and process UGC. Machine learning algorithms allow the filtering process to adapt and improve its decisions over time as new data arrives. However, while automated moderation can quickly flag questionable content, the process still relies on human review and judgment to provide the context for whether a piece of UGC is offensive, violates the brand's UGC standards, or should be removed (see the sketch after this list).
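
To illustrate how automated scoring and human review fit together, here is a minimal sketch in Python. The `classify_toxicity` function is a hypothetical stand-in for a trained machine learning model, and the thresholds and review queue are invented for illustration; real systems use trained classifiers and carefully tuned, policy-specific thresholds.

```python
# Hypothetical confidence thresholds; real values are tuned per policy.
APPROVE_BELOW = 0.20   # low risk: publish automatically
REMOVE_ABOVE = 0.90    # clear violation: remove automatically

human_review_queue: list[str] = []

def classify_toxicity(content: str) -> float:
    """Stand-in for a trained ML model returning a 0.0-1.0 risk score."""
    flagged_terms = ("scam", "hate")
    hits = sum(term in content.lower() for term in flagged_terms)
    return min(1.0, hits * 0.5)

def moderate(content: str) -> str:
    score = classify_toxicity(content)
    if score < APPROVE_BELOW:
        return "published"   # model is confident the content is safe
    if score > REMOVE_ABOVE:
        return "removed"     # model is confident it violates policy
    # Ambiguous cases go to a human moderator, who supplies the
    # context the model lacks.
    human_review_queue.append(content)
    return "pending human review"

print(moderate("Loved this product, five stars!"))  # -> published
print(moderate("Is this site a scam?"))             # -> pending human review
```

The two-threshold split mirrors the point made above: automation handles the clear-cut cases at scale, while human moderators resolve the ambiguous middle.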

Demand for Real-Time Content Moderation Services Is Growing

The explosion of UGC in recent years and the value that brands place on live consumer engagement have created a growing demand for real-time content moderation services. A study by Transparency Market Research (TMR) estimates that the content moderation solutions market will reach $11.8 billion by 2027, growing at a CAGR of 10% from 2019 to 2027.

The TMR report also pointed out that, in 2018, 60% of the study's end users preferred content moderation services over software, a trend that is expected to continue.

CustomerServ’s content and community moderation call center and BPO partners provide:

  • Text-based user-generated content moderation (such as blogs, comments, reviews)
  • Image and video moderation
  • Community forum moderation
  • Social media moderation
  • Pre- and post-moderation
  • Reactive and distributed moderation
  • Automated moderation
  • Data annotation

The pandemic dramatically accelerated the popularity, growth, and business value of user-generated content globally. Unfortunately, online abuse has also grown rapidly over the last few years: according to Pew Research Center, four in ten Americans (41%) said in 2020 that they had experienced some form of online harassment.

Protecting your customers and potential customers from toxic voices and offensive content is essential for building brand loyalty online. Content moderation provides the means to mitigate the risks associated with user-generated content while providing customers with a safe place to share and connect.