Monday, November 28, 2022

    Shadow banning is silencing crucial content creators – here’s how to get around it

    It’s important for social media platforms to have restrictions in place to protect creators and their audiences – but not to such an extent that it silences the voices that matter. For The Drum’s Content Marketing in Focus, Aaron King (senior account director, ITB Worldwide) shares three approaches that could help ensure that sensitive yet educational content doesn’t fall foul of the algorithm.


    The beauty of social media and the internet at large is that it gives everyone access to relevant stories and content that they want to find – no matter how niche the topic. With information so readily available and accessible, we need to be conscious about what we’re putting out there and whether it’s content that we want young people to consume.


    That’s why it’s essential to have restrictions and regulations in place on social media platforms. Think about the way traditional media works: if a journalist writes an offensive article, the newspaper is liable. The challenge with social platforms is scale – where a newspaper might have hundreds of journalists, social platforms have billions of users creating content.


    If we’re going to treat social media platforms in the same way, then the platform is liable – it needs to take responsibility and have safeguards in place to protect its audience, especially when it’s so easy to discover content.


    Take TikTok, for example. With almost a third of TikTok users under the age of 20, the entertainment platform is one of the strictest, with two types of restrictions: prohibited (e.g. alcohol, gambling, drugs) and restricted (e.g. cosmetic procedures, medications, veterinary products). The rules are different again for Instagram, YouTube and others. For a brand or creator new to the influencer space this can be confusing – but it’s important to understand why having these protections and restrictions in place is crucial.


    Creating a space for important conversations


    The problem lies in the back end. If we want to have important conversations around sexual health, alcohol abuse or living with a disability, for example, there’s no mechanism to ensure that educational content can remain on the platform. Current self-imposed restrictions and regulations don’t allow for the nuances around certain words.


    If an algorithm is looking for restricted or prohibited words – take alcohol, for example – it’s going to detect and delete everything around that word. But what if that piece of content is speaking to a community and offering valuable education around alcohol abuse – people talking about their journeys and sharing their experiences to help others learn from them?
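    To illustrate why this blunt matching misfires, here is a minimal sketch of a purely keyword-based filter. The word list and example posts are hypothetical – this is not any platform’s actual system:

```python
# A naive, context-blind keyword filter - hypothetical, not any platform's real system.
RESTRICTED = {"alcohol", "gambling", "drugs"}

def naive_flag(post: str) -> bool:
    """Flag a post if it merely mentions a restricted word; context is ignored."""
    words = {w.strip(".,!?:;").lower() for w in post.split()}
    return not RESTRICTED.isdisjoint(words)

# An ad and an educational recovery story are treated identically:
ad = "Win big! Gambling night every Friday."
support = "How I recovered from alcohol abuse, and what helped me do it."
```

    Both posts trip the same filter, because nothing in the logic distinguishes promotion from education – which is exactly the gap a context-aware review step would need to close.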


    Disabled creators, to give another example, are often flagged for community violations because they don’t fit the mold of the body type you might typically see in ads. It is not unheard of for individuals with a physical ailment or disability – such as creators raising awareness about their stoma bags, or who happen to have missing limbs – to have their content taken down automatically by the platform’s internal flagging systems for so-called “violations.” Yet when the decision is appealed and reviewed by a human, the error is recognized and the posts are almost always reinstated. By the time this approval comes through, the creator’s window for optimum engagement has most probably passed. These creators have built their whole community and career talking about personal topics, and they are still being punished for it.

    Creators can spend hours creating content and editing it, only for it to be taken down immediately – losing their views, their comments and potential income. Of course, there will always be those creators at the other end of the scale who will find ways to get around the filters, with new lingo evolving from this suppression. So where do platforms draw the line?


    It’s all about context. Platforms need to look left and right instead of having a laser focus. There has to be a way for these social media platforms to recognize who is routinely creating valuable content, so that they don’t automatically block that content. Moderation is still key to this – but perhaps there could be alternative tools for carrying it out.


    Here are three ways this could work:


    1. ‘Send to moderator’ button


    Rather than relying solely on a robotic review process, any content that could be misconstrued as harmful should have the option of being reviewed by human eyes to determine whether or not it belongs on the platform. This would give creators easier avenues for revising content during the moderation stage, rather than having it wrongly removed. It means that when red flags are raised, the content can still remain live because it has been visually moderated and deemed safe to watch.


    2. Extension of the blue tick verification model


    Extending the coveted blue tick verification model would give audiences a way to navigate and identify sensitive content themselves. Trusted voices in communities who have built their careers talking about a certain topic – who you know aren’t going to go rogue – would have their own tick. That could be purple for disability, green for child-friendly content, and so on.


    3. Self-moderation options


    There could be an opportunity for creators to moderate their own content at the point of upload, making it transparent whether or not it’s suitable for children. That would then determine whether it could show up on the For You page (FYP) for someone under the age of 16 or 18. Again, if the content is questionable, there could be an additional step (option 1 above) where creators can send it to moderators for approval in advance.
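    Taken together, the three options could be sketched as a single routing step. This is a minimal sketch under stated assumptions – the field names, tick labels and age thresholds are all hypothetical, not any platform’s real API:

```python
# Hypothetical sketch combining the three ideas: self-declared ratings,
# trusted-creator ticks, and a human-review queue instead of auto-removal.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    creator_tick: Optional[str]  # e.g. "disability", "child_friendly", or None
    self_rating: str             # creator-declared: "all_ages" or "18_plus"
    auto_flagged: bool           # raised by the keyword algorithm

def route(post: Post, viewer_age: int) -> str:
    # Option 3: self-moderation keeps age-rated content off under-18 feeds.
    if post.self_rating == "18_plus" and viewer_age < 18:
        return "hidden"
    # Option 2: trusted, ticked creators aren't auto-blocked when flagged.
    if post.auto_flagged and post.creator_tick is not None:
        return "visible"
    # Option 1: anything else that's flagged goes to a human moderator.
    if post.auto_flagged:
        return "send_to_moderator"
    return "visible"
```

    The key design choice in this sketch is that an automatic flag never leads straight to removal – it either defers to an established trust signal or escalates to a human.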


    While these suggestions might not solve all our social media regulation woes overnight, it’s important to open up these conversations to ensure that we’re not silencing the voices that matter. We need social media platforms to be safe for users and safe for the brands spending money on them. We need restrictions and we need moderation – but there has to be some give and take. We can’t rely solely on an algorithm.


    More and more, we are seeing brands tackle taboo subjects in their advertising efforts to be diverse and inclusive – it’s about time the platforms moved with the times too.

    Visit The Drum’s Content Marketing in Focus hub for more news, insights, and strategies around content marketing.
