Among social platforms, Pinterest has been the most aggressive in challenging misinformation. As far back as 2013, Pinterest began removing posts that contained images of self-harm or harmful health misinformation. In 2017 (long before the COVID-19 pandemic), Pinterest created a misinformation policy that banned anti-vaccination misinformation and false cures. One year later, the company told users that conspiracy theories would no longer be allowed. Then, in the lead-up to the 2020 election, Pinterest targeted election and census misinformation.
Over the years, the company has developed a framework for finding and removing violating content that combines machine learning, user reports, and human moderators. Pinterest engineers say that since 2019, the company’s machine learning has pulled enough violating content that the number of violations users report directly to Pinterest has dropped by roughly half. Over the same period, reports of self-harm content have fallen by 80%. The company also works with experts and trusted organizations to elevate reputable content.
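Pinterest has not published the internals of that pipeline, but a hybrid system of this kind can be pictured roughly as follows. The sketch below is purely illustrative, not Pinterest's actual code: the names (Pin, route) and thresholds are invented for the example, and it simply shows how a machine-learning score and user reports might be combined to decide whether content is removed automatically, queued for human moderators, or left up.

```python
from dataclasses import dataclass

# Illustrative thresholds only -- not Pinterest's actual values.
AUTO_REMOVE_SCORE = 0.95   # classifier confidence that triggers automatic takedown
REVIEW_SCORE = 0.60        # lower-confidence matches go to human review
REPORT_THRESHOLD = 3       # user reports that trigger review on their own

@dataclass
class Pin:
    pin_id: str
    model_score: float      # hypothetical ML model's probability of a policy violation
    user_reports: int = 0   # number of user-submitted reports against this pin

def route(pin: Pin) -> str:
    """Decide what happens to a pin: remove it, send it to moderators, or keep it."""
    if pin.model_score >= AUTO_REMOVE_SCORE:
        return "remove"            # high-confidence machine decision
    if pin.model_score >= REVIEW_SCORE or pin.user_reports >= REPORT_THRESHOLD:
        return "human_review"      # human moderators make the final call
    return "keep"

if __name__ == "__main__":
    examples = [Pin("a", 0.98), Pin("b", 0.70), Pin("c", 0.10, user_reports=5), Pin("d", 0.10)]
    for pin in examples:
        print(pin.pin_id, route(pin))
```

The design point the example captures is the division of labor the company describes: automated detection handles the clear-cut cases at scale, while borderline or user-flagged content is escalated to people.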
Pinterest’s aggressive treatment of misinformation makes it something of an outlier among social media networks. Twitter, Facebook, and YouTube have historically been reluctant to interfere with user-generated content on their platforms, though all three began banning or flagging some anti-vaccine misinformation during the pandemic, with varying degrees of success. Even a company with strong policies, like Pinterest, cannot guard against every piece of misinformation that comes onto its platform. Bromma is certainly clear-eyed about that reality.
“This is not the end of our misinformation journey today,” she says. “We’ll have to keep engaging with experts, make sure we’re staying on top of trends, and continually evaluating our policies and enforcement approaches to make sure they’re serving our community and our mission.”