Social media platforms need a proper reporting mechanism for fake news sites.
Some, like Facebook, do, but their process is often misleading, and over-reliance on automation is making things worse.
Others, like Twitter, don’t really offer one (aside from banning accounts).
Facebook’s is easy to game because it only takes a handful of reports on a post to blacklist a domain.
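To see why a simple report threshold is so easy to abuse, here is a minimal sketch of that kind of logic. This is purely hypothetical (the class, threshold value, and domain names are all assumptions, not Facebook’s actual implementation), but it illustrates how a small coordinated group can get a legitimate domain blocked:

```python
# Hypothetical sketch of naive report-threshold blacklisting.
# Not Facebook's actual logic; the threshold and names are assumptions.
from collections import defaultdict

class NaiveBlacklist:
    def __init__(self, threshold=5):  # a "handful" of reports
        self.threshold = threshold
        self.reports = defaultdict(set)  # domain -> set of reporting users
        self.blacklisted = set()

    def report(self, domain, user_id):
        """Record one user's report; blacklist once the threshold is hit."""
        self.reports[domain].add(user_id)
        if len(self.reports[domain]) >= self.threshold:
            self.blacklisted.add(domain)

    def is_blocked(self, domain):
        return domain in self.blacklisted

bl = NaiveBlacklist()
# Five coordinated accounts are enough to block a legitimate site:
for user_id in range(5):
    bl.report("legit-blog.example", user_id)
print(bl.is_blocked("legit-blog.example"))  # True
```

Nothing here weighs who is reporting or why, which is exactly the problem: the signal measures volume of complaints, not truth.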
Apparently, certain areas of a domain can be blocked, too. For instance, my blog has been quoted on various authority sites (and used as a resource at several universities), yet posts in my Sales category are banned on the platform.
(We occasionally dispute and Facebook unblocks them for a couple of weeks until this happens again.)
Proper validation isn’t trivial, for two reasons:
- It takes an army of people going through reports all day long
- “Fake news” is subjective and hard to prove
Viral sites frequently invent fake sensations for celebrities only to generate extra traffic (and ad clicks).
Proving that someone hasn’t cheated on their spouse or hasn’t yelled at someone in a supermarket is pretty hard, which is why confirming or debunking such a story is questionable.
Also, while validation may work for “authority domains”, it’s easy to work around with a network of hundreds of domains posting the same fake stories.
Think of press release blasts that hit a vast number of aggregators sharing the same story.
And don’t even get me started on politics. Socialists reporting articles on capitalism has nothing to do with fake or real; it’s a purely subjective and emotional take playing out on the platform.