Facebook is cracking down on coronavirus misinformation: users who share, like, react to, or comment on COVID-19 posts that contain false claims will now receive warnings.
The idea is to notify users that the content they interacted with was harmful misinformation and has been removed as a result. In a recent blog post, Facebook announced that the feature will roll out in the next few weeks.
"These messages will connect people to COVID-19 myths debunked by the World Health Organization including ones we’ve removed from our platform for leading to imminent physical harm," wrote Facebook vice president of integrity Guy Rosen.
Facebook had already been working to stave off misinformation about the pandemic across its platforms. It has removed thousands of inaccurate posts containing claims that could seriously harm readers, such as the assertion that drinking bleach can cure coronavirus.
"We want to connect people who may have interacted with harmful misinformation about the virus with the truth from authoritative sources in case they see or hear these claims again off of Facebook," explained Rosen.
Facebook has also increased the number of employees working on fact-checking content to quell the spread of false information. The company may not get much right, but at the very least it's trying, during these difficult times, to make sure people aren't drowning in harmful posts that could endanger their well-being.