Facebook has open-sourced two algorithms that help detect terrorist propaganda, child exploitation, and graphic violence. The two technologies, TMK+PDQF (for videos) and PDQ (for photos), store files as digital hashes and compare every upload against known examples of harmful content. Facebook announced the release in a blog post, and the code is available on GitHub.
Facebook also said that nonprofit organizations, other tech companies, and individual developers should use these technologies to identify harmful content and limit its spread. The initiative can help remove such content more quickly when people try to upload it.
These technologies add an extra layer of protection and allow participating systems to share hashes with each other, making them that much more powerful.
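The matching mechanism described above can be sketched in a few lines. This is a minimal illustration, not Facebook's actual implementation: it assumes a PDQ-style 256-bit perceptual hash and a toy in-memory set of known-bad hashes, and `pdq`-style hashing itself is represented only by precomputed integer values. The 31-bit match threshold is an assumption for the example, not a documented constant.

```python
def hamming_distance(a, b):
    """Count the bits that differ between two integer hashes."""
    return bin(a ^ b).count("1")

def is_known_harmful(candidate, known_hashes, threshold=31):
    """Flag an upload whose hash is within `threshold` bits of any known hash.

    Perceptual hashes of near-duplicate images differ in only a few bits,
    so a small Hamming distance indicates a likely match.
    """
    return any(hamming_distance(candidate, h) <= threshold
               for h in known_hashes)

# Toy example: a hypothetical 256-bit hash of a known-bad image,
# and a re-upload whose hash differs by just two bits.
known = {0b1011_0010 << 248}
candidate = (0b1011_0010 << 248) ^ 0b101  # two bits flipped
print(is_known_harmful(candidate, known))  # True
```

Comparing hashes rather than raw files is what lets participating systems "talk to each other": they can exchange compact fingerprints of banned content without ever sharing the content itself.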
Facebook took this step after finding that more child exploitation videos were being posted online.
As the blog post put it, in just one year Facebook witnessed a 541% increase in the number of child sexual abuse videos reported to the CyberTipline by tech companies.
This is not the first time video- or photo-matching technology has been shared publicly; some of the biggest tech giants, including Microsoft and Google, have already contributed similar technologies.
As an additional step beyond open-sourcing the algorithms, Facebook is collaborating with Cornell University, the University of Maryland, the Massachusetts Institute of Technology, and the University of California, Berkeley, which are already working on ways to stop people from making subtle modifications to banned photos and videos in order to bypass safety systems.
Facebook announced the update, which will help detect child exploitation and terrorist images, at its fourth annual child safety hackathon in Menlo Park.