Digital researchers at the University of California, Berkeley, are developing an algorithm that uses artificial intelligence to identify "hate speech" on social media.
The researchers believe the program, known as the Online Hate Index (OHI), will outperform its human counterparts in identifying biased comments made by users of online platforms.
At present, Facebook and Twitter have thousands of employees working to identify and delete hate-filled posts by their users. But human reviewers cannot keep pace with the volume of content and are expensive to employ. Some even find this type of work emotionally taxing or traumatizing, according to California Magazine, a publication of the Cal Alumni Association.
In addition to artificial intelligence, the OHI will use several different techniques to detect offensive speech online, according to the website The College Fix. The techniques include "machine learning, natural language processing, and good old human brains."
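To give a sense of what "machine learning" means in this context, the sketch below is a toy text classifier, not the OHI's actual method, which the researchers have not published in detail. It trains a Naive Bayes model, a standard introductory technique, on a handful of hypothetical labeled posts and then predicts a label for new text. The training examples and labels are invented for illustration.

```python
# Toy illustration of machine-learning text classification (NOT the OHI's
# actual method): a minimal Naive Bayes classifier over labeled posts.
from collections import Counter, defaultdict
import math

def tokenize(text):
    """Lowercase and split on whitespace -- a deliberately crude tokenizer."""
    return text.lower().split()

class NaiveBayesClassifier:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> number of posts
        self.vocab = set()

    def train(self, labeled_posts):
        for text, label in labeled_posts:
            self.label_counts[label] += 1
            for word in tokenize(text):
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def classify(self, text):
        total_posts = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log prior for the label...
            score = math.log(self.label_counts[label] / total_posts)
            # ...plus log likelihood of each word, with add-one smoothing
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in tokenize(text):
                score += math.log((self.word_counts[label][word] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Hypothetical training data, invented purely for this example.
clf = NaiveBayesClassifier()
clf.train([
    ("i hate you", "flagged"),
    ("you are awful", "flagged"),
    ("have a great day", "ok"),
    ("thanks for sharing", "ok"),
])
print(clf.classify("i hate this"))          # flagged
print(clf.classify("have a great weekend")) # ok
```

Even this toy version hints at why the "good old human brains" part of the pipeline matters: the model only learns surface word statistics, so sarcasm, reclaimed slurs, or quoted speech can all be scored on the wrong side of the line.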
It is the researchers' goal, according to the website, to have "major social media platforms" one day use the technology to detect "hate speech" and eliminate it, along with the users who spread it, from their networks.
Even though such a detection program is meant for the greater good of the social networking universe, any attempt to control speech raises constitutional issues, and the First Amendment is clear on the matter, according to Erwin Chemerinsky, the dean of Berkeley Law.
"First, the First Amendment applies only to the government, not to private entities," Chemerinsky wrote in an email to California Magazine. "Second, there is no legal definition of hate speech. Hate speech is protected by the First Amendment."
Claudia von Vacano, executive director of UC Berkeley's social science D-Lab, says her researchers are interested only in how to correctly identify hate speech online.
"We are developing tools to identify hate speech on online platforms, and are not legal experts who are advocating for its removal," Vacano told the magazine. "We are merely trying to help identify the problem and let the public make more informed choices when using social media. And, for now, the technology is still in the research and development stage."
Vacano insists she recognizes the importance of making clear distinctions in online posts. She says her team understands that unless real restraint is exercised, free speech could be compromised by overzealous and self-appointed censors.
Erik Stallman, an assistant clinical professor of law and the faculty co-director of the Berkeley Center for Law and Technology, told the magazine that the highest rate of accuracy achieved by an automated monitoring system was 80 percent.
"That sounds good, but it still means that one out of five posts that was flagged or removed was inaccurate," Stallman said.