Instagram is failing to enforce its own rules and allowing some of its most high-profile accounts to be targeted with abusive comments “with impunity,” according to a new report from the Center for Countering Digital Hate. The anti-hate group claims that Meta failed to remove 93 percent of the abusive comments the group reported to the company, including ones that contain racial slurs, violent threats and other disturbing language that would seem to clearly violate the social network’s rules.
CCDH’s researchers zeroed in on five Republican and five Democratic politicians who are up for election this year. The group included Vice President Kamala Harris, Representative Nancy Pelosi, Senator Elizabeth Warren, Representative Marjorie Taylor Greene, Senator Marsha Blackburn and Representative Lauren Boebert.
The researchers reported 1,000 comments that appeared on the politicians’ Instagram posts between January and June of this year and found that Meta took “no action” against the vast majority of them, with 926 still visible in the app one week after being reported. The reported content included comments with racial slurs and other racist language, calls for violence and other abuse.
“We’re simulating the moment at which someone reaches out their hand asking for help, and actually, Instagram’s failure to act on that compounds the harm done,” CCDH CEO Imran Ahmed said in a briefing about the report.
The CCDH also found that many of the abusive comments came from “repeat offenders,” which, according to Ahmed, has “created a culture of impunity” on the platform. The report comes less than three months before the US presidential election, and it notes that attacks targeting Harris, who is now campaigning for president, seem to have “intensified” since she took over the ticket. “Instagram failed to remove 97 out of 105 abusive comments targeting Vice President Kamala Harris, equivalent to a failure to act on 92% of abusive comments targeting her,” the report says. It notes that Instagram failed to remove comments targeting Harris that used the n-word, as well as gender-based slurs.
In a statement, Meta said it would review the report. “We provide tools so that anyone can control who can comment on their posts, automatically filter out offensive comments, phrases or emojis, and automatically hide comments from people who don’t follow them,” Meta’s Head of Women’s Safety said in a statement. “We work with hundreds of safety partners around the world to continually improve our policies, tools, detection and enforcement, and we will review the CCDH report and take action on any content that violates our policies.”