Facebook and Twitter accused of failing to protect female public figures

Social media giants Facebook and Twitter have been accused of failing to protect women – particularly those in the public eye – from abuse.

Appearing before the Joint Committee on Human Rights, which is made up of members of both Houses of Parliament, the firms admitted they still had work to do to protect MPs and other public figures.

Representatives of the two firms gave evidence to the committee on democracy and free speech.

Citing a number of examples of prominent women being targeted on Twitter in posts which were not initially taken down, SNP MP Joanna Cherry asked if the companies would accept they had made mistakes in policing which had “failed to protect women”.

She referenced research from Amnesty International, which analysed 228,000 tweets sent during 2017 to 778 female politicians and journalists from across the political spectrum in the UK and US, and found that about one in every 14 of them contained abusive or problematic language.

Twitter’s head of UK government, public policy and philanthropy, Katy Minshall, said she was “horrified” by the stories of abuse she had encountered.

“There is clearly a number of steps that we want to take, we need to take – but we are in a different place to where we were even this time last year.”

Ms Minshall said the platform was “acutely aware of its responsibilities” and now worked closely with parliamentary authorities and law enforcement to improve safety for politicians on social media.

However, Ms Cherry argued that some of the more high-profile incidents of abuse had only been removed after being publicised by other prominent women.

“There seems to be a pattern of Twitter initially ruling that extremely offensive and violent tweets directed at women in public life are acceptable and that Twitter only reviews their decision when they are pressed by other figures in public life,” she said.

The two companies were also questioned by MPs about their ability to find and remove abuse on their sites more quickly and proactively.

Facebook’s UK head of public policy, Rebecca Stimson, said that alongside thousands of human reviewers, the social network used what she described as “probably the most advanced automated systems in the world”, but admitted that the nuance of language around harassment meant these systems were not yet able to catch such content in the way they deal with other offensive material.

“There are places where we’re really, really good – terrorism, child exploitation, that kind of thing – our machines are able to find and remove around 99% of that kind of content before it’s ever seen by anyone,” she said.

“Things like bullying and harassment and some of the subject that we’re discussing with you today are much harder for a machine to identify accurately what that is. It might be us just having an argument about something, it might be using some robust language.

“So there, we found about two million pieces of that kind of content, but only about 15% of that was found by our machines and the rest we rely on individuals reporting to us and human reviewers because often it’s more about context and it’s more about intent and those can be nuanced decisions.”

Both firms argued that increased engagement with politicians, such as their appearances before committees, could only help them improve their content policing.

The two companies have both recently announced new tools to prevent malicious content being posted around the upcoming European Parliament elections, and Ms Minshall also confirmed that in June Twitter would test a new feature allowing the authors of tweets to moderate replies by hiding those they did not wish to see.
