The company will notify users when they are about to share an offensive comment. Instagram is the most widely used social network among young people. It is also the platform where they are most at risk of harassment, according to Ditch The Label, a British anti-bullying organization.
42% of young people who have suffered online harassment experienced it on this social network, according to a study the same organization conducted with 10,000 young people between 12 and 20 years old. The company owned by Mark Zuckerberg aims to tackle the problem with two new features: warning users when they are about to share an offensive message, and restricting the comments of users who may be engaging in bullying on certain profiles.
“Our mission is to connect you with the people and things you love, which only works if people feel comfortable expressing themselves on the platform. We know bullying is a challenge many face, particularly young people,” said Adam Mosseri, head of Instagram, in a statement.
In recent years, the company has used artificial intelligence to detect harassment on its platform in comments, photographs and videos. Now it wants to go further: in addition to detecting this content, it wants to warn users that their comments may be considered offensive before they are published. The goal is to give people a chance to pause and reconsider a harmful post. In early tests of this feature, the company says, the warning proved a good way to encourage users to delete their message and share something less harmful.
The other feature lets users flag someone as offensive in order to restrict that person's comments on their posts. “We have heard from young people in our community who are reluctant to block, unfollow or report their bully because it could make the situation worse, especially if they interact with their bully in real life,” the company says. Some of these actions also make it harder for victims to keep track of their harasser's behavior. To address the problem, the company set out to build a feature that lets victims control their Instagram experience without the person harassing them being aware of it.
This option is called “Restrict” and will begin testing soon. People who have been restricted can continue commenting, but their messages will be visible only to themselves. The victim can choose to make a restricted person's comments visible to other users by approving them. Restricted people also cannot see when the victim is active on Instagram or has read their direct messages.
This is not the first time Instagram has taken action against bullying. The company already allows users to report any account, photograph or comment intended to bully someone. “You can delete a comment from a photo you have shared and report bullying and harassment through the help center,” it says on a page of tips for staying safe on Instagram. It also encourages users to consider blocking that person and to seek help “from a relative or a teacher you trust.”
In 2018, the company introduced a machine-learning tool that scans photos to identify “problematic” content. Its objective is to detect signs of harassment and automatically flag the content for review by a human team, Mosseri explained in a statement.
“Online harassment is a complex issue,” Mosseri acknowledges in the latest statement. The two new features take into account “how people bully each other and how they respond to bullying on Instagram,” but, he adds, “they are only two steps on a longer path.” The company acknowledges that it can do more to prevent bullying from happening on Instagram, and to help victims of bullying stand up for themselves.