Instagram Cracks Down On Self-Harm And Suicide Imagery

Instagram recently announced that it will ban all graphic self-harm images from its platform, part of a series of changes made in response to the death of Molly Russell, a British teenager. Critics noted that the decision was necessary and long overdue, with the platform acting only in response to public anger over the 14-year-old’s suicide, after her account was found to contain distressing material about suicide and depression.

Links drawn between Molly Russell’s 2017 suicide and her exposure to images of self-harm on Instagram have renewed the focus on government regulation of harmful content on social media.

Promises of Change

Days of mounting pressure on the social media platform culminated in a meeting between Matt Hancock, the health secretary, and Adam Mosseri, the head of Instagram, with Mosseri stating that the company needs to do more to protect its most vulnerable users. Mosseri also said that the company will get better and that it is committed to removing self-harm and suicide-related content at scale.

In addition, Instagram announced further measures, such as removing non-graphic images of self-harm from the most visible parts of its website and app. Critics, however, point out that the changes should have come sooner, and they remain skeptical about whether the measures are enough to tackle a problem which, according to some, has grown unchecked for a decade.

Prior to the meeting, Matt Hancock said that social media companies have to do more to remove material that encourages self-harm and suicide, and urged other social media companies to take action. He said he does not want people who go on social media and search for images related to suicide to be directed to even more of that type of imagery.

Besides Mosseri, representatives from Google, Twitter, Snapchat, and Facebook were also present at the meeting with Hancock to discuss how to tackle content related to suicide or self-harm. Afterwards, Hancock said that what matters is for children to be safe when they use those sites. He described the progress made as good, but added that there is a lot more work to be done. According to Hancock, all the companies he met with were committed to solving the issue.

Adam Mosseri agreed that the change was overdue. Asked by the Daily Telegraph why the platform took so long to handle the issue, Mosseri said the company had not been as focused as it should have been on the effects of graphic imagery. He stated that they are trying to correct that quickly, and acknowledged that it was unfortunate it took the events of the last few weeks for them to realize it. He concluded by saying that it is the company’s responsibility to address the issue as quickly as it can.

Certain Content Still Allowed

Despite the changes, Mosseri said that some self-harm images will still be allowed to remain on the platform. He gave the hypothetical example of someone showing a scar with the message “I’m 30 days clean”, which Mosseri believes is an important way to tell that person’s story. That type of content can remain on the platform, according to him, but it will no longer appear in recommendation services, which means it will be harder to find.

The announcement of the removal of self-harm and suicide imagery caused some confusion, with many wondering why the platform allowed such graphic images in the first place. Mosseri explained that people will still be allowed to share stories of their struggles, but certain content will be blurred.

Improving the Algorithm?

Instagram will receive help from Facebook’s own investment in image recognition technology, according to Jake Moore of ESET. Moore noted that the removal process becomes smoother as image recognition software is more finely tuned. He also pointed out that social media platforms sometimes forget how much impact these feeds of self-harm and suicide imagery can have on people, especially the most vulnerable.

Jake Moore added that the algorithm will get better at recognizing such imagery if more people report it, allowing it to be removed more quickly. He believes the change is a joint effort between the platform and its users to remove self-harm images, a venture which will take time, according to Moore.
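To make the feedback loop Moore describes more concrete, here is a minimal, hypothetical sketch of how user reports might feed into an automated moderation pipeline. This is not Instagram’s actual system; the class names, thresholds, and scoring logic are illustrative assumptions only.

```python
# Hypothetical sketch: user reports nudging an image-moderation decision.
# Not Instagram's real pipeline; all names and thresholds are assumptions.
from dataclasses import dataclass, field


@dataclass
class Post:
    post_id: str
    model_score: float      # assumed classifier probability that the image shows self-harm (0..1)
    report_count: int = 0   # number of user reports received


@dataclass
class ModerationQueue:
    auto_remove_threshold: float = 0.9   # assumed cut-off for automatic removal
    review_threshold: float = 0.5        # assumed cut-off for routing to human review
    labelled_examples: list = field(default_factory=list)

    def triage(self, post: Post) -> str:
        # Reports push borderline content toward review or removal, mirroring
        # the idea that more reporting leads to quicker takedowns.
        adjusted = post.model_score + 0.05 * min(post.report_count, 5)
        if adjusted >= self.auto_remove_threshold:
            # Removed posts could later serve as training data for the classifier.
            self.labelled_examples.append((post.post_id, "self_harm"))
            return "remove"
        if adjusted >= self.review_threshold:
            return "human_review"
        return "allow"


# Example usage
queue = ModerationQueue()
print(queue.triage(Post("p1", model_score=0.85, report_count=3)))  # -> remove
print(queue.triage(Post("p2", model_score=0.40, report_count=2)))  # -> human_review
print(queue.triage(Post("p3", model_score=0.10)))                  # -> allow
```

The point of the sketch is simply that reports and automated scoring reinforce each other: reports surface content the model missed, and removed content can be fed back to improve the model over time.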

Criticism for Late Action

The NSPCC noted that while Instagram had taken an important step, social networks are still falling short, which means legislation would be necessary. Peter Wanless, the charity’s chief executive, stated that it should never have taken the death of Molly Russell for Instagram to act. He pointed out that over the last 10 years, social networks have proven that they will not do enough, and urged the government to impose a duty of care on social media platforms, with tough consequences for those that fail to protect younger users.

Others also noted that Facebook has consistently fallen short when it comes to suicide and self-harm. Jennifer Grygiel, a social media expert and assistant professor of communication at Syracuse University, said the company has failed to prioritize the prevention of self-harm. She also stated that at-risk individuals won’t be safe until Facebook takes its role as a global corporation and communication platform more seriously, and concluded that these changes should have taken place years ago.

Regulations Incoming?

Margot James, the digital minister, told BBC Radio 4 that the government will have to keep the situation under review to ensure that the commitments made by Instagram are fulfilled.

Instagram’s decision comes as large social media companies, including Facebook (Instagram’s owner), prepare to do battle with the UK government over the future of internet regulation. The government is considering imposing a mandatory code of conduct on tech companies, enforced through fines for non-compliance, a prospect that has prompted a significant behind-the-scenes lobbying campaign by social media companies. Jeremy Wright, the culture secretary, will unveil the government’s proposals at the end of February.

Social Media Regulations Around The World

The European Union is considering a clampdown, particularly on terror videos, under which social media platforms would face fines if they fail to delete extremist content within one hour. The EU has also introduced the General Data Protection Regulation (GDPR), a set of rules governing how platforms store and use user data.

Germany, meanwhile, has the NetzDG law, which came into effect at the beginning of 2018. The law applies to companies with more than 2 million registered users in the country, requiring them to set up procedures for reviewing complaints about the content they host and to remove anything clearly illegal within 24 hours.

Russia is considering two laws similar to Germany’s, which would require social media platforms to take down offensive material within 24 hours and impose fines on companies that fail to do so. In addition, data laws dating back to 2015 require social media companies to store data about Russian citizens on servers inside the country; action is being taken against Twitter and Facebook for not being clear about how they intend to comply with that law.

China, for its part, has blocked sites such as Google and Twitter, as well as apps like WhatsApp. Services there are provided instead by Chinese applications such as WeChat, Weibo, and Baidu. Cyber-police monitor social media platforms and screen messages deemed “politically sensitive”. Certain keywords are automatically censored, and new words regarded as sensitive are added to the list of censored terms.

About Instagram & Censorship

Instagram, the photo- and video-sharing social media platform owned by Facebook, was created by Kevin Systrom and Mike Krieger. The platform launched in October 2010 and allows users to upload photos and videos to the service and edit them using various filters. It gained popularity soon after launch, reaching 1 million registered users within two months and 10 million within a year. By October 2015 it hosted more than 40 billion photos, and as of September 2017 it had 800 million users.

The platform was acquired by Facebook in April 2012 for about $1 billion in cash and stock. While Instagram has been praised for its influence, it has also drawn criticism, particularly over policy and interface changes, allegations of censorship, and illegal or improper content uploaded by its users.

Censorship of the platform has occurred in a number of countries, including China and Turkey. China blocked Instagram after the 2014 Hong Kong protests, because large numbers of photos and videos of the protests were being posted; Hong Kong and Macau were not affected, as they are special administrative regions of China. Turkey is known for strict internet censorship and periodically blocks social media platforms, including Instagram. Instagram is also blocked in North Korea following the 2015 fire at the Koryo Hotel, with authorities blocking the platform to prevent photos of the incident from spreading.
