The Secret Language of Self-Harm: Where the Risks Lie and How Digital Monitoring Can Help

By Smoothwall
Published 31 January, 2023
4 minute read

With Self-Injury Awareness Day taking place on 1st March, we look at some of the specific risks the online world poses to vulnerable young people – and the importance of having the appropriate safeguarding solutions in place.

Given the private and personal nature of self-harm, many young people share their experiences or seek out information online first, rather than asking teachers, parents or friends for help and support.

That makes the job of safeguarding difficult, especially as much self-harm content is thought to use a ‘secret’ online language designed to fly under the radar. Through hashtags and codenames, for example, pro-self-harm content can be sought out, shared online, or discussed with strangers.

While many social media platforms moderate their content, plenty of harmful material can still be found on them. Without an appropriate monitoring solution in place, many of these risks can be missed by those with a responsibility for safeguarding.

Here, we explore how designated safeguarding leads (DSLs) can better understand the specific dangers that seemingly safe online channels can pose.

Social media

When using social media platforms such as Facebook, Twitter or Instagram, students can often find their way to pro-self-harm content by using specific hashtags and codenames. Much of this content relates to non-suicidal self-injury (NSSI) – self-harm carried out without suicidal intent.

Popular codenames can easily be missed if a school or college lacks a sophisticated, technology-led monitoring solution, as they often appear to be typos, everyday words or common phrases.

A study by the Centers for Disease Control and Prevention (CDC) in the United States identified the Instagram hashtag #MySecretFamily, which was used alongside codename hashtags for mental health and self-harm issues. These included #Ana for anorexia, #Dallas for suicidal thoughts, and #Ben for borderline personality disorder.

These hashtags go beyond names, however; they also include what look to be misspelled or deliberately altered words, such as #Thynspo (‘thin inspiration’). The CDC found that #MySecretFamily returned more than 1.5 million search results on Instagram alone, highlighting the scale of the issue.

The borderless nature of social media means that this is by no means an issue confined to the US, and young people in the UK can easily access this type of content too.

Below are a number of known hashtags and codenames to look out for on social media:


Issue | Girls’ codename | Boys’ codename
Self-harm | Cat | Sam
Suicide | Unalive | Unalive
Suicidal | Sue | Dallas
Depression | Deb | Dan
ADD/ADHD | Addie | Andy
Anorexia | Ana | Rex
Bulimia | Mia | Bill
Paranoia | Perry | Pat
Anxiety | Annie | Max
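
To illustrate the detection problem in the simplest possible terms, here is a minimal sketch of word-boundary matching against a codename list like the one above. It is not Smoothwall’s detection logic; the term list, function names and sample message are all invented for illustration.

```python
import re

# Illustrative codename-to-topic map drawn from the table above.
# Real monitoring products maintain far larger, frequently updated libraries.
CODENAMES = {
    "cat": "self-harm", "sam": "self-harm",
    "unalive": "suicide",
    "sue": "suicidal", "dallas": "suicidal",
    "deb": "depression", "dan": "depression",
    "ana": "anorexia", "rex": "anorexia",
    "mia": "bulimia", "bill": "bulimia",
}

# Match whole words only (case-insensitive), with or without a leading '#'.
PATTERN = re.compile(
    r"#?\b(" + "|".join(re.escape(term) for term in CODENAMES) + r")\b",
    re.IGNORECASE,
)

def flag_codenames(text: str) -> list[tuple[str, str]]:
    """Return (matched term, associated topic) pairs found in the text."""
    return [(m.group(1), CODENAMES[m.group(1).lower()])
            for m in PATTERN.finditer(text)]

# Hypothetical message for demonstration:
print(flag_codenames("talked to #Ana and Dallas again last night"))
# -> [('Ana', 'anorexia'), ('Dallas', 'suicidal')]
```

Even this toy version shows why naive term matching is not enough on its own: names like Dan, Annie or Max are everyday words, so matches only become meaningful when combined with the behavioural context discussed later in this piece.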


Forums

There have been many stories in the news of young people falling victim to pro-suicide and pro-self-harm forums, where strangers online encourage and challenge them to take their self-harm one step further.

To communicate and to seek out relevant posts, users are thought to rely on the same kind of codenames discussed above. However, as the risk escalates, the language can become far more explicit.

Given the sheer volume of topics discussed on mainstream forums, it is understandably hard for schools or colleges to keep track of what is innocent – or risky – behaviour, especially if coded language is used. Furthermore, many of the more harmful forums are thought to present themselves in a seemingly innocent way.

As such, it is important for DSLs to have the appropriate tools in place, such as digital monitoring, that allow them to quickly identify a risk – no matter how hidden it might be – and intervene at the earliest stage. This allows them to prevent serious harm coming to students and to offer the appropriate support.

How can digital monitoring help?

The two examples we have discussed are challenging for DSLs because much of the harmful content being accessed is coded or hidden amongst perfectly innocent posts.

A traditional ‘eyes and ears’ approach to safeguarding not only risks missing an incident in the first place, but also relies on staff being constantly up to date with the latest self-harm hashtags and language.

This is where technology-led active monitoring plays an important role. A student’s overall online behaviour can be monitored, helping to build up a ‘risk profile’ over time. This approach can join the dots between apparently unrelated events, such as an internet search, a message to a friend, or even a typed and then deleted Word document.
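
As a simplified sketch of what ‘joining the dots’ can mean in practice – with event types, weights and the scoring approach invented purely for illustration, not taken from Smoothwall Monitor – consider accumulating weighted events into a per-student score over a rolling window:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative weights; a real solution would calibrate these carefully.
EVENT_WEIGHTS = {
    "search": 2.0,         # e.g. an internet search for a flagged term
    "message": 3.0,        # e.g. a message to a friend containing a codename
    "typed_deleted": 4.0,  # e.g. a typed-then-deleted document
}

@dataclass
class RiskProfile:
    """Accumulates flagged events for one student over a rolling window."""
    window: timedelta = timedelta(days=30)
    events: list[tuple[datetime, str]] = field(default_factory=list)

    def record(self, when: datetime, event_type: str) -> None:
        self.events.append((when, event_type))

    def score(self, now: datetime) -> float:
        """Sum the weights of recent events: one event in isolation scores
        low, but a cluster of apparently unrelated events adds up."""
        return sum(
            EVENT_WEIGHTS.get(kind, 1.0)
            for when, kind in self.events
            if now - when <= self.window
        )

# Hypothetical usage: three unrelated-looking events in one week.
profile = RiskProfile()
now = datetime(2023, 1, 31)
profile.record(now - timedelta(days=6), "search")
profile.record(now - timedelta(days=3), "message")
profile.record(now - timedelta(days=1), "typed_deleted")
print(profile.score(now))  # 9.0 - a cluster a DSL may want to review
```

A real product layers far more on top – severity grading, human moderation, capture of surrounding context – but the principle of correlating events over time is the same.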

Furthermore, an effective digital monitoring solution can detect the latest keywords and phrases related to a particular safeguarding issue – in this case, self-harm. These keyword libraries are updated as soon as a new threat emerges.
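
Altered spellings such as #Thynspo hint at how such detection might cope with evasion. The sketch below normalises common character swaps before matching against a watchlist; the substitution rules and term list are invented examples, not a real keyword library.

```python
# Illustrative character substitutions seen in altered spellings;
# real keyword libraries add new variants as soon as they emerge.
SUBSTITUTIONS = {"y": "i", "0": "o", "1": "i", "3": "e", "@": "a", "$": "s"}

WATCHED_TERMS = {"thinspo"}  # normalised base forms to match against

def normalise(word: str) -> str:
    """Lower-case a word and fold common character swaps back to base letters."""
    return "".join(SUBSTITUTIONS.get(ch, ch) for ch in word.lower())

def is_variant(term: str) -> bool:
    """True if the term, once normalised, matches a watched base form."""
    return normalise(term.lstrip("#")) in WATCHED_TERMS

print(is_variant("#Thynspo"))  # True - 'y' folds to 'i', matching 'thinspo'
print(is_variant("th1nspo"))   # True
print(is_variant("holiday"))   # False
```

The folding here is deliberately crude – it would mangle ordinary words – which is acceptable only because it is compared against a watchlist rather than used to rewrite the text.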

The net result is a more effective safeguarding provision, meaning risks can be detected and students offered the support and help they need at the earliest opportunity.

If you’re new to digital monitoring, Smoothwall’s free ‘Complete Guide to Monitoring in Education’ whitepaper, available to download here, explains many of these issues – and the digital monitoring technology – in more detail.

Discover Smoothwall Monitor in action

Book a free Monitor walkthrough and Q&A session with one of Smoothwall’s friendly monitoring experts. We’d be delighted to help.

Book a demo
