Reddit currently has a feature titled:
“Someone is considering suicide or serious self-harm”
which allows users to flag posts or comments when they are genuinely concerned about someone’s mental health and safety.
When such a report is submitted, Reddit’s system sends an automated private message to the reported user containing mental health support resources, such as contact information for crisis helplines (e.g., the Suicide & Crisis Lifeline and its text and chat services).
In some cases, subreddit moderators are also alerted, although Reddit does not provide a consistent framework for moderator intervention.
The goal of the feature is to offer timely support to users in distress and reduce the likelihood of harm.
However, there have been valid concerns about the feature: false reports can be used to harass users, and moderators are given little tooling or guidance for handling these sensitive situations.
Given Lemmy’s decentralized, federated structure and commitment to privacy and free expression, would implementing a similar self-harm concern feature be feasible or desirable on Lemmy?
Some specific questions for the community:
Would this feature be beneficial for Lemmy communities/instances, particularly those dealing with sensitive or personal topics (e.g., mental health, LGBTQ+ support, addiction)?
How could the feature be designed to minimize misuse or trolling, while still reaching people who genuinely need help?
Should moderation teams be involved in these reports? If so, how should that process be managed given the decentralized nature of Lemmy instances?
Could this be opt-in at the instance or community level to preserve autonomy? (A rough sketch of what that might look like follows this list.)
Are there existing free, decentralized, or open-source tools/services Lemmy could potentially integrate for providing support resources?
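To make the opt-in question more concrete, here is a minimal, self-contained sketch (plain Rust, no Lemmy internals) of how a dedicated concern report might trigger an automated resource message, with moderator notification gated behind an instance-level flag. Every type and function name here is hypothetical and invented purely for illustration; none of it reflects Lemmy's actual codebase or API.

```rust
// Hypothetical sketch only: none of these types or functions exist in Lemmy.
// The idea: a dedicated "concern" report type triggers an automated private
// message with support resources, and moderator notification is controlled
// by an instance-level opt-in flag.

/// Instance-level settings an admin could toggle (hypothetical).
struct InstanceSettings {
    concern_reports_enabled: bool,
    notify_moderators: bool,
    support_resources: Vec<String>,
}

/// A report filed under a "concern for someone's safety" reason (hypothetical).
struct ConcernReport {
    reported_user: String,
    post_id: u64,
}

/// Core flow: send a resource message, optionally alert moderators.
fn handle_concern_report(report: &ConcernReport, settings: &InstanceSettings) {
    if !settings.concern_reports_enabled {
        // Instance has opted out: fall back to the normal report queue.
        println!("Concern reports disabled; routing to regular report queue.");
        return;
    }

    let body = format!(
        "Someone is concerned about you. If you are struggling, these resources may help:\n{}",
        settings.support_resources.join("\n")
    );
    send_private_message(&report.reported_user, &body);

    if settings.notify_moderators {
        notify_moderators(report.post_id);
    }
}

// Stand-ins for whatever messaging/moderation plumbing an instance already has.
fn send_private_message(to: &str, body: &str) {
    println!("PM to {to}:\n{body}\n");
}

fn notify_moderators(post_id: u64) {
    println!("Mod notification: concern report on post {post_id}");
}

fn main() {
    let settings = InstanceSettings {
        concern_reports_enabled: true,
        notify_moderators: false,
        support_resources: vec![
            "988 Suicide & Crisis Lifeline (US): call or text 988".to_string(),
        ],
    };
    let report = ConcernReport {
        reported_user: "example_user".to_string(),
        post_id: 42,
    };
    handle_concern_report(&report, &settings);
}
```

The only point of the sketch is that both the resource list and the moderator alert could live entirely in per-instance configuration, keeping the autonomy question in the hands of each instance's admins.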
Looking forward to your thoughts—especially from developers, mods, and mental health advocates on the platform.
Never. Even a positive thing from Reddit should never be considered, because it's fucking Reddit!
The only time I saw one of these on Reddit was when some asshole sent me one after a heated thread.
I got them fairly regularly before I caught my ban, and I don’t even argue: I say my piece and gtfo, and I don’t respond to people who reply to my comments… it serves no real purpose.
So people can send it to others to harass them? It doesn’t work on Reddit, so why implement it here? Talking about suicide could actually increase the likelihood of it happening, so beyond the fact that it will be used to harass people, it might be making things worse.
The existing reporting framework already covers this. Report those posts so that they can be removed ASAP.
Mods/admins should not be expected to be mental health professionals, and internet volunteers shouldn’t have to shoulder that burden.
The one on reddit is used almost exclusively for harassment. Don’t be more like reddit.
IME as a subreddit mod, that feature was used nearly exclusively for harassment, usually transphobic harassment. In the one or two cases where a report was about someone with genuine suicidal or self-harm ideation, there was still zilch I could have done; I would just approve the post so the user could get support and speak to others (the subreddit was a support group for a sensitive subject, so it wasn’t out of place for a post to say that the stress of certain things was making them suicidal).
No way. If anything, that kind of thing just discourages people from expressing themselves honestly in a way that might help them.
Real human connection and compassion might make a difference. A cookie-cutter template message is (genuinely) a “we don’t want you to talk about this here” response.
We aren’t beholden to advertisers; we don’t need this.