Safety & Moderation Policy
Last updated: 01/03/2026
This Safety & Moderation Policy explains how we keep people safe, how we moderate content, how we restrict abusive accounts, and when safeguarding escalation may occur. Our goal is to reduce harm, prevent misuse, and support responsible reporting.
Contents
1. Safety principles
2. Content rules (what’s not allowed)
- Harassment, threats, hate speech, discrimination, or intimidation.
- Doxing (private addresses, private phone numbers, personal identifiers).
- Sexual content, sexual exploitation material, or any sexual content involving minors.
- Content that encourages self-harm, suicide, violence, or provides instructions to harm.
- Hoax, false, misleading, or malicious reports (including impersonation).
- Unverified allegations that could defame someone, inflame a situation, or put a person at risk.
- Vigilante behaviour, calls to confront people, or real-time “hunt” content.
- Spam, scams, advertising, or repeated off-topic posting.
- Malware, hacking attempts, or abuse of platform features.
Where a case involves a child or vulnerable person, we may remove additional details to reduce risk, including redacting live locations or sensitive information.
3. Reporting content and abuse
4. Moderation actions and enforcement
5. 24h / 72h warnings and restrictions
6. Bans and access limitations
7. Exception: your own reported cases
8. Safeguarding escalation and third parties
9. Legal basis for moderation and safeguarding
More detail on personal data processing is available in our Privacy Policy.
10. FOI and accountability
You may request details about moderation processes, policy enforcement, and platform accountability by submitting a Freedom of Information request.
FOI Page
11. Contact us
If you believe content is unsafe, abusive, or incorrect, or you want to report a safeguarding concern, contact us.
