Yoomee Child Safety & CSAE Policy

Last Updated: 2025-10-17

1. Introduction & Scope

Yoomee (“we”, “us”, “our”) is committed to protecting children and minors from all forms of child sexual abuse and exploitation (CSAE). This document sets out our published standards, responsibilities, and processes for preventing, detecting, and reporting CSAE and for enforcing this policy on our platform and in our apps.

This policy applies to all users, content, communications, and features of Yoomee (including but not limited to chat, images, profile data, messaging, uploads, comments, user interactions). It is part of our broader Terms of Service and Community Guidelines.

2. Definitions & Prohibited Conduct

For the purposes of this policy, CSAE (Child Sexual Abuse & Exploitation) includes but is not limited to:

  • Child Sexual Abuse Material (CSAM): any visual depiction (images, videos, cartoons, computer-generated imagery) of a minor engaging in sexual activity, or any depiction of a minor's sexual parts for sexual purposes.
  • Grooming: befriending, contacting, or forming an emotional bond with a minor for sexual purposes or to prepare them for abuse or exploitation.
  • Sextortion: threatening to distribute or expose intimate images of a minor in order to coerce further sexual acts or favors.
  • Sexualization of minors: portraying or encouraging the portrayal of minors in a sexual manner or context, including provocative imagery or descriptions.
  • Trafficking, solicitation, or exploitation: offering or requesting minors for sexual purposes, or facilitating such behavior.

Such conduct is strictly prohibited on our platform. Any content, behavior, message, or account that violates these definitions will be subject to enforcement actions, including content removal, account suspension or termination, and escalation and reporting to the relevant authorities.

3. Published Standards & Transparency

We maintain this policy as a publicly accessible web document with a permanent, stable URL. It refers explicitly to CSAE, child safety, and our app name “Yoomee” as listed on Google Play, in order to satisfy Google's “Published Standards” requirement.

We also integrate these standards into our Terms of Service and Community Guidelines, with clear cross-references. Users are required to accept those terms and are made aware that violations involving minors will be treated with the highest severity.

4. In-App Reporting & User Feedback Mechanism

We provide a reporting mechanism within the app such that users can submit concerns, complaints, or suspicions of CSAE without leaving the app. This includes:

  • A “Report” button or link accessible from user profiles, chat threads, images, or content pages;
  • A structured reporting form with fields for description, optional upload (e.g. screenshot, image), date/time, user IDs, context, and optional contact info;
  • An acknowledgment to the reporter (e.g. “Your report has been submitted and is under review”).

All reports are forwarded into our moderation and escalation pipeline for review and action.
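
For illustration only, a structured report like the one described above could be represented along the following lines. This is a minimal Kotlin sketch; the type, field, and function names are hypothetical and do not describe Yoomee's actual data model or API.

    import java.time.Instant

    // Hypothetical shape of an in-app CSAE report record.
    data class CsaeReport(
        val reporterId: String?,                        // optional; anonymous reports are accepted
        val reportedUserId: String,                     // the account the report concerns
        val contentId: String?,                         // chat message, image, or profile being reported
        val description: String,                        // free-text description from the reporter
        val attachmentUrls: List<String> = emptyList(), // optional screenshots or uploads
        val occurredAt: Instant?,                       // date/time the reporter observed the issue
        val contactEmail: String?                       // optional contact info for follow-up
    )

    // The app acknowledges receipt and hands the report to the moderation pipeline.
    fun acknowledgmentMessage(): String =
        "Your report has been submitted and is under review."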

5. Moderation, Detection & Escalation Process

To effectively detect, prioritize, and respond to CSAE reports and content, we operate a multi-layered moderation pipeline:

  1. Automated Detection / Filtering: We use content scanning algorithms (image hashing, AI model analysis, keyword filters) to flag suspect content or messages in real time or in periodic scans.
  2. Human Review: Flagged content is escalated to trained human moderators for final decision, to reduce false positives and ensure contextual understanding.
  3. Prioritization: Reports indicating imminent risk, a minor in danger, or high severity are escalated to an urgent queue with shorter SLAs (e.g. review within 1 hour).
  4. Escalation to Safety / Legal Team: Cases requiring deeper investigation (e.g. possible grooming, repeated offenses, or suspected involvement of minors) are escalated to our internal Safety team and legal counsel.
  5. Appeal & Review: Where a user disputes moderation decisions, we maintain an appeals channel. Decisions and logs are audited periodically.
  6. Logging & Metrics: Each moderation decision is logged (who made it, when, context, decision) and aggregated metrics (report counts, resolution times, false positive rates) are tracked and reviewed monthly.

We continuously refine detection models, update rules, retrain classifiers, and monitor accuracy. We also run blind audits on random samples of moderation decisions to ensure quality and fairness.
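
As a rough illustration of items 3 and 6 above, the prioritization tiers and the decision log could be modeled as in the sketch below. The severity labels, the SLA values other than the 1-hour example, and the field names are assumptions made for illustration, not our production rules.

    import java.time.Duration
    import java.time.Instant

    // Illustrative severity tiers with review SLAs; only the 1-hour urgent SLA
    // comes from the example in item 3, the other values are assumptions.
    enum class Severity(val reviewSla: Duration) {
        URGENT(Duration.ofHours(1)),    // imminent risk or a minor in danger
        HIGH(Duration.ofHours(12)),
        STANDARD(Duration.ofHours(24))
    }

    fun classify(category: String, imminentRisk: Boolean): Severity = when {
        imminentRisk || category == "CSAM" -> Severity.URGENT
        category == "GROOMING" || category == "SEXTORTION" -> Severity.HIGH
        else -> Severity.STANDARD
    }

    // Decision log entry as described in item 6: who decided, when,
    // in what context, and what the outcome was.
    data class ModerationDecision(
        val reportId: String,
        val moderatorId: String,
        val decidedAt: Instant,
        val context: String,
        val outcome: String             // e.g. "REMOVE_CONTENT", "SUSPEND_ACCOUNT", "NO_VIOLATION"
    )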

6. Action on Actual Knowledge & CSAM Removal

Upon obtaining actual knowledge (via report, detection, or audit) of CSAE / CSAM content in our system, we commit to prompt, documented action consistent with applicable law and our standards, including the following steps (see the sketch after this list):

  • Immediate removal or disabling of access to the offending content;
  • Suspension or termination of user accounts that uploaded, shared, or distributed CSAE / CSAM;
  • Preservation of logs and context (while limiting exposure to further spread) to support investigations and actions by the relevant authorities;
  • Submission of confirmed CSAE / CSAM incidents to relevant authorities / agencies (e.g. NCMEC, local law enforcement) in compliance with applicable laws;
  • Internal escalation and review of related content or accounts possibly connected to the incident.
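
The sequence above could be orchestrated roughly as in the following sketch. Every interface here is a hypothetical placeholder used to show the ordering of steps; none of the names refer to real Yoomee services or to the reporting systems of any authority.

    // Hypothetical takedown workflow for confirmed CSAE / CSAM content.
    interface ContentStore { fun disableAccess(contentId: String) }
    interface AccountService { fun suspend(userId: String, reason: String) }
    interface EvidenceVault { fun preserve(contentId: String, context: String) }
    interface AuthorityReporting { fun fileReport(contentId: String, userId: String) }

    fun handleConfirmedIncident(
        contentId: String,
        uploaderId: String,
        context: String,
        content: ContentStore,
        accounts: AccountService,
        evidence: EvidenceVault,
        authorities: AuthorityReporting
    ) {
        evidence.preserve(contentId, context)         // preserve logs and context for investigators
        content.disableAccess(contentId)              // immediately remove or disable the content
        accounts.suspend(uploaderId, "CSAE policy violation")
        authorities.fileReport(contentId, uploaderId) // e.g. NCMEC or local law enforcement, per applicable law
    }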

7. Designated Child Safety / CSAE Point of Contact

We designate a point of contact empowered to receive CSAE-related notifications from Google or authorities, oversee enforcement processes, and act on escalations. This contact is integrated into our internal safety and review chain.

Name: [John Doe]

Title / Role: Head of Safety & Compliance

Email: link

Response SLA: We commit to responding to incoming notifications or escalations within 24 hours (or sooner for urgent cases).

8. User Empowerment, Education & Safety Tools

We provide users with resources, controls, and education to promote child safety:

  • Guidance pages or help center articles on recognizing grooming and exploitation and on how to report abuse;
  • Ability for users to block, mute, or restrict contact with others;
  • Automated warnings or safety reminders when a user is about to send personal data or private content to another user (see the sketch after this list);
  • Age gate / verification measures where applicable (if minors might access parts of the app) to limit exposure and restrict features;
  • Safety popups or disclaimers when enabling chat / sharing media for younger users, to raise awareness.
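
As one hypothetical example of the automated warnings mentioned above, a client-side check could look for personal contact details before a message is sent. The patterns and wording below are simplistic placeholders, not the rules actually used in the app.

    // Very rough check for personal data in an outgoing message.
    val phonePattern = Regex("""\+?\d[\d\s-]{7,}\d""")
    val emailPattern = Regex("""[\w.+-]+@[\w-]+\.[\w.]+""")

    // Returns a warning to show before sending, or null if no personal data is detected.
    fun safetyWarningFor(message: String): String? =
        if (phonePattern.containsMatchIn(message) || emailPattern.containsMatchIn(message))
            "It looks like you are about to share personal contact details. Only share them with people you trust."
        else
            null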

9. Audits, Metrics & Continuous Improvement

We maintain a program of review and improvement:

  • Monthly and quarterly audits of moderation logs and of false positive / false negative rates;
  • Internal reviews, random sampling, and peer review of decisions;
  • Incorporation of feedback, incident post-mortems, and continuous training of moderators and models;
  • Versioning and change logs for policy updates, with the date of each revision; new versions are published with an updated “Last Updated” date;
  • Maintaining and publishing anonymized, aggregate metrics (e.g. number of CSAE reports, % resolved within SLA) when possible, consistent with privacy constraints (see the sketch after this list).
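
For illustration, the aggregate metrics named above (for example, the share of reports resolved within SLA and the false positive rate) could be computed along these lines; the data shape is an assumption made for the sketch.

    import java.time.Duration

    // Assumed shape of a resolved report used for metric aggregation.
    data class ResolvedReport(
        val timeToResolution: Duration,
        val sla: Duration,
        val automatedFlagOverturned: Boolean  // a human reviewer overturned the automated flag
    )

    // Percentage of reports resolved within their SLA.
    fun percentResolvedWithinSla(reports: List<ResolvedReport>): Double =
        if (reports.isEmpty()) 0.0
        else 100.0 * reports.count { it.timeToResolution <= it.sla } / reports.size

    // Fraction of automated flags overturned on human review (a proxy for the false positive rate).
    fun falsePositiveRate(reports: List<ResolvedReport>): Double =
        if (reports.isEmpty()) 0.0
        else reports.count { it.automatedFlagOverturned }.toDouble() / reports.size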

10. Self-Certification & Google Play Declaration

In our submission in the Google Play Console (Child Safety Standards / CSAE section), we self-certify that we satisfy all of the following:

  • We maintain and link to this publicly accessible CSAE standard (this page).
  • We provide an in-app feedback mechanism for CSAE reporting.
  • We act appropriately when obtaining actual knowledge of CSAE / CSAM, including content removal and escalation.
  • We comply with applicable child safety laws and report to authorities as required.
  • We designate a point of contact empowered to handle CSAE notifications from Google or regulators.

11. Revision History & Version Log

We version and track changes to this policy. Below is a short log:

  • 2025-10-17: Major revision to align with Google Play's updated Child Safety Standards and to add more detailed moderation, escalation, and metrics sections.

12. Contact & Inquiries

For questions, feedback, or additional information about this policy or child safety on our platform, contact us:

Developer: mobile TREND GmbH

Email: Support