Safety and Content Publishing Rules
We strive to create a safe and comfortable space for all users of our application "Need Help?!". To achieve this, we have established strict safety and content publishing standards aimed at protecting users from harmful, offensive, and illegal content, as well as ensuring ethical interactions within our community.
1. Application Usage Safety
1.1. User Protection
We take measures to prevent fraud, harassment, cyberbullying, and other harmful behavior in the application:
- We use automated systems to detect and block unwanted content.
- We provide complaint mechanisms and promptly moderate violations.
- We prohibit the use of the application to promote hate speech, discrimination, threats, or violence.
- We ensure the security of personal data by complying with international privacy standards (GDPR, CCPA).
1.2. Privacy Policy
We protect users' personal data by limiting its collection, processing, and transfer to third parties. More details are available on the Privacy Policy page of our website n-e-e-d.help or in the "Need Help?!" app.
1.3. Children's Safety Standards
Although our app "Need Help?!" is intended for adults, minors may nevertheless use it. We therefore strive to create a safe and friendly environment for all users, especially children. Our safety standards include the following principles:
1.3.1. CSAE Compliance and CSAM Prevention
Our application "Need Help?!" fully complies with child sexual abuse and exploitation (CSAE) prevention standards and strictly opposes all forms of child exploitation in the digital space, including Child Sexual Abuse Material (CSAM). We take the following measures to protect children:
- Detection and removal of CSAM – We use automated technologies to identify and block illegal content related to child sexual exploitation and actively cooperate with law enforcement and child protection organizations.
- Prohibition and suppression of all forms of violence – The application strictly prohibits any content promoting or containing violence against children, threats, harassment, and exploitation. We respond immediately to reports of such violations.
- Safe online interactions – We implement filtering mechanisms to prevent cyberbullying, unwanted contacts, and harmful behavior.
- Minimal data collection – We do not request, store, or share children’s personal data without explicit parental or guardian consent.
- Reporting and moderation mechanisms – Children and parents can easily report suspicious content or behavior, and our team responds promptly, including notifying the appropriate authorities when necessary.
We regularly update our safety policies to align with the latest CSAE requirements, international regulations, and best global practices for child protection in the digital environment.
1.3.2. Child-Appropriate Content
All content undergoes careful review and moderation to meet age restrictions and exclude materials that could harm children.
1.3.3. Protection of Personal Data
We strictly adhere to COPPA, GDPR-K, and other regulations that safeguard minors' data.
1.3.4. Safe Interactions
We use message filtering, automated risk detection, and manual moderation systems to prevent cyberbullying and unwanted communication.
1.3.5. Quick Response to Reports
We ensure prompt review of complaints, blocking of violators, and removal of inappropriate content.
If you have any questions or concerns regarding child safety in our application, please contact us through the Contact Us form on our website n-e-e-d.help or in the "Need Help?!" app.
2. User Content Publishing Rules
We encourage creativity and self-expression, but all user-generated content must adhere to the following standards:
2.1. Prohibited Content
The following types of content are strictly prohibited from being published, shared, or promoted:
- Violent and abusive content – including depictions of violence, bullying, self-harm, and animal cruelty.
- Illegal content – materials related to drugs, illegal services, terrorism, human trafficking, and other unlawful activities.
- Pornographic and sexual content – including nudity, explicit sexual content, and content exploiting vulnerable groups.
- Hate speech and insults – content that promotes discrimination, racism, sexism, homophobia, or any form of hate.
- Fake news and misinformation – false information that misleads users or harms the public interest.
- Fraud and spam – fraudulent advertising, phishing, and pyramid schemes.
2.2. User Responsibilities
Users must:
- Follow netiquette and online communication norms.
- Respect others' rights and not share personal data of third parties without their consent.
- Verify the accuracy of information before sharing it.
- Respect copyright and avoid uploading content they are not authorized to share.
3. Moderation and Penalties for Violations
We implement a combined approach to content moderation:
- Automated detection – AI-powered filters block prohibited materials.
- Manual moderation – our moderation team reviews disputed cases.
- User reports – any user can report inappropriate content through built-in complaint mechanisms.
3.1. Possible Penalties
In the event of a violation, we may:
- Remove the content.
- Restrict access to certain app features.
- Temporarily or permanently block the user’s account.
- Report serious violations to law enforcement authorities.
4. Safe User Interactions
To maintain a friendly and secure environment within the app "Need Help?!", we recommend:
- Being respectful – avoiding conflicts and aggressive behavior.
- Not sharing personal information – such as passwords, home addresses, or payment details.
- Being cautious with strangers – avoiding suspicious offers and links.
- Reporting violations – if you encounter inappropriate content or behavior, use the reporting function.
We regularly update our safety standards and content publishing rules to reflect best practices in digital security and user protection. If you have any questions or complaints, please contact us through the Contact Us form on our website n-e-e-d.help or in the "Need Help?!" app.