
MyOrbit Safety Policy

Last Updated: September 5, 2025 • Effective Date: September 5, 2025 • Version 2.0.0

Welcome to MyOrbit! Your safety is our top priority. This Safety Policy explains how we protect our community through proactive moderation, ethical AI practices, and comprehensive user protection measures. It works alongside our Terms of Service, Privacy Policy, and Cookie Notice to create a secure environment for messaging, AI companions, digital twins, and creator experiences.

If you have questions or need to report safety concerns, contact us immediately at safety@myorbit.ai or through our in-app reporting tools.

Table of Contents

  1. Our Safety Framework
  2. Content Moderation System
  3. AI Safety and Ethics
  4. Child and Minor Safety
  5. Creator and Business Safety
  6. Platform-Specific Safety Measures
  7. Privacy-Preserving Safety
  8. Crisis Response and Support
  9. Legal Compliance and Cooperation
  10. Safety Tools and Features
  11. Incident Response Plan
  12. Community Guidelines
  13. Safety Partnerships
  14. Updates and Changes
  15. Contact and Support
  16. Accessibility and Inclusion

1. Our Safety Framework

1.1 Core Safety Commitments

We maintain zero tolerance for:

  • Child sexual abuse material (CSAM) or child exploitation
  • Terrorism content or promotion
  • Non-consensual intimate content
  • Self-harm instruction or promotion
  • Severe harassment or doxxing
  • Criminal activity coordination

1.2 MyOrbit Risk Tiers

We classify content and features into risk tiers to ensure age-appropriate experiences:

Tier 0 - Open Access (All Users 13+)

  • Educational content and general conversations
  • Public avatars marked "General"
  • Business information and support
  • Creative content without mature themes
  • News, weather, general AI assistance

Tier 1 - Contextual Guidance (13+ with Warnings)

  • Mental health and relationship discussions
  • Historical or educational conflict content
  • Health information (with medical disclaimers)
  • AI emotional support companions
  • Substance education in appropriate context

Tier 2 - Age-Restricted (18+ Only)

  • Adult humor and mature themes
  • Avatars marked "Adult" (non-sexual)
  • Dating and relationship advice for adults
  • Alcohol/cannabis content (where legal)
  • Entertainment violence (games, movies)

Tier 3 - Verified Adult with Opt-In (18+ with Extra Verification)

  • Erotica avatars and content
  • Adult roleplay scenarios
  • Sexually suggestive content
  • High-stakes business AI twins
  • Content requiring explicit consent

Tier 4 - Prohibited (Automatic Blocking)

  • All content listed in our zero-tolerance commitment
  • Illegal activities of any kind
  • Content violating our Terms of Service

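To make the tier logic above concrete, here is a minimal illustrative sketch of how the five risk tiers could map to access rules. This is a hypothetical model for explanatory purposes only; the names, structure, and rules are assumptions, not MyOrbit's actual implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TierRule:
    minimum_age: int        # youngest age allowed to access this tier
    requires_opt_in: bool   # Tier 3 requires explicit verified opt-in
    blocked: bool = False   # Tier 4 is blocked for everyone

# Hypothetical rules mirroring Tiers 0-4 described in this section.
TIER_RULES = {
    0: TierRule(minimum_age=13, requires_opt_in=False),
    1: TierRule(minimum_age=13, requires_opt_in=False),  # shown with warnings
    2: TierRule(minimum_age=18, requires_opt_in=False),
    3: TierRule(minimum_age=18, requires_opt_in=True),   # extra verification
    4: TierRule(minimum_age=0, requires_opt_in=False, blocked=True),
}

def can_access(tier: int, age: int, opted_in: bool = False) -> bool:
    """Return True if a user of the given age may access content in this tier."""
    rule = TIER_RULES[tier]
    if rule.blocked:
        return False  # Tier 4 content is always blocked automatically
    if age < rule.minimum_age:
        return False
    if rule.requires_opt_in and not opted_in:
        return False
    return True
```

For example, under this sketch a 17-year-old could access Tier 1 content but not Tier 2, and a verified adult would still need to opt in before seeing Tier 3 content.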
1.3 Encryption Mode Safety Differences

AI Encryption Mode (Default):

  • Automated content scanning for safety
  • Proactive threat detection
  • Real-time intervention capabilities
  • Crisis support features active
  • Full moderation toolkit available

Superior E2EE Mode (Optional):

  • Zero access by MyOrbit to message content
  • No automated safety scanning possible
  • Users fully responsible for content
  • Report violations with evidence
  • Account-level actions only

2. Content Moderation System

2.1 Multi-Layer Defense Architecture

In AI Encryption mode, we employ:

  • Client-Side Filtering: Device-level ML models for initial detection
  • Edge Protection: CDN-level blocking of known harmful content
  • API Gateway: Rate limiting and anomaly detection
  • AI Moderation: OpenAI Moderation API + custom safety layers
  • Human Review: 24/7 moderation team for nuanced decisions
  • Post-Publication Monitoring: Ongoing safety checks

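The layered defense above can be sketched as a short-circuiting pipeline: each layer gets a chance to block content, and anything that passes automated checks can still be queued for human review. This is a simplified, hypothetical illustration; the layer names and logic are assumptions, not MyOrbit's internal systems.

```python
from typing import Callable, Optional

# A moderation layer returns a block reason, or None to let content pass.
Layer = Callable[[str], Optional[str]]

def client_side_filter(text: str) -> Optional[str]:
    # Device-level heuristic: block known-bad phrases before upload.
    banned = {"known-harmful-phrase"}
    return "client_filter" if any(p in text for p in banned) else None

def gateway_anomaly_check(text: str) -> Optional[str]:
    # Placeholder for API-gateway rate limiting / anomaly detection.
    return None

def ai_moderation(text: str) -> Optional[str]:
    # Placeholder for a call to an external moderation API.
    return None

PIPELINE: list[Layer] = [client_side_filter, gateway_anomaly_check, ai_moderation]

def moderate(text: str) -> str:
    """Run each layer in order; the first layer to object blocks the content."""
    for layer in PIPELINE:
        reason = layer(text)
        if reason is not None:
            return f"blocked:{reason}"
    return "queued_for_human_review"  # nuanced cases still reach humans
```

The design point is ordering: cheap client-side and edge checks run first, so expensive AI moderation and scarce human reviewer time are spent only on content that survives the earlier layers.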
2.2 Reporting and Response

How to Report:

  • Use in-app report buttons on any content
  • Email safety@myorbit.ai for urgent issues
  • Contact law enforcement directly for immediate danger

Our Response Timeline:

  • Immediate: CSAM, terrorism, imminent harm
  • Within 1 hour: Severe violations
  • Within 6 hours: High-priority reports
  • Within 24 hours: Standard reports
  • Within 48 hours: Low-priority issues
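The response windows above amount to a severity-to-deadline table. A minimal sketch, with hypothetical severity labels, of how such a service-level check might look:

```python
# Target response windows in hours; 0 means immediate response.
# Labels are illustrative assumptions mapped to the tiers listed above.
RESPONSE_SLA_HOURS = {
    "immediate": 0,   # CSAM, terrorism, imminent harm
    "severe": 1,
    "high": 6,
    "standard": 24,
    "low": 48,
}

def is_within_sla(severity: str, hours_elapsed: float) -> bool:
    """Check whether a response time met the target window for its severity."""
    return hours_elapsed <= RESPONSE_SLA_HOURS[severity]
```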

2.3 Enforcement Actions

Based on severity and history:

  • Warning: First-time minor violations
  • Content Removal: Policy-violating material
  • Feature Restrictions: Limited functionality
  • Temporary Suspension: 24 hours to 30 days
  • Permanent Ban: Severe or repeated violations
  • Law Enforcement Referral: Illegal activities

2.4 Appeals Process

  • Submit appeals within 30 days of action
  • Different reviewer examines each appeal
  • Response within 7 business days
  • Final decisions may be escalated to senior team

3. AI Safety and Ethics

3.1 Transparent AI Interactions

All AI features include:

  • Clear labeling as artificial intelligence
  • Periodic reminders of AI nature
  • Disclaimers about limitations
  • Prohibition on impersonation
  • Model information when relevant

3.2 AI Companion Boundaries

Our AI companions:

ARE: Entertainment products, conversational partners, creative tools

ARE NOT: Therapists, medical advisors, real people, emergency services

For mental health support:

  • Crisis hotline: 988 (US) or local equivalent
  • Emergency: 911 or local emergency number
  • Professional help: Licensed therapists only

3.3 Preventing Unhealthy Dependencies

We monitor for and address:

  • Excessive usage patterns
  • Emotional dependency indicators
  • Reality dissociation signs
  • Isolation behaviors
  • Crisis language

Interventions include:

  • Usage reminders and breaks
  • Reality check prompts
  • Resource suggestions
  • Professional help referrals
  • Conversation limits when needed

3.4 Model Safety and Selection

  • Different AI models for different risk tiers
  • Automatic safety filtering on all outputs
  • Refusal of harmful requests
  • Bias monitoring and correction
  • Regular safety audits

4. Child and Minor Safety

4.1 Age Verification System

Progressive Verification:

  • Basic (All users): Phone number verification
  • Intermediate (Sensitive features): Photo age estimation
  • Advanced (High-risk features): Government ID verification

4.2 Minor-Specific Protections (13-17)

  • Private profiles by default
  • No erotica content access
  • Limited adult messaging
  • Enhanced content filtering
  • Parental control options
  • Educational safety resources
  • Restricted monetization features

4.3 Parental Controls

Parents/guardians can:

  • Link to minor's account
  • Monitor activity (with minor's knowledge)
  • Set usage limits
  • Review connections
  • Request data deletion
  • Receive safety alerts

4.4 COPPA Compliance (Under 13)

For U.S. users under 13:

  • Verifiable parental consent required
  • Minimal data collection
  • No behavioral advertising
  • Parent access to all data
  • Immediate deletion upon request

5. Creator and Business Safety

5.1 Creator Protections

  • Identity verification for monetization
  • "Look Ma I'm Famous" IP protection system
  • 30-day grace period for unauthorized avatars
  • Boxing system for celebrity likeness
  • DMCA tools and processes
  • Revenue protection measures

5.2 Business Twin Safeguards

  • Verified business accounts only
  • Accuracy disclaimers required
  • Audit trails for all interactions
  • Hallucination detection systems
  • Human approval for high-stakes decisions
  • Professional boundary enforcement

5.3 Community Standards for Creators

Creators must:

  • Accurately label content tiers
  • Verify age-appropriate audiences
  • Respect IP rights
  • Maintain professional boundaries
  • Follow monetization guidelines
  • Respond to safety reports

6. Platform-Specific Safety Measures

6.1 Messaging Safety

  • Spam and phishing detection
  • Malware scanning (AI mode)
  • Unsolicited content filtering
  • Block and mute tools
  • Message encryption options
  • Forward limiting

6.2 Group and Community Safety

  • Admin moderation tools
  • Community guidelines enforcement
  • Member verification options
  • Content filtering by age
  • Reporting escalation paths
  • Automatic tier restrictions

6.3 Marketplace Safety

  • Seller verification requirements
  • Prohibited items enforcement
  • Transaction monitoring
  • Dispute resolution process
  • Fraud detection systems
  • INFORM Act compliance

7. Privacy-Preserving Safety

7.1 How We Balance Privacy and Safety

In AI Encryption Mode:

  • Content processed in secure memory only
  • No permanent storage of conversations
  • Automated scanning for severe violations
  • Metadata analysis for patterns
  • Crisis detection active

In Superior E2EE Mode:

  • Zero content access
  • User reporting with evidence only
  • Metadata pattern analysis
  • Account-level actions
  • Section 230 protections maintained

7.2 Data Minimization

We collect only what's necessary for safety:

  • Basic account information
  • Safety-relevant metadata
  • User reports and evidence
  • Verification documents (deleted after verification)
  • Behavioral patterns (anonymized)

8. Crisis Response and Support

8.1 Crisis Detection (AI Mode Only)

Automated detection for:

  • Suicide or self-harm expressions
  • Violence planning
  • Child safety threats
  • Medical emergencies
  • Mental health crises

8.2 Crisis Response Protocol

  • Immediate safety check
  • Warm handoff to human support
  • Crisis resources provided
  • Professional referrals offered
  • Follow-up when appropriate
  • Never punitive for help-seeking

8.3 Emergency Resources

United States:

  • Crisis Support: 988
  • Emergency: 911
  • RAINN Hotline: 1-800-656-4673
  • Domestic Violence: 1-800-799-7233

International:

  • Find local resources at findahelpline.com
  • Emergency: Local equivalent of 911

9. Legal Compliance and Cooperation

9.1 Regulatory Compliance

We comply with:

  • GDPR: EU data protection and privacy
  • DSA: Digital Services Act requirements
  • AI Act: EU AI regulations (February 2025)
  • COPPA: Children's privacy protection
  • CCPA/CPRA: California privacy rights
  • Section 230: Platform protections
  • DMCA: Copyright enforcement

9.2 Law Enforcement Cooperation

What we can provide (AI Mode):

  • Account information with valid legal process
  • Available message content with warrant
  • Emergency disclosure for imminent harm
  • Preservation requests honored

What we cannot provide (E2EE Mode):

  • Encrypted message content (no keys exist)
  • Content we cannot technically access
  • Decryption of end-to-end encrypted data

9.3 Transparency Reporting

Quarterly reports include:

  • Content removal statistics
  • Account action metrics
  • Law enforcement requests
  • Appeal outcomes
  • Safety investment updates

10. Safety Tools and Features

10.1 User Control Tools

  • Block: Prevent all interaction
  • Mute: Hide content without blocking
  • Report: Flag violations for review
  • Filter: Customize content visibility
  • Restrict: Limit who can contact you
  • Safety Mode: Enhanced protection settings

10.2 Proactive Safety Features

  • Suspicious pattern detection
  • New device login alerts
  • Unusual activity warnings
  • Password breach monitoring
  • Two-factor authentication
  • Safety number verification (E2EE)

10.3 Educational Resources

  • Digital literacy guides
  • Parent resources
  • Creator safety handbook
  • Business compliance guide
  • Mental health resources
  • Privacy best practices

11. Incident Response Plan

11.1 Security Incidents

  • Within 1 hour: Containment and initial assessment
  • Within 6 hours: Senior team notification
  • Within 24 hours: User notifications if affected
  • Within 72 hours: Regulatory notifications (GDPR)
  • Within 7 days: Public transparency update
  • Within 30 days: Complete post-mortem

11.2 Safety Incidents

  • Immediate: Remove illegal content, secure evidence
  • Within 1 hour: Escalate to safety team
  • Within 6 hours: Law enforcement contact if required
  • Within 24 hours: Affected user support
  • Within 48 hours: Community update if needed

11.3 Post-Incident Actions

  • Root cause analysis
  • Process improvements
  • Tool enhancements
  • Training updates
  • Policy revisions
  • Community communication

12. Community Guidelines

12.1 Expected Behavior

  • Treat others with respect
  • Protect minors from harm
  • Respect privacy and consent
  • Share responsibly
  • Report violations
  • Support community safety

12.2 Prohibited Behavior

See Section 5.2 of our Terms of Service for the complete list.

Key prohibitions:

  • Illegal activities
  • Harm to minors
  • Non-consensual content
  • Severe harassment
  • Impersonation
  • Platform manipulation

12.3 Cultural Sensitivity

We respect diverse communities while maintaining safety standards:

  • Context considered in moderation
  • Cultural expression protected
  • Hate speech never tolerated
  • Educational content preserved
  • Artistic expression supported
  • Religious freedom respected

13. Safety Partnerships

13.1 Industry Collaboration

We work with:

  • National Center for Missing & Exploited Children (NCMEC)
  • Technology Coalition
  • Family Online Safety Institute (FOSI)
  • Cyber Civil Rights Initiative
  • Crisis Text Line
  • Academic safety researchers

13.2 Technology Partners

  • PhotoDNA: CSAM detection
  • Perspective API: Toxicity analysis
  • OpenAI: Content moderation
  • Anthropic: Safety alignment
  • Age verification services: Minor protection

14. Updates and Changes

14.1 Policy Updates

  • Regular reviews (quarterly minimum)
  • User notification of material changes
  • 30-day notice before implementation
  • Archived versions available
  • Feedback incorporation

14.2 Continuous Improvement

We continuously enhance safety through:

  • User feedback integration
  • Incident learning
  • Research partnerships
  • Technology advancement
  • Regulatory alignment
  • Community input

15. Contact and Support

15.1 Safety Team Contact

Urgent Safety Issues:

  • Email: safety@myorbit.ai
  • In-app: Safety Report button
  • Response: Within 24 hours

General Safety Questions:

  • Email: support@myorbit.ai
  • Help Center: myorbit.ai/safety
  • Response: Within 48 hours

Media and Research:

  • Email: social@myorbit.ai
  • Research: info@myorbit.ai

15.2 Escalation Path

  1. In-app reporting
  2. Safety team email
  3. Senior safety team escalation
  4. Executive review (critical issues)
  5. Board oversight (systemic concerns)

15.3 Office Hours

  • Safety team: 24/7 coverage
  • Urgent response: Always available
  • Standard support: 24-48 hours
  • Appeals: 7 business days

16. Accessibility and Inclusion

16.1 Inclusive Safety

Safety tools available for all users:

  • Screen reader compatible reporting
  • Multiple language support
  • Clear, simple language options
  • Visual and audio alternatives
  • Assistive technology support

16.2 Vulnerable User Protection

Enhanced protections for:

  • Users with disabilities
  • LGBTQIA+ community members
  • Minority communities
  • At-risk populations
  • Elderly users
  • New internet users

Conclusion

Your safety drives everything we do at MyOrbit. This policy represents our commitment to creating a secure, inclusive platform where innovation and protection go hand-in-hand. We're transparent about our capabilities and limitations, especially regarding encryption choices, and we empower you with tools and knowledge to stay safe.

Thank you for trusting MyOrbit and for being an active participant in our community's safety. Together, we're building the future of human-AI interaction with safety, transparency, and respect at the core.

Remember:

  • Your safety is our priority
  • Report concerns immediately
  • We're here to help 24/7
  • You control your experience
  • Privacy and safety can coexist

Version: 2.0.0

Last Review: September 5, 2025

Next Review: December 5, 2025

Previous Versions: Available on request via safety@myorbit.ai

MyOrbit, Inc.
730 Moreno Ave
Palo Alto, CA 94303
United States

info@myorbit.ai

© 2025 MyOrbit, Inc. All rights reserved.
