Content Safety & Abuse Reporting
Effective Date: April 23, 2026
Pixel Dojo prohibits the use of our Service for illegal, abusive, or harmful content. This includes content involving or implying minors in sexual contexts, non-consensual intimate imagery, harassment, hate speech, extremist violence, and content that violates privacy, intellectual property, or applicable law. We invest heavily in preventing this content from being created, stored, or served.
Our Layered Safety Controls
We operate four independent layers of safety enforcement. Each runs automatically on every relevant request and can block harmful content even if another layer fails. We do not rely on a single check.
1. Server-side prompt analysis (every tool, every API call)
Every text prompt submitted to any of our generation tools, our public API, our multi-tool Canvas surface, our Creator Studio storyboards, and our LoRA training fields is analyzed before generation begins. We use a three-stage check: a list of known harmful name patterns, a deterministic phrase scanner targeting content involving minors, and an AI-powered content classifier that interprets context (so benign requests like “teenage mutant ninja turtle” pass while genuine harmful intent is blocked). Prompts that fail are rejected with a clear, actionable message; the request never reaches an AI model.
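For illustration, here is a minimal sketch of how a three-stage check like this can be wired together. It is written in TypeScript with hypothetical names; checkPrompt, classifyWithModel, and the pattern lists are placeholders for illustration, not our production code.

```typescript
// Hypothetical sketch of a three-stage prompt check; the patterns and the
// classifier stub are placeholders, not Pixel Dojo's production rules.
const BLOCKED_NAME_PATTERNS: RegExp[] = []; // known harmful name patterns
const MINOR_PHRASE_PATTERNS: RegExp[] = []; // deterministic phrases involving minors

type PromptVerdict =
  | { allowed: true }
  | { allowed: false; stage: "names" | "phrases" | "classifier"; reason: string };

// Stage 3 stub: an AI classifier that weighs context rather than keywords.
async function classifyWithModel(prompt: string): Promise<{ harmful: boolean; reason: string }> {
  return { harmful: false, reason: "" };
}

async function checkPrompt(prompt: string): Promise<PromptVerdict> {
  // Stage 1: cheap, deterministic name-pattern scan.
  if (BLOCKED_NAME_PATTERNS.some((p) => p.test(prompt))) {
    return { allowed: false, stage: "names", reason: "Prompt matches a blocked name pattern." };
  }
  // Stage 2: deterministic phrase scan targeting content involving minors.
  if (MINOR_PHRASE_PATTERNS.some((p) => p.test(prompt))) {
    return { allowed: false, stage: "phrases", reason: "Prompt matches a prohibited phrase." };
  }
  // Stage 3: contextual classifier, so a benign prompt like
  // "teenage mutant ninja turtle" is not rejected on keywords alone.
  const verdict = await classifyWithModel(prompt);
  if (verdict.harmful) {
    return { allowed: false, stage: "classifier", reason: verdict.reason };
  }
  return { allowed: true }; // only now does the request reach a model
}
```

The ordering matters: the deterministic stages are fast and cannot be bypassed by a classifier outage, while the classifier stage adds the contextual judgment the first two lack.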
2. Pre-submission image classifier (NSFW-capable tools)
Every image uploaded to a tool that can produce explicit imagery is run through a purpose-built minor-detection classifier before it reaches the generation provider. Uploads that appear to depict children are rejected at upload time with a clear message. This includes reference images, frame inputs for video tools, and images selected from your saved library. Tools that cannot produce explicit imagery (upscalers, background removers, image analyzers) are unaffected.
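As a sketch of where this gate sits in the upload path (TypeScript; the tool list and the classifier stub are hypothetical names for illustration, not our actual implementation):

```typescript
// Illustrative upload gate: only tools that can produce explicit imagery run
// the minor-detection classifier. All names here are hypothetical.
const NSFW_CAPABLE_TOOL_IDS = new Set(["image-gen", "video-gen", "canvas"]);

// Stub for the purpose-built minor-detection classifier.
async function appearsToDepictMinor(image: Uint8Array): Promise<boolean> {
  return false;
}

async function acceptUpload(toolId: string, image: Uint8Array): Promise<void> {
  // Upscalers, background removers, and analyzers skip the check entirely.
  if (NSFW_CAPABLE_TOOL_IDS.has(toolId) && (await appearsToDepictMinor(image))) {
    // Rejected at upload time, before the image reaches any generation provider.
    throw new Error("This image appears to depict a minor and cannot be used with this tool.");
  }
  // ...otherwise forward to the provider or save to the user's library.
}
```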
3. Storage-layer CSAM scanning (every image, every upload path)
Every image stored on Pixel Dojo — generation outputs, reference images, and LoRA training datasets — is served through a storage and content delivery network that runs Cloudflare’s CSAM Scanning Tool. This uses fuzzy-hash matching against known-bad databases maintained by NCMEC and child safety organizations. Detections are reviewed and reported to NCMEC’s CyberTipline as required by 18 U.S.C. § 2258A.
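The scanning itself is Cloudflare's tool running at the network edge, not code we write; what the application side enforces is that every image write, on every path, goes through the same CDN-fronted storage. A simplified conceptual sketch of that single write path follows (TypeScript; all names and the CDN domain are placeholders, not our real infrastructure):

```typescript
// Conceptual sketch: all three upload paths converge on one storage client
// fronted by the scanned CDN, so no write bypasses the scanning layer.
// Names and the CDN domain are placeholders.
type UploadPath = "generation-output" | "reference-image" | "lora-dataset";

async function putObject(key: string, bytes: Uint8Array): Promise<void> {
  // Placeholder for the single write to the CDN-fronted bucket.
}

async function storeImage(path: UploadPath, id: string, bytes: Uint8Array): Promise<string> {
  const key = `${path}/${id}`;
  await putObject(key, bytes);
  // Images are served only through the CDN where edge scanning runs.
  return `https://cdn.example.invalid/${key}`;
}
```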
4. Provider-level output filters and post-publication review
Generated images pass through the safety filters built into each upstream model. Images that become public on the community gallery are additionally scanned and may be unpublished automatically. We retain manual review authority over any image marked for public sharing.
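A rough sketch of the post-publication step (TypeScript; the scanner, unpublish, and review-queue functions are hypothetical stand-ins):

```typescript
// Hypothetical publish flow for the community gallery: the image has already
// passed the upstream model's output filter; going public triggers a second
// scan and may auto-unpublish, with manual review retained.
async function scanPublicImage(imageId: string): Promise<{ ok: boolean }> {
  return { ok: true }; // stand-in for the gallery scanner
}
async function unpublish(imageId: string): Promise<void> {}
async function queueForManualReview(imageId: string): Promise<void> {}

async function publishToGallery(imageId: string): Promise<void> {
  const { ok } = await scanPublicImage(imageId);
  if (!ok) {
    await unpublish(imageId); // automatic unpublish on detection
    await queueForManualReview(imageId); // a human still reviews the result
  }
}
```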
Audit Trail and Observability
Every safety event — rejected prompt, rejected image, storage-layer detection — is recorded with the category of violation, the layer that triggered it, the affected user, and a timestamp. Our internal Tool Health dashboard surfaces these events in real time, allowing the team to investigate clusters, false-positive trends, and individual incidents. We preserve safety records as required for any necessary law enforcement reporting.
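The record behind each event can be as simple as the sketch below (TypeScript; the field names are assumptions for illustration, not our actual schema):

```typescript
// Sketch of a safety-event record; field names are illustrative assumptions.
type SafetyLayer = "prompt-analysis" | "image-classifier" | "storage-scan" | "output-filter";

interface SafetyEvent {
  category: string;   // category of violation
  layer: SafetyLayer; // which layer triggered the block
  userId: string;     // affected user
  occurredAt: string; // ISO-8601 timestamp
}

function recordSafetyEvent(event: SafetyEvent): void {
  // In production this would persist to the store behind the Tool Health
  // dashboard and be retained for any required law-enforcement reporting.
  console.log(JSON.stringify(event));
}

recordSafetyEvent({
  category: "prohibited-prompt",
  layer: "prompt-analysis",
  userId: "user_123",
  occurredAt: new Date().toISOString(),
});
```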
Fail-Safe Behavior
Our safety controls are designed to fail in the safer direction. If our prompt-analysis service is unreachable, prompts that match known harmful patterns are blocked rather than allowed through. Storage-layer scanning runs continuously at the network edge and is independent of our application servers. The legal-floor protections continue to operate even if one of our internal systems experiences an outage.
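The fail-closed pattern for the prompt layer looks roughly like this (TypeScript; the remote classifier and the pattern list are hypothetical stand-ins):

```typescript
// Fail-closed sketch: if the remote AI classifier is unreachable, fall back
// to the deterministic patterns and block on any match, rather than letting
// the prompt through unexamined. All names are hypothetical.
const DETERMINISTIC_BLOCK_PATTERNS: RegExp[] = []; // the legal-floor patterns

async function classifyRemotely(prompt: string): Promise<{ harmful: boolean }> {
  throw new Error("classifier unreachable"); // simulate an outage
}

async function isPromptAllowed(prompt: string): Promise<boolean> {
  try {
    const { harmful } = await classifyRemotely(prompt);
    return !harmful;
  } catch {
    // Outage path: the deterministic checks still apply, and anything that
    // matches a known harmful pattern is blocked, not waved through.
    return !DETERMINISTIC_BLOCK_PATTERNS.some((p) => p.test(prompt));
  }
}
```

The key property is that an outage narrows what is allowed; it never widens it.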
How to Report
If you encounter content on Pixel Dojo that you believe violates our policies or applicable law (including suspected abuse, non-consensual imagery, copyright infringement, or any other illegal activity), please report it immediately.
Include the URL of the content, a description of the concern, and any supporting evidence (screenshots, image IDs, account names). Reports are reviewed by our safety team and acted on within five (5) business days, with immediate action for clear or egregious violations.
What Happens Next
Reports are triaged based on severity. Clear violations may be removed immediately and the responsible account suspended pending review. We may take action including content removal, feature limits, account suspension, and notification of law enforcement where legally required. Suspected child sexual abuse material is escalated to NCMEC’s CyberTipline; suspected trafficking content is escalated to the National Human Trafficking Hotline and relevant authorities.
Appeals may be submitted by emailing [email protected] with the subject line “Appeal” and any context that supports your claim. Appeals are reviewed by a team member other than the one who took the original action.