EvoLink Moderation 1.0 API
Highest stability with guaranteed 99.9% uptime. Recommended for production environments.
Use the same API endpoint for all versions. Only the model parameter differs.
Pricing
Per-call billing: a flat rate per call, payable in USD or credits.
Billing rules
- Flat 1 credit per call — no token-based billing and no volume tiers.
- Cost is independent of input length or whether an image is included.
- Every response carries the standard moderation fields plus an evolink_summary block summarizing risk_level and violations.
- Failed requests (4xx/5xx) are not billed.
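For orientation, a hypothetical response might look like the following. The layout mirrors the OpenAI moderations response shape, which this API is documented to be compatible with; all values, and the truncated `id`, are illustrative only.

```json
{
  "id": "modr-...",
  "model": "evolink-moderation-1.0",
  "results": [
    {
      "flagged": true,
      "categories": { "harassment": true, "hate": false },
      "category_scores": { "harassment": 0.91, "hate": 0.12 },
      "evolink_summary": {
        "risk_level": "high",
        "flagged": true,
        "violations": ["harassment"],
        "max_category": "harassment",
        "max_score": 0.91
      }
    }
  ]
}
```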
Risk-level thresholds
EvoLink applies per-category thresholds to compute risk_level. Any category at or above its high threshold sets risk_level = high and flagged = true.
| Category | Medium | High | Sensitivity |
|---|---|---|---|
| sexual/minors | 0.05 | 0.20 | Red-line — never tolerated |
| self-harm | 0.30 | 0.60 | Strict — low false-negative bias |
| self-harm/intent | 0.30 | 0.60 | Strict — low false-negative bias |
| self-harm/instructions | 0.30 | 0.60 | Strict — low false-negative bias |
| violence/graphic | 0.40 | 0.70 | Strict — low false-negative bias |
| illicit/violent | 0.40 | 0.70 | Strict — low false-negative bias |
| sexual | 0.50 | 0.80 | Standard |
| violence | 0.50 | 0.80 | Standard |
| harassment/threatening | 0.50 | 0.80 | Standard |
| hate/threatening | 0.50 | 0.80 | Standard |
| harassment | 0.60 | 0.85 | Relaxed — fewer false positives |
| hate | 0.60 | 0.85 | Relaxed — fewer false positives |
| illicit | 0.60 | 0.85 | Relaxed — fewer false positives |
Thresholds may evolve. The API is the source of truth for current production values.
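The mapping from category scores to risk_level can be sketched as follows. This is an illustration of the documented threshold logic, not the service's actual implementation; `compute_risk_level` is a hypothetical helper, the dictionaries hold a subset of the published thresholds, and the medium-level behavior (a score at or above the medium threshold adds the category to violations) is an assumption based on the field descriptions above.

```python
# Subset of the published per-category thresholds (see table above).
MEDIUM = {"sexual/minors": 0.05, "self-harm": 0.30, "violence/graphic": 0.40,
          "sexual": 0.50, "harassment": 0.60}
HIGH = {"sexual/minors": 0.20, "self-harm": 0.60, "violence/graphic": 0.70,
        "sexual": 0.80, "harassment": 0.85}

def compute_risk_level(scores: dict) -> dict:
    """Hypothetical sketch: derive an evolink_summary-like dict from scores."""
    # Categories at or above their medium threshold count as violations.
    violations = [c for c, s in scores.items() if s >= MEDIUM.get(c, 1.0)]
    # Any category at or above its high threshold sets risk_level = high.
    if any(s >= HIGH.get(c, 1.0) for c, s in scores.items()):
        level = "high"
    elif violations:
        level = "medium"
    else:
        level = "low"
    max_category = max(scores, key=scores.get)
    return {"risk_level": level, "flagged": level == "high",
            "violations": violations, "max_category": max_category,
            "max_score": scores[max_category]}
```

Note how the red-line sexual/minors category flags at a far lower score (0.20) than a relaxed category like harassment (0.85).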
Content Moderation API for Text and Images
EvoLink Moderation 1.0 gives developers an OpenAI-compatible moderation API for user-generated content, AI outputs, chatbot messages, and image uploads. Each request returns 13 category scores plus a deterministic EvoLink summary with risk_level, violations, and the dominant category.

What can you build with EvoLink Moderation 1.0?
Text Moderation API for UGC
Filter offensive comments, posts, profile text, and chat messages on social platforms, forums, and community apps. Catch harassment, hate speech, illicit requests, and explicit content before it reaches your users.

AI Output Guardrails
Wrap your chatbot, copilot, or generative pipeline with a moderation API call. Run prompts and outputs through EvoLink Moderation before delivery to block policy-violating responses with predictable latency.

Image Moderation API for Uploads
Moderate image uploads and text-plus-image requests through the same synchronous endpoint. Use category scores and the risk_level summary to send image content into allow, review, or block workflows.
Why teams choose EvoLink Moderation 1.0
EvoLink Moderation 1.0 is a production-ready multimodal safety layer with deterministic risk levels, multilingual support, and OpenAI-compatible plumbing.
13 Harm Categories
Detects harassment, hate, sexual, violence, self-harm, illicit, and minor-safety violations with per-category confidence scores.
Multimodal Input
Send text alone, a single image alone, or text plus one image in the same request. Image categories cover sexual, violence, and self-harm.
Deterministic Risk Level
Each response includes evolink_summary with low / medium / high risk_level, violations array, and the maximum-scoring category.
Predictable Pricing
Per-call billing at 1 credit per request. No token math, no streaming surprises — moderate as much as your budget allows.
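A text-plus-image request body (one image per request) might be assembled like this. The field names follow the OpenAI /v1/moderations multimodal input format, which this endpoint is described as compatible with; the URL is a placeholder.

```python
# Sketch of a multimodal moderation request body: one text item plus
# one image_url item, as described above (single image per request).
payload = {
    "model": "evolink-moderation-1.0",
    "input": [
        {"type": "text", "text": "Check this caption and picture."},
        {"type": "image_url",
         "image_url": {"url": "https://example.com/upload.png"}},  # placeholder
    ],
}
```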
How to integrate the moderation API
EvoLink Moderation is fully compatible with the OpenAI /v1/moderations endpoint. Change the base URL, pass model: evolink-moderation-1.0, and read evolink_summary for production decisions.
Step 1 — Authenticate
Create an EvoLink API key and call /v1/moderations with Bearer token authentication.
Step 2 — Send input
Pass model: evolink-moderation-1.0 and an input array containing a text item, an image_url item, or both. Single image per request.
Step 3 — Read evolink_summary
Use evolink_summary.risk_level (low/medium/high) and violations[] to drive your allow / review / block decision in one branch.
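The three steps above can be sketched as follows. The decision branch is runnable; the SDK calls are shown as comments so the sketch stays self-contained, the base URL is a placeholder, and how your client exposes the extra evolink_summary field may vary.

```python
# Step 3: map evolink_summary.risk_level to allow / review / block
# in a single branch.
def decide(summary: dict) -> str:
    level = summary.get("risk_level", "low")
    if level == "high":
        return "block"
    if level == "medium":
        return "review"
    return "allow"

# Steps 1-2 with the OpenAI SDK (assumed compatible, per the docs above):
#   from openai import OpenAI
#   client = OpenAI(base_url="https://api.evolink.example/v1",  # placeholder
#                   api_key="YOUR_EVOLINK_API_KEY")             # Step 1
#   resp = client.moderations.create(model="evolink-moderation-1.0",
#                                    input="text to check")     # Step 2
#   action = decide(<evolink_summary from resp.results[0]>)     # Step 3
```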
Core EvoLink Moderation 1.0 capabilities
EvoLink-tuned thresholds, calibrated for production use
Production-Grade Safety Engine
A frontier multimodal safety classifier, calibrated by EvoLink with per-category thresholds tuned for real-world content moderation workloads.
Per-Category Thresholds
Strict thresholds on sexual/minors and self-harm, relaxed thresholds on harassment and hate to reduce false positives.
EvoLink Summary Field
Single evolink_summary object with risk_level, flagged, violations, max_score, and max_category — alongside the standard moderation fields for power users.
Text + Single Image
Multimodal evaluation in one synchronous call. Image inputs cover sexual, violence, and self-harm categories.
OpenAI SDK Compatible
Works with the OpenAI SDK out of the box. Switch base_url, set model to evolink-moderation-1.0 — no code rewrite.
Multilingual Detection
Strong multilingual coverage across English, Chinese, Spanish, Japanese, and 40+ languages.
EvoLink Moderation 1.0 FAQs
Everything you need to know about the product and billing.