
EvoLink Moderation 1.0 API

A production-ready content moderation API for teams that need text and image safety checks behind an OpenAI-compatible /v1/moderations endpoint. Classify harmful content across 13 categories, then use evolink_summary.risk_level to route each request to allow, review, or block.

Highest stability with guaranteed 99.9% uptime. Recommended for production environments.

Use the same API endpoint for all versions. Only the model parameter differs.

Pricing

Per-call billing, quoted in USD or credits per call.

Billing rules

  • Flat 1 credit per call — no token-based billing and no volume tiers.
  • Cost is independent of input length or whether an image is included.
  • Every response carries the standard moderation fields plus an evolink_summary block summarizing risk_level and violations.
  • Failed requests (4xx/5xx) are not billed.
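The response shape implied by these rules can be read like this. The example values are illustrative, and the exact payload layout is an assumption based on the fields named on this page (standard moderation fields plus evolink_summary):

```python
# Sketch of a moderation response: standard fields plus the evolink_summary
# block. Field names follow this page; the exact payload may differ.
sample_response = {
    "model": "evolink-moderation-1.0",
    "results": [
        {
            "flagged": True,
            "categories": {"violence": True, "harassment": False},      # truncated
            "category_scores": {"violence": 0.91, "harassment": 0.12},  # truncated
            "evolink_summary": {
                "risk_level": "high",        # low / medium / high
                "flagged": True,
                "violations": ["violence"],
                "max_score": 0.91,
                "max_category": "violence",
            },
        }
    ],
}

# Route on the summary rather than the 13 raw scores.
summary = sample_response["results"][0]["evolink_summary"]
```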

Risk-level thresholds

EvoLink applies per-category thresholds to compute risk_level. Any category at or above its high threshold sets risk_level = high and flagged = true.

Category                  Medium  High   Sensitivity
sexual/minors             0.05    0.20   Red-line — never tolerated
self-harm                 0.30    0.60   Strict — low false-negative bias
self-harm/intent          0.30    0.60   Strict — low false-negative bias
self-harm/instructions    0.30    0.60   Strict — low false-negative bias
violence/graphic          0.40    0.70   Strict — low false-negative bias
illicit/violent           0.40    0.70   Strict — low false-negative bias
sexual                    0.50    0.80   Standard
violence                  0.50    0.80   Standard
harassment/threatening    0.50    0.80   Standard
hate/threatening          0.50    0.80   Standard
harassment                0.60    0.85   Relaxed — fewer false positives
hate                      0.60    0.85   Relaxed — fewer false positives
illicit                   0.60    0.85   Relaxed — fewer false positives

Thresholds may evolve. The API is the source of truth for current production values.
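The table above can be sketched as a lookup plus one pass over the scores. The high rule is stated explicitly on this page; treating a score at or above the medium threshold as medium risk is an assumption, and production thresholds may differ:

```python
# Per-category (medium, high) thresholds from the table above.
# Illustrative only — the API is the source of truth for current values.
THRESHOLDS = {
    "sexual/minors":          (0.05, 0.20),
    "self-harm":              (0.30, 0.60),
    "self-harm/intent":       (0.30, 0.60),
    "self-harm/instructions": (0.30, 0.60),
    "violence/graphic":       (0.40, 0.70),
    "illicit/violent":        (0.40, 0.70),
    "sexual":                 (0.50, 0.80),
    "violence":               (0.50, 0.80),
    "harassment/threatening": (0.50, 0.80),
    "hate/threatening":       (0.50, 0.80),
    "harassment":             (0.60, 0.85),
    "hate":                   (0.60, 0.85),
    "illicit":                (0.60, 0.85),
}

def risk_level(scores: dict) -> str:
    """Any category at or above its high threshold yields 'high';
    otherwise any at or above medium yields 'medium' (assumed); else 'low'."""
    level = "low"
    for category, score in scores.items():
        medium, high = THRESHOLDS[category]
        if score >= high:
            return "high"
        if score >= medium:
            level = "medium"
    return level
```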

Content Moderation API for Text and Images

EvoLink Moderation 1.0 gives developers an OpenAI-compatible moderation API for user-generated content, AI outputs, chatbot messages, and image uploads. Each request returns 13 category scores plus a deterministic EvoLink summary with risk_level, violations, and the dominant category.


What can you build with EvoLink Moderation 1.0?

Text Moderation API for UGC

Filter offensive comments, posts, profile text, and chat messages on social platforms, forums, and community apps. Catch harassment, hate speech, illicit requests, and explicit content before it reaches your users.


AI Output Guardrails

Wrap your chatbot, copilot, or generative pipeline with a moderation API call. Run prompts and outputs through EvoLink Moderation before delivery to block policy-violating responses with predictable latency.


Image Moderation API for Uploads

Moderate image uploads and text-plus-image requests through the same synchronous endpoint. Use category scores and the risk_level summary to send image content into allow, review, or block workflows.


Why teams choose EvoLink Moderation 1.0

EvoLink Moderation 1.0 is a production-ready multimodal safety layer with deterministic risk levels, multilingual support, and OpenAI-compatible plumbing.

13 Harm Categories

Detects harassment, hate, sexual, violence, self-harm, illicit, and minor-safety violations with per-category confidence scores.

Multimodal Input

Send text alone, a single image alone, or text plus one image in the same request. Image categories cover sexual, violence, and self-harm.

Deterministic Risk Level

Each response includes evolink_summary with low / medium / high risk_level, violations array, and the maximum-scoring category.

Predictable Pricing

Per-call billing at 1 credit per request. No token math, no streaming surprises — moderate as much as your budget allows.

How to integrate the moderation API

EvoLink Moderation is fully compatible with the OpenAI /v1/moderations endpoint. Change the base URL, pass model: evolink-moderation-1.0, and read evolink_summary for production decisions.

Step 1 — Authenticate

Create an EvoLink API key and call /v1/moderations with Bearer token authentication.

Step 2 — Send input

Pass model: evolink-moderation-1.0 and an input array containing a text item, an image_url item, or both. Single image per request.

Step 3 — Read evolink_summary

Use evolink_summary.risk_level (low/medium/high) and violations[] to drive your allow / review / block decision in one branch.
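The three steps can be sketched as a request body plus a one-branch router. The input item shapes follow the page's description; the Bearer key is a placeholder, and the exact JSON schema is an assumption based on the OpenAI-compatible /v1/moderations contract:

```python
import json

# Step 2 — request body for the OpenAI-compatible /v1/moderations endpoint.
# A request may carry a text item, one image_url item, or both (single image).
payload = {
    "model": "evolink-moderation-1.0",
    "input": [
        {"type": "text", "text": "message to check"},
        {"type": "image_url", "image_url": {"url": "https://example.com/upload.png"}},
    ],
}

# Step 1 — Bearer token authentication (placeholder key).
headers = {
    "Authorization": "Bearer YOUR_EVOLINK_KEY",
    "Content-Type": "application/json",
}
body = json.dumps(payload)

# Step 3 — route on evolink_summary.risk_level in one branch.
def route(risk_level: str) -> str:
    return {"low": "allow", "medium": "review", "high": "block"}[risk_level]
```

The same payload works through the OpenAI SDK by switching base_url to your EvoLink endpoint.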

Core EvoLink Moderation 1.0 capabilities

EvoLink-tuned thresholds, calibrated for production use

Engine

Production-Grade Safety Engine

A frontier multimodal safety classifier, calibrated by EvoLink with per-category thresholds tuned for real-world content moderation workloads.

Calibration

Per-Category Thresholds

Strict thresholds on sexual/minors and self-harm, relaxed thresholds on harassment and hate to reduce false positives.

Schema

EvoLink Summary Field

Single evolink_summary object with risk_level, flagged, violations, max_score, and max_category — alongside the standard moderation fields for power users.

Multimodal

Text + Single Image

Multimodal evaluation in one synchronous call. Image inputs cover sexual, violence, and self-harm categories.

Compatibility

OpenAI SDK Compatible

Works with the OpenAI SDK out of the box. Switch base_url, set model to evolink-moderation-1.0 — no code rewrite.

Languages

Multilingual Detection

Strong multilingual coverage across English, Chinese, Spanish, Japanese, and 40+ languages.

EvoLink Moderation 1.0 FAQs

Everything you need to know about the product and billing.

What is EvoLink Moderation 1.0?
EvoLink Moderation 1.0 is a multimodal content safety classifier with calibrated risk-level thresholds. Each response includes the standard moderation fields plus an evolink_summary field with a calibrated risk_level (low/medium/high) and the categories that triggered violations — turning 13 raw scores into a single decision.

Which harm categories are detected?
Thirteen categories: harassment, harassment/threatening, hate, hate/threatening, illicit, illicit/violent, self-harm, self-harm/intent, self-harm/instructions, sexual, sexual/minors, violence, violence/graphic. Image inputs cover sexual, violence, self-harm, and graphic violence.

Can I moderate multiple images in one request?
No. Each request supports text plus a single image_url. To moderate multiple images, send concurrent requests — one per image — and aggregate the results in your application.

How are risk levels calculated?
EvoLink applies per-category thresholds tuned for production. sexual/minors and self-harm use strict cutoffs (high at 0.20 / 0.60), violence/graphic at 0.70, harassment/hate at 0.85. Any category at or above its high threshold returns risk_level = high and flagged = true.

How is usage billed?
Flat 1 credit (10,000 UC) per call across all user groups. Pricing is per-call, not per-token, so cost is fully predictable regardless of input size.

Is it compatible with the OpenAI SDK?
Yes. The endpoint accepts the same request schema as /v1/moderations. Set base_url to your EvoLink endpoint, model to evolink-moderation-1.0, and the OpenAI SDK works without modification — the evolink_summary field is added alongside the standard response.
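The one-request-per-image pattern described above can be sketched with a thread pool. moderate_image is a hypothetical stand-in for the real API call, and the aggregation rule (take the worst risk level) is one possible policy:

```python
from concurrent.futures import ThreadPoolExecutor

# Ordering used to pick the worst risk level across images.
ORDER = {"low": 0, "medium": 1, "high": 2}

def moderate_image(url: str) -> dict:
    """Stand-in for one /v1/moderations call carrying a single image_url.
    Swap the stubbed return for a real SDK/HTTP call in production."""
    return {"url": url, "risk_level": "low"}  # stubbed result

def moderate_images(urls):
    # One request per image, sent concurrently; caller aggregates the results.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(moderate_image, urls))

results = moderate_images(["https://example.com/a.png", "https://example.com/b.png"])
worst = max(results, key=lambda r: ORDER[r["risk_level"]])
```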