omni-moderation-latest Explained: Text and Image Moderation Guide

EvoLink Team · Product Team
April 29, 2026
8 min read
omni-moderation-latest is OpenAI's multimodal moderation model for detecting harmful content in text and images. It matters because it moved OpenAI moderation beyond text-only checks and gave developers a single model family for text and image safety workflows.

The short version:

  • OpenAI introduced omni-moderation-latest on September 26, 2024.
  • It is based on GPT-4o and supports both text and image inputs.
  • OpenAI says the model is free to use through the Moderation API.
  • Image support is category-specific, so not every moderation category works for image-only inputs.
  • Teams that want an OpenAI-compatible moderation endpoint inside EvoLink workflows can also evaluate EvoLink Moderation 1.0.

This guide explains what the model does, how it differs from older text moderation models, and how to think about production implementation.

What is omni-moderation-latest?

omni-moderation-latest is OpenAI's moderation model for identifying potentially harmful content. OpenAI's model page describes it as a free moderation model that accepts text and image inputs and returns text output through the Moderation endpoint.

The model is not a general-purpose image generator or chat model. It is a classifier. You send user content to the Moderation API, and the response tells you which categories may be present and how strongly the model scored them.
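As a sketch, a single result in the Moderation response follows this shape. The field names (`flagged`, `categories`, `category_scores`) come from OpenAI's documented response format; the specific values and the trimmed category list here are illustrative only:

```python
# Illustrative shape of one Moderation API result (values are made up).
# Real responses include the full category list; only a few are shown here.
example_result = {
    "flagged": True,
    "categories": {
        "violence": True,
        "self-harm": False,
        "sexual": False,
    },
    "category_scores": {
        "violence": 0.91,
        "self-harm": 0.002,
        "sexual": 0.001,
    },
}

# A common first pass: collect the categories the model flagged.
flagged_categories = [
    name for name, hit in example_result["categories"].items() if hit
]
print(flagged_categories)  # ['violence']
```

The boolean `categories` flags give you OpenAI's own thresholded decision per category, while `category_scores` lets you apply stricter or looser thresholds of your own.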

Why OpenAI replaced text-only moderation with multimodal moderation

Before omni-moderation-latest, many moderation systems treated text and images as separate problems. That created awkward production workflows:
  • one moderation call for a user comment
  • another service for image uploads
  • separate category definitions
  • separate response formats
  • separate thresholds and review rules

OpenAI's September 2024 announcement positioned the new model as a more capable multimodal classifier for evaluating harmful text and images. OpenAI also said the model improved performance, especially for non-English content.

The practical result is simple: applications that accept both captions and images can use one moderation model instead of stitching together a text classifier and a separate image safety service.

[Figure: omni-moderation-latest multimodal capabilities comparison]

What inputs does omni-moderation-latest support?

OpenAI's model page lists:

| Modality | Support          |
|----------|------------------|
| Text     | Input and output |
| Image    | Input only       |
| Audio    | Not supported    |
| Video    | Not supported    |

That means omni-moderation-latest can evaluate text, images, or text-plus-image requests, but it does not moderate audio or video directly.

For teams building user-generated content workflows, this maps well to common cases:

  • comments and chat messages
  • profile text
  • image uploads
  • listings with captions and photos
  • AI-generated text or generated images before publication

Which categories work for images?

This is the detail many teams miss.

OpenAI's announcement says multimodal harm classification was supported for these image-related categories at launch:

  • violence and violence/graphic
  • self-harm, self-harm/intent, and self-harm/instructions
  • sexual content, but not sexual/minors

OpenAI also states that the remaining categories were text-only at the time of the announcement, with plans to expand multimodal support.

In practice, that means image moderation is useful, but it is not the same as saying every text moderation category works equally well for images. If your product needs to detect hate symbols in memes, policy-violating text embedded inside images, brand safety issues, spam overlays, or marketplace-specific visual rules, you may still need additional checks.
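Based on the launch-time list above, one way to avoid acting on categories that do not apply to image inputs is to filter flags against that set. This is a minimal sketch; the category identifiers below are assumed to match the API's naming, and the set reflects the announcement, not necessarily current support:

```python
# Categories OpenAI's September 2024 announcement listed as supporting
# image inputs at launch (identifiers assumed to match the API's naming).
IMAGE_SUPPORTED = {
    "violence",
    "violence/graphic",
    "self-harm",
    "self-harm/intent",
    "self-harm/instructions",
    "sexual",
}

def image_relevant_flags(categories: dict) -> dict:
    """Keep only raised flags from categories that apply to image inputs."""
    return {
        name: hit
        for name, hit in categories.items()
        if hit and name in IMAGE_SUPPORTED
    }

# Example: one image-supported category and one text-only category flagged.
flags = image_relevant_flags(
    {"violence": True, "harassment": True, "sexual": False}
)
print(flags)  # {'violence': True}
```

Check OpenAI's current documentation before hard-coding a set like this, since OpenAI has said it plans to expand multimodal category support over time.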

omni-moderation-latest vs text-moderation-latest

| Area | text-moderation-latest | omni-moderation-latest |
|------|------------------------|------------------------|
| Primary input | Text | Text and images |
| Image moderation | Not the main use case | Supported for selected categories |
| Newer harm categories | More limited | Adds illicit and illicit/violent as text-only categories, according to OpenAI's announcement |
| Multilingual performance | Older baseline | OpenAI reported stronger multilingual performance in its internal evaluation |
| Best fit | Legacy text-only integrations | Newer text and image moderation workflows |

If you already use the OpenAI Moderation API, the main reason to evaluate omni-moderation-latest is broader input support and newer category behavior.

How to use omni-moderation-latest

A basic text moderation call looks like this:

from openai import OpenAI

client = OpenAI()

response = client.moderations.create(
    model="omni-moderation-latest",
    input="User-submitted text goes here"
)

result = response.results[0]

if result.flagged:
    print(result.categories)
    print(result.category_scores)

For image moderation, use an image input:

from openai import OpenAI

client = OpenAI()

response = client.moderations.create(
    model="omni-moderation-latest",
    input=[
        {
            "type": "image_url",
            "image_url": {
                "url": "https://example.com/user-upload.jpg"
            }
        }
    ]
)

result = response.results[0]
print(result.flagged)
print(result.category_scores)

For text-plus-image moderation:

response = client.moderations.create(
    model="omni-moderation-latest",
    input=[
        {"type": "text", "text": "Caption or user message"},
        {
            "type": "image_url",
            "image_url": {
                "url": "https://example.com/user-upload.jpg"
            }
        }
    ]
)

Always test these examples against the current OpenAI API docs before shipping, because SDK request shapes can evolve over time.

Production patterns for moderation workflows

The API call is only one part of the moderation system. In production, the bigger question is what your application does with the result — typically mapping scores into allow, review, or block decisions, tracking false positives, and logging reviewer overrides.

With omni-moderation-latest, you build that mapping yourself from category flags and scores. Your application decides which categories are hard blocks, which require review, and which are signals only.
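A minimal sketch of that mapping is shown below. The thresholds, the hard-block set, and the fail-closed ordering are illustrative choices, not OpenAI recommendations; tune them against your own review data:

```python
# Illustrative allow/review/block routing built from moderation results.
# All thresholds are placeholders; calibrate against labeled review data.
HARD_BLOCK = {"sexual/minors"}   # always block if flagged, never just review
BLOCK_THRESHOLD = 0.9            # high-confidence harm: block outright
REVIEW_THRESHOLD = 0.4           # uncertain: route to human review

def route(flagged: bool, categories: dict, scores: dict) -> str:
    """Map one moderation result to an allow / review / block decision."""
    if any(categories.get(name) for name in HARD_BLOCK):
        return "block"
    top = max(scores.values(), default=0.0)
    if flagged and top >= BLOCK_THRESHOLD:
        return "block"
    if top >= REVIEW_THRESHOLD:
        return "review"
    return "allow"

print(route(True, {"violence": True}, {"violence": 0.95}))  # block
print(route(False, {}, {"harassment": 0.55}))               # review
print(route(False, {}, {"violence": 0.01}))                 # allow
```

Keeping this logic in one function makes it easy to log every decision alongside its inputs, which is what lets you measure false positives and reviewer overrides later.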
For a complete end-to-end implementation pattern — including input moderation, output moderation, tool-call validation, and risk routing — see our chatbot content moderation guide.

When omni-moderation-latest is a good fit

Use omni-moderation-latest when:
  • you already use OpenAI directly
  • your app needs OpenAI's documented moderation categories
  • your workflow is text-first with some image moderation needs
  • you are comfortable implementing your own threshold and review logic
  • you want a free moderation model inside the OpenAI API ecosystem

For many OpenAI-native products, that is a strong starting point.

When to consider an OpenAI-compatible alternative

An alternative does not need to beat "free" on raw endpoint price. It needs to reduce operational complexity — especially around the allow / review / block mapping that omni-moderation-latest leaves to your application code.
EvoLink Moderation 1.0 is an OpenAI-compatible moderation endpoint that returns a built-in evolink_summary object with risk_level (low / medium / high), so your application can route content without writing category-score aggregation. It supports text-only, image-only, and text-plus-image inputs.
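As a hedged sketch of what that routing could look like: the `evolink_summary` object and its `risk_level` values (low / medium / high) are described above, but the surrounding response shape and the fail-closed default used here are assumptions, so verify them against the EvoLink Moderation API docs:

```python
# Hypothetical routing on EvoLink's evolink_summary.risk_level field.
# The summary object and low/medium/high levels come from EvoLink's docs;
# the exact response envelope shown here is an assumption.
ACTIONS = {"low": "allow", "medium": "review", "high": "block"}

def route_evolink(response: dict) -> str:
    """Map an EvoLink moderation response to an allow/review/block action."""
    level = response.get("evolink_summary", {}).get("risk_level", "high")
    # Unknown or missing levels fail closed to "block".
    return ACTIONS.get(level, "block")

print(route_evolink({"evolink_summary": {"risk_level": "low"}}))   # allow
print(route_evolink({"evolink_summary": {"risk_level": "high"}}))  # block
```

The design point is that the aggregation step lives in the provider's summary rather than in your application code, which is the trade-off the comparison above is describing.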
For a detailed pricing and workflow comparison between OpenAI and EvoLink, see OpenAI Moderation API Pricing: Free, Rate Limits, and Alternatives. For request examples and limits, see the EvoLink Moderation API docs.
| Choose this | If your priority is... |
|-------------|------------------------|
| OpenAI omni-moderation-latest | Free moderation inside a direct OpenAI API workflow |
| EvoLink Moderation 1.0 | OpenAI-compatible moderation inside EvoLink with text-plus-image support and a simplified risk summary |
| Multi-layer moderation | Custom policy enforcement, brand rules, appeals, human review, or compliance workflows beyond one API |

There is no universal winner. OpenAI's model is a strong fit for OpenAI-native applications. EvoLink is a strong fit when your team wants the moderation layer to sit beside other EvoLink API calls and return a production-oriented risk summary.

FAQ

Is omni-moderation-latest free?

OpenAI describes its moderation models as free to use, and OpenAI's announcement says the new moderation model is available at no cost through the Moderation API. Rate limits depend on your usage tier.

Does omni-moderation-latest support images?

Yes. OpenAI's model page lists image as an input modality. However, OpenAI's announcement makes clear that image support is category-specific, so not every moderation category applies to image inputs.

Does omni-moderation-latest support video or audio?

No. OpenAI's model page lists audio and video as not supported for this model.

Is EvoLink Moderation 1.0 the same as omni-moderation-latest?

No. EvoLink Moderation 1.0 is a separate EvoLink moderation service with an OpenAI-compatible API interface. It is designed for teams that want text and image moderation inside EvoLink workflows.

Should I switch from omni-moderation-latest to EvoLink Moderation 1.0?

Not automatically. If OpenAI's free moderation endpoint fits your workflow, use it. Evaluate EvoLink if you want an OpenAI-compatible moderation endpoint with evolink_summary and risk_level, flat per-call pricing, and integration with other EvoLink APIs.