A Practical Migration Guide: From Azure Content Moderator to Azure AI Content Safety

This article provides a hands-on, step-by-step guide for migrating from the deprecated Azure Content Moderator to the new Azure AI Content Safety service, covering the entire process from provisioning and authentication to text/image analysis, production hardening, and policy tuning.

If you previously used Azure Content Moderator, you've likely seen the deprecation notices. Microsoft recommends moving to Azure AI Content Safety, which offers category-based harm detection with severity scores. In this hands-on guide, you'll provision the service, authenticate securely, analyze text and images, apply blocklists, set policy thresholds, harden for production (retries, quotas, privacy), and execute a low-risk migration.

  • Difficulty: Intermediate (cloud + API integration)
  • Estimated time: 60–90 minutes for a basic integration; 1–2 weeks to tune policies and complete migration
  • Prerequisites:
      • An Azure subscription and permission to create Azure AI services
      • Basic CLI familiarity and a language runtime (Node.js or Python)
      • Ability to send HTTP requests from your app or scripts

Note on deprecation: Microsoft announced that Azure Content Moderator is deprecated and will retire on March 15, 2027; see Microsoft’s deprecation notice in the Content Moderator overview (Microsoft Learn, 2024). The recommended path forward is Azure AI Content Safety, documented in the Azure AI Content Safety overview (Microsoft Learn, 2024/2025) and the Content Safety FAQ (Microsoft Learn, 2024/2025).

What you’ll build

By the end, you will:

  • Provision an Azure AI Content Safety resource and retrieve its endpoint and keys
  • Authenticate using API keys (and see how to use Microsoft Entra ID/Managed Identity)
  • Call text and image analysis endpoints and interpret severity outputs
  • Create and apply custom blocklists (e.g., banned terms)
  • Implement per-category policy thresholds and human-review fallbacks
  • Handle throttling, errors, and logging for production
  • Run a dual system (legacy Content Moderator + Content Safety) to migrate safely

Step 1 — Provision Azure AI Content Safety

You can create the resource in the Azure Portal or via Azure CLI.

Option A: Azure Portal

  1. Go to the Azure Portal and create a new “Azure AI services” resource (Content Safety kind). Choose your subscription, resource group, region, and name.
  2. After deployment, open the resource and copy the Endpoint and Keys from “Keys and Endpoint.” You’ll use them soon.

Option B: Azure CLI

  • Create a resource group:

    az group create --name rg-content-safety --location eastus

  • Create the Content Safety resource (S0 is common for production trials):

    az cognitiveservices account create \
      --name csafety-demo \
      --resource-group rg-content-safety \
      --kind ContentSafety \
      --sku S0 \
      --location eastus \
      --yes

  • Retrieve the endpoint and keys:

    az cognitiveservices account show \
      --name csafety-demo \
      --resource-group rg-content-safety \
      --query "properties.endpoint" -o tsv

    az cognitiveservices account keys list \
      --name csafety-demo \
      --resource-group rg-content-safety

Checkpoint: You should have an endpoint like https://<your-resource>.cognitiveservices.azure.com and an access key.

Tips:

  • Pick a region close to your users for lower latency and check region availability in the service overview (Microsoft Learn).
  • Decide early if you’ll require private network access (Private Link) and VNet integration to meet compliance needs.

Step 2 — Authenticate (API key or Entra ID)

For quick starts, an API key is fine. For production, prefer Microsoft Entra ID with Managed Identity where possible.

API key headers:

  • Ocp-Apim-Subscription-Key: your resource key
  • Content-Type: application/json for JSON payloads

Entra ID / Managed Identity:

  • Acquire an access token for the resource scope https://cognitiveservices.azure.com/.default and include it as a Bearer token in Authorization.
  • Ensure the identity (app registration, service principal, or managed identity) has appropriate RBAC on your Content Safety resource (e.g., Cognitive Services User).
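
A minimal sketch of that token flow in Python, assuming the azure-identity package (DefaultAzureCredential resolves a managed identity, environment credentials, or your local az login); the endpoint placeholder is yours to replace:

from azure.identity import DefaultAzureCredential
import requests

# Acquire a token for the Cognitive Services scope and send it as a Bearer token.
credential = DefaultAzureCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # replace with your endpoint
url = f"{endpoint}/contentsafety/text:analyze?api-version=2024-09-01"
headers = {
    "Authorization": f"Bearer {token.token}",
    "Content-Type": "application/json",
}
resp = requests.post(url, headers=headers, json={"text": "hello", "categories": ["Hate", "SelfHarm", "Sexual", "Violence"]}, timeout=15)
print(resp.status_code, resp.json())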

Quick test with curl (API key):

curl -i "https://<your-resource>.cognitiveservices.azure.com/contentsafety/text:analyze?api-version=2024-09-01" \

-H "Content-Type: application/json" \

-H "Ocp-Apim-Subscription-Key: <YOUR_KEY>" \

-d '{"text": "hello", "categories": ["Hate","SelfHarm","Sexual","Violence"]}'

Expect a 200 response with a categoriesAnalysis array.

For broader context on auth and SDK options, see the Azure AI Content Safety quickstart for text (Microsoft Learn, 2024/2025).

Step 3 — Analyze text (REST + code samples)

The text endpoint returns category-specific severity scores so you can enforce different actions by category.

Endpoint

  • POST {endpoint}/contentsafety/text:analyze?api-version=2024-09-01

Request body (typical):

{
  "text": "I hate you",
  "categories": ["Hate","SelfHarm","Sexual","Violence"],
  "blocklistNames": ["myTerms"],
  "haltOnBlocklistHit": false
}

Example in Node.js (fetch)

import fetch from "node-fetch";

const endpoint = process.env.CS_ENDPOINT; // e.g., https://csafety-demo.cognitiveservices.azure.com
const key = process.env.CS_KEY;

async function analyzeText(input) {
  const url = `${endpoint}/contentsafety/text:analyze?api-version=2024-09-01`;
  const res = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Ocp-Apim-Subscription-Key": key,
    },
    body: JSON.stringify({
      text: input,
      categories: ["Hate", "SelfHarm", "Sexual", "Violence"],
    }),
  });
  if (!res.ok) {
    throw new Error(`HTTP ${res.status}: ${await res.text()}`);
  }
  return res.json();
}

(async () => {
  const result = await analyzeText("I hate you.");
  console.log(JSON.stringify(result, null, 2));
})();

Example in Python (requests)

import os
import requests

endpoint = os.environ["CS_ENDPOINT"]  # e.g., https://csafety-demo.cognitiveservices.azure.com
key = os.environ["CS_KEY"]

url = f"{endpoint}/contentsafety/text:analyze?api-version=2024-09-01"
headers = {
    "Content-Type": "application/json",
    "Ocp-Apim-Subscription-Key": key,
}
payload = {
    "text": "I hate you.",
    "categories": ["Hate", "SelfHarm", "Sexual", "Violence"],
}

resp = requests.post(url, headers=headers, json=payload, timeout=15)
resp.raise_for_status()
print(resp.json())

Interpreting results

  • The response includes categoriesAnalysis, e.g.:

{
  "categoriesAnalysis": [
    { "category": "Hate", "severity": 3 },
    { "category": "SelfHarm", "severity": 0 },
    { "category": "Sexual", "severity": 0 },
    { "category": "Violence", "severity": 0 }
  ]
}

Turn severities into decisions

function decide(policy, categoriesAnalysis) {
  // policy: { Hate: {blockAt: 4, reviewAt: 3}, Sexual: {blockAt: 4, reviewAt: 3}, ... }
  const outcome = { action: "allow", reasons: [] };
  for (const { category, severity } of categoriesAnalysis) {
    const p = policy[category];
    if (!p) continue;
    if (severity >= p.blockAt) {
      outcome.action = "block";
      outcome.reasons.push(`${category} severity ${severity} >= blockAt ${p.blockAt}`);
      break; // block overrides
    }
    if (severity >= p.reviewAt && outcome.action !== "block") {
      outcome.action = "review";
      outcome.reasons.push(`${category} severity ${severity} >= reviewAt ${p.reviewAt}`);
    }
  }
  return outcome;
}

Checkpoint: Try inputs that should trigger different categories (hate terms, sexual content hints) and confirm your decision function behaves as expected.

Step 4 — Analyze images

Endpoint

  • POST {endpoint}/contentsafety/image:analyze?api-version=2024-09-01

You can send images as base64 in JSON or as binary via SDKs; REST with JSON is simplest for demos.

Minimal curl example (base64 JSON)

IMG64=$(base64 -w 0 sample.jpg)

curl -s "https://<your-resource>.cognitiveservices.azure.com/contentsafety/image:analyze?api-version=2024-09-01" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: <YOUR_KEY>" \
  -d "{\"image\": {\"content\": \"$IMG64\"}, \"categories\": [\"Hate\",\"SelfHarm\",\"Sexual\",\"Violence\"]}"

Node.js example (read file, base64)

import fs from "fs";
import fetch from "node-fetch";

const endpoint = process.env.CS_ENDPOINT;
const key = process.env.CS_KEY;

async function analyzeImage(path) {
  const content = fs.readFileSync(path).toString("base64");
  const res = await fetch(`${endpoint}/contentsafety/image:analyze?api-version=2024-09-01`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Ocp-Apim-Subscription-Key": key,
    },
    body: JSON.stringify({ image: { content }, categories: ["Sexual", "Violence", "Hate", "SelfHarm"] }),
  });
  if (!res.ok) throw new Error(`${res.status} ${await res.text()}`);
  return res.json();
}

analyzeImage("sample.jpg").then(r => console.log(JSON.stringify(r, null, 2)));

Formats and limits: Supported image formats commonly include JPEG, PNG, GIF, BMP, TIFF, and WEBP, with size and dimension constraints. For the latest specifics and a working example, follow the image quickstart (Microsoft Learn, 2024/2025).
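
If you want to fail fast before calling the API, a small pre-check can reject files that are clearly out of bounds. The extension list and size cap below are illustrative assumptions, not official limits; confirm current values in the quickstart:

import os
from pathlib import Path

# Placeholder constraints for illustration only; verify the current limits in the quickstart.
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".bmp", ".tiff", ".webp"}
MAX_BYTES = 4 * 1024 * 1024  # assumed size cap, not an official figure

def precheck_image(path: str) -> None:
    ext = Path(path).suffix.lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"Unsupported image format: {ext}")
    size = os.path.getsize(path)
    if size > MAX_BYTES:
        raise ValueError(f"Image too large: {size} bytes (assumed limit {MAX_BYTES})")

precheck_image("sample.jpg")  # raises if the file fails either check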

Checkpoint: Use a benign image and a borderline one to verify different categories and severities.

Step 5 — Create and apply text blocklists

Blocklists let you enforce explicit terms beyond model detection.

Create or update a blocklist

# Create a blocklist named "myTerms"
curl -X PUT "https://<res>.cognitiveservices.azure.com/contentsafety/text/blocklists/myTerms?api-version=2024-09-01" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: <KEY>" \
  -d '{}'

# Add items
curl -X POST "https://<res>.cognitiveservices.azure.com/contentsafety/text/blocklists/myTerms/blocklistItems:batchCreateOrUpdate?api-version=2024-09-01" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: <KEY>" \
  -d '{
    "blocklistItems": [
      {"text": "bannedterm1"},
      {"text": "banned phrase"}
    ]
  }'

Apply the blocklist in text analyze

curl -s "https://<res>.cognitiveservices.azure.com/contentsafety/text:analyze?api-version=2024-09-01" \

-H "Content-Type: application/json" \

-H "Ocp-Apim-Subscription-Key: <KEY>" \

-d '{

"text": "this contains a bannedterm1",

"blocklistNames": ["myTerms"],

"haltOnBlocklistHit": true

}'

Checkpoint: With haltOnBlocklistHit=true, you should see the request short-circuit with a blocklist hit indicated in the response.

Governance tip: Assign an owner for blocklists, review additions via change management, and test effects in staging before production.

For details and SDK patterns, see the blocklist coverage in the text quickstart (Microsoft Learn, 2024/2025).

Step 6 — Tune per-category thresholds and human review

The core categories are Hate, SelfHarm, Sexual, and Violence, each returning a severity score. Start with conservative thresholds, then calibrate using real samples.

Suggested starting policy (adjust to your norms):

  • Sexual: review at ≥3, block at ≥4
  • Violence: review at ≥3, block at ≥5
  • Hate: review at ≥2, block at ≥4
  • SelfHarm: review at ≥2, block at ≥3 (and escalate to safety team)

Run experiments:

  • Log every categoriesAnalysis and your decision outcome
  • Sample borderline cases for weekly human review
  • A/B test thresholds on a subset of traffic and measure false positive/negative rates

For governance frameworks and practical risk controls around threshold design, see this primer on Content Risk Control (DeepCleer blog).

Step 7 — Production hardening (retries, quotas, privacy, and ops)

Rate limits and 429 handling

  • You may receive HTTP 429 Too Many Requests when exceeding limits. Honor the Retry-After header and implement exponential backoff with jitter. Microsoft outlines limits and usage practices in the Content Safety FAQ (Microsoft Learn, 2024/2025).

Node.js retry helper (simplified)

async function withBackoff(fn, { retries = 5, baseMs = 200 } = {}) {
  let attempt = 0;
  while (true) {
    try {
      return await fn();
    } catch (err) {
      attempt++;
      if (attempt > retries) throw err;
      // Assumes fn attaches the Retry-After value (in seconds) to the thrown error on 429s.
      const retryAfter = parseInt(err.retryAfter || 0, 10);
      const delay = retryAfter > 0 ? retryAfter * 1000 : baseMs * Math.pow(2, attempt) + Math.random() * 100;
      await new Promise(r => setTimeout(r, delay));
    }
  }
}

Structured logging

  • Log input hashes (not raw content where privacy-sensitive), request IDs, category severities, blocklist hits, decisions, and latency. Ship logs to your SIEM. This supports audits and incident response.
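
As a minimal sketch of such a record, hashing the input rather than logging it verbatim (field names here are illustrative, not a required schema):

import hashlib
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("moderation")

def log_decision(text, request_id, categories_analysis, blocklist_hits, decision, latency_ms):
    record = {
        "ts": time.time(),
        "requestId": request_id,
        # SHA-256 of the input lets you correlate repeats without storing raw content
        "inputSha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "severities": {c["category"]: c["severity"] for c in categories_analysis},
        "blocklistHits": blocklist_hits,
        "decision": decision,
        "latencyMs": latency_ms,
    }
    logger.info(json.dumps(record))

log_decision("I hate you.", "req-123", [{"category": "Hate", "severity": 3}], [], "review", 87)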

Security and privacy

  • Use Managed Identity where possible; rotate API keys if used.
  • Consider Private Endpoints to keep traffic on your VNet.
  • Minimize PII in logs and requests; align with your regional data residency requirements.

Cost control

  • Deduplicate inputs; avoid re-moderating unchanged content (see the sketch after this list).
  • Batch low-priority moderation during off-peak hours.
  • Monitor spend; review the Azure AI Content Safety pricing page (Microsoft Learn) and model volumes in the Azure Pricing Calculator (Microsoft).
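
A rough sketch of the deduplication idea from the first bullet above: cache decisions by content hash so unchanged inputs skip the API call (an in-memory dict here; a shared cache such as Redis would be more realistic in production):

import hashlib

_decision_cache = {}  # content hash -> decision; swap for a shared cache in production

def moderate_once(text, analyze_fn):
    key = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if key in _decision_cache:
        return _decision_cache[key]   # cache hit: no API call, no extra cost
    decision = analyze_fn(text)       # analyze_fn wraps your text:analyze call
    _decision_cache[key] = decision
    return decision

# Example: the second call returns the cached decision without re-moderating.
result1 = moderate_once("same comment", lambda t: {"action": "allow"})
result2 = moderate_once("same comment", lambda t: {"action": "allow"})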

Step 8 — Migrate from Azure Content Moderator with minimal downtime

Context: Azure Content Moderator’s binary-style flags (e.g., “adult,” “racy,” certain text categories) differ from Content Safety’s category + severity approach. There is no official one-to-one mapping; expect to calibrate. Microsoft confirms the deprecation and migration recommendation in the Content Moderator overview (Microsoft Learn, 2024) and the Content Safety overview (Microsoft Learn, 2024/2025).

A pragmatic playbook:

  1. Inventory: Document which endpoints you use (text, image), what flags drive decisions today, custom term lists, thresholds, and volumes.
  2. Provision & wire up: Create Content Safety in your target region and add calls for text:analyze and image:analyze into your codebase behind a feature flag.
  3. Recreate term lists: Port term lists into Content Safety blocklists. Decide if haltOnBlocklistHit should short-circuit.
  4. Map policy: Translate allow/deny rules into per-category severity thresholds. Start conservative (higher block thresholds) to avoid over-blocking.
  5. Dual-run: For 2–4 weeks, run both systems on a subset (or all) of traffic. Log legacy flags and new severities side by side (see the sketch after this list).
  6. Analyze: Measure disagreements and review samples to tune thresholds. Adjust blocklists and policies weekly.
  7. Cutover: Flip the feature flag so Content Safety is authoritative. Keep legacy ready for quick rollback for a short window.
  8. Retire: Turn off Content Moderator calls and finalize documentation.
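
A minimal sketch of the dual-run logging in step 5, assuming legacy_moderate and analyze_text are your own wrappers around the two services, decide is a Python port of the Step 3 decision function, and the legacy result exposes a blocked flag:

import json
import logging

logger = logging.getLogger("dual-run")

def dual_run(text, legacy_moderate, analyze_text, decide, policy):
    legacy = legacy_moderate(text)   # existing Content Moderator wrapper (placeholder)
    cs = analyze_text(text)          # Content Safety wrapper from Step 3 (placeholder)
    new_decision = decide(policy, cs["categoriesAnalysis"])
    record = {
        "legacyFlags": legacy,
        "newSeverities": {c["category"]: c["severity"] for c in cs["categoriesAnalysis"]},
        "newDecision": new_decision,
        # Flag cases where the two systems disagree so reviewers can sample them
        "disagree": bool(legacy.get("blocked")) != (new_decision["action"] == "block"),
    }
    logger.info(json.dumps(record))
    return record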

Operational notes:

  • Communicate changes to compliance and moderation teams early.
  • Update runbooks and dashboards to include categoriesAnalysis fields.
  • Create unit tests based on real samples so future model/version changes don't surprise you (a minimal example follows below).
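
One way to set up those tests: keep anonymized real samples with their recorded severities and expected outcomes, and assert your decision logic against them. A pytest-style sketch with an inline Python port of the Step 3 decision function:

def decide(policy, categories_analysis):
    # Python port of the decision logic from Step 3
    outcome = {"action": "allow", "reasons": []}
    for item in categories_analysis:
        p = policy.get(item["category"])
        if not p:
            continue
        if item["severity"] >= p["blockAt"]:
            return {"action": "block", "reasons": [f"{item['category']} severity {item['severity']}"]}
        if item["severity"] >= p["reviewAt"]:
            outcome["action"] = "review"
            outcome["reasons"].append(f"{item['category']} severity {item['severity']}")
    return outcome

POLICY = {"Hate": {"reviewAt": 2, "blockAt": 4}, "SelfHarm": {"reviewAt": 2, "blockAt": 3}}

def test_hate_severity_4_blocks():
    assert decide(POLICY, [{"category": "Hate", "severity": 4}])["action"] == "block"

def test_selfharm_severity_2_goes_to_review():
    assert decide(POLICY, [{"category": "SelfHarm", "severity": 2}])["action"] == "review"

def test_benign_content_allowed():
    assert decide(POLICY, [{"category": "Hate", "severity": 0}])["action"] == "allow"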

If you plan to expand into multi-modal GenAI or cross-platform moderation beyond this Azure scope, this overview of a Generative AI Moderation Solution (DeepCleer) can provide conceptual guardrails for policy extensibility.

Step 9 — Troubleshooting quick reference

Common HTTP errors and fixes

  • 401 Unauthorized: Missing/invalid key or token. Verify headers, endpoint, and rotate keys if needed.
  • 403 Forbidden: Caller lacks RBAC on the Content Safety resource. Assign appropriate roles (e.g., Cognitive Services User) to your principal.
  • 404 Not Found: Wrong endpoint path, region, or api-version; missing blocklist. Double-check the base endpoint and the request path.
  • 415 Unsupported Media Type: Likely Content-Type mismatch or malformed body. Use application/json and verify JSON.
  • 429 Too Many Requests: You hit a rate limit. Respect Retry-After and back off with jitter.
  • 500/503: Transient service issues. Retry with backoff; check Azure Service Health.

Schema/version pitfalls

Verification tactics

Step 10 — Keep learning and next steps

Appendix — Minimal policy template (JSON)

{
  "Hate":     { "reviewAt": 2, "blockAt": 4 },
  "Sexual":   { "reviewAt": 3, "blockAt": 4 },
  "Violence": { "reviewAt": 3, "blockAt": 5 },
  "SelfHarm": { "reviewAt": 2, "blockAt": 3 }
}

Appendix — Safe request wrapper (Python)

import random, time, requests

def post_with_backoff(url, headers, json, retries=5, base=0.2):
    for i in range(retries + 1):
        resp = requests.post(url, headers=headers, json=json, timeout=15)
        if resp.status_code in (429, 500, 503):
            if i == retries:
                resp.raise_for_status()
            retry_after = resp.headers.get("Retry-After")
            if retry_after:
                delay = float(retry_after)
            else:
                delay = base * (2 ** i) + random.random() * 0.1
            time.sleep(delay)
            continue
        resp.raise_for_status()
        return resp.json()

References and documentation