A Practical Migration Guide: From Azure Content Moderator to Azure AI Content Safety
If you previously used Azure Content Moderator, you’ve likely seen the deprecation notices. Microsoft recommends moving to Azure AI Content Safety, which offers category-based harm detection with severity scores. In this hands-on guide, you’ll provision the service, authenticate securely, analyze text and images, apply blocklists, set policy thresholds, harden for production (retries, quotas, privacy), and execute a low-risk migration.
Note on deprecation: Microsoft announced that Azure Content Moderator is deprecated and will retire on March 15, 2027; see Microsoft’s deprecation notice in the Content Moderator overview (Microsoft Learn, 2024). The recommended path forward is Azure AI Content Safety, documented in the Azure AI Content Safety overview (Microsoft Learn, 2024/2025) and the Content Safety FAQ (Microsoft Learn, 2024/2025).
By the end, you will have provisioned a Content Safety resource, authenticated with a key or Entra ID, analyzed text and images, applied a custom blocklist, turned category severities into allow/review/block decisions, hardened the integration for production, and planned a low-risk cutover from Content Moderator.
You can create the resource in the Azure Portal or via Azure CLI.
Option A: Azure Portal. Search the portal for "Content Safety", pick a subscription, resource group, region, and pricing tier, then copy the endpoint and a key from the resource's Keys and Endpoint page once deployment finishes.
Option B: Azure CLI. Create the resource with az cognitiveservices account create (use --kind ContentSafety and an S0 or free F0 SKU), then read the endpoint with az cognitiveservices account show and the keys with az cognitiveservices account keys list.
Checkpoint: You should have an endpoint like https://<your-resource-name>.cognitiveservices.azure.com and an access key.
Tips: pick a region close to your users, and if you plan to use Microsoft Entra ID authentication, make sure the resource has a custom subdomain endpoint (the <name>.cognitiveservices.azure.com form above) rather than a shared regional endpoint.
For quick starts, an API key is fine. For production, prefer Microsoft Entra ID with Managed Identity where possible.
API key headers: send the key in the Ocp-Apim-Subscription-Key header alongside Content-Type: application/json, as in the curl example below.
Entra ID / Managed Identity: request a token for the https://cognitiveservices.azure.com/.default scope and send it as Authorization: Bearer <token>; the calling identity needs an RBAC role such as Cognitive Services User on the resource. A minimal sketch follows.
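Here is a minimal, hypothetical sketch using @azure/identity's DefaultAzureCredential with the same REST call; it assumes the CS_ENDPOINT environment variable from the examples below and that the credential already has the required role assignment:

import fetch from "node-fetch";
import { DefaultAzureCredential } from "@azure/identity";

const endpoint = process.env.CS_ENDPOINT; // custom subdomain endpoint
const credential = new DefaultAzureCredential(); // works with Managed Identity, az login, etc.

async function analyzeTextWithEntraId(input) {
  // Tokens for Azure AI services use the cognitiveservices.azure.com scope.
  const accessToken = await credential.getToken("https://cognitiveservices.azure.com/.default");
  const res = await fetch(`${endpoint}/contentsafety/text:analyze?api-version=2024-09-01`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${accessToken.token}`, // replaces Ocp-Apim-Subscription-Key
    },
    body: JSON.stringify({ text: input }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}: ${await res.text()}`);
  return res.json();
}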
Quick test with curl (API key):
curl -i "https://<your-resource>.cognitiveservices.azure.com/contentsafety/text:analyze?api-version=2024-09-01" \
-H "Content-Type: application/json" \
-H "Ocp-Apim-Subscription-Key: <YOUR_KEY>" \
-d '{"text": "hello", "categories": ["Hate","SelfHarm","Sexual","Violence"]}'
Expect a 200 response with a categoriesAnalysis array.
For broader context on auth and SDK options, see the Azure AI Content Safety quickstart for text (Microsoft Learn, 2024/2025).
The text endpoint returns category-specific severity scores so you can enforce different actions by category.
Endpoint: POST https://<your-resource>.cognitiveservices.azure.com/contentsafety/text:analyze?api-version=2024-09-01
Request body (typical):
{
  "text": "I hate you",
  "categories": ["Hate","SelfHarm","Sexual","Violence"],
  "blocklistNames": ["myTerms"],
  "haltOnBlocklistHit": false
}
Example in Node.js (fetch)
import fetch from "node-fetch";

const endpoint = process.env.CS_ENDPOINT; // e.g., https://csafety-demo.cognitiveservices.azure.com
const key = process.env.CS_KEY;

async function analyzeText(input) {
  const url = `${endpoint}/contentsafety/text:analyze?api-version=2024-09-01`;
  const res = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Ocp-Apim-Subscription-Key": key,
    },
    body: JSON.stringify({
      text: input,
      categories: ["Hate", "SelfHarm", "Sexual", "Violence"],
    }),
  });
  if (!res.ok) {
    throw new Error(`HTTP ${res.status}: ${await res.text()}`);
  }
  return res.json();
}

(async () => {
  const result = await analyzeText("I hate you.");
  console.log(JSON.stringify(result, null, 2));
})();
Example in Python (requests)
import os
import requests

endpoint = os.environ["CS_ENDPOINT"]  # e.g., https://csafety-demo.cognitiveservices.azure.com
key = os.environ["CS_KEY"]

url = f"{endpoint}/contentsafety/text:analyze?api-version=2024-09-01"
headers = {
    "Content-Type": "application/json",
    "Ocp-Apim-Subscription-Key": key,
}
payload = {
    "text": "I hate you.",
    "categories": ["Hate", "SelfHarm", "Sexual", "Violence"],
}

resp = requests.post(url, headers=headers, json=payload, timeout=15)
resp.raise_for_status()
print(resp.json())
Interpreting results
{
  "categoriesAnalysis": [
    { "category": "Hate", "severity": 3 },
    { "category": "SelfHarm", "severity": 0 },
    { "category": "Sexual", "severity": 0 },
    { "category": "Violence", "severity": 0 }
  ]
}
Turn severities into decisions
function decide(policy, categoriesAnalysis) {
  // policy: { Hate: {blockAt:4, reviewAt:3}, Sexual: {blockAt:4, reviewAt:3}, ... }
  const outcome = { action: "allow", reasons: [] };
  for (const { category, severity } of categoriesAnalysis) {
    const p = policy[category];
    if (!p) continue;
    if (severity >= p.blockAt) {
      outcome.action = "block";
      outcome.reasons.push(`${category} severity ${severity} >= blockAt ${p.blockAt}`);
      break; // block overrides
    }
    if (severity >= p.reviewAt && outcome.action !== "block") {
      outcome.action = "review";
      outcome.reasons.push(`${category} severity ${severity} >= reviewAt ${p.reviewAt}`);
    }
  }
  return outcome;
}
Checkpoint: Try inputs that should trigger different categories (hate terms, sexual content hints) and confirm your decision function behaves as expected.
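To tie the pieces together, here is a hypothetical wiring of the analyzeText() and decide() functions above; the policy values mirror the appendix template and are starting points to calibrate, not recommendations:

// Hypothetical end-to-end moderation call using the earlier analyzeText() and decide().
// Threshold values are illustrative; calibrate them against your own samples.
const policy = {
  Hate:     { reviewAt: 2, blockAt: 4 },
  Sexual:   { reviewAt: 3, blockAt: 4 },
  Violence: { reviewAt: 3, blockAt: 5 },
  SelfHarm: { reviewAt: 2, blockAt: 3 },
};

async function moderateText(input) {
  const result = await analyzeText(input);
  const outcome = decide(policy, result.categoriesAnalysis);
  return { outcome, raw: result };
}

moderateText("I hate you.").then(({ outcome }) => console.log(outcome));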
Endpoint: POST https://<your-resource>.cognitiveservices.azure.com/contentsafety/image:analyze?api-version=2024-09-01
You can send images as base64 in JSON or as binary via SDKs; REST with JSON is simplest for demos.
Minimal curl example (base64 JSON)
IMG64=$(base64 -w 0 sample.jpg)
curl -s "https://<your-resource>.cognitiveservices.azure.com/contentsafety/image:analyze?api-version=2024-09-01" \
-H "Content-Type: application/json" \
-H "Ocp-Apim-Subscription-Key: <YOUR_KEY>" \
-d "{\"image\": {\"content\": \"$IMG64\"}, \"categories\": [\"Hate\",\"SelfHarm\",\"Sexual\",\"Violence\"]}"
Node.js example (read file, base64)
import fs from "fs";
import fetch from "node-fetch";

const endpoint = process.env.CS_ENDPOINT;
const key = process.env.CS_KEY;

async function analyzeImage(path) {
  const content = fs.readFileSync(path).toString("base64");
  const res = await fetch(`${endpoint}/contentsafety/image:analyze?api-version=2024-09-01`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Ocp-Apim-Subscription-Key": key,
    },
    body: JSON.stringify({ image: { content }, categories: ["Sexual", "Violence", "Hate", "SelfHarm"] }),
  });
  if (!res.ok) throw new Error(`${res.status} ${await res.text()}`);
  return res.json();
}

analyzeImage("sample.jpg").then(r => console.log(JSON.stringify(r, null, 2)));
Formats and limits: Supported image formats commonly include JPEG, PNG, GIF, BMP, TIFF, and WEBP, with size and dimension constraints. For the latest specifics and a working example, follow the image quickstart (Microsoft Learn, 2024/2025).
Checkpoint: Use a benign image and a borderline one to verify different categories and severities.
Blocklists let you enforce explicit terms beyond model detection.
Create or update a blocklist
# Create a blocklist named "myTerms"
curl -X PUT "https://<res>.cognitiveservices.azure.com/contentsafety/text/blocklists/myTerms?api-version=2024-09-01" \
-H "Content-Type: application/json" \
-H "Ocp-Apim-Subscription-Key: <KEY>" \
-d '{}'
# Add items
curl -X POST "https://<res>.cognitiveservices.azure.com/contentsafety/text/blocklists/myTerms/blocklistItems:batchCreateOrUpdate?api-version=2024-09-01" \
-H "Content-Type: application/json" \
-H "Ocp-Apim-Subscription-Key: <KEY>" \
-d '{
"blocklistItems": [
{"text": "bannedterm1"},
{"text": "banned phrase"}
]
}'
Apply the blocklist in text analyze
curl -s "https://<res>.cognitiveservices.azure.com/contentsafety/text:analyze?api-version=2024-09-01" \
-H "Content-Type: application/json" \
-H "Ocp-Apim-Subscription-Key: <KEY>" \
-d '{
"text": "this contains a bannedterm1",
"blocklistNames": ["myTerms"],
"haltOnBlocklistHit": true
}'
Checkpoint: With haltOnBlocklistHit=true, you should see the request short-circuit with a blocklist hit indicated in the response.
Governance tip: Assign an owner for blocklists, review additions via change management, and test effects in staging before production.
For details and SDK patterns, see the blocklist coverage in the text quickstart (Microsoft Learn, 2024/2025).
The core categories are Hate, SelfHarm, Sexual, and Violence, each returned with a severity score on a 0-7 scale; by default the text API returns the trimmed levels 0, 2, 4, and 6, and you can request all eight levels with the outputType request field. Start with conservative thresholds, then calibrate using real samples.
Suggested starting policy (adjust to your norms): the appendix at the end of this guide has a minimal JSON template, reviewing at severity 2-3 and blocking at 3-5 depending on category, that pairs directly with the decide() function above.
Run experiments: replay a labeled sample of real traffic through the text and image endpoints, compare the resulting allow/review/block actions against the calls your moderators would make, and tune per-category thresholds until review volume and miss rate are acceptable; a hypothetical threshold-sweep helper is sketched after the link below.
For governance frameworks and practical risk controls around threshold design, see this primer on Content Risk Control (DeepCleer blog).
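To make that calibration concrete, here is a small hypothetical sweep helper; it makes no Azure calls, and the sample shape and labels are assumptions about what you collect during a shadow run:

// Hypothetical calibration helper: for one category, count how many labeled-"bad"
// samples each candidate blockAt threshold would catch, and how many labeled-"ok"
// samples it would wrongly block.
function sweepThreshold(samples) {
  // samples: [{ severity: 4, label: "bad" }, { severity: 0, label: "ok" }, ...]
  const rows = [];
  for (let blockAt = 1; blockAt <= 7; blockAt++) {
    const caught = samples.filter(s => s.label === "bad" && s.severity >= blockAt).length;
    const falseBlocks = samples.filter(s => s.label === "ok" && s.severity >= blockAt).length;
    rows.push({ blockAt, caught, falseBlocks });
  }
  return rows;
}

// Example: feed it severities from a shadow run, then pick the lowest blockAt
// that keeps falseBlocks tolerable.
console.table(sweepThreshold([
  { severity: 6, label: "bad" },
  { severity: 4, label: "bad" },
  { severity: 2, label: "ok" },
  { severity: 0, label: "ok" },
]));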
Rate limits and 429 handling: if you exceed your tier's request quota, the service responds with HTTP 429, typically with a Retry-After header; honor that delay when present, otherwise back off exponentially with jitter.
Node.js retry helper (simplified)
async function withBackoff(fn, { retries = 5, baseMs = 200 } = {}) {
  let attempt = 0;
  while (true) {
    try {
      return await fn();
    } catch (err) {
      attempt++;
      if (attempt > retries) throw err;
      const retryAfter = parseInt(err.retryAfter || 0, 10);
      const delay = retryAfter > 0 ? retryAfter * 1000 : baseMs * Math.pow(2, attempt) + Math.random() * 100;
      await new Promise(r => setTimeout(r, delay));
    }
  }
}
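The helper expects a retryAfter hint on the thrown error; one hypothetical way to wire it up with the earlier analyzeText-style call, reusing the endpoint and key variables from the previous examples:

import fetch from "node-fetch";

// Hypothetical usage of withBackoff(): throw errors that carry the Retry-After
// header (in seconds) so the helper can honor it on 429s.
async function analyzeTextWithRetry(input) {
  return withBackoff(async () => {
    const res = await fetch(`${endpoint}/contentsafety/text:analyze?api-version=2024-09-01`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": key,
      },
      body: JSON.stringify({ text: input }),
    });
    if (!res.ok) {
      const err = new Error(`HTTP ${res.status}: ${await res.text()}`);
      err.retryAfter = res.headers.get("retry-after"); // null if the header is absent
      throw err;
    }
    return res.json();
  });
}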
Structured logging: for every moderation call, log a correlation ID, the content surface, the per-category severities, any blocklist matches, and the final action your policy produced, and keep the raw user content itself out of the logs.
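One hypothetical shape for such a record, with illustrative field names rather than a prescribed schema:

// Hypothetical structured log for a moderation decision: identifiers, severities,
// and the action taken, never the raw user content.
function logDecision({ requestId, surface, categoriesAnalysis, outcome }) {
  console.log(JSON.stringify({
    event: "content_moderation_decision",
    requestId,                      // your correlation ID
    surface,                        // e.g., "chat_message", "profile_bio"
    severities: Object.fromEntries(
      categoriesAnalysis.map(({ category, severity }) => [category, severity])
    ),
    action: outcome.action,         // "allow" | "review" | "block"
    reasons: outcome.reasons,
    at: new Date().toISOString(),
  }));
}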
Security and privacy: store keys in a secret manager (or avoid them entirely with Managed Identity), rotate them regularly, and do not persist raw user content in logs or analytics; keep only the scores and decisions you need.
Cost control: the service is billed per transaction, so track call volume per surface, avoid re-analyzing unchanged content, and watch your tier's quotas as traffic grows.
Context: Azure Content Moderator’s binary-style flags (e.g., “adult,” “racy,” certain text categories) differ from Content Safety’s category + severity approach. There is no official one-to-one mapping; expect to calibrate. Microsoft confirms the deprecation and migration recommendation in the Content Moderator overview (Microsoft Learn, 2024) and the Content Safety overview (Microsoft Learn, 2024/2025).
A pragmatic playbook: run Content Safety in shadow mode alongside your existing Content Moderator calls, log both outcomes for a representative sample, calibrate per-category thresholds (and blocklists) until the new decisions match or improve on the old ones, then shift traffic gradually and keep a fast rollback path; a small agreement-check sketch follows.
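A minimal hypothetical sketch of that agreement check; the action labels and sample shape are assumptions about what your pipeline logs:

// Hypothetical shadow-mode comparison: how often does the new Content Safety
// decision agree with what the legacy pipeline decided for the same item?
// samples: [{ legacyAction: "block", newAction: "block" }, ...]
function agreementRate(samples) {
  if (samples.length === 0) return { rate: 0, disagreements: [] };
  const disagreements = samples.filter(s => s.legacyAction !== s.newAction);
  return {
    rate: (samples.length - disagreements.length) / samples.length,
    disagreements, // review these by hand before changing thresholds
  };
}

console.log(agreementRate([
  { legacyAction: "block", newAction: "block" },
  { legacyAction: "allow", newAction: "review" },
]));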
Operational notes: the two services have different endpoints, API versions, and response schemas, so keep separate client code paths during the transition, monitor error and latency metrics for the new calls, and plan the cutover well before the retirement date.
If you plan to expand into multi-modal GenAI or cross-platform moderation beyond this Azure scope, this overview of a Generative AI Moderation Solution (DeepCleer) can provide conceptual guardrails for policy extensibility.
Common HTTP errors and fixes: 401 usually means a wrong key or endpoint; 404 a mistyped path or unsupported api-version; 400 an invalid request body (for example, malformed base64 or text over the size limit); 429 means you hit your quota, so back off and retry.
Schema/version pitfalls: pin the api-version query parameter in every call, and do not reuse Content Moderator response parsing; Content Safety returns a categoriesAnalysis array (plus blocklist match details when blocklists are applied), not the old flag-based shape.
Verification tactics: keep a small regression suite of known-benign and known-violating samples, run it after every threshold, blocklist, or api-version change, and alert on unexpected shifts in block and review rates.
Appendix — Minimal policy template (JSON)
{
"Hate": { "reviewAt": 2, "blockAt": 4 },
"Sexual": { "reviewAt": 3, "blockAt": 4 },
"Violence":{ "reviewAt": 3, "blockAt": 5 },
"SelfHarm":{ "reviewAt": 2, "blockAt": 3 }
}
Appendix — Safe request wrapper (Python)
import random, time, requests

def post_with_backoff(url, headers, json, retries=5, base=0.2):
    for i in range(retries + 1):
        resp = requests.post(url, headers=headers, json=json, timeout=15)
        if resp.status_code in (429, 500, 503):
            if i == retries:
                resp.raise_for_status()
            retry_after = resp.headers.get("Retry-After")
            if retry_after:
                delay = float(retry_after)
            else:
                delay = base * (2 ** i) + random.random() * 0.1
            time.sleep(delay)
            continue
        resp.raise_for_status()
        return resp.json()
References and documentation: Azure AI Content Safety overview, text and image analysis quickstarts, and FAQ (Microsoft Learn); Azure Content Moderator overview and deprecation notice (Microsoft Learn); Azure AI Content Safety REST API reference (Microsoft Learn).