The Azure AI Content Safety SDK for Java gives Java developers an easy-to-integrate toolkit for building content moderation and harm-detection workflows. It exposes two core clients, ContentSafetyClient and BlocklistClient, supports authentication via API key or DefaultAzureCredential, and is published as the Maven artifact com.azure:azure-ai-contentsafety. Key features include text and image analysis with severity scoring (text: 0–7; image: 0, 2, 4, or 6), categorical detection for Hate, Sexual, Violence, and Self-harm, and blocklist management for custom prohibited terms. Typical use cases are real-time chat moderation, user-generated content filtering, compliance enforcement, and automated takedown workflows. Benefits include configurable severity thresholds, multimodal detection, seamless Azure authentication, and straightforward integration into Java backends and moderation pipelines, helping reduce policy violations and improve safety at scale.
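The "configurable thresholds" mentioned above typically mean comparing each category's returned severity against a per-category cutoff. A minimal sketch of that decision logic is below; the `shouldBlock` helper, the category names as map keys, and the default cutoff of 4 are illustrative assumptions, not part of the SDK. In a real integration, the severities would come from `client.analyzeText(new AnalyzeTextOptions(text)).getCategoriesAnalysis()`.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SeverityGate {
    // Hypothetical helper: given per-category severities (text scale 0-7)
    // and per-category thresholds, decide whether to block the content.
    // Categories missing from the thresholds map fall back to a default cutoff.
    static boolean shouldBlock(Map<String, Integer> severities,
                               Map<String, Integer> thresholds) {
        for (Map.Entry<String, Integer> e : severities.entrySet()) {
            int cutoff = thresholds.getOrDefault(e.getKey(), 4); // assumed default: block at 4+
            if (e.getValue() >= cutoff) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Example severities as a moderation call might return them.
        Map<String, Integer> severities = new LinkedHashMap<>();
        severities.put("Hate", 2);
        severities.put("Violence", 6);

        Map<String, Integer> thresholds = Map.of("Hate", 4, "Violence", 4);
        System.out.println(shouldBlock(severities, thresholds)); // prints true (Violence 6 >= 4)
    }
}
```

Keeping the thresholds external to the analysis call lets a moderation pipeline tune strictness per category (for example, a lower cutoff for Self-harm than for Violence) without changing the client code.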