Adds layered, AI-specific security to LLM endpoints using Arcjet. It enforces HTTP-layer protections that run before requests reach your model: prompt injection detection, PII and sensitive-information blocking, and token-budget rate limiting.

Developers register at app.arcjet.com, set the ARCJET_KEY environment variable, create or reuse a shared Arcjet client, and attach protections via withRule and protect calls. The JavaScript and Python SDKs expose detectPromptInjection/detect_prompt_injection, sensitiveInfo/detect_sensitive_info, and tokenBucket/token_bucket, each with rule-specific parameters (e.g., detectPromptInjectionMessage, sensitiveInfoValue, requested tokens).

Use cases include preventing jailbreaks and role-play escapes, stopping PII leakage, controlling AI API costs, and mitigating bot scraping. Core advantages are early rejection of abuse to save budget, centralized PII compliance, and uniform protection across chat and completion endpoints.
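The flow above can be sketched for a Next.js route handler using the JavaScript SDK. This is a minimal illustration, not a definitive implementation: the rule options, entity names, and the token-estimation heuristic are assumptions to check against the Arcjet documentation for your SDK version, and it requires `@arcjet/next` installed plus a valid ARCJET_KEY.

```typescript
// Sketch (assumptions: Next.js App Router, @arcjet/next installed,
// ARCJET_KEY set in the environment; rule options are illustrative).
import arcjet, { sensitiveInfo, tokenBucket } from "@arcjet/next";

// Shared client: create once and reuse across routes.
const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  rules: [
    // Deny requests containing emails or credit card numbers.
    sensitiveInfo({ mode: "LIVE", deny: ["EMAIL", "CREDIT_CARD_NUMBER"] }),
    // Token budget: refill 2,000 tokens every 60s, burst up to 5,000.
    tokenBucket({ mode: "LIVE", refillRate: 2000, interval: 60, capacity: 5000 }),
  ],
});

export async function POST(req: Request) {
  const body = await req.json();
  // Rough token estimate (~4 chars per token) charged against the bucket.
  const estimatedTokens = Math.ceil(JSON.stringify(body).length / 4);
  const decision = await aj.protect(req, { requested: estimatedTokens });
  if (decision.isDenied()) {
    // Abuse is rejected here, before any model tokens are spent.
    return new Response("Request blocked", { status: 403 });
  }
  // ...forward the request to the model here...
  return new Response("ok");
}
```

Because every rule runs in the same protect call, the early-rejection and cost-control benefits described above come from a single decision point rather than per-endpoint checks.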
This page is part of the OpenClaw Skills hub, with installation guides, category navigation, and practical links.