Detect potentially harmful content with built-in compliance checking and handle violations according to your application's needs
The SDK provides built-in content moderation capabilities that detect potentially harmful content and expose violation details through response data. How you handle these violations is entirely up to your application’s requirements and policies.
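Because the SDK only reports violations, the response policy lives in your code. As a hypothetical sketch (the violation category strings and the block/flag/allow policy below are illustrative assumptions, not values defined by the SDK), one approach is a small dispatcher that maps reported categories to the strictest matching action:

```typescript
// Hypothetical policy map: the category names and actions are
// illustrative assumptions, not values defined by the SDK.
type Action = 'block' | 'flag' | 'allow';

const POLICY: Record<string, Action> = {
  hate_speech: 'block',
  violence: 'block',
  profanity: 'flag',
};

// Return the strictest action for the set of reported violations;
// unknown categories fall through to 'allow'.
function decideAction(violations: string[]): Action {
  if (violations.some((v) => POLICY[v] === 'block')) return 'block';
  if (violations.some((v) => POLICY[v] === 'flag')) return 'flag';
  return 'allow';
}
```

A dispatcher like this keeps moderation policy in one place, so the event-driven and direct-API code paths shown below can share it.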
Enable content moderation by setting the compliance option to true:
```typescript
import { AnimusClient } from 'animus-client';

const client = new AnimusClient({
  tokenProviderUrl: 'https://your-backend.com/api/get-animus-token',
  chat: {
    model: 'vivian-llama3.1-70b-1.0-fp8',
    systemMessage: 'You are a helpful assistant.',
    compliance: true // Enable content moderation (default: true)
  }
});

// For event-driven chat, check violations in the messageComplete event
client.on('messageComplete', (data) => {
  if (data.compliance_violations && data.compliance_violations.length > 0) {
    // Handle violations according to your application's needs
    console.log("Violations detected:", data.compliance_violations);
    // Your custom handling logic here
  } else {
    console.log("Content approved:", data.content);
  }
});

// Send message - compliance checking happens automatically
client.chat.send("User message to check");

// Or use the direct API method for an immediate response
const response = await client.chat.completions({
  messages: [{ role: 'user', content: 'User message to check' }],
  compliance: true
});

if (response.compliance_violations && response.compliance_violations.length > 0) {
  console.log("Violations detected:", response.compliance_violations);
} else {
  console.log("Content approved");
}
```
Performance Note: Enabling compliance checking adds a small amount of latency to requests as content is analyzed for violations. While minimal, consider this when designing real-time applications.
```typescript
const client = new AnimusClient({
  tokenProviderUrl: 'https://your-backend.com/api/get-animus-token',
  chat: {
    model: 'vivian-llama3.1-70b-1.0-fp8',
    compliance: true // All messages will be moderated
  }
});
```
Content moderation works with both streaming and non-streaming responses. For streaming responses, compliance violations are available in the chunk data:
```typescript
// Streaming with compliance checking
const stream = await client.chat.completions({
  messages: [{ role: 'user', content: 'User message' }],
  compliance: true,
  stream: true
});

let fullContent = '';
let hasViolations = false;
let violations: string[] = [];

for await (const chunk of stream) {
  // Check for compliance violations in the chunk
  if (chunk.compliance_violations && chunk.compliance_violations.length > 0) {
    hasViolations = true;
    violations = chunk.compliance_violations;
    console.log('Compliance violations detected:', violations);
    // You can choose to:
    // - Stop streaming immediately
    // - Continue streaming but flag the content
    // - Handle violations after the stream completes
    break; // Example: stop streaming on violation
  }

  // Process the content delta
  const delta = chunk.choices?.[0]?.delta?.content || '';
  if (delta) {
    fullContent += delta;
    displayStreamingContent(delta);
  }
}

// Handle violations after streaming
if (hasViolations) {
  handleViolations(violations, fullContent);
} else {
  console.log('Content approved');
}
```
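The streaming example calls two helpers, `displayStreamingContent` and `handleViolations`, that it leaves undefined. A minimal sketch of stand-ins follows; their behavior here (writing deltas to stdout and replacing flagged output with a placeholder message) is an assumption about your UI, not SDK behavior:

```typescript
// Hypothetical stand-in: a real app would append the delta to its UI
// (e.g. a chat bubble) instead of writing to stdout.
function displayStreamingContent(delta: string): void {
  process.stdout.write(delta);
}

// Hypothetical stand-in: log what was discarded and return the text
// to show the user in place of the flagged response.
function handleViolations(violations: string[], partialContent: string): string {
  console.warn(`Discarding ${partialContent.length} chars; violations: ${violations.join(', ')}`);
  return 'This response was withheld by content moderation.';
}
```

Returning the replacement text from `handleViolations` (rather than rendering inside it) keeps the helper easy to unit-test and lets different surfaces decide how to display the notice.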