Animus provides content moderation capabilities that help you identify and filter potentially harmful content. Unlike some other APIs, our moderation is integrated directly into the chat completions endpoint via the `compliance` parameter.

## Using Content Moderation

To enable content moderation, set the `compliance` parameter to `true` in your chat completion requests. When enabled, the API analyzes the content and returns details about any detected violations.
```javascript
// Example using the fetch API
const response = await fetch('https://api.animusai.co/v2/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${process.env.ANIMUS_API_KEY}`
  },
  body: JSON.stringify({
    model: "vivian-llama3.1-70b-1.0-fp8",
    messages: [
      { role: "user", content: "Text to check for compliance" }
    ],
    compliance: true // Enable content moderation
  })
});

const data = await response.json();

// Check if there are any compliance violations
if (data.compliance_violations && data.compliance_violations.length > 0) {
  console.log("Content violations detected:", data.compliance_violations);
} else {
  console.log("No content violations detected");
}
```
When compliance checking is enabled, the API response includes a `compliance_violations` field containing an array of any detected violation categories. Here's an example response with a detected violation:
```json
{
  "id": "chat-abcd1234",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "vivian-llama3.1-70b-1.0-fp8",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Response content here..."
      }
    }
  ],
  "usage": {
    "prompt_tokens": 40,
    "completion_tokens": 60,
    "total_tokens": 100
  },
  "compliance_violations": ["drug_use"]
}
```
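Based on the example above, `compliance_violations` may be absent entirely when nothing is flagged (an assumption; the docs only show the populated case), so a small accessor keeps callers from branching on response shape:

```javascript
// Safely extract violations from a chat completion response object.
// Returns [] when the field is missing, null, or empty.
function getViolations(response) {
  return Array.isArray(response.compliance_violations)
    ? response.compliance_violations
    : [];
}
```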
## Content Violation Categories

Our compliance system can detect and flag the following categories of potentially harmful content:

| Category | Description |
| --- | --- |
| `pedophilia` | Sexual content involving minors |
| `beastiality` | Sexual acts involving animals |
| `murder` | Content that promotes or glorifies murder |
| `rape` | Content related to sexual assault |
| `incest` | Sexual relations between family members |
| `gore` | Explicit and graphic violent content |
| `prostitution` | Content promoting or soliciting prostitution |
| `drug_use` | Content promoting or describing drug use |
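Assuming these category identifiers are returned verbatim in `compliance_violations`, keeping them in one constant makes it easy to spot values your application doesn't recognize yet (for instance, after an API update):

```javascript
// Category identifiers as documented for `compliance_violations`.
const VIOLATION_CATEGORIES = new Set([
  'pedophilia', 'beastiality', 'murder', 'rape',
  'incest', 'gore', 'prostitution', 'drug_use'
]);

// Returns any flagged categories not in the documented list;
// a non-empty result may be worth logging for review.
function unknownCategories(violations) {
  return violations.filter(v => !VIOLATION_CATEGORIES.has(v));
}
```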
## Advanced Implementation

### Handling Different Violation Types
```javascript
async function handleContentModeration(userContent) {
  const response = await fetch('https://api.animusai.co/v2/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.ANIMUS_API_KEY}`
    },
    body: JSON.stringify({
      model: "vivian-llama3.1-70b-1.0-fp8",
      messages: [
        { role: "user", content: userContent }
      ],
      compliance: true
    })
  });

  const data = await response.json();

  if (data.compliance_violations && data.compliance_violations.length > 0) {
    // Handle different types of violations
    const violations = data.compliance_violations;

    if (violations.includes('drug_use')) {
      console.log("Drug-related content detected");
      // Implement specific handling for drug content
    }

    if (violations.includes('gore') || violations.includes('murder')) {
      console.log("Violent content detected");
      // Implement specific handling for violent content
    }

    // Log for review
    logViolation(userContent, violations);

    return {
      allowed: false,
      violations: violations,
      message: "Content violates our community guidelines"
    };
  }

  return {
    allowed: true,
    content: data.choices[0].message.content
  };
}

function logViolation(content, violations) {
  // Log violation for review and analysis
  console.log(`Violation logged: ${violations.join(', ')} - Content: ${content.substring(0, 100)}...`);
}
```
### Batch Content Moderation

For applications that need to check multiple pieces of content:
```javascript
async function batchModerationCheck(contentArray) {
  const results = await Promise.allSettled(
    contentArray.map(content => handleContentModeration(content))
  );

  return results.map((result, index) => ({
    index,
    content: contentArray[index],
    result: result.status === 'fulfilled' ? result.value : { error: result.reason }
  }));
}

// Usage
const contentToCheck = [
  "This is normal content",
  "This might contain violations",
  "Another piece of content to check"
];

const batchResults = await batchModerationCheck(contentToCheck);

batchResults.forEach(({ index, content, result }) => {
  if (result.allowed) {
    console.log(`Content ${index}: Approved`);
  } else {
    console.log(`Content ${index}: Rejected - ${result.violations?.join(', ')}`);
  }
});
```
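Note that `Promise.allSettled` fires every request at once, which can trip rate limits on large batches. A minimal concurrency limiter (a sketch; the limit of 5 is arbitrary, not an Animus rate-limit policy) can cap in-flight requests:

```javascript
// Run an async `task` over `items`, at most `limit` at a time.
// `task` can be any async function, e.g. handleContentModeration above.
async function mapWithConcurrency(items, task, limit = 5) {
  const results = new Array(items.length);
  let next = 0;

  // Each worker repeatedly claims the next unprocessed index.
  // Claiming (`next++`) is synchronous, so workers never collide.
  async function worker() {
    while (next < items.length) {
      const i = next++;
      results[i] = await task(items[i]);
    }
  }

  const workers = Array.from({ length: Math.min(limit, items.length) }, worker);
  await Promise.all(workers);
  return results;
}
```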
## Best Practices

For effective content moderation in your applications:

- **Always enable compliance**: set `compliance: true` for all user-generated content
- **Implement appropriate responses**: create user-friendly notifications when content is flagged
- **Pair with frontend filters**: implement basic filtering on the client side to reduce API calls for obvious violations
- **Handle violations gracefully**: provide constructive feedback to users when their content is flagged
- **Review edge cases**: periodically review flagged content to understand common violations in your application
- **Log violations**: keep records for analysis and improvement of your moderation system
- **Implement an appeals process**: allow users to appeal moderation decisions when appropriate
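The frontend-filter practice can be as simple as a blocklist check before calling the API. The pattern list below is a placeholder you would replace and maintain for your own application:

```javascript
// Hypothetical client-side pre-filter: a cheap screen before the API call.
// The blocklist here is illustrative only.
const OBVIOUS_BLOCKLIST = [/\bexample-banned-term\b/i];

function passesClientFilter(text) {
  return !OBVIOUS_BLOCKLIST.some(pattern => pattern.test(text));
}

// Only content that passes the cheap check is sent for full moderation;
// `moderate` would be something like handleContentModeration above.
async function moderateWithPrefilter(text, moderate) {
  if (!passesClientFilter(text)) {
    return { allowed: false, violations: ['client_prefilter'] };
  }
  return moderate(text);
}
```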
## Error Handling

Implement robust error handling for moderation requests:
```javascript
async function safeContentModeration(content) {
  try {
    const response = await fetch('https://api.animusai.co/v2/chat/completions', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${process.env.ANIMUS_API_KEY}`
      },
      body: JSON.stringify({
        model: "vivian-llama3.1-70b-1.0-fp8",
        messages: [{ role: "user", content: content }],
        compliance: true
      })
    });

    if (!response.ok) {
      throw new Error(`HTTP error! status: ${response.status}`);
    }

    const data = await response.json();

    return {
      success: true,
      violations: data.compliance_violations || [],
      content: data.choices[0].message.content
    };
  } catch (error) {
    console.error('Moderation check failed:', error);
    return {
      success: false,
      error: error.message,
      // Fail safe - assume content needs review
      violations: ['moderation_error']
    };
  }
}
```
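Transient failures such as timeouts or 5xx responses often deserve a retry before falling back to the fail-safe path. A generic exponential-backoff wrapper (a sketch; the attempt count and delays are arbitrary defaults) could look like:

```javascript
// Retry an async function with exponential backoff.
// `fn` might be () => safeContentModeration(content).
async function withRetry(fn, attempts = 3, baseDelayMs = 500) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (i < attempts - 1) {
        // Wait baseDelayMs, then 2x, then 4x, ...
        await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```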
## Integration Examples

### Web Application Integration

```javascript
// Example: Chat application with real-time moderation
class ChatModerator {
  constructor(apiKey) {
    this.apiKey = apiKey;
  }

  // Calls the chat completions endpoint with compliance enabled.
  async checkCompliance(message) {
    const response = await fetch('https://api.animusai.co/v2/chat/completions', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${this.apiKey}`
      },
      body: JSON.stringify({
        model: "vivian-llama3.1-70b-1.0-fp8",
        messages: [{ role: "user", content: message }],
        compliance: true
      })
    });
    const data = await response.json();
    const violations = data.compliance_violations || [];
    if (violations.length > 0) {
      return { allowed: false, violations };
    }
    return { allowed: true, content: data.choices[0].message.content };
  }

  async moderateMessage(message) {
    const result = await this.checkCompliance(message);

    if (!result.allowed) {
      return {
        blocked: true,
        reason: this.getViolationMessage(result.violations),
        violations: result.violations
      };
    }

    return {
      blocked: false,
      content: result.content
    };
  }

  getViolationMessage(violations) {
    const messages = {
      'drug_use': 'Messages about drug use are not allowed.',
      'gore': 'Graphic violent content is not permitted.',
      'murder': 'Content promoting violence is prohibited.'
      // Add more specific messages
    };

    const specificViolations = violations
      .map(v => messages[v])
      .filter(Boolean);

    return specificViolations.length > 0
      ? specificViolations.join(' ')
      : 'Your message violates our community guidelines.';
  }
}
```
```javascript
// Usage in chat application
const moderator = new ChatModerator(process.env.ANIMUS_API_KEY);

async function handleUserMessage(userMessage) {
  const moderationResult = await moderator.moderateMessage(userMessage);

  if (moderationResult.blocked) {
    showUserWarning(moderationResult.reason);
    return;
  }

  // Process the approved message
  displayMessage(moderationResult.content);
}
```
## Next Steps

- **Text Generation**: learn how to generate text with built-in moderation
- **Vision**: understand vision capabilities and content analysis
- **Webhooks**: set up webhooks for automated moderation workflows
- **API Reference**: complete API documentation and reference