## Quick Reference

### Top-Level Configuration
| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
| `tokenProviderUrl` | `string` | - | ✅ | URL of your secure backend endpoint for tokens |
| `apiBaseUrl` | `string` | `https://api.animusai.co/v3` | ❌ | Override default API endpoint |
| `tokenStorage` | `'sessionStorage' \| 'localStorage'` | `'sessionStorage'` | ❌ | Where to store auth tokens |
### Chat Configuration

| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
| `chat.model` | `string` | - | ✅ | Default model ID for chat requests |
| `chat.systemMessage` | `string` | - | ✅ | Default system prompt |
| `chat.temperature` | `number` | `1` | ❌ | Controls randomness (0.0-2.0) |
| `chat.top_p` | `number` | `1` | ❌ | Nucleus sampling threshold (0.0-1.0) |
| `chat.n` | `number` | `1` | ❌ | Number of choices to generate |
| `chat.max_tokens` | `number` | - | ❌ | Maximum tokens in the response |
| `chat.stop` | `string[] \| null` | `null` | ❌ | Stop sequences |
| `chat.stream` | `boolean` | `false` | ❌ | Enable streaming (incompatible with `autoTurn`) |
| `chat.presence_penalty` | `number` | `1` | ❌ | Penalize new words (-2.0 to 2.0) |
| `chat.frequency_penalty` | `number` | `1` | ❌ | Penalize frequent words (-2.0 to 2.0) |
| `chat.best_of` | `number` | `1` | ❌ | Server-side generations to choose from |
| `chat.top_k` | `number` | `40` | ❌ | Limit sampling to the top k tokens |
| `chat.repetition_penalty` | `number` | `1` | ❌ | Penalize repeated tokens (0.0-2.0) |
| `chat.min_p` | `number` | `0` | ❌ | Minimum probability threshold (0.0-1.0) |
| `chat.length_penalty` | `number` | `1` | ❌ | Adjust sequence-length impact |
| `chat.compliance` | `boolean` | `true` | ❌ | Enable content moderation |
| `chat.reasoning` | `boolean` | `false` | ❌ | Extract thinking content |
| `chat.check_image_generation` | `boolean` | `false` | ❌ | Auto-generate images from prompts (⚠️ alpha) |
| `chat.historySize` | `number` | `0` | ❌ | Number of previous turns to include as context |
| `chat.autoTurn` | `boolean \| object` | `false` | ❌ | Enable conversational turns |
### AutoTurn Configuration

| Parameter | Type | Default | Description |
|---|---|---|---|
| `autoTurn.enabled` | `boolean` | `true` | Enable/disable conversational turns |
| `autoTurn.baseTypingSpeed` | `number` | `45` | Base typing speed in WPM |
| `autoTurn.speedVariation` | `number` | `0.2` | Speed variation factor (± percentage) |
| `autoTurn.minDelay` | `number` | `500` | Minimum delay between turns (ms) |
| `autoTurn.maxDelay` | `number` | `3000` | Maximum delay between turns (ms) |
| `autoTurn.maxTurns` | `number` | `3` | Maximum number of turns allowed |
| `autoTurn.followUpDelay` | `number` | `2000` | Delay before follow-up requests (ms) |
| `autoTurn.maxSequentialFollowUps` | `number` | `2` | Max sequential follow-ups before user input |
### Vision Configuration

| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
| `vision.model` | `string` | - | ✅ | Default model for vision requests |
| `vision.temperature` | `number` | - | ❌ | Default temperature for vision requests |
## Complete Configuration Example

```json
{
  "tokenProviderUrl": "https://your-backend.com/api/get-animus-token",
  "apiBaseUrl": "https://api.animusai.co/v3",
  "tokenStorage": "sessionStorage",
  "chat": {
    "model": "vivian-llama3.1-70b-1.0-fp8",
    "systemMessage": "You are a helpful AI assistant.",
    "temperature": 0.7,
    "top_p": 1.0,
    "n": 1,
    "max_tokens": 500,
    "stop": ["\n"],
    "stream": false,
    "presence_penalty": 0.0,
    "frequency_penalty": 0.0,
    "best_of": 1,
    "top_k": 40,
    "repetition_penalty": 1.0,
    "min_p": 0.0,
    "length_penalty": 1.0,
    "compliance": true,
    "reasoning": false,
    "check_image_generation": false,
    "historySize": 30,
    "autoTurn": {
      "enabled": true,
      "baseTypingSpeed": 50,
      "speedVariation": 0.3,
      "minDelay": 800,
      "maxDelay": 2500,
      "maxTurns": 3,
      "followUpDelay": 2000,
      "maxSequentialFollowUps": 2
    }
  },
  "vision": {
    "model": "animuslabs/Qwen2-VL-NSFW-Vision-1.2",
    "temperature": 0.2
  }
}
```
## TypeScript Usage

```typescript
import { AnimusClient } from 'animus-client';

const client = new AnimusClient({
  // Required
  tokenProviderUrl: 'https://your-backend.com/api/get-animus-token',

  // Optional top-level settings
  apiBaseUrl: 'https://api.animusai.co/v3',
  tokenStorage: 'sessionStorage',

  // Chat configuration (model & systemMessage required if provided)
  chat: {
    // Required
    model: 'vivian-llama3.1-70b-1.0-fp8',
    systemMessage: 'You are a helpful AI assistant.',

    // Optional API parameters
    temperature: 0.7,
    max_tokens: 500,
    compliance: true,
    reasoning: false,

    // Optional SDK features
    historySize: 30,
    autoTurn: true // or a detailed config object
  },

  // Vision configuration (model required if provided)
  vision: {
    model: 'animuslabs/Qwen2-VL-NSFW-Vision-1.2',
    temperature: 0.2
  }
});
```
## Detailed Parameter Descriptions

### Authentication & Connection

**`tokenProviderUrl`** *(required)*

The URL of your secure backend endpoint that provides Animus access tokens. This endpoint must:

- Authenticate your user (your own logic)
- Call the Animus Auth Service with your API key
- Return the JWT token to the SDK

```typescript
tokenProviderUrl: 'https://your-backend.com/api/get-animus-token'
```
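The endpoint itself lives in your backend, outside the SDK. As a rough sketch using only Node built-ins — the Auth Service URL, request shape, and response field names below are illustrative assumptions, not documented values:

```typescript
import * as http from 'http';

// Hypothetical values -- substitute your real Auth Service URL and key handling.
const ANIMUS_AUTH_URL = 'https://api.animusai.co/auth/generate-token';
const API_KEY = process.env.ANIMUS_API_KEY ?? '';

// Exchange your API key for a short-lived Animus token (assumed response shape).
async function fetchAnimusToken(): Promise<string> {
  const res = await fetch(ANIMUS_AUTH_URL, {
    method: 'POST',
    headers: { Authorization: `Bearer ${API_KEY}` }
  });
  if (!res.ok) throw new Error(`Auth service returned ${res.status}`);
  const body = (await res.json()) as { token: string };
  return body.token;
}

// The endpoint your tokenProviderUrl points at.
const server = http.createServer(async (req, res) => {
  // 1. Authenticate your own user here (session cookie, auth header, etc.).
  // 2. Return the JWT to the SDK as JSON.
  try {
    const token = await fetchAnimusToken();
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ token }));
  } catch {
    res.writeHead(502);
    res.end();
  }
});
// server.listen(3000);
```

Keeping the API key server-side like this is the point of the token-provider pattern: the browser only ever sees short-lived tokens.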
**`apiBaseUrl`** *(optional)*

Override the default Animus API endpoint. Useful for testing or custom deployments.

```typescript
apiBaseUrl: 'https://api.animusai.co/v3' // default
```
**`tokenStorage`** *(optional)*

Choose where to store the authentication token:

- `sessionStorage`: more secure; cleared when the tab closes
- `localStorage`: persists across browser sessions

```typescript
tokenStorage: 'sessionStorage' // default
```
### Chat Configuration

#### Required Chat Parameters

**`chat.model`** - Default model ID for chat requests

```typescript
model: 'vivian-llama3.1-70b-1.0-fp8'
```

**`chat.systemMessage`** - Default system prompt for conversations

```typescript
systemMessage: 'You are a helpful AI assistant.'
```
#### API Parameters

**`chat.temperature`** - Controls response randomness (0.0-2.0)

- Lower values (0.1-0.3): more focused and deterministic
- Higher values (0.7-1.0): more creative and varied

**`chat.top_p`** - Nucleus sampling threshold (0.0-1.0)

Controls diversity by limiting token selection to the top probability mass.

**`chat.max_tokens`** - Maximum tokens in the response

There is no API default; limits vary by model. Set a value based on your needs.
**`chat.compliance`** - Enable content moderation

When `true`, responses include compliance violation detection for non-streaming requests.

**`chat.reasoning`** - Extract thinking content

When `true`, extracts `<think>...</think>` blocks into a separate `reasoning` field.
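Conceptually, the extraction behaves like the small helper below. This is an illustration of the behavior described above, not the SDK's internal implementation:

```typescript
// Split <think>...</think> out of a raw completion into content + reasoning,
// mirroring what the SDK's reasoning option does with a response.
function extractReasoning(raw: string): { content: string; reasoning: string | null } {
  const match = raw.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) return { content: raw, reasoning: null };
  return {
    content: raw.replace(match[0], '').trim(), // visible reply, thinking removed
    reasoning: match[1].trim()                 // the extracted thinking text
  };
}
```

With `reasoning: false`, the raw content (including any `<think>` block) is returned unmodified.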
**`chat.check_image_generation`** - Auto-generate images from prompts ⚠️ Alpha Feature

When `true`, automatically generates images when responses contain `image_prompt` fields.

**Alpha Feature:** Image generation is currently in alpha and not recommended for production use. The API and functionality may change without notice.
#### SDK Features

**`chat.historySize`** - Conversation context management

Number of previous conversation turns to include in requests. When `0`, history is disabled.

**`chat.autoTurn`** - Conversational turns

Enables natural conversation flow with automatic response splitting and typing delays.

- Simple: `autoTurn: true`
- Advanced: `autoTurn: { enabled: true, baseTypingSpeed: 50, ... }`

Streaming (`stream: true`) is not supported when `autoTurn` is enabled.
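To see how the timing settings interact, here is a sketch of how a per-turn typing delay could be derived from them. The arithmetic is illustrative, not the SDK's exact formula:

```typescript
interface AutoTurnTiming {
  baseTypingSpeed: number; // words per minute
  speedVariation: number;  // ± fraction applied to the WPM
  minDelay: number;        // ms
  maxDelay: number;        // ms
}

// Illustrative only: estimate how long "typing" a turn should appear to take.
function typingDelayMs(text: string, cfg: AutoTurnTiming): number {
  const words = text.split(/\s+/).filter(Boolean).length;
  // Jitter the base speed by up to ±speedVariation for a natural feel.
  const jitter = 1 + (Math.random() * 2 - 1) * cfg.speedVariation;
  const wpm = cfg.baseTypingSpeed * jitter;
  const ms = (words / wpm) * 60_000;
  // Clamp into the configured window.
  return Math.min(cfg.maxDelay, Math.max(cfg.minDelay, ms));
}
```

Note how `minDelay`/`maxDelay` bound the result regardless of message length, so very long turns never stall the conversation.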
### Vision Configuration

**`vision.model`** *(required if the vision object is provided)* - Default model for vision requests

```typescript
model: 'animuslabs/Qwen2-VL-NSFW-Vision-1.2'
```

**`vision.temperature`** *(optional)* - Default temperature for vision completion requests
## Parameter Override

Most parameters can be overridden on a per-request basis:

```typescript
// Override configured defaults in completions()
const response = await client.chat.completions({
  messages: [{ role: 'user', content: 'Hello!' }],
  temperature: 0.9, // overrides the configured default
  max_tokens: 100,  // overrides the configured default
  reasoning: true   // overrides the configured default
});
```
## Common Configuration Patterns

### Basic Chat Setup

```typescript
const client = new AnimusClient({
  tokenProviderUrl: 'https://your-backend.com/api/get-animus-token',
  chat: {
    model: 'vivian-llama3.1-70b-1.0-fp8',
    systemMessage: 'You are a helpful assistant.',
    temperature: 0.7,
    historySize: 20
  }
});
```
### Conversational Interface

```typescript
const client = new AnimusClient({
  tokenProviderUrl: 'https://your-backend.com/api/get-animus-token',
  chat: {
    model: 'vivian-llama3.1-70b-1.0-fp8',
    systemMessage: 'You are a friendly chatbot.',
    autoTurn: {
      enabled: true,
      baseTypingSpeed: 50
    },
    historySize: 30
  }
});
```
### Vision-Enabled Application

```typescript
const client = new AnimusClient({
  tokenProviderUrl: 'https://your-backend.com/api/get-animus-token',
  chat: {
    model: 'vivian-llama3.1-70b-1.0-fp8',
    systemMessage: 'You are an AI that can see and understand images.',
    check_image_generation: true
  },
  vision: {
    model: 'animuslabs/Qwen2-VL-NSFW-Vision-1.2',
    temperature: 0.2
  }
});
```
## Next Steps