## Quick Reference

### Top-Level Configuration

| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
| `tokenProviderUrl` | string | - | ✅ | URL of your secure backend endpoint for tokens |
| `apiBaseUrl` | string | `https://api.animusai.co/v3` | ❌ | Override the default API endpoint |
| `tokenStorage` | `'sessionStorage' \| 'localStorage'` | `sessionStorage` | ❌ | Where to store auth tokens |
### Chat Configuration

| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
| `chat.model` | string | - | ✅ | Default model ID for chat requests |
| `chat.systemMessage` | string | - | ✅ | Default system prompt |
| `chat.temperature` | number | 1 | ❌ | Controls randomness (0.0-2.0) |
| `chat.top_p` | number | 1 | ❌ | Nucleus sampling threshold (0.0-1.0) |
| `chat.n` | number | 1 | ❌ | Number of choices to generate |
| `chat.max_tokens` | number | - | ❌ | Maximum tokens in response |
| `chat.stop` | string[] | null | ❌ | Stop sequences |
| `chat.stream` | boolean | false | ❌ | Enable streaming (not compatible with autoTurn) |
| `chat.presence_penalty` | number | 1 | ❌ | Penalize new words (-2.0 to 2.0) |
| `chat.frequency_penalty` | number | 1 | ❌ | Penalize frequent words (-2.0 to 2.0) |
| `chat.best_of` | number | 1 | ❌ | Server-side generations to choose from |
| `chat.top_k` | number | 40 | ❌ | Limit sampling to top k tokens |
| `chat.repetition_penalty` | number | 1 | ❌ | Penalize repeating tokens (0.0-2.0) |
| `chat.min_p` | number | 0 | ❌ | Minimum probability threshold (0.0-1.0) |
| `chat.length_penalty` | number | 1 | ❌ | Adjust sequence length impact |
| `chat.compliance` | boolean | true | ❌ | Enable content moderation |
| `chat.reasoning` | boolean | false | ❌ | Extract thinking content |
| `chat.check_image_generation` | boolean | false | ❌ | Auto-generate images from prompts (⚠️ alpha) |
| `chat.historySize` | number | 0 | ❌ | Number of turns to keep for context |
| `chat.autoTurn` | `boolean \| object` | false | ❌ | Enable conversational turns |
### AutoTurn Configuration

| Parameter | Type | Default | Description |
|---|---|---|---|
| `autoTurn.enabled` | boolean | true | Enable/disable conversational turns |
| `autoTurn.baseTypingSpeed` | number | 45 | Base typing speed in WPM |
| `autoTurn.speedVariation` | number | 0.2 | Speed variation factor (± percentage) |
| `autoTurn.minDelay` | number | 500 | Minimum delay between turns (ms) |
| `autoTurn.maxDelay` | number | 3000 | Maximum delay between turns (ms) |
| `autoTurn.maxTurns` | number | 3 | Maximum number of turns allowed |
| `autoTurn.followUpDelay` | number | 2000 | Delay before follow-up requests (ms) |
| `autoTurn.maxSequentialFollowUps` | number | 2 | Max sequential follow-ups before user input |
### Vision Configuration

| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
| `vision.model` | string | - | ✅ | Default model for vision requests |
| `vision.temperature` | number | - | ❌ | Default temperature for vision |
## Complete Configuration Example
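A sketch of a full configuration object assembled from the tables above. The option names come from this reference; the URLs and model IDs are placeholders, and values other than the documented defaults are illustrative.

```typescript
// Complete configuration sketch. Shape follows the tables in this reference;
// model IDs and URLs are placeholders to replace with your own.
const config = {
  tokenProviderUrl: 'https://your-backend.example.com/animus-token', // required
  apiBaseUrl: 'https://api.animusai.co/v3',                          // optional override
  tokenStorage: 'sessionStorage' as const,                           // or 'localStorage'

  chat: {
    model: 'your-chat-model-id',                   // required
    systemMessage: 'You are a helpful assistant.', // required
    temperature: 1,       // 0.0-2.0
    top_p: 1,             // 0.0-1.0
    max_tokens: 1024,
    compliance: true,     // content moderation
    reasoning: false,     // <think> extraction
    historySize: 10,      // turns of context; 0 disables history
    autoTurn: {
      enabled: true,
      baseTypingSpeed: 45, // WPM
      minDelay: 500,       // ms
      maxDelay: 3000,      // ms
    },
  },

  vision: {
    model: 'your-vision-model-id', // required if the vision object is provided
  },
};
```

Pass this object to the SDK client constructor when creating the client.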
## TypeScript Usage
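The SDK ships its own type definitions; as an illustration only, the option shape from the tables above can be described locally like this (the interface names here are invented for the sketch, not the SDK's actual exports):

```typescript
// Illustrative types mirroring the configuration tables above.
// The real SDK exports its own definitions; these names are invented.
type TokenStorage = 'sessionStorage' | 'localStorage';

interface ChatOptions {
  model: string;         // required
  systemMessage: string; // required
  temperature?: number;  // 0.0-2.0, default 1
  top_p?: number;        // 0.0-1.0, default 1
  max_tokens?: number;   // no API default; varies by model
  stream?: boolean;      // not compatible with autoTurn
  historySize?: number;  // 0 disables history
  autoTurn?: boolean | { enabled?: boolean; baseTypingSpeed?: number };
}

interface ClientOptions {
  tokenProviderUrl: string;    // required
  apiBaseUrl?: string;         // default https://api.animusai.co/v3
  tokenStorage?: TokenStorage; // default 'sessionStorage'
  chat?: ChatOptions;
  vision?: { model: string; temperature?: number };
}

// A typed configuration object:
const options: ClientOptions = {
  tokenProviderUrl: '/api/animus-token',
  chat: { model: 'chat-model-id', systemMessage: 'Be concise.' },
};
```

With the real SDK types, misspelled option names and out-of-range types are caught at compile time.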
## Detailed Parameter Descriptions
### Authentication & Connection

#### `tokenProviderUrl` (required)

The URL of your secure backend endpoint that provides Animus access tokens. This endpoint must:

- Authenticate your user (your own logic)
- Call the Animus Auth Service with your API key
- Return the JWT token to the SDK
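The server-side half of those steps might look like the sketch below. The auth-service URL, the `Authorization` header scheme, and the response shape are assumptions for illustration, not documented API details; check the Animus Auth Service docs for the real contract.

```typescript
// Sketch of the server-side token provider logic (shape assumed, not documented).
interface TokenResponse { token: string }

async function getAnimusToken(
  authServiceUrl: string,
  apiKey: string,
  fetchImpl: typeof fetch = fetch, // injectable for testing
): Promise<TokenResponse> {
  // 1. (Not shown) authenticate *your* user with your own session logic first.
  // 2. Call the Animus Auth Service with your secret API key - server-side only,
  //    so the key is never shipped to the browser.
  const res = await fetchImpl(authServiceUrl, {
    method: 'POST',
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Auth service error: ${res.status}`);
  // 3. Return the JWT so your endpoint can hand it to the SDK.
  return (await res.json()) as TokenResponse;
}
```

Your `tokenProviderUrl` endpoint would wrap this in whatever web framework you use and return the JSON body to the browser.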
#### `apiBaseUrl` (optional)

Override the default Animus API endpoint. Useful for testing or custom deployments.

#### `tokenStorage` (optional)

Choose where to store the authentication token:

- `sessionStorage`: more secure; cleared when the tab closes
- `localStorage`: persists across browser sessions
### Chat Configuration

#### Required Chat Parameters

- `chat.model` - Default model ID for chat requests
- `chat.systemMessage` - Default system prompt for conversations
#### API Parameters

`chat.temperature` - Controls response randomness (0.0-2.0)

- Lower values (0.1-0.3): more focused and deterministic
- Higher values (0.7-1.0): more creative and varied

`chat.top_p` - Nucleus sampling threshold (0.0-1.0). Controls diversity by limiting token selection to the top probability mass.

`chat.max_tokens` - Maximum tokens in the response. There is no API default; it varies by model, so set it based on your needs.

`chat.compliance` - Enable content moderation. When `true`, responses include compliance violation detection for non-streaming requests.
`chat.reasoning` - Extract thinking content. When `true`, the SDK extracts `<think>...</think>` blocks into a separate `reasoning` field.
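To illustrate what that extraction does, here is a standalone sketch (not the SDK's internal code) that splits a `<think>` block out of a raw response string:

```typescript
// Standalone sketch of <think> extraction - not the SDK's internal implementation.
function extractReasoning(content: string): { reasoning: string | null; answer: string } {
  const match = content.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) return { reasoning: null, answer: content };
  return {
    reasoning: match[1].trim(),                 // the model's thinking content
    answer: content.replace(match[0], '').trim(), // the visible reply
  };
}

extractReasoning('<think>check units</think>42 km');
// → { reasoning: 'check units', answer: '42 km' }
```

With `chat.reasoning: true` you read the two parts from the SDK's response object instead of parsing them yourself.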
`chat.check_image_generation` - Auto-generate images from prompts (⚠️ alpha feature). When `true`, images are generated automatically whenever responses contain `image_prompt` fields.
#### SDK Features

`chat.historySize` - Conversation context management. The number of previous conversation turns to include in requests; when `0`, history is disabled.
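As an illustration of the idea (the SDK manages history internally; this is not its implementation), keeping only the last N turns for context might look like:

```typescript
// Sketch of history trimming. One "turn" is taken here as a user+assistant pair.
interface Message { role: 'user' | 'assistant' | 'system'; content: string }

function trimHistory(history: Message[], historySize: number): Message[] {
  if (historySize <= 0) return []; // historySize: 0 disables history
  return history.slice(-historySize * 2); // keep the most recent N turns
}
```

Larger values give the model more context at the cost of more tokens per request.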
`chat.autoTurn` - Conversational turns. Enables natural conversation flow with automatic response splitting and typing delays.

- Simple: `autoTurn: true`
- Advanced: `autoTurn: { enabled: true, baseTypingSpeed: 50, ... }`
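Both forms side by side; the non-default values in the advanced form are placeholders to tune for your interface:

```typescript
// Simple form: enable conversational turns with the documented defaults.
const simple = { autoTurn: true };

// Advanced form: tune the pacing (defaults are listed in the AutoTurn table).
const advanced = {
  autoTurn: {
    enabled: true,
    baseTypingSpeed: 50,       // WPM (default 45)
    speedVariation: 0.2,       // ±20% speed variation
    minDelay: 500,             // ms between turns
    maxDelay: 3000,            // ms between turns
    maxSequentialFollowUps: 2, // follow-ups before waiting for user input
  },
};
```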
### Vision Configuration

- `vision.model` (required if the `vision` object is provided) - Default model for vision requests
- `vision.temperature` - Default temperature for vision completion requests
## Parameter Override
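The precedence rule is that request-level options win over configured defaults. A minimal standalone sketch of that merge (plain objects, no SDK calls):

```typescript
// Override precedence sketch: per-request options win over configured defaults.
const configuredDefaults = { temperature: 1, top_p: 1, max_tokens: 1024 };
const perRequest = { temperature: 0.2 };

// Spread order puts request values last, so they take precedence.
const effective = { ...configuredDefaults, ...perRequest };
// effective → { temperature: 0.2, top_p: 1, max_tokens: 1024 }
```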
Most parameters can be overridden on a per-request basis.

## Common Configuration Patterns
### Basic Chat Setup
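A minimal chat-only configuration sketch, using just the required fields (URL and model ID are placeholders):

```typescript
// Minimal configuration: only the required fields.
const basicChat = {
  tokenProviderUrl: '/api/animus-token', // your backend token endpoint
  chat: {
    model: 'your-chat-model-id',
    systemMessage: 'You are a helpful assistant.',
  },
};
```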
### Conversational Interface
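A sketch of a conversational setup, adding history and conversational turns to the required fields (placeholder values):

```typescript
// Conversational setup: turns enabled plus conversation history.
const conversational = {
  tokenProviderUrl: '/api/animus-token',
  chat: {
    model: 'your-chat-model-id',
    systemMessage: 'You are a friendly companion.',
    historySize: 20, // keep recent turns for context
    autoTurn: true,  // note: not compatible with stream: true
  },
};
```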
### Vision-Enabled Application
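A sketch of a vision-enabled setup, combining chat with a vision model (model IDs are placeholders):

```typescript
// Vision-enabled setup: chat plus the vision block.
const visionApp = {
  tokenProviderUrl: '/api/animus-token',
  chat: {
    model: 'your-chat-model-id',
    systemMessage: 'You can discuss images the user shares.',
  },
  vision: {
    model: 'your-vision-model-id', // required when the vision object is provided
  },
};
```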
## Next Steps

- **Authentication Setup**: set up a secure token provider for your backend
- **Chat Completions**: start building chat features with your configuration
- **Conversational Turns**: learn more about natural conversation flow
- **Media & Vision**: add vision capabilities to your application
