Quick Reference

Top-Level Configuration

| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
| `tokenProviderUrl` | `string` | - | Yes | URL of your secure backend endpoint for tokens |
| `apiBaseUrl` | `string` | `https://api.animusai.co/v3` | No | Override the default API endpoint |
| `tokenStorage` | `'sessionStorage' \| 'localStorage'` | `'sessionStorage'` | No | Where to store auth tokens |

Chat Configuration

| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
| `chat.model` | `string` | - | Yes* | Default model ID for chat requests |
| `chat.systemMessage` | `string` | - | Yes* | Default system prompt |
| `chat.temperature` | `number` | 1 | No | Controls randomness (0.0-2.0) |
| `chat.top_p` | `number` | 1 | No | Nucleus sampling threshold (0.0-1.0) |
| `chat.n` | `number` | 1 | No | Number of choices to generate |
| `chat.max_tokens` | `number` | - | No | Maximum tokens in response |
| `chat.stop` | `string[]` | null | No | Stop sequences |
| `chat.stream` | `boolean` | false | No | Enable streaming (not with autoTurn) |
| `chat.presence_penalty` | `number` | 1 | No | Penalize new words (-2.0 to 2.0) |
| `chat.frequency_penalty` | `number` | 1 | No | Penalize frequent words (-2.0 to 2.0) |
| `chat.best_of` | `number` | 1 | No | Server-side generations to choose from |
| `chat.top_k` | `number` | 40 | No | Limit sampling to top k tokens |
| `chat.repetition_penalty` | `number` | 1 | No | Penalize repeating tokens (0.0-2.0) |
| `chat.min_p` | `number` | 0 | No | Minimum probability threshold (0.0-1.0) |
| `chat.length_penalty` | `number` | 1 | No | Adjust sequence length impact |
| `chat.compliance` | `boolean` | true | No | Enable content moderation |
| `chat.reasoning` | `boolean` | false | No | Extract thinking content |
| `chat.check_image_generation` | `boolean` | false | No | Auto-generate images from prompts ⚠️ Alpha |
| `chat.historySize` | `number` | 0 | No | Number of turns for context |
| `chat.autoTurn` | `boolean \| object` | false | No | Enable conversational turns |

*Required if the `chat` object is provided.

AutoTurn Configuration

| Parameter | Type | Default | Description |
|---|---|---|---|
| `autoTurn.enabled` | `boolean` | true | Enable/disable conversational turns |
| `autoTurn.baseTypingSpeed` | `number` | 45 | Base typing speed in WPM |
| `autoTurn.speedVariation` | `number` | 0.2 | Speed variation factor (±percentage) |
| `autoTurn.minDelay` | `number` | 500 | Minimum delay between turns (ms) |
| `autoTurn.maxDelay` | `number` | 3000 | Maximum delay between turns (ms) |
| `autoTurn.maxTurns` | `number` | 3 | Maximum number of turns allowed |
| `autoTurn.followUpDelay` | `number` | 2000 | Delay before follow-up requests (ms) |
| `autoTurn.maxSequentialFollowUps` | `number` | 2 | Max sequential follow-ups before user input |
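To make the timing parameters concrete, here is a hypothetical sketch of how a per-turn delay could be derived from them. This is not the SDK's actual algorithm; the function name, signature, and injected `rng` parameter are assumptions for illustration.

```typescript
// Hypothetical sketch: derive a turn delay from autoTurn timing settings.
// `rng` is injected so the speed variation can be made deterministic in tests.
interface AutoTurnTiming {
  baseTypingSpeed: number; // words per minute
  speedVariation: number;  // ± fraction of the base speed
  minDelay: number;        // ms
  maxDelay: number;        // ms
}

function computeTurnDelay(
  text: string,
  cfg: AutoTurnTiming,
  rng: () => number = Math.random
): number {
  const words = text.trim().split(/\s+/).filter(Boolean).length;
  // Vary the typing speed by up to ±speedVariation.
  const wpm = cfg.baseTypingSpeed * (1 + (rng() * 2 - 1) * cfg.speedVariation);
  const rawMs = (words / wpm) * 60_000;
  // Clamp the result into the configured [minDelay, maxDelay] window.
  return Math.min(cfg.maxDelay, Math.max(cfg.minDelay, rawMs));
}
```

The clamp is what makes `minDelay`/`maxDelay` hard bounds: very short replies still pause briefly, and very long ones never stall the conversation.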

Vision Configuration

| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
| `vision.model` | `string` | - | Yes* | Default model for vision requests |
| `vision.temperature` | `number` | - | No | Default temperature for vision |

*Required if the `vision` object is provided.

Complete Configuration Example

```json
{
  "tokenProviderUrl": "https://your-backend.com/api/get-animus-token",
  "apiBaseUrl": "https://api.animusai.co/v3",
  "tokenStorage": "sessionStorage",
  "chat": {
    "model": "vivian-llama3.1-70b-1.0-fp8",
    "systemMessage": "You are a helpful AI assistant.",
    "temperature": 0.7,
    "top_p": 1.0,
    "n": 1,
    "max_tokens": 500,
    "stop": ["\n"],
    "stream": false,
    "presence_penalty": 0.0,
    "frequency_penalty": 0.0,
    "best_of": 1,
    "top_k": 40,
    "repetition_penalty": 1.0,
    "min_p": 0.0,
    "length_penalty": 1.0,
    "compliance": true,
    "reasoning": false,
    "check_image_generation": false,
    "historySize": 30,
    "autoTurn": {
      "enabled": true,
      "baseTypingSpeed": 50,
      "speedVariation": 0.3,
      "minDelay": 800,
      "maxDelay": 2500,
      "maxTurns": 3,
      "followUpDelay": 2000,
      "maxSequentialFollowUps": 2
    }
  },
  "vision": {
    "model": "animuslabs/Qwen2-VL-NSFW-Vision-1.2",
    "temperature": 0.2
  }
}
```

TypeScript Usage

```typescript
import { AnimusClient } from 'animus-client';

const client = new AnimusClient({
  // Required
  tokenProviderUrl: 'https://your-backend.com/api/get-animus-token',

  // Optional top-level settings
  apiBaseUrl: 'https://api.animusai.co/v3',
  tokenStorage: 'sessionStorage',

  // Chat configuration (model & systemMessage required if provided)
  chat: {
    // Required
    model: 'vivian-llama3.1-70b-1.0-fp8',
    systemMessage: 'You are a helpful AI assistant.',

    // Optional API parameters
    temperature: 0.7,
    max_tokens: 500,
    compliance: true,
    reasoning: false,

    // Optional SDK features
    historySize: 30,
    autoTurn: true // or detailed config object
  },

  // Vision configuration (model required if provided)
  vision: {
    model: 'animuslabs/Qwen2-VL-NSFW-Vision-1.2',
    temperature: 0.2
  }
});
```

Detailed Parameter Descriptions

Authentication & Connection

tokenProviderUrl (required)

The URL of your secure backend endpoint that provides Animus access tokens. This endpoint must:

  1. Authenticate your user (your own logic)
  2. Call the Animus Auth Service with your API key
  3. Return the JWT token to the SDK
```typescript
tokenProviderUrl: 'https://your-backend.com/api/get-animus-token'
```
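The three-step contract above can be sketched as a small handler. Everything here is illustrative: the response field names are assumptions, and the Animus Auth Service call is injected as a function so any backend framework (Express, Fastify, etc.) can wrap this logic.

```typescript
// Hypothetical sketch of a token provider endpoint's core logic.
// The real Animus Auth Service request/response shape may differ.
type TokenFetcher = (apiKey: string) => Promise<string>;

interface TokenResponse {
  status: number;
  body: { token?: string; error?: string };
}

async function handleTokenRequest(
  userIsAuthenticated: boolean,   // step 1: your own auth logic
  apiKey: string,                 // kept server-side, never sent to the browser
  fetchAnimusToken: TokenFetcher  // step 2: call the Animus Auth Service
): Promise<TokenResponse> {
  if (!userIsAuthenticated) {
    return { status: 401, body: { error: 'unauthorized' } };
  }
  const token = await fetchAnimusToken(apiKey);
  return { status: 200, body: { token } }; // step 3: return the JWT to the SDK
}
```

The key design point is that your API key never leaves the server; the browser only ever sees short-lived JWTs.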

apiBaseUrl (optional)

Override the default Animus API endpoint. Useful for testing or custom deployments.

```typescript
apiBaseUrl: 'https://api.animusai.co/v3' // default
```

tokenStorage (optional)

Choose where to store the authentication token:

  • `sessionStorage`: More secure; cleared when the tab closes
  • `localStorage`: Persists across browser sessions
```typescript
tokenStorage: 'sessionStorage' // default
```

Chat Configuration

Required Chat Parameters

chat.model - Default model ID for chat requests

```typescript
model: 'vivian-llama3.1-70b-1.0-fp8'
```

chat.systemMessage - Default system prompt for conversations

```typescript
systemMessage: 'You are a helpful AI assistant.'
```

API Parameters

chat.temperature - Controls response randomness (0.0-2.0)

  • Lower values (0.1-0.3): More focused and deterministic
  • Higher values (0.7-1.0): More creative and varied

chat.top_p - Nucleus sampling threshold (0.0-1.0)
Controls diversity by limiting token selection to the top probability mass.
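As an illustration of the idea (not the server's actual sampling code), the candidate set can be thought of as the smallest prefix of the probability-sorted tokens whose cumulative mass reaches `top_p`:

```typescript
// Illustrative nucleus (top-p) filter: keep the smallest probability-sorted
// prefix whose cumulative probability reaches topP.
function topPFilter(sortedProbs: number[], topP: number): number[] {
  const kept: number[] = [];
  let cumulative = 0;
  for (const p of sortedProbs) {
    kept.push(p);
    cumulative += p;
    if (cumulative >= topP) break;
  }
  return kept;
}
```

With `top_p` near 1 almost every token stays eligible; lower values cut off the long tail of unlikely tokens, trading diversity for focus.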

chat.max_tokens - Maximum tokens in response
No API default; varies by model. Set based on your needs.

chat.compliance - Enable content moderation
When true, responses include compliance violation detection for non-streaming requests.

chat.reasoning - Extract thinking content
When true, extracts `<think>...</think>` blocks into a separate reasoning field.
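The extraction can be pictured with a small sketch; the SDK's actual parsing rules and field names may differ.

```typescript
// Illustrative sketch: split <think>...</think> content out of a response.
function extractReasoning(
  content: string
): { reasoning: string | null; content: string } {
  const match = content.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) return { reasoning: null, content };
  // Remove the think block and return the remainder as the visible reply.
  const stripped = content.replace(match[0], '').trim();
  return { reasoning: match[1].trim(), content: stripped };
}
```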

chat.check_image_generation - Auto-generate images from prompts ⚠️ Alpha Feature
When true, automatically generates images when responses contain image_prompt fields.

Alpha Feature: Image generation is currently in alpha state and not recommended for production use. The API and functionality may change without notice.

SDK Features

chat.historySize - Conversation context management
Number of previous conversation turns to include in requests. When 0, history is disabled.

chat.autoTurn - Conversational turns
Enables natural conversation flow with automatic response splitting and typing delays.

  • Simple: autoTurn: true
  • Advanced: autoTurn: { enabled: true, baseTypingSpeed: 50, ... }

Streaming (stream: true) is not supported when autoTurn is enabled.

Vision Configuration

vision.model (required if vision object provided) - Default model for vision requests

```typescript
model: 'animuslabs/Qwen2-VL-NSFW-Vision-1.2'
```

vision.temperature - Default temperature for vision completion requests

```typescript
temperature: 0.2
```

Parameter Override

Most parameters can be overridden on a per-request basis:

```typescript
// Override in completions()
const response = await client.chat.completions({
  messages: [{ role: 'user', content: 'Hello!' }],
  temperature: 0.9,  // Override default
  max_tokens: 100,   // Override default
  reasoning: true    // Override default
});
```


Common Configuration Patterns

Basic Chat Setup

```typescript
const client = new AnimusClient({
  tokenProviderUrl: 'https://your-backend.com/api/get-animus-token',
  chat: {
    model: 'vivian-llama3.1-70b-1.0-fp8',
    systemMessage: 'You are a helpful assistant.',
    temperature: 0.7,
    historySize: 20
  }
});
```

Conversational Interface

```typescript
const client = new AnimusClient({
  tokenProviderUrl: 'https://your-backend.com/api/get-animus-token',
  chat: {
    model: 'vivian-llama3.1-70b-1.0-fp8',
    systemMessage: 'You are a friendly chatbot.',
    autoTurn: {
      enabled: true,
      baseTypingSpeed: 50
    },
    historySize: 30
  }
});
```

Vision-Enabled Application

```typescript
const client = new AnimusClient({
  tokenProviderUrl: 'https://your-backend.com/api/get-animus-token',
  chat: {
    model: 'vivian-llama3.1-70b-1.0-fp8',
    systemMessage: 'You are an AI that can see and understand images.',
    check_image_generation: true
  },
  vision: {
    model: 'animuslabs/Qwen2-VL-NSFW-Vision-1.2',
    temperature: 0.2
  }
});
```

Next Steps