Prerequisites: Secure Token Endpoint

Security Requirement: The SDK requires a secure backend endpoint to provide authentication tokens. Never use your Animus API key directly in the browser.

Authentication Setup Required

Before using the SDK, you must set up a secure token provider endpoint on your backend server. This keeps your API key safe and enables proper user authentication. See the authentication setup guide for complete instructions.
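
As a rough sketch of what such an endpoint can look like, here is a minimal Express route. The exchangeApiKeyForToken helper is hypothetical; replace it with the actual token exchange described in the setup guide, and keep your Animus API key in a server-side environment variable:
import express from 'express';

const app = express();

// Hypothetical helper: exchange your server-side Animus API key for a
// short-lived client token. Replace with the call described in the setup guide.
async function exchangeApiKeyForToken(): Promise<string> {
  throw new Error('Implement the token exchange from the setup guide');
}

app.get('/token', async (_req, res) => {
  try {
    const token = await exchangeApiKeyForToken();
    res.json({ token }); // Response shape may differ; follow the setup guide
  } catch (err) {
    res.status(500).json({ error: 'Token generation failed' });
  }
});

// In production you will also need CORS configuration and per-user authentication
app.listen(3001, () => console.log('Token provider listening on :3001'));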

Install the SDK

Install the SDK in your frontend application:
npm install animus-client

Your First Chat

Create a simple chat application with just a few lines of code:
import { AnimusClient } from 'animus-client';

// Initialize the client with your token provider
const client = new AnimusClient({
  tokenProviderUrl: 'http://localhost:3001/token', // Your auth server
  chat: {
    model: 'vivian-llama3.1-70b-1.0-fp8',
    systemMessage: 'You are a helpful assistant.'
  }
});

// Listen for the response
client.on('messageComplete', (data) => {
  console.log('AI Response:', data.content);
});

// Send a message
client.chat.send('Hello! Tell me a fun fact about AI.');
That’s it! You just had your first AI conversation with event-driven code.

Progressive Examples

Step 1: Basic Chat with Event Handling

import { AnimusClient } from 'animus-client';

const client = new AnimusClient({
  tokenProviderUrl: 'http://localhost:3001/token',
  chat: {
    model: 'vivian-llama3.1-70b-1.0-fp8',
    systemMessage: 'You are a caring and empathetic friend.',
    historySize: 10 // Remember last 10 messages
  }
});

// Listen for message events
client.on('messageComplete', (data) => {
  console.log('Friend:', data.content);
});

client.on('messageError', (data) => {
  console.error('❌ Error:', data.error);
});

// Have a natural conversation
client.chat.send("Hey, I've been feeling a bit overwhelmed lately with work");
// Later in the conversation...
client.chat.send("Thanks for listening. How do you think I should handle this?");

Step 2: Streaming Responses

Get responses as they’re generated for a more interactive experience:
// Enable streaming for real-time responses
const stream = await client.chat.completions({
  messages: [{ role: 'user', content: 'I just got promoted at work! What should I do to celebrate?' }],
  stream: true
});

let fullContent = '';
for await (const chunk of stream) {
  const delta = chunk.choices?.[0]?.delta?.content || '';
  fullContent += delta;
  process.stdout.write(delta); // Print each chunk as it arrives (Node only; see the browser sketch below)
}

console.log('\n✅ Conversation continues...');
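
In the browser there is no process.stdout, so you would typically render the growing response into the page instead. A minimal sketch, assuming a container element with id 'reply' exists in your HTML:
// client is the AnimusClient instance created earlier
const replyEl = document.getElementById('reply') as HTMLElement;

const stream = await client.chat.completions({
  messages: [{ role: 'user', content: 'Tell me something encouraging.' }],
  stream: true
});

// Append each delta to the page as it arrives
for await (const chunk of stream) {
  replyEl.textContent += chunk.choices?.[0]?.delta?.content || '';
}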

Step 3: Conversational Turns

Enable natural conversation flow with automatic response splitting:
const client = new AnimusClient({
  tokenProviderUrl: 'http://localhost:3001/token',
  chat: {
    model: 'vivian-llama3.1-70b-1.0-fp8',
    systemMessage: 'You are a warm and thoughtful companion.',
    autoTurn: true // Enable conversational turns
  }
});

// Listen for turn events
client.on('messageStart', (data) => {
  if (data.messageType === 'auto') {
    console.log(`Turn ${data.turnIndex + 1}/${data.totalTurns} starting...`);
  }
});

client.on('messageComplete', (data) => {
  console.log('Friend:', data.content);
  if (data.totalMessages) {
    console.log(`All ${data.totalMessages} messages completed`);
  }
});

// Send a message that might be split into multiple natural turns
client.chat.send("I've been thinking about making a big life change. Can you help me think through it?");

Step 4: Image Generation

Generate images automatically when the AI suggests them:
const client = new AnimusClient({
  tokenProviderUrl: 'http://localhost:3001/token',
  chat: {
    model: 'vivian-llama3.1-70b-1.0-fp8',
    systemMessage: 'You are a creative and inspiring companion.',
    check_image_generation: true // Enable automatic image generation
  }
});

// Listen for image generation events
client.on('imageGenerationStart', (data) => {
  console.log('🎨 Creating something beautiful:', data.prompt);
});

client.on('imageGenerationComplete', (data) => {
  console.log('✅ Here\'s what I created for you:', data.imageUrl);
});

// Have a creative conversation
client.chat.send("I'm feeling a bit down today. Could you create something that might cheer me up?");
// The SDK automatically detects when the AI wants to create an image and generates it
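
To display the result, you might append the returned image URL to the page when the completion event fires. A small sketch reusing the imageGenerationComplete payload shown above:
// Append the generated image to the page when it is ready
client.on('imageGenerationComplete', (data) => {
  const img = document.createElement('img');
  img.src = data.imageUrl;
  img.alt = 'Generated image';
  document.body.appendChild(img); // Or attach it to your chat log element instead
});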

Step 5: Vision Analysis

Analyze images with the vision capabilities:
const client = new AnimusClient({
  tokenProviderUrl: 'http://localhost:3001/token',
  vision: {
    model: 'animuslabs/Qwen2-VL-NSFW-Vision-1.2'
  }
});

// Share an image and have a conversation about it
const visionResponse = await client.media.completions({
  messages: [{
    role: 'user',
    content: [
      { type: 'text', text: 'I took this photo today and wanted to share it with you. What do you think?' },
      { type: 'image_url', image_url: { url: 'https://example.com/your-photo.jpg' } }
    ]
  }]
});

console.log('Friend:', visionResponse.choices[0].message.content);
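
If the image comes from the user's device (for example an <input type="file"> element) rather than a public URL, one common approach is to convert it to a data URL first. The sketch below assumes the vision endpoint accepts data URLs the same way it accepts regular URLs; confirm this against the vision documentation:
// Read a File (e.g. from <input type="file">) into a base64 data URL
function fileToDataUrl(file: File): Promise<string> {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result as string);
    reader.onerror = () => reject(reader.error);
    reader.readAsDataURL(file);
  });
}

const fileInput = document.querySelector('input[type="file"]') as HTMLInputElement;
const file = fileInput?.files?.[0];
if (file) {
  const dataUrl = await fileToDataUrl(file);
  const response = await client.media.completions({
    messages: [{
      role: 'user',
      content: [
        { type: 'text', text: 'What do you think of this photo?' },
        // Assumption: data URLs are accepted wherever image URLs are
        { type: 'image_url', image_url: { url: dataUrl } }
      ]
    }]
  });
  console.log('Friend:', response.choices[0].message.content);
}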

Key SDK Features

Secure Authentication: Token-based auth keeps your API key safe on the backend.

Event-Driven: Listen for message, image, and error events to build reactive UIs.

Auto-Context: Automatic conversation history management with a configurable size.

Real-time Streaming: Stream responses as they're generated with the AsyncIterable pattern.

Image Generation: Automatic image generation when AI responses include image prompts.

Type Safety: Full TypeScript support with intelligent autocomplete.

Configuration Options

Configure the client with extensive options:
const client = new AnimusClient({
  // Required: Your secure token endpoint
  tokenProviderUrl: 'http://localhost:3001/token',
  
  // Optional: API settings
  apiBaseUrl: 'https://api.animusai.co/v3', // Default
  tokenStorage: 'sessionStorage', // or 'localStorage'
  
  // Chat configuration
  chat: {
    model: 'vivian-llama3.1-70b-1.0-fp8',
    systemMessage: 'You are a helpful assistant.',
    temperature: 0.7,
    max_tokens: 1000,
    historySize: 20, // Remember last 20 messages
    compliance: true, // Enable content moderation
    reasoning: false, // Enable reasoning/thinking content
    check_image_generation: true, // Auto-generate images
    
    // Conversational turns configuration
    autoTurn: {
      enabled: true,
      splitProbability: 0.8, // 80% chance to split responses
      baseTypingSpeed: 50, // 50 WPM typing speed
      speedVariation: 0.3, // ±30% variation
      minDelay: 800, // Min 800ms between turns
      maxDelay: 2500 // Max 2.5s between turns
    }
  },
  
  // Vision configuration
  vision: {
    model: 'animuslabs/Qwen2-VL-NSFW-Vision-1.2',
    temperature: 0.2
  }
});

Error Handling

Handle different scenarios with specific error types:
import { ApiError, AuthenticationError } from 'animus-client';

try {
  // Event-driven approach - no await needed (errors from send() arrive via the messageError event)
  client.chat.send('Hello!');
  
  // Or use completions() for direct API calls
  const response = await client.chat.completions({
    messages: [{ role: 'user', content: 'Hello!' }]
  });
  console.log(response.choices[0].message.content);
} catch (error) {
  if (error instanceof AuthenticationError) {
    console.error('Authentication failed:', error.message);
    // Handle auth errors (redirect to login, etc.)
  } else if (error instanceof ApiError) {
    console.error(`API Error (${error.status}):`, error.message);
    // Handle API errors (show user-friendly message)
  } else {
    console.error('Unexpected error:', error);
  }
}

What’s Next?

Now that you’ve got the basics, explore the more advanced features covered in the rest of the documentation.

Need Help?

Pro Tip: The SDK handles authentication, retries, conversation history, and response parsing automatically. Focus on building your application logic, not managing HTTP requests!