

Key specifications

Specification        Value
Parameters           70 billion
Base architecture    Llama 3.1
Context window       8,000 tokens
Input format         Text
Output format        Text
Knowledge cutoff     Recent (check our changelog for updates)

Strengths

  • Natural conversation: Vivian excels at maintaining engaging, human-like conversations
  • Relationship building: Designed to build rapport and connections with users
  • General knowledge: Has a broad understanding of various topics
  • Consistent responses: Provides reliable and coherent answers
  • Efficient token usage: Optimized for effective communication without unnecessary verbosity

Use cases

Vivian is ideal for:
  • Customer service chatbots
  • Virtual assistants
  • Educational tools
  • Interactive storytelling
  • Companionship applications
  • Content creation assistance

Example usage

import OpenAI from "openai";

const openai = new OpenAI({
  // Point the OpenAI SDK at the Animus-compatible endpoint.
  baseURL: "https://api.animusai.co/v2",
  apiKey: process.env.ANIMUS_API_KEY,
});

const completion = await openai.chat.completions.create({
  model: "vivian-llama3.1-70b-1.0-fp8",
  messages: [
    { 
      role: "system", 
      content: "You are a helpful assistant named Vivian. You're friendly, empathetic, and knowledgeable." 
    },
    {
      role: "user",
      content: "I'm feeling stressed about my upcoming job interview. Any advice?",
    },
  ],
});

console.log(completion.choices[0].message);

Performance and limitations

While Vivian is a powerful conversational model, it has some limitations to keep in mind:
  • Complex reasoning: For tasks requiring deep reasoning or problem-solving, consider using our Xavier model instead
  • Context window: Limited to 8,000 tokens, which may restrict very long conversations
  • Specialized knowledge: May not have expert-level knowledge in highly technical or specialized domains
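Because the 8,000-token window can overflow in long conversations, one common mitigation is to trim the oldest turns before each request. Below is a minimal sketch: `estimateTokens` and `trimToContextWindow` are hypothetical helpers (not part of any SDK), and the 4-characters-per-token heuristic is a rough assumption; use a real tokenizer for accurate counts.

```javascript
// Rough token estimate: ~4 characters per token for English text.
// This heuristic is an assumption; swap in a real tokenizer for accuracy.
const estimateTokens = (text) => Math.ceil(text.length / 4);

// Keep the system message, then drop the oldest turns until the
// conversation fits the budget (hypothetical helper, not part of the SDK).
function trimToContextWindow(messages, maxTokens = 8000, reserveForReply = 1024) {
  const [system, ...turns] = messages;
  const budget = maxTokens - reserveForReply - estimateTokens(system.content);
  const kept = [];
  let used = 0;
  // Walk from newest to oldest so the most recent turns survive.
  for (let i = turns.length - 1; i >= 0; i--) {
    const cost = estimateTokens(turns[i].content);
    if (used + cost > budget) break;
    kept.unshift(turns[i]);
    used += cost;
  }
  return [system, ...kept];
}
```

The trimmed array can then be passed as the `messages` field of a chat completion request in place of the full history.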

Token usage and optimization

To get the most out of Vivian while optimizing costs, consider these best practices:
  • Keep system messages concise but descriptive
  • Use relevant context but avoid unnecessary information
  • Consider response length limits for your use case
  • Test and optimize prompts to minimize token usage
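The practices above can be combined in a single request. The sketch below is illustrative only: the `max_tokens` cap of 300 is an arbitrary example value, and the shortened system message is one possible phrasing, not a recommended prompt.

```javascript
// A request shaped for cost control (illustrative values, not defaults).
const request = {
  model: "vivian-llama3.1-70b-1.0-fp8",
  messages: [
    // A concise but descriptive system message keeps the fixed prompt cost low.
    { role: "system", content: "You are Vivian, a friendly, empathetic assistant." },
    { role: "user", content: "Summarize the key points of my last message." },
  ],
  // Cap the reply length so responses stay within your token budget.
  max_tokens: 300,
};
```

Pass the object to `openai.chat.completions.create(request)` using a client configured as in the example above.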

For more detailed information about working with Vivian, see our Text Generation guide.