Xavier
Xavier (xavier-r1) is our specialized 70B parameter reasoning model built on top of Llama 3.1. It excels at emotional reasoning and complex problem-solving, making it ideal for applications requiring nuanced understanding and thoughtful analysis. This model is currently under limited release.
Key specifications
| Specification | Value |
| --- | --- |
| Parameters | 70 billion |
| Base architecture | Llama 3.1 |
| Context window | 128,000 tokens |
| Input format | Text |
| Output format | Text with reasoning tokens |
| Release status | Limited release |
Distinctive features
Xavier has several unique capabilities that set it apart:
- Thinking tokens: Outputs explicit reasoning tokens that show the model’s step-by-step thinking process (see the parsing sketch after this list)
- Emotional reasoning: Particularly strong at understanding and reasoning about emotional contexts
- Multi-step planning: Excels at breaking down complex problems into manageable steps
- Deliberate processing: Takes more time to generate responses but produces more thoughtful results
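Because Xavier emits explicit reasoning tokens alongside its final answer, applications typically separate the two before displaying anything to end users. This page does not specify the delimiter format, so the sketch below assumes a hypothetical `<think>...</think>` wrapper purely for illustration; adjust the pattern to match the actual output format once you have access.

```python
import re


def split_reasoning(text: str) -> tuple[str, str]:
    """Separate an assumed <think>...</think> reasoning block from the final answer."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()  # no reasoning block found; treat everything as the answer
    reasoning = match.group(1).strip()
    answer = (text[: match.start()] + text[match.end():]).strip()
    return reasoning, answer


# Illustrative output only; the real delimiter format may differ.
reasoning, answer = split_reasoning(
    "<think>The user needs reassurance first, then a concrete plan.</think>"
    "Start by acknowledging how the meeting felt, then suggest a short one-on-one."
)
print(reasoning)
print(answer)
```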
Use cases
Xavier is particularly well-suited for:
- Mental health support applications
- Complex decision support systems
- Educational tools requiring step-by-step explanations
- Research assistance requiring nuanced reasoning
- Relationship coaching and conflict resolution
- Content that requires emotional intelligence and sensitivity
Example usage
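The exact client interface is not shown on this page, so the example below is a minimal sketch that assumes an OpenAI-compatible chat completions endpoint with a hypothetical base URL and the model id `xavier-r1`. Confirm the endpoint, authentication, and response shape with the Animus team before relying on it.

```python
import os

import requests

# Hypothetical endpoint; confirm the real base URL with the Animus team.
API_URL = "https://api.animusai.co/v1/chat/completions"
API_KEY = os.environ["ANIMUS_API_KEY"]

payload = {
    "model": "xavier-r1",
    "messages": [
        {"role": "system", "content": "Reason step by step before answering."},
        {
            "role": "user",
            "content": (
                "My coworker snapped at me in a meeting and now avoids me. "
                "How should I approach the conversation?"
            ),
        },
    ],
    "max_tokens": 1024,  # leave headroom for reasoning tokens
    "temperature": 0.7,
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=120,  # Xavier's deliberate processing can take longer than standard models
)
response.raise_for_status()

# Assumes an OpenAI-style response body; adjust if the actual schema differs.
print(response.json()["choices"][0]["message"]["content"])
```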
Performance considerations
Xavier’s unique thinking capabilities come with some important performance considerations:
- Increased token usage: The explicit reasoning process uses more tokens than standard responses
- Longer generation time: Takes more time to generate responses due to the additional reasoning steps (see the streaming sketch after this list for one way to offset perceived latency)
- Higher resource utilization: May require more computational resources for optimal performance
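One common way to offset the longer generation time is to stream tokens as they arrive rather than waiting for the complete response. The sketch below reuses the same hypothetical OpenAI-compatible endpoint as the example above and assumes server-sent events are enabled via a `stream` flag; treat both as assumptions to verify.

```python
import json
import os

import requests

API_URL = "https://api.animusai.co/v1/chat/completions"  # hypothetical endpoint
API_KEY = os.environ["ANIMUS_API_KEY"]

payload = {
    "model": "xavier-r1",
    "messages": [
        {"role": "user", "content": "Outline a plan to mediate a conflict between two teammates."}
    ],
    "stream": True,  # assumed flag; emits tokens incrementally instead of one final payload
    "max_tokens": 1024,
}

with requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    stream=True,
    timeout=300,
) as response:
    response.raise_for_status()
    # Assumes OpenAI-style SSE chunks: lines prefixed with "data: " ending in "[DONE]".
    for line in response.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue
        chunk = line[len(b"data: "):]
        if chunk == b"[DONE]":
            break
        delta = json.loads(chunk)["choices"][0]["delta"].get("content", "")
        print(delta, end="", flush=True)
```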
Access and availability
Xavier is currently under limited release. To request access to Xavier, please reach out directly to the Animus team at support@animusai.co with details about your use case.
Best practices
To get the most out of Xavier:
- Be specific: Provide clear instructions about the reasoning you want to see (see the example prompt after this list)
- Allow sufficient time: Expect and plan for slightly longer response times
- Consider context limits: Be mindful of the 128,000 token context window, since reasoning tokens count toward your usage
- Optimize token usage: For simpler tasks, consider using Vivian instead to save on token costs
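As an illustration of the first point, a prompt that names the reasoning structure you want tends to produce more usable output than a bare question. The messages below are hypothetical content written for this example, not an officially recommended prompt.

```python
# Illustrative prompt showing specific reasoning instructions (hypothetical content).
messages = [
    {
        "role": "system",
        "content": (
            "You are a conflict-resolution coach. Before answering, reason through: "
            "1) each person's likely emotional state, 2) the underlying need, and "
            "3) two possible approaches with trade-offs. Then give one recommendation."
        ),
    },
    {
        "role": "user",
        "content": "My business partner and I disagree about hiring a third co-founder.",
    },
]
```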
For more information about leveraging Xavier’s reasoning capabilities, see our Text Generation guide.