Important: AutoTurn automatically disables streaming mode. When AutoTurn is enabled, the SDK uses non-streaming responses to properly handle response splitting and turn management. Choose either streaming OR conversational turns based on your use case.
Key Benefits
- Natural Message Flow: AI can send multiple messages in a row, just like humans do
- Intelligent Splitting: Automatically breaks long responses into digestible conversation turns
- Realistic Timing: Simulates human typing speeds with natural variation
- Follow-up Detection: Identifies when responses might prompt user engagement
- Event-Driven: Comprehensive events for real-time UI updates
Basic Usage
Simple Enable
Enable AutoTurn with default settings:
import { AnimusClient } from 'animus-client';
const client = new AnimusClient({
tokenProviderUrl: 'https://your-backend.com/api/get-animus-token',
chat: {
model: 'vivian-llama3.1-70b-1.0-fp8',
systemMessage: 'You are a helpful assistant.',
autoTurn: true // Enable with default settings
}
});
// Listen for message events
client.on('messageStart', (data) => {
console.log(`Starting ${data.messageType} message: ${data.content}`);
if (data.messageType === 'auto' && data.turnIndex !== undefined) {
console.log(`Turn ${data.turnIndex + 1}/${data.totalTurns}`);
}
});
client.on('messageComplete', (data) => {
console.log(`Completed ${data.messageType} message: ${data.content}`);
// Update your UI here
displayMessage(data.content);
});
// Send message - may be split into multiple turns with natural delays
client.chat.send("I can't decide whether to dine-in or do takeout tonight. What do you think?");
Advanced Configuration
Fine-tune AutoTurn behavior with custom settings:
const client = new AnimusClient({
tokenProviderUrl: 'https://your-backend.com/api/get-animus-token',
chat: {
model: 'vivian-llama3.1-70b-1.0-fp8',
systemMessage: 'You are a helpful assistant.',
autoTurn: {
enabled: true,
splitProbability: 0.8, // 80% chance to split multi-sentence responses
baseTypingSpeed: 50, // 50 WPM base typing speed
speedVariation: 0.3, // ±30% speed variation
minDelay: 800, // Minimum 800ms delay between turns
maxDelay: 2500, // Maximum 2.5s delay between turns
maxTurns: 3, // Maximum number of turns allowed
followUpDelay: 2000, // Delay before sending follow-up requests (ms)
maxSequentialFollowUps: 2 // Maximum sequential follow-ups before requiring user input
}
}
});
Configuration Options
| Option | Type | Default | Description |
|---|---|---|---|
| enabled | boolean | true | Enable/disable conversational turns |
| splitProbability | number | 0.6 | Probability (0-1) of splitting responses |
| baseTypingSpeed | number | 45 | Base typing speed in words per minute |
| speedVariation | number | 0.2 | Speed variation factor (±percentage) |
| minDelay | number | 500 | Minimum delay between turns in milliseconds |
| maxDelay | number | 3000 | Maximum delay between turns in milliseconds |
| maxTurns | number | 3 | Maximum number of turns allowed |
| followUpDelay | number | 2000 | Delay before sending follow-up requests in milliseconds |
| maxSequentialFollowUps | number | 2 | Maximum sequential follow-ups allowed before requiring user input |
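For reference, these options map onto a configuration shape like the one below. This interface is only illustrative (the SDK may export its own type); the field names and defaults are taken from the table above.
// Illustrative shape of the autoTurn options object (not necessarily the SDK's exported type)
interface AutoTurnConfig {
  enabled?: boolean;
  splitProbability?: number;      // 0-1
  baseTypingSpeed?: number;       // words per minute
  speedVariation?: number;        // ± fraction, e.g. 0.2 = ±20%
  minDelay?: number;              // milliseconds
  maxDelay?: number;              // milliseconds
  maxTurns?: number;
  followUpDelay?: number;         // milliseconds
  maxSequentialFollowUps?: number;
}
// Defaults as listed in the table
const autoTurnDefaults: AutoTurnConfig = {
  enabled: true,
  splitProbability: 0.6,
  baseTypingSpeed: 45,
  speedVariation: 0.2,
  minDelay: 500,
  maxDelay: 3000,
  maxTurns: 3,
  followUpDelay: 2000,
  maxSequentialFollowUps: 2
};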
How AutoTurn Works
Priority-Based Processing
The system follows a clear priority order for response splitting:
- Priority 1: If autoTurn is enabled AND content has newlines → ALWAYS split on newlines
- Priority 2: If no newlines but pre-split turns are available → Apply splitProbability to decide
- Priority 3: Otherwise → Process normally without splitting (see the decision sketch below)
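In code, that priority order might look roughly like the following. This is a sketch of the behavior described above, not the SDK's implementation; planTurns and preSplitTurns are hypothetical names.
// Sketch of the splitting priority order (illustrative only)
function planTurns(
  content: string,
  preSplitTurns: string[] | undefined, // turns the response may already be divided into
  autoTurnEnabled: boolean,
  splitProbability: number
): string[] {
  // Priority 1: newlines always split when autoTurn is enabled
  if (autoTurnEnabled && content.includes('\n')) {
    return content.split('\n').map(line => line.trim()).filter(Boolean);
  }
  // Priority 2: no newlines, but pre-split turns exist -> splitProbability decides
  if (autoTurnEnabled && preSplitTurns && preSplitTurns.length > 1) {
    return Math.random() < splitProbability ? preSplitTurns : [content];
  }
  // Priority 3: process normally without splitting
  return [content];
}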
Intelligent Turn Management
// Example of how the SDK processes a long response
const longResponse = `Renewable energy is crucial for our future.
Solar power harnesses sunlight through photovoltaic cells, converting it directly into electricity.
Wind energy captures kinetic energy from moving air using turbines, making it one of the fastest-growing energy sources globally.`;
// With autoTurn enabled, this becomes:
// Turn 1: "Renewable energy is crucial for our future."
// Turn 2: "Solar power harnesses sunlight through photovoltaic cells, converting it directly into electricity."
// Turn 3: "Wind energy captures kinetic energy from moving air using turbines, making it one of the fastest-growing energy sources globally."
Realistic Timing Calculation
Realistic delays are calculated based on content length and typing speed:
// Delay calculation example
const calculateDelay = (content: string, baseSpeed: number, variation: number) => {
const wordCount = content.split(' ').length;
const baseTime = (wordCount / baseSpeed) * 60 * 1000; // Convert to milliseconds
const variationFactor = 1 + (Math.random() - 0.5) * 2 * variation;
return Math.max(500, baseTime * variationFactor); // Minimum 500ms
};
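The example above hard-codes a 500ms floor; in practice the result is also bounded by the configured minDelay and maxDelay. A sketch of that clamping (the SDK's exact logic may differ):
// Bound a computed delay to the configured window (illustrative)
const clampDelay = (delayMs: number, minDelay: number, maxDelay: number) =>
  Math.min(maxDelay, Math.max(minDelay, delayMs));
// A 4-word turn at 45 WPM estimates roughly 5.3 seconds of "typing",
// which gets clamped down to the 2500ms maximum here.
const delay = clampDelay(calculateDelay('Solar power harnesses sunlight.', 45, 0.2), 800, 2500);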
Event System
Unified Message Events
All message types (regular, auto-turn, follow-up) use the same event system:
// Message lifecycle events
client.on('messageStart', (data) => {
console.log(`Starting ${data.messageType} message`);
// Show typing indicator
if (data.messageType === 'auto') {
showTypingIndicator(`Turn ${data.turnIndex + 1}/${data.totalTurns}`);
} else {
showTypingIndicator();
}
});
client.on('messageComplete', (data) => {
console.log(`Completed ${data.messageType} message: ${data.content}`);
// Hide typing indicator and display message
hideTypingIndicator();
displayMessage(data.content);
// Check if all messages in sequence are complete
if (data.totalMessages) {
console.log(`All ${data.totalMessages} messages completed`);
// This is when image generation and follow-up requests are triggered
}
});
client.on('messageError', (data) => {
console.error(`${data.messageType} message error: ${data.error}`);
hideTypingIndicator();
// Handle cancellations gracefully
if (data.messageType === 'auto' && data.error.includes('Canceled')) {
console.log('Auto-turn messages were canceled due to new user input');
}
});
Message Types
- regular: Standard user-initiated messages and responses
- auto: Messages from conversational turns (split responses)
- followup: Automatic follow-up requests when the AI indicates more content (see the handler sketch below)
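One way to branch on these types in your own handlers (displayMessage and markFollowUp are placeholder UI helpers):
client.on('messageComplete', (data) => {
  switch (data.messageType) {
    case 'regular':   // standard reply to a user message
    case 'auto':      // one turn of a split response
      displayMessage(data.content);
      break;
    case 'followup':  // unprompted continuation from the AI
      displayMessage(data.content);
      markFollowUp(); // placeholder UI helper
      break;
  }
});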
Follow-Up Request Management
AutoTurn includes intelligent follow-up handling to prevent infinite loops:
const client = new AnimusClient({
tokenProviderUrl: 'your-token-url',
chat: {
model: 'vivian-llama3.1-70b-1.0-fp8',
systemMessage: 'You are a helpful assistant.',
autoTurn: {
enabled: true,
followUpDelay: 1500, // 1.5 second delay before follow-ups
maxSequentialFollowUps: 1 // Allow only 1 follow-up before requiring user input
}
}
});
// Follow-up events
client.on('messageStart', (data) => {
if (data.messageType === 'followup') {
console.log('Processing follow-up request');
showFollowUpIndicator();
}
});
client.on('messageComplete', (data) => {
if (data.messageType === 'followup') {
console.log('Follow-up completed:', data.content);
hideFollowUpIndicator();
}
});
Follow-Up Protection
- Sequential Limiting: Prevents infinite follow-up loops
- Image Generation Protection: Blocks follow-ups immediately after image generation
- User Reset: All counters reset when user sends new message
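The SDK enforces these limits itself, but if your UI should reflect them (for example, to hint that the AI is now waiting for the user), you can mirror the counter on the client side. showWaitingForUserHint and hideWaitingForUserHint are placeholder helpers:
let sequentialFollowUps = 0;
const followUpLimit = 1; // keep in sync with maxSequentialFollowUps in your config
client.on('messageComplete', (data) => {
  if (data.messageType === 'followup') {
    sequentialFollowUps++;
    if (sequentialFollowUps >= followUpLimit) {
      showWaitingForUserHint(); // placeholder UI helper
    }
  }
});
// "User Reset": clear the counter whenever the user sends a new message
function sendUserMessage(text: string) {
  sequentialFollowUps = 0;
  hideWaitingForUserHint();    // placeholder UI helper
  client.chat.send(text);
}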
Message Cancellation
Message cancellation is automatically handled when users send new messages:
// Scenario: User interrupts ongoing conversation turns
client.chat.send("I'm thinking about getting a pet"); // Starts auto-turn sequence
// While turns are being processed, user sends another message
client.chat.send("Actually, tell me about solar panels instead");
// Result:
// - Any unprocessed turns from first message are canceled
// - Already-processed turns remain in history for context
// - Second message is processed normally
// - No out-of-order responses occur
Cancellation Events
client.on('messageError', (data) => {
if (data.messageType === 'auto' && data.error.includes('Canceled')) {
console.log('Auto-turn messages were canceled due to new user input');
// Update UI to reflect cancellation
clearPendingMessages();
}
if (data.messageType === 'followup' && data.error.includes('Canceled')) {
console.log('Follow-up request was canceled due to new user input');
}
});
Integration with Other Features
Image Generation
AutoTurn coordinates with automatic image generation:
const client = new AnimusClient({
tokenProviderUrl: 'your-token-url',
chat: {
model: 'vivian-llama3.1-70b-1.0-fp8',
systemMessage: 'You are a helpful assistant.',
autoTurn: true,
check_image_generation: true // Enable automatic image generation
}
});
// Image generation events work seamlessly with auto-turn
client.on('imageGenerationStart', (data) => {
console.log(`Starting image generation: ${data.prompt}`);
showImageGenerationIndicator();
});
client.on('imageGenerationComplete', (data) => {
console.log(`Image generated: ${data.imageUrl}`);
displayGeneratedImage(data.imageUrl);
hideImageGenerationIndicator();
});
// Send message that might trigger both auto-turn and image generation
client.chat.send("You have any more hiking pics to share from last week?");
Chat History
AutoTurn messages are automatically added to chat history with proper metadata:
// Get chat history including auto-turn messages
const history = client.chat.getChatHistory();
// Auto-turn messages include group metadata
history.forEach(message => {
if (message.groupId) {
console.log(`Message ${message.messageIndex + 1}/${message.totalInGroup} in group ${message.groupId}`);
}
});
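If you want split responses to render as a single visual group, the groupId metadata makes that straightforward. A sketch (the exact history entry shape may differ):
// Bucket history entries by groupId (ungrouped messages get their own bucket)
const fullHistory = client.chat.getChatHistory();
const groups = new Map<string, any[]>();
fullHistory.forEach((message: any, index: number) => {
  const key = message.groupId ?? `single-${index}`;
  const bucket = groups.get(key) ?? [];
  bucket.push(message);
  groups.set(key, bucket);
});
groups.forEach((messages, key) => {
  console.log(`${key}: ${messages.length} message(s)`);
});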
Best Practices
Configuration Guidelines
// For natural conversation feel
const naturalConfig = {
splitProbability: 0.7, // High chance of splitting
baseTypingSpeed: 45, // Average human typing speed
speedVariation: 0.3, // Natural variation
minDelay: 800, // Comfortable minimum
maxDelay: 2500 // Not too long
};
// For faster, more efficient responses
const efficientConfig = {
splitProbability: 0.4, // Lower split chance
baseTypingSpeed: 60, // Faster typing
speedVariation: 0.1, // Less variation
minDelay: 500, // Shorter delays
maxDelay: 1500
};
// For dramatic, storytelling applications
const dramaticConfig = {
splitProbability: 0.9, // Almost always split
baseTypingSpeed: 35, // Slower, more deliberate
speedVariation: 0.4, // More variation
minDelay: 1000, // Longer pauses
maxDelay: 4000
};
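Any of these presets can then be spread into the autoTurn option. For example (enabled is added explicitly because the presets above omit it):
import { AnimusClient } from 'animus-client';
const client = new AnimusClient({
  tokenProviderUrl: 'https://your-backend.com/api/get-animus-token',
  chat: {
    model: 'vivian-llama3.1-70b-1.0-fp8',
    systemMessage: 'You are a helpful assistant.',
    autoTurn: { enabled: true, ...naturalConfig } // or efficientConfig / dramaticConfig
  }
});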
UI Implementation
// Example UI integration
class ConversationUI {
  private typingIndicator: HTMLElement;
  private messagesContainer: HTMLElement;

  constructor() {
    // Assumes elements with these ids exist in your page
    this.typingIndicator = document.getElementById('typing-indicator')!;
    this.messagesContainer = document.getElementById('messages')!;
    this.setupEventListeners();
  }

  setupEventListeners() {
    // `client` is the AnimusClient instance created earlier
    client.on('messageStart', (data) => {
      this.showTypingIndicator(data);
    });

    client.on('messageComplete', (data) => {
      this.hideTypingIndicator();
      this.displayMessage(data.content, data.messageType);
    });

    client.on('messageError', (data) => {
      this.hideTypingIndicator();
      if (!data.error.includes('Canceled')) {
        this.showError(data.error);
      }
    });
  }

  showTypingIndicator(data: any) {
    let text = 'AI is typing...';
    if (data.messageType === 'auto') {
      text = `AI is typing... (${data.turnIndex + 1}/${data.totalTurns})`;
    } else if (data.messageType === 'followup') {
      text = 'AI is thinking of a follow-up...';
    }
    this.typingIndicator.textContent = text;
    this.typingIndicator.style.display = 'block';
  }

  hideTypingIndicator() {
    this.typingIndicator.style.display = 'none';
  }

  displayMessage(content: string, type?: string) {
    const messageElement = document.createElement('div');
    messageElement.className = `message ai-message ${type || 'regular'}`;
    messageElement.textContent = content;
    this.messagesContainer.appendChild(messageElement);
    this.scrollToBottom();
  }

  scrollToBottom() {
    this.messagesContainer.scrollTop = this.messagesContainer.scrollHeight;
  }

  showError(message: string) {
    console.error(message); // replace with your own error UI
  }
}
Troubleshooting
Common Issues
AutoTurn Not Working
// Check that autoTurn is properly enabled
const client = new AnimusClient({
chat: {
autoTurn: true, // Make sure this is set
// ... other config
}
});
Messages Not Splitting
// Increase split probability or check content format
autoTurn: {
enabled: true,
splitProbability: 0.8, // Higher chance of splitting
}
Delays Too Long/Short
// Adjust timing parameters
autoTurn: {
enabled: true,
baseTypingSpeed: 60, // Faster typing
minDelay: 300, // Shorter minimum
maxDelay: 1500 // Shorter maximum
}
Next Steps