Changes:

  • New support for subscribing to call events via webhook, for both Twilio IVR and voice widget projects
    • Settings > Behavior > Voice > Call events webhook

Changes:

  • Added option to disable streaming text in chat widget
    • Streaming text can now be turned off in the Modality & interface settings
    • When disabled, the full agent response will be displayed at once instead of being streamed out. Useful for situations where streaming longer messages is not desired
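To make the difference concrete, here is a toy sketch of the two display modes (the chunking is simulated; the real widget handles this internally):

```javascript
// Simulated display logic: with streaming on, each chunk is rendered as it
// arrives; with streaming off, there is a single render of the full text.
function render(responseChunks, streaming) {
  if (streaming) return responseChunks;   // one display update per chunk
  return [responseChunks.join("")];       // one display update with everything
}

const chunks = ["Hello", ", ", "world!"];
console.log(render(chunks, true));  // ["Hello", ", ", "world!"]
console.log(render(chunks, false)); // ["Hello, world!"]
```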

Conversation memory is a critical component of the Agent and Prompt steps. A longer memory gives the LLM more context about the conversation so far, helping it make better decisions based on previous dialogue.

The trade-off is that a larger memory adds latency and consumes more input tokens.

Previously, memory was fixed at 10 turns. All new projects now default to 25 turns of memory, and this can be adjusted in the settings, up to 100 turns.

For more information on how memory works, reference: https://docs.voiceflow.com/docs/memory
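Conceptually, a turn-based memory window just keeps the last N exchanges. A toy sketch (the data shape is illustrative, not Voiceflow's internal format):

```javascript
// Illustrative sketch of a turn-based memory window: only the last
// `maxTurns` user/agent exchanges are sent to the model as context.
function memoryWindow(history, maxTurns) {
  // Each element of `history` is one turn: { user, agent }.
  return history.slice(-maxTurns);
}

const history = [
  { user: "Hi", agent: "Hello!" },
  { user: "What are your hours?", agent: "9 to 5, Monday to Friday." },
  { user: "And on weekends?", agent: "We're closed on weekends." },
];

// With a 2-turn window, the oldest exchange is dropped from context.
console.log(memoryWindow(history, 2).length); // 2
```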

We're excited to introduce several major updates that enhance the capabilities of the Agent step and expand our model offerings. These improvements provide more flexibility, control, and opportunities for creating powerful AI agents.

🧠 Agent Step: Your All-in-One Solution

The Agent step has been supercharged to create AI agents that can intelligently respond to user queries, search knowledge bases, follow specific conversation paths, and execute functions, all within a single step. Key features include:

  • Intelligent Prompting: Craft detailed instructions to guide your agent's behavior and responses.
  • Function Integration: Connect your agent with external services to retrieve and update data.
  • Conversation Paths: Define specific flows for your agent to follow based on user intent.
  • Knowledge Base Integration: Enable your agent to automatically search your knowledge base for relevant information.

For a comprehensive guide on using the Agent step, check out our Agent Step Documentation.

🎨 Expanded Support for Structured Output

We've significantly expanded our support for structured output, unlocking more use cases and giving you greater control over your agent's responses:

  • Arrays and Nested Arrays: You can now define arrays and nested arrays in your output structure.
  • Nested Objects: Structured output now supports nested objects, allowing for more complex data structures.

These enhancements enable you to create more sophisticated agents that generate highly structured and detailed responses, reducing the risk of hallucinations and ensuring more accurate outputs.
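For example, a structured output can now describe shapes like the following (the field names are hypothetical, chosen for illustration):

```javascript
// Hypothetical example of the kind of nested structure you can now define
// for structured output: nested objects, arrays, and arrays inside arrays.
const orderSummary = {
  customer: { name: "Ada", tier: "gold" },          // nested object
  items: [                                          // array of nested objects
    { sku: "A-100", quantity: 2, tags: ["sale"] },  // nested array inside
    { sku: "B-200", quantity: 1, tags: [] },
  ],
  total: 54.5,
};

// A structured response like this can be consumed directly, no text parsing.
console.log(orderSummary.items[0].tags[0]); // "sale"
```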

⚡ Gemini 2.0 Flash Support

We've added support for the Gemini 2.0 Flash model, offering you even more options for powering your AI agents. Gemini 2.0 Flash delivers exceptional performance and speed, enabling faster response times and improved user experiences.

To start using Gemini 2.0 Flash, simply select it from the model dropdown when configuring your Agent step.

We can't wait to see what you'll build with these new features and capabilities! As always, we welcome your feedback and suggestions as we continue to improve our platform.

Happy building! 🛠️

The Voiceflow Team

Changes:

  • Updated Voiceflow variable handling for consistency in previously undefined behavior:
    • Variables can be any JavaScript object that is JSON serializable.
    • Any variable set to undefined will be saved as null (this conversion happens at the end of the step, so it does not affect the internal workings of JavaScript steps and functions).
    • Functions can now return null (rather than throwing an error) and can no longer return undefined (which could cause agents to crash).
    • Functions that attempt to return undefined will now return null (to ensure backwards compatibility).
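The normalization rule above can be sketched in one line; the helper name is illustrative and not part of Voiceflow's public API:

```javascript
// Sketch of the normalization described above: a value of `undefined` is
// saved as `null` at the end of the step; everything else is kept as-is.
function normalizeVariable(value) {
  return value === undefined ? null : value;
}

console.log(normalizeVariable(undefined)); // null
console.log(normalizeVariable({ a: 1 }));  // { a: 1 }
console.log(normalizeVariable(0));         // 0 (only `undefined` is converted)
```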

These changes will go into effect on March 31st.

We've revamped our Agent Analytics Dashboard, not only giving it a fresh new look but also introducing a range of powerful visualizations that provide unprecedented visibility into your agent's performance.

🌟 New Visualizations

The updated Analytics Dashboard offers a comprehensive set of visualizations that allow you to track and analyze various aspects of your agent's performance:

  • Tokens Usage: Monitor AI token consumption over time across all models, giving you a clear picture of your agent's token utilization.
  • Total Interactions: Keep track of the total number of interactions (requests) between users and your agent over time, providing insights into engagement levels.
  • Latency Monitoring: Measure the average response time of your agent to ensure optimal performance and identify any potential bottlenecks.
  • Total Call Minutes: Gain visibility into the cumulative duration of voice calls in minutes, helping you understand the volume and significance of voice interactions.
  • Unique Users: Identify the count of distinct users interacting with your agent over time, allowing you to track adoption and growth.
  • KB Documents Usage: Analyze the frequency of knowledge base document access, with the ability to toggle between ascending and descending order to identify the most or least used documents.
  • Intents Usage: Visualize the distribution of triggered intents, with sorting options to analyze intent frequency and identify popular or underutilized intents.
  • Functions Usage: Monitor the frequency of function calls, their success/failure and latency, with sorting capabilities to identify the most or least used functions and optimize your agent's functionality.
  • Prompts Usage: Gain insights into the usage frequency of agent prompts, with the ability to toggle between ascending and descending order to analyze prompt utilization and effectiveness.

📅 Data Availability

Please note that the new Analytics Dashboard service only has data starting from February 9th, 2025. If you require data prior to that date, you can still access it through our Analytics API.

🔧 Upcoming Analytics API Update

We're also working on a new version of the Analytics API that will include the additional data points tracked by the new Analytics Dashboard service. Stay tuned for more information on this exciting update!

We're thrilled to announce several exciting updates that expand your AI agent building capabilities and improve your workflow. Let's dive into what's new!

🧠 New Models: Deepseek R1, Llama 3.1 Instant, and Llama 3.2

We've expanded our model offerings to give you even more options for creating powerful AI agents:

  • Deepseek R1: Harness the potential of Deepseek's R1 model for enhanced natural language understanding and generation.
  • Llama 3.1 Instant: Experience lightning-fast responses with the Llama 3.1 Instant model.
  • Llama 3.2: Leverage the advanced capabilities of Llama 3.2.

These new models are available on all paid plans.


โš™๏ธ Function Editor Enhancements: Modal View and Snippets

We've made some significant improvements to the Function Editor to streamline your development process:

  • Modal View: You can now open the Function Editor as a modal directly from the canvas. This allows you to make quick updates and navigate between your functions and the canvas seamlessly.
  • Snippets: We've introduced a new snippets feature that enables you to insert pre-written code snippets for common concepts in Voiceflow functions.

📞 Call Recording for Twilio Phone Calls

We're excited to introduce call recording functionality for phone calls made through Twilio:

  • Automatic Call Recording: All phone calls between users and your AI agent will now be automatically recorded.
  • Twilio Integration: The call recordings will be accessible directly in your Twilio account for easy review and management.

You can enable this option in the Agent Settings page under Voice.

We're excited to announce a significant upgrade to our intent recognition system, moving from the traditional Natural Language Understanding (NLU) approach to a Retrieval-Augmented Generation (RAG) model using embeddings. This transition brings notable improvements to the speed, accuracy, and overall user experience when interacting with AI agents on our platform.

📅 Phased Rollout

To ensure a smooth adoption, we will be rolling out the RAG-based intent recognition system to all users in phases over the next week. This gradual deployment allows us to monitor performance and gather feedback while providing ample time for you to adjust to the new system.

🆕 Default for New Projects

For all new projects created on our platform, the RAG-based intent recognition will be the default system. This means that new AI agents will automatically benefit from the enhanced speed, accuracy, and natural conversation capabilities offered by RAG.

🌟 Faster Training and Interaction

With the new RAG system, agent training and intent recognition are now substantially faster and more efficient. For example, an agent with 37 intents and 305 utterances now trains about 20 times faster, in just around 1 second. This means quicker agent development and smoother conversations for end-users.

🧠 Automatic Agent Training

Thanks to the advanced training speed enabled by RAG, explicit training is no longer necessary. Simply test your agent, and the training will happen automatically behind the scenes, streamlining your workflow.

🎯 Enhanced Understanding of Complex Queries

RAG leverages embeddings to capture the deeper context and meaning behind words, even when phrased differently. This allows the system to better understand and accurately match complex, detailed questions to the appropriate intents, providing more precise responses to users.
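As a toy illustration of embedding-based matching, each intent can be represented as a vector and the user query matched by cosine similarity. The vectors below are made up for illustration; a real system obtains them from an embedding model.

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Pick the intent whose embedding is closest to the query embedding.
function matchIntent(queryVec, intents) {
  let best = null, bestScore = -Infinity;
  for (const intent of intents) {
    const score = cosine(queryVec, intent.vector);
    if (score > bestScore) { bestScore = score; best = intent.name; }
  }
  return best;
}

const intents = [
  { name: "store_hours", vector: [0.9, 0.1, 0.0] },
  { name: "order_status", vector: [0.1, 0.9, 0.2] },
];
console.log(matchIntent([0.85, 0.15, 0.05], intents)); // "store_hours"
```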

๐Ÿ—ฃ๏ธ More Natural Conversations

With the improved understanding of casual language, slang, and diverse phrasing, the RAG system enables a more natural, conversational experience for users interacting with AI agents on our platform.

🔄 Seamless Transition for Existing Projects

For existing projects, we will keep both the NLU and RAG systems running concurrently for a period of time. This allows you to explore the new system, test it thoroughly, and make any necessary adjustments to your agents. You can easily switch between the NLU and RAG systems in the intent classification settings within the Intents CMS.

We're thrilled to bring you this enhanced experience and look forward to hearing your feedback as you interact with the new RAG-based intent recognition system. Your input is invaluable in helping us continue to innovate and improve our platform to better serve your needs.

In our mission to redefine how users interact with AI agents, we have introduced a new voice modality option to our web widget. This addition is a step towards creating more natural, intuitive, and engaging user experiences. By enabling voice-based conversations, we are empowering businesses to connect with their customers in a way that feels authentic and effortless.

Voice technology has become an increasingly popular and preferred mode of interaction for many users. By integrating voice functionality into our web widget, we are meeting users where they are and providing them with a seamless way to engage with AI agents. This not only enhances the user experience but also opens up new possibilities for businesses to assist, inform, and guide their customers throughout the customer journey.

Natural Voice Interaction

The web widget now supports voice-based communication, allowing users to speak naturally with AI agents. Businesses can integrate this feature to provide their customers with a hands-free, intuitive way to ask questions, receive recommendations, and get assistance while browsing the site.

Customization Options

The voice widget offers customization options to ensure seamless integration with your website's branding:

  • Launcher Style: Select a launcher style that complements your site's design.
  • Color Palette: Choose colors that match your brand guidelines.
  • Font Family: Pick a font that aligns with your website's typography.

These options allow you to maintain a consistent brand experience across all customer touchpoints.

Powered by Advanced Voice Tech

The voice functionality in the widget leverages the best in voice technologies to deliver high-quality conversations:

  • Automated Speech Recognition: Our platform uses advanced ASR technology from Deepgram to accurately transcribe user speech in real-time.
  • Organic Text-to-Speech: We've integrated with leading providers like ElevenLabs and Rime to offer a variety of natural-sounding voices that bring AI agents to life.

These technologies ensure that conversations with AI agents feel authentic, engaging, and representative of your brand's personality.

Start Exploring Voice

We invite all our users to start experimenting with the voice capabilities.

As you explore voice functionality, we value your feedback and ideas. Join our Discord community! Your input plays a crucial role in shaping the future of voice-based interactions in the web widget and helping us refine the user experience.

AI Fallback

by Zoran Slamkov

We're excited to introduce AI Fallback, a powerful new feature in beta that enhances the reliability and continuity of your AI operations. This feature ensures your AI services remain operational even during provider outages or service interruptions.

🔄 Automatic Fallback Switching

AI Fallback automatically switches between models when issues arise. When your primary AI model experiences difficulties, the system seamlessly transitions to your configured backup model, ensuring continuous operation of your AI services.

โš™๏ธ Easy Configuration

Setting up AI Fallback is straightforward:

  1. Access your agent
  2. Navigate to agent settings
  3. Set your preferred fallback model by provider

That's all there is to it! The system handles everything else automatically.

📈 Enhanced Reliability

AI Fallback delivers key benefits:

  • Minimizes service disruptions during model outages
  • Maintains consistent AI performance
  • Reduces operational impact of provider issues
  • Ensures business continuity

🔬 Under the Hood

The system continuously monitors your primary AI model's performance and availability. When issues are detected, it automatically:

  • Identifies the next available model in your sequence
  • Switches ongoing operations to the backup model
  • Returns to the primary model once issues are resolved
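The behaviour described above can be sketched as a simple try-the-next-model loop. `callModel` and the model names are stand-ins for illustration, not real API calls:

```javascript
// Conceptual sketch of fallback switching: try each model in the configured
// sequence and return the first successful result.
async function generateWithFallback(models, callModel, prompt) {
  let lastError;
  for (const model of models) {
    try {
      return await callModel(model, prompt); // first model that succeeds wins
    } catch (err) {
      lastError = err; // record the failure and try the next model
    }
  }
  throw lastError; // every configured model failed
}

// Example: the primary "model-a" fails, so the backup "model-b" answers.
const flakyCall = async (model, prompt) => {
  if (model === "model-a") throw new Error("outage");
  return `${model}: ${prompt}`;
};
generateWithFallback(["model-a", "model-b"], flakyCall, "hello")
  .then(console.log); // "model-b: hello"
```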

🚀 Getting Started

AI Model Fallback is available exclusively for Teams and Enterprise customers. We're excited to hear your feedback during the beta phase! 🎯