Show source URLs in responses
When enabled, the agent includes the source URL (if it is a public URL) alongside its response so users can verify the answer or learn more. Recommended for help centers and documentation. This feature is available for both the knowledge base and web search system tools. You can set the maximum number of sources to show per agent message (default: 1).

Async functions and API tools
You can now run Function and API tool steps asynchronously. Async execution allows the conversation to continue immediately without waiting for the tool to complete. No outputs or variables from the step will be returned or updated. This is ideal for non-blocking tasks such as logging, analytics, telemetry, or background reporting that don’t affect the conversation. Note: this setting applies to the reference of the Function or API tool — either where the tool is attached to an agent or where it’s used as a step on the canvas. It is not part of the underlying API or function definition, which allows the same tool to be reused with different async behaviour throughout your project.
Tool messages
Tool messages let you define static messages that are surfaced to the user as a tool progresses through its lifecycle:
- Start — Message delivered when the tool is initiated
- Complete — Message delivered when the tool finishes successfully
- Failed — Message delivered if the tool encounters an error
- Delayed — Message delivered if the tool takes longer than a specified duration (default: 3000ms, configurable)
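As a rough mental model (field names below are hypothetical, not the actual Voiceflow configuration schema), the four lifecycle messages can be thought of as a lookup keyed by event, with the delayed message gated by a time threshold:

```javascript
// Hypothetical sketch of a tool's lifecycle messages as a plain object.
// Field names are illustrative, not the actual Voiceflow configuration schema.
const toolMessages = {
  start: "Looking that up for you...",
  complete: "Done! Here is what I found.",
  failed: "Sorry, something went wrong while fetching that.",
  delayed: { afterMs: 3000, message: "Still working on it..." }, // default threshold: 3000ms
};

// Picks the message for a lifecycle event; a tool still "running" past the
// threshold maps to the delayed message.
function messageFor(event, elapsedMs = 0) {
  if (event === "running" && elapsedMs >= toolMessages.delayed.afterMs) {
    return toolMessages.delayed.message;
  }
  const msg = toolMessages[event];
  return typeof msg === "string" ? msg : null;
}
```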

Voice mode in web widget
Your web widget now supports hands-free, real-time voice conversations. Enable it from the Widget tab for existing projects — it’s on by default for new ones. Users can talk naturally, see transcripts stream in instantly, and get a frictionless voice-first experience. It also doubles as the perfect in-browser way to test your phone conversations — no dialing in, just open the widget and run the full voice flow instantly.


Native web search tool
We’ve shipped a native Web Search tool so your agents can look up real-time information on the web mid-conversation — no custom integrations required.
- Toggle on the web search tool in any agent to answer questions that need live data (news, prices, schedules, etc.).
- Configure search prompts and guardrails so the agent only pulls what you want it to.
- Results are summarized and grounded back into the conversation for more accurate, up-to-date answers.

Telnyx telephony integration
You can now connect your Telnyx account to import and manage phone numbers directly in Voiceflow, enabling Telnyx as your telephony provider for both inbound and outbound calls.
Native support for keypad input (DTMF)
Added native support for DTMF keypad input in phone conversations. Users can now enter digits via their phone keypad, sending a DTMF trace to the runtime. Configure the timeout and delimiters (#, *) to control when input is processed. See documentation here.
- Keypad input is off by default and can be turned on from Settings/Behaviour/Voice.
- When on in project settings, keypad input can be turned off at the step level via the “Listen for other triggers” toggle.
- View full documentation here
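To build intuition for the delimiter semantics (this is an illustration only — the names are hypothetical and the real processing happens inside the Voiceflow runtime), digit collection can be sketched as accumulating keypresses until a delimiter or a digit cap:

```javascript
// Illustrative model of DTMF collection: digits accumulate until a delimiter
// key or a digit cap is reached. Hypothetical helper, not a Voiceflow API.
function collectDigits(presses, { delimiters = ["#", "*"], maxDigits = 32 } = {}) {
  const digits = [];
  for (const key of presses) {
    if (delimiters.includes(key)) break; // delimiter submits input immediately
    digits.push(key);
    if (digits.length >= maxDigits) break; // cap reached, submit
  }
  return digits.join("");
}
```

In the real feature, the configurable timeout plays the same role as the delimiter: whichever fires first ends the input.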
Knowledge base metadata
Add metadata to your Knowledge Base sources to deliver more relevant, localized, and precise answers, helping customers find what they need faster and improving overall resolution speed.
- Add metadata on knowledge import
- Apply metadata dynamically or statically at runtime


Built-in time variables
We’ve added a set of built-in time variables that make it easier to access and use time within your agents — no external API calls or workarounds required. Perfect for agents that depend on current or relative time inputs. The project timezone can be set in project/behaviour settings:
Deepgram Flux ASR model
We’ve added Deepgram Flux, their newest ASR model and the first conversational speech recognition model built specifically for voice agents. Unlike traditional STT that just transcribes words, Flux understands conversational flow and automatically handles turn-taking. Flux tackles the most critical challenges for voice agents today: knowing when to listen, when to think, and when to speak. The model features first-of-its-kind model-integrated end-of-turn detection, configurable turn-taking dynamics, and ultra-low latency optimized for voice agent pipelines, all with Nova-3-level accuracy. Flux is perfect for turn-based voice agents, customer service bots, phone assistants, and real-time conversation tools. Key benefits:
- Smart turn detection — Knows when speakers finish talking
- Ultra-low latency — ~260ms end-of-turn detection
- Early LLM responses — EagerEndOfTurn events for faster replies
- Turn-based transcripts — Clean conversation structure
- Natural interruptions — Built-in barge-in handling
- Nova-3 accuracy — Best-in-class transcription quality

Sync audio and text output
Converts text to speech in real time and keeps the spoken audio perfectly aligned with the displayed text. This ensures call transcripts are an accurate, word-for-word representation of what was actually said.
Transcript inactivity timeout
This setting lets you define how long a conversation can stay inactive before the transcript automatically ends. This is different from the session timeout — the session stays open, but the transcript closes after the set inactivity period, enabling more accurate reporting and evaluations. Important: ending the transcript does not end the user’s ability to re-engage. If the user responds again, a new transcript will begin within the same session.
Priority processing for OpenAI models
We’ve added a new Priority Processing setting for supported OpenAI models. When enabled, your requests are given higher processing priority for faster response times and reduced latency. Note: this consumes more credits.
MCP tools
Supercharge your agents by connecting directly to MCP servers.
- 🔌 Connect to MCP servers in just a few clicks
- 📥 Add MCP server tools to your agents
- 🔄 Sync MCP servers to stay up-to-date

Call forwarding tool in agents
You can now enable your agents to forward calls to a different number, SIP address, or extension.
- 📞 Seamlessly transfer callers to the right person or agent
- 🔀 Supports phone numbers, SIP addresses, and extensions
- 🛠️ Configure forwarding directly in your agent’s tools

Control reasoning effort for supported GPT models
We’ve added a reasoning effort slider for all supported GPT models (GPT-5, GPT-5 mini, GPT-5 nano, o3, and o4-mini).
Shareable links now match your AI agent
Shareable links have been upgraded to better reflect the agent you’re building. Each link now points to a hosted version of your AI agent that mirrors your selected environment (dev, staging, production) and interface, so what you share is exactly what others will experience. Password protection is also available for secure access.
- 🔗 Shareable links now mirror your actual AI agent
- 🛠️ Environment-specific links (dev, staging, production)
- 🎨 Customize the look and feel via the Interfaces tab
- 🔒 Optional password protection for secure sharing

Staging environment added
We’ve introduced a new staging environment to help you manage deployments more effectively. You can now publish between development, staging, and production to test changes before going live.
- New staging environment for pre-production testing
- Publish across dev → staging → production
- More control and confidence in deployment flows
- Override secrets per environment for greater flexibility

Duplicating projects now clones knowledge base
You can now duplicate projects along with their entire knowledge base. When cloning a project, all connected documents and data sources are copied as well — so your new project starts with the same knowledge setup as the original. This enhancement only applies to project duplication; knowledge bases are not yet cloned when using project import.
Control saving of empty transcripts
You can now choose whether to save transcripts where the bot spoke but the user never replied. Use this toggle to keep your transcript logs cleaner and focused on real interactions. By default, all new projects will save all conversations to transcripts.
Save input variables in tool calls
Prior to this release, you could only capture the output of a tool call (e.g., the response from an API). Now you can also persist the inputs (the parameters sent to the tool) as Voiceflow variables. This means both sides of the transaction — request and response — can be tracked, reused, or referenced later in the conversation.
Double-click to open agent step
You can now double-click an agent step to jump straight into its editor — saving yourself an extra click.
Tool step
You can now run tools outside of the agent step using the new Tool step. This lets you trigger any tool in your agent — like sending an email or making an API call — anywhere in your workflows. 🛠️ You’ll find the Tool step in the ‘Dev’ section of the step menu for now. Tools can also be used as actions:

New analytics API
A few months ago, we released a new analytics view — giving you deeper insights into agent performance, tool usage, credit consumption, and more. Today, we’re releasing an updated Analytics API to match. This new version gives you programmatic access to the same powerful data, so you can:
- Track agent performance over time
- Monitor tool and credit usage
- Build custom dashboards and reports
Use the new API to integrate analytics directly into your workflows and get the insights you need — where you need them.
Custom query control & chunk limit for knowledge base tool
You now have more control over how your agents retrieve knowledge. Customize the query your agent uses to search the knowledge base, and fine-tune the chunk size limit to better match your content. This gives you more precision, better answers, and smarter agents.
Better transcripts. Custom evaluations. Better AI agents.
Your AI agents just got a massive upgrade.
🔥 What’s new
- Transcripts, reimagined – Replay calls, debug step-by-step, filter with precision, and visualize user actions like button clicks — all in a faster, cleaner UI.
- Evaluations, your way – Define what “good” looks like with customizable evaluation templates, multiple scoring types (rating, binary, text), auto-run support, and performance tracking over time.
- Call recordings – Replay conversations to hear how your agent performs in the real world
- Robust debug logs – Trace agent decisions step-by-step
- Granular filtering – Slice data by time, user ID, evaluation result, and more
- Button click visualization – See exactly where users clicked in the conversation
- Cleaner UI – Faster load times, more usable data
- ⭐ Rating evals (e.g. 1–5)
- ✅ Binary evals (Pass/Fail)
- 📝 Text evals (open-ended notes)
- Batch or auto-run – Evaluate hundreds of transcripts in a few clicks, or automatically as they come in
- Analytics & logs – See detailed results per message or overall trends over time
- We’ve released a brand new Evaluations API
- We’ve released a new Transcripts API. The legacy Transcripts API is still supported and currently has no deprecation timeline.








Gmail tools: let your AI agents send emails
Agents can now send emails seamlessly as part of any conversation. Whether it’s a confirmation, follow-up, or lead nurture message — the new Send Email tool makes it easy to automate communication right from your agent. Just connect your Gmail account and you’re ready to go. Make sure to instruct your agent on how to use this tool properly. Give it a try in the agent step!

Call forwarding step
Seamlessly connect your voice AI agent to the real world with call forwarding. The new call forwarding step lets your AI agent hand off calls to a real person (or another AI agent) — instantly and smoothly.
- ✅ Route to phone numbers
- ✅ Include optional extensions
- ✅ Support for SIP addresses

SMS messaging with Twilio tools
Enable your agents to send SMS messages with an effortless connection to Twilio. Try it now in the agent step.
Create AI agents instantly — from just a prompt
We’ve made building AI agents dramatically faster. You can now generate a fully functional agent by simply describing what you want it to do. No setup. No manual flow-building. Just write a detailed prompt — and Voiceflow will generate everything for you:
✅ Agent instructions
✅ Tools and workflows
✅ Conversation logic and components
This means less time configuring, more time testing and refining your agent’s behavior. Today, we’re launching:
- Prompt-to-project generation – go from idea to working prototype in seconds
- Prompt-to-workflow generation – describe a capability, get a complete workflow
- Prompt-to-component generation – create specific tools and logic on the fly

Vonage integration for telephony
Voiceflow now supports importing phone numbers from Vonage as an alternative to Twilio. Vonage offers a minor latency improvement (~200-400ms) over Twilio, for more responsive calls. For more information: https://dashboard.nexmo.com/ and https://www.vonage.ca/en/communications-apis/voice/
Smarter knowledge base building with LLM chunking strategies
Your Knowledge Base just got a major upgrade. With our new LLM chunking strategies, you can now prep your data for AI like a pro — no manual formatting needed. We’ve introduced 5 powerful strategies to help structure and optimize your content for maximum retrieval performance:
🧠 Smart chunking – Automatically breaks content into logical, topic-based sections. Ideal for complex documents with multiple subjects.
❓ FAQ optimization – Generates sample questions per section, perfect for creating high-impact FAQs.
🧹 HTML & noise removal – Cleans up messy website markup and boilerplate. Best used on content pulled from the web or markdown.
📝 Add topic headers – Inserts short, helpful summaries above each section. Great for longform content that needs context.
🔍 Summarize – Distills each section to its key points, removing fluff. Perfect for dense reports or research.
These chunking strategies help you get more accurate, more relevant answers from your AI — especially for data sources not originally built for Retrieval-Augmented Generation (RAG). Ready to make your Knowledge Base smarter? Try out some LLM chunking strategies and watch the results speak for themselves.
Note: LLM chunking strategies use credits. Before processing, we’ll show you a clear estimate of how many credits will be used — so you’re always in control.

Make.com tool
Connect your agents to Make.com in a couple of clicks to run your automations from your Voiceflow AI agents.

Airtable tools
Connect your agents to Airtable in a couple of clicks. Supported tools include: Create records, Delete records, Get record, List records, Update records.

Agents can now automatically use buttons, cards, and carousels to enrich conversations
By enabling these options and providing guidance on when to use (or avoid) each special tool, your agent will intelligently enhance interactions with visual elements like buttons, cards, and carousels. Note: these configurations are ignored during phone-based conversations, so they will not limit your ability to create multi-modal AI agents with Voiceflow.
[Deprecation] Dialog Manager API Logs
Legacy log traces, which are sent when the query parameter ?log=true is used, are no longer supported. This system has not been updated for a significant period and is out of date, especially with new steps. log traces will no longer be returned after Friday, July 4th, 2025. This affects a small subset of users and should not impact the output or performance of an agent. Going forward, logging will be unified with a more robust debug trace system, along with a new debugger UI.
New Speech-to-Text Providers
- Added Cartesia’s Ink-Whisper STT model. This leverages OpenAI’s Whisper model, upgraded for real-time call performance, with expanded language support and selection.
- Added AssemblyAI Universal STT model, with advanced tuning options.
- Added specific model selection for Deepgram STT: Nova-2, Nova-3, and Nova-3 Medical.

Google Sheets tools
Connect your agents to Google Sheets in a couple of clicks. Supported tools include: Add to sheet, Create new sheet, Get rows, Get sheet, Update sheet.
Cartesia voices
We’ve added Cartesia to Voiceflow. You can select from over 100 new voices across two Cartesia models (Sonic 2 & Sonic Turbo).
Added tool usage to project analytics
We’ve added all tool types to your project’s analytics dashboard:
- Integration tools
- API tools
- Function tools

New workspace dashboard
- We’ve made updates to the workspace dashboard to make it easier to organize your projects and manage your workspace.
- We’ve added folders to further organize your projects. Note: if you previously used the Kanban view (deprecated), we’ve automatically converted swim-lanes into folders.
- Home tab (coming soon)
- Community tab (coming soon)
- Tutorials tab (coming soon)

New navigation
We’ve listened to your feedback and made Voiceflow easier to navigate. It’s the same Voiceflow, just faster to get around!
Security Settings for Widget
- Ability to whitelist domains
- Ability to have a custom privacy message before users engage with your AI agent
- Ability to not save transcripts

Generative No Reply
Use generative no-reply to dynamically re-engage users who haven’t responded in a while. Responses will be contextual to the conversation.
API Raw Content-Type select
The API (agent) tool and step now have a content-type option on POST requests with a “Raw” body. This automatically applies the Content-Type header, as a quality-of-life convenience.
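For context on what the header does: a raw POST body is just bytes, and the Content-Type header tells the receiving server how to parse them. A minimal sketch of the request this option produces (the helper name is illustrative, not a Voiceflow API):

```javascript
// Illustration only: how a Content-Type header pairs with a raw POST body.
// buildRawRequest is a hypothetical helper, not part of Voiceflow.
function buildRawRequest(body, contentType) {
  return {
    method: "POST",
    headers: { "Content-Type": contentType }, // e.g. "application/json", "text/plain"
    body, // raw string sent as-is; the server parses it per the header
  };
}

// Example: buildRawRequest(JSON.stringify({ ok: true }), "application/json")
```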
Rimelabs Arcana Voices
Rimelabs recently released a new set of Arcana voices that sound far more natural, with intonations and speech patterns such as breathing and pauses. Arcana is still under development and we are working with the Rimelabs team to improve it; we’re aware of some issues with consistency and slurring of speech. Arcana adds ~250ms of latency to the voice pipeline, roughly the same as 11labs. In the future it may be possible to define your own voices by description, e.g. “old man with hoarse southern accent”.
Krisp Noise Cancellation
Latency is one piece of the puzzle — but quality matters too. That’s why we’ve added Krisp. Background noise, especially speech or music, can seriously throw off voice agents. STT systems transcribe everything they hear, so voices in a coffee shop or lyrics from background music can easily be mistaken for the user’s input, leading to weird or incorrect responses. It can also confuse the agent into thinking the user isn’t done talking, delaying responses or interrupting playback. In short: noise kills both quality and speed. All voice projects (web-voice widget and Twilio) automatically have Krisp noise cancellation applied. Before Krisp / After Krisp: two spectrograms, the upper one visualizing the audio that would be heard by STT without Krisp, and the lower one showing the audio after having been processed with Krisp. Through our testing, we’ve determined that this significantly boosts the accuracy of speech detection and transcription in noisy environments: cafes, offices, on the street, background broadcasts, etc. Krisp noise cancellation adds ~20ms of latency to the audio pipeline while drastically improving speech detection and transcription accuracy. This ultimately leads to faster final transcriptions, reducing overall speech-to-speech latency by ~100ms.
Salesforce tools
We’ve added Salesforce tools to the agent step. You can now authenticate with Salesforce and add tools to enable your agent to get work done in Salesforce.

Zendesk tools
We’ve added Zendesk tools to the agent step. You can now authenticate with Zendesk and add tools to enable your agent to get work done in Zendesk.

Voice Keywords / Multilingual Speech-to-text
Keywords
For voice calls we’re introducing keywords. This allows your agent to understand hard-to-pronounce proper nouns (like product and company names), industry jargon, phrases, and more. This is an optional field.
Multilingual
We’re exposing Deepgram’s latest Nova-3 multilingual model as an STT option, capable of understanding and transcribing 8 different languages. In addition, the standard English STT is being updated from Nova-2 to Nova-3, for a boost in performance.

Introducing Voiceflow Credits: A simpler way to track usage
Today marks a significant milestone in Voiceflow’s journey as we officially launch our new credit-based billing system. This update represents a fundamental shift in how you’ll track, manage, and optimize your Voiceflow usage — all designed to bring greater simplicity, transparency, and predictability to your experience.
What’s New
🎉 Voiceflow Credits
We’ve completely overhauled our billing system, moving away from the complex token-based approach to a streamlined credit system that unifies tracking across all platform features:
- Simplified Measurement: One unified credit system for all actions (calls, messages, LLM responses, TTS)
- Predictable Costs: Clear pricing tiers that make budget planning straightforward
- Transparent Usage: Detailed visibility into exactly how your credits are being consumed
- Developer-Friendly: Messages only count toward credits when your agent is used in production—not when developing in-app or using shareable prototypes
- View your total available and used credits
- Track usage across all agents or drill down into specific ones
- Monitor editor and agent allocation
- Analyze usage patterns over time
- What are Voiceflow Credits? - A comprehensive guide to understanding how credits work
- Introducing Voiceflow Credits - Learn about the philosophy behind the change and how it benefits you
- Credit Calculator - Estimate your credit needs based on your specific usage patterns
- Annual plans now offer a 10% discount (previously 20%)
- Editor seats now cost $50/month with no restrictions (a price reduction)
- Business plan (formerly Teams) base tier increases from $125 to $150

Minor Updates / Fixes
Improvements:
- Voice Widget latency decreased by up to 750ms
- Voice Widget now streams with more consistent linear16@16kHz encoding
- Reset memory when a new conversation is launched (launch request)
- Global no reply not working on Agent steps
- The maximum allowed length for {userID} in the Dialog API will be set to 128 characters, effective April 18th
- Unable to remove webhook URLs
- Analytics visualization UI bug
- Voice Widget always setting userID to test on transcripts
- Chat Widget no audio output after page reload
- Export variables fails when project has large number of variables
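Given the new 128-character {userID} limit noted above, a sketch of keeping IDs in bounds before calling the Dialog API. The endpoint pattern follows Voiceflow’s public general-runtime docs, but treat it as an assumption and verify against the current API reference:

```javascript
// Truncate user IDs to the 128-character Dialog API limit.
function safeUserID(userID) {
  return String(userID).slice(0, 128);
}

// Sketch of an interact call using the truncated ID. The URL pattern is an
// assumption based on the public general-runtime docs.
async function interact(userID, request, apiKey) {
  const url = `https://general-runtime.voiceflow.com/state/user/${encodeURIComponent(safeUserID(userID))}/interact`;
  const res = await fetch(url, {
    method: "POST",
    headers: { Authorization: apiKey, "Content-Type": "application/json" },
    body: JSON.stringify({ request }),
  });
  return res.json();
}
```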
Call Events Webhook
Changes:
- New support added to subscribe to call events via webhook, for both Twilio IVR and voice widget projects. See the Call Events documentation. The webhook system is capable of broadcasting additional events in the future.
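A minimal sketch of a handler on the receiving side — the payload field names (`event`, `callID`) are assumptions for illustration; consult the Call Events documentation for the actual webhook schema:

```javascript
// Hypothetical call-event handler. Payload fields (event, callID) are
// illustrative assumptions, not the documented schema.
function handleCallEvent(payload) {
  const known = ["call.started", "call.ended"];
  if (!known.includes(payload.event)) {
    // Acknowledge but skip events this handler doesn't care about, so future
    // event types broadcast by the webhook system don't cause failures.
    return { status: 202, note: "unrecognized event, ignored" };
  }
  // e.g. write to your own analytics store here
  return { status: 200, note: `handled ${payload.event} for ${payload.callID}` };
}
```

Returning a success status for unknown event types is deliberate: the announcement notes the webhook system may broadcast additional events later.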

Streaming Text in Chat Widget Now Optional
Changes:
- Added an option to disable streaming text in the chat widget. Streaming can now be turned off in the Modality & interface settings. When disabled, the full agent response is displayed at once instead of being streamed out. Useful for situations where streaming longer messages is not desired.

Max Memory Turns Setting
Conversation memory is a critical component of the Agent and Prompt steps. Longer memory gives the LLM model more context about the conversation so far, letting it make better decisions based on previous dialogs. However, larger memory adds latency and costs more input tokens, so there is a trade-off. Previously, memory was always set to 10 turns. All new projects will now have a default of 25 turns in memory. This can now be adjusted in the settings, up to 100 turns. For more information on how memory works, reference: https://docs.voiceflow.com/docs/memory
Agent Step, Structured Output Improvements, Gemini 2.0 Flash
We’re excited to introduce several major updates that enhance the capabilities of the Agent step and expand our model offerings. These improvements provide more flexibility, control, and opportunities for creating powerful AI agents.
🧠 Agent Step: Your All-in-One Solution
The Agent step has been supercharged to create AI agents that can intelligently respond to user queries, search knowledge bases, follow specific conversation paths, and execute functions — all within a single step. Key features include:
- Intelligent Prompting: Craft detailed instructions to guide your agent’s behavior and responses.
- Function Integration: Connect your agent with external services to retrieve and update data.
- Conversation Paths: Define specific flows for your agent to follow based on user intent.
- Knowledge Base Integration: Enable your agent to automatically search your knowledge base for relevant information.
- Arrays and Nested Arrays: You can now define arrays and nested arrays in your output structure.
- Nested Objects: Structured output now supports nested objects, allowing for more complex data structures.

Variable Handling Update: Consistent Behavior for Undefined Values
Changes:
- Updated Voiceflow variable handling for consistency in previously undefined behavior: Variables can be any JavaScript object that is JSON serializable. Any variable set to undefined will be saved as null (this conversion happens at the end of the step, so it does not affect the internal workings of JavaScript steps and functions). Functions can now return null (rather than throwing an error) and can no longer return undefined (which could cause agents to crash). Functions that attempt to return undefined will now return null (to ensure backwards compatibility).
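The new normalization can be sketched as follows — this is a model of the described behaviour, not the runtime’s actual implementation:

```javascript
// Model of the described coercion: undefined becomes null, matching how
// variables are saved at the end of a step.
function normalizeReturn(value) {
  return value === undefined ? null : value;
}

// Same rule applied across a variable map, since variables must be
// JSON-serializable and undefined is not valid JSON.
function normalizeVariables(vars) {
  return Object.fromEntries(
    Object.entries(vars).map(([key, value]) => [key, value === undefined ? null : value])
  );
}
```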
New Analytics Dashboard: Gain Deeper Insights into Your Agent’s Performance
We’ve revamped our Agent Analytics Dashboard, not only giving it a fresh new look but also introducing a range of powerful visualizations that provide unprecedented visibility into your agent’s performance.
🌟 New Visualizations
The updated Analytics Dashboard offers a comprehensive set of visualizations that allow you to track and analyze various aspects of your agent’s performance:
- Tokens Usage: Monitor AI token consumption over time across all models, giving you a clear picture of your agent’s token utilization.
- Total Interactions: Keep track of the total number of interactions (requests) between users and your agent over time, providing insights into engagement levels.
- Latency Monitoring: Measure the average response time of your agent to ensure optimal performance and identify any potential bottlenecks.
- Total Call Minutes: Gain visibility into the cumulative duration of voice calls in minutes, helping you understand the volume and significance of voice interactions.
- Unique Users: Identify the count of distinct users interacting with your agent over time, allowing you to track adoption and growth.
- KB Documents Usage: Analyze the frequency of knowledge base document access, with the ability to toggle between ascending and descending order to identify the most or least used documents.
- Intents Usage: Visualize the distribution of triggered intents, with sorting options to analyze intent frequency and identify popular or underutilized intents.
- Functions Usage: Monitor the frequency of function calls, their success/failure and latency, with sorting capabilities to identify the most or least used functions and optimize your agent’s functionality.
- Prompts Usage: Gain insights into the usage frequency of agent prompts, with the ability to toggle between ascending and descending order to analyze prompt utilization and effectiveness.

New Models, Function Editor Enhancements, and Call Recording
We’re thrilled to announce several exciting updates that expand your AI agent building capabilities and improve your workflow. Let’s dive into what’s new!
🧠 New Models: Deepseek R1, Llama 3.1 Instant, and Llama 3.2
We’ve expanded our model offerings to give you even more options for creating powerful AI agents:
- Deepseek R1: Harness the potential of Deepseek’s R1 model for enhanced natural language understanding and generation.
- Llama 3.1 Instant: Experience lightning-fast responses with the Llama 3.1 Instant model.
- Llama 3.2: Leverage the advanced capabilities of Llama 3.2.
- Modal View: You can now open the Function Editor as a modal directly from the canvas. This allows you to make quick updates and navigate between your functions and the canvas seamlessly.
- Snippets: We’ve introduced a new snippets feature that enables you to insert pre-written code snippets for common concepts in Voiceflow functions.
- Automatic Call Recording: All phone calls between users and your AI agent will now be automatically recorded.
- Twilio Integration: The call recordings will be accessible directly in your Twilio account for easy review and management.


Retrieval-Augmented Generation (RAG) for Intent Recognition
We’re excited to announce a significant upgrade to our intent recognition system, moving from the traditional Natural Language Understanding (NLU) approach to a Retrieval-Augmented Generation (RAG) model using embeddings. This transition brings notable improvements to the speed, accuracy, and overall user experience when interacting with AI agents on our platform.
📅 Phased Rollout
To ensure a smooth adoption, we will be rolling out the RAG-based intent recognition system to all users in phases over the next week. This gradual deployment allows us to monitor performance and gather feedback while providing ample time for you to adjust to the new system.
🆕 Default for New Projects
For all new projects created on our platform, RAG-based intent recognition will be the default system. This means that new AI agents will automatically benefit from the enhanced speed, accuracy, and natural conversation capabilities offered by RAG.
🌟 Faster Training and Interaction
With the new RAG system, agent training and intent recognition are now substantially faster and more efficient. For example, an agent with 37 intents and 305 utterances now trains about 20 times faster, in just around 1 second. This means quicker agent development and smoother conversations for end-users.
🧠 Automatic Agent Training
Thanks to the advanced training speed enabled by RAG, explicit training is no longer necessary. Simply test your agent, and the training will happen automatically behind the scenes, streamlining your workflow.
🎯 Enhanced Understanding of Complex Queries
RAG leverages embeddings to capture the deeper context and meaning behind words, even when phrased differently. This allows the system to better understand and accurately match complex, detailed questions to the appropriate intents, providing more precise responses to users.
🗣️ More Natural Conversations
With the improved understanding of casual language, slang, and diverse phrasing, the RAG system enables a more natural, conversational experience for users interacting with AI agents on our platform.
🔄 Seamless Transition for Existing Projects
For existing projects, we will keep both the NLU and RAG systems running concurrently for a period of time. This allows you to explore the new system, test it thoroughly, and make any necessary adjustments to your agents. You can easily switch between the NLU and RAG systems in the intent classification settings within the Intents CMS.
We’re thrilled to bring you this enhanced experience and look forward to hearing your feedback as you interact with the new RAG-based intent recognition system. Your input is invaluable in helping us continue to innovate and improve our platform to better serve your needs.
Expanding the Possibilities of User Interaction with Voice
In our mission to redefine how users interact with AI agents, we have introduced a new voice modality option to our web widget. This addition is a step towards creating more natural, intuitive, and engaging user experiences. By enabling voice-based conversations, we are empowering businesses to connect with their customers in a way that feels authentic and effortless.

Voice technology has become an increasingly popular and preferred mode of interaction for many users. By integrating voice functionality into our web widget, we are meeting users where they are and providing them with a seamless way to engage with AI agents. This not only enhances the user experience but also opens up new possibilities for businesses to assist, inform, and guide their customers throughout the customer journey.

Natural Voice Interaction
The web widget now supports voice-based communication, allowing users to speak naturally with AI agents. Businesses can integrate this feature to provide their customers with a hands-free, intuitive way to ask questions, receive recommendations, and get assistance while browsing the site.

Customization Options
The voice widget offers customization options to ensure seamless integration with your website’s branding:
- Launcher Style: Select a launcher style that complements your site’s design.
- Color Palette: Choose colors that match your brand guidelines.
- Font Family: Pick a font that aligns with your website’s typography.
- Automated Speech Recognition: Our platform uses advanced speech-to-text (STT) technology from Deepgram to accurately transcribe user speech in real time.
- Organic Text-to-Speech: We’ve integrated with leading providers like ElevenLabs and Rime to offer a variety of natural-sounding voices that bring AI agents to life.


AI Fallback
We’re excited to introduce AI Fallback, a powerful new feature in beta that enhances the reliability and continuity of your AI operations. This feature ensures your AI services remain operational even during provider outages or service interruptions.

🔄 Automatic Fallback Switching
AI Fallback automatically switches between models when issues arise. When your primary AI model experiences difficulties, the system seamlessly transitions to your configured backup model, ensuring continuous operation of your AI services.

⚙️ Easy Configuration
Setting up AI Fallback is straightforward:
- Access your agent
- Navigate to agent settings
- Set your preferred fallback model by provider

Benefits of AI Fallback:
- Minimizes service disruptions during model outages
- Maintains consistent AI performance
- Reduces operational impact of provider issues
- Ensures business continuity

How it works: when your primary model encounters an issue, the system:
- Identifies the next available model in your sequence
- Switches ongoing operations to the backup model
- Returns to the primary model once issues are resolved
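The switching behaviour described above can be sketched as a simple try-in-order loop. This is an illustrative model of the feature, not Voiceflow's internal implementation; the model names and the `callModel` signature are assumptions made for the example.

```javascript
// Illustrative sketch of automatic fallback switching (not Voiceflow's
// actual implementation). Tries each configured model in order and
// returns the first successful completion.
async function completeWithFallback(prompt, models, callModel) {
  let lastError;
  for (const model of models) {
    try {
      const text = await callModel(model, prompt); // may throw during an outage
      return { model, text };
    } catch (err) {
      lastError = err; // this model is unavailable: fall through to the backup
    }
  }
  // Every model in the sequence failed.
  throw lastError;
}
```

Because each new request starts again at the top of the list, traffic naturally returns to the primary model once its provider recovers, matching the behaviour described above.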

New Features: Structured Outputs and Variable Pathing
Today we’re introducing two powerful new capabilities in Voiceflow: Structured Outputs and Variable Pathing. These features expand the possibilities for working with data from large language models (LLMs) in your agents. Let’s explore what they enable!

🎉 Structured Outputs
Structured Outputs let you define the format of the data you expect an LLM to return, giving you more control and predictability over the results.
- In a prompt step, enable the new “JSON Output” option to specify the structure of the LLM’s response.
- Today, Structured Outputs support the following data types: String, Number, Boolean, Integer, and Enum.
- Support for arrays and nested objects is planned for the near future.
- Structured Outputs are available with the `gpt-4o-mini` and `gpt-4o` models.

🧭 Variable Pathing
Variable Pathing lets you work with objects stored in variables:
- Store an entire object in a single variable, then access its properties using dot notation (e.g. `user.name`, `user.email`).
- Capture Structured Output responses or API results as objects.
- Use object properties directly in conditions, messages, and other steps.
- Reduce the need for multiple variables to represent a single entity.
- Define precise data requirements for LLMs to provide relevant information
- Capture responses as feature-rich objects in a single step
- Access and manipulate object properties throughout your project
- Streamline your project’s design while expanding its capabilities
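As a concrete illustration of the dot-notation pathing described above, here is the kind of object a JSON-output prompt might return and how its properties can be read. The schema and values are invented for the example.

```javascript
// Hypothetical structured output from an LLM with "JSON Output" enabled.
// The field names are made up for illustration.
const raw = '{"name": "Ada", "email": "ada@example.com", "isSubscribed": true}';

// Stored as a single object variable instead of three separate variables.
const user = JSON.parse(raw);

// Dot notation reads individual properties, e.g. in a condition or message.
const greeting = `Hi ${user.name}, we will write to ${user.email}.`;
```

This is the pattern that replaces juggling several flat variables for one entity: one object in, individual properties out wherever they are needed.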


Voiceflow Telephony
We’re excited to announce the release of Voiceflow Telephony, bringing enterprise-grade voice capabilities to your conversational experiences. This release represents a significant milestone in our mission to provide comprehensive, low-latency voice solutions for businesses of all sizes.

Native Twilio Integration
We’ve integrated with Twilio to make phone-based interactions as simple as possible. The new integration allows you to:
- Import existing Twilio phone numbers directly into Voiceflow
- Associate phone numbers with specific agents
- Configure separate numbers for development and production environments
- Test different versions of your agent against different phone numbers
Low-Latency Voice Processing
- Dramatically reduced response times
- Near real-time agent reactions
- Optimized voice processing pipeline

Speech Recognition
- High-accuracy transcription
- Low-latency processing
- Support for over 20 languages

Outbound Calling
- Programmatically initiate calls to any phone number
- Test outbound calls directly from the Voiceflow interface
- Integrate outbound calling into your existing workflows

Voice and Telephony Features
- Premium text-to-speech voices from industry leaders such as ElevenLabs, Rime, and Google
- Support for advanced telephony features through custom actions: call forwarding, DTMF handling, and interruption behaviour

Conversation Tuning
- Background audio customization
- Audio cue configuration
- Interruption threshold controls
- Utterance end detection
- Response timing optimization
- User input acceptance timing

Additional Improvements
- Enhanced call analytics and reporting
- Additional voice customization options




New AI-Native Webchat
We’re excited to announce a complete reimagining of the Voiceflow webchat experience. This new version introduces AI-native capabilities, enhanced customization options, and flexible deployment methods to help you create more engaging conversational experiences.

AI-Native
Our webchat has been rebuilt from the ground up to provide a more natural, AI-driven conversation experience:
- Streaming Text Support: Experience real-time message generation with character-by-character streaming, creating a more engaging and dynamic conversation flow. Users can see responses being crafted in real time, similar to popular AI chat interfaces.
- AI Disclaimers: Built-in support for displaying AI disclosure messages and customizable AI usage notifications to maintain transparency with your users.

Flexible Deployment Methods
- Widget: Traditional chat window that appears in the corner of your website
- Popover: Full-screen chat experience that overlays your content
- Embed: Seamlessly integrate the chat interface directly into your webpage layout

Enhanced Customization
- Color System: Expanded colour palette support with primary, secondary, and accent colour definitions
- Typography: Custom font family support
- Launcher Variations: Classic bubble launcher with customizable icons, or a button-style launcher with text support

Migration Notes
- Chat Persistence: Now configured through the snippet rather than UI settings.
- Custom CSS: Maintained compatibility with most existing class names.
- Proactive Messages: Temporarily unavailable in this release, with support coming soon.


Function libraries, starter templates and ElevenLabs support
Function Libraries
Integrate your agent with your favorite tools using our new function libraries. Access pre-built functions for popular platforms like Hubspot, Intercom, Shopify, Zendesk, and Zapier. These functions, sourced from Voiceflow and the community, make it easier than ever to connect Rev with your existing workflows. Showcase readily available integrations to your team and clients.

Transcript Review Hotkeys
Reviewing transcripts just got faster and more efficient. You can now press R to mark a transcript as Reviewed or S to Save it for Later. These handy shortcut keys are perfect for power users who review a high volume of transcripts.

Project Starter Templates
Getting started is now a breeze. When creating a new project, choose from a set of templates tailored for common use cases like customer support, ecommerce support, and scheduling. These templates help you hit the ground running without the need for extensive setup and customization. Ideal for new users and busy teams.

Expanded Voice Support
We now offer an even greater selection of natural-sounding AI voices. We’ve added support for a variety of new options from ElevenLabs and Rime. Please note that using these voices consumes AI tokens. Check them out for your projects that could benefit from additional voice choices.


Important Update: Deprecation of AI Response and AI Set Steps
This is an important update to our platform. As part of our ongoing commitment to enhancing your experience and providing the most advanced tools for AI agent development, we have made the decision to deprecate the AI Response and AI Set steps.

What does this mean for you?
- On February 4th, 2025, the AI Response and AI Set steps will be disabled in the step toolbar in the Voiceflow interface to encourage users to move away from these deprecated steps. Existing steps will remain untouched and will continue working as normal.
- On June 3rd, 2025, these steps will no longer be supported. Any existing projects using these steps will need to be migrated to the new Prompt and Set steps. We will be sending out additional communication ahead of the sunset date.
- Centralized prompt management: The Prompt CMS serves as a hub for all your prompts, making it easy to create, edit, and reuse them across your projects.
- Advanced prompt configuration: Leverage system prompts, message pairs, conversation history, and variables to craft highly contextual and dynamic responses.
- Seamless integration: The Prompt step allows you to bring your prompts directly into your conversation flows, while the Set step lets you assign prompt outputs to variables for enhanced logic and control.
- Continued innovation: We are committed to expanding the capabilities of these new features, with exciting updates planned for the near future.
API Step V2
We’re introducing a new API step with a cleaner, more intuitive interface for configuring your API requests. While the existing API step remains fully functional, we recommend trying out the new version at your earliest convenience.

Project Data Changes
For users working with our API programmatically, we’ve included the new step type definition below:
Introducing Claude Haiku 3.5 and Streamlining Model Selection
Added
We’ve added support for Anthropic’s Claude Haiku 3.5 model. You can now select Claude Haiku 3.5 when creating or editing your agents, allowing you to:
- Benefit from improved performance and enhanced conversational abilities
- Create more engaging and human-like interactions

Document Metadata - Update Routes
Added
- Document Metadata Update: New PATCH endpoint `/v1/knowledge-base/docs/{documentID}` to update metadata for entire documents. Updates metadata across all chunks simultaneously. Note: not supported for documents of type ‘table’.
- Chunk Metadata Update: New PATCH endpoint `/v1/knowledge-base/docs/{documentID}/chunk/{chunkID}` to update metadata for specific chunks. Allows targeted updates to individual chunk metadata. Supports all document types, including tables. Other chunks in the document remain unchanged.
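A hedged sketch of calling these endpoints from code. The endpoint paths come from the release notes above, but the base URL and header shapes are assumptions; check the API reference for the exact values for your workspace.

```javascript
// Builds a PATCH request for the metadata endpoints described above.
// Pass a chunkID to target a single chunk; omit it to update the whole
// document. The base URL and auth header format are assumptions.
function buildMetadataPatch(apiKey, documentID, metadata, chunkID) {
  const base = 'https://api.voiceflow.com/v1/knowledge-base/docs';
  const url = chunkID
    ? `${base}/${documentID}/chunk/${chunkID}` // single-chunk update
    : `${base}/${documentID}`;                 // whole-document update
  return {
    url,
    options: {
      method: 'PATCH',
      headers: { Authorization: apiKey, 'Content-Type': 'application/json' },
      body: JSON.stringify({ metadata }),
    },
  };
}

// Usage sketch:
// const { url, options } = buildMetadataPatch(KEY, 'doc_123', { lang: 'en' });
// await fetch(url, options);
```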
Expanded Prompt Support: Add AI Logic to Conditions and Message Variants
We’re excited to announce that Condition steps now support prompts as a condition type, allowing you to use AI responses to determine conversation paths.

What’s New
- Prompt Conditions: Condition steps can now evaluate prompt responses to intelligently branch conversations down different paths based on AI analysis.
- Message Variant Conditions: Message steps can now use prompt responses to select the most appropriate response text, helping your agent say the right thing at the right time.
- Seamless Prompt Integration: Choose from your existing prompts in the Prompt CMS or create new ones directly within the Condition or Message step.
To use a prompt condition:
- Create or select a Condition step
- Choose “Prompt” as your condition type
- Select or create a prompt
- Add paths and define evaluation criteria

To use prompt-driven message variants:
- Add variants to your Message step
- Select a prompt to determine variant selection
- Define your variant conditions
- Test your dynamic messaging

Import and Export Variables and Entities
Simplify your workflow and make managing your agents even easier with more sharing options.

Import and Export Variables and Entities in the CMS
You can now import and export variables and entities directly in your Agent CMS, saving you time and effort when setting up and sharing your agents.
- Quickly populate your variables and entities by importing exported JSON files
- Easily create new versions of variables and entities by importing new files
- Export your variables and entities as JSON files for backup or sharing
- Save time by bulk importing and exporting variables and entities

Prompt Sharing and Improved Variable Debugging
This week we’re excited to introduce two new features that will enhance your workflow and make it easier to build and debug your agents.

Export and Import Prompts in the Prompt CMS
Reusing and sharing prompts across different agents is now a breeze with our new export and import functionality in the Prompt CMS. Easily export prompts individually or in bulk as JSON files.
- Import prompts into any agent with just a few clicks, including any variables or entities that are used in the prompt.
- Streamline your workflow by reusing effective prompts across projects.
- Share your best prompts with colleagues and the community.
Improved Variable Debugging
- See and modify objects and arrays assigned to variables during prototyping.
- Quickly identify issues with variable assignments.


Smart Chunking - New Strategies
This week, we’re excited to announce the beta release of a number of new Smart Chunking features, designed to enhance the way you process and retrieve knowledge base content. These improvements address previous limitations and bring more efficiency to your document management workflow.

LLM-Generated Questions
Enhance retrieval accuracy by prepending AI-generated questions to document chunks. This aligns your content more closely with potential user queries, making it easier for users to find the information they need.

Context Summarization
Provide additional context by adding AI-generated summaries to each chunk. This helps users understand the content more quickly and improves the relevance of search results.

LLM-Based Chunking
Experience optimal document segmentation determined by semantic similarity and retrieval effectiveness. This AI-driven approach ensures your content is chunked in the most meaningful way.

Content Summarization
Let AI summarize and refine your content, focusing on the most important information. This feature streamlines your documents, making your chunks more concise and optimized for retrieval performance.

We encourage you to explore these new capabilities and share your feedback. To start using the Smart Chunking beta features, join the waiting list here.
Unlock Dynamic Interactions in Your Agent with Events
Events let workflows be triggered without direct user input, enabling your agent to respond to user-defined events tailored to specific use cases. With Events, your agent becomes more context-aware and responsive, providing a more engaging and dynamic user experience.

What’s New

Events System
- Custom Triggers: Define custom events in the new Event CMS, allowing your agent to respond to specific user actions beyond just conversational input.
- Seamless Integration: Events act as signals from the user’s interactions—like button clicks, page navigations, or in-app actions—enabling your agent to initiate specific workflows dynamically.
- Event Triggers in Workflows: Use the new Event type in the Trigger step to associate events with specific flows in your agent, giving you full control over the conversational paths.
- Expand Interaction Capabilities: Respond to a wide range of user actions within your application, making your agent more intelligent and adaptable.
- Create Contextual Experiences: Provide relevant interactions based on what the user is doing.
- Streamline User Journeys: Assist users at critical points, offering guidance, confirmations, or additional information exactly when needed.
Example use cases:
- User Clicks a Checkout Button: Trigger an event to initiate a checkout assistance flow, confirming items or offering shipping options.
- In-App Feature Usage: Start a tutorial when a user accesses a new feature for the first time.
- User Sends a Command in a Messaging App: Provide immediate responses to specific commands, like showing recent transactions.
- User Navigates to a Specific Page: Offer assistance related to the content of the page, such as explaining pricing plans.
Resources
- Introduction to Events: Learn more about what you can do with Events.
- Using Events: Discover how to define and implement events in your agent using the Event CMS.
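The use cases above all reduce to mapping an application action onto a named event that an agent workflow listens for. The sketch below is purely illustrative glue code; the mapping, event names, and the `dispatchToAgent` callback are hypothetical and not part of any Voiceflow API.

```javascript
// Hypothetical mapping from in-app actions to agent event names.
// None of these identifiers come from the Voiceflow API.
const EVENT_MAP = {
  'checkout-button-click': 'start_checkout_assistance',
  'pricing-page-view': 'explain_pricing_plans',
  'new-feature-opened': 'start_feature_tutorial',
};

// Resolve an app action to the event an agent workflow is triggered by,
// then hand it to whatever client dispatch function your integration uses.
function fireAgentEvent(action, dispatchToAgent) {
  const eventName = EVENT_MAP[action];
  if (!eventName) return null; // no agent behaviour tied to this action
  dispatchToAgent(eventName);
  return eventName;
}
```

The design point is that the application decides *when* to signal, while the Event CMS and Trigger step decide *what the agent does* with that signal.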

Prompt like a Pro: New Prompts CMS, Prompt Step and more
Recognizing that your AI prompts are the cornerstone of agent behaviour, we’ve developed a comprehensive suite of tools designed to provide a central hub for creating, updating, and testing prompts with ease and efficiency.

What’s New
- Prompts CMS
  - Centralized Prompt Hub: Manage all your prompts in one place, ensuring consistency and easy access across your entire agent.
  - Advanced Prompt Editor: Craft, edit, and test your prompts within an intuitive interface equipped with the necessary tooling to refine your AI agent’s responses.
  - Message Pairs & Conversation Memory: Utilize message pairs to simulate interactions and inject conversation memory, allowing for more dynamic and context-aware agent behaviour.
  - Visibility into Performance Metrics: Gain insights into latency and token consumption, now split by input and output tokens, to optimize your prompts for performance and cost-efficiency.
- New Prompt Step
  - Prompt Integration: Incorporate response prompts directly into your agent workflows using the new Prompt step.
  - Reuse Across Agent: The prompts you create can be easily reused across your agent, making any updates available wherever the prompt is used.
- Assign Prompts in Set Step
  - Simplify Designs: This feature brings prompts to the Set step for purposes of reusability and consolidating methods of setting variable values in your agent.
- Expanded Prompt Support: Soon, you’ll be able to use prompts in more steps within your agent’s flow, unlocking new possibilities for interaction design.
- Community Sharing: We’re developing features that will allow you to share prompts across your agents and with the wider community, facilitating collaboration and collective improvement.
Resources
- Prompt CMS and Editor: Explore the central hub for creating, testing, and managing prompts within your agent.
- Prompt step: Learn how to integrate prompts directly into your agent’s flow.
- Set step: Discover how to dynamically assign prompt outputs to variables for greater control over agent behaviour.

Persistent Listen in Functions
What’s New
- Persistent Events: Functions can now define events that persist for the entire conversation session. This means that events associated with components like carousels, choice traces, or buttons remain active even after the conversation moves past the function step.
- Delayed User Interaction: Users can interact with these persistent components at any point during the session. When they do, the agent will refer back to the original function and proceed down the relevant path defined in your function code.
- Flexible Agent Behaviour: You now have control over whether the agent waits for user input at the function step or continues execution immediately, thanks to the new listen parameter settings.

listen: true
- Behavior: The agent pauses execution at the function step. It waits for immediate user input before proceeding.
- Use Case: Ideal when you require the user to make a choice or provide input before moving on.
- Implementation: Set listen: true in your function’s next command.

listen: false
- Behavior: The agent continues execution without waiting at the function step. Events defined in the function persist throughout the session and can be triggered later.
- Use Case: Perfect for non-blocking interactions where users might choose to interact with components at their convenience.
- Implementation: Set listen: false in your function’s next command.
- Dynamic Conversations: Create agents that can handle delayed interactions, allowing users to make choices at any point during the conversation.
- Persistent Interactivity: Keep buttons and interactive elements active throughout the session.
- Agent Behavior: Continues to ‘continue_path’ immediately. The purchase event remains active and can be triggered later. When the user clicks “Buy Now” for Product A, the agent navigates to ‘purchase_A_path’.
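Putting the listen parameter in context, here is a minimal sketch of a function return value that uses next with listen: false. The path and event framing mirror the “Buy Now” behaviour above, but the exact trace payload shape is an assumption, so treat this as illustrative rather than a copy-paste implementation.

```javascript
// Sketch of a function body using the listen parameter on the next
// command. Path and button names are example values; the buttons trace
// shape is an assumption about the Functions API, not a verified schema.
function main(args) {
  return {
    // Emit an interactive component whose event persists for the session.
    trace: [
      { type: 'buttons', payload: { buttons: [{ name: 'Buy Now' }] } },
    ],
    next: {
      path: 'continue_path',
      listen: false, // continue immediately; the button event stays active
    },
  };
}
```

With listen: true instead, the agent would pause at this step and wait for the user's click before proceeding.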
New Publishing Workflow, Knowledge Base Folders and Updated Choice Step
We’re introducing several new features to enhance your experience with Voiceflow:
- New Publishing Workflow
  - Versioned Publishing with Release Notes: Easily add release notes to each version of your agent during the publishing process, making it simpler to track changes and updates.
  - Dedicated Publishing Tab: Access a new Publishing tab within the Agent CMS, where you can name your version, add release notes, and publish directly. The tab includes:
    - Publish View: Name your versions and add notes before publishing.
    - Release Notes View: Review your agent’s release history with a clean, organized display.
- Folders for the Knowledge Base CMS: We’ve added folders to the Voiceflow Knowledge Base CMS, enabling you to organize your data sources more effectively. This makes managing large datasets and sources easier, keeping everything in order.
- Enhanced Choice Step with Button Support: You can now attach buttons to a Choice step, with intents being optional. This means you can add paths to a Choice step that are only triggered with a button click, offering greater flexibility in how users interact with your agents.



Fueling Your Creativity: We’ve Doubled Your AI Tokens!
At Voiceflow, our mission is ambitious: to provide you with the best agent creation platform in the world. We believe in empowering creators like you to build advanced conversational AI agents without limits.

As we’ve integrated more powerful language models into Voiceflow, we’ve seen your projects become more innovative and dynamic. Your agents are smarter, more engaging, and pushing the boundaries of what’s possible in conversational AI. But we don’t want you to slow down: we want to give you more “fuel” to keep that engine running at full speed.

That’s why we’re excited to announce that we’ve doubled the included monthly AI token allotments for our Pro, Team, and Enterprise plans! Here are the new allotments:
- Pro Plan: Now includes 4 million AI tokens per month.
- Team Plan: Now includes 20 million AI tokens per month.
- Enterprise Plan: Now includes 200 million AI tokens per month.

Need more? Additional tokens are available for purchase across all plans.

This upgrade is about more than just numbers. It’s about supporting your vision and giving you the resources to bring your most ambitious ideas to life. Whether you’re developing complex conversational flows, experimenting with new AI features, or scaling up your existing projects, we’ve got you covered. We’re committed to making Voiceflow not just a tool, but a platform that grows with you: a place where your creativity has no bounds.

Thank you for being an essential part of our journey. We can’t wait to see what incredible things you’ll build with this extra boost.

Streaming API for Real-Time Interactions
We’re excited to announce the release of our new Streaming API endpoint, designed to enhance real-time interactions with your Voiceflow agents. This feature allows you to receive server-sent events (SSE) in real time using the text/event-stream format, providing immediate responses and a smoother conversational experience for your users.

Key Highlights
- Real-Time Event Streaming: Receive immediate trace events as your Voiceflow project progresses, allowing for dynamic and responsive conversations.
- Improved User Experience: Drastically reduce latency by sending information to users as soon as it’s ready, rather than waiting for the entire turn to finish.
- Support for Long-Running Operations: Break up long-running steps (e.g., API calls, AI responses, JavaScript functions) by sending immediate feedback to the user while processing continues in the background.
- Streaming LLM Responses: With the completion_events query parameter set to true, stream large language model (LLM) responses (e.g., from Response AI or Prompt steps) as they are generated, providing instant feedback to users.

Request headers:
- Accept: text/event-stream
- Authorization: {Your Voiceflow API Key}
- Content-Type: application/json

Query parameters:
- completion_events (optional): Set to true to enable streaming of LLM responses as they are generated.
Streaming LLM Responses with completion_events
By setting completion_events=true, you can stream responses from LLMs token by token as they are generated. This is particularly useful for steps like Response AI or Prompt, where responses may be lengthy.

Example Response with completion_events=true

- Find Your projectID: Locate your projectID in the agent’s settings page within the Voiceflow Creator. Note that this is not the same as the ID in the URL creator.voiceflow.com/project/.../.
- Include Your API Key: Ensure you include your Voiceflow API Key in the Authorization header of your requests.
- Example Project: Check out our streaming-wizard demo project for a practical implementation using Node.js.
- Compatibility: This new streaming endpoint complements the existing interact endpoint and is designed to enhance real-time communication scenarios.
- Deterministic and Streamed Messages: When using completion_events, you may receive a mix of streamed and fully completed messages. Consider implementing logic in your client application to handle these different message types for a seamless user experience.
- Latency Reduction: By streaming events as they occur, you can significantly reduce perceived latency, keeping users engaged and informed throughout their interaction.

Smart Chunking Beta: Automatic HTML to Markdown Conversion
We are pleased to announce the launch of the Smart Chunking beta program. Over the coming weeks, we will be testing and validating several LLM-based strategies to enhance the quality of your knowledge base chunks. Better chunks lead to better responses and higher-quality AI agents.

First Strategy: HTML to Markdown Conversion
Our initial strategy focuses on automatic HTML to Markdown conversion. Many users import content from web pages—either directly using our web scraper or via APIs from services like Zendesk Help Center or Kustomer. This content often contains raw HTML, which can be noisy and degrade chunk performance and response quality. By converting HTML to Markdown automatically, we aim to improve the cleanliness and readability of your content. This conversion is supported for all data sources you can upload into Voiceflow.

How to Use the New Feature

Join the Beta Program
If you’re interested in participating in the Smart Chunking beta, please sign up via the waitlist link. We will be granting access to participants over the next few weeks.

Secrets Manager and Updated Button Step
Announcing the new Secrets Manager in Voiceflow! This new feature enables you to securely store and manage a variety of sensitive information within your AI agents, including API keys, database credentials, encryption keys, and more.

Video Walkthrough

What’s New
- Secure Storage of Sensitive Data: Safely store passwords and credentials used to access essential services and functions within your software. Utilize AES-256 GCM encryption to ensure the confidentiality and integrity of your secrets.
- Visibility Controls
  - Masked Secrets: Values are hidden but can be temporarily revealed when needed.
  - Restricted Secrets: Values remain hidden and cannot be revealed after creation, enhancing security for highly sensitive data.
- Environment Overrides: Specify different secret values for Development and Production environments. Seamlessly manage environment-specific configurations without altering your agent’s logic.
- Integration with Function and API Steps: Easily insert secrets into Function and API steps without hardcoding sensitive information. Access secrets directly from the canvas by typing { and selecting from the Secrets tab.
- Secure Project Sharing: When duplicating or exporting projects, secret values are excluded to maintain security. Re-add secret values as needed in the new project instance.
- Enhanced Security: Protects sensitive information using industry-standard encryption. Prevents unauthorized access through robust key management and encryption practices.
- Simplified Management: Centralizes the handling of all your secrets within the Secrets Manager. Reduces the risk of accidental exposure by avoiding hardcoded credentials.
- Operational Flexibility: Environment overrides allow for different configurations across development and production stages. Streamlines the deployment process by separating environment-specific data.
- Access the Secrets Manager: Navigate to Agent Settings in the left sidebar. Click on the Secrets tab to open the Secrets Manager interface.
- Create a New Secret: Click New Secret in the top-right corner. Enter the Name, Value, and choose the Visibility setting. Click Create Secret to add it to your Secrets Manager.
- Use Secrets in Your Agent: In a Function or API step, type { to open the variable selector. Switch to the Secrets tab and select the desired secret.
- Set Up Environment Overrides: Go to the Environments tab in Agent Settings. Click Override Secrets next to the desired environment. Enter environment-specific values for your secrets and click Save.
- Streamlined Button Creation: We’ve made it quicker and more intuitive to add buttons. Simply click the ”+” button next to the Buttons label in the editor to add a new button.
- Clean Editor Layout: The editor has been decluttered to focus on what’s important, allowing you to configure your buttons without any unnecessary distractions.
- Replace Existing Steps: You can replace your existing Button steps with the redesigned version in your projects to benefit from the improved UI.
- Update Your Conversation Paths: Reconnect any paths as needed using the ports associated with your buttons.
- Review Settings: Check your No Match, No Reply, and Listen for Other Triggers settings to ensure they are configured as desired.

New AI-Powered Entity Collection
Some significant updates that we’ve been working on to make your conversational agents even better. Over the next week, we’ll be rolling out new features that leverage advanced AI capabilities to improve how your projects understand and interact with users.

What’s New?

Transition to AI-Powered Entity Extraction
We’ve moved from traditional Natural Language Understanding (NLU) to using Large Language Models (LLMs) for entity extraction. This change is all about making your agents more accurate and adaptable when interpreting user inputs. By embracing AI-powered entity extraction, your agents can now handle a wider range of conversational scenarios with greater reliability.

New AI-Powered Steps with Enhanced Entity Collection: Choice, Capture, and Trigger
To fully leverage the improved entity extraction capabilities, we’ve updated some core steps in Voiceflow. The Choice, Capture, and Trigger steps have all been upgraded to natively support the new AI-powered entity collection features. This means these steps are now better equipped to collect necessary information from users, making your conversational agents more effective and responsive.

Activating the New Steps in Your Project
The new Choice and Capture steps are available in the step toolbar. To start using the new AI-powered features, you’ll need to replace your existing steps with these updated versions in your projects. Your existing Choice and Capture steps will remain unchanged, so your current setups won’t be affected unless you choose to update them.

Introducing Rules and Exit Scenarios
We’ve added two new features to give you more control over how your agents handle user inputs:
- Rules: You can now define specific criteria that the user’s input must meet. This helps ensure that the information collected is valid and meets your requirements.
- Exit Scenarios: These allow your agent to gracefully handle situations where the user can’t provide the necessary information. You can set up alternative paths or responses for these cases, improving the overall user experience.
- Update Your Steps: Replace your existing Choice and Capture steps with the new AI-powered versions available in the step toolbar.
- Configure Rules and Exit Scenarios: Use the new options within these steps to define rules and exit scenarios that suit your project’s needs.
- Test Your Agent: After making changes, be sure to test your agent thoroughly to ensure everything works as expected.
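Rules and Exit Scenarios are written as plain-text criteria on the step. As a rough illustration (the ids and texts below are invented examples, and the field names are borrowed from the capture-v3 node schema published in the data-format post, so the exact shape in your project may differ), a Capture step’s configuration could carry data like this:

```javascript
// Hypothetical illustration only: rules and exit scenarios as plain-text criteria.
const captureConfig = {
  rules: [
    // Criteria the user's input must meet before the agent accepts it
    { id: "rule_1", text: "The email address must belong to a company domain" },
  ],
  exitScenario: {
    // Alternative handling when the user cannot provide the information
    items: [{ id: "exit_1", text: "The user says they do not have an email address" }],
    path: true,           // route to a dedicated canvas path
    pathLabel: "No email",
  },
};

console.log(captureConfig.rules.length, captureConfig.exitScenario.pathLabel);
```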
New Multimodal Projects
It is easier than ever to create and manage multimodal agents that support both chat and voice interactions.
What’s New
Multimodal Projects
- Single Project Type for Chat and Voice: You no longer need to choose between a chat project and a voice project. All new projects (and existing chat projects) will support both modalities by default, streamlining your workflow and expanding your agent’s capabilities.
Voice Features in Existing Chat Projects
- Immediate Access to Voice Features: Existing chat projects now have built-in voice capabilities. You can start leveraging voice input and output options in your conversations without creating a new project.
- Default Text-to-Speech (TTS) Settings: In the agent settings, you can now select a default TTS technology for your agent.
Voice Prototyping Tools
- Voice Prototyping: The designer prototype tool and web widget now include voice input and output support. You can test voice and/or chat interactions anywhere, making it easier to refine your agent’s conversational flow.
Web Chat Integration
- Optional Voice Interface: In the web chat integration settings, there is a new toggle to enable voice input and output. This allows your web chat experiences to include voice interactions. Currently, voice input (STT) for hosted deployments is limited to Chrome browsers. We will be looking to expand support in the future.
Impact on Existing Voice Projects
- No Changes to Existing Voice Projects: Your current voice projects will remain unchanged and fully functional. However, the option to create new voice-only projects from the dashboard has been removed. All new projects will support both chat and voice modalities.
Resources
- Feedback: We value your input. If you have any questions or encounter issues, please reach out to our support team.
Upcoming Update to Listen Steps Data Format
We are introducing new step types to diagrams[id].nodes to enhance the functionality and flexibility of your AI agents.
Affected Property: diagrams[id].nodes
What’s Changing:
- New Node Types Added:
ButtonsV2 Node (buttons-v2)
Description: Enhances button interactions with more dynamic features.
Type Definition:
```typescript
type ButtonsV2Node = {
  type: "buttons-v2";
  data: {
    portsV2: {
      byKey: Record<string, {
        id: string;
        type: string;
        target: string | null;
        data?: {
          type?: "CURVED" | "STRAIGHT" | null;
          color?: string | null;
          points?: {
            point: [number, number];
            toTop?: boolean | null;
            locked?: boolean | null;
            reversed?: boolean | null;
            allowedToTop?: boolean | null;
          }[] | null;
          caption?: { value: string; width: number; height: number } | null;
        } | null;
      }>;
    };
    listenForOtherTriggers: boolean;
    items: { id: string; label: Markup }[];
    name?: string;
    noReply?: {
      repromptID: string | null;
      path?: boolean | null;
      pathLabel?: string | null;
      inactivityTime?: number | null;
    } | null;
    noMatch?: {
      repromptID: string | null;
      path?: boolean | null;
      pathLabel?: string | null;
    } | null;
  };
  nodeID: string;
  coords?: [number, number];
};
```
CaptureV3 Node (capture-v3)
Description: Provides advanced data capture capabilities, including entity capture and automatic reprompting.
Type Definition:
```typescript
type CaptureV3Node = {
  type: "capture-v3";
  data: {
    portsV2: {
      byKey: Record<string, {
        id: string;
        type: string;
        target: string | null;
        data?: {
          type?: "CURVED" | "STRAIGHT" | null;
          color?: string | null;
          points?: {
            point: [number, number];
            toTop?: boolean | null;
            locked?: boolean | null;
            reversed?: boolean | null;
            allowedToTop?: boolean | null;
          }[] | null;
          caption?: { value: string; width: number; height: number } | null;
        } | null;
      }>;
    };
    listenForOtherTriggers: boolean;
    capture:
      | {
          type: "entity";
          items: {
            id: string;
            entityID: string;
            path?: boolean | null;
            pathLabel?: string | null;
            repromptID?: string | null;
            placeholder?: string | null;
          }[];
          automaticReprompt: {
            params: {
              model?:
                | "text-davinci-003"
                | "gpt-3.5-turbo-1106"
                | "gpt-3.5-turbo"
                | "gpt-4"
                | "gpt-4-turbo"
                | "gpt-4o"
                | "gpt-4o-mini"
                | "claude-v1"
                | "claude-v2"
                | "claude-3-haiku"
                | "claude-3-sonnet"
                | "claude-3.5-sonnet"
                | "claude-3-opus"
                | "claude-instant-v1"
                | "gemini-pro-1.5";
              system?: string;
              maxTokens?: number;
              temperature?: number;
            } | null;
            exitScenario?: {
              items: { id: string; text: string }[];
              path?: boolean | null;
              pathLabel?: string | null;
            } | null;
          } | null;
          rules?: { id: string; text: string }[];
        }
      | {
          type: "user-reply";
          variableID?: string | null;
        };
    name?: string;
    noReply?: {
      repromptID: string | null;
      path?: boolean | null;
      pathLabel?: string | null;
      inactivityTime?: number | null;
    } | null;
  };
  nodeID: string;
  coords?: [number, number];
};
```
ChoiceV2 Node (choice-v2)
Description: Enhances user choice interactions with improved intent handling and automatic reprompt options.
Type Definition:
```typescript
type ChoiceV2Node = {
  type: "choice-v2";
  data: {
    portsV2: {
      byKey: Record<string, {
        id: string;
        type: string;
        target: string | null;
        data?: {
          type?: "CURVED" | "STRAIGHT" | null;
          color?: string | null;
          points?: {
            point: [number, number];
            toTop?: boolean | null;
            locked?: boolean | null;
            reversed?: boolean | null;
            allowedToTop?: boolean | null;
          }[] | null;
          caption?: { value: string; width: number; height: number } | null;
        } | null;
      }>;
    };
    listenForOtherTriggers: boolean;
    items: {
      id: string;
      intentID: string;
      rules?: { id: string; text: string }[];
      button?: { label: Markup } | null;
      automaticReprompt?: {
        exitScenario?: {
          items: { id: string; text: string }[];
          path?: boolean | null;
          pathLabel?: string | null;
        } | null;
      } | null;
    }[];
    name?: string;
    noReply?: {
      repromptID: string | null;
      path?: boolean | null;
      pathLabel?: string | null;
      inactivityTime?: number | null;
    } | null;
    noMatch?: {
      repromptID: string | null;
      path?: boolean | null;
      pathLabel?: string | null;
    } | null;
  };
  nodeID: string;
  coords?: [number, number];
};
```
- Enhanced Capabilities: The new node types offer advanced features, allowing for more complex and dynamic conversational flows.
- Existing Nodes Remain Unchanged: Your current diagrams and nodes will continue to function as before.
- New Features Available in New Steps: Only new instances of these steps created after the update will utilize the updated formats and capabilities.
- Update Integrations: If you have custom integrations that interact with diagrams[id].nodes, plan to update them to accommodate these new node types when creating new steps.
Condition Step Update
What’s New:
- Redesigned UI: The Condition Step now features an updated interface for easier navigation and setup of conditional paths.
- Flexible Logic Configuration: Users can now choose between a Condition Builder (for non-technical users) or Expression (for custom JavaScript logic) to create paths based on variables, values, logic groups, or expressions.
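For the Expression option, any JavaScript expression that evaluates to a truthy value can gate a path. A minimal sketch, with invented variable names for illustration:

```javascript
// Sketch of an Expression-style condition: JavaScript that evaluates to true/false.
// cart_total and last_utterance are hypothetical project variables.
const cart_total = 120;
const last_utterance = "I want a refund";

const takeRefundPath = last_utterance.toLowerCase().includes("refund") && cart_total > 100;
console.log(takeRefundPath); // true for these sample values
```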

New KB Search Step and Major Token Cost Reductions
We’re excited to announce the release of the KB Search Step, a new feature designed to simplify querying data from your Knowledge Base directly within Voiceflow. This step streamlines the process of fetching data chunks, providing you with greater control and efficiency in your conversational workflows.
What’s New:
- Native KB Querying: Easily query your Knowledge Base without the need for manual API configurations. Simply input a question or variable and retrieve relevant data chunks directly in your workflow.
- Customizable Chunk Retrieval: Use the chunk limit slider to specify the number of data chunks to be returned, ranging from 1 to 10. Fine-tune responses by adjusting this setting to suit your agent’s needs.
- No Match Handling: Set a minimum chunk score for chunk matching. If no data meets this threshold, the workflow can automatically follow a customizable Not Found path, ensuring a seamless user experience even when no exact match is found.
- Enhanced Control Over Prompts: Fetched data chunks are automatically converted into a string format, making them easy to incorporate into text-based prompts and other steps. You have the flexibility to control how this data is used, enhancing the quality and relevance of your agent’s responses.
- Testing Experience: Test your queries directly within the editor. View the returned chunks and their scores for insight into optimizing your knowledge base.
- Improved Efficiency: Eliminate the complexity of setting up manual API calls. The KB Search Step simplifies querying your Knowledge Base, saving you time and effort.
- Greater Flexibility: Control how many data chunks are returned and set minimum chunk scores to ensure the most relevant information is fetched for your agents.
- Enhanced Workflow Integration: Automatically convert and save data into variables, ready for use in subsequent steps within your workflow. This streamlines the process of integrating Knowledge Base data into your conversational designs.
- Token Multiplier Reduction: Tokens are now more affordable, with a reduction of ~50% or more on many of the supported models! Here are the new multipliers:
  - GPT-3.5-Turbo: Reduced from 0.6x to 0.25x
  - GPT-4 Turbo: Reduced from 12x to 5x
  - GPT-4: Reduced from 25x to 14x
  - GPT-4o: Reduced from 6x to 2.5x
  - GPT-4o mini: Reduced from 0.2x to 0.08x
  - Claude Instant 1.2: Reduced from 1x to 0.4x
  - Claude 1: Reduced from 10x to 4x
  - Claude 2: Reduced from 10x to 4x
  - Claude Haiku: Reduced from 0.5x to 0.15x
  - Claude Sonnet 3: Reduced from 5x to 1.75x
  - Claude Sonnet 3.5: Reduced from 5x to 1.75x
  - Claude Opus: Reduced from 20x to 8.5x
  - Gemini: Reduced from 8x to 3.5x
- Billing Email Setting: You can now set a Billing Email on your account from the Billing page on the dashboard. This allows all billing-related emails, such as invoices and payment-related notifications, to be routed to the appropriate contact.
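The Not Found behavior of the KB Search Step can be pictured as a simple threshold check. This is only a sketch of the logic; the chunk shape and field names below are assumptions, not the actual API response:

```javascript
// Sketch: follow the Not Found path when no retrieved chunk clears the minimum score.
const minChunkScore = 0.6; // configurable on the KB Search step
const chunks = [
  { content: "Shipping takes 3-5 business days.", score: 0.42 },
  { content: "Returns are accepted within 30 days.", score: 0.55 },
];

const matches = chunks.filter((chunk) => chunk.score >= minChunkScore);
const nextPath = matches.length > 0 ? "found" : "not_found";
console.log(nextPath); // "not_found" for these sample scores
```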
Upcoming Update to Condition Step Data Format
We are introducing a new version of the Condition step type to enhance condition handling in your AI agents.
Affected Property: diagrams[id].nodes
What’s Changing:
- A new condition-v3 node type with an updated data structure will be added for greater flexibility and expressiveness.
- The condition-v3 node will support both script-based and variable-based conditions, allowing for more complex logic.
- Only new Condition steps created after the update will utilize the condition-v3 format, unlocking additional features.
- Plan to modify any custom integrations that interact with diagrams[id].nodes to accommodate the new node type.
Improved Set Step
We’re excited to announce an update to the Set step, making it more versatile and user-friendly for all Voiceflow users!
What’s New?
We’ve enhanced the Set step by splitting the traditional single input into two distinct options: Value and Expression. This change aims to streamline the user experience, providing a simpler path for non-technical users while offering more flexibility for advanced use cases.
- Value Input: This new input option simplifies the process of setting variables. You can now directly assign values to variables without needing to worry about syntax like quotes or data types. This makes it easier than ever to quickly set variables to specific text (String) values or numbers.
- Expression Input: For those needing more advanced functionality, the Expression input allows you to use JavaScript to set variables dynamically. Whether you’re incrementing or decrementing values, or performing complex calculations, the Expression input gives you the power to build sophisticated logic right within the Set step.
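The difference between the two inputs can be sketched as follows (the variable names are invented examples):

```javascript
// Value input: a literal, assigned as-is; no quotes or type syntax to manage.
//   Setting user_name to: Alex  ->  user_name becomes the string "Alex"
// Expression input: JavaScript evaluated at runtime.
//   Setting attempts to: attempts + 1  ->  increments the current value
let user_name = "Alex";  // what a Value input of `Alex` effectively does
let attempts = 2;
attempts = attempts + 1; // what an Expression input of `attempts + 1` effectively does
console.log(user_name, attempts); // Alex 3
```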

Introducing Messages: Scalable AI Agent Creation
We are excited to announce the release of Messages, our latest feature designed to revolutionize how you manage, scale, and customize your AI agent’s responses. At Voiceflow, our mission is to enable you to build scalable, efficient, and collaborative AI solutions, and Messages is a significant step forward in achieving that goal.
Effortless Reusability for Consistency and Efficiency
Messages enable you to reuse responses across different workflows, ensuring consistency and saving time. By typing a forward slash (/) in the message editor, you can quickly select from existing messages in the CMS. Any changes made to a reused message will propagate across all instances, streamlining updates and maintaining uniformity.
Dynamic Responses with Conditional Variants
With Messages, you can create multiple variants of a response to cater to diverse scenarios. Use AI to generate variants automatically or manually create them. Set conditions using the expression builder or JavaScript to tailor responses based on specific criteria, ensuring your agent’s interactions are contextually appropriate and dynamic.
Enhanced Management with Improved Visibility
The Messages CMS provides a clear view of where each message is used, who last edited it, and when it was last updated. This enhanced visibility helps you manage your responses efficiently, track their usage across your projects, and ensure all team members are on the same page.
What’s Changed
With the release of Messages, we’ve made several notable improvements to enhance your experience:
- Centralized Message Management: Manage all your responses from a single, unified interface.
- Variants and Conditions: Create flexible message variants and set conditions for tailored responses.
- Effortless Reusability: Easily reuse messages across workflows for consistency and efficiency.
- Enhanced Visibility: Track where each message is used and manage updates efficiently.
New Default Model: Claude 3 - Haiku
We’re excited to announce that the default model for new projects in Voiceflow has been updated to Claude 3 - Haiku. This change reflects our commitment to providing the most advanced and effective tools for creating AI agents.
What’s New:
- Claude 3 - Haiku is now the default model in all new Voiceflow projects, offering enhanced performance, improved language understanding, and more nuanced conversational abilities.
Upcoming changes to the Set step
Upcoming Changes to the Set Step Data Format
As part of our ongoing efforts to improve the functionality of Voiceflow, we are introducing an update to the Set step data format: the transition from setV2 to set-v3 nodes. This update provides a more refined structure to the Set step, offering improved control and flexibility for setting variables in your projects.
Key Changes
Transition to set-v3 Nodes: We are transforming the current setV2 nodes into set-v3 nodes. This migration will ensure a more robust and flexible approach to variable management, with an updated structure that simplifies the process of setting variables within your projects.
Automatic Conversion: All existing setV2 nodes will be automatically converted into set-v3 nodes when a project is opened in-app. This conversion preserves all existing functionalities while enabling you to leverage the new and improved system.
Technical Overview
The new set-v3 structure is outlined below.
Properties Affected:
- Fields Added: None
- Fields Modified: diagram.nodes[type=setV2] transformed into diagram.nodes[type=set-v3]
- Fields Removed: None
To illustrate the transformation of setV2 nodes into set-v3 nodes, here are examples before and after the migration:
Before Migration:
Changes to Javascript step behavior
Executing “untrusted” code is tricky: bad actors can write malicious code and potentially access sensitive data. The current way the Javascript step is set up is a security risk, and we want to move off of it at the earliest opportunity. Luckily, there is a new backend that is both secure and more performant: it’s now 70-94% faster, and there will be more information about that in another post.
All Javascript steps created after July 30th, 2024 already automatically use the new system. We’ll be slowly converting existing Javascript steps to use the new system, with the cutoff on August 16, 2024. The goal of the Javascript step is to provide quick scripting to manipulate variables, rather than to be a heavy-load serverless function with networking. For that, you can use Functions.
No action is needed on your end unless you use the following patterns. We’ve monitored all Javascript step errors for the past week, running both the new and old backends in parallel, to categorize impact and effects. For the select few users that are affected, we will be proactively reaching out to them. All breaking changes we’ve observed are documented here, so people don’t reimplement them in the future.
Major breaking changes
requireFromUrl
The Javascript step used to support requireFromUrl(), which allowed users to load third-party libraries via URL, commonly moment, lodash, and other utilities. This method is an anti-pattern with major security risks, and the new backend does not support it; calls will fail with "ReferenceError: fetch is not defined".
NodeJS modules: buffer
Our new backend does not run on NodeJS, but rather a V8 isolate. Calls specific to NodeJS modules, rather than the JavaScript/ECMAScript standard, will no longer be supported and will fail with "ReferenceError: [module] is not defined". There are low-level alternatives to replicate the behavior of NodeJS utilities, and this is something LLMs excel at helping convert (be sure to test, of course!). If you are using Buffer for base64 encoding/decoding, you can easily polyfill an atob or btoa function.
this
The this keyword now has a circular reference, so calling JSON.stringify(this) will result in a recursion error.
Other minor changes
DateTimeFormat
The Intl.DateTimeFormat() constructor (see the MDN documentation) rejects some option combinations, so Date(A).toLocaleTimeString(B, { timeZone, timeZoneName, timeStyle }) would crash.
fetch
The fetch() command already doesn’t work today (we don’t really support any await or async actions), but after the cutoff, the Javascript step will go down the fail port with a debug message saying "ReferenceError: fetch is not defined".
Resource changes
The new backend solution has the same timeout but will have a defined limit on CPU and memory. In our monitoring, no present blocks are affected by these limits.
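As one possible replacement for Buffer-based base64 handling, here is a minimal btoa/atob polyfill sketch in plain ECMAScript. It covers Latin-1 strings only and is a sketch rather than a hardened implementation, so test it against your own data:

```javascript
// Sketch of a btoa/atob polyfill with no NodeJS APIs (safe for a V8 isolate).
const B64_ALPHABET =
  "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

function btoaPolyfill(input) {
  let out = "";
  for (let i = 0; i < input.length; i += 3) {
    const c1 = input.charCodeAt(i);
    const c2 = input.charCodeAt(i + 1); // NaN past the end of the string
    const c3 = input.charCodeAt(i + 2);
    out += B64_ALPHABET[c1 >> 2];
    out += B64_ALPHABET[((c1 & 3) << 4) | (isNaN(c2) ? 0 : c2 >> 4)];
    out += isNaN(c2) ? "=" : B64_ALPHABET[((c2 & 15) << 2) | (isNaN(c3) ? 0 : c3 >> 6)];
    out += isNaN(c3) ? "=" : B64_ALPHABET[c3 & 63];
  }
  return out;
}

function atobPolyfill(encoded) {
  const stripped = encoded.replace(/=+$/, ""); // drop padding
  let out = "";
  let buffer = 0;
  let bits = 0;
  for (const ch of stripped) {
    buffer = (buffer << 6) | B64_ALPHABET.indexOf(ch); // accumulate 6 bits per char
    bits += 6;
    if (bits >= 8) {
      bits -= 8;
      out += String.fromCharCode((buffer >> bits) & 0xff); // emit a full byte
    }
  }
  return out;
}

console.log(btoaPolyfill("hi"), atobPolyfill("aGk=")); // aGk= hi
```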

Sunset Announcement: Alexa Hosting
Effective Date: October 30th, 2024
As AI agents continue to evolve, we have observed a significant shift towards building on Web chat and deploying on channels like WhatsApp, Slack, and Discord using our Dialog Manager API. To better align with this trend and allocate our resources effectively, we have decided to discontinue our direct hosting integration with Alexa.
Impact
- Existing Alexa skills managed and deployed on Voiceflow will no longer function.
- We strongly encourage you to migrate your skills directly to Alexa, as we will not be providing updates or support for our Alexa integration.
Added support for GPT-4o mini
We are excited to announce the addition of GPT-4o mini to our platform. This integration brings new capabilities and improvements to enhance your AI agent experience. Below are the details of this update:
New Features:
- GPT-4o mini Integration: Our platform now supports GPT-4o mini, providing exceptional cost-efficiency and speed. This compact model is perfect for many of your agent tasks.
- Cost Efficiency: Enjoy significant cost savings with GPT-4o mini with a 0.2x token multiplier.
- Faster Performance: Experience quicker response times, ensuring your AI agents perform efficiently.
Sunset Announcement: FAQ API
We are announcing the sunset of the FAQ API, effective August 6th, 2024. This decision is based on user feedback and our commitment to providing the best tools for your conversational AI needs. We believe that our new feature for uploading tabular data to the Knowledge Base is a more efficient and versatile solution for managing FAQs.
To help you transition smoothly, we have created a short video demonstrating how to use this new feature. You can watch the video below. We appreciate your understanding and cooperation as we make this transition. Please reach out to our support team if you have any questions or need assistance.
Introducing the New Voiceflow Docs
We’re proud to announce Voiceflow’s newly revamped documentation. You can find it at docs.voiceflow.com. This upgrade seeks to streamline all learning resources to help you use Voiceflow’s designer tools as well as the APIs. Below are the details of this update:
- Unified documentation: The old learn.voiceflow.com and developer.voiceflow.com sites have had all their articles merged into our new domain, docs.voiceflow.com. This is now the only site you’ll need to visit to access all of Voiceflow’s documentation. No more separate designer and developer documentation splits. Using the old domains will still redirect you to docs.voiceflow.com. Links inside the Voiceflow platform have also been updated.
- New organization: We’ve reorganized our documentation into Building, Deploying, and Improving Agents, as well as new docs and guides for Getting Started and Improving agents.
- Updated documentation: Many new articles have been written to teach you more about the state of the modern Voiceflow platform.
- New API docs and guides: Our API docs have been restructured, with new conceptual articles added and a guide for Getting Started with APIs.
- Moved changelogs: We’ve also brought over our changelogs from the old website (changelog.voiceflow.com). The link will still work to reach the changelogs.
Sunset Announcement: Native Integrations for WhatsApp, Twilio SMS, and Microsoft Teams
Effective August 12th, we will be sunsetting our native integrations for WhatsApp, Twilio SMS, and Microsoft Teams. The project types for these channels will remain accessible, but all active connections to these channels will be severed, and it will no longer be possible to publish new projects or maintain existing ones on these platforms.
We understand this change may impact your workflows, and we are here to support you through this transition. Please explore our other integration options or contact our support team for assistance in adapting your projects. Thank you for your understanding and continued support.
Added support for Gemini 1.5 Pro
We are excited to announce the addition of Gemini 1.5 Pro, the first Google model we support, to our platform. This integration brings new capabilities and improvements to enhance your experience. Below are the details of this update:
New Features:
- Gemini 1.5 Pro Integration: Our platform now supports Gemini 1.5 Pro, offering enhanced performance and a broader range of functionalities. This model brings advanced natural language understanding and generation, making it ideal for various applications.
- Enhanced Accuracy: With Gemini 1.5 Pro, you can expect improved accuracy in natural language processing tasks, leading to more precise and reliable outputs.
- Faster Response Times: Enjoy quicker response times, thanks to the optimized performance of Gemini 1.5 Pro.
- Broader Language Support: Gemini 1.5 Pro offers support for more languages, providing a better experience for multilingual users.
More visibility with new updates to the Agent CMS
We are thrilled to introduce new management tools that enhance your ability to oversee and scale your AI agents. Our latest update offers improved visibility into the connections between your agent data and project workflows and components, making it easier to understand, manage, and grow your AI projects effectively.
What’s New:
Intents, Components, and Functions tables
- A new “Used By” column has been added to these tables.
- This column displays which workflows or components are utilizing each resource.
- Each item in the dropdown is a direct link.
- Easily navigate to the referenced workflow or component for quick access and management.
- Understand how all parts of your agent design come together at a glance.
- Quickly identify dependencies and interconnections between resources.

Set Step Redesign
We are excited to announce the redesign of the Set Step in Voiceflow! The updated Set Step now includes:
- Improved Interface: A more intuitive design for easier navigation and configuration.
- New error messages: We’ve revamped our error messages for the value input to make it easier to debug.
- Labels: Option to rename and label individual variable sets for better organization and clarity.
- Improved canvas visibility: More visibility on your configuration available directly on the step.

Upcoming Release: Migration of the Text step to Message step
Introducing the new Message step
We’re excited to announce a significant update coming in July: the introduction of the new Message step and CMS in Webchat projects. This update will unlock the ability for users to manage all their agent text responses in a central location, re-use the same message across multiple steps in your agent, and add conditions directly to your response variants. As part of this update, we will be automatically migrating all Text steps to the new Message step.
Key Changes
Introduction of Message Steps: We are introducing a new type of step called Message, along with a new Messages CMS page in webchat projects to manage all your agent text messages. All webchat and voice projects will contain a new property called responseMessages that will replace existing responseVariants.
Automatic Conversion: All existing Text steps will be automatically converted into Message steps when a project is opened in-app. This migration ensures that all existing functionalities are preserved while providing the benefits of the new system.
Technical Overview
The new Message step structure is outlined below.
Webchat Projects - Properties Affected:
- Fields Added: responseMessages
- Fields Modified: response now has a new field called type
- Fields Removed: responseVariants, responseAttachments, and nodes with type text
Voice Projects - Properties Affected:
- Fields Added: responseMessages
- Fields Modified: response now has a new field called type
- Fields Removed: responseVariants and responseAttachments
- On webchat projects, all text steps are now messages. Text steps will have a few new features, and all text data will no longer be stored in the node itself but in a new field called response/responseMessages.
- On both webchat and voice projects, requiredEntities re-prompts will be attached to response/messages instead of response/variants (variants will be sunset as well).
- All text data will be saved into responses (responses/discriminator/messages) instead of node data. In message node data, there will only be messageID, which references response.id.
- repromptID in required entities is a reference to response.id, and its text data is now saved under responseMessages instead of responseVariants.
- draft in the message node data means that your message step has no content and will not display the response on the CMS; once you have content, draft is false.
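Put together, a migrated Message step might look roughly like this. This is an illustrative sketch using only the field names mentioned in this post; the real schema may contain additional fields:

```javascript
// Illustrative only: text content lives in responseMessages, not on the node.
const responseMessages = [
  { id: "resp_1", text: "Hello! How can I help you today?" }, // hypothetical entry
];

const messageNode = {
  type: "message",
  nodeID: "node_1",
  data: {
    messageID: "resp_1", // references response.id in responseMessages
    draft: false,        // false because the step has content, so it appears in the CMS
  },
};

// Resolving a node's text means following messageID into responseMessages.
const linked = responseMessages.find((r) => r.id === messageNode.data.messageID);
console.log(linked.text);
```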

Knowledge Base: Tabular Data, Metadata, and Query Filtering
We are excited to announce a powerful new feature in Voiceflow’s Knowledge Base: the ability to handle tabular data, enrich all data sources with metadata, and perform advanced query filtering. This enhancement provides our users with more flexibility, precision, and control over their AI agents, ensuring they deliver the most relevant and contextually accurate responses.
Key Features
Tabular Data Support
With the introduction of tabular data support, users can now upload structured data in table format. This is ideal for scenarios where information is best represented in rows and columns, such as product catalogs, inventories, or detailed records. This structured format allows for more organized and efficient data management within your Voiceflow agents.
Metadata Enrichment
Metadata is a critical component for enhancing the context and relevance of your data. You can now attach key-value pairs to all your data sources, providing additional layers of information that can be used to fine-tune your AI agent’s responses. Whether it’s categorizing products, tagging items with specific attributes, or adding detailed descriptions, metadata makes your data more insightful and searchable.
Advanced Query Filtering
Our new query filtering capabilities enable users to perform precise searches within their data using a robust set of operators. You can apply various filters to your data, such as:
- Equality and Comparison: $eq, $ne, $gt, $gte, $lt, $lte
- Array Operations: $in, $nin, $all
- Logical Operators: $and, $or
- FILE Upload
- URL Upload
- TABLE Upload
- Match All Specific Tags: Identify chunks that include every specified tag in the list.
- Match Any of the Specified Tags: Find chunks containing any of the tags listed.
- Exclude Chunks With Certain Tags: Filter out chunks that include any of the tags specified.
- Combination of Conditions: Search for chunks that either contain a specific tag or match a text value exactly.
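The tag scenarios above map onto the Mongo-style operators listed earlier. A sketch of what such filters could look like; the surrounding request shape and the `tags`/`category` field names are assumptions for illustration, so consult the API reference for the real payload:

```javascript
// Illustrative metadata filters built from the documented operators.
const matchAllTags = { tags: { $all: ["pricing", "enterprise"] } }; // every listed tag
const matchAnyTag = { tags: { $in: ["pricing", "enterprise"] } };   // any listed tag
const excludeTags = { tags: { $nin: ["deprecated"] } };             // none of the listed tags
const combined = {
  // chunks that either carry the "faq" tag or whose category equals "billing"
  $or: [{ tags: { $in: ["faq"] } }, { category: { $eq: "billing" } }],
};

console.log(Object.keys(combined)[0]); // "$or"
```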
We have also updated the Knowledge Base document upload endpoint from https://api.voiceflow.com/v3alpha/knowledge-base/docs/upload to https://api.voiceflow.com/v1/knowledge-base/docs/upload. The v3alpha version is still operational, but we recommend moving over to the new URL for improved performance and support. Stay tuned for more updates and happy agent building!
Introducing the Triggers Step: Elevate Your Workflow Designs
Today we’re introducing the new Triggers step, designed to make it easy to add and manage one or many triggers when developing your agent workflows. This update is part of our ongoing effort to help all Voiceflow users develop their agents for scale by default.
Replacing the Intent Step
With the introduction of the Triggers step, we have migrated all the settings from any existing Intent steps into the new format. The new Triggers step offers a more robust and flexible way to handle multiple user intents within a single Workflow, enhancing both the design and functionality of your agents.
Why This Matters
Our goal with these changes is to simplify the design process and support design best practices. By defining each Workflow as a response to a single user intent and allowing for multiple triggers within that framework, we are aiming to:
- Enhance Clarity: Users will have a better understanding of how to build efficient Workflows that address user intents effectively.
- Improve Scalability: These best practices will help users build more complex agents that can scale and adapt to new use cases and collaborations.
- Single Entry Point per Workflow: One entry point ensures clarity and simplicity in Workflow design.
- Commands: We are deprecating Commands, so they will no longer be an option on new projects.
- Right-Click to Add Triggers step: Users can still add multiple entry points if necessary, but the default UX encourages a single entry point.
- Updated Start Chip: Users can add additional triggers directly on the start chip, offering more flexibility in design.
- The Event section from the step toolbar has been removed. There is still an option to add additional triggers to the canvas, available through the right-click canvas menu.

Update: Improved Navigation and Export Options 🧭
We’ve made some updates to our share and export options:
- Simplified Navigation: We have removed the Share button from the CMS to streamline your navigation. You can now find all export, share prototype, or invite members options under the Agent menu.
- Enhanced Export Functionality: You now have more control over what you export. We’ve added options to specify the workflow or component you want to export when selecting PNG and PDF formats. This allows for more precise and tailored exports.


We’ve added support for GPT-4o!
GPT-4o is now available for use in AI steps and the Knowledge Base. Some benefits of this model include:
- Faster responses than GPT-4 Turbo, with matched performance for English text and code
- Significant improvement for non-English languages
- 50% cheaper than GPT-4 Turbo
Check it out in the model options menu on AI steps and the Knowledge Base!

Introducing Workflows: Organize and Scale Your AI Agents
We are thrilled to announce the release of Workflows, our latest system designed to improve the way you organize and manage AI agents. At Voiceflow, our mission is to enable you to build scalable, efficient, and collaborative AI solutions, and Workflows is a significant step forward in achieving that goal.

Watch the Video Walkthrough

Streamlined Organization and Management
Workflows provides a clear and intuitive organization structure, replacing our previous system of domains, topics, and subtopics. All your existing designs have been automatically migrated, ensuring no data is lost while giving you immediate access to new features.

In the new Workflows page within the Agent CMS, you can organize your agents by use case, team, or any structure that suits your needs, using the new folder directory system. This high-level visibility allows you to monitor each workflow’s progress without needing to navigate to the canvas. Detailed descriptions, intent triggers, and project management features like status and assignee columns provide comprehensive context at a glance.

Enhanced Collaboration
Collaboration is at the heart of Workflows. You can now assign workflows to team members and set statuses to track progress through the development process. This ensures everyone is on the same page and tasks are clearly defined and managed as you scale the complexity of your AI agent.

Improved Navigation and Usability
Navigating between workflows has never been easier. You can access the canvas directly from the Workflows table or use the new design navigation that mirrors your CMS structure.
We’ve also introduced a “Return to Designer” button, allowing you to jump back to your canvas from any page, making your experience more cohesive and streamlined.

What’s Changed
With the release of Workflows, we’ve made some other notable changes to improve design practices and reduce visual clutter:
- Removed Intent Steps from Components: To promote best practices in agent design, we have removed the ability to add intent steps directly within Components.
- Cleaner Canvas View: The share button and agent navigation menu have been relocated to the Agent CMS, reducing clutter on the canvas and keeping your focus on designing agent workflows.
- New workflows field:
- Node Reference Modifications:
- Replacing goToDomain Nodes: Existing goToDomain nodes within diagram.nodes will be replaced with goToNode nodes. The goToNode.nodeID will reference the workflow corresponding to what the domain’s start step previously pointed to.
- Updating goToNode References: All goToNode nodes that previously referenced a domain’s start block (with the exception of the root/home domain) will be updated to reference the new workflow steps accordingly.
- Removal of Start Steps: All start steps, except for those in the root/home diagram, will be removed. This simplification is part of our broader effort to streamline navigation and reduce redundancy within the agent design environment.
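The three node-level changes above can be sketched as a single pass over a diagram’s nodes. This is a simplified, hypothetical model only: the `migrate_diagram` function and the two lookup tables (`domain_to_workflow`, mapping domain IDs to their new workflows, and `start_to_workflow`, mapping a domain’s start-block ID to its workflow step) are illustrative assumptions, and real Voiceflow project data carries many more fields.

```python
def migrate_diagram(diagram, domain_to_workflow, start_to_workflow, is_root=False):
    """Sketch of the described migration over a simplified diagram.nodes list."""
    migrated = []
    for node in diagram["nodes"]:
        if node["type"] == "goToDomain":
            # 1. Replace goToDomain with a goToNode whose nodeID references
            #    the workflow the domain's start step previously pointed to.
            migrated.append({"type": "goToNode",
                             "nodeID": domain_to_workflow[node["domainID"]]})
        elif node["type"] == "goToNode" and node["nodeID"] in start_to_workflow:
            # 2. Re-point goToNode references from a domain's start block
            #    to the corresponding new workflow step.
            migrated.append({**node, "nodeID": start_to_workflow[node["nodeID"]]})
        elif node["type"] == "start" and not is_root:
            # 3. Drop start steps everywhere except the root/home diagram.
            continue
        else:
            migrated.append(node)
    return {**diagram, "nodes": migrated}
```

Under this model, a non-root diagram loses its start step while its outbound domain references are rewritten in place, which matches the stated goal of one entry point per workflow.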







