March 13th, 2026
Added

Show source URLs in responses

When enabled, the agent includes the source URL (if a public URL is available) alongside its response so users can verify or learn more. Recommended for help centers and documentation. This feature is available for both the knowledge base and web search system tools.

You can set the maximum number of sources to show per agent message (default: 1).
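The cap works like a filter over retrieved sources: only public URLs are surfaced, deduplicated, up to the configured maximum. A minimal sketch (the function and field names here are illustrative assumptions, not the Voiceflow API):

```python
# Hypothetical sketch: attach up to `max_sources` public source URLs to an
# agent message. `source_url` and `public` are assumed field names.

def attach_sources(message: str, chunks: list[dict], max_sources: int = 1) -> dict:
    """Collect public URLs from retrieved chunks, deduplicated and capped."""
    urls: list[str] = []
    for chunk in chunks:
        url = chunk.get("source_url")
        # Only surface sources that have a public URL.
        if url and chunk.get("public", False) and url not in urls:
            urls.append(url)
        if len(urls) == max_sources:
            break
    return {"text": message, "sources": urls}
```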
January 27th, 2026
Added

Async functions and API tools

You can now run Function and API tool steps asynchronously. Async execution allows the conversation to continue immediately without waiting for the tool to complete; no outputs or variables from the step will be returned or updated. This is ideal for non-blocking tasks such as logging, analytics, telemetry, or background reporting that don’t affect the conversation.

Note: this setting applies to the reference of the Function or API tool, either where the tool is attached to an agent or where it’s used as a step on the canvas. It is not part of the underlying API or function definition, which allows the same tool to be reused with different async behaviour throughout your project.
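Conceptually this is the classic fire-and-forget pattern: the conversation thread starts the tool and moves on without joining on a result. A rough sketch (illustrative only, not Voiceflow internals):

```python
# Fire-and-forget sketch of an async tool step: start the work in the
# background and return immediately, discarding any result.
import threading

def run_tool_async(tool_fn, *args) -> None:
    """Start the tool in a background thread and return immediately."""
    t = threading.Thread(target=tool_fn, args=args, daemon=True)
    t.start()
    # Deliberately no join() and no return value: async steps
    # return no outputs and update no variables.
```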
January 8th, 2026
Added

Tool messages

Tool messages let you define static messages that are surfaced to the user as a tool progresses through its lifecycle:
  1. Start — Message delivered when the tool is initiated
  2. Complete — Message delivered when the tool finishes successfully
  3. Failed — Message delivered if the tool encounters an error
  4. Delayed — Message delivered if the tool takes longer than a specified duration (default: 3000ms, configurable)
This provides clear, predictable feedback during tool execution, improving transparency and user trust, especially for long-running or failure-prone tools.
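The four lifecycle messages above can be sketched as a small dispatcher: Start fires immediately, Delayed fires on a timer only if the tool outlives the threshold, and Complete or Failed fires depending on the outcome. The dispatcher below is illustrative (only the message names and the 3000 ms default come from the changelog):

```python
# Sketch of tool lifecycle messages: start / delayed / complete / failed.
import threading

def run_with_messages(tool_fn, send, messages: dict, delayed_after_ms: int = 3000):
    """Run a tool, emitting static lifecycle messages via `send`."""
    send(messages["start"])
    # "Delayed" fires only if the tool takes longer than the threshold.
    timer = threading.Timer(delayed_after_ms / 1000,
                            lambda: send(messages["delayed"]))
    timer.start()
    try:
        result = tool_fn()
        send(messages["complete"])
        return result
    except Exception:
        send(messages["failed"])
        return None
    finally:
        timer.cancel()
```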
January 8th, 2026
Added

GPT 5.2

Added global support for GPT 5.2.
December 5th, 2025
Added

Voice mode in web widget

Your web widget now supports hands-free, real-time voice conversations. Enable it from the Widget tab for existing projects; it’s on by default for new ones. Users can talk naturally, see transcripts stream in instantly, and get a frictionless voice-first experience. It also doubles as the perfect in-browser way to test your phone conversations: no dialing in, just open the widget and run the full voice flow instantly.
November 27th, 2025
Added

Native web search tool

We’ve shipped a native Web Search tool so your agents can look up real-time information on the web mid-conversation—no custom integrations required.
  • Toggle on the web search tool in any agent to answer questions that need live data (news, prices, schedules, etc.).
  • Configure search prompts and guardrails so the agent only pulls what you want it to.
  • Results are summarized and grounded back into the conversation for more accurate, up-to-date answers.
November 11th, 2025
Added

Telnyx telephony integration

You can now connect your Telnyx account to import and manage phone numbers directly in Voiceflow, enabling Telnyx as your telephony provider for both inbound and outbound calls.
November 10th, 2025
Added

Native support for keypad input (DTMF)

Added native support for DTMF keypad input in phone conversations. Users can now enter digits via their phone keypad, sending a DTMF trace to the runtime. Configure timeout and delimiters (#, *) to control when input is processed. See documentation here.
  • Keypad input is off by default and can be turned on from Settings/Behaviour/Voice.
  • When on in project settings, keypad input can be turned off at the step level via the “Listen for other triggers” toggle.
  • View the full documentation here
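The timeout and delimiter settings above amount to a simple buffering rule: collect digits until a delimiter key ends the input, or submit the buffer once the inter-digit timeout elapses. A hypothetical sketch of that rule (not Voiceflow’s implementation):

```python
# Sketch of DTMF input handling: buffer digits, finish on a delimiter
# (# or *), otherwise leave the buffer for the timeout to submit.

def process_dtmf(digits: list[str], delimiters: str = "#*") -> tuple[str, bool]:
    """Return (buffered input, complete?) after consuming pressed keys."""
    buffer = ""
    for d in digits:
        if d in delimiters:
            return buffer, True  # delimiter ends input immediately
        buffer += d
    # No delimiter seen: the caller submits the buffer once the
    # configured inter-digit timeout elapses.
    return buffer, False
```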
October 31st, 2025
Added

Knowledge base metadata

Add metadata to your Knowledge Base sources to deliver more relevant, localized, and precise answers, helping customers find what they need faster and improving overall resolution speed.
  1. Adding metadata on knowledge import
When uploading files, URLs, or tabular data to the Knowledge Base, you can attach metadata at import time. This metadata is stored with each document or data chunk, enabling structured filtering and contextual retrieval later. For example, when importing car rental policies, you might tag each file with metadata like “locale”: “US, CA, EU”, or “serviceType”: “car_rental, equipment_rental”. This ensures that when the agent queries using metadata filters (static or dynamic), it only retrieves content relevant to the user’s local region or service context.
  2. Apply metadata statically or dynamically at runtime
From the Knowledge Base tool in your agent, define the metadata your agent should use when querying the tool. You can specify a static value (or variable) to consistently filter results, or let the agent dynamically assign metadata at runtime, allowing it to query the Knowledge Base contextually based on each unique conversation.

Example: a car booking service. If your Knowledge Base includes information for multiple locales (e.g., US, CA, EU), you can set a metadata field like locale. Instead of hardcoding a single locale, the agent can dynamically apply the user’s locale at runtime. If a user says “I want to book a car in New York,” the agent automatically filters Knowledge Base results with locale: US, ensuring responses only reference policies, pricing, and availability relevant to that locale.
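In effect, metadata filtering restricts retrieval to chunks whose tags satisfy the resolved filter. A minimal sketch, reusing the comma-separated "locale" tagging from the example above (the chunk shape is an assumption, not Voiceflow’s storage format):

```python
# Sketch of metadata-filtered retrieval: keep only chunks whose metadata
# satisfies every key in the filter. Values may be comma-separated lists,
# e.g. "US, CA, EU", as in the car-rental example.

def filter_chunks(chunks: list[dict], metadata_filter: dict) -> list[dict]:
    """Return chunks matching every key/value pair in the filter."""
    def matches(meta: dict, key: str, wanted: str) -> bool:
        raw = meta.get(key, "")
        return wanted in [v.strip() for v in raw.split(",")]

    return [c for c in chunks
            if all(matches(c.get("metadata", {}), k, w)
                   for k, w in metadata_filter.items())]
```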
October 29th, 2025
Added

Built-in time variables

We’ve added a set of built-in time variables that make it easier to access and use time within your agents, with no external API calls or workarounds required. Perfect for agents that depend on current or relative time inputs. The project timezone can be set in project behaviour settings.
October 20th, 2025
Added

Deepgram Flux ASR model

We’ve added Deepgram Flux, their newest ASR model built specifically for voice AI. Flux is the first conversational speech recognition model built specifically for voice agents. Unlike traditional STT that just transcribes words, Flux understands conversational flow and automatically handles turn-taking.

Flux tackles the most critical challenges for voice agents today: knowing when to listen, when to think, and when to speak. The model features first-of-its-kind model-integrated end-of-turn detection, configurable turn-taking dynamics, and ultra-low latency optimized for voice agent pipelines, all with Nova-3-level accuracy.

Flux is perfect for turn-based voice agents, customer service bots, phone assistants, and real-time conversation tools. Key benefits:
  • Smart turn detection — Knows when speakers finish talking
  • Ultra-low latency — ~260ms end-of-turn detection
  • Early LLM responses — EagerEndOfTurn events for faster replies
  • Turn-based transcripts — Clean conversation structure
  • Natural interruptions — Built-in barge-in handling
  • Nova-3 accuracy — Best-in-class transcription quality
October 17th, 2025
Added

Sync audio and text output

Converts text to speech in real time and keeps the spoken audio perfectly aligned with the displayed text. This ensures call transcripts are an accurate, word-for-word representation of what was actually said.
October 16th, 2025
Added

Transcript inactivity timeout

This setting lets you define how long a conversation can stay inactive before the transcript automatically ends. This is different from session timeout: the session stays open, but the transcript closes after the set inactivity period, enabling more accurate reporting and evaluations. Important: ending the transcript does not end the user’s ability to re-engage. If the user responds again, a new transcript will begin within the same session.
October 14th, 2025
Added

Priority processing for OpenAI models

We’ve added a new Priority Processing setting for supported OpenAI models. When enabled, your requests will be given higher processing priority for faster response times and reduced latency. Note: this will consume more credits.
September 29th, 2025
Added

MCP tools

Supercharge your agents by connecting directly to MCP servers.
  • 🔌 Connect to MCP servers in just a few clicks
  • 📥 Add MCP server tools to your agents
  • 🔄 Sync MCP servers to stay up-to-date
Bring in any tool, expand what your agents can do, and take your workflows to the next level. See the documentation.
September 23rd, 2025
Added

Call forwarding tool in agents

You can now enable your agents to forward calls to a different number, SIP address, or extension.
  • 📞 Seamlessly transfer callers to the right person or agent
  • 🔀 Supports phone numbers, SIP addresses, and extensions
  • 🛠️ Configure forwarding directly in your agent’s tools
This makes it easier to connect customers with the right destination without breaking the flow of the conversation.
September 12th, 2025
Added

Control reasoning effort for supporting GPT models

We’ve added a reasoning effort slider for all supporting GPT models (GPT-5, GPT-5 mini, GPT-5 nano, o3, and o4-mini).
September 11th, 2025
Improved
Shareable links have been upgraded to better reflect the agent you’re building. Each link now points to a hosted version of your AI agent that mirrors your selected environment (dev, staging, production) and interface, so what you share is exactly what others will experience. Password protection is also available for secure access.
  • 🔗 Shareable links now mirror your actual AI agent
  • 🛠️ Environment-specific links (dev, staging, production)
  • 🎨 Customize the look and feel via the Interfaces tab
  • 🔒 Optional password protection for secure sharing
September 11th, 2025
Added

Staging environment added

We’ve introduced a new staging environment to help you manage deployments more effectively. You can now publish between development, staging, and production to test changes before going live.
  • New staging environment for pre-production testing
  • Publish across dev → staging → production
  • More control and confidence in deployment flows
  • Override secrets per environment for greater flexibility
September 8th, 2025
Improved

Duplicating projects now clones knowledge base

You can now duplicate projects along with their entire knowledge base. When cloning a project, all connected documents and data sources are copied as well, so your new project starts with the same knowledge setup as the original. This enhancement only applies to project duplication; knowledge bases are not yet cloned when using project import.
August 28th, 2025
Added

Control saving of empty transcripts

You can now choose whether to save transcripts where the bot spoke but the user never replied. Use this toggle to keep your transcript logs cleaner and focused on real interactions. By default, all new projects will save all conversations to transcripts.
August 21st, 2025
Added

Save input variables in tool calls

Prior to this release, you could only capture the output of a tool call (e.g., the response from an API). Now, you can also persist the inputs (the parameters sent to the tool) as Voiceflow variables. This means both sides of the transaction, request and response, can be tracked, reused, or referenced later in the conversation.
August 14th, 2025
Added

GPT-5 models

GPT-5 models are now available in Voiceflow.
August 14th, 2025
Improved

Double-click to open agent step

You can now double-click an agent step to jump straight into its editor, saving yourself an extra click.
August 5th, 2025
Added

Tool step

You can now run tools outside of the agent step using the new Tool step. This lets you trigger any tool in your agent, like sending an email or making an API call, anywhere in your workflows. 🛠️ You’ll find the Tool step in the ‘Dev’ section of the step menu for now. Tools can also be used as actions.
August 5th, 2025
Added

New analytics API

A few months ago, we released a new analytics view, giving you deeper insights into agent performance, tool usage, credit consumption, and more. Today, we’re releasing an updated Analytics API to match. This new version gives you programmatic access to the same powerful data, so you can:
  • Track agent performance over time
  • Monitor tool and credit usage
  • Build custom dashboards and reports
Use the new API to integrate analytics directly into your workflows and get the insights you need, where you need them.
July 29th, 2025
Added

Custom query control & chunk limit for knowledge base tool

You now have more control over how your agents retrieve knowledge. Customize the query your agent uses to search the knowledge base, and fine-tune the chunk limit to better match your content. This gives you more precision, better answers, and smarter agents.
July 28th, 2025
Added

Better transcripts. Custom evaluations. Better AI agents.

Your AI agents just got a massive upgrade.
🔥 What’s new
  • Transcripts, reimagined – Replay calls, debug step-by-step, filter with precision, and visualize user actions like button clicks — all in a faster, cleaner UI.
  • Evaluations, your way – Define what “good” looks like with customizable evaluation templates, multiple scoring types (rating, binary, text), auto-run support, and performance tracking over time.
📝 Transcripts
A full overhaul of the transcripts experience, built to help teams analyze, debug, and improve agents faster.
  1. Call recordings – Replay conversations to hear how your agent performs in the real world
  2. Robust debug logs – Trace agent decisions step-by-step
  3. Granular filtering – Slice data by time, user ID, evaluation result, and more
  4. Button click visualization – See exactly where users clicked in the conversation
  5. Cleaner UI – Faster load times, more usable data
📊 Evaluations
Define what “good” looks like, and measure it your way. Build (or generate) your own evaluation criteria, tailor analysis to your business goals, and iterate with confidence. Eval types supported:
  1. ⭐ Rating evals (e.g. 1–5)
  2. ✅ Binary evals (Pass/Fail)
  3. 📝 Text evals (open-ended notes)
Also includes:
  • Batch or auto-run – Evaluate hundreds of transcripts in a few clicks, or automatically as they come in
  • Analytics & logs – See detailed results per message or overall trends over time
APIs
  • We’ve released a brand new Evaluations API
  • We’ve released a new Transcripts API. The legacy Transcripts API is still supported and currently has no deprecation timeline.
🕓 Transition period: until September 28, 2025, all transcripts will be available in the legacy view for existing projects. On September 28th, 2025 the old view will be hidden and the new transcripts view will be the default. Transcripts older than 60 days will still be accessible via API for the foreseeable future.
July 28th, 2025
Added

Gmail tools: let your AI agents send emails

Agents can now send emails seamlessly as part of any conversation. Whether it’s a confirmation, follow-up, or lead nurture message, the new Send Email tool makes it easy to automate communication right from your agent. Just connect your Gmail account and you’re ready to go. Make sure to instruct your agent on how to use this tool properly. Give it a try in the agent step!
July 25th, 2025
Added

Call forwarding step

Seamlessly connect your voice AI agent to the real world with call forwarding. The new call forwarding step lets your AI agent hand off calls to a real person (or another AI agent), instantly and smoothly.
  • ✅ Route to phone numbers
  • ✅ Include optional extensions
  • ✅ Support for SIP addresses
Build smarter, more human-ready voice agents without sacrificing automation. 🛠️ You’ll find the call forwarding step in the ‘Dev’ section of the step menu for now. We’re planning to introduce a dedicated voice section soon. Stay tuned!
July 23rd, 2025
Added

HubSpot tools

Connect your agents to HubSpot to create contacts, leads, and tickets.
July 23rd, 2025
Added

SMS messaging with Twilio tools

Enable your agents to send SMS messages with an effortless connection to Twilio. Try it now in the agent step.
July 15th, 2025
Added

Create AI agents instantly — from just a prompt

We’ve made building AI agents dramatically faster. You can now generate a fully-functional agent by simply describing what you want it to do. No setup. No manual flow-building. Just write a detailed prompt, and Voiceflow will generate everything for you:
  • ✅ Agent instructions
  • ✅ Tools and workflows
  • ✅ Conversation logic and components
This means less time configuring, more time testing and refining your agent behavior. Today, we’re launching:
  1. Prompt-to-project generation – go from idea to working prototype in seconds
  2. Prompt-to-workflow generation – describe a capability, get a complete workflow
  3. Prompt-to-component generation – create specific tools and logic on the fly
This is a foundational leap in how AI agents get built on Voiceflow. We can’t wait to see what you create.
July 10th, 2025
Added

Vonage integration for telephony

Voiceflow now supports importing phone numbers from Vonage as an alternative to Twilio. Vonage offers a minor latency improvement (~200-400ms) over Twilio, for more responsive calls. For more information: https://dashboard.nexmo.com/ and https://www.vonage.ca/en/communications-apis/voice/
July 9th, 2025
Added

Smarter knowledge base building with LLM chunking strategies

Your Knowledge Base just got a major upgrade. With our new LLM chunking strategies, you can now prep your data for AI like a pro, with no manual formatting needed. We’ve introduced 5 powerful strategies to help structure and optimize your content for maximum retrieval performance:
  • 🧠 Smart chunking: Automatically breaks content into logical, topic-based sections. Ideal for complex documents with multiple subjects.
  • ❓ FAQ optimization: Generates sample questions per section, perfect for creating high-impact FAQs.
  • 🧹 HTML & noise removal: Cleans up messy website markup and boilerplate. Best used on content pulled from the web or markdown.
  • 📝 Add topic headers: Inserts short, helpful summaries above each section. Great for longform content that needs context.
  • 🔍 Summarize: Distills each section to its key points, removing fluff. Perfect for dense reports or research.
These chunking strategies help you get more accurate, more relevant answers from your AI, especially for data sources not originally built for Retrieval-Augmented Generation (RAG). Ready to make your Knowledge Base smarter? Try out some LLM chunking strategies and watch the results speak for themselves. Note: LLM chunking strategies use credits. Before processing, we’ll show you a clear estimate of how many credits will be used, so you’re always in control.
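To make the idea of topic-based chunking concrete, here is a plain heuristic that splits a document into sections at markdown-style headings. Voiceflow’s smart chunking is LLM-based; this sketch only illustrates the kind of output it aims for:

```python
# Heuristic sketch of topic-based chunking: start a new chunk at every
# markdown-style '#' heading so each chunk covers one topic.

def chunk_by_headings(text: str) -> list[str]:
    """Split text into chunks, opening a new chunk at each heading line."""
    chunks: list[str] = []
    current: list[str] = []
    for line in text.splitlines():
        if line.startswith("#") and current:
            chunks.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current).strip())
    return chunks
```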
July 8th, 2025
Added

Make.com tool

Connect your agents to Make.com with a couple of clicks to run your automations from your Voiceflow AI agents.
July 8th, 2025
Added

Airtable tools

Connect your agents to Airtable with a couple of clicks. Supported tools include: Create records, Delete records, Get record, List records, Update records.
July 2nd, 2025
Added

Agents can now automatically use buttons, cards, and carousels to enrich conversations

By enabling these options and providing guidance on when to use (or avoid) each special tool, your agent will intelligently enhance interactions with visual tools like buttons, cards, and carousels. Note: these configurations are ignored during phone-based conversations, so they will not limit your ability to create multi-modal AI agents with Voiceflow.
June 28th, 2025
Deprecated

[Deprecation] Dialog Manager API Logs

Legacy log traces, which are sent when the query parameter ?log=true is used, are no longer supported. This system has not been updated for a significant period and is out of date, especially with new steps. Log traces will no longer be returned after Friday, July 4th, 2025. This affects a small subset of users and should not impact the output or performance of an agent. Going forward, logging will be unified with a more robust debug trace system, along with a new debugger UI.
June 18th, 2025
Added

New Speech-to-Text Providers

  • Added Cartesia’s Ink-Whisper STT model. This leverages OpenAI’s Whisper model, upgraded for realtime call performance, with expanded language support and selection.
  • Added the AssemblyAI Universal STT model, with advanced tuning options.
  • Added specific model selection for Deepgram STT: Nova-2, Nova-3, and Nova-3 Medical.
June 16th, 2025
Added

Google Sheets tools

Connect your agents to Google Sheets with a couple of clicks. Supported tools include: Add to sheet, Create new sheet, Get rows, Get sheet, Update sheet.
June 10th, 2025
Added

Cartesia voices

We’ve added Cartesia to Voiceflow. You can select from over 100 new voices across two Cartesia models (Sonic 2 & Sonic Turbo).
June 5th, 2025
Added

Added tool usage to project analytics

We’ve added all tool types to your project’s analytics dashboard:
  • Integration tools
  • API tools
  • Function tools
You can now see the number of times each tool has been used, along with the average latency and success/failure rate when you hover over a specific tool.
June 4th, 2025
Improved

New workspace dashboard

  • We’ve made updates to the workspace dashboard to make it easier to organize your projects, and manage your workspace.
  • We’ve added folders to further organize your projects. Note: if you previously used the Kanban view (deprecated), we’ve automatically converted swim-lanes into folders.
  • Home tab (coming soon)
  • Community tab (coming soon)
  • Tutorials tab (coming soon)
May 28th, 2025
Added

New navigation

We’ve listened to your feedback and made Voiceflow easier to navigate. It’s the same Voiceflow, just faster to get around!
May 28th, 2025
Added

Claude Opus 4 & Sonnet 4

We’ve added Claude Opus 4 & Claude Sonnet 4 to Voiceflow.
May 28th, 2025
Added

Gemini 2.5 Pro & 2.5 Flash

We’ve added Gemini 2.5 Pro & 2.5 Flash to Voiceflow.
May 23rd, 2025
Added

Security Settings for Widget

  • Ability to whitelist domains
  • Ability to have a custom privacy message before users engage with your AI agent
  • Ability to not save transcripts
May 23rd, 2025
Added

Generative No Reply

Use generative no-reply to dynamically re-engage users who haven’t responded in a while. Responses will be contextual to the conversation.
May 16th, 2025
Added

API Raw Content-Type select

The API (agent) tool and step now have a content-type option on POST requests with a “Raw” body. This automatically applies the Content-Type header, as a quality-of-life convenience.
May 16th, 2025
Added

Rimelabs Arcana Voices

Rimelabs recently released a new set of Arcana voices that sound far more natural, with intonations and speech patterns such as breathing and pauses. Arcana is still under development and we are working with the Rimelabs team to improve it; we’re aware of some issues with consistency and slurring of speech. Arcana adds ~250ms of latency to the voice pipeline, roughly the same as 11labs. In the future, it may be possible to define your own voices by description, e.g. “old man with a hoarse southern accent”.
May 16th, 2025
Added

Krisp Noise Cancellation

Latency is one piece of the puzzle, but quality matters too. That’s why we’ve added Krisp.

Background noise, especially speech or music, can seriously throw off voice agents. STT systems transcribe everything they hear, so voices in a coffee shop or lyrics from background music can easily get mistaken for the user’s input, leading to weird or incorrect responses. It can also confuse the agent into thinking the user isn’t done talking, delaying responses or interrupting playback. In short: noise kills both quality and speed.

All voice projects (web-voice widget and Twilio) automatically have Krisp noise cancellation applied. [Spectrograms: the upper one visualizes the audio that would be heard by STT without Krisp; the lower one shows the audio after being processed with Krisp.]

Through our testing, we’ve determined that this significantly boosts the accuracy of speech detection and transcription in noisy environments: cafes, offices, on the street, background broadcasts, etc. Krisp noise cancellation adds ~20ms of latency to the audio pipeline, while drastically improving speech detection and transcription accuracy. This ultimately leads to faster final transcriptions, reducing overall speech-to-speech latency by ~100ms.
May 15th, 2025
Added

Salesforce tools

We’ve added Salesforce tools to the agent step. You can now authenticate with Salesforce and add tools to enable your agent to get work done in Salesforce.
May 15th, 2025
Added

Zendesk tools

We’ve added Zendesk tools to the agent step. You can now authenticate with Zendesk and add tools to enable your agent to get work done in Zendesk.
May 5th, 2025
Added

Voice Keywords / Multilingual Speech-to-text

Keywords
For voice calls we’re introducing keywords. This allows your agent to understand hard-to-pronounce proper nouns (like product and company names), industry jargon, phrases, and more. This is an optional field.

Multilingual
We’re exposing Deepgram’s latest Nova-3 multilingual model as an STT option, capable of understanding and transcribing 8 different languages. In addition, the standard English STT is being updated from Nova-2 to Nova-3, for a boost in performance.
April 29th, 2025
Added

Introducing Voiceflow Credits: A simpler way to track usage

Today marks a significant milestone in Voiceflow’s journey as we officially launch our new credit-based billing system. This update represents a fundamental shift in how you’ll track, manage, and optimize your Voiceflow usage, all designed to bring greater simplicity, transparency, and predictability to your experience.

What’s New

🎉 Voiceflow Credits
We’ve completely overhauled our billing system, moving away from the complex token-based approach to a streamlined credit system that unifies tracking across all platform features:
  • Simplified Measurement: One unified credit system for all actions (calls, messages, LLM responses, TTS)
  • Predictable Costs: Clear pricing tiers that make budget planning straightforward
  • Transparent Usage: Detailed visibility into exactly how your credits are being consumed
  • Developer-Friendly: Messages only count toward credits when your agent is used in production—not when developing in-app or using shareable prototypes
📊 New Usage Dashboard
We’ve launched a brand-new Usage Dashboard that gives you comprehensive insights into your credit consumption. The dashboard allows you to:
  • View your total available and used credits
  • Track usage across all agents or drill down into specific ones
  • Monitor editor and agent allocation
  • Analyze usage patterns over time
💼 Enhanced Team Management
Additional editor seats are now just $50 per month with no complicated caps or restrictions. Add as many team members as needed, whenever you need them.

🏢 Business Plans (formerly Teams)
As part of this update, we’re renaming our Teams plans to Business plans, with enhanced features and capabilities for enterprise customers.

Resources to Help You Transition
We’ve created dedicated resources to help you understand and make the most of the new credit system.

Frequently Asked Questions

What do I need to do? Use our Credit Calculator to understand your usage. For most users, no action is required.

Will my monthly bill increase? Most organizations will see a decrease in costs, particularly those with multiple editor seats. There are three changes to be aware of:
  • Annual plans now offer a 10% discount (previously 20%)
  • Editor seats now cost $50/month with no restrictions (a price reduction)
  • Business plan (formerly Teams) base tier increases from $125 to $150
Do credits roll over? Credits expire at the end of your subscription period. For monthly plans, unused credits don’t roll over month-to-month. Annual subscribers receive all credits at once to use throughout the year.

What happens if I exceed my credit allocation? You’ll receive a notification as you approach your limit. There’s no automatic charging; you can choose whether to upgrade to a higher credit package.

Do messages in development count toward my credit usage? No, messages only count toward credits when your agent is used in production. Messages sent while developing in-app or when using shareable prototypes don’t consume credits, giving you the freedom to build and test without worrying about credit usage.

We’re committed to making this transition as smooth as possible. If you have any questions or need assistance, please reach out to support@voiceflow.com.
April 25th, 2025
Added

Support for OpenAI o3 and o4 mini

Added:
  • Support for OpenAI o3 and o4 mini
April 16th, 2025
Added

Support for GPT 4.1 models

Added:
  • Support for GPT 4.1, GPT 4.1 mini and GPT 4.1 nano
April 11th, 2025
Fixed

Minor Updates / Fixes

Improvements:
  • Voice Widget latency decreased by up to 750ms
  • Voice Widget now streams with more consistent linear16@16kHz encoding
Fixes:
  • Reset memory when a new conversation is launched (launch request)
  • Global no reply not working on Agent steps
  • The maximum allowed length for {userID} in the Dialog API will be set to 128 characters, effective April 18th
  • Unable to remove webhook URLs
  • Analytics visualization UI bug
  • Voice Widget always setting userID to “test” on transcripts
  • Chat Widget no audio output after page reload
  • Export variables fails when project has large number of variables
April 7th, 2025
Added

Call Events Webhook

Changes:
  • Added support for subscribing to call events via webhook, for both Twilio IVR and Voice Widget projects. See the Call Events documentation. The webhook system is capable of broadcasting additional events in the future.
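A webhook subscriber is just an HTTP endpoint that parses each event and routes on its type. The payload shape below (`event`, `callID` fields) is an assumption for illustration; consult the Call Events documentation for the real schema:

```python
# Hypothetical sketch of a call-events webhook handler. The field names
# "event" and "callID" are assumed, not the documented schema.
import json

def handle_call_event(body: str) -> str:
    """Parse a webhook POST body and route on the event type."""
    event = json.loads(body)
    kind = event.get("event", "unknown")
    if kind == "call.ended":
        # e.g. trigger post-call analytics or CRM updates here
        return f"logged end of call {event.get('callID', '?')}"
    return f"ignored {kind}"
```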
April 1st, 2025
Improved

Streaming Text in Chat Widget Now Optional

Changes:
  • Added an option to disable streaming text in the chat widget. Streamed text can now be turned off in the Modality & interface settings. When disabled, the full agent response is displayed at once instead of being streamed out, which is useful when streaming longer messages is not desired.
March 31st, 2025
Added

Max Memory Turns Setting

Conversation memory is a critical component of the Agent and Prompt steps. Longer memory gives the LLM more context about the conversation so far, letting it make better decisions based on previous dialog turns. However, larger memory adds latency and consumes more input tokens, so there is a trade-off.

Previously, memory was always set to 10 turns. All new projects now default to 25 turns in memory, and the value can be adjusted in the settings, up to 100 turns.

For more information on how memory works, see: https://docs.voiceflow.com/docs/memory
March 31st, 2025
Update

Agent Step, Structured Output Improvements, Gemini 2.0 Flash

We’re excited to introduce several major updates that enhance the capabilities of the Agent step and expand our model offerings. These improvements provide more flexibility, control, and opportunities for creating powerful AI agents.

🧠 Agent Step: Your All-in-One Solution

The Agent step has been supercharged to create AI agents that can intelligently respond to user queries, search knowledge bases, follow specific conversation paths, and execute functions—all within a single step. Key features include:
  • Intelligent Prompting: Craft detailed instructions to guide your agent’s behavior and responses.
  • Function Integration: Connect your agent with external services to retrieve and update data.
  • Conversation Paths: Define specific flows for your agent to follow based on user intent.
  • Knowledge Base Integration: Enable your agent to automatically search your knowledge base for relevant information.
For a comprehensive guide on using the Agent step, check out our Agent Step Documentation.

🎨 Expanded Support for Structured Output

We’ve significantly expanded our support for structured output, unlocking more use cases and giving you greater control over your agent’s responses:
  • Arrays and Nested Arrays: You can now define arrays and nested arrays in your output structure.
  • Nested Objects: Structured output now supports nested objects, allowing for more complex data structures.
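As an illustration, array and nested-object support means a structured output can now describe a shape like the following; the interface and field names are invented for the example:

```typescript
// Hypothetical example of the kind of structure an agent can now return
// with array and nested-object support; the field names are illustrative.
interface OrderSummary {
  customer: {
    name: string;
    email: string;
  };
  items: Array<{
    sku: string;
    quantity: number;
  }>;
}

const example: OrderSummary = {
  customer: { name: "Ada", email: "ada@example.com" },
  items: [
    { sku: "SKU-1", quantity: 2 },
    { sku: "SKU-2", quantity: 1 },
  ],
};
```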
These enhancements enable you to create more sophisticated agents that generate highly structured and detailed responses, reducing the risk of hallucinations and ensuring more accurate outputs.

⚡ Gemini 2.0 Flash Support

We’ve added support for the Gemini 2.0 Flash model, offering you even more options for powering your AI agents. Gemini 2.0 Flash delivers exceptional performance and speed, enabling faster response times and improved user experiences.

To start using Gemini 2.0 Flash, simply select it from the model dropdown when configuring your Agent step.

We can’t wait to see what you’ll build with these new features and capabilities! As always, we welcome your feedback and suggestions as we continue to improve our platform.

Happy building! 🛠️

The Voiceflow Team
March 28th, 2025
Improved

Variable Handling Update: Consistent Behavior for Undefined Values

Changes:
  • Updated Voiceflow variable handling for consistency in previously undefined behavior: Variables can be any JSON-serializable JavaScript value. Any variable set to undefined will be saved as null (this conversion happens at the end of the step, so it does not affect the internal workings of JavaScript steps and functions). Functions can now return null (rather than throwing an error) and can no longer return undefined, which could cause agents to crash; functions that attempt to return undefined will now return null to ensure backwards compatibility.
These changes will go into effect March 31st.
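The new behavior boils down to one rule. A minimal sketch of the conversion applied at the end of a step; this is our illustration, not Voiceflow’s internal code:

```typescript
// Sketch of the variable-normalization rule described above: undefined
// becomes null at the end of the step; every other JSON-serializable
// value is stored unchanged. Not Voiceflow's actual implementation.
function toStoredValue<T>(value: T | undefined): T | null {
  return value === undefined ? null : value;
}
```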
March 6th, 2025
Improved

New Analytics Dashboard: Gain Deeper Insights into Your Agent’s Performance

We’ve revamped our Agent Analytics Dashboard, not only giving it a fresh new look but also introducing a range of powerful visualizations that provide unprecedented visibility into your agent’s performance.

🌟 New Visualizations

The updated Analytics Dashboard offers a comprehensive set of visualizations that allow you to track and analyze various aspects of your agent’s performance:
  • Tokens Usage: Monitor AI token consumption over time across all models, giving you a clear picture of your agent’s token utilization.
  • Total Interactions: Keep track of the total number of interactions (requests) between users and your agent over time, providing insights into engagement levels.
  • Latency Monitoring: Measure the average response time of your agent to ensure optimal performance and identify any potential bottlenecks.
  • Total Call Minutes: Gain visibility into the cumulative duration of voice calls in minutes, helping you understand the volume and significance of voice interactions.
  • Unique Users: Identify the count of distinct users interacting with your agent over time, allowing you to track adoption and growth.
  • KB Documents Usage: Analyze the frequency of knowledge base document access, with the ability to toggle between ascending and descending order to identify the most or least used documents.
  • Intents Usage: Visualize the distribution of triggered intents, with sorting options to analyze intent frequency and identify popular or underutilized intents.
  • Functions Usage: Monitor the frequency of function calls, their success/failure and latency, with sorting capabilities to identify the most or least used functions and optimize your agent’s functionality.
  • Prompts Usage: Gain insights into the usage frequency of agent prompts, with the ability to toggle between ascending and descending order to analyze prompt utilization and effectiveness.
📅 Data Availability

Please note that the new Analytics Dashboard service only has data starting from February 9th, 2025. If you require data prior to that date, you can still access it through our Analytics API.

🔧 Upcoming Analytics API Update

We’re also working on a new version of the Analytics API that will include the additional data points tracked by the new Analytics Dashboard service. Stay tuned for more information on this exciting update!
February 27th, 2025
Added

New Models, Function Editor Enhancements, and Call Recording

We’re thrilled to announce several exciting updates that expand your AI agent building capabilities and improve your workflow. Let’s dive into what’s new!

🧠 New Models: Deepseek R1, Llama 3.1 Instant, and Llama 3.2

We’ve expanded our model offerings to give you even more options for creating powerful AI agents:
  • Deepseek R1: Harness the potential of Deepseek’s R1 model for enhanced natural language understanding and generation.
  • Llama 3.1 Instant: Experience lightning-fast responses with the Llama 3.1 Instant model.
  • Llama 3.2: Leverage the advanced capabilities of Llama 3.2.
These new models are available on all paid plans.

⚙️ Function Editor Enhancements: Modal View and Snippets

We’ve made some significant improvements to the Function Editor to streamline your development process:
  • Modal View: You can now open the Function Editor as a modal directly from the canvas. This allows you to make quick updates and navigate between your functions and the canvas seamlessly.
  • Snippets: We’ve introduced a new snippets feature that enables you to insert pre-written code snippets for common concepts in Voiceflow functions.
📞 Call Recording for Twilio Phone Calls

We’re excited to introduce call recording functionality for phone calls made through Twilio:
  • Automatic Call Recording: All phone calls between users and your AI agent will now be automatically recorded.
  • Twilio Integration: The call recordings will be accessible directly in your Twilio account for easy review and management.
You can enable this option in the Agent Settings page under Voice.
February 27th, 2025
Added

Retrieval-Augmented Generation (RAG) for Intent Recognition

We’re excited to announce a significant upgrade to our intent recognition system, moving from the traditional Natural Language Understanding (NLU) approach to a Retrieval-Augmented Generation (RAG) model using embeddings. This transition brings notable improvements to the speed, accuracy, and overall user experience when interacting with AI agents on our platform.

📅 Phased Rollout

To ensure a smooth adoption, we will be rolling out the RAG-based intent recognition system to all users in phases over the next week. This gradual deployment allows us to monitor performance and gather feedback while providing ample time for you to adjust to the new system.

🆕 Default for New Projects

For all new projects created on our platform, RAG-based intent recognition will be the default system. This means that new AI agents will automatically benefit from the enhanced speed, accuracy, and natural conversation capabilities offered by RAG.

🌟 Faster Training and Interaction

With the new RAG system, agent training and intent recognition are now substantially faster and more efficient. For example, an agent with 37 intents and 305 utterances now trains about 20 times faster, in just around 1 second. This means quicker agent development and smoother conversations for end-users.

🧠 Automatic Agent Training

Thanks to the advanced training speed enabled by RAG, explicit training is no longer necessary. Simply test your agent, and the training will happen automatically behind the scenes, streamlining your workflow.

🎯 Enhanced Understanding of Complex Queries

RAG leverages embeddings to capture the deeper context and meaning behind words, even when phrased differently.
This allows the system to better understand and accurately match complex, detailed questions to the appropriate intents, providing more precise responses to users.

🗣️ More Natural Conversations

With the improved understanding of casual language, slang, and diverse phrasing, the RAG system enables a more natural, conversational experience for users interacting with AI agents on our platform.

🔄 Seamless Transition for Existing Projects

For existing projects, we will keep both the NLU and RAG systems running concurrently for a period of time. This allows you to explore the new system, test it thoroughly, and make any necessary adjustments to your agents. You can easily switch between the NLU and RAG systems in the intent classification settings within the Intents CMS.

We’re thrilled to bring you this enhanced experience and look forward to hearing your feedback as you interact with the new RAG-based intent recognition system. Your input is invaluable in helping us continue to innovate and improve our platform to better serve your needs.
February 20th, 2025
Added

Expanding the Possibilities of User Interaction with Voice

In our mission to redefine how users interact with AI agents, we have introduced a new voice modality option to our web widget. This addition is a step towards creating more natural, intuitive, and engaging user experiences. By enabling voice-based conversations, we are empowering businesses to connect with their customers in a way that feels authentic and effortless.

Voice technology has become an increasingly popular and preferred mode of interaction for many users. By integrating voice functionality into our web widget, we are meeting users where they are and providing them with a seamless way to engage with AI agents. This not only enhances the user experience but also opens up new possibilities for businesses to assist, inform, and guide their customers throughout the customer journey.

Natural Voice Interaction

The web widget now supports voice-based communication, allowing users to speak naturally with AI agents. Businesses can integrate this feature to provide their customers with a hands-free, intuitive way to ask questions, receive recommendations, and get assistance while browsing the site.

Customization Options

The voice widget offers customization options to ensure seamless integration with your website’s branding:
  • Launcher Style: Select a launcher style that complements your site’s design.
  • Color Palette: Choose colors that match your brand guidelines.
  • Font Family: Pick a font that aligns with your website’s typography.
These options allow you to maintain a consistent brand experience across all customer touchpoints.

Powered by Advanced Voice Tech

The voice functionality in the widget leverages the best in voice technologies to deliver high-quality conversations:
  • Automated Speech Recognition: Our platform uses advanced STT technology from Deepgram to accurately transcribe user speech in real-time.
  • Organic Text-to-Speech: We’ve integrated with leading providers like ElevenLabs and Rime to offer a variety of natural-sounding voices that bring AI agents to life.
These technologies ensure that conversations with AI agents feel authentic, engaging, and representative of your brand’s personality.

Start Exploring Voice

We invite all our users to start experimenting with the voice capabilities. As you explore voice functionality, we value your feedback and ideas: join our Discord community! Your input plays a crucial role in shaping the future of voice-based interactions in the web widget and helping us refine the user experience.
February 11th, 2025
Added

AI Fallback

We’re excited to introduce AI Fallback, a powerful new feature in beta that enhances the reliability and continuity of your AI operations. This feature ensures your AI services remain operational even during provider outages or service interruptions.

🔄 Automatic Fallback Switching

AI Fallback automatically switches between models when issues arise. When your primary AI model experiences difficulties, the system seamlessly transitions to your configured backup model, ensuring continuous operation of your AI services.

⚙️ Easy Configuration

Setting up AI Fallback is straightforward:
  1. Access your agent
  2. Navigate to agent settings
  3. Set your preferred fallback model by provider
That’s all there is to it! The system handles everything else automatically.

📈 Enhanced Reliability

AI Fallback delivers key benefits:
  • Minimizes service disruptions during model outages
  • Maintains consistent AI performance
  • Reduces operational impact of provider issues
  • Ensures business continuity
🔬 Under the Hood

The system continuously monitors your primary AI model’s performance and availability. When issues are detected, it automatically:
  • Identifies the next available model in your sequence
  • Switches ongoing operations to the backup model
  • Returns to the primary model once issues are resolved
🚀 Getting Started

AI Model Fallback is available exclusively for Teams and Enterprise customers. We’re excited to hear your feedback during the beta phase! 🎯
February 3rd, 2025
Added

New Features: Structured Outputs and Variable Pathing

Today we’re introducing two powerful new capabilities in Voiceflow: Structured Outputs and Variable Pathing. These features expand the possibilities for working with data from large language models (LLMs) in your agents. Let’s explore what they enable!

🎉 Structured Outputs

Structured Outputs let you define the format of the data you expect an LLM to return, giving you more control and predictability over the results.
  • In a prompt step, enable the new “JSON Output” option to specify the structure of the LLM’s response.
  • Today, Structured Outputs support the following data types: String, Number, Boolean, Integer, and Enum.
  • Support for arrays and nested objects is planned for the near future.
  • Structured Outputs are available with gpt-4o-mini and gpt-4o models.
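A rough TypeScript analogue of a "JSON Output" structure using the supported types; the interface and field names are invented for the example, and integers have no distinct TypeScript type:

```typescript
// Rough analogue of the data types a "JSON Output" structure can contain
// today (String, Number, Boolean, Integer, Enum). Field names are
// illustrative, not part of any Voiceflow schema.
type Sentiment = "positive" | "neutral" | "negative"; // Enum

interface TicketSummary {
  subject: string;     // String
  score: number;       // Number
  resolved: boolean;   // Boolean
  replyCount: number;  // Integer (represented as a number in TypeScript)
  sentiment: Sentiment;
}

const sample: TicketSummary = {
  subject: "Refund request",
  score: 0.87,
  resolved: false,
  replyCount: 3,
  sentiment: "neutral",
};
```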
💪 Variable Pathing

Variable Pathing provides a streamlined way to work with complex data structures in your Voiceflow project.
  • Store an entire object in a single variable, then access its properties using dot notation (e.g. user.name, user.email).
  • Capture Structured Output responses or API results as objects.
  • Use object properties directly in conditions, messages, and other steps.
  • Reduce the need for multiple variables to represent a single entity.
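Dot-notation access over an object variable works like ordinary JavaScript property access. A quick sketch with an invented user object:

```typescript
// Variable Pathing sketch: store one object in a single variable, then
// read its properties with dot notation (user.name, user.plan.tier), as
// in plain JavaScript. The user object here is illustrative.
const user = {
  name: "Jordan",
  email: "jordan@example.com",
  plan: { tier: "pro", seats: 5 },
};

const greeting = `Hi ${user.name}, your ${user.plan.tier} plan has ${user.plan.seats} seats.`;
```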
🍰 Bringing it All Together

Combining Structured Outputs and Variable Pathing opens up new design patterns for crafting agent experiences:
  • Define precise data requirements for LLMs to provide relevant information
  • Capture responses as feature-rich objects in a single step
  • Access and manipulate object properties throughout your project
  • Streamline your project’s design while expanding its capabilities
We’re excited to see the experiences you create with these new tools! Feel free to share your questions and feedback with us.
January 23rd, 2025
Added

Voiceflow Telephony

We’re excited to announce the release of Voiceflow Telephony, bringing enterprise-grade voice capabilities to your conversational experiences. This release represents a significant milestone in our mission to provide comprehensive, low-latency voice solutions for businesses of all sizes.

Native Twilio Integration

We’ve integrated with Twilio to make phone-based interactions as simple as possible. The new integration allows you to:
  • Import existing Twilio phone numbers directly into Voiceflow
  • Associate phone numbers with specific agents
  • Configure separate numbers for development and production environments
  • Test different versions of your agent against different phone numbers
Setting up telephony is straightforward: simply connect your Twilio account with existing phone numbers, import them into Voiceflow, and assign them to your agents. Your voice experience will be live within minutes.

High-Performance Voice

Streaming Technology

We’ve built our telephony feature on top of our streaming API, delivering exceptional performance improvements:
  • Dramatically reduced response times
  • Near real-time agent reactions
  • Optimized voice processing pipeline
Speech Recognition

We’ve selected Deepgram to provide industry-leading Speech To Text (STT):
  • High-accuracy transcription
  • Low-latency processing
  • Support for over 20 languages
Advanced Voice Capabilities

Outbound Calling

We’ve introduced powerful outbound calling capabilities:
  • Programmatically initiate calls to any phone number
  • Test outbound calls directly from the Voiceflow interface
  • Integrate outbound calling into your existing workflows
Voice Technology Stack

Our comprehensive voice stack includes:
  • Premium text-to-speech voices from industry leaders such as ElevenLabs, Rime, and Google
  • Support for advanced telephony features through custom actions: call forwarding, DTMF handling, and interruption behaviour
Voice Experience Configuration

We’ve exposed detailed configuration options to fine-tune your voice experiences:

Audio Settings
  • Background audio customization
  • Audio cue configuration
Interaction Parameters
  • Interruption threshold controls
  • Utterance end detection
  • Response timing optimization
  • User input acceptance timing
Beta Program Details

Access and Limitations

During the beta period, all users will have access to telephony features, subject to concurrent call limits.

Coming Soon
  • Enhanced call analytics and reporting
  • Additional voice customization options
January 22nd, 2025
Added

New AI-Native Webchat

We’re excited to announce a complete reimagining of the Voiceflow webchat experience. This new version introduces AI-native capabilities, enhanced customization options, and flexible deployment methods to help you create more engaging conversational experiences.

AI-Native

Our webchat has been rebuilt from the ground up to provide a more natural, AI-driven conversation experience:
  • Streaming Text Support: Experience real-time message generation with word-by-word streaming, creating a more engaging and dynamic conversation flow. Users can see responses being crafted in real time, similar to popular AI chat interfaces.
  • AI Disclaimers: Built-in support for displaying AI disclosure messages and customizable AI usage notifications to maintain transparency with your users.
Enhanced Customization

We’ve significantly expanded the customization capabilities to give you more control over your chat interface:

Interface Types

You can now choose from three distinct interface modes:
  • Widget: Traditional chat window that appears in the corner of your website
  • Popover: Full-screen chat experience that overlays your content
  • Embed: Seamlessly integrate the chat interface directly into your webpage layout
Visual Customization

The new version introduces comprehensive styling options:
  • Color System: Expanded colour palette support with primary, secondary, and accent colour definitions
  • Typography: Custom font family support
  • Launcher Variations: Classic bubble launcher with customizable icons, or button-style launcher with text support
Important Notes
  • Chat Persistence: Now configured through the snippet rather than UI settings.
  • Custom CSS: Maintained compatibility with most existing class names.
  • Proactive Messages: Temporarily unavailable in this release, with support coming soon
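Since persistence moved into the snippet, the widget configuration is worth sketching. The `load()` call with `verify`, `url`, and `versionID` follows the standard Voiceflow embed snippet; the `persistence` key is our assumption about the option name, so check the webchat documentation for the exact field:

```typescript
// Sketch of a snippet-level configuration for the new webchat. The
// verify/url/versionID fields follow the standard Voiceflow embed
// snippet; the `persistence` key is an assumed name for the new
// snippet-level persistence setting described above.
const chatConfig = {
  verify: { projectID: "YOUR_PROJECT_ID" }, // placeholder project ID
  url: "https://general-runtime.voiceflow.com",
  versionID: "production",
  persistence: "localStorage", // assumed key; verify in the webchat docs
};

// In the browser, after loading the widget bundle:
// window.voiceflow.chat.load(chatConfig);
```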
You can find more details here.

Migration

For detailed instructions on migrating from the legacy webchat, please refer to our Migration Guide.
January 17th, 2025
Added

Function libraries, starter templates and ElevenLabs support

Function Libraries

Integrate your agent with your favorite tools using our new function libraries. Access pre-built functions for popular platforms like Hubspot, Intercom, Shopify, Zendesk, and Zapier. These functions, sourced from Voiceflow and the community, make it easier than ever to connect your agent with your existing workflows. Showcase readily available integrations to your team and clients.

Transcript Review Hotkeys

Reviewing transcripts just got faster and more efficient. You can now press R to mark a transcript as Reviewed or S to Save it for Later. These handy shortcut keys are perfect for power users who review a high volume of transcripts.

Project Starter Templates

Getting started is now a breeze. When creating a new project, choose from a set of templates tailored for common use cases like customer support, ecommerce support, and scheduling. These templates help you hit the ground running without the need for extensive setup and customization. Ideal for new users and busy teams.

Expanded Voice Support

We now offer an even greater selection of natural-sounding AI voices. We’ve added support for a variety of new options from ElevenLabs and Rime. Please note that using these voices consumes AI tokens. Check them out for your projects that could benefit from additional voice choices.
January 8th, 2025
Deprecated

Important Update: Deprecation of AI Response and AI Set Steps

This is an important update to our platform. As part of our ongoing commitment to enhancing your experience and providing the most advanced tools for AI agent development, we have made the decision to deprecate the AI Response and AI Set steps.

What does this mean for you?
  • On February 4th, 2025, the AI Response and AI Set steps will be disabled in the step toolbar in the Voiceflow interface to encourage users to move away from these deprecated steps. Existing steps will remain untouched and will continue working as normal.
  • On June 3rd, 2025, these steps will no longer be supported. Any existing projects using these steps will need to be migrated to the new Prompt and Set steps. We will send out additional communication in advance of the sunset date.
We understand that this change may require some adjustments to your workflow, but rest assured that we are here to support you throughout this transition. The new Prompt and Set steps, along with our powerful Prompt CMS, offer even more flexibility and control over your conversational experiences.

Some key benefits of the new approach include:
  • Centralized prompt management: The Prompt CMS serves as a hub for all your prompts, making it easy to create, edit, and reuse them across your projects.
  • Advanced prompt configuration: Leverage system prompts, message pairs, conversation history, and variables to craft highly contextual and dynamic responses.
  • Seamless integration: The Prompt step allows you to bring your prompts directly into your conversation flows, while the Set step lets you assign prompt outputs to variables for enhanced logic and control.
  • Continued innovation: We are committed to expanding the capabilities of these new features, with exciting updates planned for the near future.
For those using the Knowledge Base, we recommend transitioning to the KB Search step. This step allows you to query your Knowledge Base and feed the results into a prompt, enabling even more intelligent and relevant responses.

To help guide you through migrating from the AI steps to the Prompt step, check our walkthrough below.

We value your feedback and are here to address any questions or concerns you may have. Our team is dedicated to ensuring a smooth transition and helping you unlock the full potential of these powerful new features.

Thank you for your understanding and continued support. We are excited about the future of conversational AI development on Voiceflow and look forward to seeing the incredible experiences you will create with these enhanced capabilities.

Best regards,
Voiceflow
December 17th, 2024
Improved

API Step V2

We’re introducing a new API step with a cleaner, more intuitive interface for configuring your API requests. While the existing API step remains fully functional, we recommend trying out the new version at your earliest convenience.

Project Data Changes

For users working with our API programmatically, we’ve included the new step type definition below:
type CodeText = (
  | string
  | {
      variableID: string;
    }
  | {
      entityID: string;
    }
)[];

interface MarkupSpan {
  text: Markup;
  attributes?: Record<string, unknown> | undefined;
}

type Markup = (
  | string
  | {
      variableID: string;
    }
  | {
      entityID: string;
    }
  | MarkupSpan
)[];

type ApiV2Node = {
  type: "api-v2";
  data: {
    name: string;
    url?: Markup | null | undefined;
    headers?: Array<{ id: string; key: string; value: Markup }> | undefined;
    httpMethod: "get" | "post" | "put" | "patch" | "delete";
    queryParameters?: Array<{ id: string; key: string; value: Markup }> | undefined;
    responseMappings?: Array<{ id: string; path: string; variableID: string }> | undefined;
    body?:
      | {
          type: "form-data";
          formData: Array<{ id: string; key: string; value: Markup }>;
        }
      | {
          type: "params";
          params: Array<{ id: string; key: string; value: Markup }>;
        }
      | {
          type: "raw-input";
          content: CodeText;
        }
      | null
      | undefined;
    fallback?:
      | {
          path: boolean;
          pathLabel: string;
        }
      | null
      | undefined;
    portsV2: {
      byKey: Record<
        string,
        {
          type: string;
          id: string;
          target: string | null;
        }
      >;
    };
  };
  nodeID: string;
  coords?: [number, number] | undefined;
};
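A minimal node literal matching the `ApiV2Node` definition above might look like this; the IDs, URL, and variable references are placeholders:

```typescript
// Minimal example object matching the ApiV2Node shape defined above.
// The node/port/mapping IDs, URL, and variableIDs are placeholders.
const node = {
  type: "api-v2",
  nodeID: "node-1",
  data: {
    name: "Fetch user",
    // Markup: an array mixing literal strings and variable references.
    url: ["https://api.example.com/users/", { variableID: "userID" }],
    httpMethod: "get",
    // Map a field from the response body into a variable.
    responseMappings: [{ id: "m1", path: "name", variableID: "userName" }],
    portsV2: {
      byKey: {
        next: { type: "next", id: "p1", target: null },
      },
    },
  },
};
```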

December 5th, 2024
Added

Introducing Claude Haiku 3.5 and Streamlining Model Selection

Added

We’ve added support for Anthropic’s Claude Haiku 3.5 model. You can now select Claude Haiku 3.5 when creating or editing your agents, allowing you to:
  • Benefit from improved performance and enhanced conversational abilities
  • Create more engaging and human-like interactions
Updated

To streamline model selection and encourage the use of the latest models, we’ve hidden Claude Sonnet 3.0 and Haiku 3.0 from the model dropdown.

Don’t worry: this change won’t affect any existing agents using these models. You can continue to use and edit them without interruption.
December 4th, 2024
Added

Document Metadata - Update Routes

Added
  • Document Metadata Update: New PATCH endpoint /v1/knowledge-base/docs/{documentID} to update metadata for entire documents. Updates metadata across all chunks simultaneously. Note: not supported for documents of type ‘table’.
  • Chunk Metadata Update: New PATCH endpoint /v1/knowledge-base/docs/{documentID}/chunk/{chunkID} to update metadata for specific chunks. Allows targeted updates to individual chunk metadata. Supports all document types, including tables. Other chunks in the document remain unchanged.
Examples

Check our API documentation for detailed request/response examples and metadata formatting guidelines.
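As a rough illustration of the document-metadata route, the sketch below builds a PATCH request for the endpoint named above. The path comes from this changelog; the auth header name and request body shape are assumptions, so check the API documentation before use:

```typescript
// Sketch of calling the new document-metadata PATCH endpoint. Only the
// path is taken from the changelog; the Authorization header name and
// `{ metadata }` body shape are assumptions to be verified against the
// API documentation.
function buildDocMetadataPatch(
  documentID: string,
  metadata: Record<string, unknown>,
  apiKey: string,
) {
  return {
    url: `/v1/knowledge-base/docs/${documentID}`,
    init: {
      method: "PATCH" as const,
      headers: {
        Authorization: apiKey, // assumed header name
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ metadata }), // assumed body shape
    },
  };
}

// Usage: const { url, init } = buildDocMetadataPatch("doc_123", { tag: "faq" }, key);
// then:  await fetch(baseUrl + url, init);
```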
December 3rd, 2024
Added

Expanded Prompt Support: Add AI Logic to Conditions and Message Variants

We’re excited to announce that Condition steps now support prompts as a condition type, allowing you to use AI responses to determine conversation paths.

What’s New
  • Prompt Conditions: Condition steps can now evaluate prompt responses to intelligently branch conversations down different paths based on AI analysis.
  • Message Variant Conditions: Message steps can now use prompt responses to select the most appropriate response text, helping your agent say the right thing at the right time.
  • Seamless Prompt Integration: Choose from your existing prompts in the Prompt CMS or create new ones directly within the Condition or Message step.
Getting Started

For Condition Steps:
  1. Create or select a Condition step
  2. Choose “Prompt” as your condition type
  3. Select or create a prompt
  4. Add paths and define evaluation criteria
For Message Variants:
  1. Add variants to your Message step
  2. Select a prompt to determine variant selection
  3. Define your variant conditions
  4. Test your dynamic messaging
Learn more

Read more about the options available with the Condition step and Messages step.
December 2nd, 2024
Improved

Import and Export Variables and Entities

Simplify your workflow and make managing your agents even easier with more sharing options.

Import and Export Variables and Entities in the CMS

You can now import and export variables and entities directly in your Agent CMS, saving you time and effort when setting up and sharing your agents.
  • Quickly populate your variables and entities by importing exported JSON files
  • Easily create new versions of variables and entities by importing new files
  • Export your variables and entities as JSON files for backup or sharing
  • Save time by bulk importing and exporting variables and entities
These new features are particularly useful when you have a large number of variables or entities to manage, when you need to create new versions frequently, or when you want to share your variables and entities with others. Importing and exporting is especially helpful when you’re working with large datasets, complex agents that require numerous variables and entities, or collaborating with team members.

These new import and export features in the Agent CMS will help you set up, manage, and share your agents more efficiently, allowing you to focus on creating engaging conversational experiences.
November 28th, 2024
Improved

Prompt Sharing and Improved Variable Debugging

This week we’re excited to introduce two new features that will enhance your workflow and make it easier to build and debug your agents.
Export and Import Prompts in the Prompt CMS
Reusing and sharing prompts across different agents is now a breeze with our new export and import functionality in the Prompt CMS. Easily export prompts individually or in bulk as JSON files.
  • Import prompts into any agent with just a few clicks, including any variables or entities that are used in the prompt.
  • Streamline your workflow by reusing effective prompts across projects.
  • Share your best prompts with colleagues and the community.
Building great agent experiences often involves iterating on prompts. Now you can save time by leveraging your best prompts across all your agents.
Improved Variable Debugging for Objects and Arrays
Debugging with complex variables is now more intuitive in the debug panel. We’ve improved the display of objects and arrays so you can easily inspect their values.
  • See and modify objects and arrays assigned to variables during prototyping.
  • Quickly identify issues with variable assignments.
Previously, objects were not displayed in the debug panel, making it difficult to understand what data they contained. This improvement brings more transparency to your debugging process.
November 21st, 2024
Added

Smart Chunking - New Strategies

This week, we’re excited to announce the beta release of a number of new Smart Chunking features, designed to enhance the way you process and retrieve knowledge base content. These improvements address previous limitations and bring more efficiency to your document management workflow.
LLM-Generated Questions
Enhance retrieval accuracy by prepending AI-generated questions to document chunks. This aligns your content more closely with potential user queries, making it easier for users to find the information they need.
Context Summarization
Provide additional context by adding AI-generated summaries to each chunk. This helps users understand the content more quickly and improves the relevance of search results.
LLM-Based Chunking
Experience optimal document segmentation determined by semantic similarity and retrieval effectiveness. This AI-driven approach ensures your content is chunked in the most meaningful way.
Content Summarization
Let AI summarize and refine your content, focusing on the most important information. This feature streamlines your documents, making your chunks more concise and optimized for retrieval performance.
We encourage you to explore these new capabilities and share your feedback. To start using the Smart Chunking beta features, join the waiting list here.
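Conceptually, the question and summary strategies both attach extra LLM-generated text to a chunk before it is indexed, so retrieval can match question-shaped queries. The helper below is only an illustration of that idea, not Voiceflow’s implementation; in practice the questions and summary would come from an LLM call.

```javascript
// Sketch: enrich a knowledge base chunk with LLM-generated questions and a
// context summary before embedding/indexing it. (Illustrative only.)
function enrichChunk(chunk, { questions = [], summary = '' } = {}) {
  const parts = [];
  if (questions.length) parts.push(questions.map((q) => `Q: ${q}`).join('\n'));
  if (summary) parts.push(`Summary: ${summary}`);
  parts.push(chunk); // original chunk text always comes last
  return parts.join('\n\n');
}

const enriched = enrichChunk('Refunds are issued within 5 business days.', {
  questions: ['How long do refunds take?'],
  summary: 'Refund timing policy.',
});
```

A query like “how long do refunds take” now has a near-verbatim match in the enriched chunk even though the original sentence never phrases it as a question.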
November 6th, 2024
Added

Unlock Dynamic Interactions in Your Agent with Events

Events enable users to trigger workflows without direct conversational input. They allow your agent to respond to user-defined events tailored to specific use cases. With Events, your agent becomes more context-aware and responsive, providing a more engaging and dynamic user experience.
What’s New
Events System
  • Custom Triggers: Define custom events in the new Event CMS, allowing your agent to respond to specific user actions beyond just conversational input.
  • Seamless Integration: Events act as signals from the user’s interactions—like button clicks, page navigations, or in-app actions—enabling your agent to initiate specific workflows dynamically.
  • Event Triggers in Workflows: Use the new Event type in the Trigger step to associate events with specific flows in your agent, giving you full control over the conversational paths.
Why Use Events?
  • Expand Interaction Capabilities: Respond to a wide range of user actions within your application, making your agent more intelligent and adaptable.
  • Create Contextual Experiences: Provide relevant interactions based on what the user is doing.
  • Streamline User Journeys: Assist users at critical points, offering guidance, confirmations, or additional information exactly when needed.
Examples of How Events Can Enhance Your Agent
  • User Clicks a Checkout Button: Trigger an event to initiate a checkout assistance flow, confirming items or offering shipping options.
  • In-App Feature Usage: Start a tutorial when a user accesses a new feature for the first time.
  • User Sends a Command in a Messaging App: Provide immediate responses to specific commands, like showing recent transactions.
  • User Navigates to a Specific Page: Offer assistance related to the content of the page, such as explaining pricing plans.
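On the client side, signalling one of the scenarios above means sending an interact request from your application when the action happens. The sketch below only illustrates the idea: the `event` action type and payload field names are assumptions, so check the Events documentation for the exact shape your agent expects.

```javascript
// Sketch: build the request body that signals a user-defined event to the
// agent runtime. The action shape here is illustrative, not authoritative.
function buildEventAction(eventName, payload = {}) {
  return {
    action: {
      type: 'event', // assumed action type for custom events
      payload: { event: { name: eventName }, ...payload },
    },
  };
}

// e.g. fire this when the user clicks your app's checkout button, then POST
// it to the interact endpoint with your Voiceflow API key in the
// Authorization header:
const body = buildEventAction('checkout_clicked', { cartId: 'abc123' });
```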
Learn More
November 5th, 2024
Added

Prompt like a Pro: New Prompts CMS, Prompt Step and more

Recognizing that your AI prompts are the cornerstone of agent behaviour, we’ve developed a comprehensive suite of tools designed to provide a central hub for creating, updating, and testing prompts with ease and efficiency.
What’s New
  1. Prompts CMS
    • Centralized Prompt Hub: Manage all your prompts in one place, ensuring consistency and easy access across your entire agent.
    • Advanced Prompt Editor: Craft, edit, and test your prompts within an intuitive interface equipped with the necessary tooling to refine your AI agent’s responses.
    • Message Pairs & Conversation Memory: Utilize message pairs to simulate interactions and inject conversation memory, allowing for more dynamic and context-aware agent behaviour.
    • Visibility into Performance Metrics: Gain insights into latency and token consumption, now split by input and output tokens, to optimize your prompts for performance and cost-efficiency.
  2. New Prompt Step
    • Prompt Integration: Incorporate response prompts directly into your agent workflows using the new Prompt step.
    • Reuse Across Your Agent: The prompts you create can be easily reused across your agent, making any updates available wherever the prompt is used.
  3. Assign Prompts in Set Step
    • Simplified Designs: This feature brings prompts to the Set step, improving reusability and consolidating how you set variable values in your agent.
Looking Ahead
  • Expanded Prompt Support: Soon, you’ll be able to use prompts in more steps within your agent’s flow, unlocking new possibilities for interaction design.
  • Community Sharing: We’re developing features that will allow you to share prompts across your agents and with the wider community, facilitating collaboration and collective improvement.
Learn More
  • Prompt CMS and Editor: Explore the central hub for creating, testing, and managing prompts within your agent.
  • Prompt step: Learn how to integrate prompts directly into your agent’s flow.
  • Set step: Discover how to dynamically assign prompt outputs to variables for greater control over agent behaviour.
November 4th, 2024
Improved

Persistent Listen in Functions

What’s New
  • Persistent Events: Functions can now define events that persist for the entire conversation session. This means that events associated with components like carousels, choice traces, or buttons remain active even after the conversation moves past the function step.
  • Delayed User Interaction: Users can interact with these persistent components at any point during the session. When they do, the agent will refer back to the original function and proceed down the relevant path defined in your function code.
  • Flexible Agent Behaviour: You now have control over whether the agent waits for user input at the function step or continues execution immediately, thanks to the new listen parameter settings.
How It Works
Option 1: Agent Waits for User Input (listen: true)
  • Behavior: The agent pauses execution at the function step. It waits for immediate user input before proceeding.
  • Use Case: Ideal when you require the user to make a choice or provide input before moving on.
  • Implementation: Set listen: true in your function’s next command.
next: {
  listen: true,
  to: [
    {
      on: { 'event.type': 'option_selected', 'event.payload.value': '1' },
      dest: 'option_1_path',
    },
    // Additional event handlers...
  ],
  defaultTo: 'default_path',
},
Option 2: Agent Continues Execution (listen: false)
  • Behavior: The agent continues execution without waiting at the function step. Events defined in the function persist throughout the session and can be triggered later.
  • Use Case: Perfect for non-blocking interactions where users might choose to interact with components at their convenience.
  • Implementation: Set listen: false in your function’s next command.
next: {
  listen: false, // Must be explicitly set to false
  to: [
    {
      on: { 'event.type': 'item_selected', 'event.payload.label': 'Item A' },
      dest: 'item_A_path',
    },
    // Additional event handlers...
  ],
  defaultTo: 'default_path',
},
Benefits
  • Dynamic Conversations: Create agents that can handle delayed interactions, allowing users to make choices at any point during the conversation.
  • Persistent Interactivity: Keep buttons and interactive elements active throughout the session.
Example
Function Code with Persistent Events:
export default async function main(args) {
  return {
    trace: [
      {
        type: 'carousel',
        payload: {
          cards: [
            {
              title: 'Product A',
              description: { text: 'Description of Product A' },
              imageUrl: 'https://example.com/productA.png',
              buttons: [
                {
                  name: 'Buy Now',
                  request: { type: 'purchase', payload: { productId: 'A' } },
                },
              ],
            },
            // Additional cards...
          ],
        },
      },
    ],
    next: {
      listen: false, // Agent continues without waiting
      to: [
        {
          on: { 'event.type': 'purchase', 'event.payload.productId': 'A' },
          dest: 'purchase_A_path',
        },
        // Additional event handlers...
      ],
      defaultTo: 'continue_path',
    },
  };
}
  • Agent Behavior: Continues to ‘continue_path’ immediately. The purchase event remains active and can be triggered later. When the user clicks “Buy Now” for Product A, the agent navigates to ‘purchase_A_path’.
Learn More
  • Updated Documentation: See our Supporting listen in functions guide to learn more about using the persistent listen functionality.
October 28th, 2024
Improved

New Publishing Workflow, Knowledge Base Folders and Updated Choice Step

We’re introducing several new features to enhance your experience with Voiceflow:
  • New Publishing Workflow:
    • Versioned Publishing with Release Notes: Easily add release notes to each version of your agent during the publishing process, making it simpler to track changes and updates.
    • Dedicated Publishing Tab: Access a new Publishing tab within the Agent CMS, where you can name your version, add release notes, and publish directly. The tab includes a Publish view, for naming your versions and adding notes before publishing, and a Release Notes view, for reviewing your agent’s release history in a clean, organized display.
  • Folders for the Knowledge Base CMS: We’ve added folders to the Voiceflow Knowledge Base CMS, enabling you to organize your data sources more effectively. This makes managing large datasets and sources easier, keeping everything in order.
  • Enhanced Choice Step with Button Support: You can now attach buttons to a Choice step, with intents being optional. This means you can add paths to a Choice step that are only triggered by a button click, offering greater flexibility in how users interact with your agents.
October 14th, 2024
Improved

Fueling Your Creativity: We’ve Doubled Your AI Tokens!

At Voiceflow, our mission is ambitious: to provide you with the best agent creation platform in the world. We believe in empowering creators like you to build advanced conversational AI agents without limits.
As we’ve integrated more powerful language models into Voiceflow, we’ve seen your projects become more innovative and dynamic. Your agents are smarter, more engaging, and pushing the boundaries of what’s possible in conversational AI. But we don’t want you to slow down; we want to give you more “fuel” to keep that engine running at full speed.
That’s why we’re excited to announce that we’ve doubled the included monthly AI token allotments for our Pro, Team, and Enterprise plans! Here are the new allotments:
  • Pro Plan: Now includes 4 million AI tokens per month.
  • Team Plan: Now includes 20 million AI tokens per month.
  • Enterprise Plan: Now includes 200 million AI tokens per month.
Need more? Additional tokens are available for purchase across all plans.
This upgrade is about more than just numbers. It’s about supporting your vision and giving you the resources to bring your most ambitious ideas to life. Whether you’re developing complex conversational flows, experimenting with new AI features, or scaling up your existing projects, we’ve got you covered.
We’re committed to making Voiceflow not just a tool, but a platform that grows with you: a place where your creativity has no bounds.
Thank you for being an essential part of our journey. We can’t wait to see what incredible things you’ll build with this extra boost.
October 4th, 2024
Added

Streaming API for Real-Time Interactions

We’re excited to announce the release of our new Streaming API endpoint, designed to enhance real-time interactions with your Voiceflow agents. This feature allows you to receive server-sent events (SSE) in real time using the text/event-stream format, providing immediate responses and a smoother conversational experience for your users.
Key Highlights
  • Real-Time Event Streaming: Receive immediate trace events as your Voiceflow project progresses, allowing for dynamic and responsive conversations.
  • Improved User Experience: Drastically reduce latency by sending information to users as soon as it’s ready, rather than waiting for the entire turn to finish.
  • Support for Long-Running Operations: Break up long-running steps (e.g., API calls, AI responses, JavaScript functions) by sending immediate feedback to the user while processing continues in the background.
  • Streaming LLM Responses: With the completion_events query parameter set to true, stream large language model (LLM) responses (e.g., from Response AI or Prompt steps) as they are generated, providing instant feedback to users.
How to Use the Streaming API
Endpoint
POST /v2/project/{projectID}/user/{userID}/interact/stream
Required Headers
  • Accept: text/event-stream
  • Authorization: {Your Voiceflow API Key}
  • Content-Type: application/json
Query Parameters
  • completion_events (optional): Set to true to enable streaming of LLM responses as they are generated.
Example Request
curl --request POST \
     --url https://general-runtime.voiceflow.com/v2/project/{projectID}/user/{userID}/interact/stream \
     --header 'Accept: text/event-stream' \
     --header 'Authorization: {Your Voiceflow API Key}' \
     --header 'Content-Type: application/json' \
     --data '{
       "action": {
         "type": "launch"
       }
     }'
Example Response
event: trace
id: 1
data: {
  "type": "text",
  "payload": {
    "message": "Give me a moment..."
  },
  "time": 1725899197143
}

event: trace
id: 2
data: {
  "type": "debug",
  "payload": {
    "type": "api",
    "message": "API call successfully triggered"
  },
  "time": 1725899197146
}

event: trace
id: 3
data: {
  "type": "text",
  "payload": {
    "message": "Got it, your flight is booked for June 2nd, from London to Sydney."
  },
  "time": 1725899197148
}

event: end
id: 4
Streaming LLM Responses with completion_events
By setting completion_events=true, you can stream responses from LLMs token by token as they are generated. This is particularly useful for steps like Response AI or Prompt, where responses may be lengthy.
Example Response with completion_events=true
event: trace
id: 1
data: {
  "type": "completion",
  "payload": {
    "state": "start"
  },
  "time": 1725899197143
}

event: trace
id: 2
data: {
  "type": "completion",
  "payload": {
    "state": "content",
    "content": "Welcome to our service. How can I help you today? Perh"
  },
  "time": 1725899197144
}

... [additional content events] ...

event: trace
id: 6
data: {
  "type": "completion",
  "payload": {
    "state": "end"
  },
  "time": 1725899197148
}

event: end
id: 7
Getting Started
  • Find Your projectID: Locate your projectID in the agent’s settings page within the Voiceflow Creator. Note that this is not the same as the ID in the URL creator.voiceflow.com/project/.../.
  • Include Your API Key: Ensure you include your Voiceflow API Key in the Authorization header of your requests.
Additional Resources
  • Example Project: Check out our streaming-wizard demo project for a practical implementation using Node.js.
Notes
  • Compatibility: This new streaming endpoint complements the existing interact endpoint and is designed to enhance real-time communication scenarios.
  • Deterministic and Streamed Messages: When using completion_events, you may receive a mix of streamed and fully completed messages. Consider implementing logic in your client application to handle these different message types for a seamless user experience.
  • Latency Reduction: By streaming events as they occur, you can significantly reduce perceived latency, keeping users engaged and informed throughout their interaction.
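On the client, handling the stream means splitting it into frames and reading each frame’s `event:`, `id:`, and `data:` fields. Here is a minimal sketch that assumes a fully buffered text/event-stream body; a production client would parse incrementally as chunks arrive.

```javascript
// Parse text/event-stream frames: fields are `event:`, `id:`, and one or
// more `data:` lines; frames are separated by a blank line.
function parseSSE(buffer) {
  return buffer
    .split('\n\n')
    .filter((frame) => frame.trim() !== '')
    .map((frame) => {
      const evt = { event: 'message', id: null, data: '' };
      const dataLines = [];
      for (const line of frame.split('\n')) {
        if (line.startsWith('event:')) evt.event = line.slice(6).trim();
        else if (line.startsWith('id:')) evt.id = line.slice(3).trim();
        else if (line.startsWith('data:')) dataLines.push(line.slice(5).trimStart());
      }
      evt.data = dataLines.join('\n'); // per the SSE spec, data lines join with newlines
      return evt;
    });
}

const sample =
  'event: trace\nid: 1\ndata: {"type":"text","payload":{"message":"Give me a moment..."}}\n\n' +
  'event: end\nid: 2';
const events = parseSSE(sample);
// events[0].data holds the JSON trace payload; the `end` event closes the turn
```

Each `trace` frame’s data parses as JSON, so your client can dispatch on `type` (text, debug, completion) exactly as shown in the example responses above.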
We believe this new Streaming API will greatly enhance the interactivity and responsiveness of your Voiceflow agents. We can’t wait to see how you leverage this new capability in your projects! For any questions or support, please reach out to support@voiceflow.com.
October 3rd, 2024
Added

Smart Chunking Beta: Automatic HTML to Markdown Conversion

We are pleased to announce the launch of the Smart Chunking beta program. Over the coming weeks, we will be testing and validating several LLM-based strategies to enhance the quality of your knowledge base chunks. Better chunks lead to better responses and higher-quality AI agents.
First Strategy: HTML to Markdown Conversion
Our initial strategy focuses on automatic HTML to Markdown conversion. Many users import content from web pages, either directly using our web scraper or via APIs from services like Zendesk Help Center or Kustomer. This content often contains raw HTML, which can be noisy and degrade chunk performance and response quality.
By converting HTML to Markdown automatically, we aim to improve the cleanliness and readability of your content. This conversion is supported for all data sources you can upload into Voiceflow.
How to Use the New Feature
Join the Beta Program
If you’re interested in participating in the Smart Chunking beta, please sign up via the waitlist link. We will be granting access to participants over the next few weeks.
October 3rd, 2024
Added

Secrets Manager and Updated Button Step

Announcing the new Secrets Manager in Voiceflow! This feature enables you to securely store and manage a variety of sensitive information within your AI agents, including API keys, database credentials, encryption keys, and more.
What’s New
  • Secure Storage of Sensitive Data: Safely store passwords and credentials used to access essential services and functions within your software. AES-256 GCM encryption ensures the confidentiality and integrity of your secrets.
  • Visibility Controls:
    • Masked Secrets: Values are hidden but can be temporarily revealed when needed.
    • Restricted Secrets: Values remain hidden and cannot be revealed after creation, enhancing security for highly sensitive data.
  • Environment Overrides: Specify different secret values for Development and Production environments, and seamlessly manage environment-specific configurations without altering your agent’s logic.
  • Integration with Function and API Steps: Easily insert secrets into Function and API steps without hardcoding sensitive information. Access secrets directly from the canvas by typing { and selecting from the Secrets tab.
  • Secure Project Sharing: When duplicating or exporting projects, secret values are excluded to maintain security. Re-add secret values as needed in the new project instance.
Benefits
  • Enhanced Security: Protects sensitive information using industry-standard encryption and prevents unauthorized access through robust key management and encryption practices.
  • Simplified Management: Centralizes the handling of all your secrets within the Secrets Manager and reduces the risk of accidental exposure by avoiding hardcoded credentials.
  • Operational Flexibility: Environment overrides allow for different configurations across development and production stages, streamlining deployment by separating environment-specific data.
Getting Started
  1. Access the Secrets Manager: Navigate to Agent Settings in the left sidebar, then click the Secrets tab to open the Secrets Manager interface.
  2. Create a New Secret: Click New Secret in the top-right corner, enter the Name and Value, choose the Visibility setting, and click Create Secret to add it to your Secrets Manager.
  3. Use Secrets in Your Agent: In a Function or API step, type { to open the variable selector, then switch to the Secrets tab and select the desired secret.
  4. Set Up Environment Overrides: Go to the Environments tab in Agent Settings, click Override Secrets next to the desired environment, enter environment-specific values for your secrets, and click Save.
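Inside a Function step, the point is to reference the secret rather than hardcode the credential. A minimal sketch: the endpoint URL and the apiKey name are illustrative, and the real value is supplied by the Secrets Manager at runtime via the Secrets tab.

```javascript
// Sketch: build a request config from a secret-backed API key instead of a
// hardcoded credential. `apiKey` stands in for a value wired from the
// Secrets tab; the URL is a hypothetical endpoint.
function buildRequest(apiKey) {
  return {
    url: 'https://api.example.com/v1/orders',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
  };
}

// At runtime the Secrets Manager injects the real value:
const req = buildRequest('my-secret-value');
```

Because the credential never appears in the function body, duplicating or exporting the project leaks nothing; the new project instance simply needs its own secret value re-added.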
Learn More
For detailed instructions and best practices, please refer to our updated User Guide for the Secrets Manager.
Updated Button Step
What’s Changed?
Simplified and Improved UI
  • Streamlined Button Creation: We’ve made it quicker and more intuitive to add buttons. Simply click the “+” button next to the Buttons label in the editor to add a new button.
  • Clean Editor Layout: The editor has been decluttered to focus on what’s important, allowing you to configure your buttons without any unnecessary distractions.
How to Start Using the Redesigned Button StepThe redesigned Button Step is available now in your step toolbar, labeled as Button. To take advantage of the new user interface:
  1. Replace Existing Steps: You can replace your existing Button steps with the redesigned version in your projects to benefit from the improved UI.
  2. Update Your Conversation Paths: Reconnect any paths as needed using the ports associated with your buttons.
  3. Review Settings: Check your No Match, No Reply, and Listen for Other Triggers settings to ensure they are configured as desired.
Note: Your existing Button steps will continue to function as before unless you decide to update them. This means your current projects won’t be affected until you’re ready to make the switch.
October 1st, 2024
Added

New AI-Powered Entity Collection

We’ve been working on some significant updates to make your conversational agents even better. Over the next week, we’ll be rolling out new features that leverage advanced AI capabilities to improve how your projects understand and interact with users.
What’s New?
Transition to AI-Powered Entity Extraction
We’ve moved from traditional Natural Language Understanding (NLU) to using Large Language Models (LLMs) for entity extraction. This change is all about making your agents more accurate and adaptable when interpreting user inputs. By embracing AI-powered entity extraction, your agents can now handle a wider range of conversational scenarios with greater reliability.
New AI-Powered Steps with Enhanced Entity Collection: Choice, Capture, and Trigger
To fully leverage the improved entity extraction capabilities, we’ve updated some core steps in Voiceflow. The Choice, Capture, and Trigger steps have all been upgraded to natively support the new AI-powered entity collection features. This means these steps are now better equipped to collect necessary information from users, making your conversational agents more effective and responsive.
Activating the New Steps in Your Project
The new Choice and Capture steps are available in the step toolbar. To start using the new AI-powered features, you’ll need to replace your existing steps with these updated versions in your projects. Your existing Choice and Capture steps will remain unchanged, so your current setups won’t be affected unless you choose to update them.
Introducing Rules and Exit Scenarios
We’ve added two new features to give you more control over how your agents handle user inputs:
  • Rules: You can now define specific criteria that the user’s input must meet. This helps ensure that the information collected is valid and meets your requirements.
  • Exit Scenarios: These allow your agent to gracefully handle situations where the user can’t provide the necessary information. You can set up alternative paths or responses for these cases, improving the overall user experience.
Automatic Reprompt Feature
We’re also introducing an Automatic Reprompt feature. If a user provides incomplete or incorrect information, your agent will now generate personalized prompts to help them provide the missing details. This makes interactions smoother and less frustrating for your users.
Why This Matters
We know that accurately understanding user inputs is crucial for effective conversations. By moving to AI-powered entity extraction, we’re providing you with tools that offer greater accuracy and adaptability. This helps your agents handle a wider range of conversational scenarios, providing better experiences for your users.
What You Need to Do
To start benefiting from these new features:
  1. Update Your Steps: Replace your existing Choice and Capture steps with the new AI-powered versions available in the step toolbar.
  2. Configure Rules and Exit Scenarios: Use the new options within these steps to define rules and exit scenarios that suit your project’s needs.
  3. Test Your Agent: After making changes, be sure to test your agent thoroughly to ensure everything works as expected.
Rollout Details
These updates will be introduced to all users over the next week. If you don’t see them in your workspace right away, they’ll be available soon.
We’re Here to Help
As always, we’re committed to supporting you. If you have any questions or need assistance with the new features, please don’t hesitate to reach out to our support team or visit our Discord community. Thank you for being part of the Voiceflow community. We’re excited to see how you’ll use these new capabilities to create even more engaging and effective conversational agents.
October 1st, 2024
Added

New Multimodal Projects

It is easier than ever to create and manage multimodal agents that support both chat and voice interactions.
What’s New
Multimodal Projects
  • Single Project Type for Chat and Voice: You no longer need to choose between a chat project and a voice project. All new projects (and existing chat projects) will support both modalities by default, streamlining your workflow and expanding your agent’s capabilities.
Voice Features in Existing Chat Projects
  • Immediate Access to Voice Features: Existing chat projects now have built-in voice capabilities. You can start leveraging voice input and output options in your conversations without creating a new project.
  • Default Text-to-Speech (TTS) Settings: In the agent settings, you can now select a default TTS technology for your agent.
Voice Prototyping Tools
  • Voice Prototyping: The designer prototype tool and web widget now include voice input and output support. You can test voice and/or chat interactions anywhere, making it easier to refine your agent’s conversational flow.
Web Chat Integration
  • Optional Voice Interface: In the web chat integration settings, there is a new toggle to enable voice input and output. This allows your web chat experiences to include voice interactions. Currently, voice input (STT) for hosted deployments is limited to Chrome browsers. We will be looking to expand support in the future.
Impact on Existing Voice Projects
  • No Changes to Existing Voice Projects: Your current voice projects will remain unchanged and fully functional. However, the option to create new voice-only projects from the dashboard has been removed. All new projects will support both chat and voice modalities.
Resources
  • Feedback: We value your input. If you have any questions or encounter issues, please reach out to our support team.
September 24th, 2024
Update

Upcoming Update to Listen Steps Data Format

We are introducing new step types to diagrams[id].nodes to enhance the functionality and flexibility of your AI agents.
Affected Property: diagrams[id].nodes
What’s Changing:
  • New Node Types Added: ButtonsV2 (buttons-v2), CaptureV3 (capture-v3), and ChoiceV2 (choice-v2).
All three node types share the same port map shape, shown below as a named type for readability (the raw schema repeats it inline):
type PortsV2 = {
  byKey: Record<string, {
    id: string;
    type: string;
    target: string | null;
    data?: {
      type?: "CURVED" | "STRAIGHT" | null;
      color?: string | null;
      points?: {
        point: [number, number];
        toTop?: boolean | null;
        locked?: boolean | null;
        reversed?: boolean | null;
        allowedToTop?: boolean | null;
      }[] | null;
      caption?: { value: string; width: number; height: number } | null;
    } | null;
  }>;
};
ButtonsV2 Node (buttons-v2)
Description: Enhances button interactions with more dynamic features.
Type Definition (TypeScript):
type ButtonsV2Node = {
  type: "buttons-v2";
  data: {
    portsV2: PortsV2;
    listenForOtherTriggers: boolean;
    items: { id: string; label: Markup }[];
    name?: string;
    noReply?: {
      repromptID: string | null;
      path?: boolean | null;
      pathLabel?: string | null;
      inactivityTime?: number | null;
    } | null;
    noMatch?: {
      repromptID: string | null;
      path?: boolean | null;
      pathLabel?: string | null;
    } | null;
  };
  nodeID: string;
  coords?: [number, number];
};
CaptureV3 Node (capture-v3)
Description: Provides advanced data capture capabilities, including entity capture and automatic reprompting.
Type Definition (TypeScript):
type CaptureV3Node = {
  type: "capture-v3";
  data: {
    portsV2: PortsV2;
    listenForOtherTriggers: boolean;
    capture:
      | {
          type: "entity";
          items: {
            id: string;
            entityID: string;
            path?: boolean | null;
            pathLabel?: string | null;
            repromptID?: string | null;
            placeholder?: string | null;
          }[];
          automaticReprompt: {
            params: {
              model?:
                | "text-davinci-003"
                | "gpt-3.5-turbo-1106"
                | "gpt-3.5-turbo"
                | "gpt-4"
                | "gpt-4-turbo"
                | "gpt-4o"
                | "gpt-4o-mini"
                | "claude-v1"
                | "claude-v2"
                | "claude-3-haiku"
                | "claude-3-sonnet"
                | "claude-3.5-sonnet"
                | "claude-3-opus"
                | "claude-instant-v1"
                | "gemini-pro-1.5";
              system?: string;
              maxTokens?: number;
              temperature?: number;
            } | null;
            exitScenario?: {
              items: { id: string; text: string }[];
              path?: boolean | null;
              pathLabel?: string | null;
            } | null;
          } | null;
          rules?: { id: string; text: string }[];
        }
      | { type: "user-reply"; variableID?: string | null };
    name?: string;
    noReply?: {
      repromptID: string | null;
      path?: boolean | null;
      pathLabel?: string | null;
      inactivityTime?: number | null;
    } | null;
  };
  nodeID: string;
  coords?: [number, number];
};
ChoiceV2 Node (choice-v2)
Description: Enhances user choice interactions with improved intent handling and automatic reprompt options.
Type Definition (TypeScript):
type ChoiceV2Node = {
  type: "choice-v2";
  data: {
    portsV2: PortsV2;
    listenForOtherTriggers: boolean;
    items: {
      id: string;
      intentID: string;
      rules?: { id: string; text: string }[];
      button?: { label: Markup } | null;
      automaticReprompt?: {
        exitScenario?: {
          items: { id: string; text: string }[];
          path?: boolean | null;
          pathLabel?: string | null;
        } | null;
      } | null;
    }[];
    name?: string;
    noReply?: {
      repromptID: string | null;
      path?: boolean | null;
      pathLabel?: string | null;
      inactivityTime?: number | null;
    } | null;
    noMatch?: {
      repromptID: string | null;
      path?: boolean | null;
      pathLabel?: string | null;
    } | null;
  };
  nodeID: string;
  coords?: [number, number];
};
What This Means for You:
  • Enhanced Capabilities: The new node types offer advanced features, allowing for more complex and dynamic conversational flows.
  • Existing Nodes Remain Unchanged: Your current diagrams and nodes will continue to function as before.
  • New Features Available in New Steps: Only new instances of these steps created after the update will utilize the updated formats and capabilities.
Action Required:
  • Update Integrations: If you have custom integrations that interact with diagrams[id].nodes, plan to update them to accommodate these new node types when creating new steps.
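If your integration walks diagrams[id].nodes, a forward-compatible pattern is to branch on node.type and treat unknown types as opaque. The sketch below is an illustration only: the DiagramNode shape is simplified and the function names are not part of the Voiceflow API.

```typescript
// Illustrative sketch: route diagram nodes by type and tolerate node
// types added after the integration was written.
interface DiagramNode {
  type: string;                    // e.g. "buttons-v2", "capture-v3", "choice-v2"
  nodeID: string;
  data: Record<string, unknown>;
}

// The three new-format node types introduced in this update.
const NEW_NODE_TYPES = new Set(["buttons-v2", "capture-v3", "choice-v2"]);

function describeNode(node: DiagramNode): string {
  if (NEW_NODE_TYPES.has(node.type)) {
    return `${node.nodeID}: new-format node (${node.type})`;
  }
  // Fall through gracefully for older or unknown node types.
  return `${node.nodeID}: legacy node (${node.type})`;
}
```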
September 24th, 2024
Improved

Condition Step Update

What’s New:
  • Redesigned UI: The Condition Step now features an updated interface for easier navigation and setup of conditional paths.
  • Flexible Logic Configuration: Users can now choose between a Condition Builder (for non-technical users) or Expression (for custom JavaScript logic) to create paths based on variables, values, logic groups, or expressions.
Only new Condition steps created after this update will use the new version; existing Condition steps will remain untouched.
September 18th, 2024
Added

New KB Search Step and Major Token Cost Reductions

We’re excited to announce the release of the KB Search Step, a new feature designed to simplify querying data from your Knowledge Base directly within Voiceflow. This step streamlines the process of fetching data chunks, providing you with greater control and efficiency in your conversational workflows.
What’s New:
  • Native KB Querying: Easily query your Knowledge Base without the need for manual API configurations. Simply input a question or variable and retrieve relevant data chunks directly in your workflow.
  • Customizable Chunk Retrieval: Use the chunk limit slider to specify the number of data chunks to be returned, ranging from 1 to 10. Fine-tune responses by adjusting this setting to suit your agent’s needs.
  • No Match Handling: Set a minimum chunk score for chunk matching. If no data meets this threshold, the workflow can automatically follow a customizable Not Found path, ensuring a seamless user experience even when no exact match is found.
  • Enhanced Control Over Prompts: Fetched data chunks are automatically converted into a string format, making them easy to incorporate into text-based prompts and other steps. You have the flexibility to control how this data is used, enhancing the quality and relevance of your agent’s responses.
  • Testing Experience: Test your queries directly within the editor. View the returned chunks and their scores for insight into optimizing your knowledge base.
How It Benefits You:
  • Improved Efficiency: Eliminate the complexity of setting up manual API calls. The KB Search Step simplifies querying your Knowledge Base, saving you time and effort.
  • Greater Flexibility: Control how many data chunks are returned and set minimum chunk scores to ensure the most relevant information is fetched for your agents.
  • Enhanced Workflow Integration: Automatically convert and save data into variables, ready for use in subsequent steps within your workflow. This streamlines the process of integrating Knowledge Base data into your conversational designs.
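Conceptually, the chunk limit and minimum-score threshold described above combine as follows. This is an illustration of the documented behaviour, not the actual implementation; field names such as content and score are assumptions.

```typescript
// Illustrative sketch of the KB Search step's retrieval logic: keep at
// most `chunkLimit` chunks whose score meets the threshold, and signal
// the Not Found path when none qualify.
interface Chunk {
  content: string;
  score: number;
}

function selectChunks(
  chunks: Chunk[],
  chunkLimit: number,   // 1–10, per the chunk limit slider
  minScore: number      // minimum chunk score for a match
): { chunks: Chunk[]; notFound: boolean } {
  const qualifying = chunks
    .filter((c) => c.score >= minScore)
    .sort((a, b) => b.score - a.score)
    .slice(0, chunkLimit);
  return { chunks: qualifying, notFound: qualifying.length === 0 };
}
```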
How to Get Started:
To start using the KB Search Step, simply drag the step onto your canvas from the Developer section in the step toolbar. Configure your query question, set your chunk limit, and map the data to a variable, all in less than a minute!
Learn More:
For detailed instructions on how to use the KB Search Step, check out our updated User Guide.
And more exciting updates:
  • Token Multiplier Reduction: Tokens are now more affordable, with a reduction of ~50% or more on many of the supported models! Here are the new multipliers:
    • GPT-3.5-Turbo: Reduced from 0.6x to 0.25x
    • GPT-4 Turbo: Reduced from 12x to 5x
    • GPT-4: Reduced from 25x to 14x
    • GPT-4o: Reduced from 6x to 2.5x
    • GPT-4o mini: Reduced from 0.2x to 0.08x
    • Claude Instant 1.2: Reduced from 1x to 0.4x
    • Claude 1: Reduced from 10x to 4x
    • Claude 2: Reduced from 10x to 4x
    • Claude Haiku: Reduced from 0.5x to 0.15x
    • Claude Sonnet 3: Reduced from 5x to 1.75x
    • Claude Sonnet 3.5: Reduced from 5x to 1.75x
    • Claude Opus: Reduced from 20x to 8.5x
    • Gemini: Reduced from 8x to 3.5x
  • Billing Email Setting: You can now set a Billing Email on your account from the Billing page on the dashboard. This allows all billing-related emails, such as invoices and payment-related notifications, to be routed to the appropriate contact.
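Assuming the multiplier works the usual way (billed tokens = raw tokens × model multiplier), the new costs can be estimated as below. The multiplier values come from the list above; the function itself is only an illustration, not a billing API.

```typescript
// Assumed billing formula: billed tokens = raw tokens × model multiplier.
// Multipliers below are the new values from the changelog entry above.
const NEW_MULTIPLIERS: Record<string, number> = {
  "gpt-3.5-turbo": 0.25,
  "gpt-4-turbo": 5,
  "gpt-4": 14,
  "gpt-4o": 2.5,
  "gpt-4o-mini": 0.08,
};

function billedTokens(rawTokens: number, model: string): number {
  const multiplier = NEW_MULTIPLIERS[model];
  if (multiplier === undefined) throw new Error(`Unknown model: ${model}`);
  return rawTokens * multiplier;
}

// e.g. 1,000 raw tokens on GPT-4o now bill as 2,500 (down from 6,000 at 6x).
```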
September 13th, 2024
Update

Upcoming Update to Condition Step Data Format

We are introducing a new version of the Condition step type to enhance condition handling in your AI agents.
Affected Property: diagrams[id].nodes
What’s Changing:
  • A new condition-v3 node type with an updated data structure will be added for greater flexibility and expressiveness.
New Format Preview:
type CodeText = (string | { variableID: string } | { entityID: string })[];

interface MarkupSpan {
  text: Markup;
  attributes?: Record<string, unknown>;
}

type Markup = (string | { variableID: string } | { entityID: string } | MarkupSpan)[];

type ConditionV3Node = {
  type: "condition-v3";
  data: {
    portsV2: {
      byKey: Record<string, {
        type: string;
        id: string;
        target: string | null;
      }>;
    };
    condition: {
      type: "prompt";
    } | {
      type: "logic";
      items: {
        value: {
          type: "script";
          code: CodeText;
        } | {
          type: "value-variable";
          matchAll: boolean;
          assertions: {
            key: string;
            lhs: {
              variableID: string | null;
            };
            rhs: Markup;
            operation:
              | "is"
              | "is_not"
              | "greater_than"
              | "greater_or_equal"
              | "less_than"
              | "less_or_equal"
              | "contains"
              | "not_contains"
              | "starts_with"
              | "ends_with"
              | "is_empty"
              | "is_not_empty";
          }[];
        };
        id: string;
        label?: string | null;
      }[];
    };
    name?: string;
    noMatch?: {
      repromptID: string | null;
      path?: boolean | null;
      pathLabel?: string | null;
    } | null;
  };
  nodeID: string;
  coords?: [number, number];
};
What This Means for You:
  • The condition-v3 node will support both script-based and variable-based conditions, allowing for more complex logic.
  • Only new condition steps created after the update will utilize the condition-v3 format, unlocking additional features.
Action Required:
  • Plan to modify any custom integrations that interact with diagrams[id].nodes to accommodate the new node type.
We will provide more details and release notes when the update becomes available.
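For illustration, here is a hypothetical node instance that conforms to the ConditionV3Node type above; all IDs and values are made up.

```typescript
// Hypothetical condition-v3 node: one logic path that matches when the
// variable behind "order_status-variable-id" equals "shipped".
// (Conforms to the ConditionV3Node structure from the preview above.)
const exampleNode = {
  type: "condition-v3",
  data: {
    portsV2: {
      byKey: {
        "path-1": { type: "path", id: "port-1", target: "next-node-id" },
      },
    },
    condition: {
      type: "logic",
      items: [
        {
          id: "item-1",
          label: "Order shipped",
          value: {
            type: "value-variable",
            matchAll: true,
            assertions: [
              {
                key: "assertion-1",
                lhs: { variableID: "order_status-variable-id" },
                operation: "is",
                rhs: ["shipped"], // Markup: plain-string segment
              },
            ],
          },
        },
      ],
    },
  },
  nodeID: "node-1",
  coords: [0, 0],
};
```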
September 4th, 2024
Improved

Improved Set Step

We’re excited to announce an update to the Set step, making it more versatile and user-friendly for all Voiceflow users!
What’s New?
We’ve enhanced the Set step by splitting the traditional single input into two distinct options: Value and Expression. This change aims to streamline the user experience, providing a simpler path for non-technical users while offering more flexibility for advanced use cases.
  • Value Input: This new input option simplifies the process of setting variables. You can now directly assign values to variables without needing to worry about syntax like quotes or data types. This makes it easier than ever to quickly set variables to specific text (String) values or numbers.
  • Expression Input: For those needing more advanced functionality, the Expression input allows you to use JavaScript to set variables dynamically. Whether you’re incrementing or decrementing values, or performing complex calculations, the Expression input gives you the power to build sophisticated logic right within the Set step.
How to Use the Updated Set Step
To ensure a smooth transition to the new structure, all existing Set steps in your projects have been automatically migrated to the updated version. You’ll notice that your previous configurations now utilize the new Expression input, preserving all functionality while aligning with the updated structure. Learn more about the new step in our step guides. Thank you for being a part of our community, and happy agent building!
August 19th, 2024
Added

Introducing Messages: Scalable AI Agent Creation

We are excited to announce the release of Messages, our latest feature designed to revolutionize how you manage, scale, and customize your AI agent’s responses. At Voiceflow, our mission is to enable you to build scalable, efficient, and collaborative AI solutions, and Messages is a significant step forward in achieving that goal.
Effortless Reusability for Consistency and Efficiency
Messages enable you to reuse responses across different workflows, ensuring consistency and saving time. By typing a forward slash (/) in the message editor, you can quickly select from existing messages in the CMS. Any changes made to a reused message will propagate across all instances, streamlining updates and maintaining uniformity.
Dynamic Responses with Conditional Variants
With Messages, you can create multiple variants of a response to cater to diverse scenarios. Use AI to generate variants automatically or create them manually. Set conditions using the expression builder or JavaScript to tailor responses based on specific criteria, ensuring your agent’s interactions are contextually appropriate and dynamic.
Enhanced Management with Improved Visibility
The Messages CMS provides a clear view of where each message is used, who last edited it, and when it was last updated. This enhanced visibility helps you manage your responses efficiently, track their usage across your projects, and ensure all team members are on the same page.
What’s Changed
With the release of Messages, we’ve made several notable improvements to enhance your experience:
  • Centralized Message Management: Manage all your responses from a single, unified interface.
  • Variants and Conditions: Create flexible message variants and set conditions for tailored responses.
  • Effortless Reusability: Easily reuse messages across workflows for consistency and efficiency.
  • Enhanced Visibility: Track where each message is used and manage updates efficiently.
We’re confident that Messages will significantly improve your AI agent design process, making it more efficient, scalable, and collaborative. Explore the new Messages feature today and experience the difference! Learn more about using Messages here. Thank you for being a valued member of the Voiceflow community. We look forward to seeing how you leverage Messages to create exceptional AI experiences.
August 15th, 2024
Improved

New Default Model: Claude 3 - Haiku

We’re excited to announce that the default model for new projects in Voiceflow has been updated to Claude 3 - Haiku. This change reflects our commitment to providing the most advanced and effective tools for creating AI agents.
What’s New:
  • Claude 3 - Haiku is now the default model in all new Voiceflow projects, offering enhanced performance, improved language understanding, and more nuanced conversational abilities.
This update does not affect existing projects, but you can manually switch to Claude 3 - Haiku within your project settings if you wish to take advantage of the latest models.
August 9th, 2024
Improved

Upcoming changes to the Set step

Upcoming Changes to the Set Step Data Format
As part of our ongoing efforts to improve the functionality of Voiceflow, we are introducing an update to the Set step data format: the transition from setV2 to set-v3 nodes. This update provides a more refined structure for the Set step, offering improved control and flexibility for setting variables in your projects.
Key Changes
Transition to set-v3 nodes: We are transforming the current setV2 nodes into set-v3 nodes. This migration ensures a more robust and flexible approach to variable management, with an updated structure that simplifies the process of setting variables within your projects.
Automatic Conversion: All existing setV2 nodes will be automatically converted into set-v3 nodes when a project is opened in-app. This conversion preserves all existing functionality while enabling you to leverage the new and improved system.
Technical Overview
The new set-v3 structure is outlined below.
Properties Affected:
  • Fields Added: None
  • Fields Modified: diagram.nodes[type=setV2] transformed into diagram.nodes[type=set-v3]
  • Fields Removed: None
Migration Example
To illustrate how the migration transforms setV2 nodes into set-v3 nodes, here are examples before and after the migration:
Before Migration:
{
  "diagram": {
    "nodes": {
      "642c35170660144e5e8e4042": {
        "type": "setV2",
        "data": {
          "name": "",
          "sets": [
            {
              "id": "random-id-1",
              "type": "advance",
              "variable": "set-variable1-id",
              "expression": "{{[variable1Name].variable1ID}}"
            },
            {
              "id": "random-id-2",
              "type": "advance",
              "label": "set-2-label",
              "variable": "set-variable2-id",
              "expression": "{{[variable1Name].variable1ID}} + {{[variable2Name].variable2ID}}"
            },
            {
              "id": "random-id-3",
              "type": "advance",
              "variable": "set-variable3-id",
              "expression": "\"just text\""
            }
          ],
          "title": "Overwrite Itinerary",
          "portsV2": {
            "byKey": {},
            "builtIn": {
              "next": {
                "type": "next",
                "target": null,
                "id": "642c35170660144e5e8e4043",
                "data": {
                  "points": [
                    {
                      "point": [171.50532091813443, 322.2586013515363],
                      "toTop": false,
                      "locked": false,
                      "reversed": false,
                      "allowedToTop": false
                    },
                    {
                      "point": [230.50532091813443, 322.2586013515363],
                      "toTop": false,
                      "locked": false,
                      "reversed": false,
                      "allowedToTop": false
                    },
                    {
                      "point": [230.50532091813443, 517.7585097988019],
                      "toTop": false,
                      "locked": false,
                      "reversed": false,
                      "allowedToTop": false
                    },
                    {
                      "point": [206.50532091813443, 517.7585097988019],
                      "toTop": false,
                      "locked": false,
                      "reversed": true,
                      "allowedToTop": false
                    }
                  ]
                }
              }
            },
            "dynamic": []
          }
        },
        "nodeID": "642c35170660144e5e8e4042"
      }
    }
  }
}

After Migration:
{
  "diagram": {
    "nodes": {
      "642c35170660144e5e8e4042": {
        "type": "set-v3",
        "data": {
          "name": "",
          "items": [
            {
              "id": "random-id-1",
              "type": "script",
              "value": [{ "variableID": "variable1ID" }],
              "variableID": "set-variable1-id"
            },
            {
              "id": "random-id-2",
              "type": "script",
              "label": "set-2-label",
              "value": [{ "variableID": "variable1ID" }, " + ", { "variableID": "variable2ID" }],
              "variableID": "set-variable2-id"
            },
            {
              "id": "random-id-3",
              "type": "script",
              "value": ["\"just text\""],
              "variableID": "set-variable3-id"
            }
          ],
          "label": "Overwrite Itinerary",
          "portsV2": {
            "byKey": {
              "next": {
                "type": "next",
                "target": null,
                "id": "642c35170660144e5e8e4043",
                "data": {
                  "points": [
                    {
                      "point": [171.50532091813443, 322.2586013515363],
                      "toTop": false,
                      "locked": false,
                      "reversed": false,
                      "allowedToTop": false
                    },
                    {
                      "point": [230.50532091813443, 322.2586013515363],
                      "toTop": false,
                      "locked": false,
                      "reversed": false,
                      "allowedToTop": false
                    },
                    {
                      "point": [230.50532091813443, 517.7585097988019],
                      "toTop": false,
                      "locked": false,
                      "reversed": false,
                      "allowedToTop": false
                    },
                    {
                      "point": [206.50532091813443, 517.7585097988019],
                      "toTop": false,
                      "locked": false,
                      "reversed": true,
                      "allowedToTop": false
                    }
                  ]
                }
              }
            },
            "builtIn": {},
            "dynamic": []
          }
        },
        "nodeID": "642c35170660144e5e8e4042"
      }
    }
  }
}
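The expression-to-value conversion visible in the example above (e.g. "{{[variable1Name].variable1ID}}" becoming [{ "variableID": "variable1ID" }]) can be sketched as a small parser. This is an illustrative reconstruction, not the actual migration code.

```typescript
// Illustrative sketch: convert a setV2 expression string such as
// "{{[variable1Name].variable1ID}} + {{[variable2Name].variable2ID}}"
// into the set-v3 value array of strings and { variableID } references.
type ValuePart = string | { variableID: string };

function expressionToValue(expression: string): ValuePart[] {
  const parts: ValuePart[] = [];
  // Matches {{[variableName].variableID}} and captures the variableID.
  const pattern = /\{\{\[[^\]]*\]\.([^}]+)\}\}/g;
  let lastIndex = 0;
  let match: RegExpExecArray | null;
  while ((match = pattern.exec(expression)) !== null) {
    if (match.index > lastIndex) {
      // Keep literal text between variable references as plain strings.
      parts.push(expression.slice(lastIndex, match.index));
    }
    parts.push({ variableID: match[1] });
    lastIndex = match.index + match[0].length;
  }
  if (lastIndex < expression.length) parts.push(expression.slice(lastIndex));
  return parts;
}
```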

Next Steps
As we approach the release date for this migration, we encourage all users to familiarize themselves with these changes to make the most of the enhanced Set step functionality. If you have any questions or need assistance, our support team is ready to help. Stay tuned for the release!
August 6th, 2024
Update

Changes to Javascript step behavior

Executing “untrusted” code is tricky: bad actors can write malicious code and potentially access sensitive data. The current JavaScript step setup is a security risk, and we want to move off of it at the earliest opportunity. Fortunately, there is a new backend that is both secure and more performant: it’s now 70-94% faster (more on that in another post).
All JavaScript steps created after July 30th, 2024 already use the new system automatically. We’ll be gradually converting existing JavaScript steps to the new system, with a cutoff of August 16th, 2024.
The goal of the JavaScript step is to provide quick scripting for manipulating variables, rather than to act as a heavy-load serverless function with networking. For the latter, use Functions.
No action is needed on your end unless you use the patterns below. We’ve monitored all JavaScript step errors for the past week, running the new and old backends in parallel, to categorize the impact. For the select few users who are affected, we will be proactively reaching out. All breaking changes we’ve observed are documented here, so they aren’t reimplemented in the future.
Major breaking changes
requireFromUrl
The JavaScript step used to support requireFromUrl(), which let users load third-party libraries (commonly moment, lodash, and other utilities) via URL. This method is an anti-pattern with major security risks, and the new backend does not support it.
// example usage of requireFromUrl
const moment = requireFromUrl("https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.29.1/moment.min.js");

time = moment().add(1, 'hours');
This will no longer work after the cutoff; the JavaScript step will go down the fail port with a debug message saying "ReferenceError: fetch is not defined".
Node.js modules: Buffer
Our new backend does not run on Node.js, but rather on a V8 isolate. Calls specific to Node.js modules, rather than the JavaScript/ECMAScript standard, will no longer be supported.
// example of using nodeJS modules
let buff = new Buffer(token, 'base64');
name = buff.toString('ascii');
This will no longer work after the cutoff; the JavaScript step will go down the fail port with a debug message saying "ReferenceError: [module] is not defined". There are low-level alternatives that replicate the behavior of Node.js utilities, and this is something LLMs excel at helping convert (be sure to test, of course!). If you are using Buffer for base64 encoding/decoding, you can easily polyfill an atob or btoa function:
function atob(e){let r="";if((e=e.replace(/=+$/,"")).length%4==1)throw Error("Invalid base64 string");for(let t=0,n,a,o=0;a=e.charAt(o++);~a&&(n=t%4?64*n+a:a,t++%4)&&(r+=String.fromCharCode(255&n>>(-2*t&6))))a="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/".indexOf(a);return r}
function btoa(t){let a="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",h="",r=0;for(;r<t.length;){let e=t.charCodeAt(r++),$=r<t.length?t.charCodeAt(r++):Number.NaN,c=r<t.length?t.charCodeAt(r++):Number.NaN;h+=a.charAt(e>>2&63),h+=a.charAt((e<<4|$>>4&15)&63),h+=isNaN($)?"=":a.charAt(($<<2|c>>6&3)&63),h+=isNaN(c)?"=":a.charAt(63&c)}return h}

// converts base64 to ascii, equivalent of Buffer.from(base64ref, 'base64')
atob(base64ref)

// converts ascii to base64
btoa("hello world")
JSON.stringify(this)
The this keyword now has a circular reference, and calling JSON.stringify(this) will result in recursion:
TypeError: Converting circular structure to JSON
--> starting at object with constructor 'global'
--- property 'global' closes the circle
You can still use this itself.
Other minor changes
DateTimeFormat
The Intl.DateTimeFormat() constructor is not available in the new backend (see the MDN documentation), so new Date(A).toLocaleTimeString(B, { timeZone, timeZoneName, timeStyle }) would crash.
fetch
The fetch() command already doesn’t work today (we don’t really support await or async actions), but after the cutoff the JavaScript step will go down the fail port with a debug message saying "ReferenceError: fetch is not defined".
Resource changes
The new backend has the same timeout but will have a defined limit on CPU and memory. In our monitoring, no existing steps are affected by these limits.
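If you relied on JSON.stringify(this) to dump state, a circular-safe replacer avoids the TypeError. This is a generic sketch, not a Voiceflow API:

```typescript
// Sketch: stringify a value while replacing circular references (such as
// the new `global` self-reference on `this`) with a placeholder string.
function safeStringify(value: unknown): string {
  const seen = new WeakSet<object>();
  return JSON.stringify(value, (_key, val) => {
    if (typeof val === "object" && val !== null) {
      if (seen.has(val)) return "[Circular]"; // break the cycle
      seen.add(val);
    }
    return val;
  });
}

// Usage: an object that references itself no longer throws.
const state: Record<string, unknown> = { a: 1 };
state.self = state;
const dump = safeStringify(state);
```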
August 5th, 2024
Deprecated

Sunset Announcement: Alexa Hosting

Effective Date: October 30th, 2024
As AI agents continue to evolve, we have observed a significant shift towards building on Web chat and deploying on channels like WhatsApp, Slack, and Discord using our Dialog Manager API. To better align with this trend and allocate our resources effectively, we have decided to discontinue our direct hosting integration with Alexa.
Impact
  • Existing Alexa skills managed and deployed on Voiceflow will no longer function.
  • We strongly encourage you to migrate your skills directly to Alexa, as we will not be providing updates or support for our Alexa integration.
Support
We understand this transition may be challenging for some of our customers. Our team is here to support you during this period. If you have any questions or concerns, please reach out to us. Additionally, we have a growing Discord community with tutorials and community members ready to assist you.
Thank you for your understanding and continued support. We look forward to working with you to build amazing AI agents on our enhanced platform.
July 30th, 2024
Added

Added support for GPT-4o mini

We are excited to announce the addition of GPT-4o mini to our platform. This integration brings new capabilities and improvements to enhance your AI agent experience. Below are the details of this update:
New Features:
  • GPT-4o mini Integration: Our platform now supports GPT-4o mini, providing exceptional cost-efficiency and speed. This compact model is perfect for many of your agent tasks.
Improvements:
  • Cost Efficiency: Enjoy significant cost savings with GPT-4o mini with a 0.2x token multiplier.
  • Faster Performance: Experience quicker response times, ensuring your AI agents perform efficiently.
We are committed to continuously improving our platform and providing you with the best tools available. Thank you for your continued support!
July 23rd, 2024
Deprecated

Sunset Announcement: FAQ API

We are announcing the sunset of the FAQ API, effective August 6th, 2024. This decision is based on user feedback and our commitment to providing the best tools for your conversational AI needs. We believe that our new feature for uploading tabular data to the Knowledge Base is a more efficient and versatile solution for managing FAQs. To help you transition smoothly, we have created a short video demonstrating how to use this new feature. You can watch the video below. We appreciate your understanding and cooperation as we make this transition. Please reach out to our support team if you have any questions or need assistance.
July 23rd, 2024
Improved

Introducing the New Voiceflow Docs

We’re proud to announce Voiceflow’s newly revamped documentation. You can find it at docs.voiceflow.com. This upgrade seeks to streamline all learning resources to help you use Voiceflow’s designer tools as well as the APIs. Below are the details of this update:
  • Unified documentation: The old learn.voiceflow.com and developer.voiceflow.com sites have had all their articles merged into our new domain, docs.voiceflow.com. This is now the only site you’ll need to visit to access all of Voiceflow’s documentation. No more separate designer and developer documentation splits. Using the old domains will still redirect you to docs.voiceflow.com. Links inside the Voiceflow platform have also been updated.
  • New organization: We’ve reorganized our documentation into Building, Deploying, and Improving Agents, as well as new docs and guides for Getting Started and Improving agents.
  • Updated documentation: Many new articles have been written to teach you more about the state of the modern Voiceflow platform.
  • New API docs and guides: Our API docs have been restructured, with new conceptual articles added and a guide for Getting Started with APIs.
  • Moved changelogs: We’ve also brought over our changelogs from the old website (changelog.voiceflow.com). The link will still work to reach the changelogs.
You can learn more about our new docs in this video. We are committed to continuously improving our resources and providing you with the best documentation available. Thank you for your continued support!
July 9th, 2024
Deprecated

Sunset Announcement: Native Integrations for WhatsApp, Twilio SMS, and Microsoft Teams

Effective August 12th, we will be sunsetting our native integrations for WhatsApp, Twilio SMS, and Microsoft Teams. The project types for these channels will remain accessible, but all active connections to these channels will be severed, and it will no longer be possible to publish new projects or maintain existing ones on these platforms.We understand this change may impact your workflows, and we are here to support you through this transition. Please explore our other integration options or contact our support team for assistance in adapting your projects.Thank you for your understanding and continued support.
July 9th, 2024
Added

Added support for Gemini 1.5 Pro

We are excited to announce the addition of Gemini 1.5 Pro, the first Google model we support, to our platform. This integration brings new capabilities and improvements to enhance your experience. Below are the details of this update:
New Features:
  • Gemini 1.5 Pro Integration: Our platform now supports Gemini Pro 1.5, offering enhanced performance and a broader range of functionalities. This model brings advanced natural language understanding and generation, making it ideal for various applications.
Improvements:
  • Enhanced Accuracy: With Gemini 1.5 Pro, you can expect improved accuracy in natural language processing tasks, leading to more precise and reliable outputs.
  • Faster Response Times: Enjoy quicker response times, thanks to the optimized performance of Gemini 1.5 Pro.
  • Broader Language Support: Gemini 1.5 Pro offers support for more languages, providing a better experience for multilingual users.
We are committed to continuously improving our platform and providing you with the best tools available. Thank you for your continued support!
July 8th, 2024
Added

More visibility with new updates to the Agent CMS

We are thrilled to introduce new management tools that enhance your ability to oversee and scale your AI agents. Our latest update offers improved visibility into the connections between your agent data and project workflows and components, making it easier to understand, manage, and grow your AI projects effectively.
What’s New:
Intents, Components, and Functions tables
  • A new “Used By” column has been added to these tables.
  • This column displays which workflows or components are utilizing each resource.
  • Each item in the dropdown is a direct link.
  • Easily navigate to the referenced workflow or component for quick access and management.
Benefits:
Improved Visibility
  • Understand how all parts of your agent design come together at a glance.
  • Quickly identify dependencies and interconnections between resources.
July 4th, 2024
Improved

Set Step Redesign

We are excited to announce the redesign of the Set Step in Voiceflow! The updated Set Step now includes:
  • Improved Interface: A more intuitive design for easier navigation and configuration.
  • New error messages: We’ve revamped our error messages for the value input to make debugging easier.
  • Labels: Option to rename and label individual variable sets for better organization and clarity.
  • Improved canvas visibility: More of your configuration is visible directly on the step.
July 3rd, 2024
Update

Upcoming Release: Migration of the Text step to Message step

Introducing the new Message step
We’re excited to announce a significant update coming in July: the introduction of the new Message step and CMS in Webchat projects. This update lets users manage all their agent text responses in a central location, re-use the same message across multiple steps in an agent, and add conditions directly to response variants. As part of this update, we will automatically migrate all Text steps to the new Message step.
Key Changes
Introduction of Message steps: We are introducing a new type of step called Message, along with a new Messages CMS page in webchat projects, to manage all your agent text messages. All webchat and voice projects will contain a new property called responseMessages that replaces the existing responseVariants.
Automatic Conversion: All existing Text steps will be automatically converted into Message steps when a project is opened in-app. This migration preserves all existing functionality while providing the benefits of the new system.
Technical Overview
The new Message step structure is outlined below.
Webchat Projects - Properties Affected:
  • Fields Added: responseMessages
  • Fields Modified: response now has a new field called type
  • Fields Removed: responseVariants, responseAttachments, and nodes with type text
Voice Projects - Properties Affected:
  • Fields Added: responseMessages
  • Fields Modified: response now has a new field called type
  • Fields Removed: responseVariants and responseAttachments
Additional Notes:
  • On Webchat projects, all Text steps are now Messages. Text steps will gain a few new features, and all text data will no longer be stored in the node itself but in a new field called response/responseMessages.
  • On both Webchat and voice projects, requiredEntities re-prompts will be attached to response/messages instead of response/variants (variants will be sunset as well).
Example of Message Node Definition
{
  "type": "message",
  "data": {
    "name": "",
    "draft": false,
    "portsV2": {
      "byKey": {
        "next": {
          "type": "next",
          "target": null,
          "id": "unique-port-id"
        }
      },
      "builtIn": {},
      "dynamic": []
    },
    "messageID": "unique-message-id"
  },
  "nodeID": "unique-node-id"
}
Updated Response Structure
{
  "responses": [
    {
      "id": "unique-response-id",
      "name": "Required entity reprompt",
      "createdByID": 22,
      "folderID": null,
      "type": "message",
      "draft": false,
      "createdAt": "2024-07-03T02:29:25.000Z",
      "updatedAt": "2024-07-03T02:29:25.000Z",
      "updatedByID": 22
    }
  ],
  "responseMessages": [
    {
      "id": "unique-message-id-1",
      "discriminatorID": "unique-discriminator-id-1",
      "text": [
        {
          "text": [
            "my reprompt"
          ]
        }
      ],
      "condition": null,
      "delay": null,
      "createdAt": "2024-07-03T01:30:53.000Z"
    },
    {
      "id": "unique-message-id-2",
      "discriminatorID": "unique-discriminator-id-2",
      "text": [
        {
          "text": [""]
        }
      ],
      "condition": {
        "type": "expression",
        "assertions": [...]
      },
      "delay": 10000,
      "createdAt": "2024-07-03T01:19:22.000Z"
    }
  ],
  "responseDiscriminators": [
    {
      "id": "unique-discriminator-id-1",
      "channel": "default",
      "language": "en-us",
      "responseID": "unique-response-id",
      "variantOrder": [
        "unique-variant-id"
      ],
      "createdAt": "2024-07-03T02:29:25.000Z"
    }
  ]
}
Migration Example

To illustrate how the migration transforms Text nodes into Message nodes, here are examples before and after the migration.

Webchat Example

Before Migration:
{
  "type": "text",
  "data": {
    "name": "Text",
    "texts": [
      {
        "id": "text-id-1",
        "content": [{ "children": [{ "text": "My root message " }] }],
        "messageDelayMilliseconds": 10000
      },
      {
        "id": "text-id-2",
        "content": [{ "children": [{ "text": "My first variant" }] }]
      }
    ],
    "canvasVisibility": "preview",
    "portsV2": {
      "byKey": {},
      "builtIn": {
        "next": {
          "type": "next",
          "target": null,
          "id": "unique-port-id"
        }
      },
      "dynamic": []
    }
  },
  "nodeID": "unique-node-id"
}
After Migration:
{
  "type": "message",
  "data": {
    "name": "",
    "draft": false,
    "portsV2": {
      "byKey": {
        "next": {
          "type": "next",
          "target": null,
          "id": "unique-port-id"
        }
      },
      "builtIn": {},
      "dynamic": []
    },
    "messageID": "unique-message-id"
  },
  "nodeID": "unique-node-id"
}
  1. All text data will be saved into responses (responses/discriminator/messages) instead of node data. In message node data, there will only be messageID which references response.id.
  2. repromptID in required entities is a reference to response.id, and its text data is now saved under responseMessages instead of responseVariants.
  3. draft in the message node data indicates that your message step has no content; a draft response is not displayed in the CMS until content is added. Since this step has content, draft is false.
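To make the new reference chain concrete, here is a small illustrative helper (our own sketch, not part of any Voiceflow SDK) that resolves a Message node's display text by following messageID → response → discriminator → messages, per the structure above:

```python
def resolve_message_texts(project, message_node, channel="default", language="en-us"):
    """Follow node.messageID -> responses[].id -> responseDiscriminators[].responseID
    -> responseMessages[].discriminatorID to collect the text to display."""
    # Per the notes above, messageID in the node data references response.id.
    response_id = message_node["data"]["messageID"]
    # Find the discriminator(s) matching this response, channel, and language.
    discriminators = [
        d for d in project["responseDiscriminators"]
        if d["responseID"] == response_id
        and d["channel"] == channel
        and d["language"] == language
    ]
    texts = []
    for disc in discriminators:
        for msg in project["responseMessages"]:
            if msg["discriminatorID"] == disc["id"]:
                # Each message stores text as a list of {"text": [...]} blocks.
                for block in msg["text"]:
                    texts.extend(block["text"])
    return texts
```

This is only a reading of the exported data shape; the in-app runtime also applies each message's condition and delay, which the sketch ignores.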
Voice Example

Before Migration:
{
	...
  "requiredEntities": [
    {
      "id": "6684b7854986c834e67ee772",
      "entityID": "6642a6fee94258e65e603fdd",
      "intentID": "6642aa05e94258e65e603fe0",
      "repromptID": "6684b7854986c834e67ee76f",
      "createdAt": "2024-07-03T02:29:25.000Z"
    }
  ],
  "responses": [
    {
      "id": "6684b7854986c834e67ee76f",
      "name": "Required entity reprompt",
      "createdByID": 22,
      "folderID": null,
      "type": "message",
      "draft": false,
      "createdAt": "2024-07-03T02:29:25.000Z",
      "updatedAt": "2024-07-03T02:29:25.000Z",
      "updatedByID": 22
    }
  ],
  "responseDiscriminators": [
    {
      "id": "6684b7854986c834e67ee770",
      "channel": "default",
      "language": "en-us",
      "responseID": "6684b7854986c834e67ee76f",
      "variantOrder": [
        "6684b7854986c834e67ee771"
      ],
      "createdAt": "2024-07-03T02:29:25.000Z"
    }
  ],
  "responseVariants": [
    {
      "id": "6684b7854986c834e67ee771",
      "conditionID": null,
      "discriminatorID": "6684b7854986c834e67ee770",
      "attachmentOrder": [],
      "text": [
        {
          "text": [
            "my reprompt"
          ]
        }
      ],
      "speed": null,
      "cardLayout": "carousel",
      "createdAt": "2024-07-03T02:29:25.000Z",
      "type": "text"
    }
  ],
  "responseAttachments": [...]
}
After Migration
{
	...
  "requiredEntities": [
    {
      "id": "6684b7854986c834e67ee772",
      "entityID": "6642a6fee94258e65e603fdd",
      "intentID": "6642aa05e94258e65e603fe0",
      "repromptID": "6684b7854986c834e67ee76f",
      "createdAt": "2024-07-03T02:29:25.000Z"
    }
  ],
  "responses": [
    {
      "id": "6684b7854986c834e67ee76f",
      "name": "Required entity reprompt",
      "createdByID": 22,
      "folderID": null,
      "type": "message",
      "draft": false,
      "createdAt": "2024-07-03T02:29:25.000Z",
      "updatedAt": "2024-07-03T02:29:25.000Z",
      "updatedByID": 22
    }
  ],
  "responseMessages": [
    {
      "id": "6684a9cc3ad70f045227d820",
      "discriminatorID": "6684a9cc3ad70f045227d81f",
      "text": [
        {
          "text": [
            "my reprompt"
          ]
        }
      ],
      "condition": null,
      "delay": null,
      "createdAt": "2024-07-03T01:30:53.000Z"
    },
    {
      "id": "6684a71a3ad70f045227d810",
      "discriminatorID": "6684a7173ad70f045227d807",
      "text": [
        {
          "text": [
            ""
          ]
        }
      ],
      "condition": {
	      "type": "expression",
	      "assertions": [...]
      },
      "delay": 10000,
      "createdAt": "2024-07-03T01:19:22.000Z"
    }
  ],
  "responseDiscriminators": [
    {
      "id": "6684a9cc3ad70f045227d81f",
      "channel": "default",
      "language": "en-us",
      "responseID": "6684b7854986c834e67ee76f",
      "variantOrder": [
        "6684b7854986c834e67ee771"
      ],
      "createdAt": "2024-07-03T02:29:25.000Z"
    }
  ]
}
  1. repromptID in required entities is a reference to response.id, and its text data is now saved under responseMessages instead of responseVariants.
  2. draft in the message node data indicates that your message step has no content; a draft response is not displayed in the CMS until content is added. Since this step has content, draft is false.
Next Steps

As we approach the completion date of this project, we will announce the exact release date for this migration. We encourage all users who rely on the Voiceflow project export file to familiarize themselves with these changes to leverage the improved functionality and design flexibility in their agents. As always, our support team is available to assist with any questions or issues related to this migration.
July 2nd, 2024
Added

Knowledge Base: Tabular Data, Metadata, and Query Filtering

We are excited to announce a powerful new feature in Voiceflow’s Knowledge Base: the ability to handle tabular data, enrich all data sources with metadata, and perform advanced query filtering. This enhancement provides our users with more flexibility, precision, and control over their AI agents, ensuring they deliver the most relevant and contextually accurate responses.

Key Features

Tabular Data Support: With the introduction of tabular data support, users can now upload structured data in table format. This is ideal for scenarios where information is best represented in rows and columns, such as product catalogs, inventories, or detailed records. This structured format allows for more organized and efficient data management within your Voiceflow agents.

Metadata Enrichment: Metadata is a critical component for enhancing the context and relevance of your data. You can now attach key-value pairs to all your data sources, providing additional layers of information that can be used to fine-tune your AI agent’s responses. Whether it’s categorizing products, tagging items with specific attributes, or adding detailed descriptions, metadata makes your data more insightful and searchable.

Advanced Query Filtering: Our new query filtering capabilities enable users to perform precise searches within their data using a robust set of operators. You can apply various filters to your data, such as:
  • Equality and Comparison: $eq, $ne, $gt, $gte, $lt, $lte
  • Array Operations: $in, $nin, $all
  • Logical Operators: $and, $or
This means you can now create complex queries to retrieve exactly what you need, whether it’s finding products under a certain price, filtering by multiple tags, or combining multiple conditions.

How to Use

Inserting Metadata to Your Data Sources

Voiceflow supports adding metadata to your data sources through three methods: FILE upload, URL upload, and TABLE upload. Depending on the type of document being uploaded, follow the respective method below:
  1. FILE Upload
curl --request POST \
  --url 'https://api.voiceflow.com/v1/knowledge-base/docs/upload?overwrite=true' \
  --header 'Authorization: YOUR_DM_API_KEY' \
  --header 'Content-Type: multipart/form-data' \
  --form 'file=@/path/to/your/file.pdf' \
  --form 'metadata={"inner": {"text": "some test value", "price": 5, "tags": ["t1", "t2", "t3", "t4"]}}'
  2. URL Upload
curl --request POST \
  --url 'https://api.voiceflow.com/v1/knowledge-base/docs/upload' \
  --header 'Authorization: YOUR_DM_API_KEY' \
  --header 'Content-Type: application/json' \
  --data '{
    "data": {
        "type": "url",
        "url": "https://example.com/",
        "metadata": {"test": 5}
    }
}'
  3. TABLE Upload
curl --request POST \
  --url 'https://api.voiceflow.com/v1/knowledge-base/docs/upload/table' \
  --header 'Authorization: YOUR_DM_API_KEY' \
  --header 'Content-Type: application/json' \
  --data '{
    "data": {
        "name": "products",
        "schema": {
            "searchableFields": ["name", "description"],
            "metadataFields": ["developer", "project"]
        },
        "items": [
            {
                "name": "example_name",
                "description": "example_description",
                "developer": {
                    "name": "Jane Doe",
                    "level": "senior",
                    "skills": ["Python", "JavaScript"],
                    "languages": [
                        {
                            "name": "Russian"
                        },
                        {
                            "name": "German"
                        }
                    ]
                },
                "project": {
                    "name": "AI Development",
                    "deadline": "2024-12-31"
                }
            }
        ]
    }
}'
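If you are scripting uploads, it can help to validate the TABLE payload before sending it. The helper below is our own sketch (not a Voiceflow SDK function): it builds the request body shown above and checks that every field named in the schema actually appears in each item, so a typo fails locally rather than at the API.

```python
import json

def build_table_payload(name, searchable_fields, metadata_fields, items):
    """Build the JSON body for the TABLE upload endpoint and sanity-check
    that every schema field is present on every item."""
    for item in items:
        for field in searchable_fields + metadata_fields:
            if field not in item:
                raise ValueError(f"item missing schema field: {field}")
    return json.dumps({
        "data": {
            "name": name,
            "schema": {
                "searchableFields": searchable_fields,
                "metadataFields": metadata_fields,
            },
            "items": items,
        }
    })
```

The resulting string can then be POSTed to the `/v1/knowledge-base/docs/upload/table` endpoint with your Authorization header, exactly as in the curl example above.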
Practical Query Examples

Here are some practical examples to demonstrate the power of query filtering using the Query API:
  • Match All Specific Tags: Identify chunks that include every specified tag in the list.
{  
    "filters": {  
        "inner.tags": {  
            "$all": ["t1", "t2"]  
        }  
    }  
}
  • Match Any of the Specified Tags: Find chunks containing any of the tags listed.
{  
    "filters": {  
        "inner.tags": {  
            "$in": ["t1", "t2"]  
        }  
    }  
}
  • Exclude Chunks With Certain Tags: Filter out chunks that include any of the tags specified.
{
    "filters": {
        "developer.tags": {
            "$nin": ["t1", "t2"]
        }
    }
}

  • Combination of Conditions: Search for chunks that either contain a specific tag or match a text value exactly.
{
    "filters": {
        "$or": [
            {
                "developer.tags": {
                    "$in": ["t1"]
                }
            },
            {
                "developer.name": {
                    "$eq": "Jane Doe"
                }
            }
        ]
    }
}
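The operators follow MongoDB-style semantics. As an illustration only (Voiceflow evaluates these filters server-side; this is not its implementation), here is a minimal Python evaluator for the subset of operators shown above, which makes the matching behaviour of each example explicit:

```python
def get_path(doc, dotted):
    # Resolve "inner.tags"-style dotted paths against nested metadata.
    for key in dotted.split("."):
        doc = doc[key]
    return doc

def _contains_any(value, arg):
    # For list-valued metadata, match if any element is in arg;
    # for scalars, match if the value itself is in arg.
    if isinstance(value, list):
        return any(x in arg for x in value)
    return value in arg

OPS = {
    "$eq":  lambda v, a: v == a,
    "$ne":  lambda v, a: v != a,
    "$gt":  lambda v, a: v > a,
    "$gte": lambda v, a: v >= a,
    "$lt":  lambda v, a: v < a,
    "$lte": lambda v, a: v <= a,
    "$in":  _contains_any,
    "$nin": lambda v, a: not _contains_any(v, a),
    "$all": lambda v, a: all(x in v for x in a),
}

def matches(doc, filters):
    """Return True if the metadata dict `doc` satisfies `filters`."""
    for key, cond in filters.items():
        if key == "$and":
            if not all(matches(doc, f) for f in cond):
                return False
        elif key == "$or":
            if not any(matches(doc, f) for f in cond):
                return False
        else:
            value = get_path(doc, key)
            for op, arg in cond.items():
                if not OPS[op](value, arg):
                    return False
    return True
```

For example, against metadata `{"inner": {"tags": ["t1", "t2", "t3"]}}`, the `$all: ["t1", "t2"]` filter above matches, while `$nin: ["t1", "t2"]` does not.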

Important Update

We have also updated the route for document upload from https://api.voiceflow.com/v3alpha/knowledge-base/docs/upload to https://api.voiceflow.com/v1/knowledge-base/docs/upload. The v3alpha version is still operational, but we recommend moving over to the new URL for improved performance and support. Stay tuned for more updates and happy agent building!
June 12th, 2024
Added

Introducing the Triggers Step: Elevate Your Workflow Designs

Today we’re introducing the new Triggers step, designed to make it easy to add and manage one or many triggers when developing your agent workflows. This update is part of our ongoing effort to help all Voiceflow users develop their agents for scale by default.

Replacing the Intent Step

With the introduction of the Triggers step, we have migrated all the settings from any existing Intent steps into the new format. The new Triggers step offers a more robust and flexible way to handle multiple user intents within a single Workflow, enhancing both the design and functionality of your agents.

Why This Matters

Our goal with these changes is to simplify the design process and support design best practices. By defining each Workflow as a response to a single user intent and allowing for multiple triggers within that framework, we aim to:
  • Enhance Clarity: Users will have a better understanding of how to build efficient Workflows that address user intents effectively.
  • Improve Scalability: These best practices will help users build more complex agents that can scale and adapt to new use cases and collaborations.
Key Changes at a Glance
  • Single Entry Point per Workflow: One entry point ensures clarity and simplicity in Workflow design.
  • Commands Deprecated: We are deprecating Commands, so they will no longer be an option on new projects.
  • Right-Click to Add Triggers step: Users can still add multiple entry points if necessary, but the default UX encourages a single entry point.
  • Updated Start Chip: Users can add additional triggers directly on the start chip, offering more flexibility in design.
  • Removed Event Section: The Event section has been removed from the step toolbar. You can still add additional triggers to the canvas through the right-click canvas menu.
Stay tuned for more information about designing your AI agent for scale, taking advantage of the new tools in Voiceflow.
May 28th, 2024
Improved

Update: Improved Navigation and Export Options 🧭

We’ve made some updates to our share and export options:
  • Simplified Navigation: We have removed the Share button from the CMS to streamline your navigation. You can now find all export, share prototype, or invite members options under the Agent menu.
  • Enhanced Export Functionality: You now have more control over what you export. We’ve added options to specify the workflow or component you want to export when selecting PNG and PDF formats. This allows for more precise and tailored exports.
May 24th, 2024
Added

We’ve added support for GPT 4o!

GPT-4o is now available for use in AI steps and Knowledge Base. Some benefits of this model include:
  • Faster responses than GPT-4 Turbo, with matched performance for English text and code
  • Significant improvement for non-English languages
  • 50% cheaper than Turbo
Check it out in the model options menu on AI Steps and the Knowledge Base!
May 16th, 2024
Added

Introducing Workflows: Organize and Scale Your AI Agents

We are thrilled to announce the release of Workflows, our latest system designed to improve the way you organize and manage AI agents. At Voiceflow, our mission is to enable you to build scalable, efficient, and collaborative AI solutions, and Workflows is a significant step forward in achieving that goal.

Watch the Video Walkthrough

Streamlined Organization and Management

Workflows provides a clear and intuitive organization structure, replacing our previous system of domains, topics, and subtopics. All your existing designs have been automatically migrated, ensuring no data is lost while giving you immediate access to new features. In the new Workflows page within the Agent CMS, you can organize your agents by use case, team, or any structure that suits your needs, using the new folder directory system. This high-level visibility allows you to monitor each workflow’s progress without needing to navigate to the canvas. Detailed descriptions, intent triggers, and project management features like status and assignee columns provide comprehensive context at a glance.

Enhanced Collaboration

Collaboration is at the heart of Workflows. You can now assign workflows to team members and set statuses to track progress through the development process. This ensures everyone is on the same page and tasks are clearly defined and managed as you scale the complexity of your AI Agent.

Improved Navigation and Usability

Navigating between workflows has never been easier. You can access the canvas directly from the Workflows table or use the new design navigation that mirrors your CMS structure. We’ve also introduced a “Return to Designer” button, allowing you to jump back to your canvas from any page, making your experience more cohesive and streamlined.

What’s Changed

With the release of Workflows, we’ve made some other notable changes to improve design practices and reduce visual clutter:
  • Removed Intent Steps from Components: To promote best practices in agent design, we have removed the ability to add intent steps directly within Components.
  • Cleaner Canvas View: The share button and agent navigation menu have been relocated to the Agent CMS, reducing clutter on the canvas and keeping your focus on designing agent workflows.
Overview of Project Data Structure Changes

As part of our transition to the new Workflows system, we are making modifications to the project data structure within Voiceflow. These changes are designed to enhance functionality, improve navigation, and simplify management. Below is a detailed breakdown of what these changes entail:
  1. New workflows field:
We are introducing a new workflows field in the .vf project data file. This field will encapsulate a variety of properties essential for managing and organizing your agents:
{  
    workflows: [  
        {  
            status: "to_do" | "complete" | "in_progress" | null,  
            id: string,  
            createdAt: string,  
            updatedAt: string,  
            updatedByID: number,  
            name: string,  
            folderID: string | null,  
            createdByID: number,  
            description: string | null,  
            diagramID: string,  
            isStart: boolean,  
            assigneeID: number | null,  
            assistantID?: string | undefined,  
            environmentID?: string | undefined  
        }  
    ];  
}
  2. Node Reference Modifications:
  • Replacing goToDomain Nodes: The existing goToDomain nodes within the diagram.nodes will be replaced with goToNode. The goToNode.nodeID will reference the workflow that corresponds to what the domain’s start step previously pointed to.
  • Updating goToNode References: All goToNode nodes that previously referenced a domain’s start block (with the exception of the root/home domain) will be updated to reference the new workflow steps accordingly.
  • Removal of Start Steps: All start steps, except for those in the root/home diagram, will be removed. This simplification is part of our broader effort to streamline navigation and reduce redundancy within the agent design environment.
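The goToDomain → goToNode rewrite above can be sketched as a single pass over a diagram's nodes. The node shapes below are deliberately simplified and hypothetical (real .vf nodes also carry ports and other data, and the actual migration is performed by Voiceflow, not by you); the sketch only illustrates the reference rewrite:

```python
def migrate_goto_domain(nodes, domain_start_to_workflow_node):
    """Replace goToDomain nodes with goToNode references.

    domain_start_to_workflow_node maps each domain's old start-step ID to
    the node ID of the workflow that replaces that domain (a hypothetical
    lookup table standing in for the real migration's bookkeeping).
    """
    migrated = []
    for node in nodes:
        if node["type"] == "goToDomain":
            migrated.append({
                "type": "goToNode",
                "nodeID": node["nodeID"],
                # Point at the workflow corresponding to what the domain's
                # start step previously pointed to.
                "data": {
                    "goToNodeID": domain_start_to_workflow_node[
                        node["data"]["startStepID"]
                    ]
                },
            })
        else:
            # All other nodes pass through unchanged.
            migrated.append(node)
    return migrated
```

Start steps outside the root/home diagram are removed in a separate pass, which this sketch does not show.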
Learn More

For detailed instructions and more information, please refer to the Workflows feature documentation.

Your Feedback Matters

We are excited about the possibilities Workflows will unlock for your AI projects and look forward to hearing your feedback. This release marks a new chapter in AI agent management, bringing us closer to our vision of providing the world’s best agent creation tool. Thank you for being part of our journey.