You can now upload CSV files directly into your Knowledge Base. Voiceflow supports both .csv and .xlsx file formats.

This makes it easy to turn structured data—like FAQs, product catalogs, policies, pricing tables, or support logs—into instantly usable knowledge for your agent without manual copy-pasting or reformatting.

Your agent can now read rows as discrete knowledge entries, reference specific fields as context, and answer questions grounded in large, structured datasets.
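To illustrate the "rows as discrete knowledge entries" idea, here is a minimal sketch in Python that parses CSV text and turns each row into a standalone entry with its fields preserved. The entry shape (`id`, `fields`, `text`) is purely illustrative, not Voiceflow's internal format.

```python
import csv
import io

def rows_to_entries(csv_text, source_name):
    """Turn each CSV row into a discrete knowledge entry.

    Column headers are kept as named fields so an agent could reference
    a specific field as context. The dict shape here is an assumption
    for illustration, not the product's actual schema.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    entries = []
    for i, row in enumerate(reader, start=1):
        entries.append({
            "id": f"{source_name}#row{i}",
            "fields": dict(row),
            # A flat text rendering is a common way to ground retrieval.
            "text": "; ".join(f"{k}: {v}" for k, v in row.items()),
        })
    return entries

faq = "question,answer\nWhat is the return window?,30 days\nDo you ship abroad?,Yes"
entries = rows_to_entries(faq, "faq.csv")
print(entries[0]["text"])  # -> question: What is the return window?; answer: 30 days
```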


Drive measurable agent improvements with objective, repeatable scoring by evaluating conversations against predefined outcome options instead of subjective criteria.

  1. Define a set of options and the true/expected state for each
  2. Choose which LLM audits the transcript against these options
  3. Color-code options for fast visual scanning
  4. Control which options are included in reporting

How do I use this feature?

  1. Navigate to the evaluations tab under transcripts
  2. Create new evaluation (top right)
  3. From the metric dropdown, select "Options"
  4. Define a list of pre-determined options
    1. Provide a general description and true description for each option
    2. Select a color for each option
    3. Test on last transcript to see how it works (button at bottom of modal)
  5. Create evaluation
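The setup steps above amount to handing an LLM a fixed menu of outcomes. As a rough sketch, here is how the option definitions (name, general description, "true" description) might be assembled into an audit prompt; the prompt wording and option fields are illustrative assumptions, not Voiceflow's actual internals.

```python
def build_audit_prompt(transcript, options):
    """Assemble a prompt asking an LLM to pick exactly one predefined
    outcome option for a transcript. Illustrative sketch only."""
    lines = ["Classify the transcript into exactly one option:"]
    for opt in options:
        lines.append(
            f"- {opt['name']}: {opt['description']} "
            f"(true when: {opt['true_description']})"
        )
    lines.append("Transcript:")
    lines.append(transcript)
    lines.append("Answer with only the option name.")
    return "\n".join(lines)

# Hypothetical option set mirroring the fields configured in the modal.
options = [
    {"name": "Resolved", "description": "Issue fully handled",
     "true_description": "user confirms the problem is fixed",
     "color": "green", "include_in_reporting": True},
    {"name": "Escalated", "description": "Handed off to a human",
     "true_description": "agent transfers or promises follow-up",
     "color": "orange", "include_in_reporting": True},
]
prompt = build_audit_prompt("User: my order arrived, thanks!", options)
```

Because the option set is fixed, every transcript gets scored against the same rubric, which is what makes the results repeatable across runs and models.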

Tip
If you want to retroactively run evaluations on old transcripts, you can do so by using the 'Bulk run evaluations' feature found in the transcripts tab.

You can now run Function and API tool steps asynchronously.

Async execution allows the conversation to continue immediately without waiting for the tool to complete. No outputs or variables from the step will be returned or updated.

This is ideal for non-blocking tasks such as logging, analytics, telemetry, or background reporting that don’t affect the conversation.

Note: This setting applies to the reference of the Function or API tool — either where the tool is attached to an agent or where it’s used as a step on the canvas. It is not part of the underlying API or function definition, which allows the same tool to be reused with different async behaviour throughout your project.
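Conceptually, this is fire-and-forget execution. A minimal Python sketch of the idea, using a background thread as a stand-in for the runtime's scheduling (none of this is Voiceflow's actual implementation):

```python
import threading
import time

def log_analytics(event):
    """Stand-in for a non-blocking tool call (logging, telemetry, etc.)."""
    time.sleep(0.1)  # simulate network latency
    print("logged:", event)

def run_tool(fn, payload, run_async):
    """Mimic the async setting: when run_async is True, the conversation
    continues immediately and no outputs or variable updates come back."""
    if run_async:
        threading.Thread(target=fn, args=(payload,), daemon=True).start()
        return None  # no outputs, no variable updates
    return fn(payload)  # synchronous path blocks until the tool finishes

result = run_tool(log_analytics, {"turn": 3}, run_async=True)
print("conversation continues, result =", result)  # -> result = None
```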

Tool messages

by Michael Hood

Tool messages let you define static messages that are surfaced to the user as a tool progresses through its lifecycle:

  1. Start — Message delivered when the tool is initiated
  2. Complete — Message delivered when the tool finishes successfully
  3. Failed — Message delivered if the tool encounters an error
  4. Delayed — Message delivered if the tool takes longer than a specified duration (default: 3000ms, configurable)

This provides clear, predictable feedback during tool execution, improving transparency and user trust—especially for long-running or failure-prone tools.
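The four lifecycle states above can be sketched as a wrapper around a tool call, with a timer driving the Delayed message. This is an illustrative sketch, not the runtime's code; the message keys and `send` callback are assumptions.

```python
import threading
import time

def run_with_messages(tool_fn, messages, delayed_after=3.0, send=print):
    """Surface static lifecycle messages around a tool call:
    start, then complete / failed, plus delayed if the tool runs long."""
    send(messages["start"])
    timer = threading.Timer(delayed_after, lambda: send(messages["delayed"]))
    timer.start()
    try:
        result = tool_fn()
    except Exception:
        timer.cancel()
        send(messages["failed"])
        raise
    timer.cancel()  # finished before the delay threshold fired
    send(messages["complete"])
    return result

msgs = {"start": "Looking that up...", "complete": "Done!",
        "failed": "Something went wrong.", "delayed": "Still working..."}
run_with_messages(lambda: time.sleep(0.05), msgs, delayed_after=3.0)
```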


GPT 5.2

by Michael Hood

Added global support for GPT 5.2

Your web widget now supports hands-free, real-time voice conversations. Enable it from the Widget tab for existing projects — it’s on by default for new ones.

Users can talk naturally, see transcripts stream in instantly, and get a frictionless voice-first experience. It also doubles as the perfect in-browser way to test your phone conversations—no dialing in, just open the widget and run the full voice flow instantly.


Native web search tool

by Michael Hood

We’ve shipped a native Web Search tool so your agents can look up real-time information on the web mid-conversation—no custom integrations required.

  • Toggle on the web search tool in any agent to answer questions that need live data (news, prices, schedules, etc.).
  • Configure search prompts and guardrails so the agent only pulls what you want it to.
  • Results are summarized and grounded back into the conversation for more accurate, up-to-date answers.

You can now connect your Telnyx account to import and manage phone numbers directly in Voiceflow, enabling Telnyx as your telephony provider for both inbound and outbound calls.


Added native support for DTMF keypad input in phone conversations. Users can now enter digits via their phone keypad, sending a DTMF trace to the runtime. Configure timeout and delimiters (#, *) to control when input is processed. See documentation here.

  • Keypad input is off by default and can be turned on from Settings/Behaviour/Voice.
  • When enabled in project settings, keypad input can be turned off at the step level via the "Listen for other triggers" toggle.
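The delimiter behavior described above can be sketched as a simple digit buffer: digits accumulate until a delimiter (# or *) arrives. This is an illustrative sketch; the real runtime also flushes on the configured timeout, which is simplified to "stream ended" here.

```python
def collect_dtmf(digits, delimiters=("#", "*"), max_digits=10):
    """Buffer keypad digits until a delimiter arrives or the buffer fills.

    `digits` stands in for the stream of DTMF key presses. A timeout
    would also flush the buffer in practice; here the end of the stream
    plays that role.
    """
    buffer = []
    for d in digits:
        if d in delimiters:
            return "".join(buffer)  # delimiter ends input immediately
        buffer.append(d)
        if len(buffer) >= max_digits:
            return "".join(buffer)  # safety cap on input length
    return "".join(buffer)  # stream ended (timeout case)

print(collect_dtmf("1234#"))  # -> "1234"
```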