Today we're introducing two powerful new capabilities in Voiceflow: Structured Outputs and Variable Pathing. These features expand the possibilities for working with data from large language models (LLMs) in your agents. Let's explore what they enable!

🎉 Structured Outputs

Structured Outputs let you define the format of the data you expect an LLM to return, giving you more control and predictability over the results.

  • In a prompt step, enable the new "JSON Output" option to specify the structure of the LLM's response.
  • Today, Structured Outputs support the following data types:
    • String
    • Number
    • Boolean
    • Integer
    • Enum
  • Support for arrays and nested objects is planned for the near future.
  • Structured Outputs are available with gpt-4o-mini and gpt-4o models.
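As a sketch of what a defined structure can express (the field names below are illustrative, not Voiceflow syntax), here is a shape built only from the supported types:

```typescript
// Illustrative only: a structure using each supported data type.
// "Priority" models an enum; "ticketCount" models an integer.
type Priority = "low" | "medium" | "high"; // enum

interface TicketTriage {
  summary: string;        // string
  sentimentScore: number; // number
  needsHuman: boolean;    // boolean
  ticketCount: number;    // integer (a whole number)
  priority: Priority;     // enum
}

// A response that conforms to the structure:
const example: TicketTriage = {
  summary: "Customer cannot reset their password",
  sentimentScore: -0.4,
  needsHuman: true,
  ticketCount: 1,
  priority: "high",
};
```

Because the model is constrained to this shape, downstream steps can rely on every field being present and correctly typed.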


💪 Variable Pathing

Variable Pathing provides a streamlined way to work with complex data structures in your Voiceflow project.

  • Store an entire object in a single variable, then access its properties using dot notation (e.g. user.name, user.email).
  • Capture Structured Output responses or API results as objects.
  • Use object properties directly in conditions, messages, and other steps.
  • Reduce the need for multiple variables to represent a single entity.
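Conceptually, dot-notation pathing works like the resolver sketched below (a simplified illustration, not Voiceflow's implementation):

```typescript
// Simplified illustration of dot-notation pathing: resolve a path like
// "address.city" against an object stored in a single variable.
type JsonObject = { [key: string]: unknown };

function resolvePath(obj: JsonObject, path: string): unknown {
  return path.split(".").reduce<unknown>(
    (current, key) =>
      current !== null && typeof current === "object"
        ? (current as JsonObject)[key]
        : undefined,
    obj
  );
}

// One variable holding a whole entity, e.g. captured from a
// Structured Output or an API response:
const user = {
  name: "Ada",
  email: "ada@example.com",
  address: { city: "London" },
};

resolvePath(user, "name");         // "Ada"
resolvePath(user, "address.city"); // "London"
```

One variable can thus stand in for an entire entity, with each property addressed exactly where it's needed.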

🍰 Bringing it All Together

Combining Structured Outputs and Variable Pathing opens up new design patterns for crafting agent experiences:

  • Define precise data requirements for LLMs to provide relevant information
  • Capture responses as feature-rich objects in a single step
  • Access and manipulate object properties throughout your project
  • Streamline your project's design while expanding its capabilities

We're excited to see the agent experiences you create with these new tools! Feel free to share your questions and feedback with us.

We're excited to announce the beta release of Voiceflow Telephony, bringing enterprise-grade voice capabilities to your conversational experiences. This release represents a significant milestone in our mission to provide comprehensive, low-latency voice solutions for businesses of all sizes.

Native Twilio Integration

We've integrated with Twilio to make phone-based interactions as simple as possible. The new integration allows you to:

  • Import existing Twilio phone numbers directly into Voiceflow
  • Associate phone numbers with specific agents
  • Configure separate numbers for development and production environments
  • Test different versions of your agent against different phone numbers

Setting up telephony is straightforward: simply connect your Twilio account with existing phone numbers, import them into Voiceflow, and assign them to your agents. Your voice experience will be live within minutes.

High-Performance Voice

Streaming Technology

We've built our telephony feature on top of our streaming API, delivering exceptional performance improvements:

  • Dramatically reduced response times
  • Near real-time agent reactions
  • Optimized voice processing pipeline

Speech Recognition

We've selected Deepgram to provide industry-leading Automatic Speech Recognition (ASR):

  • High-accuracy transcription
  • Low-latency processing
  • Support for over 20 languages

Advanced Voice Capabilities

Outbound Calling

We've introduced powerful outbound calling capabilities:

  • Programmatically initiate calls to any phone number
  • Test outbound calls directly from the Voiceflow interface
  • Integrate outbound calling into your existing workflows

Voice Technology Stack


Our comprehensive voice stack includes:

  • Premium text-to-speech voices from industry leaders, such as:
    • ElevenLabs
    • Rime
    • Google
  • Support for advanced telephony features through custom actions:
    • Call forwarding
    • DTMF handling
    • Interruption behavior

Voice Experience Configuration

We've exposed detailed configuration options to fine-tune your voice experiences:

Audio Settings

  • Background audio customization
  • Audio cue configuration

Interaction Parameters

  • Interruption threshold controls
  • Utterance end detection
  • Response timing optimization
  • User input acceptance timing

Beta Program Details

Access and Limitations

During the beta period, all users will have access to telephony features with the following concurrent call limits:

Plan         Concurrent Calls
Starter      1
Pro          5
Team         15
Enterprise   Custom

Coming Soon

  • Enhanced call analytics and reporting
  • Additional voice customization options

New AI-Native Webchat

by Zoran Slamkov

We're excited to announce a complete reimagining of the Voiceflow webchat experience. This new version introduces AI-native capabilities, enhanced customization options, and flexible deployment methods to help you create more engaging conversational experiences.

AI-Native

Our webchat has been rebuilt from the ground up to provide a more natural, AI-driven conversation experience:

  • Streaming Text Support: Experience real-time message generation with word-by-word streaming, creating a more engaging and dynamic conversation flow. Users can see responses being crafted in real time, similar to popular AI chat interfaces.

  • AI Disclaimers: Built-in support for displaying AI disclosure messages and customizable AI usage notifications to maintain transparency with your users.

Enhanced Customization

We've significantly expanded the customization capabilities to give you more control over your chat interface:

Interface Types

You can now choose from three distinct interface modes:

  • Widget: Traditional chat window that appears in the corner of your website
  • Popover: Full-screen chat experience that overlays your content
  • Embed: Seamlessly integrate the chat interface directly into your webpage layout

Visual Customization

The new version introduces comprehensive styling options:

  • Color System:

    • Expanded color palette support with primary, secondary, and accent color definitions
  • Typography:

    • Custom font family support
  • Launcher Variations:

    • Classic bubble launcher with customizable icons
    • Button-style launcher with text support

Important Notes

  • Chat Persistence: Now configured through the snippet rather than UI settings.
  • Custom CSS: Maintained compatibility with most existing class names.
  • Proactive Messages: Temporarily unavailable in this release, with support coming soon.

You can find more details here.

Migration

For detailed instructions on migrating from the legacy webchat, please refer to our Migration Guide.

Function Libraries

Integrate your agent with your favorite tools using our new function libraries. Access pre-built functions for popular platforms like Hubspot, Intercom, Shopify, Zendesk, and Zapier. These functions, sourced from Voiceflow and the community, make it easier than ever to connect your agent with your existing workflows. Showcase readily available integrations to your team and clients.


Transcript Review Hotkeys

Reviewing transcripts just got faster and more efficient. You can now press R to mark a transcript as Reviewed or S to Save it for Later. These handy shortcut keys are perfect for power users who review a high volume of transcripts.

Project Starter Templates

Getting started is now a breeze. When creating a new project, choose from a set of templates tailored for common use cases like customer support, ecommerce support, and scheduling. These templates help you hit the ground running without the need for extensive setup and customization. Ideal for new users and busy teams.


Expanded Voice Support

We now offer an even greater selection of natural-sounding AI voices. We've added support for a variety of new options from ElevenLabs and Rime. Please note that using these voices consumes AI tokens. Check them out for your projects that could benefit from additional voice choices.

This is an important update to our platform. As part of our ongoing commitment to enhancing your experience and providing the most advanced tools for AI agent development, we have made the decision to deprecate the AI Response and AI Set steps.

What does this mean for you?

  • On February 4th, 2025, the AI Response and AI Set steps will be disabled in the step toolbar of the Voiceflow interface to encourage users to move away from these deprecated steps. Existing steps will remain untouched and will continue working as normal.
  • On June 3rd, 2025, these steps will no longer be supported. Any existing projects using these steps will need to be migrated to the new Prompt and Set steps. We will send additional communication in advance of the sunset date.

We understand that this change may require some adjustments to your workflow, but rest assured that we are here to support you throughout this transition. The new Prompt and Set steps, along with our powerful Prompt CMS, offer even more flexibility and control over your conversational experiences.

Some key benefits of the new approach include:

  • Centralized prompt management: The Prompt CMS serves as a hub for all your prompts, making it easy to create, edit, and reuse them across your projects.
  • Advanced prompt configuration: Leverage system prompts, message pairs, conversation history, and variables to craft highly contextual and dynamic responses.
  • Seamless integration: The Prompt step allows you to bring your prompts directly into your conversation flows, while the Set step lets you assign prompt outputs to variables for enhanced logic and control.
  • Continued innovation: We are committed to expanding the capabilities of these new features, with exciting updates planned for the near future.

For those using the Knowledge Base, we recommend transitioning to the KB Search step. This step allows you to query your Knowledge Base and feed the results into a prompt, enabling even more intelligent and relevant responses.

To help guide you through migrating from the AI steps to the Prompt step, check our walkthrough below:

We value your feedback and are here to address any questions or concerns you may have. Our team is dedicated to ensuring a smooth transition and helping you unlock the full potential of these powerful new features.

Thank you for your understanding and continued support. We are excited about the future of conversational AI development on Voiceflow and look forward to seeing the incredible experiences you will create with these enhanced capabilities.

Best regards,

Voiceflow

API Step V2

by Zoran Slamkov

We're introducing a new API step with a cleaner, more intuitive interface for configuring your API requests. While the existing API step remains fully functional, we recommend trying out the new version at your earliest convenience.

Project Data Changes

For users working with our API programmatically, we've included the new step type definition below:

type CodeText = (
  | string
  | {
      variableID: string;
    }
  | {
      entityID: string;
    }
)[];

interface MarkupSpan {
  text: Markup;
  attributes?: Record<string, unknown> | undefined;
}

type Markup = (
  | string
  | {
      variableID: string;
    }
  | {
      entityID: string;
    }
  | MarkupSpan
)[];

type ApiV2Node = {
  type: "api-v2";
  data: {
    name: string;
    url?: Markup | null | undefined;
    headers?: Array<{ id: string; key: string; value: Markup }> | undefined;
    httpMethod: "get" | "post" | "put" | "patch" | "delete";
    queryParameters?: Array<{ id: string; key: string; value: Markup }> | undefined;
    responseMappings?: Array<{ id: string; path: string; variableID: string }> | undefined;
    body?:
      | {
          type: "form-data";
          formData: Array<{ id: string; key: string; value: Markup }>;
        }
      | {
          type: "params";
          params: Array<{ id: string; key: string; value: Markup }>;
        }
      | {
          type: "raw-input";
          content: CodeText;
        }
      | null
      | undefined;
    fallback?:
      | {
          path: boolean;
          pathLabel: string;
        }
      | null
      | undefined;
    portsV2: {
      byKey: Record<
        string,
        {
          type: string;
          id: string;
          target: string | null;
        }
      >;
    };
  };
  nodeID: string;
  coords?: [number, number] | undefined;
};
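For illustration, a minimal node conforming to the ApiV2Node definition above might look like this (all IDs and the URL are placeholders, not real project data):

```typescript
// Placeholder IDs and URL -- not real project data.
const apiNode = {
  type: "api-v2" as const,
  data: {
    name: "Fetch user",
    // Markup mixing a literal string with a variable reference:
    url: ["https://api.example.com/users/", { variableID: "var-user-id" }],
    httpMethod: "get" as const,
    headers: [{ id: "h1", key: "Accept", value: ["application/json"] }],
    // Map a field of the JSON response into a project variable:
    responseMappings: [
      { id: "m1", path: "data.email", variableID: "var-user-email" },
    ],
    fallback: { path: true, pathLabel: "Request failed" },
    portsV2: {
      byKey: {
        next: { type: "next", id: "p1", target: null },
      },
    },
  },
  nodeID: "node-1",
};
```

Note how Markup fields interleave literal strings with `{ variableID }` references, so variables can be spliced into URLs, headers, and body values.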

Added

We've added support for Anthropic's Claude Haiku 3.5 model.

You can now select Claude Haiku 3.5 when creating or editing your agents, allowing you to:

  • Benefit from improved performance and enhanced conversational abilities
  • Create more engaging and human-like interactions

Updated

To streamline model selection and encourage the use of the latest models, we've hidden Claude Sonnet 3.0 and Haiku 3.0 from the model dropdown.

Don't worry - this change won't affect any existing agents using these models. You can continue to use and edit them without interruption.

Added

  • Document Metadata Update - New PATCH endpoint /v1/knowledge-base/docs/{documentID} to update metadata for entire documents

    • Updates metadata across all chunks simultaneously
    • Note: Not supported for documents of type 'table'
  • Chunk Metadata Update - New PATCH endpoint /v1/knowledge-base/docs/{documentID}/chunk/{chunkID} to update metadata for specific chunks

    • Allows targeted updates to individual chunk metadata
    • Supports all document types, including tables
    • Other chunks in the document remain unchanged

Examples

Check our API documentation for detailed request/response examples and metadata formatting guidelines.
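As a sketch, a chunk-level metadata update could be built like this; the Authorization header format and the request body shape are assumptions here, so confirm both against the API documentation:

```typescript
// Sketch of building (not sending) a chunk-level metadata PATCH request.
// The Authorization header format and the body shape are assumptions --
// confirm both against the API documentation.
function chunkMetadataRequest(
  documentID: string,
  chunkID: string,
  metadata: Record<string, string>
) {
  return {
    url: `https://api.voiceflow.com/v1/knowledge-base/docs/${documentID}/chunk/${chunkID}`,
    method: "PATCH" as const,
    headers: {
      Authorization: "YOUR_API_KEY", // placeholder
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ metadata }), // assumed body shape
  };
}

// Tag a single chunk with a topic; other chunks stay unchanged:
const req = chunkMetadataRequest("doc-123", "chunk-456", { topic: "billing" });
// fetch(req.url, { method: req.method, headers: req.headers, body: req.body })
// would send it.
```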

We're excited to announce that Condition steps now support prompts as a condition type, allowing you to use AI responses to determine conversation paths.

What's New

  • Prompt Conditions: Condition steps can now evaluate prompt responses to intelligently branch conversations down different paths based on AI analysis.
  • Message Variant Conditions: Message steps can now use prompt responses to select the most appropriate response text, helping your agent say the right thing at the right time.
  • Seamless Prompt Integration: Choose from your existing prompts in the Prompt CMS or create new ones directly within the Condition or Message step.

Getting Started

For Condition Steps:

  1. Create or select a Condition step
  2. Choose "Prompt" as your condition type
  3. Select or create a prompt
  4. Add paths and define evaluation criteria

For Message Variants:

  1. Add variants to your Message step
  2. Select a prompt to determine variant selection
  3. Define your variant conditions
  4. Test your dynamic messaging

Learn more

Read more about the options available with the Condition step and Messages step.

Simplify your workflow and make managing your agents even easier with more sharing options.

Import and Export Variables and Entities in the CMS

You can now import and export variables and entities directly in your Agent CMS, saving you time and effort when setting up and sharing your agents.

  • Quickly populate your variables and entities by importing exported JSON files
  • Easily create new versions of variables and entities by importing new files
  • Export your variables and entities as JSON files for backup or sharing
  • Save time by bulk importing and exporting variables and entities

These import and export features are particularly useful when you're managing a large number of variables or entities, creating new versions frequently, or collaborating with team members and sharing your work with others.

These new import and export features in the Agent CMS will help you set up, manage, and share your agents more efficiently, allowing you to focus on creating engaging conversational experiences.