At Voiceflow, our mission is ambitious: to provide you with the best agent creation platform in the world. We believe in empowering creators like you to build advanced conversational AI agents without limits.

As we’ve integrated more powerful language models into Voiceflow, we’ve seen your projects become more innovative and dynamic. Your agents are smarter, more engaging, and pushing the boundaries of what’s possible in conversational AI.

But we don’t want you to slow down—we want to give you more “fuel” to keep that engine running at full speed.

That’s why we’re excited to announce that we’ve doubled the included monthly AI token allotments for our Pro, Team, and Enterprise plans!

Here are the new allotments:

  • Pro Plan: Now includes 4 million AI tokens per month.
  • Team Plan: Now includes 20 million AI tokens per month.
  • Enterprise Plan: Now includes 200 million AI tokens per month.

Need more? Additional tokens are available for purchase across all plans.

This upgrade is about more than just numbers. It’s about supporting your vision and giving you the resources to bring your most ambitious ideas to life. Whether you’re developing complex conversational flows, experimenting with new AI features, or scaling up your existing projects, we’ve got you covered.

We’re committed to making Voiceflow not just a tool, but a platform that grows with you—a place where your creativity has no bounds.

Thank you for being an essential part of our journey. We can’t wait to see what incredible things you’ll build with this extra boost.

We're excited to announce the release of our new Streaming API endpoint, designed to enhance real-time interactions with your Voiceflow agents. This feature allows you to receive server-sent events (SSE) in real time using the text/event-stream format, providing immediate responses and a smoother conversational experience for your users.

Key Highlights

  • Real-Time Event Streaming: Receive immediate trace events as your Voiceflow project progresses, allowing for dynamic and responsive conversations.

  • Improved User Experience: Drastically reduce latency by sending information to users as soon as it's ready, rather than waiting for the entire turn to finish.

  • Support for Long-Running Operations: Break up long-running steps (e.g., API calls, AI responses, JavaScript functions) by sending immediate feedback to the user while processing continues in the background.

  • Streaming LLM Responses: With the completion_events query parameter set to true, stream large language model (LLM) responses (e.g., from Response AI or Prompt steps) as they are generated, providing instant feedback to users.

How to Use the Streaming API

Endpoint

POST /v2/project/{projectID}/user/{userID}/interact/stream

Required Headers

  • Accept: text/event-stream
  • Authorization: {Your Voiceflow API Key}
  • Content-Type: application/json

Query Parameters

  • completion_events (optional): Set to true to enable streaming of LLM responses as they are generated.

Example Request

curl --request POST \
     --url https://general-runtime.voiceflow.com/v2/project/{projectID}/user/{userID}/interact/stream \
     --header 'Accept: text/event-stream' \
     --header 'Authorization: {Your Voiceflow API Key}' \
     --header 'Content-Type: application/json' \
     --data '{
       "action": {
         "type": "launch"
       }
     }'

Example Response

event: trace
id: 1
data: {
  "type": "text",
  "payload": {
    "message": "Give me a moment...",
  },
  "time": 1725899197143
}

event: trace
id: 2
data: {
  "type": "debug",
  "payload": {
    "type": "api",
    "message": "API call successfully triggered"
  },
  "time": 1725899197146
}

event: trace
id: 3
data: {
  "type": "text",
  "payload": {
    "message": "Got it, your flight is booked for June 2nd, from London to Sydney.",
  },
  "time": 1725899197148
}

event: end
id: 4

Streaming LLM Responses with completion_events

By setting completion_events=true, you can stream responses from LLMs token by token as they are generated. This is particularly useful for steps like Response AI or Prompt, where responses may be lengthy.
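
For example, you can reuse the earlier request with the query parameter appended (same placeholders, headers, and body as before):

curl --request POST \
     --url 'https://general-runtime.voiceflow.com/v2/project/{projectID}/user/{userID}/interact/stream?completion_events=true' \
     --header 'Accept: text/event-stream' \
     --header 'Authorization: {Your Voiceflow API Key}' \
     --header 'Content-Type: application/json' \
     --data '{
       "action": {
         "type": "launch"
       }
     }'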

Example Response with completion_events=true

event: trace
id: 1
data: {
  "type": "completion",
  "payload": {
    "state": "start"
  },
  "time": 1725899197143
}

event: trace
id: 2
data: {
  "type": "completion",
  "payload": {
    "state": "content",
    "content": "Welcome to our service. How can I help you today? Perh"
  },
  "time": 1725899197144
}

... [additional content events] ...

event: trace
id: 6
data: {
  "type": "completion",
  "payload": {
    "state": "end"
  },
  "time": 1725899197148
}

event: end
id: 7

Getting Started

  • Find Your projectID: Locate your projectID in the agent's settings page within the Voiceflow Creator. Note that this is not the same as the ID in the URL creator.voiceflow.com/project/.../.


  • Include Your API Key: Ensure you include your Voiceflow API Key in the Authorization header of your requests.

Notes

  • Compatibility: This new streaming endpoint complements the existing interact endpoint and is designed to enhance real-time communication scenarios.

  • Deterministic and Streamed Messages: When using completion_events, you may receive a mix of streamed and fully completed messages. Consider implementing logic in your client application to handle these different message types for a seamless user experience.

  • Latency Reduction: By streaming events as they occur, you can significantly reduce perceived latency, keeping users engaged and informed throughout their interaction.
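
To tie these pieces together, here is a minimal streaming-client sketch in TypeScript (Node 18+ or a modern browser). The endpoint, headers, and trace shapes come from the examples above; the buffering, parsing, and handler logic below is illustrative rather than a reference implementation, and the stdout/console handlers are placeholders:

async function streamInteraction(projectID: string, userID: string, apiKey: string): Promise<void> {
  const res = await fetch(
    `https://general-runtime.voiceflow.com/v2/project/${projectID}/user/${userID}/interact/stream?completion_events=true`,
    {
      method: "POST",
      headers: {
        Accept: "text/event-stream",
        Authorization: apiKey,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ action: { type: "launch" } }),
    },
  );
  if (!res.ok || !res.body) throw new Error(`Request failed: ${res.status}`);

  const reader = res.body.pipeThrough(new TextDecoderStream()).getReader();
  let buffer = "";

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += value;

    // SSE events are separated by a blank line; multi-line data arrives as
    // consecutive "data:" lines that are joined back together before parsing.
    let boundary: number;
    while ((boundary = buffer.indexOf("\n\n")) !== -1) {
      const rawEvent = buffer.slice(0, boundary);
      buffer = buffer.slice(boundary + 2);

      let eventName = "message";
      const dataLines: string[] = [];
      for (const line of rawEvent.split("\n")) {
        if (line.startsWith("event:")) eventName = line.slice(6).trim();
        else if (line.startsWith("data:")) dataLines.push(line.slice(5));
      }

      if (eventName === "end") return;
      if (eventName !== "trace" || dataLines.length === 0) continue;

      const trace = JSON.parse(dataLines.join("\n"));
      if (trace.type === "completion" && trace.payload.state === "content") {
        // Streamed LLM tokens: append to the in-progress message.
        process.stdout.write(trace.payload.content);
      } else if (trace.type === "text") {
        // Fully formed message: render it in one piece.
        console.log(trace.payload.message);
      }
    }
  }
}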


We believe this new Streaming API will greatly enhance the interactivity and responsiveness of your Voiceflow agents. We can't wait to see how you leverage this new capability in your projects!

For any questions or support, please reach out to our support team or visit our community forums.

We are pleased to announce the launch of the Smart Chunking beta program. Over the coming weeks, we will be testing and validating several LLM-based strategies to enhance the quality of your knowledge base chunks. Better chunks lead to better responses and higher-quality AI agents.

First Strategy: HTML to Markdown Conversion

Our initial strategy focuses on automatic HTML to Markdown conversion. Many users import content from web pages—either directly using our web scraper or via APIs from services like Zendesk Help Center or Kustomer. This content often contains raw HTML, which can be noisy and degrade chunk performance and response quality.

By converting HTML to Markdown automatically, we aim to improve the cleanliness and readability of your content. This conversion is supported for all data sources you can upload into Voiceflow.

Join the Beta Program

If you're interested in participating in the Smart Chunking beta, please sign up via the waitlist link. We will be granting access to participants over the next few weeks.

Announcing the new Secrets Manager in Voiceflow! This new feature enables you to securely store and manage a variety of sensitive information within your AI agents, including API keys, database credentials, encryption keys, and more.

What's New

  • Secure Storage of Sensitive Data

    • Safely store passwords and credentials used to access essential services and functions within your software.
    • Utilize AES-256 GCM encryption to ensure confidentiality and integrity of your secrets.
  • Visibility Controls

    • Masked Secrets: Values are hidden but can be temporarily revealed when needed.
    • Restricted Secrets: Values remain hidden and cannot be revealed after creation, enhancing security for highly sensitive data.
  • Environment Overrides

    • Specify different secret values for Development and Production environments.
    • Seamlessly manage environment-specific configurations without altering your agent's logic.
  • Integration with Function and API Steps

    • Easily insert secrets into Function and API steps without hardcoding sensitive information.
    • Access secrets directly from the canvas by typing { and selecting from the Secrets tab.
  • Secure Project Sharing

    • When duplicating or exporting projects, secret values are excluded to maintain security.
    • Re-add secret values as needed in the new project instance.

Benefits

  • Enhanced Security

    • Protects sensitive information using industry-standard encryption.
    • Prevents unauthorized access through robust key management and encryption practices.
  • Simplified Management

    • Centralizes the handling of all your secrets within the Secrets Manager.
    • Reduces the risk of accidental exposure by avoiding hardcoded credentials.
  • Operational Flexibility

    • Environment overrides allow for different configurations across development and production stages.
    • Streamlines the deployment process by separating environment-specific data.

Getting Started

  1. Access the Secrets Manager

    • Navigate to Agent Settings in the left sidebar.
    • Click on the Secrets tab to open the Secrets Manager interface.
  2. Create a New Secret

    • Click New Secret in the top-right corner.
    • Enter the Name, Value, and choose the Visibility setting.
    • Click Create Secret to add it to your Secrets Manager.
  3. Use Secrets in Your Agent

    • In a Function or API step, type { to open the variable selector.
    • Switch to the Secrets tab and select the desired secret (see the sketch after this list).
  4. Set Up Environment Overrides

    • Go to the Environments tab in Agent Settings.
    • Click Override Secrets next to the desired environment.
    • Enter environment-specific values for your secrets and click Save.
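
To make step 3 concrete, here is a minimal sketch of a Function step that reads a secret. It assumes the standard Function signature and that the secret has been mapped to a Function input variable named apiKey; the endpoint URL and output variable are hypothetical:

export default async function main(args) {
  // `apiKey` is assumed to be an input variable mapped to a secret via the { selector.
  const { apiKey } = args.inputVars;

  // Hypothetical endpoint; substitute the service your agent actually calls.
  const response = await fetch('https://api.example.com/v1/orders', {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  const body = await response.json();

  return {
    outputVars: { orderCount: String(body.orders?.length ?? 0) },
    next: { path: 'success' },
  };
}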

Learn More

For detailed instructions and best practices, please refer to our updated User Guide for the Secrets Manager.


Updated Button Step

What's Changed?

Simplified and Improved UI

  • Streamlined Button Creation: We've made it quicker and more intuitive to add buttons. Simply click the "+" button next to the Buttons label in the editor to add a new button.
  • Clean Editor Layout: The editor has been decluttered to focus on what's important, allowing you to configure your buttons without any unnecessary distractions.

How to Start Using the Redesigned Button Step

The redesigned Button Step is available now in your step toolbar, labeled as Button. To take advantage of the new user interface:

  1. Replace Existing Steps: You can replace your existing Button steps with the redesigned version in your projects to benefit from the improved UI.
  2. Update Your Conversation Paths: Reconnect any paths as needed using the ports associated with your buttons.
  3. Review Settings: Check your No Match, No Reply, and Listen for Other Triggers settings to ensure they are configured as desired.

Note: Your existing Button steps will continue to function as before unless you decide to update them. This means your current projects won't be affected until you're ready to make the switch.

We've been working on some significant updates to make your conversational agents even better. Over the next week, we'll be rolling out new features that leverage advanced AI capabilities to improve how your projects understand and interact with users.

What's New?

Transition to AI-Powered Entity Extraction

We've moved from traditional Natural Language Understanding (NLU) to using Large Language Models (LLMs) for entity extraction. This change is all about making your agents more accurate and adaptable when interpreting user inputs. By embracing AI-powered entity extraction, your agents can now handle a wider range of conversational scenarios with greater reliability.

New AI-Powered Steps with Enhanced Entity Collection: Choice, Capture, and Trigger

To fully leverage the improved entity extraction capabilities, we've updated some core steps in Voiceflow. The Choice, Capture, and Trigger steps have all been upgraded to natively support the new AI-powered entity collection features. This means these steps are now better equipped to collect necessary information from users, making your conversational agents more effective and responsive.

Activating the New Steps in Your Project

The new Choice and Capture steps are available in the step toolbar. To start using the new AI-powered features, you'll need to replace your existing steps with these updated versions in your projects. Your existing Choice and Capture steps will remain unchanged, so your current setups won't be affected unless you choose to update them.

Introducing Rules and Exit Scenarios

We've added two new features to give you more control over how your agents handle user inputs:

  • Rules: You can now define specific criteria that the user's input must meet. This helps ensure that the information collected is valid and meets your requirements.
  • Exit Scenarios: These allow your agent to gracefully handle situations where the user can't provide the necessary information. You can set up alternative paths or responses for these cases, improving the overall user experience.

Automatic Reprompt Feature

We're also introducing an Automatic Reprompt feature. If a user provides incomplete or incorrect information, your agent will now generate personalized prompts to help them provide the missing details. This makes interactions smoother and less frustrating for your users.

Why This Matters

We know that accurately understanding user inputs is crucial for effective conversations. By moving to AI-powered entity extraction, we're providing you with tools that offer greater accuracy and adaptability. This helps your agents handle a wider range of conversational scenarios, providing better experiences for your users.

What You Need to Do

To start benefiting from these new features:

  1. Update Your Steps: Replace your existing Choice and Capture steps with the new AI-powered versions available in the step toolbar.

  2. Configure Rules and Exit Scenarios: Use the new options within these steps to define rules and exit scenarios that suit your project's needs.

  3. Test Your Agent: After making changes, be sure to test your agent thoroughly to ensure everything works as expected.

Rollout Details

These updates will be introduced to all users over the next week. If you don't see them in your workspace right away, they'll be available soon.

We're Here to Help

As always, we're committed to supporting you. If you have any questions or need assistance with the new features, please don't hesitate to reach out to our support team or visit our Discord community.


Thank you for being part of the Voiceflow community. We're excited to see how you'll use these new capabilities to create even more engaging and effective conversational agents.

New Multimodal Projects

by Zoran Slamkov

It's now easier than ever to create and manage multimodal agents that support both chat and voice interactions.

What’s New

Multimodal Projects

Single Project Type for Chat and Voice: You no longer need to choose between a chat project and a voice project. All new projects (and existing chat projects) will support both modalities by default, streamlining your workflow and expanding your agent’s capabilities.

Voice Features in Existing Chat Projects

Immediate Access to Voice Features: Existing chat projects now have built-in voice capabilities. You can start leveraging voice input and output options in your conversations without creating a new project.

Default Text-to-Speech (TTS) Settings: In the agent settings, you can now select a default TTS technology for your agent.

Voice Prototyping Tools

Voice Prototyping: The designer prototype tool and web widget now include voice input and output support. You can test voice and/or chat interactions wherever you prototype, making it easier to refine your agent’s conversational flow.

Web Chat Integration

Optional Voice Interface: In the web chat integration settings, there is a new toggle to enable voice input and output. This allows your web chat experiences to include voice interactions. Currently, voice input (ASR) for hosted deployments is limited to Chrome browsers; we plan to expand support in the future.

Impact on Existing Voice Projects

No Changes to Existing Voice Projects: Your current voice projects will remain unchanged and fully functional. However, the option to create new voice-only projects from the dashboard has been removed. All new projects will support both chat and voice modalities.

Resources

Feedback: We value your input. If you have any questions or encounter issues, please reach out to our support team.

We are introducing new step types to diagrams[id].nodes to enhance the functionality and flexibility of your AI agents.

Affected Property: diagrams[id].nodes

What's Changing:

  • New Node Types Added:

    1. ButtonsV2 Node (buttons-v2)

      • Description: Enhances button interactions with more dynamic features.

      • Type Definition:

        type ButtonsV2Node = {
          type: "buttons-v2";
          data: {
            portsV2: {
              byKey: Record<string, {
                id: string;
                type: string;
                target: string | null;
                data?: {
                  type?: "CURVED" | "STRAIGHT" | null;
                  color?: string | null;
                  points?: {
                    point: [number, number];
                    toTop?: boolean | null;
                    locked?: boolean | null;
                    reversed?: boolean | null;
                    allowedToTop?: boolean | null;
                  }[] | null;
                  caption?: {
                    value: string;
                    width: number;
                    height: number;
                  } | null;
                } | null;
              }>;
            };
            listenForOtherTriggers: boolean;
            items: {
              id: string;
              label: Markup;
            }[];
            name?: string;
            noReply?: {
              repromptID: string | null;
              path?: boolean | null;
              pathLabel?: string | null;
              inactivityTime?: number | null;
            } | null;
            noMatch?: {
              repromptID: string | null;
              path?: boolean | null;
              pathLabel?: string | null;
            } | null;
          };
          nodeID: string;
          coords?: [number, number];
        };
        
        
    2. CaptureV3 Node (capture-v3)

      • Description: Provides advanced data capture capabilities, including entity capture and automatic reprompting.

      • Type Definition:

        type CaptureV3Node = {
          type: "capture-v3";
          data: {
            portsV2: {
              byKey: Record<string, {
                id: string;
                type: string;
                target: string | null;
                data?: {
                  type?: "CURVED" | "STRAIGHT" | null;
                  color?: string | null;
                  points?: {
                    point: [number, number];
                    toTop?: boolean | null;
                    locked?: boolean | null;
                    reversed?: boolean | null;
                    allowedToTop?: boolean | null;
                  }[] | null;
                  caption?: {
                    value: string;
                    width: number;
                    height: number;
                  } | null;
                } | null;
              }>;
            };
            listenForOtherTriggers: boolean;
            capture:
              | {
                  type: "entity";
                  items: {
                    id: string;
                    entityID: string;
                    path?: boolean | null;
                    pathLabel?: string | null;
                    repromptID?: string | null;
                    placeholder?: string | null;
                  }[];
                  automaticReprompt: {
                    params: {
                      model?:
                        | "text-davinci-003"
                        | "gpt-3.5-turbo-1106"
                        | "gpt-3.5-turbo"
                        | "gpt-4"
                        | "gpt-4-turbo"
                        | "gpt-4o"
                        | "gpt-4o-mini"
                        | "claude-v1"
                        | "claude-v2"
                        | "claude-3-haiku"
                        | "claude-3-sonnet"
                        | "claude-3.5-sonnet"
                        | "claude-3-opus"
                        | "claude-instant-v1"
                        | "gemini-pro-1.5";
                      system?: string;
                      maxTokens?: number;
                      temperature?: number;
                    } | null;
                    exitScenario?: {
                      items: {
                        id: string;
                        text: string;
                      }[];
                      path?: boolean | null;
                      pathLabel?: string | null;
                    } | null;
                  } | null;
                  rules?: {
                    id: string;
                    text: string;
                  }[];
                }
              | {
                  type: "user-reply";
                  variableID?: string | null;
                };
            name?: string;
            noReply?: {
              repromptID: string | null;
              path?: boolean | null;
              pathLabel?: string | null;
              inactivityTime?: number | null;
            } | null;
          };
          nodeID: string;
          coords?: [number, number];
        };
        
        
    3. ChoiceV2 Node (choice-v2)

      • Description: Enhances user choice interactions with improved intent handling and automatic reprompt options.

      • Type Definition:

        type ChoiceV2Node = {
          type: "choice-v2";
          data: {
            portsV2: {
              byKey: Record<string, {
                id: string;
                type: string;
                target: string | null;
                data?: {
                  type?: "CURVED" | "STRAIGHT" | null;
                  color?: string | null;
                  points?: {
                    point: [number, number];
                    toTop?: boolean | null;
                    locked?: boolean | null;
                    reversed?: boolean | null;
                    allowedToTop?: boolean | null;
                  }[] | null;
                  caption?: {
                    value: string;
                    width: number;
                    height: number;
                  } | null;
                } | null;
              }>;
            };
            listenForOtherTriggers: boolean;
            items: {
              id: string;
              intentID: string;
              rules?: {
                id: string;
                text: string;
              }[];
              button?: {
                label: Markup;
              } | null;
              automaticReprompt?: {
                exitScenario?: {
                  items: {
                    id: string;
                    text: string;
                  }[];
                  path?: boolean | null;
                  pathLabel?: string | null;
                } | null;
              } | null;
            }[];
            name?: string;
            noReply?: {
              repromptID: string | null;
              path?: boolean | null;
              pathLabel?: string | null;
              inactivityTime?: number | null;
            } | null;
            noMatch?: {
              repromptID: string | null;
              path?: boolean | null;
              pathLabel?: string | null;
            } | null;
          };
          nodeID: string;
          coords?: [number, number];
        };
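
To illustrate these shapes, here is a hypothetical buttons-v2 node that type-checks against the first definition above. All IDs, keys, labels, and coordinates are made up, and Markup (referenced by label) is the array of strings and variable/entity references shown in the condition-v3 preview later in these notes:

const exampleButtonsV2: ButtonsV2Node = {
  type: "buttons-v2",
  nodeID: "node-1", // hypothetical
  coords: [120, 240],
  data: {
    name: "Main menu",
    listenForOtherTriggers: false,
    items: [
      { id: "button-yes", label: ["Yes, book it"] },
      { id: "button-no", label: ["No, cancel"] },
    ],
    portsV2: {
      byKey: {
        "button-yes": { id: "port-1", type: "button", target: "node-2" },
        "button-no": { id: "port-2", type: "button", target: null },
      },
    },
    noMatch: { repromptID: null, path: true, pathLabel: "No match" },
    noReply: null,
  },
};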
        
        

What This Means for You:

  • Enhanced Capabilities: The new node types offer advanced features, allowing for more complex and dynamic conversational flows.
  • Existing Nodes Remain Unchanged: Your current diagrams and nodes will continue to function as before.
  • New Features Available in New Steps: Only new instances of these steps created after the update will utilize the updated formats and capabilities.

Action Required:

  • Update Integrations: If you have custom integrations that interact with diagrams[id].nodes, plan to update them to accommodate these new node types when creating new steps.

Condition Step Update

by Zoran Slamkov

What's New:

  • Redesigned UI: The Condition Step now features an updated interface for easier navigation and setup of conditional paths.
  • Flexible Logic Configuration: Users can now choose between a Condition Builder (for non-technical users) or Expression (for custom JavaScript logic) to create paths based on variables, values, logic groups, or expressions.
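
For example, an Expression is standard JavaScript that evaluates to true or false. The variable names below are hypothetical, and the exact way variables are referenced may differ in your workspace:

// Route down this path for high-value, gold-tier customers.
cart_total > 100 && user_tier === 'gold'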

Only new Condition steps created after this update will use the new version; existing Condition steps will remain untouched.

We’re excited to announce the release of the KB Search Step, a new feature designed to simplify querying data from your Knowledge Base directly within Voiceflow. This step streamlines the process of fetching data chunks, providing you with greater control and efficiency in your conversational workflows.


What’s New:

  • Native KB Querying: Easily query your Knowledge Base without the need for manual API configurations. Simply input a question or variable and retrieve relevant data chunks directly in your workflow.

  • Customizable Chunk Retrieval: Use the chunk limit slider to specify the number of data chunks to be returned, ranging from 1 to 10. Fine-tune responses by adjusting this setting to suit your agent’s needs.

  • No Match Handling: Set a minimum chunk score for chunk matching. If no data meets this threshold, the workflow can automatically follow a customizable Not Found path, ensuring a seamless user experience even when no exact match is found.

  • Enhanced Control Over Prompts: Fetched data chunks are automatically converted into a string format, making them easy to incorporate into text-based prompts and other steps. You have the flexibility to control how this data is used, enhancing the quality and relevance of your agent's responses.

  • Testing Experience: Test your queries directly within the editor. View the returned chunks and their scores for insight into optimizing your knowledge base.

How It Benefits You:

  • Improved Efficiency: Eliminate the complexity of setting up manual API calls. The KB Search Step simplifies querying your Knowledge Base, saving you time and effort.

  • Greater Flexibility: Control how many data chunks are returned and set minimum chunk scores to ensure the most relevant information is fetched for your agents.

  • Enhanced Workflow Integration: Automatically convert and save data into variables, ready for use in subsequent steps within your workflow. This streamlines the process of integrating Knowledge Base data into your conversational designs.

How to Get Started:

To start using the KB Search Step, simply drag the step onto your canvas from the Developer section in the step toolbar. Configure your query question, set your chunk limit, and map the data to a variable—all in less than a minute!

Learn More:

For detailed instructions on how to use the KB Search Step, check out our updated User Guide.

And more exciting updates

  • Token Multiplier Reduction:

    • Tokens are now more affordable, with reductions of roughly 50% or more on many supported models. For example, a 1,000-token GPT-4o response now counts as 2,500 tokens against your allotment (1,000 × 2.5) instead of 6,000. Here are the new multipliers:
      • GPT-3.5-Turbo: Reduced from 0.6x to 0.25x
      • GPT-4 Turbo: Reduced from 12x to 5x
      • GPT-4: Reduced from 25x to 14x
      • GPT-4o: Reduced from 6x to 2.5x
      • GPT-4o mini: Reduced from 0.2x to 0.08x
      • Claude Instant 1.2: Reduced from 1x to 0.4x
      • Claude 1: Reduced from 10x to 4x
      • Claude 2: Reduced from 10x to 4x
      • Claude Haiku: Reduced from 0.5x to 0.15x
      • Claude Sonnet 3: Reduced from 5x to 1.75x
      • Claude Sonnet 3.5: Reduced from 5x to 1.75x
      • Claude Opus: Reduced from 20x to 8.5x
      • Gemini: Reduced from 8x to 3.5x
  • Billing Email Setting:

    • You can now set a Billing Email on your account from the Billing page on the dashboard. This allows all billing-related emails, such as invoices and payment-related notifications, to be routed to the appropriate contact.

We are introducing a new version of the Condition step type to enhance condition handling in your AI agents.

Affected Property: diagrams[id].nodes

What's Changing:

  • A new condition-v3 node type with an updated data structure will be added for greater flexibility and expressiveness.

New Format Preview:

type CodeText = (string | { variableID: string } | { entityID: string })[];

interface MarkupSpan {
  text: Markup;
  attributes?: Record<string, unknown>;
}

type Markup = (string | { variableID: string } | { entityID: string } | MarkupSpan)[];

type ConditionV3Node = {
  type: "condition-v3";
  data: {
    portsV2: {
      byKey: Record<string, {
        type: string;
        id: string;
        target: string | null;
      }>;
    };
    condition: {
      type: "prompt";
    } | {
      type: "logic";
      items: {
        value: {
          type: "script";
          code: CodeText;
        } | {
          type: "value-variable";
          matchAll: boolean;
          assertions: {
            key: string;
            lhs: {
              variableID: string | null;
            };
            rhs: Markup;
            operation:
              | "is"
              | "is_not"
              | "greater_than"
              | "greater_or_equal"
              | "less_than"
              | "less_or_equal"
              | "contains"
              | "not_contains"
              | "starts_with"
              | "ends_with"
              | "is_empty"
              | "is_not_empty";
          }[];
        };
        id: string;
        label?: string | null;
      }[];
    };
    name?: string;
    noMatch?: {
      repromptID: string | null;
      path?: boolean | null;
      pathLabel?: string | null;
    } | null;
  };
  nodeID: string;
  coords?: [number, number];
};

What This Means for You:

  • The condition-v3 node will support both script-based and variable-based conditions, allowing for more complex logic.
  • Only new condition steps created after the update will utilize the condition-v3 format, unlocking additional features.
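
To make the shape concrete, here is a hypothetical condition-v3 node using the variable-based form; all IDs and the variable reference are made up:

const exampleConditionV3: ConditionV3Node = {
  type: "condition-v3",
  nodeID: "node-1", // hypothetical
  coords: [300, 180],
  data: {
    name: "Route by order total",
    portsV2: {
      byKey: {
        "item-high-value": { id: "port-1", type: "next", target: "node-2" },
      },
    },
    condition: {
      type: "logic",
      items: [
        {
          id: "item-high-value",
          label: "High-value order",
          value: {
            type: "value-variable",
            matchAll: true,
            assertions: [
              {
                key: "assertion-1",
                lhs: { variableID: "order_total" }, // hypothetical variable ID
                operation: "greater_than",
                rhs: ["100"], // Markup: a plain string literal
              },
            ],
          },
        },
      ],
    },
    noMatch: { repromptID: null, path: true, pathLabel: "Else" },
  },
};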

Action Required:

  • Plan to modify any custom integrations that interact with diagrams[id].nodes to accommodate the new node type.

We will provide more details and release notes when the update becomes available.