We are pleased to announce the launch of the Smart Chunking beta program. Over the coming weeks, we will be testing and validating several LLM-based strategies to enhance the quality of your knowledge base chunks. Better chunks lead to better responses and higher-quality AI agents.

First Strategy: HTML to Markdown Conversion

Our initial strategy focuses on automatic HTML to Markdown conversion. Many users import content from web pages—either directly using our web scraper or via APIs from services like Zendesk Help Center or Kustomer. This content often contains raw HTML, which can be noisy and degrade chunk performance and response quality.

By converting HTML to Markdown automatically, we aim to improve the cleanliness and readability of your content. This conversion is supported for all data sources you can upload into Voiceflow.
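As a toy illustration of the kind of cleanup this conversion performs (this is not Voiceflow's actual converter, which is LLM-assisted; the tag handling below is deliberately simplified):

```typescript
// Toy HTML -> Markdown cleanup: a handful of common tags handled via
// simple string replacement, purely to illustrate the idea.
function htmlToMarkdown(html: string): string {
  return html
    .replace(/<h1>(.*?)<\/h1>/g, "# $1\n")
    .replace(/<h2>(.*?)<\/h2>/g, "## $1\n")
    .replace(/<strong>(.*?)<\/strong>/g, "**$1**")
    .replace(/<em>(.*?)<\/em>/g, "*$1*")
    .replace(/<a href="(.*?)">(.*?)<\/a>/g, "[$2]($1)")
    .replace(/<li>(.*?)<\/li>/g, "- $1\n")
    .replace(/<\/?(ul|ol|p)>/g, "\n")   // drop structural wrappers
    .replace(/\n{3,}/g, "\n\n")         // collapse extra blank lines
    .trim();
}

console.log(
  htmlToMarkdown('<h2>Refunds</h2><p>See <a href="https://example.com">our policy</a> for <strong>full</strong> details.</p>')
);
```

The output is plain Markdown with the HTML noise stripped, which is the kind of cleaner chunk the knowledge base can retrieve against.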

Join the Beta Program

If you're interested in participating in the Smart Chunking beta, please sign up via the waitlist link. We will be granting access to participants over the next few weeks.

Announcing the new Secrets Manager in Voiceflow! This new feature enables you to securely store and manage a variety of sensitive information within your AI agents, including API keys, database credentials, encryption keys, and more.

What's New

  • Secure Storage of Sensitive Data

    • Safely store passwords and credentials used to access essential services and functions within your software.
    • Utilize AES-256 GCM encryption to ensure confidentiality and integrity of your secrets.
  • Visibility Controls

    • Masked Secrets: Values are hidden but can be temporarily revealed when needed.
    • Restricted Secrets: Values remain hidden and cannot be revealed after creation, enhancing security for highly sensitive data.
  • Environment Overrides

    • Specify different secret values for Development and Production environments.
    • Seamlessly manage environment-specific configurations without altering your agent's logic.
  • Integration with Function and API Steps

    • Easily insert secrets into Function and API steps without hardcoding sensitive information.
    • Access secrets directly from the canvas by typing { and selecting from the Secrets tab.
  • Secure Project Sharing

    • When duplicating or exporting projects, secret values are excluded to maintain security.
    • Re-add secret values as needed in the new project instance.
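For readers curious what AES-256 GCM buys you, here is a minimal sketch of the scheme using Node's built-in crypto module. This is an illustration of the cipher itself, not Voiceflow's implementation; real key management is far more involved than the single in-memory key shown here.

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// AES-256-GCM sketch: GCM provides both confidentiality (the ciphertext)
// and integrity (the auth tag), so tampering is detected on decryption.
const key = randomBytes(32); // 256-bit key (illustrative; real keys live in a KMS)
const iv = randomBytes(12);  // GCM-standard 96-bit nonce

function encryptSecret(plaintext: string): { ciphertext: Buffer; tag: Buffer } {
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { ciphertext, tag: cipher.getAuthTag() };
}

function decryptSecret(ciphertext: Buffer, tag: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // final() throws if ciphertext or tag was altered
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}

const { ciphertext, tag } = encryptSecret("sk-live-example");
console.log(decryptSecret(ciphertext, tag)); // round-trips to the original value
```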

Benefits

  • Enhanced Security

    • Protects sensitive information using industry-standard encryption.
    • Prevents unauthorized access through robust key management and encryption practices.
  • Simplified Management

    • Centralizes the handling of all your secrets within the Secrets Manager.
    • Reduces the risk of accidental exposure by avoiding hardcoded credentials.
  • Operational Flexibility

    • Environment overrides allow for different configurations across development and production stages.
    • Streamlines the deployment process by separating environment-specific data.

Getting Started

  1. Access the Secrets Manager

    • Navigate to Agent Settings in the left sidebar.
    • Click on the Secrets tab to open the Secrets Manager interface.
  2. Create a New Secret

    • Click New Secret in the top-right corner.
    • Enter the Name, Value, and choose the Visibility setting.
    • Click Create Secret to add it to your Secrets Manager.
  3. Use Secrets in Your Agent

    • In a Function or API step, type { to open the variable selector.
    • Switch to the Secrets tab and select the desired secret.
  4. Set Up Environment Overrides

    • Go to the Environments tab in Agent Settings.
    • Click Override Secrets next to the desired environment.
    • Enter environment-specific values for your secrets and click Save.

Learn More

For detailed instructions and best practices, please refer to our updated User Guide for the Secrets Manager.


Updated Button Step

What's Changed?

Simplified and Improved UI

  • Streamlined Button Creation: We've made it quicker and more intuitive to add buttons. Simply click the "+" button next to the Buttons label in the editor to add a new button.
  • Clean Editor Layout: The editor has been decluttered to focus on what's important, allowing you to configure your buttons without any unnecessary distractions.

How to Start Using the Redesigned Button Step

The redesigned Button Step is available now in your step toolbar, labeled as Button. To take advantage of the new user interface:

  1. Replace Existing Steps: You can replace your existing Button steps with the redesigned version in your projects to benefit from the improved UI.
  2. Update Your Conversation Paths: Reconnect any paths as needed using the ports associated with your buttons.
  3. Review Settings: Check your No Match, No Reply, and Listen for Other Triggers settings to ensure they are configured as desired.

Note: Your existing Button steps will continue to function as before unless you decide to update them. This means your current projects won't be affected until you're ready to make the switch.

We've been working on some significant updates to make your conversational agents even better. Over the next week, we'll be rolling out new features that leverage advanced AI capabilities to improve how your projects understand and interact with users.

What's New?

Transition to AI-Powered Entity Extraction

We've moved from traditional Natural Language Understanding (NLU) to using Large Language Models (LLMs) for entity extraction. This change is all about making your agents more accurate and adaptable when interpreting user inputs. By embracing AI-powered entity extraction, your agents can now handle a wider range of conversational scenarios with greater reliability.

New AI-Powered Steps with Enhanced Entity Collection: Choice, Capture, and Trigger

To fully leverage the improved entity extraction capabilities, we've updated some core steps in Voiceflow. The Choice, Capture, and Trigger steps have all been upgraded to natively support the new AI-powered entity collection features. This means these steps are now better equipped to collect necessary information from users, making your conversational agents more effective and responsive.

Activating the New Steps in Your Project

The new Choice and Capture steps are available in the step toolbar. To start using the new AI-powered features, you'll need to replace your existing steps with these updated versions in your projects. Your existing Choice and Capture steps will remain unchanged, so your current setups won't be affected unless you choose to update them.

Introducing Rules and Exit Scenarios

We've added two new features to give you more control over how your agents handle user inputs:

  • Rules: You can now define specific criteria that the user's input must meet. This helps ensure that the information collected is valid and meets your requirements.
  • Exit Scenarios: These allow your agent to gracefully handle situations where the user can't provide the necessary information. You can set up alternative paths or responses for these cases, improving the overall user experience.

Automatic Reprompt Feature

We're also introducing an Automatic Reprompt feature. If a user provides incomplete or incorrect information, your agent will now generate personalized prompts to help them provide the missing details. This makes interactions smoother and less frustrating for your users.

Why This Matters

We know that accurately understanding user inputs is crucial for effective conversations. By moving to AI-powered entity extraction, we're providing you with tools that offer greater accuracy and adaptability. This helps your agents handle a wider range of conversational scenarios, providing better experiences for your users.

What You Need to Do

To start benefiting from these new features:

  1. Update Your Steps: Replace your existing Choice and Capture steps with the new AI-powered versions available in the step toolbar.

  2. Configure Rules and Exit Scenarios: Use the new options within these steps to define rules and exit scenarios that suit your project's needs.

  3. Test Your Agent: After making changes, be sure to test your agent thoroughly to ensure everything works as expected.

Rollout Details

These updates will be introduced to all users over the next week. If you don't see them in your workspace right away, they'll be available soon.

We're Here to Help

As always, we're committed to supporting you. If you have any questions or need assistance with the new features, please don't hesitate to reach out to our support team or visit our Discord community.


Thank you for being part of the Voiceflow community. We're excited to see how you'll use these new capabilities to create even more engaging and effective conversational agents.

New Multimodal Projects

by Zoran Slamkov

It is easier than ever to create and manage multimodal agents that support both chat and voice interactions.

What’s New

Multimodal Projects

Single Project Type for Chat and Voice: You no longer need to choose between a chat project and a voice project. All new projects (and existing chat projects) will support both modalities by default, streamlining your workflow and expanding your agent’s capabilities.

Voice Features in Existing Chat Projects

Immediate Access to Voice Features: Existing chat projects now have built-in voice capabilities. You can start leveraging voice input and output options in your conversations without creating a new project.

Default Text-to-Speech (TTS) Settings: In the agent settings, you can now select a default TTS technology for your agent.

Voice Prototyping Tools

Voice Prototyping: The designer prototype tool and web widget now include voice input and output support. You can test voice and chat interactions anywhere you prototype, making it easier to refine your agent's conversational flow.

Web Chat Integration

Optional Voice Interface: In the web chat integration settings, there is a new toggle to enable voice input and output. This allows your web chat experiences to include voice interactions. Currently, voice input (ASR) for hosted deployments is limited to Chrome browsers. We will be looking to expand support in the future.

Impact on Existing Voice Projects

No Changes to Existing Voice Projects: Your current voice projects will remain unchanged and fully functional. However, the option to create new voice-only projects from the dashboard has been removed. All new projects will support both chat and voice modalities.

Resources

Feedback: We value your input. If you have any questions or encounter issues, please reach out to our support team.

We are introducing new step types to diagrams[id].nodes to enhance the functionality and flexibility of your AI agents.

Affected Property: diagrams[id].nodes

What's Changing:

  • New Node Types Added:

    1. ButtonsV2 Node (buttons-v2)

      • Description: Enhances button interactions with more dynamic features.

      • Type Definition:

        type ButtonsV2Node = {
          type: "buttons-v2";
          data: {
            portsV2: {
              byKey: Record<string, {
                id: string;
                type: string;
                target: string | null;
                data?: {
                  type?: "CURVED" | "STRAIGHT" | null;
                  color?: string | null;
                  points?: {
                    point: [number, number];
                    toTop?: boolean | null;
                    locked?: boolean | null;
                    reversed?: boolean | null;
                    allowedToTop?: boolean | null;
                  }[] | null;
                  caption?: {
                    value: string;
                    width: number;
                    height: number;
                  } | null;
                } | null;
              }>;
            };
            listenForOtherTriggers: boolean;
            items: {
              id: string;
              label: Markup;
            }[];
            name?: string;
            noReply?: {
              repromptID: string | null;
              path?: boolean | null;
              pathLabel?: string | null;
              inactivityTime?: number | null;
            } | null;
            noMatch?: {
              repromptID: string | null;
              path?: boolean | null;
              pathLabel?: string | null;
            } | null;
          };
          nodeID: string;
          coords?: [number, number];
        };
        
        
    2. CaptureV3 Node (capture-v3)

      • Description: Provides advanced data capture capabilities, including entity capture and automatic reprompting.

      • Type Definition:

        type CaptureV3Node = {
          type: "capture-v3";
          data: {
            portsV2: {
              byKey: Record<string, {
                id: string;
                type: string;
                target: string | null;
                data?: {
                  type?: "CURVED" | "STRAIGHT" | null;
                  color?: string | null;
                  points?: {
                    point: [number, number];
                    toTop?: boolean | null;
                    locked?: boolean | null;
                    reversed?: boolean | null;
                    allowedToTop?: boolean | null;
                  }[] | null;
                  caption?: {
                    value: string;
                    width: number;
                    height: number;
                  } | null;
                } | null;
              }>;
            };
            listenForOtherTriggers: boolean;
            capture:
              | {
                  type: "entity";
                  items: {
                    id: string;
                    entityID: string;
                    path?: boolean | null;
                    pathLabel?: string | null;
                    repromptID?: string | null;
                    placeholder?: string | null;
                  }[];
                  automaticReprompt: {
                    params: {
                      model?:
                        | "text-davinci-003"
                        | "gpt-3.5-turbo-1106"
                        | "gpt-3.5-turbo"
                        | "gpt-4"
                        | "gpt-4-turbo"
                        | "gpt-4o"
                        | "gpt-4o-mini"
                        | "claude-v1"
                        | "claude-v2"
                        | "claude-3-haiku"
                        | "claude-3-sonnet"
                        | "claude-3.5-sonnet"
                        | "claude-3-opus"
                        | "claude-instant-v1"
                        | "gemini-pro-1.5";
                      system?: string;
                      maxTokens?: number;
                      temperature?: number;
                    } | null;
                    exitScenario?: {
                      items: {
                        id: string;
                        text: string;
                      }[];
                      path?: boolean | null;
                      pathLabel?: string | null;
                    } | null;
                  } | null;
                  rules?: {
                    id: string;
                    text: string;
                  }[];
                }
              | {
                  type: "user-reply";
                  variableID?: string | null;
                };
            name?: string;
            noReply?: {
              repromptID: string | null;
              path?: boolean | null;
              pathLabel?: string | null;
              inactivityTime?: number | null;
            } | null;
          };
          nodeID: string;
          coords?: [number, number];
        };
        
        
    3. ChoiceV2 Node (choice-v2)

      • Description: Enhances user choice interactions with improved intent handling and automatic reprompt options.

      • Type Definition:

        type ChoiceV2Node = {
          type: "choice-v2";
          data: {
            portsV2: {
              byKey: Record<string, {
                id: string;
                type: string;
                target: string | null;
                data?: {
                  type?: "CURVED" | "STRAIGHT" | null;
                  color?: string | null;
                  points?: {
                    point: [number, number];
                    toTop?: boolean | null;
                    locked?: boolean | null;
                    reversed?: boolean | null;
                    allowedToTop?: boolean | null;
                  }[] | null;
                  caption?: {
                    value: string;
                    width: number;
                    height: number;
                  } | null;
                } | null;
              }>;
            };
            listenForOtherTriggers: boolean;
            items: {
              id: string;
              intentID: string;
              rules?: {
                id: string;
                text: string;
              }[];
              button?: {
                label: Markup;
              } | null;
              automaticReprompt?: {
                exitScenario?: {
                  items: {
                    id: string;
                    text: string;
                  }[];
                  path?: boolean | null;
                  pathLabel?: string | null;
                } | null;
              } | null;
            }[];
            name?: string;
            noReply?: {
              repromptID: string | null;
              path?: boolean | null;
              pathLabel?: string | null;
              inactivityTime?: number | null;
            } | null;
            noMatch?: {
              repromptID: string | null;
              path?: boolean | null;
              pathLabel?: string | null;
            } | null;
          };
          nodeID: string;
          coords?: [number, number];
        };
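As an illustration, a minimal node object conforming to the ButtonsV2 definition above might look like this. The IDs, coordinates, and port `type` string are invented for the example, and `Markup` is declared inline here using the shape previewed for condition-v3 elsewhere in these notes:

```typescript
// Markup shape as previewed in the condition-v3 format notes.
type Markup = (string | { variableID: string } | { entityID: string })[];

// Hypothetical buttons-v2 instance; all IDs and values are illustrative.
const exampleButtons = {
  type: "buttons-v2" as const,
  nodeID: "node-1",
  coords: [120, 240] as [number, number],
  data: {
    listenForOtherTriggers: false,
    items: [
      { id: "btn-yes", label: ["Yes"] as Markup },
      { id: "btn-no", label: ["No"] as Markup },
    ],
    noMatch: { repromptID: null, path: true, pathLabel: "No match" },
    noReply: null,
    portsV2: {
      byKey: {
        "btn-yes": { id: "port-1", type: "button", target: null, data: null },
      },
    },
  },
};

console.log(exampleButtons.data.items.map((item) => item.id)); // the two button IDs
```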
        
        

What This Means for You:

  • Enhanced Capabilities: The new node types offer advanced features, allowing for more complex and dynamic conversational flows.
  • Existing Nodes Remain Unchanged: Your current diagrams and nodes will continue to function as before.
  • New Features Available in New Steps: Only new instances of these steps created after the update will utilize the updated formats and capabilities.

Action Required:

  • Update Integrations: If you have custom integrations that interact with diagrams[id].nodes, plan to update them to accommodate these new node types when creating new steps.

Condition Step Update

by Zoran Slamkov

What's New:

  • Redesigned UI: The Condition Step now features an updated interface for easier navigation and setup of conditional paths.
  • Flexible Logic Configuration: Users can now choose between a Condition Builder (for non-technical users) or Expression (for custom JavaScript logic) to create paths based on variables, values, logic groups, or expressions.

Only new Condition steps created after this update will use the new version; existing Condition steps will remain untouched.

We’re excited to announce the release of the KB Search Step, a new feature designed to simplify querying data from your Knowledge Base directly within Voiceflow. This step streamlines the process of fetching data chunks, providing you with greater control and efficiency in your conversational workflows.


What’s New:

  • Native KB Querying: Easily query your Knowledge Base without the need for manual API configurations. Simply input a question or variable and retrieve relevant data chunks directly in your workflow.

  • Customizable Chunk Retrieval: Use the chunk limit slider to specify the number of data chunks to be returned, ranging from 1 to 10. Fine-tune responses by adjusting this setting to suit your agent’s needs.

  • No Match Handling: Set a minimum chunk score for chunk matching. If no data meets this threshold, the workflow can automatically follow a customizable Not Found path, ensuring a seamless user experience even when no exact match is found.

  • Enhanced Control Over Prompts: Fetched data chunks are automatically converted into a string format, making them easy to incorporate into text-based prompts and other steps. You have the flexibility to control how this data is used, enhancing the quality and relevance of your agent's responses.

  • Testing Experience: Test your queries directly within the editor. View the returned chunks and their scores for insight into optimizing your knowledge base.
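The retrieval behavior described above can be sketched as follows. Note that the chunk shape, function name, and parameters here are hypothetical stand-ins for illustration, not Voiceflow's actual API:

```typescript
// Hypothetical sketch of KB Search semantics: score threshold, chunk
// limit, and flattening retrieved chunks into a single string.
interface KBChunk {
  content: string;
  score: number; // similarity score for the chunk
}

function kbSearch(chunks: KBChunk[], chunkLimit: number, minScore: number): string | null {
  const matched = chunks
    .filter((chunk) => chunk.score >= minScore) // minimum chunk score gate
    .sort((a, b) => b.score - a.score)
    .slice(0, chunkLimit);                      // chunk limit slider: 1 to 10
  if (matched.length === 0) return null;        // null -> follow the Not Found path
  return matched.map((chunk) => chunk.content).join("\n\n"); // string for prompts
}
```

A query whose best chunks all score below the threshold returns `null`, which is where the customizable Not Found path takes over.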

How It Benefits You:

  • Improved Efficiency: Eliminate the complexity of setting up manual API calls. The KB Search Step simplifies querying your Knowledge Base, saving you time and effort.

  • Greater Flexibility: Control how many data chunks are returned and set minimum chunk scores to ensure the most relevant information is fetched for your agents.

  • Enhanced Workflow Integration: Automatically convert and save data into variables, ready for use in subsequent steps within your workflow. This streamlines the process of integrating Knowledge Base data into your conversational designs.

How to Get Started:

To start using the KB Search Step, simply drag the step onto your canvas from the Developer section in the step toolbar. Configure your query question, set your chunk limit, and map the data to a variable—all in less than a minute!

Learn More:

For detailed instructions on how to use the KB Search Step, check out our updated User Guide.

And more exciting updates

  • Token Multiplier Reduction:

    • Tokens are now more affordable, with a reduction of ~50% or more on many of the supported models! Here are the new multipliers:
      • GPT-3.5-Turbo: Reduced from 0.6x to 0.25x
      • GPT-4 Turbo: Reduced from 12x to 5x
      • GPT-4: Reduced from 25x to 14x
      • GPT-4o: Reduced from 6x to 2.5x
      • GPT-4o mini: Reduced from 0.2x to 0.08x
      • Claude Instant 1.2: Reduced from 1x to 0.4x
      • Claude 1: Reduced from 10x to 4x
      • Claude 2: Reduced from 10x to 4x
      • Claude Haiku: Reduced from 0.5x to 0.15x
      • Claude Sonnet 3: Reduced from 5x to 1.75x
      • Claude Sonnet 3.5: Reduced from 5x to 1.75x
      • Claude Opus: Reduced from 20x to 8.5x
      • Gemini: Reduced from 8x to 3.5x
  • Billing Email Setting:

    • You can now set a Billing Email on your account from the Billing page on the dashboard. This allows all billing-related emails, such as invoices and payment-related notifications, to be routed to the appropriate contact.
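To make the multiplier arithmetic concrete, here is a small sketch using a few of the new multipliers listed above (the raw-token usage numbers are made up for illustration):

```typescript
// Billed tokens = raw model tokens x the model's multiplier.
// Multipliers below are the new values from this update.
const multipliers: Record<string, number> = {
  "gpt-4o": 2.5,
  "gpt-4o-mini": 0.08,
  "claude-3-haiku": 0.15,
};

function billedTokens(model: string, rawTokens: number): number {
  return rawTokens * multipliers[model];
}

console.log(billedTokens("gpt-4o", 1000));      // 2500 billed tokens
console.log(billedTokens("gpt-4o-mini", 1000)); // 80 billed tokens
```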

We are introducing a new version of the Condition step type to enhance condition handling in your AI agents.

Affected Property: diagrams[id].nodes

What's Changing:

  • A new condition-v3 node type with an updated data structure will be added for greater flexibility and expressiveness.

New Format Preview:

type CodeText = (string | { variableID: string } | { entityID: string })[];

interface MarkupSpan {
  text: Markup;
  attributes?: Record<string, unknown>;
}

type Markup = (string | { variableID: string } | { entityID: string } | MarkupSpan)[];

type ConditionV3Node = {
  type: "condition-v3";
  data: {
    portsV2: {
      byKey: Record<string, {
        type: string;
        id: string;
        target: string | null;
      }>;
    };
    condition: {
      type: "prompt";
    } | {
      type: "logic";
      items: {
        value: {
          type: "script";
          code: CodeText;
        } | {
          type: "value-variable";
          matchAll: boolean;
          assertions: {
            key: string;
            lhs: {
              variableID: string | null;
            };
            rhs: Markup;
            operation:
              | "is"
              | "is_not"
              | "greater_than"
              | "greater_or_equal"
              | "less_than"
              | "less_or_equal"
              | "contains"
              | "not_contains"
              | "starts_with"
              | "ends_with"
              | "is_empty"
              | "is_not_empty";
          }[];
        };
        id: string;
        label?: string | null;
      }[];
    };
    name?: string;
    noMatch?: {
      repromptID: string | null;
      path?: boolean | null;
      pathLabel?: string | null;
    } | null;
  };
  nodeID: string;
  coords?: [number, number];
};
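For reference, a `condition-v3` node using the `value-variable` branch of the preview above might look like the following. All IDs, labels, and values are invented for the example, not taken from a real project:

```typescript
// Example condition-v3 node: one path that matches when the variable
// behind `order_count` is greater than 0, with a No Match fallback path.
const orderCountCheck = {
  type: "condition-v3" as const,
  nodeID: "node-42",
  data: {
    portsV2: {
      byKey: { "path-1": { id: "port-1", type: "next", target: null } },
    },
    condition: {
      type: "logic" as const,
      items: [
        {
          id: "path-1",
          label: "Returning customer",
          value: {
            type: "value-variable" as const,
            matchAll: true,
            assertions: [
              {
                key: "a1",
                lhs: { variableID: "order_count" },
                operation: "greater_than" as const,
                rhs: ["0"], // Markup: a plain string segment
              },
            ],
          },
        },
      ],
    },
    noMatch: { repromptID: null, path: true, pathLabel: "Else" },
  },
};
```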

What This Means for You:

  • The condition-v3 node will support both script-based and variable-based conditions, allowing for more complex logic.
  • Only new condition steps created after the update will utilize the condition-v3 format, unlocking additional features.

Action Required:

  • Plan to modify any custom integrations that interact with diagrams[id].nodes to accommodate the new node type.

We will provide more details and release notes when the update becomes available.


Improved Set Step

by Zoran Slamkov

We’re excited to announce an update to the Set step, making it more versatile and user-friendly for all Voiceflow users!

What’s New?

We’ve enhanced the Set step by splitting the traditional single input into two distinct options: Value and Expression. This change aims to streamline the user experience, providing a simpler path for non-technical users while offering more flexibility for advanced use cases.

  • Value Input: This new input option simplifies the process of setting variables. You can now directly assign values to variables without needing to worry about syntax like quotes or data types. This makes it easier than ever to quickly set variables to specific text (String) values or numbers.
  • Expression Input: For those needing more advanced functionality, the Expression input allows you to use JavaScript to set variables dynamically. Whether you’re incrementing or decrementing values, or performing complex calculations, the Expression input gives you the power to build sophisticated logic right within the Set step.

How to Use the Updated Set Step

To ensure a smooth transition to the new structure, all existing Set steps in your projects have been automatically migrated to the updated version. You’ll notice that your previous configurations now utilize the new Expression input, preserving all functionality while aligning with the updated structure.

Learn more about the new step in our step guides.

Thank you for being a part of our community, and happy agent building!

We are excited to announce the release of Messages, our latest feature designed to revolutionize how you manage, scale, and customize your AI agent's responses. At Voiceflow, our mission is to enable you to build scalable, efficient, and collaborative AI solutions, and Messages is a significant step forward in achieving that goal.


Effortless Reusability for Consistency and Efficiency
Messages enable you to reuse responses across different workflows, ensuring consistency and saving time. By typing a forward slash (/) in the message editor, you can quickly select from existing messages in the CMS. Any changes made to a reused message will propagate across all instances, streamlining updates and maintaining uniformity.

Dynamic Responses with Conditional Variants
With Messages, you can create multiple variants of a response to cater to diverse scenarios. Use AI to generate variants automatically or manually create them. Set conditions using the expression builder or JavaScript to tailor responses based on specific criteria, ensuring your agent's interactions are contextually appropriate and dynamic.

Enhanced Management with Improved Visibility
The Messages CMS provides a clear view of where each message is used, who last edited it, and when it was last updated. This enhanced visibility helps you manage your responses efficiently, track their usage across your projects, and ensure all team members are on the same page.

📘

The Messages feature is currently available exclusively for chat projects. We recognize the value this feature brings and are actively working on unifying our chat and voice project types. Our goal is to ensure that both modalities will soon benefit from the enhanced flexibility and efficiency that Messages provide.

What's Changed
With the release of Messages, we’ve made several notable improvements to enhance your experience:

  • Centralized Message Management: Manage all your responses from a single, unified interface.
  • Variants and Conditions: Create flexible message variants and set conditions for tailored responses.
  • Effortless Reusability: Easily reuse messages across workflows for consistency and efficiency.
  • Enhanced Visibility: Track where each message is used and manage updates efficiently.

We’re confident that Messages will significantly improve your AI agent design process, making it more efficient, scalable, and collaborative. Explore the new Messages feature today and experience the difference! Learn more about using Messages here.

Thank you for being a valued member of the Voiceflow community. We look forward to seeing how you leverage Messages to create exceptional AI experiences.