Simplify your workflow and make managing your agents even easier with more sharing options.

Import and Export Variables and Entities in the CMS

You can now import and export variables and entities directly in your Agent CMS, saving you time and effort when setting up and sharing your agents.

  • Quickly populate your variables and entities by importing exported JSON files
  • Easily create new versions of variables and entities by importing new files
  • Export your variables and entities as JSON files for backup or sharing
  • Save time by bulk importing and exporting variables and entities

These new features are particularly useful when you're managing a large number of variables or entities, creating new versions frequently, working with large datasets or complex agents, or sharing your variables and entities with team members and others.

These new import and export features in the Agent CMS will help you set up, manage, and share your agents more efficiently, allowing you to focus on creating engaging conversational experiences.

This week we're excited to introduce two new features that will enhance your workflow and make it easier to build and debug your agents.

Export and Import Prompts in the Prompt CMS


Reusing and sharing prompts across different agents is now a breeze with our new export and import functionality in the Prompt CMS. Easily export prompts individually or in bulk as JSON files.

  • Import prompts into any agent with just a few clicks, including any variables or entities that are used in the prompt.
  • Streamline your workflow by reusing effective prompts across projects.
  • Share your best prompts with colleagues and the community.

Building great agent experiences often involves iterating on prompts. Now you can save time by leveraging your best prompts across all your agents.

Improved Variable Debugging for Objects and Arrays

Debugging when using complex variables is now more intuitive in the debug panel. We've improved the display of objects and arrays so you can easily inspect their values.

  • See and modify objects and arrays assigned to variables during prototyping.
  • Quickly identify issues with variable assignments.

Previously, objects were not displayed in the debug panel, making it difficult to understand what data they contained. This improvement brings more transparency to your debugging process.

This week, we’re excited to announce the beta release of a number of new Smart Chunking features, designed to enhance the way you process and retrieve knowledge base content. These improvements address previous limitations and bring more efficiency to your document management workflow.

LLM-Generated Questions

Enhance retrieval accuracy by prepending AI-generated questions to document chunks. This aligns your content more closely with potential user queries, making it easier for users to find the information they need.
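As a rough illustration of the idea (not Voiceflow's internal implementation), prepending already-generated questions to a chunk before it is indexed might look like this; the question generation itself happens upstream via an LLM:

```javascript
// Illustrative sketch only: prepend LLM-generated questions to a chunk's
// text before it is embedded/indexed. Generating the questions is out of
// scope here; `questions` is assumed to come from an LLM call.
function prependQuestions(chunkText, questions) {
  const header = questions.map((q) => `Q: ${q}`).join('\n');
  return `${header}\n\n${chunkText}`;
}

// Example: enrich a chunk about refunds with two likely user queries.
const enriched = prependQuestions(
  'Refunds are processed within 5 business days.',
  ['How long do refunds take?', 'When will I get my money back?']
);
// `enriched` starts with the questions, followed by the original chunk text.
```

Because the questions are phrased the way users actually ask, the enriched chunk sits closer to real queries in embedding space, which is what improves retrieval.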

Context Summarization

Provide additional context by adding AI-generated summaries to each chunk. This helps users understand the content more quickly and improves the relevance of search results.

LLM-Based Chunking

Experience optimal document segmentation determined by semantic similarity and retrieval effectiveness. This AI-driven approach ensures your content is chunked in the most meaningful way.

Content Summarization

Let AI summarize and refine your content, focusing on the most important information. This feature streamlines your documents, making your chunks more concise and optimized for retrieval performance.


We encourage you to explore these new capabilities and share your feedback.

To start using the Smart Chunking beta features, join the waiting list here.

Events enable users to trigger workflows without direct conversational input, letting your agent respond to user-defined events tailored to specific use cases. With Events, your agent becomes more context-aware and responsive, providing a more engaging and dynamic user experience.

What’s New

Events System

  • Custom Triggers: Define custom events in the new Event CMS, allowing your agent to respond to specific user actions beyond just conversational input.
  • Seamless Integration: Events act as signals from the user’s interactions—like button clicks, page navigations, or in-app actions—enabling your agent to initiate specific workflows dynamically.
  • Event Triggers in Workflows: Use the new Event type in the Trigger step to associate events with specific flows in your agent, giving you full control over the conversational paths.

Why Use Events?

  • Expand Interaction Capabilities: Respond to a wide range of user actions within your application, making your agent more intelligent and adaptable.
  • Create Contextual Experiences: Provide relevant interactions based on what the user is doing.
  • Streamline User Journeys: Assist users at critical points, offering guidance, confirmations, or additional information exactly when needed.

Examples of How Events Can Enhance Your Agent

  • User Clicks a Checkout Button: Trigger an event to initiate a checkout assistance flow, confirming items or offering shipping options.
  • In-App Feature Usage: Start a tutorial when a user accesses a new feature for the first time.
  • User Sends a Command in a Messaging App: Provide immediate responses to specific commands, like showing recent transactions.
  • User Navigates to a Specific Page: Offer assistance related to the content of the page, such as explaining pricing plans.
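To make the checkout example concrete, a client application would send the agent a signal when the button is clicked. The exact action and payload shape below is a hypothetical sketch, not the documented schema (check the Events documentation for the real format); it only illustrates building such a request body:

```javascript
// Hypothetical sketch: build a request body that dispatches a custom event
// to the agent. The `action`/`payload` shape here is an assumption for
// illustration -- consult the Events documentation for the actual schema.
function buildEventRequest(eventName, payload = {}) {
  return {
    action: {
      type: 'event',
      payload: { event: { name: eventName }, ...payload },
    },
  };
}

// Example: signal that the user clicked the checkout button,
// carrying a cart identifier for the assistance flow.
const body = buildEventRequest('checkout_clicked', { cartId: 'cart_123' });
// `body` would then be POSTed as JSON in an interact request.
```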

Recognizing that your AI prompts are the cornerstone of agent behaviour, we’ve developed a comprehensive suite of tools designed to provide a central hub for creating, updating, and testing prompts with ease and efficiency.

What’s New

  1. Prompts CMS

    • Centralized Prompt Hub: Manage all your prompts in one place, ensuring consistency and easy access across your entire agent.

    • Advanced Prompt Editor: Craft, edit, and test your prompts within an intuitive interface equipped with the necessary tooling to refine your AI agent’s responses.

    • Message Pairs & Conversation Memory: Utilize message pairs to simulate interactions and inject conversation memory, allowing for more dynamic and context-aware agent behaviour.

    • Visibility into Performance Metrics: Gain insights into latency and token consumption, now split by input and output tokens, to optimize your prompts for performance and cost-efficiency.

  2. New Prompt Step

    • Prompt Integration: Incorporate response prompts directly into your agent workflows using the new Prompt step.
    • Reuse Across Your Agent: Prompts you create can be easily reused throughout your agent, making any updates available wherever the prompt is used.

  3. Assign Prompts in Set Step

    • Simplify Designs: Assign prompts directly in the Set step, bringing reusability to variable assignment and consolidating how variable values are set in your agent.

Looking Ahead

  • Expanded Prompt Support: Soon, you’ll be able to use prompts in more steps within your agent’s flow, unlocking new possibilities for interaction design.
  • Community Sharing: We’re developing features that will allow you to share prompts across your agents and with the wider community, facilitating collaboration and collective improvement.

Learn More

  • Prompt CMS and Editor: Explore the central hub for creating, testing, and managing prompts within your agent.
  • Prompt step: Learn how to integrate prompts directly into your agent’s flow.
  • Set step: Discover how to dynamically assign prompt outputs to variables for greater control over agent behaviour.

What’s New

  • Persistent Events: Functions can now define events that persist for the entire conversation session. This means that events associated with components like carousels, choice traces, or buttons remain active even after the conversation moves past the function step.
  • Delayed User Interaction: Users can interact with these persistent components at any point during the session. When they do, the agent will refer back to the original function and proceed down the relevant path defined in your function code.
  • Flexible Agent Behaviour: You now have control over whether the agent waits for user input at the function step or continues execution immediately, thanks to the new listen parameter settings.

How It Works

Option 1: Agent Waits for User Input (listen: true)

  • Behavior:
    • The agent pauses execution at the function step.
    • It waits for immediate user input before proceeding.
  • Use Case:
    • Ideal when you require the user to make a choice or provide input before moving on.
  • Implementation:
    • Set listen: true in your function’s next command.
next: {
  listen: true,
  to: [
    {
      on: { 'event.type': 'option_selected', 'event.payload.value': '1' },
      dest: 'option_1_path',
    },
    // Additional event handlers...
  ],
  defaultTo: 'default_path',
},

Option 2: Agent Continues Execution (listen: false)

  • Behavior:
    • The agent continues execution without waiting at the function step.
    • Events defined in the function persist throughout the session and can be triggered later.
  • Use Case:
    • Perfect for non-blocking interactions where users might choose to interact with components at their convenience.
  • Implementation:
    • Set listen: false in your function’s next command.
next: {
  listen: false, // Must be explicitly set to false
  to: [
    {
      on: { 'event.type': 'item_selected', 'event.payload.label': 'Item A' },
      dest: 'item_A_path',
    },
    // Additional event handlers...
  ],
  defaultTo: 'default_path',
},

Benefits

  • Dynamic Conversations: Create agents that can handle delayed interactions, allowing users to make choices at any point during the conversation.
  • Persistent Interactivity: Keep buttons and interactive elements active throughout the session.

Example

Function Code with Persistent Events:

export default async function main(args) {
  return {
    trace: [
      {
        type: 'carousel',
        payload: {
          cards: [
            {
              title: 'Product A',
              description: { text: 'Description of Product A' },
              imageUrl: 'https://example.com/productA.png',
              buttons: [
                {
                  name: 'Buy Now',
                  request: { type: 'purchase', payload: { productId: 'A' } },
                },
              ],
            },
            // Additional cards...
          ],
        },
      },
    ],
    next: {
      listen: false, // Agent continues without waiting
      to: [
        {
          on: { 'event.type': 'purchase', 'event.payload.productId': 'A' },
          dest: 'purchase_A_path',
        },
        // Additional event handlers...
      ],
      defaultTo: 'continue_path',
    },
  };
}

  • Agent Behavior:
    • Continues to 'continue_path' immediately.
    • The purchase event remains active and can be triggered later.
    • When the user clicks “Buy Now” for Product A, the agent navigates to 'purchase_A_path'.

Learn More

Updated Documentation: Visit our "Supporting listen in functions" guide to learn more about using persistent listen functionality.

We’re introducing several new features to enhance your experience with Voiceflow:

  • New Publishing Workflow:
    • Versioned Publishing with Release Notes: Easily add release notes to each version of your agent during the publishing process, making it simpler to track changes and updates.
    • Dedicated Publishing Tab: Access a new Publishing tab within the Agent CMS, where you can name your version, add release notes, and publish directly. The tab includes:
      • Publish View: Name your versions and add notes before publishing.
      • Release Notes View: Review your agent’s release history with a clean, organized display.
  • Folders for the Knowledge Base CMS:
    • We’ve added folders to the Voiceflow Knowledge Base CMS, enabling you to organize your data sources more effectively. This makes managing large datasets and sources easier, keeping everything in order.

  • Enhanced Choice Step with Button Support:
    • You can now attach buttons to a choice step, with intents being optional. This means you can add paths to a Choice step that are only triggered with a button click, offering greater flexibility in how users interact with your agents.

At Voiceflow, our mission is ambitious: to provide you with the best agent creation platform in the world. We believe in empowering creators like you to build advanced conversational AI agents without limits.

As we’ve integrated more powerful language models into Voiceflow, we’ve seen your projects become more innovative and dynamic. Your agents are smarter, more engaging, and pushing the boundaries of what’s possible in conversational AI.

But we don’t want you to slow down—we want to give you more “fuel” to keep that engine running at full speed.

That’s why we’re excited to announce that we’ve doubled the included monthly AI token allotments for our Pro, Team, and Enterprise plans!

Here are the new allotments:

  • Pro Plan: Now includes 4 million AI tokens per month.
  • Team Plan: Now includes 20 million AI tokens per month.
  • Enterprise Plan: Now includes 200 million AI tokens per month.

Need more? Additional tokens are available for purchase across all plans.

This upgrade is about more than just numbers. It’s about supporting your vision and giving you the resources to bring your most ambitious ideas to life. Whether you’re developing complex conversational flows, experimenting with new AI features, or scaling up your existing projects, we’ve got you covered.

We’re committed to making Voiceflow not just a tool, but a platform that grows with you—a place where your creativity has no bounds.

Thank you for being an essential part of our journey. We can’t wait to see what incredible things you’ll build with this extra boost.

We're excited to announce the release of our new Streaming API endpoint, designed to enhance real-time interactions with your Voiceflow agents. This feature allows you to receive server-sent events (SSE) in real time using the text/event-stream format, providing immediate responses and a smoother conversational experience for your users.

Key Highlights

  • Real-Time Event Streaming: Receive immediate trace events as your Voiceflow project progresses, allowing for dynamic and responsive conversations.

  • Improved User Experience: Drastically reduce latency by sending information to users as soon as it's ready, rather than waiting for the entire turn to finish.

  • Support for Long-Running Operations: Break up long-running steps (e.g., API calls, AI responses, JavaScript functions) by sending immediate feedback to the user while processing continues in the background.

  • Streaming LLM Responses: With the completion_events query parameter set to true, stream large language model (LLM) responses (e.g., from Response AI or Prompt steps) as they are generated, providing instant feedback to users.

How to Use the Streaming API

Endpoint

POST /v2/project/{projectID}/user/{userID}/interact/stream

Required Headers

  • Accept: text/event-stream
  • Authorization: {Your Voiceflow API Key}
  • Content-Type: application/json

Query Parameters

  • completion_events (optional): Set to true to enable streaming of LLM responses as they are generated.

Example Request

curl --request POST \
     --url https://general-runtime.voiceflow.com/v2/project/{projectID}/user/{userID}/interact/stream \
     --header 'Accept: text/event-stream' \
     --header 'Authorization: {Your Voiceflow API Key}' \
     --header 'Content-Type: application/json' \
     --data '{
       "action": {
         "type": "launch"
       }
     }'

Example Response

event: trace
id: 1
data: {
  "type": "text",
  "payload": {
    "message": "Give me a moment...",
  },
  "time": 1725899197143
}

event: trace
id: 2
data: {
  "type": "debug",
  "payload": {
    "type": "api",
    "message": "API call successfully triggered"
  },
  "time": 1725899197146
}

event: trace
id: 3
data: {
  "type": "text",
  "payload": {
    "message": "Got it, your flight is booked for June 2nd, from London to Sydney.",
  },
  "time": 1725899197148
}

event: end
id: 4
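On the client side, the stream body needs to be split into individual events before the data payloads can be used. The sketch below is simplified relative to the full server-sent events specification (it assumes single-line `event:` and `id:` fields and one JSON document per frame, matching the examples above):

```javascript
// Minimal sketch of parsing a text/event-stream body into event objects.
// Simplified relative to the full SSE spec; tailored to the frame shapes
// shown in the example responses above.
function parseSSE(raw) {
  return raw
    .split('\n\n')                        // frames are separated by blank lines
    .map((frame) => frame.trim())
    .filter(Boolean)
    .map((frame) => {
      const evt = { event: null, id: null, data: null };
      const dataLines = [];
      for (const line of frame.split('\n')) {
        if (line.startsWith('event: ')) evt.event = line.slice(7);
        else if (line.startsWith('id: ')) evt.id = line.slice(4);
        else if (line.startsWith('data: ')) dataLines.push(line.slice(6));
        else dataLines.push(line);        // continuation of multi-line data
      }
      if (dataLines.length) {
        try { evt.data = JSON.parse(dataLines.join('\n')); }
        catch { evt.data = dataLines.join('\n'); }
      }
      return evt;
    });
}
```

In a browser, the built-in EventSource API is GET-only, so a POST-based stream like this one is typically consumed with fetch and a reader, feeding accumulated text into a parser along these lines.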

Streaming LLM Responses with completion_events

By setting completion_events=true, you can stream responses from LLMs token by token as they are generated. This is particularly useful for steps like Response AI or Prompt, where responses may be lengthy.

Example Response with completion_events=true

event: trace
id: 1
data: {
  "type": "completion",
  "payload": {
    "state": "start"
  },
  "time": 1725899197143
}

event: trace
id: 2
data: {
  "type": "completion",
  "payload": {
    "state": "content",
    "content": "Welcome to our service. How can I help you today? Perh"
  },
  "time": 1725899197144
}

... [additional content events] ...

event: trace
id: 6
data: {
  "type": "completion",
  "payload": {
    "state": "end"
  },
  "time": 1725899197148
}

event: end
id: 7
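A client consuming the stream can reassemble the full LLM message by concatenating the content events between the start and end states. A minimal sketch, assuming trace data shaped like the example above:

```javascript
// Sketch: accumulate streamed completion traces into full messages.
// Expects trace data objects shaped like the example response above:
// { type: 'completion', payload: { state: 'start' | 'content' | 'end', content? } }
function accumulateCompletions(traces) {
  const messages = [];
  let current = null;
  for (const trace of traces) {
    if (trace.type !== 'completion') continue;
    const { state, content } = trace.payload;
    if (state === 'start') current = '';                         // new message begins
    else if (state === 'content' && current !== null) current += content;
    else if (state === 'end' && current !== null) {
      messages.push(current);                                    // message complete
      current = null;
    }
  }
  return messages;
}
```

In a real UI you would typically render each content fragment as it arrives rather than waiting for the end event; the accumulator above just shows how the fragments compose.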

Getting Started

  • Find Your projectID: Locate your projectID in the agent's settings page within the Voiceflow Creator. Note that this is not the same as the ID in the URL creator.voiceflow.com/project/.../.

Find your projectID

  • Include Your API Key: Ensure you include your Voiceflow API Key in the Authorization header of your requests.

Notes

  • Compatibility: This new streaming endpoint complements the existing interact endpoint and is designed to enhance real-time communication scenarios.

  • Deterministic and Streamed Messages: When using completion_events, you may receive a mix of streamed and fully completed messages. Consider implementing logic in your client application to handle these different message types for a seamless user experience.

  • Latency Reduction: By streaming events as they occur, you can significantly reduce perceived latency, keeping users engaged and informed throughout their interaction.
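One way to handle the mix of streamed and fully completed messages is a simple dispatcher keyed on trace type. A sketch, assuming the trace shapes shown in the earlier example responses:

```javascript
// Sketch: route incoming trace data to different handlers depending on
// whether it is a fully formed text message or a streamed completion chunk.
// Trace shapes follow the example responses above.
function routeTrace(trace, handlers) {
  switch (trace.type) {
    case 'text':        // deterministic, fully completed message
      handlers.onText(trace.payload.message);
      break;
    case 'completion':  // streamed LLM output: start / content / end states
      handlers.onCompletion(trace.payload);
      break;
    default:            // debug traces, end markers, etc.
      if (handlers.onOther) handlers.onOther(trace);
  }
}
```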


We believe this new Streaming API will greatly enhance the interactivity and responsiveness of your Voiceflow agents. We can't wait to see how you leverage this new capability in your projects!

For any questions or support, please reach out to our support team or visit our community forums.

We are pleased to announce the launch of the Smart Chunking beta program. Over the coming weeks, we will be testing and validating several LLM-based strategies to enhance the quality of your knowledge base chunks. Better chunks lead to better responses and higher-quality AI agents.

First Strategy: HTML to Markdown Conversion

Our initial strategy focuses on automatic HTML to Markdown conversion. Many users import content from web pages—either directly using our web scraper or via APIs from services like Zendesk Help Center or Kustomer. This content often contains raw HTML, which can be noisy and degrade chunk performance and response quality.

By converting HTML to Markdown automatically, we aim to improve the cleanliness and readability of your content. This conversion is supported for all data sources you can upload into Voiceflow.
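Voiceflow performs this conversion for you; as a rough illustration of the transformation itself (not the actual implementation, which handles far more cases), a few common tags map to Markdown like this:

```javascript
// Illustrative only -- a tiny regex-based HTML-to-Markdown sketch covering
// a handful of tags. Real converters (and Voiceflow's own pipeline) handle
// much more: nested elements, attributes, tables, escaping, etc.
function htmlToMarkdown(html) {
  return html
    .replace(/<h1>(.*?)<\/h1>/g, '# $1\n\n')
    .replace(/<h2>(.*?)<\/h2>/g, '## $1\n\n')
    .replace(/<strong>(.*?)<\/strong>/g, '**$1**')
    .replace(/<em>(.*?)<\/em>/g, '*$1*')
    .replace(/<a href="(.*?)">(.*?)<\/a>/g, '[$2]($1)')
    .replace(/<li>(.*?)<\/li>/g, '- $1\n')
    .replace(/<\/?(ul|ol)>/g, '')
    .replace(/<p>(.*?)<\/p>/g, '$1\n\n')
    .trim();
}

// Example:
// htmlToMarkdown('<h1>Refunds</h1><p>See our <strong>policy</strong>.</p>')
```

The point of the conversion is visible in the output: tag noise disappears while the document structure (headings, emphasis, lists, links) survives as lightweight Markdown, producing cleaner chunks for retrieval.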

Join the Beta Program

If you're interested in participating in the Smart Chunking beta, please sign up via the waitlist link. We will be granting access to participants over the next few weeks.