This is an important update to our platform. As part of our ongoing commitment to enhancing your experience and providing the most advanced tools for AI agent development, we are deprecating the AI Response and AI Set steps.

What does this mean for you?

  • On February 4th, 2025, the AI Response and AI Set steps will be disabled in the step toolbar of the Voiceflow interface to encourage users to move away from these deprecated steps. Existing steps will remain untouched and will continue to work as normal.
  • On June 3rd, 2025, these steps will no longer be supported. Any existing projects using these steps will need to be migrated to the new Prompt and Set steps. We will send out additional communication in advance of the sunset date.

We understand that this change may require some adjustments to your workflow, but rest assured that we are here to support you throughout this transition. The new Prompt and Set steps, along with our powerful Prompt CMS, offer even more flexibility and control over your conversational experiences.

Some key benefits of the new approach include:

  • Centralized prompt management: The Prompt CMS serves as a hub for all your prompts, making it easy to create, edit, and reuse them across your projects.
  • Advanced prompt configuration: Leverage system prompts, message pairs, conversation history, and variables to craft highly contextual and dynamic responses.
  • Seamless integration: The Prompt step allows you to bring your prompts directly into your conversation flows, while the Set step lets you assign prompt outputs to variables for enhanced logic and control.
  • Continued innovation: We are committed to expanding the capabilities of these new features, with exciting updates planned for the near future.

For those using the Knowledge Base, we recommend transitioning to the KB Search step. This step allows you to query your Knowledge Base and feed the results into a prompt, enabling even more intelligent and relevant responses.

To help guide you through migrating from the AI steps to the Prompt step, check our walkthrough below:

We value your feedback and are here to address any questions or concerns you may have. Our team is dedicated to ensuring a smooth transition and helping you unlock the full potential of these powerful new features.

Thank you for your understanding and continued support. We are excited about the future of conversational AI development on Voiceflow and look forward to seeing the incredible experiences you will create with these enhanced capabilities.

Best regards,

Voiceflow

API Step V2

by Zoran Slamkov

We're introducing a new API step with a cleaner, more intuitive interface for configuring your API requests. While the existing API step remains fully functional, we recommend trying out the new version at your earliest convenience.

Project Data Changes

For users working with our API programmatically, we've included the new step type definition below:

type CodeText = (
  | string
  | {
      variableID: string;
    }
  | {
      entityID: string;
    }
)[];

interface MarkupSpan {
  text: Markup;
  attributes?: Record<string, unknown> | undefined;
}

type Markup = (
  | string
  | {
      variableID: string;
    }
  | {
      entityID: string;
    }
  | MarkupSpan
)[];

type ApiV2Node = {
  type: "api-v2";
  data: {
    name: string;
    url?: Markup | null | undefined;
    headers?: Array<{ id: string; key: string; value: Markup }> | undefined;
    httpMethod: "get" | "post" | "put" | "patch" | "delete";
    queryParameters?: Array<{ id: string; key: string; value: Markup }> | undefined;
    responseMappings?: Array<{ id: string; path: string; variableID: string }> | undefined;
    body?:
      | {
          type: "form-data";
          formData: Array<{ id: string; key: string; value: Markup }>;
        }
      | {
          type: "params";
          params: Array<{ id: string; key: string; value: Markup }>;
        }
      | {
          type: "raw-input";
          content: CodeText;
        }
      | null
      | undefined;
    fallback?:
      | {
          path: boolean;
          pathLabel: string;
        }
      | null
      | undefined;
    portsV2: {
      byKey: Record<
        string,
        {
          type: string;
          id: string;
          target: string | null;
        }
      >;
    };
  };
  nodeID: string;
  coords?: [number, number] | undefined;
};
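
For reference, here is a minimal object that satisfies this type. It is a sketch only: the endpoint, variable IDs, and port keys below are placeholders rather than values from a real project export.

const exampleNode: ApiV2Node = {
  type: "api-v2",
  nodeID: "node-1",
  coords: [120, 240],
  data: {
    name: "Fetch user",
    httpMethod: "get",
    // Markup values interleave literal strings with variable references
    url: ["https://api.example.com/users/", { variableID: "user_id_variable" }],
    headers: [
      { id: "h1", key: "Authorization", value: ["Bearer ", { variableID: "api_token_variable" }] },
    ],
    queryParameters: [],
    // Map a JSON path from the response onto a variable
    responseMappings: [{ id: "m1", path: "data.email", variableID: "email_variable" }],
    body: null,
    fallback: { path: true, pathLabel: "Fail" },
    portsV2: {
      byKey: {
        next: { type: "next", id: "port-1", target: null },
        fail: { type: "fail", id: "port-2", target: null },
      },
    },
  },
};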

Added

We've added support for Anthropic's Claude Haiku 3.5 model.

You can now select Claude Haiku 3.5 when creating or editing your agents, allowing you to:

  • Benefit from improved performance and enhanced conversational abilities
  • Create more engaging and human-like interactions

Updated

To streamline model selection and encourage the use of the latest models, we've hidden Claude Sonnet 3.0 and Haiku 3.0 from the model dropdown.

Don't worry - this change won't affect any existing agents using these models. You can continue to use and edit them without interruption.

Added

  • Document Metadata Update - New PATCH endpoint /v1/knowledge-base/docs/{documentID} to update metadata for entire documents

    • Updates metadata across all chunks simultaneously
    • Note: Not supported for documents of type 'table'
  • Chunk Metadata Update - New PATCH endpoint /v1/knowledge-base/docs/{documentID}/chunk/{chunkID} to update metadata for specific chunks

    • Allows targeted updates to individual chunk metadata
    • Supports all document types, including tables
    • Other chunks in the document remain unchanged

Examples

Check our API documentation for detailed request/response examples and metadata formatting guidelines.
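
As a quick orientation, here is a minimal TypeScript sketch of a document-level metadata update. The api.voiceflow.com host, the { metadata: ... } body shape, and the Authorization header are assumptions; confirm the exact request format against the API documentation.

// Sketch only: update metadata for every chunk in a document.
// Host, body shape, and auth header are assumptions; see the API docs.
async function updateDocumentMetadata(
  documentID: string,
  metadata: Record<string, unknown>,
  apiKey: string
) {
  const res = await fetch(`https://api.voiceflow.com/v1/knowledge-base/docs/${documentID}`, {
    method: "PATCH",
    headers: {
      Authorization: apiKey,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ metadata }),
  });
  if (!res.ok) throw new Error(`Metadata update failed: ${res.status}`);
  return res.json();
}

// The chunk-level variant targets /v1/knowledge-base/docs/{documentID}/chunk/{chunkID}
// with the same pattern, updating only that chunk's metadata.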

We're excited to announce that Condition steps now support prompts as a condition type, allowing you to use AI responses to determine conversation paths.

What's New

  • Prompt Conditions: Condition steps can now evaluate prompt responses to intelligently branch conversations down different paths based on AI analysis.
  • Message Variant Conditions: Message steps can now use prompt responses to select the most appropriate response text, helping your agent say the right thing at the right time.
  • Seamless Prompt Integration: Choose from your existing prompts in the Prompt CMS or create new ones directly within the Condition or Message step.

Getting Started

For Condition Steps:

  1. Create or select a Condition step
  2. Choose "Prompt" as your condition type
  3. Select or create a prompt
  4. Add paths and define evaluation criteria

For Message Variants:

  1. Add variants to your Message step
  2. Select a prompt to determine variant selection
  3. Define your variant conditions
  4. Test your dynamic messaging

Learn more

Read more about the options available with the Condition step and Message step.

Simplify your workflow and make managing your agents even easier with more sharing options.

Import and Export Variables and Entities in the CMS

You can now import and export variables and entities directly in your Agent CMS, saving you time and effort when setting up and sharing your agents.

  • Quickly populate your variables and entities by importing exported JSON files
  • Easily create new versions of variables and entities by importing new files
  • Export your variables and entities as JSON files for backup or sharing
  • Save time by bulk importing and exporting variables and entities

These import and export features are especially useful when you're managing a large number of variables or entities, creating new versions frequently, working with complex agents or large datasets, or sharing your setup with team members.

These new import and export features in the Agent CMS will help you set up, manage, and share your agents more efficiently, allowing you to focus on creating engaging conversational experiences.

This week we're excited to introduce two new features that will enhance your workflow and make it easier to build and debug your agents.

Export and Import Prompts in the Prompt CMS


Reusing and sharing prompts across different agents is now a breeze with our new export and import functionality in the Prompt CMS. Easily export prompts individually or in bulk as JSON files.

  • Import prompts into any agent with just a few clicks, including any variables or entities that are used in the prompt.
  • Streamline your workflow by reusing effective prompts across projects.
  • Share your best prompts with colleagues and the community.

Building great agent experiences often involves iterating on prompts. Now you can save time by leveraging your best prompts across all your agents.

Improved Variable Debugging for Objects and Arrays

Debugging complex variables is now more intuitive in the debug panel. We've improved the display of objects and arrays so you can easily inspect their values.

  • See and modify objects and arrays assigned to variables during prototyping.
  • Quickly identify issues with variable assignments.

Previously, objects were not displayed in the debug panel, making it difficult to understand what data they contained. This improvement brings more transparency to your debugging process.

This week, we’re excited to announce the beta release of a number of new Smart Chunking features, designed to enhance the way you process and retrieve knowledge base content. These improvements address previous limitations and bring more efficiency to your document management workflow.

LLM-Generated Questions

Enhance retrieval accuracy by prepending AI-generated questions to document chunks. This aligns your content more closely with potential user queries, making it easier for users to find the information they need.

Context Summarization

Provide additional context by adding AI-generated summaries to each chunk. This helps users understand the content more quickly and improves the relevance of search results.

LLM-Based Chunking

Experience optimal document segmentation determined by semantic similarity and retrieval effectiveness. This AI-driven approach ensures your content is chunked in the most meaningful way.
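
For intuition, here is a rough conceptual sketch of similarity-based chunking in TypeScript. It illustrates the general technique, not Voiceflow's implementation; the embed() helper is a hypothetical stand-in for an embedding model call.

// Conceptual sketch of similarity-based chunking (not Voiceflow's implementation).
// embed() is a hypothetical helper that returns an embedding vector for a piece of text.
declare function embed(text: string): Promise<number[]>;

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, value, i) => sum + value * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Merge adjacent sentences into one chunk while they remain semantically similar.
async function chunkBySimilarity(sentences: string[], threshold = 0.8): Promise<string[]> {
  if (sentences.length === 0) return [];
  const chunks: string[] = [];
  let current = sentences[0];
  let currentVec = await embed(current);

  for (const sentence of sentences.slice(1)) {
    const vec = await embed(sentence);
    if (cosine(currentVec, vec) >= threshold) {
      current += " " + sentence;          // same topic: extend the current chunk
      currentVec = await embed(current);  // refresh the chunk embedding
    } else {
      chunks.push(current);               // topic shift: close the chunk
      current = sentence;
      currentVec = vec;
    }
  }
  chunks.push(current);
  return chunks;
}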

Content Summarization

Let AI summarize and refine your content, focusing on the most important information. This feature streamlines your documents, making your chunks more concise and optimized for retrieval performance.


We encourage you to explore these new capabilities and share your feedback.

To start using the Smart Chunking beta features, join the waiting list here.

Events enable your agent to trigger workflows without direct conversational input, responding to user-defined events tailored to your specific use cases. With Events, your agent becomes more context-aware and responsive, providing a more engaging and dynamic user experience.

What’s New

Events System

  • Custom Triggers: Define custom events in the new Event CMS, allowing your agent to respond to specific user actions beyond just conversational input.
  • Seamless Integration: Events act as signals from the user’s interactions—like button clicks, page navigations, or in-app actions—enabling your agent to initiate specific workflows dynamically.
  • Event Triggers in Workflows: Use the new Event type in the Trigger step to associate events with specific flows in your agent, giving you full control over the conversational paths.

Why Use Events?

  • Expand Interaction Capabilities: Respond to a wide range of user actions within your application, making your agent more intelligent and adaptable.
  • Create Contextual Experiences: Provide relevant interactions based on what the user is doing.
  • Streamline User Journeys: Assist users at critical points, offering guidance, confirmations, or additional information exactly when needed.

Examples of How Events Can Enhance Your Agent

  • User Clicks a Checkout Button: Trigger an event to initiate a checkout assistance flow, confirming items or offering shipping options.
  • In-App Feature Usage: Start a tutorial when a user accesses a new feature for the first time.
  • User Sends a Command in a Messaging App: Provide immediate responses to specific commands, like showing recent transactions.
  • User Navigates to a Specific Page: Offer assistance related to the content of the page, such as explaining pricing plans.


Recognizing that your AI prompts are the cornerstone of agent behaviour, we’ve developed a comprehensive suite of tools designed to provide a central hub for creating, updating, and testing prompts with ease and efficiency.

What’s New

  1. Prompts CMS

    • Centralized Prompt Hub: Manage all your prompts in one place, ensuring consistency and easy access across your entire agent.

    • Advanced Prompt Editor: Craft, edit, and test your prompts within an intuitive interface equipped with the necessary tooling to refine your AI agent’s responses.

    • Message Pairs & Conversation Memory: Utilize message pairs to simulate interactions and inject conversation memory, allowing for more dynamic and context-aware agent behaviour.

    • Visibility into Performance Metrics: Gain insights into latency and token consumption, now split by input and output tokens, to optimize your prompts for performance and cost-efficiency.

  2. New Prompt Step

    • Prompt Integration: Incorporate response prompts directly into your agent workflows using the new Prompt step.
    • Reuse Across Your Agent: Prompts you create can be reused across your agent, and any updates are reflected wherever the prompt is used.
  3. Assign Prompts in Set Step

    • Simplify Designs: This feature brings prompts to the Set step, making prompts reusable and consolidating how variable values are set in your agent.

Looking Ahead

  • Expanded Prompt Support: Soon, you’ll be able to use prompts in more steps within your agent’s flow, unlocking new possibilities for interaction design.
  • Community Sharing: We’re developing features that will allow you to share prompts across your agents and with the wider community, facilitating collaboration and collective improvement.

Learn More

  • Prompt CMS and Editor: Explore the central hub for creating, testing, and managing prompts within your agent.
  • Prompt step: Learn how to integrate prompts directly into your agent’s flow.
  • Set step: Discover how to dynamically assign prompt outputs to variables for greater control over agent behaviour.