Memory is a great way to provide context to LLMs and prompts so that answers feel more organic and contextual. There are three ways to incorporate chat memory into Voiceflow: automatically at the step level, with the vf_memory variable, or with custom memory built from low-level variables.

Automatic memory

Voiceflow automatically keeps track of the conversation's memory in a variable called vf_memory. It stores the last ten turns of the conversation, each on its own line, labeled assistant or user. For example:

assistant: Hi there
user: how are you doing
assistant: I'm an AI, so I don't have feelings, but I'm here to help you. How can I assist you today?
user: are mangoes tasty
assistant: Yes, mangoes are often known for their delicious taste. 

Each Voiceflow AI step can be configured to use this built-in memory. Once you have placed your AI step, you can configure it in the editor, where you have three options for prompting your Agent:

Respond with prompt — When the step is hit during a user's session, the prompt and system prompt you provide are the only data passed to the LLM to generate a response. This option is useful when you want the LLM to perform a very specific function or generate a very specific response, and where including memory could confuse it.

Use Memory and Prompt — When the step is hit during a user's session, the prompt you provide is augmented with the previous 10 turns of the conversation, and the LLM generates a response based on both pieces of data. This option is best if you want the most dynamic possible output from the LLM, because it provides the most context to inform the response.

Respond using memory only — This passes only the previous 10 turns of the conversation to the LLM and allows it to respond without any guidance from you. This enables a purely conversational interaction between the user and your Assistant, without you providing a specific goal or task. (Only available on the Response AI step.)

Low-level configuration with last_response and last_utterance

If you need more low-level control, you can manually construct your memory within the Voiceflow builder using variables, the JavaScript step, and custom functions.

📘 Custom memory is a great way to improve prompt performance, especially when building personas or omitting certain types of messages from the history.

Using the built-in variables directly

In the builder, you can access the last_utterance and last_response variables in response steps, in code steps (JavaScript and functions) as shown in the example below, or via the API.

Using last_utterance and last_response in Voiceflow step
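
For example, a JavaScript step can append the latest exchange to a custom memory variable. The snippet below is a minimal sketch: it assumes the JavaScript step exposes your agent's variables directly by name and that a custom_memory variable already exists in your project.

// Build the newest exchange in the same assistant/user format used by vf_memory
const newTurns = `assistant: ${last_response}\nuser: ${last_utterance}`;

// Variables default to 0 before they are first set, so start fresh in that case
custom_memory = custom_memory ? `${custom_memory}\n${newTurns}` : newTurns;

// Optionally trim the history so the prompt stays small (keep the last 10 lines)
custom_memory = custom_memory.split('\n').slice(-10).join('\n');

Once set, custom_memory can be referenced in your prompts like any other variable.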

Using Voiceflow functions to construct custom memory variables

Voiceflow functions let you build modular, reusable pieces of code inside the Voiceflow IDE. First, we create a new variable named custom_memory.

Defining a custom memory

We then define a new function that maps input variables for last_utterance, last_response, and our previous custom_memory value, appends them together, and outputs the result; a sketch of what this code can look like follows the image below. In this example, we are building an e-commerce agent.

Building a custom memory function
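
Here is a minimal sketch of what the function code can look like. It assumes the standard Voiceflow function template, where a default-exported main receives the mapped input variables and returns output variables along with a path; the Clerk/Customer labels and the success path name are illustrative choices for this e-commerce agent rather than required names.

export default async function main(args) {
  // Input variables mapped in the function step
  const { last_utterance, last_response, custom_memory } = args.inputVars;

  // Label the speakers for the e-commerce persona and append the latest exchange
  const newTurns = `Clerk: ${last_response}\nCustomer: ${last_utterance}`;
  const updatedMemory = custom_memory ? `${custom_memory}\n${newTurns}` : newTurns;

  return {
    // Mapped back to the custom_memory variable in the builder
    outputVars: {
      custom_memory: updatedMemory,
    },
    // Illustrative path name defined on the function
    next: {
      path: 'success',
    },
  };
}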

Using a custom memory function and having a conversation

And that's how to use functions to build custom memory! For more information on functions, check out the documentation here.

Using the API to update memory variables

Another approach to building memory is to use the Dialog Manager API.

📘 Use this approach when you have custom logic orchestrated outside of Voiceflow or need to enrich the data.

In our example, we'll make the memory e-commerce-specific and inject additional context from our website. We set the custom_memory variable in the variables section of the request:

import requests

# Update the stored state for a given user (replace userID with your user's ID)
url = "https://general-runtime.voiceflow.com/state/user/userID"

payload = {
    "stack": [
        {
            "programID": "6062631246b44d80a8a345b4",
            "diagramID": "653fb8df7d32ab70457438f4",
            "nodeID": "60626307fd9a230006a5e289"
        }
    ],
    "variables": {
        "custom_memory": """Clerk: Were there any mugs you were looking at?
        Customer: Yes i liked the blue colour but the style of the rex mug.
        Additional information: The Customer clicked on two items the light blue origin mug and the green rex mug.""",
    }
}
headers = {
    "accept": "application/json",
    "content-type": "application/json",
    # Dialog Manager API key for your agent
    "Authorization": "YOUR_DM_API_KEY"
}

response = requests.put(url, json=payload, headers=headers)

print(response.text)

For more details on the Dialog Manager API, check out our API docs.