Upload a structured document to the knowledge base.

Request:

curl --request POST \
  --url https://realtime-api.voiceflow.com/v1alpha1/public/knowledge-base/document/upload/table \
  --header 'Content-Type: application/json' \
  --header 'authorization: <api-key>' \
  --data '
{
  "data": {
    "name": "<string>",
    "items": [
      {}
    ],
    "schema": {
      "searchableFields": [
        "<string>"
      ],
      "metadataFields": [
        "<string>"
      ]
    }
  }
}'

Response:

{
  "data": {
    "documentID": "<string>",
    "data": {
      "type": "csv",
      "name": "<string>",
      "rowsCount": 123
    },
    "updatedAt": "2023-11-07T05:31:56Z",
    "status": {
      "type": "ERROR",
      "data": "<unknown>"
    }
  },
  "chunks": [
    {
      "chunkID": "<string>",
      "content": "<string>",
      "metadata": {}
    }
  ],
  "metadata": [
    {
      "key": "<string>",
      "values": [
        "<string>"
      ]
    }
  ]
}
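The same call can be sketched from code. This example (Python standard library only) builds the request body shown in the curl example and posts it with the documented `authorization` header; the table name, items, and API key are placeholders, and `build_table_payload`/`upload_table` are illustrative helper names, not part of the API.

```python
import json
import urllib.request

API_URL = "https://realtime-api.voiceflow.com/v1alpha1/public/knowledge-base/document/upload/table"

def build_table_payload(name, items, searchable_fields, metadata_fields):
    """Assemble the request body shown in the curl example above."""
    return {
        "data": {
            "name": name,
            "items": items,
            "schema": {
                "searchableFields": searchable_fields,
                "metadataFields": metadata_fields,
            },
        }
    }

def upload_table(api_key, payload):
    # The 'authorization' header carries a VF.DM or VF.WS key, per the docs.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"authorization": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = build_table_payload(
    name="faq-table",
    items=[{"question": "What are your hours?", "answer": "9-5 on weekdays"}],
    searchable_fields=["question", "answer"],
    metadata_fields=[],
)
# upload_table("<api-key>", payload)  # requires a valid API key
```

Each entry in `items` is one table row; `searchableFields` names the columns that are embedded for retrieval, while `metadataFields` names columns kept only as chunk metadata.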
authorization — Voiceflow Dialog Manager API key (VF.DM) or Workspace API key (VF.WS).
If set to true, the existing table with the same name will be overwritten.
When enabled, HTML is automatically converted to markdown to generate better chunks.
When enabled, an LLM will be used to generate a question based on the document context and specific chunk, then prepend it to the chunk. This enhances retrieval by aligning chunks with potential user queries.
When enabled, an LLM summarizes and rewrites the content, removing unnecessary information and focusing on important parts to optimize for retrieval. Limited to 15 rows per table upload.
When enabled, an LLM generates a context summary based on the document and chunk context, and prepends it to each chunk. This improves retrieval by providing additional context to each chunk. Note: If both llmGeneratedQ and llmPrependContext are set to true, llmGeneratedQ takes precedence, and the context summarization will not be applied.
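The precedence rule between the two LLM options can be summed up in a tiny helper. This is a sketch for illustration only; the function and return values are hypothetical, but the branching follows the documented behavior of llmGeneratedQ and llmPrependContext.

```python
def effective_chunk_enrichment(llm_generated_q: bool, llm_prepend_context: bool) -> str:
    """Return which enrichment applies, per the documented precedence."""
    if llm_generated_q:
        # llmGeneratedQ wins even when llmPrependContext is also true;
        # context summarization is then not applied.
        return "generated-question"
    if llm_prepend_context:
        return "context-summary"
    return "none"
```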
name — A unique name identifying the table.
Success response — The document was uploaded successfully.
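Given the response shape shown above, a caller might inspect the upload status like this. The field names follow the documented response schema; `summarize_upload` is an illustrative helper, and the non-error formatting is an assumption about how a caller would report success.

```python
def summarize_upload(response: dict) -> str:
    """Build a human-readable summary from the documented response body."""
    doc = response["data"]
    status = doc["status"]["type"]
    if status == "ERROR":
        # status.data carries error details when the upload fails.
        return f"upload of {doc['data']['name']} failed: {doc['status'].get('data')}"
    return f"{doc['documentID']}: {doc['data']['rowsCount']} rows ({status})"

# Sample mirroring the documented response fields (values are placeholders).
sample = {
    "data": {
        "documentID": "doc-1",
        "data": {"type": "csv", "name": "faq-table", "rowsCount": 123},
        "updatedAt": "2023-11-07T05:31:56Z",
        "status": {"type": "ERROR", "data": "<unknown>"},
    },
    "chunks": [],
    "metadata": [],
}
print(summarize_upload(sample))  # upload of faq-table failed: <unknown>
```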