Upload table document

POST /v1alpha1/public/knowledge-base/document/upload/table

Example request
curl --request POST \
  --url https://realtime-api.voiceflow.com/v1alpha1/public/knowledge-base/document/upload/table \
  --header 'Content-Type: application/json' \
  --header 'authorization: <api-key>' \
  --data '
{
  "data": {
    "name": "<string>",
    "items": [
      {}
    ],
    "schema": {
      "searchableFields": [
        "<string>"
      ],
      "metadataFields": [
        "<string>"
      ]
    }
  }
}
'
Example response

{
  "data": {
    "documentID": "<string>",
    "data": {
      "type": "csv",
      "name": "<string>",
      "rowsCount": 123
    },
    "updatedAt": "2023-11-07T05:31:56Z",
    "status": {
      "type": "ERROR",
      "data": "<unknown>"
    }
  },
  "chunks": [
    {
      "chunkID": "<string>",
      "content": "<string>",
      "metadata": {}
    }
  ],
  "metadata": [
    {
      "key": "<string>",
      "values": [
        "<string>"
      ]
    }
  ]
}
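For reference, the same request can be assembled in Python. The helper below is a sketch, not part of any official SDK: it only builds the URL, headers, and JSON body shown in the curl example above, and does not send the request.

```python
import json
from urllib.parse import urlencode

BASE_URL = "https://realtime-api.voiceflow.com/v1alpha1/public/knowledge-base/document/upload/table"

def build_table_upload(api_key, name, items, searchable_fields,
                       metadata_fields=None, **query_params):
    """Assemble the URL, headers, and JSON body for the table upload endpoint.

    Optional query parameters (e.g. overwrite="true") are passed as keyword
    arguments and appended to the URL as a query string.
    """
    url = BASE_URL
    if query_params:
        url += "?" + urlencode(query_params)
    headers = {
        "Content-Type": "application/json",
        "authorization": api_key,  # VF.DM or VF.WS key, per the Authorizations section
    }
    body = json.dumps({
        "data": {
            "name": name,
            "items": items,
            "schema": {
                "searchableFields": searchable_fields,
                "metadataFields": metadata_fields or [],
            },
        }
    })
    return url, headers, body
```

The returned triple can then be handed to any HTTP client (for example `urllib.request` or `requests`) to perform the actual POST.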

Authorizations

authorization
string
header
required

Voiceflow Dialog Manager API key (VF.DM) or Workspace API key (VF.WS)

Query Parameters

overwrite

If set to true, the existing table with the same name will be overwritten.

markdownConversion

When enabled, HTML is automatically converted to markdown to generate better chunks.

llmBasedChunks

llmGeneratedQ

When enabled, an LLM will be used to generate a question based on the document context and the specific chunk, then prepend it to the chunk. This enhances retrieval by aligning chunks with potential user queries.

llmContentSummarization

When enabled, an LLM summarizes and rewrites the content, removing unnecessary information and focusing on important parts to optimize for retrieval. Limited to 15 rows per table upload.

llmPrependContext

When enabled, an LLM generates a context summary based on the document and chunk context, and prepends it to each chunk. This improves retrieval by providing additional context to each chunk. Note: If both llmGeneratedQ and llmPrependContext are set to true, llmGeneratedQ takes precedence, and the context summarization will not be applied.

llmVision
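As a sketch of how these flags map onto the query string, the helper below serializes only the enabled flags as lowercase booleans. It is an illustrative assumption, not official client code; note that the llmGeneratedQ-over-llmPrependContext precedence described above is applied server-side, so both flags are sent as given.

```python
from urllib.parse import urlencode

def table_upload_query(overwrite=False, markdown_conversion=False,
                       llm_generated_q=False, llm_content_summarization=False,
                       llm_prepend_context=False):
    """Build the query string for the table upload endpoint from boolean flags.

    Only enabled flags are included, serialized as the string "true".
    """
    flags = {
        "overwrite": overwrite,
        "markdownConversion": markdown_conversion,
        "llmGeneratedQ": llm_generated_q,
        "llmContentSummarization": llm_content_summarization,
        "llmPrependContext": llm_prepend_context,
    }
    return urlencode({name: "true" for name, enabled in flags.items() if enabled})
```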

Body

application/json
data
object
required

Response

200 - application/json

The document was uploaded successfully.

data
object
required
chunks
object[]
metadata
object[]
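The response fields above can be unpacked with a small helper like the following. This is a sketch against the example body shown earlier; the only status type listed in this excerpt is ERROR, so treating any other type as non-error is an assumption.

```python
def summarize_upload_response(resp):
    """Extract the document ID, status, row count, and chunk count from a
    parsed table-upload response body.

    Raises RuntimeError when the document status type is ERROR.
    """
    doc = resp["data"]
    status = doc["status"]["type"]
    if status == "ERROR":
        raise RuntimeError(f"upload failed: {doc['status'].get('data')}")
    return {
        "documentID": doc["documentID"],
        "status": status,
        "rows": doc["data"].get("rowsCount"),
        "chunks": len(resp.get("chunks", [])),
    }
```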