Upload or provide a URL to scrape and import as a knowledge base document.

Request:

curl --request POST \
  --url https://realtime-api.voiceflow.com/v1alpha1/public/knowledge-base/document \
  --header 'Content-Type: multipart/form-data' \
  --header 'authorization: <api-key>' \
  --form 'data={
    "type": "url",
    "url": "<string>",
    "metadata": {},
    "name": "<string>",
    "refreshRate": "daily",
    "lastSuccessUpdate": "<string>",
    "accessTokenID": 123,
    "integrationExternalID": "<string>",
    "source": "zendesk"
  }'

Example response:

{
  "data": {
    "documentID": "<string>",
    "data": {
      "type": "csv",
      "name": "<string>",
      "rowsCount": 123
    },
    "updatedAt": "2023-11-07T05:31:56Z",
    "status": {
      "type": "ERROR",
      "data": "<unknown>"
    }
  }
}
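The response above includes a documentID that identifies the imported document in later operations. A minimal sketch of extracting it from the shell, using a sample response that matches the documented shape (all values below are hypothetical, not real API output):

```shell
# A sample response matching the documented shape (values are hypothetical).
response='{"data":{"documentID":"doc_abc123","data":{"type":"url","name":"FAQ page"},"updatedAt":"2023-11-07T05:31:56Z","status":{"type":"SUCCESS"}}}'

# Extract data.documentID with python3 (avoids a jq dependency).
documentID=$(printf '%s' "$response" \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["data"]["documentID"])')

echo "$documentID"   # prints doc_abc123
```

In a real script, $response would be the captured output of the curl call shown above.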
The authorization header accepts a Voiceflow Dialog Manager API key (VF.DM) or Workspace API key (VF.WS).
Determines how granularly each document is broken up. The available range is 500-1500 tokens; the default is 1000. A smaller chunk size means narrower context, faster responses, fewer tokens consumed, and a greater risk of less accurate answers. Max chunk size affects the total number of chunks parsed from a document, i.e., larger chunks mean fewer chunks retrieved.
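To make that tradeoff concrete, a back-of-the-envelope sketch: for a document of roughly 12,000 tokens (a figure chosen purely for illustration), the approximate chunk count at each end of the allowed range is:

```shell
doc_tokens=12000

# Approximate number of chunks at the smallest and largest allowed chunk sizes.
echo $((doc_tokens / 500))    # chunk size 500  -> 24 chunks
echo $((doc_tokens / 1500))   # chunk size 1500 -> 8 chunks
```

Each chunk is retrieved and ranked independently, so tripling the chunk size here cuts the number of retrievable units from 24 to 8.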
If set to true, the existing table with the same name will be overwritten.
When enabled, HTML is automatically converted to markdown to generate better chunks.
When enabled, an LLM will be used to generate a question based on the document context and specific chunk, then prepend it to the chunk. This enhances retrieval by aligning chunks with potential user queries.
When enabled, an LLM summarizes and rewrites the content, removing unnecessary information and focusing on important parts to optimize for retrieval. Limited to 15 rows per table upload.
When enabled, an LLM generates a context summary based on the document and chunk context, and prepends it to each chunk. This improves retrieval by providing additional context to each chunk. Note: If both llmGeneratedQ and llmPrependContext are set to true, llmGeneratedQ takes precedence, and the context summarization will not be applied.
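Options like these are typically passed as query parameters on the upload URL. The parameter names below (maxChunkSize, overwrite, markdownConversion) are assumptions inferred from the descriptions above, so verify them against the endpoint's full parameter list before use:

```shell
base='https://realtime-api.voiceflow.com/v1alpha1/public/knowledge-base/document'

# Assumed query-parameter names; verify against the parameter list above.
url="${base}?maxChunkSize=1000&overwrite=true&markdownConversion=true"

echo "$url"
```

The resulting URL would replace the bare endpoint in the curl example, leaving the headers and form body unchanged.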
Allowed values: type: url; refreshRate: daily, weekly, monthly, never; source: zendesk, shopify.
Success response: The document was created successfully.