Commit 186246a · Parent: 6ec0dc3
writinwaters committed: Updated outdated descriptions and added multi-turn optimization (#4362)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Documentation Update
- docs/guides/start_chat.md +25 -5
- docs/references/agent_component_reference/_category_.json +1 -1
- docs/references/agent_component_reference/begin.mdx +2 -2
- docs/references/faq.md +14 -2
- docs/references/http_api_reference.md +1 -1
- docs/references/python_api_reference.md +1 -1
- docs/references/supported_models.mdx +4 -0
docs/guides/start_chat.md
CHANGED

```diff
@@ -5,6 +5,10 @@ slug: /start_chat
 
 # Start an AI-powered chat
 
 Knowledge base, hallucination-free chat, and file management are the three pillars of RAGFlow. Chats in RAGFlow are based on a particular knowledge base or multiple knowledge bases. Once you have created your knowledge base and finished file parsing, you can go ahead and start an AI conversation.
 
 ## Start an AI chat
@@ -21,16 +25,23 @@ You start an AI conversation by creating an assistant.
    - **Empty response**:
      - If you wish to *confine* RAGFlow's answers to your knowledge bases, leave a response here. Then, when it doesn't retrieve an answer, it *uniformly* responds with what you set here.
      - If you wish RAGFlow to *improvise* when it doesn't retrieve an answer from your knowledge bases, leave it blank, which may give rise to hallucinations.
-   - **Show quote**: This is a key feature of RAGFlow and enabled by default. RAGFlow does not work like a black box.
    - Select the corresponding knowledge bases. You can select one or multiple knowledge bases, but ensure that they use the same embedding model; otherwise an error would occur.
 
 3. Update **Prompt Engine**:
 
    - In **System**, you fill in the prompts for your LLM; you can also leave the default prompt as-is for the beginning.
    - **Similarity threshold** sets the similarity "bar" for each chunk of text. The default is 0.2. Text chunks with lower similarity scores are filtered out of the final response.
-   - **
    - **Top N** determines the *maximum* number of chunks to feed to the LLM. In other words, even if more chunks are retrieved, only the top N chunks are provided as input.
-   - **Variable** …to be used in the system prompt. `{knowledge}` is a reserved variable. Click **Add** to add more variables for the system prompt.
+   - If you are uncertain about the logic behind **Variable**, leave it *as-is*.
 
 4. Update **Model Setting**:
@@ -42,9 +53,18 @@ You start an AI conversation by creating an assistant.
 5. Now, let's start the show:
 
+   ![show-and-tell]()
+
+:::tip NOTE
+Click the light bulb logo above the answer to view the expanded system prompt:
+
+   ![]()
+
+Scroll down the expanded prompt to view the time consumed for each task:
+
+   ![enlighten]()
+:::
 
 ## Update settings of an existing chat assistant
```
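The interaction between **Similarity threshold** and **Top N** in the prompt-engine steps above can be sketched as a two-stage filter. This is a minimal illustration, not RAGFlow's actual retrieval code; the `Chunk` structure and scores are made up for the example:

```python
# Sketch of how "Similarity threshold" and "Top N" interact during retrieval.
# The Chunk type and scores are illustrative, not RAGFlow's internal types.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    similarity: float  # retriever score in [0, 1]

def select_chunks(chunks, threshold=0.2, top_n=3):
    """Drop chunks below the similarity threshold, then keep at most top_n."""
    kept = [c for c in chunks if c.similarity >= threshold]
    kept.sort(key=lambda c: c.similarity, reverse=True)
    return kept[:top_n]

chunks = [
    Chunk("A", 0.9), Chunk("B", 0.1), Chunk("C", 0.5),
    Chunk("D", 0.3), Chunk("E", 0.25),
]
selected = select_chunks(chunks, threshold=0.2, top_n=3)
# B falls below the threshold; of the four survivors only the top 3 remain.
print([c.text for c in selected])  # ['A', 'C', 'D']
```

Raising the threshold trades recall for precision, while Top N caps the prompt budget regardless of how many chunks pass the bar.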
docs/references/agent_component_reference/_category_.json
CHANGED

```diff
@@ -1,5 +1,5 @@
 {
-  "label": "Agent
+  "label": "Agent Components",
   "position": 3,
   "link": {
     "type": "generated-index",
```
docs/references/agent_component_reference/begin.mdx
CHANGED

```diff
@@ -29,7 +29,7 @@ An opening greeting is the agent's first message to the user. It can be a welcom
 
 ### Global variables
 
-You can set global variables within the **Begin** component, which can be either required or optional. Once established, users will need to provide values for these variables when interacting or with the agent. Click **+ Add variable** to add a global variable, each with the following attributes:
+You can set global variables within the **Begin** component, which can be either required or optional. Once established, users will need to provide values for these variables when interacting or chatting with the agent. Click **+ Add variable** to add a global variable, each with the following attributes:
 
 - **Key**: *Required*
   The unique variable name.
@@ -41,7 +41,7 @@ You can set global variables within the **Begin** component, which can be either
 - **line**: Accepts a single line of text without line breaks.
 - **paragraph**: Accepts multiple lines of text, including line breaks.
 - **options**: Requires the user to select a value for this variable from a dropdown menu. And you are required to set *at least* one option for the dropdown menu.
-- **file**: Requires the user to upload
+- **file**: Requires the user to upload one or multiple files from their device.
 - **integer**: Accepts an integer as input.
 - **boolean**: Requires the user to toggle between on and off.
 - **Optional**: A toggle indicating whether the variable is optional.
```
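The variable types listed in the begin.mdx diff above imply straightforward value checks. The following is a hypothetical sketch of that validation; the type names mirror the docs, but the function and its rules are illustrative, not RAGFlow's implementation:

```python
# Illustrative validation for Begin-component global variable types.
# Type names follow the docs; the checking logic itself is an assumption.
def validate_variable(var_type, value):
    if var_type == "line":
        # single line of text, no line breaks
        return isinstance(value, str) and "\n" not in value
    if var_type == "paragraph":
        # multiple lines allowed
        return isinstance(value, str)
    if var_type == "options":
        # value = (preset dropdown options, chosen option);
        # at least one option must exist and the choice must be among them
        options, chosen = value
        return len(options) >= 1 and chosen in options
    if var_type == "file":
        # one or multiple uploaded files (here: a non-empty list of names)
        return isinstance(value, list) and len(value) >= 1
    if var_type == "integer":
        # an integer, but not a bool (bool is an int subclass in Python)
        return isinstance(value, int) and not isinstance(value, bool)
    if var_type == "boolean":
        return isinstance(value, bool)
    return False

print(validate_variable("line", "hello"))               # True
print(validate_variable("line", "two\nlines"))          # False
print(validate_variable("options", (["a", "b"], "a")))  # True
```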
docs/references/faq.md
CHANGED

```diff
@@ -81,9 +81,13 @@ No, this feature is not supported.
 
 ---
 
-### Do you support multiple rounds of dialogues,
+### Do you support multiple rounds of dialogues, referencing previous dialogues as context for the current query?
+
+Yes, we support enhancing user queries based on existing context of an ongoing conversation:
+
+1. On the **Chat** page, hover over the desired assistant and select **Edit**.
+2. In the **Chat Configuration** popup, click the **Prompt Engine** tab.
+3. Toggle on **Multi-turn optimization** to enable this feature.
 
 ---
@@ -388,6 +392,14 @@ You can use Ollama or Xinference to deploy local LLM. See [here](../guides/deplo
 
 ---
 
+### Is it possible to add an LLM that is not supported?
+
+If your model is not currently supported but has APIs compatible with those of OpenAI, click **OpenAI-API-Compatible** on the **Model providers** page to configure your model:
+
+![openai-api-compatible]()
+
+---
+
 ### How to interconnect RAGFlow with Ollama?
 
 - If RAGFlow is locally deployed, ensure that your RAGFlow and Ollama are in the same LAN.
```
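The multi-turn optimization described in the FAQ entry above rewrites a follow-up question into a standalone query using the conversation so far. RAGFlow delegates the rewrite itself to the LLM; the sketch below only shows the idea by building a rewrite prompt, and the function name and prompt wording are illustrative assumptions:

```python
# Idea behind multi-turn optimization: condense a context-dependent follow-up
# into a standalone question. Here we only construct the rewrite prompt; the
# actual rewriting would be done by an LLM. Names and wording are illustrative.
def build_rewrite_prompt(history, follow_up):
    turns = "\n".join(f"{role}: {text}" for role, text in history)
    return (
        "Given the conversation below, rewrite the final user question so it "
        "can be understood without the conversation.\n\n"
        f"{turns}\nUSER: {follow_up}\n\nStandalone question:"
    )

history = [
    ("USER", "What is RAGFlow?"),
    ("ASSISTANT", "RAGFlow is an open-source RAG engine."),
]
prompt = build_rewrite_prompt(
    history, "Does it support multiple rounds of dialogue?"
)
print(prompt)
```

The rewritten standalone question ("Does RAGFlow support multiple rounds of dialogue?") is then what gets embedded and matched against the knowledge base, so earlier turns can resolve pronouns like "it".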
docs/references/http_api_reference.md
CHANGED

```diff
@@ -3,7 +3,7 @@ sidebar_position: 1
 slug: /http_api_reference
 ---
 
-# HTTP API
+# HTTP API
 
 A complete reference for RAGFlow's RESTful API. Before proceeding, please ensure you [have your RAGFlow API key ready for authentication](https://ragflow.io/docs/dev/acquire_ragflow_api_key).
```
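The HTTP API reference above requires an API key for authentication, which RAGFlow's REST API accepts as a standard `Authorization: Bearer` header. A minimal sketch of preparing such a request with the Python standard library follows; the host, endpoint path, and payload are placeholders, not a documented route:

```python
# Sketch of attaching a RAGFlow API key to an HTTP request via the Bearer
# scheme. Host, path, and body are placeholders, not a documented endpoint.
import json
import urllib.request

API_KEY = "ragflow-xxxxxx"  # placeholder; use your real key
req = urllib.request.Request(
    url="http://127.0.0.1/api/v1/example",  # illustrative endpoint
    data=json.dumps({"question": "What is RAGFlow?"}).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)
# The request object is fully prepared; urllib.request.urlopen(req) would send it.
print(req.get_header("Authorization"))  # Bearer ragflow-xxxxxx
```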
docs/references/python_api_reference.md
CHANGED

```diff
@@ -3,7 +3,7 @@ sidebar_position: 2
 slug: /python_api_reference
 ---
 
-# Python API
+# Python API
 
 A complete reference for RAGFlow's Python APIs. Before proceeding, please ensure you [have your RAGFlow API key ready for authentication](https://ragflow.io/docs/dev/acquire_ragflow_api_key).
```
docs/references/supported_models.mdx
CHANGED

````diff
@@ -62,6 +62,10 @@ A complete list of models supported by RAGFlow, which will continue to expand.
 </APITable>
 ```
 
+:::danger IMPORTANT
+If your model is not listed here but has APIs compatible with those of OpenAI, click **OpenAI-API-Compatible** on the **Model providers** page to configure your model.
+:::
+
 :::note
 The list of supported models is extracted from [this source](https://github.com/infiniflow/ragflow/blob/main/rag/llm/__init__.py) and may not be the most current. For the latest supported model list, please refer to the Python file.
 :::
````
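The **OpenAI-API-Compatible** option added in the supported_models.mdx diff works as a catch-all because any model served behind an OpenAI-style chat-completions route accepts the same request shape. A sketch of that shared payload follows; the model name is a placeholder for whatever you register, and only the payload is built here, nothing is sent:

```python
# Why "OpenAI-API-Compatible" is a catch-all: providers exposing an
# OpenAI-style /chat/completions route all accept this request shape.
# The model name is a placeholder; no request is actually sent.
import json

payload = {
    "model": "my-local-model",  # the name you configure for your provider
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
}
body = json.dumps(payload)
# POSTing this body to <base_url>/chat/completions is all such a provider needs.
print(sorted(payload.keys()))  # ['messages', 'model', 'temperature']
```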