Zhenting Gao
Yingfeng
committed on
Commit · 318a7e5
1 Parent(s): 66012be

Add an entry in Debugging section (#481)

### What problem does this PR solve?

_Add an entry in Debugging section._

### Type of change

- [x] Documentation Update

---------

Co-authored-by: Yingfeng <[email protected]>

docs/faq.md +12 -4

docs/faq.md CHANGED
@@ -78,7 +78,16 @@ docker build -t infiniflow/ragflow:v0.3.0 . --network host
 
 ### 2. Issues with huggingface models.
 
-#### 2.1
+#### 2.1 If https://huggingface.co cannot be accessed
+- If RAGFlow is installed by Docker, it will automatically download the OCR and embedding modules from the Hugging Face website (https://huggingface.co).
+- If your computer cannot access https://huggingface.co, the following error will appear and PDF file parsing will fail:
+  - FileNotFoundError: [Errno 2] No such file or directory: '/root/.cache/huggingface/hub/models--InfiniFlow--deepdoc/snapshots/be0c1e50eef6047b412d1800aa89aba4d275f997/ocr.res'
+- If your computer can access https://hf-mirror.com:
+  - cd ragflow-0.3.0/docker/; docker compose down
+  - Replace https://huggingface.co with https://hf-mirror.com in ragflow-0.3.0/docker/docker-compose.yml
+  - docker compose up -d
+
+#### 2.2 `MaxRetryError: HTTPSConnectionPool(host='hf-mirror.com', port=443)`
 
 This error suggests that you do not have Internet access or are unable to connect to hf-mirror.com. Try the following:
 
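A minimal sketch of the hf-mirror.com swap described in the new section 2.1 above, assuming the stock ragflow-0.3.0/docker/docker-compose.yml references https://huggingface.co directly; sed is used here only for illustration, and the backup file name is arbitrary:

```bash
cd ragflow-0.3.0/docker/
docker compose down                            # stop the running stack first
cp docker-compose.yml docker-compose.yml.bak   # keep a backup of the original
# Point every Hugging Face URL at the mirror instead.
sed -i 's|https://huggingface.co|https://hf-mirror.com|g' docker-compose.yml
docker compose up -d                           # recreate the containers with the mirror
```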
@@ -88,7 +97,7 @@ This error suggests that you do not have Internet access or are unable to connec
   - ~/deepdoc:/ragflow/rag/res/deepdoc
 ```
 
-#### 2.
+#### 2.3 `FileNotFoundError: [Errno 2] No such file or directory: '/root/.cache/huggingface/hub/models--InfiniFlow--deepdoc/snapshots/be0c1e50eef6047b412d1800aa89aba4d275f997/ocr.res'`
 
 1. Check your network from within Docker, for example:
 ```bash
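For context on the `~/deepdoc:/ragflow/rag/res/deepdoc` mount shown in this hunk: the models have to exist on the host before they can be mounted. A hedged sketch, where the InfiniFlow/deepdoc repo id is inferred from the cache path in the error above, and huggingface-cli is assumed to be installed (pip install -U huggingface_hub):

```bash
# Pre-download the deepdoc models on a machine that can reach the hub (or the mirror),
# then mount ~/deepdoc into the container as the compose excerpt above shows.
export HF_ENDPOINT=https://hf-mirror.com        # optional: route the download through the mirror
huggingface-cli download InfiniFlow/deepdoc --local-dir ~/deepdoc
```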
@@ -165,7 +174,7 @@ If your RAGFlow is deployed *locally*, try the following:
 ```bash
 docker logs -f ragflow-server
 ```
-2. Check if the **
+2. Check if the **task_executor.py** process exists.
 3. Check if your RAGFlow server can access hf-mirror.com or huggingface.com.
 
 
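The two checks touched by this hunk can be run from the host; a hedged example, assuming the ragflow-server container name used above and that ps and curl are available inside the image:

```bash
# 2. Is the task_executor.py process alive inside the container?
docker exec ragflow-server ps -ef | grep -v grep | grep task_executor

# 3. Can the server reach the model hosts? (print the HTTP status line only)
docker exec ragflow-server curl -sI https://hf-mirror.com | head -n 1
docker exec ragflow-server curl -sI https://huggingface.co | head -n 1
```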
@@ -304,7 +313,6 @@ You limit what the system responds to what you specify in **Empty response** if
 
 
 
-
 ### 4. How to run RAGFlow with a locally deployed LLM?
 
 You can use Ollama to deploy local LLM. See [here](https://github.com/infiniflow/ragflow/blob/main/docs/ollama.md) for more information.
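Since the last hunk points at the Ollama guide, a minimal sketch of the local-LLM path it describes, assuming Ollama is already installed on the host; the model name is illustrative:

```bash
ollama pull llama2                       # fetch a model (any supported model name works)
ollama serve &                           # expose the Ollama API, by default on http://localhost:11434
curl http://localhost:11434/api/tags     # confirm the API is reachable before pointing RAGFlow at it
```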