jinhai-2012 and writinwaters committed
Commit 5a552df
1 Parent(s): bfeb66a

Update document (#3746)


### What problem does this PR solve?

Fix the description of the local LLM deployment case

### Type of change

- [x] Documentation Update

---------

Signed-off-by: jinhai <[email protected]>
Co-authored-by: writinwaters <[email protected]>

Files changed (1)
  1. docs/guides/deploy_local_llm.mdx +2 -2
docs/guides/deploy_local_llm.mdx CHANGED

@@ -74,9 +74,9 @@ In the popup window, complete basic settings for Ollama:
 4. OPTIONAL: Switch on the toggle under **Does it support Vision?** if your model includes an image-to-text model.
 
 :::caution NOTE
+- If RAGFlow is in Docker and Ollama runs on the same host machine, use `http://host.docker.internal:11434` as base URL.
 - If your Ollama and RAGFlow run on the same machine, use `http://localhost:11434` as base URL.
-- If your Ollama and RAGFlow run on the same machine and Ollama is in Docker, use `http://host.docker.internal:11434` as base URL.
-- If your Ollama runs on a different machine from RAGFlow, use `http://<IP_OF_OLLAMA_MACHINE>:11434` as base URL.
+- If your Ollama runs on a different machine from RAGFlow, use `http://<IP_OF_OLLAMA_MACHINE>:11434` as base URL.
 :::
 
 :::danger WARNING
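The corrected note comes down to where name resolution happens: a RAGFlow instance running inside a container resolves `localhost` to the container itself, not to the host where Ollama listens. As a quick sanity check, here is a minimal sketch (not part of this PR; the candidate URLs simply mirror the note above, and the commented-out `<IP_OF_OLLAMA_MACHINE>` placeholder is yours to fill in) that probes each candidate base URL and reports which one an Ollama server answers on.

```python
# Minimal sketch: probe candidate Ollama base URLs from wherever RAGFlow runs.
# The candidate list mirrors the note in the doc; uncomment and fill in the
# last entry if Ollama lives on a separate machine.
import urllib.error
import urllib.request

CANDIDATES = [
    "http://localhost:11434",             # Ollama in the same network namespace as RAGFlow
    "http://host.docker.internal:11434",  # RAGFlow in Docker, Ollama on the Docker host
    # "http://<IP_OF_OLLAMA_MACHINE>:11434",  # Ollama on a different machine
]

def ollama_answers(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if something answers HTTP 200 at base_url (Ollama replies to GET /)."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError, ValueError):
        return False

if __name__ == "__main__":
    for url in CANDIDATES:
        print(f"{url}: {'reachable' if ollama_answers(url) else 'unreachable'}")
```

Run the probe from the environment RAGFlow actually executes in (for a Dockerized setup, e.g. via `docker exec` into the RAGFlow container, assuming Python is available there) so that it sees the same network namespace RAGFlow does.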