RebeccaQian1 committed
Commit 88f2d95
Parent: c1021ea

Update README.md

Files changed (1): README.md (+8 -4)

README.md
@@ -2,14 +2,18 @@
  base_model: PatronusAI/Patronus-Lynx-8B-Instruct
  library_name: transformers
  tags:
- - llama-cpp
- - gguf-my-repo
+ - patronus
+ - hallucination detection
+ - llama 3
+ license: cc-by-nc-4.0
  ---

  # PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct-Q4_K_M-GGUF
- This model was converted to GGUF format from [`PatronusAI/Patronus-Lynx-8B-Instruct`](https://huggingface.co/PatronusAI/Patronus-Lynx-8B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+ This model is a quantized version of [`PatronusAI/Patronus-Lynx-8B-Instruct`](https://huggingface.co/PatronusAI/Patronus-Lynx-8B-Instruct).
  Refer to the [original model card](https://huggingface.co/PatronusAI/Patronus-Lynx-8B-Instruct) for more details on the model.

+ License: [https://creativecommons.org/licenses/by-nc/4.0/](https://creativecommons.org/licenses/by-nc/4.0/)
+
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)

@@ -48,4 +52,4 @@ Step 3: Run inference through the main binary.
  or
  ```
  ./llama-server --hf-repo PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct-Q4_K_M-GGUF --hf-file patronus-lynx-8b-instruct-q4_k_m.gguf -c 2048
- ```
+ ```
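
For reference, the llama.cpp steps the README points to (install through brew, then run the CLI or server binary) look roughly like the sketch below. Only the `llama-server` command appears in the hunks above; the `brew install` and `llama-cli` invocations follow the standard GGUF-my-repo instructions, and the prompt and curl payload are illustrative placeholders rather than part of this commit.

```bash
# Install llama.cpp through Homebrew (macOS and Linux), as the README suggests.
brew install llama.cpp

# Run inference through the CLI binary; --hf-repo/--hf-file mirror the
# llama-server invocation shown in the diff. The prompt is a placeholder.
llama-cli --hf-repo PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct-Q4_K_M-GGUF \
  --hf-file patronus-lynx-8b-instruct-q4_k_m.gguf \
  -p "Is the answer below faithful to the given document?"

# Or start the server as in the diff and query it over HTTP
# (llama-server listens on port 8080 by default).
./llama-server --hf-repo PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct-Q4_K_M-GGUF \
  --hf-file patronus-lynx-8b-instruct-q4_k_m.gguf -c 2048 &
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Is the answer faithful to the document?"}]}'
```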