Text Generation
Russian
conversational
IlyaGusev committed
Commit 546dfbe
2 parents: a404369 8a2eb76

Merge branch 'main' of https://huggingface.co/IlyaGusev/saiga_13b_lora_llamacpp into main

Files changed (1)
  1. README.md +4 -3
README.md CHANGED
@@ -12,15 +12,16 @@ pipeline_tag: text2text-generation
 Llama.cpp compatible versions of an original [13B model](https://huggingface.co/IlyaGusev/saiga_13b_lora).
 
 * Download one of the versions, for example `ggml-model-q4_1.bin`.
-* Download [interact_saiga_llamacpp.py]([https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/infer_saiga_llamacpp.py)
+* Download [interact_llamacpp.py](https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_llamacpp.py)
 
 How to run:
 ```
 sudo apt-get install git-lfs
 pip install llama-cpp-python fire
 
-python3 interact_saiga_llamacpp.py ggml-model-q4_1.bin
+python3 interact_llamacpp.py ggml-model-q4_1.bin
 ```
 
 System requirements:
-* 10GB RAM
+* 18GB RAM for q8_0
+* 13GB RAM for q4_1
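
The RAM figures added in this commit (18GB for q8_0, 13GB for q4_1) can be turned into a small selection helper. A minimal sketch, assuming the q8_0 file follows the same `ggml-model-*.bin` naming pattern as the q4_1 file mentioned in the README; the helper itself is illustrative and not part of the repository:

```python
# Stated RAM requirements from the README; the q8_0 filename is an assumption.
RAM_REQUIREMENTS_GB = {
    "ggml-model-q4_1.bin": 13,
    "ggml-model-q8_0.bin": 18,
}

def pick_model(available_ram_gb):
    """Return the highest-precision quantization that fits in RAM, or None."""
    fitting = [(need, name) for name, need in RAM_REQUIREMENTS_GB.items()
               if need <= available_ram_gb]
    # Larger RAM requirement corresponds to less aggressive quantization.
    return max(fitting)[0:2][1] if fitting else None

print(pick_model(16))  # a 16GB machine fits only the q4_1 file
```

A machine with 16GB of RAM would pick `ggml-model-q4_1.bin`, while 32GB allows the higher-precision q8_0 file.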