tpmccallum committed
Commit 8d06f06 · 1 Parent(s): 7093426

Update README.md

Files changed (1): README.md (+7 −4)
@@ -1,12 +1,15 @@
-This [GGML model](https://huggingface.co/tpmccallum/llama-2-13b-deep-haiku-GGML/blob/main/llama-2-13b-deep-haiku.ggml.fp16.bin) was generated by following the Collab environments at https://github.com/robgon-art/DeepHaiku-LLaMa
+---
+license: cc-by-4.0
+---
+The [GGML model](https://huggingface.co/tpmccallum/llama-2-13b-deep-haiku-GGML/blob/main/llama-2-13b-deep-haiku.ggml.fp16.bin) contained herein was generated by following the step-by-step process in the Colab environments at https://github.com/robgon-art/DeepHaiku-LLaMa.
 
-The only change (aside from adding usernames and access tokens) was to use the following code (note the downgrading of `llama.cpp` using `checkout cf348a6` below; this is so that the Collab created the older GGML version of the model instead of the newer GGUF version) in the [2_Deep_Haiku_Quantize_Model_to_GGML](https://github.com/robgon-art/DeepHaiku-LLaMa/blob/main/2_Deep_Haiku_Quantize_Model_to_GGML.ipynb) part of the process:
+> **NOTE:** This tutorial uses [Meta AI's Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) as its starting point (before the quantizing process above is performed). You will therefore need to visit [Meta's Llama webpage](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and agree to Meta's License, Acceptable Use Policy, and privacy policy before fetching and using Llama models.
+
+> **TIP:** The only change (aside from adding usernames and access tokens) was to substitute the following code in the [2_Deep_Haiku_Quantize_Model_to_GGML](https://github.com/robgon-art/DeepHaiku-LLaMa/blob/main/2_Deep_Haiku_Quantize_Model_to_GGML.ipynb) step. Note the downgrade of `llama.cpp` via `git checkout cf348a6`; this pins a revision that makes the Colab produce the older GGML model format instead of the newer GGUF format:
 
 ```
 !rm -rf llama.cpp
 !git clone https://github.com/ggerganov/llama.cpp
 !cd llama.cpp && git pull && git checkout cf348a6 && make clean && LLAMA_CUBLAS=1 make
 !pip install numpy==1.23
-!pip install sentencepiece==0.1.98
-!pip install gguf>=0.1.0
 ```
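For context, a minimal sketch of how the resulting GGML file might be run locally with the `main` binary that the pinned `llama.cpp` build (commit cf348a6) produces. The model filename comes from the Hugging Face link above; the prompt, token count, and paths are illustrative assumptions, not part of the original notebooks:

```shell
# Sketch only: run the fp16 GGML model with the llama.cpp binary built
# above. Paths assume the build steps from this README were run in the
# current directory; adjust as needed.
MODEL=llama-2-13b-deep-haiku.ggml.fp16.bin

if [ -f "$MODEL" ] && [ -x ./llama.cpp/main ]; then
  # -m selects the model file, -p sets the prompt,
  # -n caps the number of generated tokens
  ./llama.cpp/main -m "$MODEL" -p "Topic: autumn rain." -n 64
else
  echo "Fetch the model and build llama.cpp (see steps above) first."
fi
```

Note that GGML-era builds of `llama.cpp` cannot load GGUF files (and vice versa), which is why the commit pin matters here.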