
llama2-7B-Chat-hf-8bit-guanaco-pico-finetuned

This is an 8-bit quantized, fine-tuned adapter (https://huggingface.co/manojkumarvohra/llama2-7B-Chat-8bit-guanaco-pico-adapter-hf) merged with the fp16 llama2-7b-chat-hf checkpoint. Fine-tuning was performed on a very small ("pico") Guanaco dataset, manojkumarvohra/guanaco-pico-100-samples (https://huggingface.co/datasets/manojkumarvohra/guanaco-pico-100-samples), containing 100 training samples and 20 validation samples. This model was created for learning purposes only and is not recommended for business use.
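
Since the adapter has already been merged into the base checkpoint, the model can be loaded like any standalone causal LM. Below is a minimal usage sketch with the transformers library; the repo id is assumed from this card's title and the Guanaco-style "### Human / ### Assistant" prompt format is an assumption, so adjust both as needed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id based on this card's title; change if the actual id differs.
model_id = "manojkumarvohra/llama2-7B-Chat-hf-8bit-guanaco-pico-finetuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the merge was done against an fp16 base checkpoint
    device_map="auto",
)

# Guanaco-style prompt format (assumed from the training dataset's style).
prompt = "### Human: What is the capital of France?### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Given the tiny fine-tuning set (100 training samples), treat any outputs as illustrative of the fine-tuning workflow rather than as production-quality responses.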