---
base_model: mwitiderrick/open_llama_3b_code_instruct_0.1
created_by: mwitiderrick
datasets:
- mwitiderrick/AlpacaCode
inference: false
language:
- en
library_name: transformers
license: apache-2.0
model-index:
- name: mwitiderrick/open_llama_3b_instruct_v_0.2
  results:
  - dataset:
      name: hellaswag
      type: hellaswag
    metrics:
    - name: hellaswag(0-Shot)
      type: hellaswag (0-Shot)
      value: 0.6581
    task:
      type: text-generation
  - dataset:
      name: winogrande
      type: winogrande
    metrics:
    - name: winogrande(0-Shot)
      type: winogrande (0-Shot)
      value: 0.6267
    task:
      type: text-generation
  - dataset:
      name: arc_challenge
      type: arc_challenge
    metrics:
    - name: arc_challenge(0-Shot)
      type: arc_challenge (0-Shot)
      value: 0.3712
    source:
      name: open_llama_3b_instruct_v_0.2 model card
      url: https://huggingface.co/mwitiderrick/open_llama_3b_instruct_v_0.2
    task:
      type: text-generation
model_creator: mwitiderrick
model_name: open_llama_3b_code_instruct_0.1
model_type: llama
pipeline_tag: text-generation
prompt_template: |
  ### Instruction:

  {prompt}

  ### Response:
quantized_by: afrideva
tags:
- transformers
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---

# mwitiderrick/open_llama_3b_code_instruct_0.1-GGUF

Quantized GGUF model files for [open_llama_3b_code_instruct_0.1](https://huggingface.co/mwitiderrick/open_llama_3b_code_instruct_0.1) from [mwitiderrick](https://huggingface.co/mwitiderrick).

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [open_llama_3b_code_instruct_0.1.fp16.gguf](https://huggingface.co/afrideva/open_llama_3b_code_instruct_0.1-GGUF/resolve/main/open_llama_3b_code_instruct_0.1.fp16.gguf) | fp16 | 6.86 GB |
| [open_llama_3b_code_instruct_0.1.q2_k.gguf](https://huggingface.co/afrideva/open_llama_3b_code_instruct_0.1-GGUF/resolve/main/open_llama_3b_code_instruct_0.1.q2_k.gguf) | q2_k | 2.15 GB |
| [open_llama_3b_code_instruct_0.1.q3_k_m.gguf](https://huggingface.co/afrideva/open_llama_3b_code_instruct_0.1-GGUF/resolve/main/open_llama_3b_code_instruct_0.1.q3_k_m.gguf) | q3_k_m | 2.27 GB |
| [open_llama_3b_code_instruct_0.1.q4_k_m.gguf](https://huggingface.co/afrideva/open_llama_3b_code_instruct_0.1-GGUF/resolve/main/open_llama_3b_code_instruct_0.1.q4_k_m.gguf) | q4_k_m | 2.58 GB |
| [open_llama_3b_code_instruct_0.1.q5_k_m.gguf](https://huggingface.co/afrideva/open_llama_3b_code_instruct_0.1-GGUF/resolve/main/open_llama_3b_code_instruct_0.1.q5_k_m.gguf) | q5_k_m | 2.76 GB |
| [open_llama_3b_code_instruct_0.1.q6_k.gguf](https://huggingface.co/afrideva/open_llama_3b_code_instruct_0.1-GGUF/resolve/main/open_llama_3b_code_instruct_0.1.q6_k.gguf) | q6_k | 3.64 GB |
| [open_llama_3b_code_instruct_0.1.q8_0.gguf](https://huggingface.co/afrideva/open_llama_3b_code_instruct_0.1-GGUF/resolve/main/open_llama_3b_code_instruct_0.1.q8_0.gguf) | q8_0 | 3.64 GB |

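## Example: running a quantized file

The following is a minimal sketch of downloading one of the files above and running a single prompt with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The choice of the `q4_k_m` file, the context size, and the stop string are illustrative assumptions, not recommendations from the model or quant author.

```python
# Minimal sketch (pip install llama-cpp-python huggingface_hub).
# The q4_k_m quant and the settings below are illustrative assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quantized file from this repo into the local HF cache
model_path = hf_hub_download(
    repo_id="afrideva/open_llama_3b_code_instruct_0.1-GGUF",
    filename="open_llama_3b_code_instruct_0.1.q4_k_m.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)  # context size is an assumption

query = "Write a quick sort algorithm in Python"
out = llm(
    f"### Instruction:\n{query}\n### Response:\n",  # prompt template from this card
    max_tokens=200,
    stop=["### Instruction:"],  # stop string is an assumption
)
print(out["choices"][0]["text"])
```
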
## Original Model Card:

# OpenLLaMA Code Instruct: An Open Reproduction of LLaMA

This is an [OpenLLaMA model](https://huggingface.co/openlm-research/open_llama_3b) that has been fine-tuned for one epoch on the [AlpacaCode](https://huggingface.co/datasets/mwitiderrick/AlpacaCode) dataset (122K rows).

## Prompt Template

```
### Instruction:

{query}

### Response:
<Leave new line for model to respond>
```

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("mwitiderrick/open_llama_3b_code_instruct_0.1")
model = AutoModelForCausalLM.from_pretrained("mwitiderrick/open_llama_3b_code_instruct_0.1")
query = "Write a quick sort algorithm in Python"
text_gen = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
output = text_gen(f"### Instruction:\n{query}\n### Response:\n")
print(output[0]['generated_text'])
"""
### Instruction:
write a quick sort algorithm in Python
### Response:
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    else:
        pivot = arr[len(arr) // 2]
        left = [x for x in arr if x < pivot]
        middle = [x for x in arr if x == pivot]
        right = [x for x in arr if x > pivot]
        return quick_sort(left) + middle + quick_sort(right)

arr = [5,2,4,3,1]
print(quick_sort(arr))

Output:
[1, 2, 3, 4, 5]
"""
```

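If you prefer calling `generate()` directly instead of the `pipeline` helper, here is a minimal sketch that continues from the block above; the sampling settings are illustrative assumptions, not values from the original card.

```python
# Continues from the Usage block above (tokenizer, model, query already defined).
# Sampling settings are illustrative assumptions, not the author's values.
inputs = tokenizer(f"### Instruction:\n{query}\n### Response:\n", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
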
## Metrics

[Detailed metrics](https://huggingface.co/datasets/open-llm-leaderboard/details_mwitiderrick__open_llama_3b_code_instruct_0.1)

```
|    Tasks    |Version|Filter|n-shot|  Metric   | Value  |   |Stderr|
|-------------|-------|------|-----:|-----------|-------:|---|-----:|
|winogrande   |Yaml   |none  |     0|acc        |  0.6267|±  |0.0136|
|hellaswag    |Yaml   |none  |     0|acc        |  0.4962|±  |0.0050|
|             |       |none  |     0|acc_norm   |  0.6581|±  |0.0047|
|arc_challenge|Yaml   |none  |     0|acc        |  0.3481|±  |0.0139|
|             |       |none  |     0|acc_norm   |  0.3712|±  |0.0141|
|truthfulqa   |N/A    |none  |     0|bleu_max   | 24.2580|±  |0.5985|
|             |       |none  |     0|bleu_acc   |  0.2876|±  |0.0003|
|             |       |none  |     0|bleu_diff  | -8.3685|±  |0.6065|
|             |       |none  |     0|rouge1_max | 49.3907|±  |0.7350|
|             |       |none  |     0|rouge1_acc |  0.2558|±  |0.0002|
|             |       |none  |     0|rouge1_diff|-10.6617|±  |0.6450|
|             |       |none  |     0|rouge2_max | 32.4189|±  |0.9587|
|             |       |none  |     0|rouge2_acc |  0.2142|±  |0.0002|
|             |       |none  |     0|rouge2_diff|-12.9903|±  |0.9539|
|             |       |none  |     0|rougeL_max | 46.2337|±  |0.7493|
|             |       |none  |     0|rougeL_acc |  0.2424|±  |0.0002|
|             |       |none  |     0|rougeL_diff|-11.0285|±  |0.6576|
|             |       |none  |     0|acc        |  0.3072|±  |0.0405|
```
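
The table follows the output format of EleutherAI's lm-evaluation-harness. As a hedged sketch, zero-shot numbers in this format can typically be reproduced as below; the harness version and exact task configurations behind the table are not stated in the card, so treat them as assumptions.

```python
# Hypothetical reproduction sketch with EleutherAI's lm-evaluation-harness
# (pip install lm-eval). Harness version and task configs are assumptions,
# so scores may not match the table above exactly.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mwitiderrick/open_llama_3b_code_instruct_0.1",
    tasks=["winogrande", "hellaswag", "arc_challenge", "truthfulqa"],
    num_fewshot=0,
)
print(results["results"])  # per-task metric dict, as summarized above
```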