---
model-index:
- name: starcoder-1b-textbook
  results:
  - task:
      type: text-generation
    dataset:
      type: openai_humaneval
      name: HumanEval
    metrics:
    - name: pass@1
      type: pass@1
      value: 27.0%
      verified: false
datasets:
- jinaai/code_exercises
language:
- en
tags:
- HumanEval
- StarCoder
license: cc-by-nc-sa-4.0
---
# StarCoder-1b-textbook
StarCoder-1b-textbook is a fine-tuned version of [starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on the [code_exercises](https://huggingface.co/datasets/jinaai/code_exercises) dataset.
It achieves 27.0% pass@1 on the [HumanEval](https://github.com/openai/human-eval) coding benchmark while having only 1B parameters.
That is an improvement of almost 12 points over the StarCoderBase-1B baseline, nearly doubling the score.
The results on the HumanEval benchmark are on par with those of much larger open-source models such as StarCoderBase (30.4), StarCoder (33.6), and CodeGen-16B-Mono (29.3), despite the model being 15 times smaller.
It still underperforms compared to models like Code Llama (53.0), GPT-4 (82.0), or WizardCoder (73.2), but those models are more than 30 times bigger.
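Pass@1 is measured by sampling one completion per HumanEval problem and checking it against the hidden unit tests. The snippet below is a minimal sketch of how such an evaluation could be run with the [openai/human-eval](https://github.com/openai/human-eval) harness; the decoding settings (greedy, 256 new tokens) are illustrative assumptions, not the exact configuration used to obtain the reported score.

```python
# Sketch: generate HumanEval completions and score them with the human-eval harness.
from human_eval.data import read_problems, write_jsonl
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "jinaai/starcoder-1b-textbook", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("jinaai/starcoder-1b-textbook")

def complete(prompt: str) -> str:
    # Greedy decoding is an assumption; pass@1 needs only one sample per problem.
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(
        **inputs, max_new_tokens=256, eos_token_id=tokenizer.eos_token_id
    )
    text = tokenizer.decode(out[0], skip_special_tokens=True)
    return text[len(prompt):]  # keep only the generated continuation

problems = read_problems()
samples = [
    {"task_id": task_id, "completion": complete(problem["prompt"])}
    for task_id, problem in problems.items()
]
write_jsonl("samples.jsonl", samples)
# Then score with: evaluate_functional_correctness samples.jsonl
```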
## Usage
You can download and use the model like so:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained(
    "jinaai/starcoder-1b-textbook", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("jinaai/starcoder-1b-textbook")

# HumanEval-style prompt: a function signature followed by a docstring.
prompt = '''
def unique(l: list):
    """Return sorted unique elements in a list
    >>> unique([5, 3, 5, 2, 3, 3, 9, 0, 123])
    [0, 2, 3, 5, 9, 123]
    """
'''

inputs = tokenizer(prompt.rstrip(), return_tensors="pt").to("cuda")

# Generate the function body as a completion of the prompt.
generation_output = model.generate(
    **inputs,
    max_new_tokens=128,
    eos_token_id=tokenizer.eos_token_id,
    return_dict_in_generate=True,
)

s = generation_output.sequences[0]
output = tokenizer.decode(s, skip_special_tokens=True)
print(output)
```
## Finetuning details
We did full-parameter fine-tuning on an NVIDIA A40 for 12 hours, using a batch size of 128 and a micro-batch size of 8.
To reproduce the training, follow the training instructions in our [open-source codebase](https://github.com/jina-ai/textbook).
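With a micro-batch size of 8, the batch size of 128 corresponds to 16 gradient-accumulation steps. The exact training script lives in the linked repository; below is only a minimal sketch of how those batch sizes map onto a Hugging Face `TrainingArguments` configuration, with every hyperparameter other than the batch sizes being an illustrative assumption.

```python
# Sketch: effective batch size 128 = per-device micro-batch 8 * 16 accumulation steps.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="starcoder-1b-textbook",
    per_device_train_batch_size=8,   # micro-batch size of 8
    gradient_accumulation_steps=16,  # 8 * 16 = effective batch size of 128
    bf16=True,                       # assumption: mixed precision on an A40
    num_train_epochs=1,              # assumption: not stated in the card
    learning_rate=2e-5,              # assumption: not stated in the card
)
```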
## Disclaimer
* The HumanEval benchmark is not a perfect benchmark and does not fully represent the coding abilities of an LLM. This model performs well on the tasks described in the benchmark, but that does not necessarily mean it is on par with bigger models as a coding-assistant LLM.
* This model is not instruction-tuned and cannot be used as a chatbot. We recommend using the [Evol-Instruct-Code-80k-v1](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) dataset to fine-tune it into an instruction-following model (a formatting sketch follows this list).
* This model has not been aligned with human preferences and could therefore generate harmful content.
* This model has been trained on a dataset generated by ChatGPT 3.5; you should check the legal status of AI-generated content in your jurisdiction before using it, and make sure that your usage complies with the OpenAI Terms of Use insofar as legally applicable.
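As a starting point for such instruction tuning, the sketch below turns Evol-Instruct-Code-80k-v1 records into plain training text. The `instruction`/`output` column names and the Alpaca-style template are assumptions about that dataset, not a recipe confirmed by this model card.

```python
# Sketch: format Evol-Instruct-Code-80k-v1 examples into instruction-tuning text.
from datasets import load_dataset

ds = load_dataset("nickrosh/Evol-Instruct-Code-80k-v1", split="train")

def to_text(example):
    # Simple Alpaca-style template; any consistent template could be used instead.
    return {
        "text": (
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Response:\n{example['output']}"
        )
    }

train_ds = ds.map(to_text, remove_columns=ds.column_names)
```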
## Credits
This model was trained and released by [Jina.ai](https://jina.ai/).