---
license: mit
widget:
- text: >
    <|system|>
    You are a chatbot who can help code!</s>
    <|user|>
    Write me a function to calculate the first 10 digits of the fibonacci
    sequence in Python and print it out to the CLI.</s>
    <|assistant|>
library_name: transformers
pipeline_tag: text-generation
---
# Tiny-llamix
## Model Description
Tiny-llamix is a Mixture-of-Experts (MoE) model built by merging two copies of [TinyLlama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) using [Charles Goddard's](https://github.com/cg123) mergekit on the mixtral branch.
## Configuration
```yaml
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
  positive_prompts:
  - "M1"
- source_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
  positive_prompts:
  - "M2"
```
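In this config, `gate_mode: hidden` tells mergekit to initialise the MoE routers from hidden-state representations of each expert's positive prompts. Since the merge targets the Mixtral architecture, the expert count should be visible in the published config; a minimal sketch to check this (the expected values are assumptions based on the two-expert config above, not verified output):

```python
from transformers import AutoConfig

# Inspect the merged checkpoint's configuration; a two-expert
# mergekit-moe merge is expected to surface as a Mixtral-style config.
config = AutoConfig.from_pretrained("SE6446/Tiny-llamix")
print(config.model_type)         # expected: "mixtral"
print(config.num_local_experts)  # expected: 2 (one per expert above)
```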
## Usage
It can be used like any other Transformers text-generation model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged model and its tokenizer
model = AutoModelForCausalLM.from_pretrained("SE6446/Tiny-llamix").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("SE6446/Tiny-llamix")

# Build a prompt in the TinyLlama chat format and tokenize it
instruction = '''<|system|>
You are a chatbot who can help code!</s>
<|user|>
Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.</s>
<|assistant|>'''
inputs = tokenizer(instruction, return_tensors="pt", return_attention_mask=False).to("cuda")

# Generate a completion and decode it back to text
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
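Alternatively, if the tokenizer ships with TinyLlama's Zephyr-style chat template (an assumption, since it would be inherited from the base model), `apply_chat_template` can build the same prompt without hand-writing the special tokens; a minimal sketch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("SE6446/Tiny-llamix").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("SE6446/Tiny-llamix")

# Assumes the tokenizer inherits TinyLlama's chat template
messages = [
    {"role": "system", "content": "You are a chatbot who can help code!"},
    {"role": "user", "content": "Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Here `max_new_tokens` is used instead of `max_length` so the budget applies only to the generated continuation rather than to the prompt plus completion.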
## Performance (coming soon!)