---
license: other
tags:
- merge
- mergekit
- lazymergekit
base_model:
- meta-llama/Meta-Llama-3-70B-Instruct
- meta-llama/Meta-Llama-3-70B-Instruct
- meta-llama/Meta-Llama-3-70B-Instruct
- meta-llama/Meta-Llama-3-70B-Instruct
- meta-llama/Meta-Llama-3-70B-Instruct
- meta-llama/Meta-Llama-3-70B-Instruct
- meta-llama/Meta-Llama-3-70B-Instruct
---
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/C-Xw_m97bhXaTA1TEpHB7.jpeg)
# Meta-Llama-3-120B-Instruct
Meta-Llama-3-120B-Instruct is a [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) self-merge made with [MergeKit](https://github.com/arcee-ai/mergekit/tree/main).
It was inspired by large merges like:
- [alpindale/goliath-120b](https://huggingface.co/alpindale/goliath-120b)
- [nsfwthrowitaway69/Venus-120b-v1.0](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0)
- [cognitivecomputations/MegaDolphin-120b](https://huggingface.co/cognitivecomputations/MegaDolphin-120b)
- [wolfram/miquliz-120b-v2.0](https://huggingface.co/wolfram/miquliz-120b-v2.0)
Special thanks to [Eric Hartford](https://huggingface.co/ehartford) for both inspiring and evaluating this model and to [Charles Goddard](https://huggingface.co/chargoddard) for creating MergeKit.
## 🔍 Applications
I recommend using this model for creative writing. It uses the Llama 3 chat template with a default context window of 8K tokens, which can be extended by raising `rope_theta` (see the sketch below).
Check the examples in the evaluation section to get an idea of its performance. The model is generally quite unhinged but has a good writing style. It sometimes outputs typos and is a big fan of uppercase.
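For longer contexts, a common approach with Llama-style models is to raise the RoPE base frequency (`rope_theta`) in the config before loading. A minimal sketch with `transformers`; the values below are illustrative and untuned, not settings shipped with this model:

```python
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "mlabonne/Meta-Llama-3-120B-Instruct"

# Raise the RoPE base frequency to stretch the usable context window.
# 8_000_000 is a hypothetical value; quality beyond 8K is not guaranteed.
config = AutoConfig.from_pretrained(model_id)
config.rope_theta = 8_000_000.0
config.max_position_embeddings = 16384

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype="auto",
    device_map="auto",
)
```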
## ⚡ Quantized models
Thanks to [Bartowski](https://huggingface.co/bartowski), [elinas](https://huggingface.co/elinas), the [mlx-community](https://huggingface.co/mlx-community) and others for providing these models.
* **GGUF**: https://huggingface.co/lmstudio-community/Meta-Llama-3-120B-Instruct-GGUF
* **EXL2**: https://huggingface.co/elinas/Meta-Llama-3-120B-Instruct-4.0bpw-exl2
* **mlx**: https://huggingface.co/mlx-community/Meta-Llama-3-120B-Instruct-4bit
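If you run the GGUF quants locally, a minimal llama-cpp-python sketch looks like this. The filename is hypothetical; pick whichever quant from the GGUF repo above fits your hardware:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="Meta-Llama-3-120B-Instruct-Q4_K_M.gguf",  # hypothetical local file
    n_ctx=8192,       # matches the model's default context window
    n_gpu_layers=-1,  # offload all layers to GPU if they fit
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write the opening line of a noir novel."}],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```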
## 🏆 Evaluation
This model is great for creative writing but struggles with other tasks. I'd say use it with caution and don't expect it to outperform GPT-4 outside of a few very specific use cases.
* **X thread by Eric Hartford (creative writing)**: https://twitter.com/erhartford/status/1787050962114207886
* **X thread by Daniel Kaiser (creative writing)**: https://twitter.com/spectate_or/status/1787257261309518101
* **X thread by Simon (reasoning)**: https://twitter.com/NewDigitalEdu/status/1787403266894020893
* **r/LocalLLaMa**: https://www.reddit.com/r/LocalLLaMA/comments/1cl525q/goliath_lovers_where_is_the_feedback_about/
### Creative Writing
Thanks to [Sam Paech](https://huggingface.co/sam-paech) for evaluating this model and sending me his outputs!
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/-LJ7ivCRIPR1ur-LJHk3m.png)
## 🧩 Configuration
```yaml
slices:
- sources:
  - layer_range: [0, 20]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [10, 30]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [20, 40]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [30, 50]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [40, 60]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [50, 70]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [60, 80]
    model: meta-llama/Meta-Llama-3-70B-Instruct
merge_method: passthrough
dtype: float16
```
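The passthrough merge stacks seven overlapping 20-layer slices of the 80-layer base model, for 140 layers in total; scaling 70B by roughly 140/80 gives about 122B parameters, hence the 120B name. To reproduce the merge yourself, something like the following should work, assuming enough disk space and memory (flags may vary across mergekit versions):

```python
# Notebook-style, matching the usage block below; save the YAML above as config.yaml
!pip install -qU mergekit
!mergekit-yaml config.yaml ./Meta-Llama-3-120B-Instruct --copy-tokenizer --lazy-unpickle
```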
## 💻 Usage
```python
# Install dependencies (notebook-style)
!pip install -qU transformers accelerate

import torch
import transformers
from transformers import AutoTokenizer

model = "mlabonne/Meta-Llama-3-120B-Instruct"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the Llama 3 chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Load the model in float16 and shard it across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```
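A note on footprint: at float16, ~122B parameters amount to roughly 245 GB of weights alone, so `device_map="auto"` will shard the model across every available GPU and offload the remainder to CPU. Without multiple high-memory GPUs, expect very slow generation.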