---
library_name: transformers
tags:
- llama-3
- tiny-llama
- nano-llama
- small-llama
- random-llama
- tiny
- small
- nano
- random
- debug
- llama-3-debug
- gpt
- generation
- xiaodongguaAIGC
pipeline_tag: text-generation
language:
- en
- zh
---

# llama-3-debug


This model is intended for debugging; its parameters are randomly initialized.

It is tiny (~32 MB), so it is fast to download and convenient for debugging.

The `llama-3-debug` model config is modified as follows:

```python
# shrink every dimension so the checkpoint stays around ~32 MB
config.intermediate_size = 128
config.hidden_size = 64
config.num_attention_heads = 2
config.num_key_value_heads = 2
config.num_hidden_layers = 1
```
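
For reference, a tiny random Llama with this shape can be built from scratch along these lines. This is a minimal sketch, not necessarily the exact script used to create this checkpoint; in particular, the `vocab_size` value is an assumption based on the Llama-3 tokenizer.

```python
from transformers import LlamaConfig, LlamaForCausalLM

# Assumption: vocab_size matches the Llama-3 tokenizer (128256 tokens)
config = LlamaConfig(
    vocab_size=128256,
    hidden_size=64,
    intermediate_size=128,
    num_attention_heads=2,
    num_key_value_heads=2,
    num_hidden_layers=1,
)
model = LlamaForCausalLM(config)  # weights are randomly initialized
print(f"{model.num_parameters():,} parameters")
```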

You can load the model with the following code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'xiaodongguaAIGC/llama-3-debug'
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)
print(model)
print(tokenizer)
```
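
Since the weights are random, the model is only useful as a smoke test: a generation call like the one below should run end to end, but the output will be gibberish. This is a minimal sketch reusing the `model` and `tokenizer` loaded above.

```python
inputs = tokenizer('Hello, world!', return_tensors='pt')
# Random weights => meaningless tokens; we only care that the call succeeds
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```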