---
library_name: transformers
tags:
  - llama-3
  - tiny-llama
  - nano-llama
  - small-llama
  - random-llama
  - tiny
  - small
  - nano
  - random
  - debug
  - llama-3-debug
  - gpt
  - generation
  - xiaodongguaAIGC
pipeline_tag: text-generation
language:
  - en
  - zh
---

# llama-3-debug

This model is intended for debugging; its parameters are randomly initialized.

It is small (only ~32 MB), so it is quick to download and convenient for debugging.

The llama-3-debug model config is modified as follows:

```python
config.intermediate_size = 128
config.hidden_size = 64
config.num_attention_heads = 2
config.num_key_value_heads = 2
config.num_hidden_layers = 1
```
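
If you want to build a similar tiny random model yourself, here is a minimal sketch using the standard `LlamaConfig` and `LlamaForCausalLM` classes from `transformers`. Only the five values listed above come from this model card; every other field (e.g. `vocab_size`) is left at the library default and is an assumption here.

```python
from transformers import LlamaConfig, LlamaForCausalLM

# Tiny config matching the values listed above; unlisted fields keep the
# transformers defaults, which is an assumption for this sketch.
config = LlamaConfig(
    intermediate_size=128,
    hidden_size=64,
    num_attention_heads=2,
    num_key_value_heads=2,
    num_hidden_layers=1,
)

# Constructing from a config gives randomly initialized weights,
# which is exactly what a debug model needs.
model = LlamaForCausalLM(config)
print(model.num_parameters())
```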

You can load it with the following code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'xiaodongguaAIGC/llama-3-debug'

# Load the model in bfloat16 to keep the memory footprint small.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)

print(model)
print(tokenizer)
```
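
For an end-to-end sanity check, you can run a short generation after loading. Since the weights are random, the output is meaningless, but it exercises the full tokenize-generate-decode pipeline. A minimal sketch, assuming `model` and `tokenizer` from the snippet above:

```python
# Tokenize a prompt and generate a few tokens.
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)

# Expect gibberish: the parameters are random, so only the plumbing matters.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```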