rwkv7-0.4B-g1

This is an RWKV-7 model in the flash-linear-attention format.

Model Details

Model Description

  • Developed by: Bo Peng, Yu Zhang, Songlin Yang, Ruichong Zhang
  • Funded by: RWKV Project (Under LF AI & Data Foundation)
  • Model type: RWKV7
  • Language(s) (NLP): English
  • License: Apache-2.0
  • Parameter count: 450M
  • Tensor type: BF16
  • Tokenizer: RWKV World tokenizer
  • Vocabulary size: 65,536

Model Sources

  • Repository: https://github.com/fla-org/flash-linear-attention
  • Base model: BlinkDL/rwkv7-g1

Uses

Install flash-linear-attention and transformers >= 4.48.0 before using this model:

pip install git+https://github.com/fla-org/flash-linear-attention
pip install 'transformers>=4.48.0'
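
If the installation succeeded, the fla module (provided by the flash-linear-attention package) should import cleanly; a quick sanity check:

python -c "import fla"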

Direct Use

You can use this model just like any other Hugging Face model:

from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained('fla-hub/rwkv7-0.4B-g1', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('fla-hub/rwkv7-0.4B-g1', trust_remote_code=True)
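
From there, the standard transformers generation API applies. The snippet below is a minimal sketch; the prompt and generation settings are illustrative, not prescribed by the model card:

# Illustrative prompt and settings; adjust as needed.
prompt = 'The Eiffel Tower is located in the city of'
inputs = tokenizer(prompt, return_tensors='pt')
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Optional: confirm the parameter count listed above (~0.45B).
print(sum(p.numel() for p in model.parameters()))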

Training Data

This model was trained on the World v3 dataset, with a total of 3.119 trillion tokens.

Training Hyperparameters

  • Token Count: 1.1T + 2T + 2T

FAQ

Q: The safetensors metadata is None.

A: Upgrade transformers to >= 4.48.0: pip install 'transformers>=4.48.0'
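
To check which version is currently installed (using the standard version attribute of the transformers package):

python -c "import transformers; print(transformers.__version__)"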

