# Llama-3.1-8B Diffusion Model (LAD)
This is a Language Autoregressive Diffusion (LAD) model based on Llama-3.1-8B-Instruct.
## Features

- Dual mode: autoregressive + diffusion generation
- Cosine noise schedule with 1000 timesteps (see the sketch after this list)
- LoRA fine-tuning (rank 32)
- Custom diffusion components
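The card doesn't include the schedule code; as a point of reference, a standard cosine noise schedule over 1000 timesteps (in the style of Nichol & Dhariwal, 2021) can be sketched as follows. The function names are illustrative, not taken from this repo:

```python
import math
import torch

def cosine_alpha_bar(t: float, s: float = 0.008) -> float:
    # Cumulative signal level alpha_bar(t) under the cosine schedule.
    return math.cos((t + s) / (1 + s) * math.pi / 2) ** 2

def cosine_betas(num_timesteps: int = 1000, max_beta: float = 0.999) -> torch.Tensor:
    # Per-step noise rates derived from the ratio of consecutive
    # alpha_bar values; clipped at max_beta for numerical stability.
    betas = [
        min(1 - cosine_alpha_bar((i + 1) / num_timesteps)
              / cosine_alpha_bar(i / num_timesteps), max_beta)
        for i in range(num_timesteps)
    ]
    return torch.tensor(betas)
```

Compared with a linear schedule, the cosine schedule destroys information more gradually at early timesteps, which is why it is a common default for diffusion training.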
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("rootxhacker/llama3-diffusion")
tokenizer = AutoTokenizer.from_pretrained("rootxhacker/llama3-diffusion")

# Generate text (standard autoregressive mode)
inputs = tokenizer("The future of AI", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
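The snippet above uses the standard autoregressive path. The card does not document a separate API for the diffusion mode; conceptually, diffusion-style decoding refines a whole block of placeholder tokens over several passes instead of emitting one token at a time. A purely illustrative sketch (the function, the step count, and the use of the pad/eos token as a placeholder are assumptions, not this model's actual sampler):

```python
import torch

def diffusion_style_decode(model, tokenizer, prompt, gen_len=32, steps=8):
    # Illustrative iterative-refinement loop, NOT this repo's API:
    # start from placeholder tokens and re-predict the whole
    # continuation on every pass.
    device = next(model.parameters()).device
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    fill_id = tokenizer.pad_token_id or tokenizer.eos_token_id
    gen = torch.full((1, gen_len), fill_id, device=device)
    for _ in range(steps):
        logits = model(torch.cat([prompt_ids, gen], dim=1)).logits
        # Logits at position i predict token i + 1, so the slice below
        # re-estimates every generated position from the current draft.
        gen = logits[:, prompt_ids.shape[1] - 1 : -1].argmax(dim=-1)
    return tokenizer.decode(gen[0], skip_special_tokens=True)
```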
## Training Details

- Base model: Meta-Llama-3.1-8B-Instruct
- Dataset: PatrickHaller/cosmopedia-v2-1B
- Framework: Unsloth + custom diffusion components (see the sketch below)
- Context length: 256 tokens
- Objective mix: 60% autoregressive + 40% diffusion
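The training script isn't included on the card; a minimal sketch of how an Unsloth + LoRA (rank 32) setup is typically configured is shown below. The quantization choice, alpha, dropout, and target modules are common defaults, not the author's confirmed values:

```python
from unsloth import FastLanguageModel

# Load the base model; 4-bit loading is an assumption for memory
# efficiency, not something the card states.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct",
    max_seq_length=256,  # matches the card's stated context length
    load_in_4bit=True,
)

# Attach LoRA adapters. r=32 matches the card; everything else is a
# common default rather than a confirmed setting.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=32,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```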
Uploaded: 2025-06-08 23:13