# llama-3.2b-reflection
This model was fine-tuned from meta-llama/Llama-3.2-3B for Java code generation.
## Model Details

- Base Model: meta-llama/Llama-3.2-3B
- Fine-tuning: Supervised Fine-tuning (SFT)
- Domain: Java code generation
- Precision: Float32
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Naholav/llama-3.2b-reflection")
model = AutoModelForCausalLM.from_pretrained("Naholav/llama-3.2b-reflection")

# Generate Java code
prompt = "You are an expert Java programmer. Generate a method that sorts an array."
inputs = tokenizer(prompt, return_tensors="pt")
# max_new_tokens bounds the generated continuation only; max_length would
# also count the prompt tokens and can truncate the output unexpectedly.
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Details
- Training strategy: Pure SFT
- Epochs: 3
- Float32 precision for stability
- Enhanced NaN protection
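The training code itself is not published, so the "enhanced NaN protection" bullet above can only be illustrated in principle. A minimal, framework-free sketch of the idea: check that the loss/gradient is finite before applying an update, so a single unstable batch (NaN or inf) cannot corrupt the weights. The function name `safe_update` and the scalar setup are hypothetical, not from the actual training script.

```python
import math

def safe_update(param, grad, lr=0.1):
    """Apply a gradient step only when the gradient is finite.

    Sketch of NaN protection: non-finite values are detected and the
    step is skipped, leaving the parameter unchanged.
    """
    if not math.isfinite(grad):
        return param, False  # skip the step, weights unchanged
    return param - lr * grad, True

w = 1.0
applied = []
for g in [0.5, float("nan"), float("inf"), -0.25]:
    w, ok = safe_update(w, g)
    applied.append(ok)
# Only the two finite gradients change the weight.
```

In a real PyTorch loop the same check would typically be `torch.isfinite(loss)` (or gradient clipping) guarding `optimizer.step()`.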