πŸ† mibera-v1-merged πŸ†

πŸš€ A fine-tuned model based on microsoft/phi-4, trained with LoRA adapters and merged for standalone inference

πŸ”Ή Model Details

  • Base Model: microsoft/phi-4
  • Fine-tuned on: Custom dataset
  • Architecture: Transformer-based Causal LM
  • LoRA Adapter Merging: βœ… Yes
  • Merged Model: βœ… Ready for inference without adapters

πŸ“š Training & Fine-tuning Details

  • Training Method: Fine-tuning with LoRA (Low-Rank Adaptation)
  • LoRA Rank: 32
  • Dataset: Custom curated dataset (details not publicly available)
  • Training Library: πŸ€— Hugging Face transformers + peft

πŸš€ How to Use the Model

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ivxxdegen/mibera-v1-merged"

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load model; device_map="auto" requires the accelerate package
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

print("βœ… Model loaded successfully!")