# Fine-tuned Phi-3-mini Model
This is a fine-tuned version of microsoft/Phi-3-mini-128k-instruct using distributed training.
## Model Details
- Base Model: microsoft/Phi-3-mini-128k-instruct
- Training Method: Distributed fine-tuning with Ray
- Shards Used: 2
- Parameters: ~3.8B
## Training Information

The model was fine-tuned using a distributed approach across multiple shards. The base architecture is preserved; the weights were updated through a fine-tuning process aimed at specific downstream tasks.
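The exact training script and hyperparameters have not been published. As a rough illustration only, a 2-worker Ray Train job is typically structured like the sketch below; the toy stand-in model, random data, and learning rate are placeholders, not the actual fine-tuning setup.

```python
import torch
import ray.train.torch
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

def train_loop_per_worker(config):
    # Each of the 2 workers ("shards") gets a DistributedDataParallel-wrapped
    # copy of the model and a sharded view of the data. A tiny linear model
    # stands in for Phi-3-mini here just to keep the sketch runnable.
    model = ray.train.torch.prepare_model(torch.nn.Linear(8, 2))
    optimizer = torch.optim.AdamW(model.parameters(), lr=config["lr"])

    dataset = torch.utils.data.TensorDataset(torch.randn(64, 8), torch.randint(0, 2, (64,)))
    loader = ray.train.torch.prepare_data_loader(
        torch.utils.data.DataLoader(dataset, batch_size=8)
    )

    for features, labels in loader:
        loss = torch.nn.functional.cross_entropy(model(features), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    ray.train.report({"loss": loss.item()})

trainer = TorchTrainer(
    train_loop_per_worker,
    train_loop_config={"lr": 2e-5},
    scaling_config=ScalingConfig(num_workers=2),  # mirrors the 2 shards used here
)
result = trainer.fit()
```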
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("a-k-aAiMGoD/phi3-mini-distributed-fine-tune")
model = AutoModelForCausalLM.from_pretrained("a-k-aAiMGoD/phi3-mini-distributed-fine-tune")

# Example usage: tokenize a prompt, generate up to 100 new tokens, and decode the result
text = "Hello, how are you?"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
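Because the base model is instruct-tuned, prompts formatted with the tokenizer's chat template usually work better than raw text. Continuing from the snippet above, and assuming the fine-tuned checkpoint keeps the base model's chat template:

```python
# Sketch: chat-formatted generation, reusing the tokenizer and model loaded above.
messages = [
    {"role": "user", "content": "Explain distributed fine-tuning in one sentence."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=100)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```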
## Training Configuration

- Distributed across 2 shards
- Optimized for large-scale deployment (see the loading sketch after this list)
- Enhanced with Ray-based parallelization
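For GPU deployment, the checkpoint can be loaded in half precision with automatic device placement. This is a generic sketch with assumed defaults (bfloat16 and `device_map="auto"`, which requires the `accelerate` package), not a published deployment configuration:

```python
import torch
from transformers import AutoModelForCausalLM

# Sketch: memory-efficient loading for inference on GPU.
model = AutoModelForCausalLM.from_pretrained(
    "a-k-aAiMGoD/phi3-mini-distributed-fine-tune",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```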