# Phi-3 Mini 4K Instruct - Fine-Tuned on U.S. Electric Utility Rates (2020)
This model is a fine-tuned version of microsoft/phi-3-mini-4k-instruct, trained on a structured dataset of U.S. electric utility rates. The data was sourced from data.gov and reformatted into instruction-style examples for instruction-following language modeling.
## Use Case
This model is capable of answering natural language questions such as:
"What is the residential electricity rate for PG&E in California?"
It was trained on structured tabular data rendered as natural language and can be useful for:
- Question answering over regulatory datasets
- Data summarization
- Instruction-tuned downstream reasoning
## Base Model
- Model: microsoft/phi-3-mini-4k-instruct
- Architecture: Transformer-based causal language model
- Context Length: 4K tokens
## Fine-tuning Details
- Training Dataset: Aggregated from iou_zipcodes_2020.csv and non_iou_zipcodes_2020.csv
- Sample Size: 147 instruction-style records
- Epochs: 1
- Batch Size: 1
- Precision: fp32 (CPU fine-tuning on a low-resource device)
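
For reference, the following is a minimal sketch of how a fine-tune with these settings could be reproduced using the Hugging Face Trainer API. Only the base model, epoch count, batch size, and CPU/fp32 setup come from this card; the dataset file name (electric_rates_instructions.jsonl), prompt template, maximum sequence length, and all other hyperparameters are illustrative assumptions.

```python
# Minimal reproduction sketch (not the author's exact script). Dataset path,
# prompt template, max_length, and unlisted hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "microsoft/phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)  # fp32 by default

# Assumed: a JSON Lines file with "instruction", "input", "output" fields
dataset = load_dataset("json", data_files="electric_rates_instructions.jsonl")["train"]

def tokenize(example):
    # Fold the three fields into one prompt/response string for causal LM training
    text = (
        f"### Instruction:\n{example['instruction']}\n"
        f"### Input:\n{example['input']}\n"
        f"### Response:\n{example['output']}{tokenizer.eos_token}"
    )
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="phi3-finetuned-electric-rates",
    num_train_epochs=1,             # as listed above
    per_device_train_batch_size=1,  # as listed above
    use_cpu=True,                   # CPU fine-tuning in fp32, as listed above
    logging_steps=10,
    report_to="none",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("phi3-finetuned-electric-rates")
```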
## Example Format
Each training sample was structured as follows:

```json
{
  "instruction": "What is the residential electricity rate for PG&E in California?",
  "input": "Zip: 94103, Utility: PG&E, State: CA, Service Type: Residential, Ownership: IOU",
  "output": "The residential rate is $0.21 per kWh."
}
```
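
The mapping from the raw CSV rows to this record format is not included in the card; the sketch below shows one plausible way to produce a file like the electric_rates_instructions.jsonl assumed in the fine-tuning sketch above. The column names (zip, utility_name, state, service_type, ownership, res_rate) are assumptions about the data.gov files.

```python
# Hypothetical preprocessing: turn rows of iou_zipcodes_2020.csv /
# non_iou_zipcodes_2020.csv into the instruction-style records shown above.
# Column names are assumptions, not verified against the data.gov schema.
import csv
import json

def row_to_record(row):
    return {
        "instruction": (
            f"What is the residential electricity rate for "
            f"{row['utility_name']} in {row['state']}?"
        ),
        "input": (
            f"Zip: {row['zip']}, Utility: {row['utility_name']}, "
            f"State: {row['state']}, Service Type: {row['service_type']}, "
            f"Ownership: {row['ownership']}"
        ),
        "output": f"The residential rate is ${float(row['res_rate']):.2f} per kWh.",
    }

with open("iou_zipcodes_2020.csv", newline="") as src, \
        open("electric_rates_instructions.jsonl", "w") as dst:
    for row in csv.DictReader(src):
        dst.write(json.dumps(row_to_record(row)) + "\n")
```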
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_path = "your-username/phi3-finetuned-electric-rates"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)

prompt = """### Instruction:
What is the residential electricity rate for PG&E in California?
### Response:"""

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
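
The training records also carry a structured input field (see Example Format above). The card does not document the exact inference template, so whether to include that field is an assumption, but supplying it alongside the instruction mirrors the training data more closely:

```python
# Assumed prompt variant that also passes the structured "input" field used
# during fine-tuning; the exact inference template is not documented here.
prompt = """### Instruction:
What is the residential electricity rate for PG&E in California?
### Input:
Zip: 94103, Utility: PG&E, State: CA, Service Type: Residential, Ownership: IOU
### Response:"""
```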
## Author
Trained and uploaded by Faisal Syed.
For feedback or questions, contact [[email protected]] or open an issue on the repo.