Model_v0

Our model, fine-tuned from Llama 3.1 8B, was trained on a dataset generated from the EPDK and TEDAŞ regulations governing the Turkish electricity distribution system.

Model Details

It can answer questions related to regulations and provide detailed information about specific materials.

Model Description

  • Developed by: hdogrukan, ecokumus
  • Language(s) (NLP): Turkish
  • Finetuned from model: Llama 3.1 8B

Model Sources

Metrics

  • https://wandb.ai/hdogrukan/Fine-tune%20llama-3.1-8b-it%20on%20Turkish%20Energy%20Sector%20V0/reports/Engpt-v0--Vmlldzo5MjIzMTIx

Uses

!pip install --upgrade transformers

import transformers
import torch

model_id = "hdogrukan/Llama-3.1-8B-Instruct-Energy"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)


messages = [
    {"role": "system", "content": "You are helpful asistant."},
    {"role": "user", "content": "TEDAŞ-MLZ/99-032.E şartnamesini hangi kurum yayınlamıştır?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=1024,
    temperature=0.2
)
# For potentially more accurate results, try this sampling configuration:
"""
outputs = pipeline(
    messages,
    max_new_tokens=512,
    num_return_sequences=3,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    temperature=0.5,
)
"""
print(outputs[0]["generated_text"][-1]["content"])

Output: "TEDAŞ-MLZ/99-032.E şartnamesini Türkiye Elektrik Dağıtım A.Ş. yayınlamıştır." (English: "The TEDAŞ-MLZ/99-032.E specification was published by Türkiye Elektrik Dağıtım A.Ş.")
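If you prefer working with the model and tokenizer objects directly rather than the pipeline helper, the following is a minimal sketch using the standard transformers AutoModelForCausalLM / AutoTokenizer API with the model's chat template. The generation parameters below simply mirror the pipeline example above and are illustrative, not a recommended configuration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hdogrukan/Llama-3.1-8B-Instruct-Energy"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "TEDAŞ-MLZ/99-032.E şartnamesini hangi kurum yayınlamıştır?"},
]

# Build the prompt with the model's chat template and generate a reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.2,
)

# Decode only the newly generated tokens (skip the prompt).
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))

This form gives finer control over tokenization and generation (e.g., batching or custom stopping criteria) at the cost of a few more lines than the pipeline call.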

Model size: 8.03B params
Tensor type: FP16
Format: Safetensors