
# PEFT Adapter for Quantum Computing QA

## Model Description

This is a PEFT adapter trained for quantum computing question-answering tasks.

- **Base Model:** mistralai/Mistral-7B-Instruct-v0.3
- **Training Method:** LoRA
- **Task:** Question Answering
- **Domain:** Quantum Computing
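For reference, LoRA leaves the base weights frozen and learns a low-rank update, reconstructing the effective weight as W' = W + (alpha / r) · B·A. A toy, pure-Python sketch of that reconstruction (dimensions and values are made up purely for illustration; real adapters store one A/B pair per target module):

```python
# Toy illustration of the LoRA weight update W' = W + (alpha / r) * B @ A.
# All dimensions and values below are illustrative, not from this adapter.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

d_out, d_in, r, alpha = 2, 3, 1, 2   # rank-1 update, scaling alpha / r = 2.0

W = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]                # frozen base weight, d_out x d_in
A = [[0.5, 0.5, 0.5]]                # trained low-rank factor, r x d_in
B = [[1.0],
     [2.0]]                          # trained low-rank factor, d_out x r

scale = alpha / r
delta = matmul(B, A)                 # low-rank update, d_out x d_in
W_prime = [[w + scale * d for w, d in zip(w_row, d_row)]
           for w_row, d_row in zip(W, delta)]

print(W_prime)  # [[2.0, 1.0, 1.0], [2.0, 3.0, 2.0]]
```

Because only A and B are stored, the adapter is a small fraction of the size of the full 7B base model.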

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.3",
    device_map="auto",
)

# Load the tokenizer that matches the base model
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")

# Apply the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(
    base_model,
    "darkB/quantum-qa-test-adapter",
)
```
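Mistral's instruct models expect questions wrapped in their `[INST] ... [/INST]` chat format. The tokenizer's `apply_chat_template` produces this automatically; the hand-rolled helper below is only a sketch of the idea, and the exact template should be verified against the tokenizer:

```python
# Build a Mistral-Instruct style prompt by hand. In practice, prefer
# tokenizer.apply_chat_template(messages, tokenize=False); the template
# below is an approximation and may differ between model versions.

def build_prompt(question: str) -> str:
    return f"[INST] {question} [/INST]"

prompt = build_prompt("What is a qubit?")
print(prompt)  # [INST] What is a qubit? [/INST]
```

The resulting prompt can then be tokenized with `tokenizer(prompt, return_tensors="pt")` and passed to `model.generate(...)` as usual.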