MNLP M3 MCQA Merged Model

This model is a merged version of:

  • Base SFT Model: AnnaelleMyriam/MNLP_M3_sft_dpo_1024_beta0.5_2e-5_FINAL_v3_16_check1500
  • LoRA Adapter: aymanbakiri/MNLP_M3_mcqa_dpo_model

Model Description

This is a specialized model for Multiple Choice Question Answering (MCQA) tasks, created by:

  1. Starting with the SFT model AnnaelleMyriam/MNLP_M3_sft_dpo_1024_beta0.5_2e-5_FINAL_v3_16_check1500
  2. Fine-tuning with LoRA adapters on MCQA data
  3. Merging the LoRA weights back into the base model (see the merge sketch below)
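
A minimal sketch of the merge step, assuming the standard PEFT workflow (merge_and_unload); the exact training script and the output path shown here are assumptions, not part of this card:

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the SFT base model and apply the LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained(
    "AnnaelleMyriam/MNLP_M3_sft_dpo_1024_beta0.5_2e-5_FINAL_v3_16_check1500"
)
peft_model = PeftModel.from_pretrained(base, "aymanbakiri/MNLP_M3_mcqa_dpo_model")

# Fold the LoRA deltas into the base weights and save a standalone checkpoint
merged = peft_model.merge_and_unload()
merged.save_pretrained("MNLP_M3_mcqa_dpo_model_full")  # hypothetical output path

tokenizer = AutoTokenizer.from_pretrained(
    "AnnaelleMyriam/MNLP_M3_sft_dpo_1024_beta0.5_2e-5_FINAL_v3_16_check1500"
)
tokenizer.save_pretrained("MNLP_M3_mcqa_dpo_model_full")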

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("aymanbakiri/MNLP_M3_mcqa_dpo_model_full")
tokenizer = AutoTokenizer.from_pretrained("aymanbakiri/MNLP_M3_mcqa_dpo_model_full")

# Example usage for MCQA
prompt = """Question: What is the capital of France?
Options: (A) London (B) Berlin (C) Paris (D) Madrid
Answer:"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)

# Decode only the newly generated tokens (the predicted answer), not the prompt
answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)

Training Details

  • Base Model: SFT model fine-tuned for instruction following
  • LoRA Configuration: r=16, alpha=32, dropout=0.1 (see the configuration sketch after this list)
  • Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj, lm_head
  • Training Data: MNLP M2 MCQA Dataset
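
Expressed as a peft.LoraConfig, the configuration above would look roughly as follows; the task_type argument is an assumption (standard causal-LM fine-tuning) and any unlisted options fall back to PEFT defaults:

from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj", "lm_head",
    ],
    task_type="CAUSAL_LM",  # assumption: causal language modeling task
)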

Performance

Merging folds the LoRA deltas directly into the base weights, so this model should match the behavior of the original LoRA adapter applied to the base SFT model while being easier to deploy and use: it loads as a single standard checkpoint, with no separate adapter-loading step required.
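
No benchmark numbers are reported here. As a quick sanity check, one way to probe MCQA behavior is to compare the next-token logits the model assigns to each option letter after the "Answer:" prompt. The helper below is a hypothetical sketch, reusing the model, tokenizer, and prompt from the Usage example, and assumes each letter encodes as a single token when preceded by a space:

import torch

def predict_letter(model, tokenizer, prompt, letters=("A", "B", "C", "D")):
    # Next-token distribution immediately after the prompt (i.e. after "Answer:")
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]
    # Assumption: each option letter is a single token when preceded by a space
    letter_ids = {l: tokenizer.encode(" " + l, add_special_tokens=False)[0] for l in letters}
    return max(letters, key=lambda l: logits[letter_ids[l]].item())

print(predict_letter(model, tokenizer, prompt))  # expected: "C" for the example above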

Model Size

  • Parameters: 596M
  • Tensor type: BF16
  • Format: Safetensors