---
library_name: transformers
datasets:
- Intel/orca_dpo_pairs
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
language:
- en
- fr
- ar
pipeline_tag: text-generation
---

# Qwen 2.5-0.5B-Instruct – English DPO

A lightweight (≈ 494M parameters) Qwen 2.5 model fine-tuned with Direct Preference Optimization (DPO) on the [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) dataset. The goal is to provide a fully English-aligned assistant while preserving the multilingual strengths, coding skills and long-context support already present in the base Qwen2.5-0.5B-Instruct model.

# Try it

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "BounharAbdelaziz/Qwen2.5-0.5B-DPO-English-Orca"

tok = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful and concise English-speaking assistant."},
    {"role": "user", "content": "Explain the difference between nuclear fusion and fission in three sentences."},
]

# Build the chat prompt, generate, and decode the completion.
text = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
output_ids = model.generate(
    **tok(text, return_tensors="pt").to(model.device),
    max_new_tokens=256,
)
print(tok.decode(output_ids[0], skip_special_tokens=True))
```

# Intended use & limitations

• Intended: English conversational agent, tutoring, summarisation, coding help in constrained contexts.

• Not intended: unfiltered medical, legal or financial advice; high-stakes decision making.

Although DPO reduces harmful completions, the model can still produce errors, hallucinations or biased outputs inherited from the base model and data. Always verify critical facts.
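
# Reproducing the DPO stage (sketch)

This card does not include the exact training script. The snippet below is only a minimal sketch of how a comparable DPO run over Intel/orca_dpo_pairs could be set up with `trl`'s `DPOTrainer`; the hyperparameters (`beta`, learning rate, batch size, epochs) are illustrative assumptions, not the values used to produce this checkpoint.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "Qwen/Qwen2.5-0.5B-Instruct"
tok = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# orca_dpo_pairs ships "system" / "question" / "chosen" / "rejected" columns.
ds = load_dataset("Intel/orca_dpo_pairs", split="train")

def to_preference_format(row):
    # Fold the system message into the prompt; DPOTrainer expects
    # plain "prompt" / "chosen" / "rejected" string columns.
    return {
        "prompt": row["system"] + "\n\n" + row["question"],
        "chosen": row["chosen"],
        "rejected": row["rejected"],
    }

ds = ds.map(to_preference_format, remove_columns=ds.column_names)

# Illustrative hyperparameters, not the ones used for this checkpoint.
config = DPOConfig(
    output_dir="qwen2.5-0.5b-dpo-orca",
    beta=0.1,  # strength of the implicit KL penalty toward the reference model
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=5e-6,
    num_train_epochs=1,
    logging_steps=50,
)

trainer = DPOTrainer(
    model=model,            # ref_model is omitted; trl builds a frozen copy
    args=config,
    train_dataset=ds,
    processing_class=tok,   # older trl versions call this argument `tokenizer`
)
trainer.train()
```

In DPO, `beta` trades off fitting the preference pairs against staying close to the reference policy: larger values keep the fine-tuned model nearer to the base Qwen2.5-0.5B-Instruct behaviour.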