---
library_name: transformers
tags:
  - qwen2
  - orpo
  - unsloth
  - philosophical
  - esoteric
  - multilingual
base_model:
  - Qwen/Qwen2.5-7B-Instruct
language:
  - zho
  - eng
  - fra
  - spa
  - por
  - deu
  - ita
  - rus
  - jpn
  - kor
  - vie
  - tha
  - ara
---

# Model Card for forerunner-qwen7b-orpo

## Model Details

### Model Description

This model is a fine-tuned version of Qwen/Qwen2.5-7B-Instruct using the ORPO algorithm with Unsloth for optimization. It has been trained on a custom dataset with a focus on philosophical and esoteric topics. The model is multilingual.

- **Developed by:** radm
- **Finetuned from model:** Qwen/Qwen2.5-7B-Instruct
- **Model type:** Causal LM based on the Qwen2 architecture
- **Language(s):** Multilingual
- **License:** Apache 2.0 (inherited from the base model)

## Uses

**Example output** (generated with `temperature=0.6`, `repetition_penalty=1.15`):

**User:** As a result of what mental process does duality appear in a person?

**Model:** Dualistic thinking arises from the interaction between an individual and their environment. When a person's consciousness is not yet developed to a certain level, they experience the world through sensory perception. This perception leads to the formation of concepts such as "good" and "bad," "right" and "wrong." These oppositions manifest themselves in actions or emotions, leading to dualism. The mind then begins to differentiate between these opposites, creating a division into two parts: one that experiences and another that observes this experience. This observation is necessary for the development of language and thought processes, which are based on distinguishing and naming objects according to their properties. As a result, the mind learns to distinguish between things, people, events, and their qualities, forming a complex system of relationships. This process is similar to how a child learns to recognize and name objects around them. However, it's important to note that the duality formed within the mind does not necessarily reflect external reality but rather is a product of internal cognitive processes.
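
The snippet below is a minimal inference sketch with the transformers library, reproducing the sampling settings above. The repo id `radm/forerunner-qwen7b-orpo` is an assumption inferred from the card title and author; verify it before use.

```python
# Minimal inference sketch (transformers); the repo id is assumed, not confirmed by the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "radm/forerunner-qwen7b-orpo"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Qwen2.5-Instruct models ship a chat template, so the prompt is built with it.
messages = [{"role": "user", "content": "As a result of what mental process does duality appear in a person?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.6,          # sampling settings from the example above
    repetition_penalty=1.15,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```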

### Out-of-Scope Use

The model is not designed for generating harmful, unethical, biased, or factually incorrect content. Performance on tasks outside its training domain (philosophical/esoteric chat) may be suboptimal.

## Bias, Risks, and Limitations

The model inherits biases from its base model (Qwen/Qwen2.5-7B-Instruct) and the fine-tuning dataset. It may generate plausible-sounding but incorrect or nonsensical information, especially on complex topics. Its understanding is based on patterns in the data, not genuine comprehension or consciousness. Use the outputs with critical judgment.

## Training Details

### Training Data

The model was fine-tuned using the ORPO algorithm on a custom dataset. This dataset contains pairs of preferred and rejected responses for given prompts, focusing on philosophical and esoteric themes. A small validation split (0.5%) was used.
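
Since ORPO is a preference-optimization method, each training row pairs a prompt with a preferred and a rejected completion. The real schema and contents of the custom dataset are not published; the row below is purely illustrative and assumes the standard prompt/chosen/rejected format that trl's `ORPOTrainer` expects.

```python
# Illustrative ORPO training row in trl's standard format (assumed;
# the custom dataset's actual schema and contents are not published).
example_row = {
    "prompt": "As a result of what mental process does duality appear in a person?",
    "chosen": "Dualistic thinking arises from the interaction between ...",  # preferred response
    "rejected": "Duality is simply an illusion with no cause worth examining ...",  # rejected response
}
```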

### Training Procedure

Training was performed with the Unsloth library in combination with trl's `ORPOTrainer`; a minimal sketch of this setup follows the hyperparameter list below.

#### Training Hyperparameters

- **Framework:** Unsloth + ORPO
- **Base Model:** Qwen/Qwen2.5-7B-Instruct
- **LoRA Configuration:**
  - `r`: 64
  - `lora_alpha`: 64
  - `lora_dropout`: 0.0
  - `bias`: "none"
  - `target_modules`: `["q_proj", "v_proj", "k_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]`
  - `use_rslora`: True
- **Precision:** bfloat16 mixed precision
- **Quantization:** loaded in 4-bit
- **Optimizer:** Paged AdamW 8-bit
- **Learning Rate:** 1e-5
- **LR Scheduler:** cosine
- **Warmup Steps:** 8
- **Batch Size (per device):** 1
- **Gradient Accumulation Steps:** 32 (effective batch size: 32)
- **Max Sequence Length:** 3072
- **Epochs:** 1
- **ORPO Beta:** 0.1
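
The sketch below assembles these hyperparameters into a runnable Unsloth + trl ORPO setup. It is a sketch under assumptions: the actual training script is not published, the tiny synthetic dataset merely stands in for the private custom data, and argument names (e.g. `tokenizer=` vs. `processing_class=` on `ORPOTrainer`) may vary with the trl version.

```python
# Sketch of the Unsloth + trl ORPO setup described above; the synthetic
# dataset is a placeholder for the unpublished custom preference data.
from datasets import Dataset
from trl import ORPOConfig, ORPOTrainer
from unsloth import FastLanguageModel

# Placeholder preference data with the prompt/chosen/rejected columns
# ORPOTrainer expects; 200 rows so a 0.5% split yields one eval row.
pairs = [{"prompt": "p", "chosen": "good answer", "rejected": "bad answer"}] * 200
split = Dataset.from_list(pairs).train_test_split(test_size=0.005)
train_ds, eval_ds = split["train"], split["test"]

# Base model loaded 4-bit quantized, as in the card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2.5-7B-Instruct",
    max_seq_length=3072,
    load_in_4bit=True,
)

# LoRA adapters matching the configuration listed above.
model = FastLanguageModel.get_peft_model(
    model,
    r=64,
    lora_alpha=64,
    lora_dropout=0.0,
    bias="none",
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_rslora=True,  # rank-stabilized LoRA
)

args = ORPOConfig(
    beta=0.1,                        # ORPO beta
    learning_rate=1e-5,
    lr_scheduler_type="cosine",
    warmup_steps=8,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=32,  # effective batch size: 32
    num_train_epochs=1,
    optim="paged_adamw_8bit",
    bf16=True,
    max_length=3072,
    output_dir="outputs",
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)
trainer.train()
```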