---
base_model:
- rombodawg/Rombos-LLM-V2.5-Qwen-7b
- suayptalha/Clarus-7B-v0.1
- gz987/qwen2.5-7b-cabs-v0.3
- prithivMLmods/WebMind-7B-v0.1
- fblgit/cybertron-v4-qw7B-MGS
- Xiaojian9992024/Qwen2.5-THREADRIPPER-Small
library_name: transformers
tags:
- mergekit
- merge
model-index:
- name: Qwen2.5-Dyanka-7B-Preview
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: wis-k/instruction-following-eval
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 76.4
      name: averaged accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Xiaojian9992024%2FQwen2.5-Dyanka-7B-Preview
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: SaylorTwift/bbh
      split: test
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 36.62
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Xiaojian9992024%2FQwen2.5-Dyanka-7B-Preview
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: lighteval/MATH-Hard
      split: test
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 48.79
      name: exact match
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Xiaojian9992024%2FQwen2.5-Dyanka-7B-Preview
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 8.95
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Xiaojian9992024%2FQwen2.5-Dyanka-7B-Preview
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 15.51
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Xiaojian9992024%2FQwen2.5-Dyanka-7B-Preview
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 37.51
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Xiaojian9992024%2FQwen2.5-Dyanka-7B-Preview
      name: Open LLM Leaderboard
license: apache-2.0
---

![Qwen2.5-Dyanka-7B-Preview](https://huggingface.co/Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview/resolve/main/Costume1(62).png)

# Qwen2.5-Dyanka-7B-Preview

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [gz987/qwen2.5-7b-cabs-v0.3](https://huggingface.co/gz987/qwen2.5-7b-cabs-v0.3) as the base model.
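For intuition, the toy sketch below walks through the three TIES steps (trim each task vector, elect a per-parameter sign, then merge only the agreeing entries) on plain tensors. It is a minimal illustration of the paper's idea, not mergekit's actual implementation, which additionally handles per-model weights, normalization, and other details; all names in the sketch are hypothetical.

```python
import torch

def ties_merge(base, finetuned, density=0.2):
    """Toy TIES merge of several fine-tuned tensors onto one base tensor."""
    deltas = [ft - base for ft in finetuned]  # task vectors
    # 1) Trim: keep only the top-`density` fraction of each delta by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.numel()))
        cutoff = d.abs().flatten().topk(k).values.min()
        trimmed.append(torch.where(d.abs() >= cutoff, d, torch.zeros_like(d)))
    stacked = torch.stack(trimmed)
    # 2) Elect sign: per parameter, pick the sign with the larger total mass.
    elected = stacked.sum(dim=0).sign()
    # 3) Disjoint merge: average only the entries that agree with the elected sign.
    agree = (stacked.sign() == elected) & (stacked != 0)
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged

base = torch.zeros(6)
finetuned = [base + torch.randn(6) for _ in range(3)]
print(ties_merge(base, finetuned, density=0.5))
```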
### Models Merged

The following models were included in the merge:

* [rombodawg/Rombos-LLM-V2.5-Qwen-7b](https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-7b)
* [suayptalha/Clarus-7B-v0.1](https://huggingface.co/suayptalha/Clarus-7B-v0.1)
* [prithivMLmods/WebMind-7B-v0.1](https://huggingface.co/prithivMLmods/WebMind-7B-v0.1)
* [fblgit/cybertron-v4-qw7B-MGS](https://huggingface.co/fblgit/cybertron-v4-qw7B-MGS)
* [Xiaojian9992024/Qwen2.5-THREADRIPPER-Small](https://huggingface.co/Xiaojian9992024/Qwen2.5-THREADRIPPER-Small)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: gz987/qwen2.5-7b-cabs-v0.3
    # no parameters necessary for base model
  - model: suayptalha/Clarus-7B-v0.1
    parameters:
      density: 0.2
      weight: 0.2
  - model: Xiaojian9992024/Qwen2.5-THREADRIPPER-Small
    parameters:
      density: 0.2
      weight: 0.2
  - model: rombodawg/Rombos-LLM-V2.5-Qwen-7b
    parameters:
      density: 0.2
      weight: 0.2
  - model: prithivMLmods/WebMind-7B-v0.1
    parameters:
      density: 0.2
      weight: 0.2
  - model: fblgit/cybertron-v4-qw7B-MGS
    parameters:
      density: 0.2
      weight: 0.2
merge_method: ties
base_model: gz987/qwen2.5-7b-cabs-v0.3
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Xiaojian9992024__Qwen2.5-Dyanka-7B-Preview-details), and summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=Xiaojian9992024%2FQwen2.5-Dyanka-7B-Preview&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc).

| Metric              | Value (%) |
|---------------------|----------:|
| **Average**         |     37.30 |
| IFEval (0-Shot)     |     76.40 |
| BBH (3-Shot)        |     36.62 |
| MATH Lvl 5 (4-Shot) |     48.79 |
| GPQA (0-shot)       |      8.95 |
| MuSR (0-shot)       |     15.51 |
| MMLU-PRO (5-shot)   |     37.51 |
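## Reproducing the merge

The merge can typically be re-run by saving the YAML configuration above to `config.yaml` and invoking mergekit's `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yaml ./output-model`; exact options depend on the installed mergekit version and available hardware.

## Usage

Below is a minimal inference sketch using the standard Hugging Face Transformers chat API (the card lists `transformers` as the library). The repo id comes from this card; the prompt and generation settings are illustrative only, and `device_map="auto"` assumes `accelerate` is installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # picks up bfloat16 weights, matching the merge config
    device_map="auto",    # requires `accelerate`
)

messages = [
    {"role": "user", "content": "Summarize what a TIES model merge does."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```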