---
base_model:
- TareksLab/M-NS-STEP3
- TareksLab/M-MERGE4
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
license: llama3.3
---
*~ We are Legion...*

My biggest merge yet, consisting of a total of 15 specially curated models.

My approach was to first create 5 highly specialized models:

1. A very coherent but completely uncensored base
2. A very intelligent model, selected by UGI, Willingness and NatInt scores on the UGI Leaderboard
3. A highly descriptive writing model, specializing in creative and natural prose
4. An RP model merged from fine-tunes trained on large amounts of RP data
5. The secret ingredient: a completely unhinged, uncensored final model

Each of these five models went through a series of iterations until I had something that worked well; I then combined them to make LEGION.

The full list of models used in this merge is below:

* [TheDrummer/Fallen-Llama-3.3-R1-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-R1-70B-v1)
* [Sao10K/L3-70B-Euryale-v2.1](https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1)
* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
* [allura-org/Bigger-Body-70b](https://huggingface.co/allura-org/Bigger-Body-70b)
* [Sao10K/70B-L3.3-mhnnn-x1](https://huggingface.co/Sao10K/70B-L3.3-mhnnn-x1)
* [Sao10K/L3.3-70B-Euryale-v2.3](https://huggingface.co/Sao10K/L3.3-70B-Euryale-v2.3)
* [Doctor-Shotgun/L3.3-70B-Magnum-v4-SE](https://huggingface.co/Doctor-Shotgun/L3.3-70B-Magnum-v4-SE)
* [Sao10K/L3.1-70B-Hanami-x1](https://huggingface.co/Sao10K/L3.1-70B-Hanami-x1)
* [EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1)
* [TheDrummer/Anubis-70B-v1](https://huggingface.co/TheDrummer/Anubis-70B-v1)
* [ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4](https://huggingface.co/ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4)
* [LatitudeGames/Wayfarer-Large-70B-Llama-3.3](https://huggingface.co/LatitudeGames/Wayfarer-Large-70B-Llama-3.3)
* [NeverSleep/Lumimaid-v0.2-70B](https://huggingface.co/NeverSleep/Lumimaid-v0.2-70B)
* [ReadyArt/Forgotten-Safeword-70B-3.6](https://huggingface.co/ReadyArt/Forgotten-Safeword-70B-3.6)
* [huihui-ai/Llama-3.3-70B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [NearSwap](https://huggingface.co/alchemonaut/QuartetAnemoi-70B-t0.0001) merge method, with [TareksLab/M-NS-STEP3](https://huggingface.co/TareksLab/M-NS-STEP3) as the base.

### Models Merged

The following models were included in the merge:

* [TareksLab/M-MERGE4](https://huggingface.co/TareksLab/M-MERGE4)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: TareksLab/M-MERGE4
  - model: TareksLab/M-NS-STEP3
merge_method: nearswap
base_model: TareksLab/M-NS-STEP3
parameters:
  t:
    - value: 0.0001
dtype: bfloat16
tokenizer:
  source: base
```
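For intuition on what the small `t` does: NearSwap, as described in the linked QuartetAnemoi card, keeps each base weight unless the secondary model's weight is within roughly `t` of it, in which case the secondary value is swapped in. Below is a minimal NumPy sketch of that rule; the function and variable names are illustrative and this is not mergekit's actual implementation.

```python
import numpy as np

def nearswap(t: float, v0: np.ndarray, v1: np.ndarray) -> np.ndarray:
    """Interpolate v0 (base) toward v1 (secondary) only where they nearly agree.

    Sketch of the NearSwap rule from the QuartetAnemoi model card;
    mergekit's real implementation may differ in details.
    """
    diff = np.abs(v0 - v1)
    safe_diff = np.where(diff == 0, 1.0, diff)  # avoid division by zero
    # Interpolation weight: 1 where the tensors match (or differ by < t),
    # shrinking toward 0 as the difference grows.
    lweight = np.clip(np.where(diff == 0, 1.0, t / safe_diff), 0.0, 1.0)
    return (1.0 - lweight) * v0 + lweight * v1

base = np.array([1.0, 1.0, 1.0])
other = np.array([1.0, 1.00005, 2.0])
merged = nearswap(1e-4, base, other)
# Near-identical weights take the secondary value; divergent ones stay at base.
```

With `t: 0.0001`, only parameters where the two models already nearly agree are pulled toward M-MERGE4; everywhere else the M-NS-STEP3 base dominates, which is why NearSwap merges tend to preserve the base model's coherence.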