---
base_model:
- ABX-AI/Cerebral-Infinity-7B
- ABX-AI/Starfinite-Laymons-7B
library_name: transformers
tags:
- mergekit
- merge
- mistral
- not-for-all-audiences
---
# GGUF / IQ / Imatrix for [Starbral-Infinimons-9B](https://huggingface.co/ABX-AI/Starbral-Infinimons-9B)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d936ad52eca001fdcd3245/ecK9RSCHPWOA2SUaVAeQV.png)

**Why Importance Matrix?**

**Importance Matrix**, at least based on my testing, has been shown to improve the output and performance of "IQ"-type quantizations, where the compression becomes quite heavy. The **Imatrix** performs a calibration using a provided dataset. Testing has shown that semi-randomized data can help preserve the more important segments as the compression is applied.

Related discussions on GitHub:
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)

The imatrix.txt file that I used contains general, semi-random data, with some custom kink.

# Starbral-Infinimons-9B

The concept behind this merge was to combine:

- The conversational abilities of the newly added [Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)
- The reasoning abilities of [Cerebrum-1.0-7b](https://huggingface.co/AetherResearch/Cerebrum-1.0-7b)
- The originality of [LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
- The already well-performing previous merges I did, based on InfinityRP, Layla v4, and Laydiculous, combining these models and others into a 9B frankenmerge

Based on preliminary tests, I'm quite happy with the results: very original responses and basically no alignment issues. In my experience, it works well with ChatML, Alpaca, and likely other instruction formats - you can chat, or ask it to develop a story.
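As a rough sketch of the importance-matrix workflow described above, the steps look roughly like this with llama.cpp's own tooling. The file names and the IQ3_XXS quant type below are illustrative assumptions, not the exact ones used for this repo:

```shell
# Compute the importance matrix by running the full-precision model
# over the calibration dataset (imatrix.txt)
./imatrix -m Starbral-Infinimons-9B-f16.gguf -f imatrix.txt -o imatrix.dat

# Apply the matrix while quantizing to a heavily compressed IQ format,
# so calibration guides which weights tolerate the most compression
./quantize --imatrix imatrix.dat Starbral-Infinimons-9B-f16.gguf \
    Starbral-Infinimons-9B-IQ3_XXS.gguf IQ3_XXS
```

The same imatrix.dat can be reused across all the quant types produced for a repo, since it depends only on the full-precision model and the calibration data.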
This model is intended for fictional storytelling and role-playing, and may not be suitable for all audiences.

## Merge Details

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Method

This model was merged using the passthrough merge method.

### Models Merged

The following models were included in the merge:

* [ABX-AI/Cerebral-Infinity-7B](https://huggingface.co/ABX-AI/Cerebral-Infinity-7B)
* [ABX-AI/Starfinite-Laymons-7B](https://huggingface.co/ABX-AI/Starfinite-Laymons-7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: ABX-AI/Cerebral-Infinity-7B
        layer_range: [0, 20]
  - sources:
      - model: ABX-AI/Starfinite-Laymons-7B
        layer_range: [12, 32]
merge_method: passthrough
dtype: float16
```
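For reference, a configuration like the one above is typically applied with mergekit's command-line entry point. The config file name and output directory below are illustrative assumptions:

```shell
pip install mergekit

# config.yaml holds the slices/passthrough definition shown above;
# mergekit stitches layers 0-19 of one model onto layers 12-31 of the
# other, yielding the 9B frankenmerge
mergekit-yaml config.yaml ./Starbral-Infinimons-9B
```

Note that the overlapping layer ranges (0-20 and 12-32) are what push the merged model past 7B: the passthrough method simply concatenates the selected layer slices rather than averaging weights.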