---
base_model:
- google/gemma-3-4b-it-qat-int4-unquantized
- huihui-ai/gemma-3-4b-it-abliterated
- ZySec-AI/gemma-3-4b-document-writer
- google/gemma-3-4b-it-qat-q4_0-unquantized
- VIDraft/Gemma-3-R1984-4B
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [google/gemma-3-4b-it-qat-q4_0-unquantized](https://huggingface.co/google/gemma-3-4b-it-qat-q4_0-unquantized) as the base model.

### Models Merged

The following models were included in the merge:

* [google/gemma-3-4b-it-qat-int4-unquantized](https://huggingface.co/google/gemma-3-4b-it-qat-int4-unquantized)
* [huihui-ai/gemma-3-4b-it-abliterated](https://huggingface.co/huihui-ai/gemma-3-4b-it-abliterated)
* [ZySec-AI/gemma-3-4b-document-writer](https://huggingface.co/ZySec-AI/gemma-3-4b-document-writer)
* [VIDraft/Gemma-3-R1984-4B](https://huggingface.co/VIDraft/Gemma-3-R1984-4B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: google/gemma-3-4b-it-qat-q4_0-unquantized
    layer_range: [0, 25]
  - model: google/gemma-3-4b-it-qat-int4-unquantized
    layer_range: [0, 25]
  - model: ZySec-AI/gemma-3-4b-document-writer
    layer_range: [0, 25]
  - model: huihui-ai/gemma-3-4b-it-abliterated
    layer_range: [0, 25]
  - model: VIDraft/Gemma-3-R1984-4B
    layer_range: [0, 25]
merge_method: model_stock
base_model: google/gemma-3-4b-it-qat-q4_0-unquantized
dtype: bfloat16
```
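For intuition, the Model Stock paper derives a single interpolation ratio `t` between the average of the fine-tuned weights and the base ("anchor") weights, from the average pairwise cosine similarity of the fine-tuned models' task vectors: `t = N·cosθ / (1 + (N−1)·cosθ)`. Below is a toy per-tensor sketch of that idea, not mergekit's actual implementation; the function name `model_stock_merge` is invented here for illustration.

```python
import numpy as np

def model_stock_merge(base, finetuned):
    """Toy per-tensor Model Stock merge (Jang et al., 2024).

    base: 1-D array of pretrained ("anchor") weights.
    finetuned: list of >= 2 fine-tuned weight arrays of the same shape.
    """
    # Task vectors: each fine-tuned model's offset from the base weights.
    deltas = [w - base for w in finetuned]
    n = len(deltas)

    # Average pairwise cosine similarity between task vectors (cos θ).
    cos_vals = []
    for i in range(n):
        for j in range(i + 1, n):
            denom = np.linalg.norm(deltas[i]) * np.linalg.norm(deltas[j])
            cos_vals.append(float(np.dot(deltas[i], deltas[j]) / denom))
    cos_theta = sum(cos_vals) / len(cos_vals)

    # Interpolation ratio from the paper: t = N·cosθ / (1 + (N−1)·cosθ).
    t = n * cos_theta / (1 + (n - 1) * cos_theta)

    # Merged weights: move from the base toward the fine-tuned average by t.
    w_avg = sum(finetuned) / n
    return t * w_avg + (1 - t) * base
```

When the task vectors are orthogonal (cosθ = 0), `t` is 0 and the merge stays at the base weights; when they all agree (cosθ = 1), `t` is 1 and the merge is the plain average. In practice, mergekit applies this kind of computation tensor by tensor when you run a config like the one above through its `mergekit-yaml` CLI.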