---
license: apache-2.0
base_model: abeja/ABEJA-Qwen2.5-7b-Japanese-v0.1
base_model_relation: quantized
language:
- ja
---

Quantized using turboderp's ExLlamaV2 v0.2.8.

**[2.2bpw](https://huggingface.co/rioshiina/ABEJA-Qwen2.5-7b-Japanese-v0.1-exl2/tree/2.2bpw)**

**[3.0bpw](https://huggingface.co/rioshiina/ABEJA-Qwen2.5-7b-Japanese-v0.1-exl2/tree/3.0bpw)**

**[4.0bpw](https://huggingface.co/rioshiina/ABEJA-Qwen2.5-7b-Japanese-v0.1-exl2/tree/4.0bpw)**

**[5.0bpw](https://huggingface.co/rioshiina/ABEJA-Qwen2.5-7b-Japanese-v0.1-exl2/tree/5.0bpw)**

**[6.0bpw](https://huggingface.co/rioshiina/ABEJA-Qwen2.5-7b-Japanese-v0.1-exl2/tree/6.0bpw)**

**[7.0bpw](https://huggingface.co/rioshiina/ABEJA-Qwen2.5-7b-Japanese-v0.1-exl2/tree/7.0bpw)**

**[8.0bpw](https://huggingface.co/rioshiina/ABEJA-Qwen2.5-7b-Japanese-v0.1-exl2/tree/8.0bpw)**

## Calibration Dataset

[TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm)

## ABEJA-Qwen2.5-7b-Japanese-v0.1-exl2

- Model creator: [abeja](https://huggingface.co/abeja)
- Original model: [ABEJA-Qwen2.5-7b-Japanese-v0.1](https://huggingface.co/abeja/ABEJA-Qwen2.5-7b-Japanese-v0.1)
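
## Usage

A minimal loading sketch using the ExLlamaV2 Python API, based on the library's standard generator example. The local directory name and prompt are placeholders, not part of this repository; running it requires a CUDA GPU and a downloaded quant revision (e.g. the 4.0bpw branch).

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Placeholder: local directory containing one downloaded quant revision
model_dir = "./ABEJA-Qwen2.5-7b-Japanese-v0.1-exl2-4.0bpw"

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
output = generator.generate(prompt="日本の首都はどこですか?", max_new_tokens=100)
print(output)
```

Lower bpw revisions fit in less VRAM at some cost in output quality; pick the largest one that fits your GPU.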