
ELYZA-japanese-Llama-2-MoE-2x7B-v0.1-GGUF

Overview

This is a quantized GGUF version of Aratako/ELYZA-japanese-Llama-2-MoE-2x7B-v0.1. Please refer to the original model for license details and other information.

Currently, only Q4_K_M is available. If there is demand, other quantizations may be provided as well.

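A GGUF file like this is typically run with llama.cpp or a compatible runtime. The commands below are a minimal sketch, assuming the quantized file is named after the repository with a `Q4_K_M` suffix; check the repository's file list for the actual filename.

```shell
# Fetch the Q4_K_M GGUF from the Hugging Face Hub
# (the exact filename is an assumption -- verify it in the repo's file list)
huggingface-cli download Aratako/ELYZA-japanese-Llama-2-MoE-2x7B-v0.1-GGUF \
  ELYZA-japanese-Llama-2-MoE-2x7B-v0.1-Q4_K_M.gguf --local-dir .

# Run a single prompt with llama.cpp (llama-cli from a recent build)
./llama-cli -m ELYZA-japanese-Llama-2-MoE-2x7B-v0.1-Q4_K_M.gguf \
  -p "こんにちは。自己紹介をしてください。" -n 256
```

Any runtime that reads GGUF (llama.cpp, llama-cpp-python, text-generation-webui, etc.) should work the same way once the file is downloaded.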

GGUF

- Model size: 11.1B params
- Architecture: llama
- Quantization: 4-bit
