Description

This repo contains GGUF format model files for cloudyu/Yi-34Bx2-MoE-60B.

About GGUF

GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It replaces GGML, which is no longer supported by llama.cpp.

How to run GGUF with llama.cpp on an A10 (24 GB VRAM)

   git clone https://github.com/ggerganov/llama.cpp.git
   cd llama.cpp/
   # Build with CUDA (cuBLAS) support
   make LLAMA_CUBLAS=1
   # -i starts interactive mode; -ngl 36 offloads 36 layers to the GPU
   ./main --model ./cloudyu_Yi-34Bx2-MoE-60B_Q3_K_XS.gguf -p "What is the biggest animal?" -i -ngl 36
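As a rough sanity check on why `-ngl 36` fits on a 24 GB A10, the sketch below estimates how much of the quantized model lands on the GPU. The layer count and effective bits per weight are assumptions for illustration, not values stated on this card:

```python
# Rough VRAM estimate for partial GPU offload with llama.cpp.
# Assumed values (hypothetical, not from the model card):
PARAMS = 60.8e9          # parameter count reported for the model
BITS_PER_WEIGHT = 3.2    # assumed effective bits/weight for a Q3_K quant
N_LAYERS = 60            # assumed transformer layer count
NGL = 36                 # layers offloaded to the GPU (-ngl 36)

file_size_gb = PARAMS * BITS_PER_WEIGHT / 8 / 1e9   # total quantized weights
per_layer_gb = file_size_gb / N_LAYERS              # weights per layer (rough)
offloaded_gb = per_layer_gb * NGL                   # VRAM used by offloaded layers

print(f"quantized weights ~ {file_size_gb:.1f} GB")
print(f"offloaded to GPU  ~ {offloaded_gb:.1f} GB of 24 GB")
```

Under these assumptions roughly 14-15 GB of weights go to the GPU, leaving headroom on a 24 GB card for the KV cache and compute buffers; the remaining layers run on the CPU.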
GGUF
Model size: 60.8B params
Architecture: llama
