
Quantization made by Richard Erkhov.

Github

Discord

Request more models

Eris_PrimeV4.20-Vision-32k-7B - GGUF

Original model description:

base_model:
  - l3utterfly/mistral-7b-v0.2-layla-v4
  - Nitral-AI/Eris_PrimeV4-Vision-32k-7B
library_name: transformers
tags:
  - mergekit
  - merge
license: other


Eris Prime: Version 4.20 "Blaze it" 32k Edition

"Eris decided to pick up some recreational hobbies that led her to become a bit unhinged i guess."

Quants available here, thanks to Lewdiculous: https://huggingface.co/Lewdiculous/Eris_PrimeV4.20-Vision-32k-7B-GGUF-IQ-Imatrix
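If you prefer to fetch one of those quants programmatically, a minimal sketch with huggingface_hub is shown below. The filename used here is a hypothetical placeholder, not a name taken from that repo, so check the linked repo for the actual quant file names first.

```python
# Minimal sketch: download a single GGUF quant from the linked repo.
# The filename below is a hypothetical placeholder; check the repo for
# the actual quant file names before running.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="Lewdiculous/Eris_PrimeV4.20-Vision-32k-7B-GGUF-IQ-Imatrix",
    filename="Eris_PrimeV4.20-Vision-32k-7B-Q4_K_M-imat.gguf",  # placeholder name
)
print(gguf_path)
```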

Vision/multimodal capabilities:

If you want to use vision functionality:

  • You must use the latest version of KoboldCpp.

To use the multimodal/vision capabilities of this model, you need to load the specified mmproj file, which can be found inside this model repo.

  • You can load the mmproj using the corresponding section of the KoboldCpp interface; a scripted alternative is sketched below.

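The card's recommended path is KoboldCpp's UI, as above. Purely as an illustrative alternative, here is a minimal llama-cpp-python sketch for pairing a GGUF quant with the mmproj file, assuming the mmproj is a LLaVA-style projector; the file names and image URL are placeholder assumptions, not names from this repo.

```python
# Illustrative sketch only (not the card's KoboldCpp workflow), assuming
# llama-cpp-python is installed and the GGUF + mmproj files are local.
# File names and the image URL below are placeholders.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")  # placeholder name
llm = Llama(
    model_path="Eris_PrimeV4.20-Vision-32k-7B.Q4_K_M.gguf",  # placeholder name
    chat_handler=chat_handler,
    n_ctx=32768,  # matches the model's advertised 32k context
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/image.png"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ]
)
print(response["choices"][0]["message"]["content"])
```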


GGUF details:

  • Model size: 7.24B params
  • Architecture: llama
  • Quantizations available: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
