Text Generation
GGUF
English

This is my first quantization: a q4_0 GGML (ggjtv3) and GGUFv2 quantization of the model https://huggingface.co/acrastt/OmegLLaMA-3B. I hope it works fine. 🤗

Prompt format:

Interests: {interests}
Conversation:
You: {prompt}
Stranger: 
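As a small sketch, the template above can be filled in like this before handing the string to a GGUF runtime such as llama.cpp (the helper function name and example values are illustrative, not part of the model card):

```python
# Illustrative helper for the OmegLLaMA prompt format shown above.
# Generation is expected to continue after the trailing "Stranger: ".

def build_prompt(interests, user_message):
    """Fill the OmegLLaMA-3B prompt template (hypothetical helper)."""
    return (
        f"Interests: {', '.join(interests)}\n"
        "Conversation:\n"
        f"You: {user_message}\n"
        "Stranger: "
    )

print(build_prompt(["music", "games"], "Hi! What do you like to do?"))
```

Note the trailing space after `Stranger:` — the model completes the stranger's reply from that point.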
GGUF
Model size: 3.43B params
Architecture: llama
Quantization: 4-bit
Dataset used to train Aryanne/OmegLLaMA-3B-ggml-and-gguf