
Quantization made by Richard Erkhov.

Github

Discord

Request more models

VersatiLlama-Llama-3.2-3B-Instruct-Abliterated - GGUF

Original model description:

base_model: meta-llama/Llama-3.2-3B-Instruct
license: cc-by-4.0
language: en
pipeline_tag: text-generation
library_name: transformers

Model Card for Model ID

VersatiLlama-Llama-3.2-3B-Instruct-Abliterated


Model Description

Small but smart.

Fine-tuned on a vast dataset of conversations, this model generates human-like text with high performance for its size.

It is very versatile for its parameter count, offering capability close to Llama 3.1 8B Instruct.

Feel free to check it out!

Check the quantized model here: Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-Imatrix-GGUF

This model was fine-tuned for 5 hours on an NVIDIA T4 GPU (15 GB VRAM).

  • Developed by: Meta AI
  • Fine-Tuned by: Devarui379
  • Model type: Transformers
  • Language(s) (NLP): English
  • License: cc-by-4.0

Model Sources

Base model: meta-llama/Llama-3.2-3B-Instruct

Uses

Use your desired system prompt when running the model in LM Studio. The Jinja chat template seems to work best, but feel free to experiment.
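For reference, the Jinja chat template for Llama 3.2 instruct models expands to the standard Llama 3 special-token format. The sketch below assembles a single-turn prompt by hand, assuming the standard Llama 3 token names; verify against the template actually embedded in the GGUF file.

```python
# Minimal sketch of the Llama 3-style chat format the Jinja template produces.
# Token names (<|begin_of_text|>, <|start_header_id|>, <|eot_id|>) are the
# standard Llama 3 special tokens -- an assumption; check the GGUF's template.

def build_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama 3 chat format."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # Trailing assistant header cues the model to generate its reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_prompt("You are a helpful assistant.", "Hello!"))
```

Frontends like LM Studio apply this template automatically; building it manually is only needed when calling the model through a raw completion API.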

Technical Specifications

Model Architecture and Objective

Llama 3.2

Hardware

NVIDIA TESLA T4 GPU 15GB VRAM

GGUF
Model size: 3.21B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
