
Hop 0.41 RAW RAG Mistral

  • Developed by: Activ-Hop
  • License: apache-2.0
  • Finetuned from model: unsloth/mistral-7b-bnb-4bit
  • Dataset used: Activ-Hop/hop-0.4-rag (1,500 samples)

This is a Mistral-based model fine-tuned with Unsloth and Hugging Face's TRL library.

Merged to 16-bit weights.

max_seq_length = 4096

LoRA r: 64 -> 2.26% of all parameters trainable
LoRA alpha: 128
Training: 1 epoch (187 steps)
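The hyperparameters above can be sketched as an Unsloth + TRL training setup. This is a minimal reconstruction, not the author's actual script: the `target_modules` list and the TrainingArguments beyond epochs are assumptions, and running it requires a GPU.

```python
# Hypothetical sketch of the fine-tuning setup described in this card.
# Hyperparameters (r=64, alpha=128, max_seq_length=4096, 1 epoch) are from
# the card; target_modules and other TrainingArguments are assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank and alpha from the card).
model = FastLanguageModel.get_peft_model(
    model,
    r=64,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=load_dataset("Activ-Hop/hop-0.4-rag", split="train"),
    args=TrainingArguments(
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```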

RAG skills: pretty damn good. The next one will probably be an A+RAG model.

The prompt template is as follows (the system prompt is in French; an English translation is given below):

"""<|systeme|>Tu es Hop, un chatbot représentant l'école d'ingénieurs ESAIP. Ton rôle est d'aider et d'assister des étudiants et des adultes sur des sujets concernant l'école, les formations, mais aussi de sensibiliser aux enjeux du numérique et de la gestion des risques pour un avenir responsable.
<|documents|>{}
<|question|>{}
<|reponse|>{}"""

(English: "You are Hop, a chatbot representing the ESAIP engineering school. Your role is to help and assist students and adults with topics concerning the school and its programs, and also to raise awareness of digital and risk-management issues for a responsible future.")
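Filling the template for inference can be sketched like this. The template string is taken verbatim from the card; the helper function name and the example documents/question are my own.

```python
# Minimal sketch: fill the Hop RAG prompt template with retrieved
# documents and a user question, leaving the <|reponse|> slot empty
# so the model generates the answer. Template text is from the card;
# build_prompt and the example inputs are illustrative.
PROMPT_TEMPLATE = (
    "<|systeme|>Tu es Hop, un chatbot représentant l'école d'ingénieurs ESAIP. "
    "Ton rôle est d'aider et d'assister des étudiants et des adultes sur des sujets "
    "concernant l'école, les formations, mais aussi de sensibiliser aux enjeux du "
    "numérique et de la gestion des risques pour un avenir responsable.\n"
    "<|documents|>{}\n"
    "<|question|>{}\n"
    "<|reponse|>{}"
)

def build_prompt(documents: str, question: str) -> str:
    # Empty last slot: the model completes the text after <|reponse|>.
    return PROMPT_TEMPLATE.format(documents, question, "")

prompt = build_prompt(
    "L'ESAIP propose des formations en numérique et gestion des risques.",
    "Quelles formations propose l'ESAIP ?",
)
```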

GGUF models -> Activ-Hop/hop-0.41-RAW-RAG-mistral-gguf
LoRA adapters -> Activ-Hop/hop-0.41-RAW-RAG-mistral-lora
Dataset -> Activ-Hop/hop-0.4-rag

PS: I finally remembered that the alpha/r ratio for LoRA should always be greater than 1... the next one should have a higher alpha.
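The note above refers to the LoRA scaling factor: in the standard LoRA formulation the low-rank update is scaled by alpha/r. A quick check with this card's values (standard formulation, not anything model-specific):

```python
# Standard LoRA applies the update as W + (alpha / r) * (B @ A),
# so the effective adapter scaling is alpha / r.
r = 64       # LoRA rank from the card
alpha = 128  # LoRA alpha from the card
scaling = alpha / r  # 2.0 for this run
```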

