Model Card for Mistral-7B-Instruct-v0.2 Fine-Tuned on X Community Notes
We fine-tuned mistralai/Mistral-7B-Instruct-v0.2 on a subset (n=2000) of X's Community Notes data.
Data source: https://communitynotes.x.com/guide/en/under-the-hood/download-data
We filtered the Community Notes data to keep only notes rated as helpful, then formatted those notes for fine-tuning.
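The filtering step can be sketched as follows. This is a minimal illustration using the standard library only; the column names (`noteId`, `currentStatus`) and the status value `CURRENTLY_RATED_HELPFUL` are assumptions about the TSV schema of the public download, and the sample data is synthetic:

```python
import csv
import io

# Synthetic sample mimicking the note status history TSV; the real
# column names and status values are assumptions, not confirmed here.
sample_tsv = """noteId\tcurrentStatus
1001\tCURRENTLY_RATED_HELPFUL
1002\tNEEDS_MORE_RATINGS
1003\tCURRENTLY_RATED_HELPFUL
"""

def helpful_note_ids(tsv_text: str) -> list[str]:
    """Return the IDs of notes currently rated helpful."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return [
        row["noteId"]
        for row in reader
        if row["currentStatus"] == "CURRENTLY_RATED_HELPFUL"
    ]

print(helpful_note_ids(sample_tsv))  # ['1001', '1003']
```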
Model Details
Model Description
- Finetuned from: mistralai/Mistral-7B-Instruct-v0.2
Training Hyperparameters
```python
from transformers import TrainingArguments

training_arguments = TrainingArguments(
    output_dir="./results",
    num_train_epochs=1,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=1,
    optim="paged_adamw_32bit",
    save_steps=50,
    logging_steps=1,
    learning_rate=2e-4,
    weight_decay=0.001,
    fp16=False,
    bf16=False,
    max_grad_norm=0.3,
    max_steps=-1,
    warmup_ratio=0.03,
    group_by_length=True,
    lr_scheduler_type="constant",
)
```
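Since the base model is Mistral-7B-Instruct, each training example should be rendered in Mistral's `[INST] ... [/INST]` chat format. A minimal sketch of building one training string; the instruction wording and the (post, note) field pairing are assumptions for illustration, not the exact prompt used during fine-tuning:

```python
def build_example(post_text: str, note_text: str) -> str:
    """Format one (post, note) pair in Mistral's [INST] template.

    The instruction wording below is a hypothetical choice; the actual
    prompt used for this fine-tune is not documented in this card.
    """
    instruction = (
        "Write a community note that adds context to the following post:\n"
        f"{post_text}"
    )
    return f"<s>[INST] {instruction} [/INST] {note_text}</s>"

example = build_example(
    "The moon landing was filmed in a studio.",
    "NASA's Apollo missions are independently verified by multiple sources.",
)
print(example)
```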