Model Card for spankevich/llm-course-hw3-dora

This model is a fine-tuned version of OuteAI/Lite-Oute-1-300M-Instruct for a tweet tone classification task. The base model achieved an F1 score of 0.08, while the fine-tuned version reaches 0.51 after less than 15 minutes of fine-tuning on a single A100.

Parameters

DoRA was used with r=8 and alpha=16 to fine-tune the "k_proj" and "v_proj" attention projections.

Training parameters

BATCH_SIZE = 32
LEARNING_RATE = 3e-4
NUM_EPOCHS = 2

Metrics

The fine-tuned model achieves an F1 score of 0.51 on the test set.
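For reference, an F1 metric of this kind can be computed with a small macro-F1 routine (a pure-Python sketch; whether the card's 0.51 used macro or weighted averaging is an assumption):

```python
# Minimal macro-F1 sketch (pure Python). Whether this card's 0.51 was
# computed with macro averaging is an assumption.
def macro_f1(y_true, y_pred):
    labels = sorted(set(y_true) | set(y_pred))
    scores = []
    for lab in labels:
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

# Toy check on the four example predictions shown on this card
gold = ["neutral", "neutral", "neutral", "positive"]
pred = ["neutral", "positive", "neutral", "neutral"]
print(round(macro_f1(gold, pred), 3))  # → 0.333
```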


Examples

| Tweet | Reference | Prediction |
|---|---|---|
| "Ben Smith / Smith (concussion) remains out of the lineup Thursday, Curtis #NHL #SJ" | neutral | neutral |
| "Sorry bout the stream last night I crashed out but will be on tonight for sure. Then back to Minecraft in pc tomorrow night." | neutral | positive |
| "Chase Headley's RBI double in the 8th inning off David Price snapped a Yankees streak of 33 consecutive scoreless innings against Blue Jays" | neutral | neutral |
| "@user Alciato: Bee will invest 150 million in January, another 200 in the Summer and plans to bring Messi by 2017" | positive | neutral |
