**RoshidereRP: A Specialized Llama 3.1 8B Finetune**

We present RoshidereRP, a finetune of the Llama 3.1 8B base model trained on a proprietary 1.1-million-token dataset. The dataset comprises a curated selection of fanfiction and English-language books centered on the theme of "Alya Sometimes Hides Her Feelings in Russian".

**Technical Details**

  • Training Methodology: Finetuning was performed with 8-bit LoRA (Low-Rank Adaptation), applying adapter updates to the up, down, q, v, m, and lm_head components (a configuration sketch follows this list).
  • Dataset: The dataset was carefully crafted to focus on the specific theme, ensuring a high degree of relevance and coherence.
  • Experimental Goals: This finetune was undertaken to explore the capabilities of Augmentoolkit 3 and to investigate various finetuning methods.
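
As a rough illustration, the adapter setup described above could be expressed with the peft and bitsandbytes libraries along the following lines. This is a minimal sketch rather than the actual training script: the rank, alpha, and dropout values are assumptions not stated in this card, and the "m" component listed above does not map to a standard Llama module name, so it is omitted here.

```python
# Minimal 8-bit LoRA setup sketch (assumed hyperparameters, not the exact recipe).
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "meta-llama/Llama-3.1-8B"  # Llama 3.1 8B base model, per this card

model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # 8-bit base weights
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,               # assumed rank; not stated in the card
    lora_alpha=32,      # assumed scaling; not stated in the card
    lora_dropout=0.05,  # assumed dropout; not stated in the card
    # Targeted components from the card; the "m" entry is omitted because it
    # does not correspond to a standard Llama module name.
    target_modules=["q_proj", "v_proj", "up_proj", "down_proj", "lm_head"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```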

**Key Features**

  • Specialized Domain Knowledge: RoshidereRP exhibits enhanced understanding and generation capabilities within the domain of "Alya Sometimes Hides Her Feelings in Russian".
  • Improved Performance: The finetune outperforms the base model, particularly on tasks related to the target theme.

**Usage**

We invite researchers and developers to explore RoshidereRP (published as c4tdr0ut/RoshidereRP-8B) in their own applications and to build further on this specialized finetune.
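
For example, the checkpoint can be loaded for inference with transformers roughly as follows. This is a minimal sketch: the prompt and sampling settings are illustrative and not taken from this card, and a continuation-style prompt is used because this is a finetune of the base (non-instruct) model.

```python
# Illustrative inference sketch (example prompt and sampling settings are assumptions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "c4tdr0ut/RoshidereRP-8B"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # weights are published in BF16
    device_map="auto",
)

# Continuation-style prompt, since the model is not instruction-tuned.
prompt = "Alya glanced away and muttered something under her breath in Russian."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```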

Weights are distributed as Safetensors in BF16 (8.03B parameters).
