
This model is the result of second-stage pre-training of Google's Gemma 2B (https://huggingface.co/google/gemma-2b) for roughly 150B tokens on a combination of the English and Russian subsets of the OSCAR and Wikipedia datasets.

This is a raw pre-trained model, created with further fine-tuning in mind. The goal of this project is to further research the cross-lingual capabilities of open-source LLMs and to create a strong open-source foundational LLM that is fluent in Russian. More details will follow in an upcoming blog post and/or research paper.
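
Since the checkpoint is published in the standard Hugging Face format, it can be loaded with `transformers` for inference or as a starting point for fine-tuning. Below is a minimal sketch; the prompt and generation settings are arbitrary examples, not a recommendation.

```python
# Minimal loading/generation sketch; bfloat16 matches the published tensor type.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Defetya/gemma-2b-ru")
model = AutoModelForCausalLM.from_pretrained(
    "Defetya/gemma-2b-ru",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# This is a raw base model, so prompt it with text to continue rather than a chat template.
inputs = tokenizer("Машинное обучение — это", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```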

This model was pre-trained with a fork of EasyLM (a JAX-based framework) on a Google TPU v4-32, generously provided under the TRC program. Training loss reached roughly 1.5, with a learning rate of roughly 5e-5.

I'm planning to release a chat model that will undergo full-parameter SFT and DPO on Ilya Gusev's datasets.
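
For reference, a hypothetical full-parameter SFT setup could look like the sketch below, here using TRL's `SFTTrainer`; the dataset id and hyperparameters are placeholders chosen for illustration, not the actual training recipe.

```python
# Hypothetical SFT sketch with TRL; the dataset and settings below are illustrative placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: any instruction-tuning dataset with a "text" or "messages" column works here.
train_dataset = load_dataset("IlyaGusev/saiga_scored", split="train")

trainer = SFTTrainer(
    model="Defetya/gemma-2b-ru",
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="gemma-2b-ru-sft", bf16=True),
)
trainer.train()
```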

Model size: 2.51B parameters · Tensor type: BF16 (safetensors)
