---
library_name: transformers
license: apache-2.0
language:
- en
tags:
- lora
- finetuned
- gemma
- causal-lm
datasets:
- HuggingFaceH4/Multilingual-Thinking
base_model:
- google/gemma-2b
co2_eq_emissions:
  emissions: 10
  source: "N/A"
  training_type: "fine-tuning using LoRA"
  geographical_location: "EU"
  hardware_used: "NVIDIA T4 GPU (Google Cloud)"
---

# MCOLLM-2b

MCOLLM-2b is a lightweight 2B-parameter model: a LoRA fine-tuned version of Google's Gemma-2B, optimized for step-by-step thinking.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Martico2432
<!-- - **Funded by [optional]:** [More Information Needed] -->
<!-- - **Shared by [optional]:** [More Information Needed] -->
- **Model type:** Causal Language Model (transformer)
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** [google/gemma-2b](https://huggingface.co/google/gemma-2b)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository (base model):** [google/gemma-2b](https://huggingface.co/google/gemma-2b)
<!-- - **Paper [optional]:** [More Information Needed] -->
<!-- - **Demo [optional]:** [More Information Needed] -->

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

- Chatbots and conversational AI applications
- Text generation for creative or educational purposes
- Experimentation with LoRA fine-tuning on small datasets

### Downstream Use

- Can be further fine-tuned for specific downstream tasks

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

- Not designed for high-stakes decision making (legal, medical, or safety-critical applications)
- May generate biased, offensive, or factually incorrect text
- Limited generalization due to the small fine-tuning dataset

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

- Fine-tuned on a very small dataset (1,000 examples), which creates a risk of overfitting and narrow outputs
- May inherit biases from the base Gemma-2B model
- Outputs should be critically evaluated before deployment

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

- Monitor outputs for unsafe or biased content
- Use in low-stakes research or prototyping environments

## How to Get Started with the Model

You can get started with the example included in the repository files.
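As a minimal inference sketch (it does not reproduce the bundled example exactly), the snippet below assumes the LoRA adapter is published as a standalone PEFT adapter; the repository id `Martico2432/MCOLLM-2b` is a placeholder and should be replaced with the actual adapter location. If the adapter was instead merged into a full checkpoint, load that checkpoint directly with `AutoModelForCausalLM` and skip the `PeftModel` step.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-2b"
adapter_id = "Martico2432/MCOLLM-2b"  # placeholder; point this at the actual LoRA adapter repo

# Load the Gemma-2B base model and tokenizer, then attach the LoRA adapter on top.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# Ask for step-by-step reasoning and generate a completion.
prompt = "Think step by step: a train travels 60 km in 45 minutes. What is its average speed in km/h?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that the base `google/gemma-2b` checkpoint is gated on the Hugging Face Hub, so you may need to accept its license and authenticate (for example with `huggingface-cli login`) before downloading.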

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- 1,000 step-by-step "thinking" examples from [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking)
- Tokenized with the Gemma-2B tokenizer
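
As a hedged sketch of loading this data (the `train` split name is an assumption, and the exact filtering or formatting applied before fine-tuning is not documented here):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Load the reasoning dataset and keep at most 1,000 examples, matching the count on this card.
dataset = load_dataset("HuggingFaceH4/Multilingual-Thinking", split="train")
dataset = dataset.select(range(min(1000, len(dataset))))

# The same tokenizer as the base model, as noted above.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

print(dataset)
print(dataset[0])
```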

### Training Procedure

- Fine-tuning via LoRA on top of the Gemma-2B base model
- 3 epochs with a small learning rate

#### Training Hyperparameters

- **Training regime:** fp16 mixed precision <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
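
For reference, a configuration along the following lines could implement this setup. Only the base model, the 3 epochs, and fp16 mixed precision come from this card; the LoRA rank, alpha, dropout, target modules, learning rate, and batch sizes are illustrative assumptions rather than the recorded values.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# LoRA adapter configuration; rank, alpha, dropout, and target modules are assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Wrap the frozen Gemma-2B base model so that only the LoRA matrices are trainable.
base_model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()

# Documented on this card: 3 epochs and fp16 mixed precision. The learning rate and
# batch sizes are placeholders, since the exact values are not recorded here.
training_args = TrainingArguments(
    output_dir="mcollm-2b-lora",
    num_train_epochs=3,
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    fp16=True,
    logging_steps=10,
)
```

The resulting `model` and `training_args` can then be passed to a `transformers.Trainer` (or `trl.SFTTrainer`) together with the tokenized dataset from the section above.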

## Model Examination

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

- **Hardware Type:** NVIDIA T4 GPU
- **Hours used:** 0.5
- **Cloud Provider:** Google Cloud
- **Compute Region:** EU
- **Carbon Emitted:** 0.01 kg CO2eq
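
For context, this figure is consistent with a rough back-of-the-envelope estimate: a T4 draws on the order of 70 W, so 0.5 hours of use corresponds to about 0.035 kWh, which at an assumed EU grid intensity of roughly 0.3 kg CO2eq/kWh gives about 0.01 kg CO2eq. Estimates of this kind can be produced with the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) from [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).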