justsomerandomdude264 committed (verified)
Commit: 5f0df3e
Parent(s): 5bac16c

Update README.md

Files changed (1):
  1. README.md (+1, -1)
README.md CHANGED
@@ -24,7 +24,7 @@ This is a Large Language Model (LLM) fine-tuned to solve math problems with deta
  - **Fine-tuning Method**: PEFT (Parameter-Efficient Fine-Tuning) with QLoRA
  - **Quantization**: 4-bit quantization for reduced memory usage
  - **Training Framework**: Unsloth, optimized for efficient fine-tuning of large language models
- - **Training Environment**: Google Colab (free tier), NVIDIA T4 GPU (12GB VRAM), 12GB RAM
+ - **Training Environment**: Google Colab (free tier), NVIDIA T4 GPU (16GB VRAM), 12GB RAM
  - **Dataset Used**: TIGER-Lab/MathInstruct (Yue, X., Qu, X., Zhang, G., Fu, Y., Huang, W., Sun, H., Su, Y., & Chen, W. (2023). MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning. *arXiv preprint arXiv:2309.05653*.
  ), 560 selected math problems and solutions
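For context, here is a minimal sketch of what a QLoRA fine-tune matching the bullets above could look like with Unsloth and TRL. Only the 4-bit quantization, the PEFT/QLoRA approach, the Unsloth framework, and the TIGER-Lab/MathInstruct dataset (560 examples) come from the README; the base model name, LoRA rank, sequence length, prompt template, training hyperparameters, and the way the 560 examples are selected are illustrative assumptions, not taken from this repository.

```python
# Hedged sketch of a QLoRA fine-tune with Unsloth + TRL on a free-tier Colab T4.
# Values marked "assumption" are not stated in the README.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load a 4-bit-quantized base model (the README does not name the base model;
# this checkpoint is an assumption for illustration).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # assumption
    max_seq_length=2048,                       # assumption
    load_in_4bit=True,                         # 4-bit quantization per the README
)

# Attach LoRA adapters so only a small set of parameters is trained (PEFT/QLoRA).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,            # assumption: LoRA rank
    lora_alpha=16,   # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# TIGER-Lab/MathInstruct, trimmed to 560 examples; the actual selection
# criterion is not stated, so a simple head slice stands in for it here.
dataset = load_dataset("TIGER-Lab/MathInstruct", split="train").select(range(560))

def to_text(example):
    # Fold instruction and solution into one training string (prompt format is an assumption).
    return {"text": f"### Problem:\n{example['instruction']}\n\n### Solution:\n{example['output']}"}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,   # small batch to fit a 16GB T4
        gradient_accumulation_steps=4,
        num_train_epochs=1,              # assumption
        learning_rate=2e-4,              # assumption
        fp16=True,                       # T4 does not support bf16
        logging_steps=10,
    ),
)
trainer.train()
```

The batch size and gradient-accumulation values are chosen only to stay within a free-tier T4's memory; they, like the other hyperparameters above, would need to be adjusted to reproduce the released checkpoint.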