justsomerandomdude264 committed on
Commit a3399d3 · verified · 1 Parent(s): 7b13be0

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -28,7 +28,7 @@ This is a Large Language Model (LLM) fine-tuned to solve sst problems with detai
  - **Fine-tuning Method**: PEFT (Parameter-Efficient Fine-Tuning) with QLoRA
  - **Quantization**: 4-bit quantization for reduced memory usage
  - **Training Framework**: Unsloth, optimized for efficient fine-tuning of large language models
- - **Training Environment**: Google Colab (free tier), NVIDIA T4 GPU (12GB VRAM), 12GB RAM
+ - **Training Environment**: Google Colab (free tier), NVIDIA T4 GPU (16GB VRAM), 12GB RAM
  - **Dataset Used**: Combination of ambrosfitz/10k_history_data_v4, adamo1139/basic_economics_questions_ts_test_1, adamo1139/basic_economics_questions_ts_test_2, adamo1139/basic_economics_questions_ts_test_3, adamo1139/basic_economics_questions_ts_test_4
 
  ## Capabilities
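The bullet list in the diff describes a QLoRA setup: a 4-bit quantized base model with PEFT adapters trained on top. A minimal sketch of what such a configuration can look like with the Hugging Face `transformers` and `peft` libraries is below; the base model name and all LoRA hyperparameters (`r`, `lora_alpha`, `target_modules`) are illustrative assumptions, not values taken from this commit, and the actual training used Unsloth's own wrappers rather than these calls directly.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantization for reduced memory usage, as the README states.
# float16 compute is used here because the T4 GPU lacks bfloat16 support.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA adapter configuration; these hyperparameters are assumed defaults.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Placeholder model id: the commit does not name the base model.
model = AutoModelForCausalLM.from_pretrained(
    "some/base-model",
    quantization_config=bnb_config,
)
model = get_peft_model(model, lora_config)  # only adapter weights are trainable
```

With this kind of setup only the small LoRA adapter matrices receive gradients, which is what makes fine-tuning feasible on a free-tier Colab T4.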