Ojaswa committed
Commit 4ee6937 (verified) · 1 parent: be0580a

Update README.md

Files changed (1): README.md (+8, -7)
README.md CHANGED
@@ -1,6 +1,11 @@
 ---
 base_model: Qwen/Qwen2.5-3B-Instruct
 library_name: peft
+license: apache-2.0
+datasets:
+- gbharti/wealth-alpaca_lora
+language:
+- en
 ---
 
 
@@ -33,7 +38,7 @@ Financial question answering
 Analyzing financial reports
 Conversational AI for finance-related customer support
 
-[More Information Needed]
+
 
 ### Downstream Use [optional]
 
@@ -42,7 +47,7 @@ Can be integrated into other systems for:
 Financial sentiment analysis
 Advanced financial data retrieval pipelines
 
-[More Information Needed]
+
 
 ### Out-of-Scope Use
 
@@ -64,7 +69,6 @@ Limitations: Focused on English financial data and may not generalize to other l
 
 ### Recommendations
 
-<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
 
 Use the model with a RAG for best results
 
@@ -101,9 +105,7 @@ Dropout: 0.1
 Optimizer: AdamW with learning rate 2e-5.
 Hardware: Trained on consumer-grade GPUs (NVIDIA L4).
 
-#### Preprocessing [optional]
 
-[More Information Needed]
 
 
 #### Training Hyperparameters
@@ -118,7 +120,6 @@ Batch Size: 32
 Training Time: Approximately 24 hours
 
 
-[More Information Needed]
-### Framework versions
+
 
 - PEFT 0.13.2
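
The updated front matter names Qwen/Qwen2.5-3B-Instruct as the base model and peft as the library, and the card pins PEFT 0.13.2 under framework versions. A minimal loading sketch under those assumptions; `ADAPTER_REPO_ID` is a placeholder, since this commit page does not show the adapter's actual repository id:

```python
# Minimal inference sketch based on the card's front matter.
# "ADAPTER_REPO_ID" is a placeholder -- the commit page does not show
# the actual adapter repository id.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B-Instruct")

# Attach the fine-tuned adapter (the card lists PEFT 0.13.2).
model = PeftModel.from_pretrained(base, "ADAPTER_REPO_ID")

prompt = "Explain dollar-cost averaging in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```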
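The card's stated training details (dropout 0.1, AdamW at 2e-5, batch size 32, the gbharti/wealth-alpaca_lora dataset added to the front matter) map onto a standard PEFT fine-tuning setup. A hedged sketch: the LoRA rank, alpha, and output directory below are illustrative placeholders the card does not specify.

```python
# Fine-tuning sketch consistent with the card's stated hyperparameters.
# r, lora_alpha, and output_dir are placeholders not taken from the card.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

dataset = load_dataset("gbharti/wealth-alpaca_lora")  # dataset from the new front matter
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B-Instruct")

lora = LoraConfig(
    r=16,              # placeholder: rank not stated in the card
    lora_alpha=32,     # placeholder: alpha not stated in the card
    lora_dropout=0.1,  # "Dropout: 0.1" per the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)

args = TrainingArguments(
    output_dir="qwen2.5-3b-finance-lora",  # placeholder path
    per_device_train_batch_size=32,        # "Batch Size: 32"
    learning_rate=2e-5,                    # "AdamW with learning rate 2e-5"
    optim="adamw_torch",                   # AdamW, as the card states
)
```

Pairing `args` with a tokenized dataset and a standard `Trainer` (or trl's `SFTTrainer`) would complete the loop; that wiring is omitted here since the card does not describe it.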