lbourdois committed
Commit e2a94ce · verified · 1 Parent(s): 4afa21f

Improve language tag


Hi! Since the model is multilingual, this PR adds languages other than English to the language tag to improve referencing. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13.

Files changed (1)
  1. README.md +59 -47
README.md CHANGED
@@ -1,48 +1,60 @@
- ---
- base_model: Qwen/Qwen2.5-7B-Instruct
- library_name: peft
- license: apache-2.0
- datasets:
- - medalpaca/medical_meadow_medical_flashcards
- language:
- - en
- pipeline_tag: text-generation
- ---
-
- # Model Card for FlowerTune-Qwen2.5-7B-Instruct-Medical-PEFT
-
- This PEFT adapter has been trained by using [Flower](https://flower.ai/), a friendly federated AI framework.
-
- The adapter and benchmark results have been submitted to the [FlowerTune LLM Medical Leaderboard](https://flower.ai/benchmarks/llm-leaderboard/medical/).
-
-
- ## Model Details
-
- Please check the following GitHub project for model details and evaluation results:
-
- [https://github.com/mrs83/FlowerTune-Qwen2.5-7B-Instruct-Medical](https://github.com/mrs83/FlowerTune-Qwen2.5-7B-Instruct-Medical)
-
-
- ## Training procedure
-
-
- The following `bitsandbytes` quantization config was used during training:
- - quant_method: bitsandbytes
- - _load_in_8bit: False
- - _load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: fp4
- - bnb_4bit_use_double_quant: False
- - bnb_4bit_compute_dtype: float32
- - bnb_4bit_quant_storage: uint8
- - load_in_4bit: True
- - load_in_8bit: False
-
- ### Framework versions
-
-
- - PEFT 0.6.2
+ ---
+ base_model: Qwen/Qwen2.5-7B-Instruct
+ library_name: peft
+ license: apache-2.0
+ datasets:
+ - medalpaca/medical_meadow_medical_flashcards
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ pipeline_tag: text-generation
+ ---
+
+ # Model Card for FlowerTune-Qwen2.5-7B-Instruct-Medical-PEFT
+
+ This PEFT adapter has been trained by using [Flower](https://flower.ai/), a friendly federated AI framework.
+
+ The adapter and benchmark results have been submitted to the [FlowerTune LLM Medical Leaderboard](https://flower.ai/benchmarks/llm-leaderboard/medical/).
+
+
+ ## Model Details
+
+ Please check the following GitHub project for model details and evaluation results:
+
+ [https://github.com/mrs83/FlowerTune-Qwen2.5-7B-Instruct-Medical](https://github.com/mrs83/FlowerTune-Qwen2.5-7B-Instruct-Medical)
+
+
+ ## Training procedure
+
+
+ The following `bitsandbytes` quantization config was used during training:
+ - quant_method: bitsandbytes
+ - _load_in_8bit: False
+ - _load_in_4bit: True
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: False
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: fp4
+ - bnb_4bit_use_double_quant: False
+ - bnb_4bit_compute_dtype: float32
+ - bnb_4bit_quant_storage: uint8
+ - load_in_4bit: True
+ - load_in_8bit: False
+
+ ### Framework versions
+
+
+ - PEFT 0.6.2
  - Flower 1.12.0
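
For readers who want to reproduce the quantization settings listed in the card, they map onto a `transformers` `BitsAndBytesConfig` roughly as follows. This is a sketch assembled from the values in the README above, not the verified training script; the leading-underscore entries (`_load_in_8bit`, `_load_in_4bit`) are internal mirrors of the public `load_in_*` flags and are not passed explicitly.

```python
import torch
from transformers import BitsAndBytesConfig

# Quantization settings copied from the model card above
# (4-bit fp4 quantization, float32 compute, no double quantization).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    load_in_8bit=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
    bnb_4bit_quant_storage=torch.uint8,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```

The config would then be passed as `quantization_config` to `AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct", ...)` before attaching the adapter with `PeftModel.from_pretrained`.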