lyfeyvutha committed
Commit 8f007b1 · verified · 1 Parent(s): 979d722

Update README.md

Files changed (1)
  1. README.md +8 -4
README.md CHANGED
@@ -27,9 +27,9 @@ model-index:
       type: mutiyama/alt
     metrics:
     - type: chrf
-      value: 38.83
+      value: 21.3502
     - type: bertscore
-      value: 0.8608
+      value: 0.8983
 pipeline_tag: translation
 new_version: lyfeyvutha/nllb_350M_en_km_v10
 ---
@@ -114,8 +114,6 @@ outputs = model.generate(**inputs, generation_config=generation_config)
 translation = tokenizer.decode(outputs, skip_special_tokens=True)
 print(translation)
 
-
-
 ## Training Details
 
 ### Training Data
@@ -136,6 +134,12 @@ print(translation)
 ### Testing Data
 The model was evaluated on the Asian Language Treebank (ALT) corpus, containing manually translated English-Khmer pairs.
 
+### Metrics
+| Metric | Value |
+|--------|-------|
+| chrF Score | 21.3502 |
+| BERTScore F1 | 0.8983 |
+
 ### Results
 This proof-of-concept model demonstrates that knowledge distillation can achieve reasonable translation quality with significantly reduced parameters (350M vs 600M baseline).
 
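
A minimal end-to-end sketch of the usage quoted in the diff above, assuming the checkpoint follows standard NLLB conventions. The `khm_Khmr` target code, the `src_lang` setting, and the generation parameters are illustrative assumptions, not taken from this commit; note that `generate` returns a batch of sequences, so the sketch decodes the first one explicitly:

```python
# Hypothetical English->Khmer inference sketch for this checkpoint.
# Assumes standard NLLB conventions; all settings are illustrative.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig

model_id = "lyfeyvutha/nllb_350M_en_km_v10"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
generation_config = GenerationConfig(
    max_new_tokens=128,
    # Force the decoder to start with the Khmer language token (NLLB code).
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("khm_Khmr"),
)
outputs = model.generate(**inputs, generation_config=generation_config)

# generate() returns a batch; decode the first sequence.
translation = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(translation)
```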
 
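
The chrF and BERTScore figures added in this commit could be reproduced along these lines; a hedged sketch assuming the `sacrebleu` and `bert-score` packages, with placeholder hypothesis/reference lists (the `lang="km"` backbone choice for BERTScore is an assumption, not documented here):

```python
# Sketch of computing corpus chrF and BERTScore F1 on an evaluation set.
# hyps/refs are placeholders; load the real ALT test pairs in practice.
import sacrebleu
from bert_score import score

hyps = ["system translation 1", "system translation 2"]  # model outputs
refs = ["reference translation 1", "reference translation 2"]

# Corpus-level chrF (character n-gram F-score).
chrf = sacrebleu.corpus_chrf(hyps, [refs])
print(f"chrF: {chrf.score:.4f}")

# BERTScore F1; lang="km" selects a multilingual encoder for Khmer.
_, _, f1 = score(hyps, refs, lang="km")
print(f"BERTScore F1: {f1.mean().item():.4f}")
```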
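
The Results note names knowledge distillation, but this commit does not describe the training recipe. For orientation only, a generic word-level distillation objective of the kind commonly used to shrink a 600M teacher into a 350M student; the temperature and mixing weight are arbitrary, and padding masking of the KD term is omitted for brevity:

```python
# Generic word-level knowledge-distillation loss, purely illustrative.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5, ignore_index=-100):
    # Soft targets: KL divergence between temperature-scaled distributions.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard targets: cross-entropy against the reference tokens.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=ignore_index,
    )
    return alpha * kd + (1 - alpha) * ce
```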