trinhhuy committed
Commit d266fa1 · verified · 1 Parent(s): c0a7463

update Readme

Files changed (1)
  1. README.md +36 -7
README.md CHANGED
@@ -23,27 +23,56 @@ It achieves the following results on the evaluation set:
  - Train Loss: 0.1596
  - Validation Loss: 0.2877
  - Epoch: 3
+ - Bleu: 77.71982942978268

  ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure
+ ### 🧪 Quick Start
+
+ First, make sure to install the `transformers` library:
+ ```bash
+ pip install transformers
+ ```
+
+ Option 1: Use a pipeline as a high-level helper
+ ```python
+ from transformers import pipeline
+
+ pipe = pipeline("translation", model="trinhhuy/finetuned-opus-mt-en-vi")
+
+ result = pipe("I'm confident that my friend will pass the exam because he has been studying hard and staying focused for weeks.")
+ print(result)
+ ```
+ ```text
+ [{'translation_text': 'Tôi tự tin rằng bạn tôi sẽ vượt qua kỳ thi vì anh ấy đã học tập chăm chỉ và tập trung nhiều tuần.'}]
+ ```
+ Option 2: Load the model directly
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+
+ tokenizer = AutoTokenizer.from_pretrained("trinhhuy/finetuned-opus-mt-en-vi")
+ model = AutoModelForSeq2SeqLM.from_pretrained("trinhhuy/finetuned-opus-mt-en-vi")
+
+ input_tokenized = tokenizer("I'm confident that my friend will pass the exam because he has been studying hard and staying focused for weeks.", return_tensors="pt")
+ output_tokenized = model.generate(**input_tokenized)
+
+ translated_text = tokenizer.decode(output_tokenized[0], skip_special_tokens=True)
+ print(translated_text)
+ ```
+ ```text
+ Tôi tự tin rằng bạn tôi sẽ vượt qua kỳ thi vì anh ấy đã học tập chăm chỉ và tập trung nhiều tuần.
+ ```

  ### Training hyperparameters

  The following hyperparameters were used during training:
- - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 25412, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': np.float32(0.9), 'beta_2': np.float32(0.999), 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
+ - initial_learning_rate = 5e-05
+ - weight_decay_rate = 0.01
  - training_precision: mixed_float16

+
  ### Training results

  | Train Loss | Validation Loss | Epoch |
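
For more control over decoding than the Quick Start above shows, `model.generate` accepts the standard `transformers` generation arguments. A minimal sketch with illustrative, untuned values (`num_beams=4` and `max_new_tokens=128` are assumptions, not settings from this card):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("trinhhuy/finetuned-opus-mt-en-vi")
model = AutoModelForSeq2SeqLM.from_pretrained("trinhhuy/finetuned-opus-mt-en-vi")

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
# Beam search usually improves MT output slightly at the cost of latency;
# max_new_tokens bounds how long the translation may grow.
outputs = model.generate(**inputs, num_beams=4, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```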
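
The optimizer dict removed by this commit describes `AdamWeightDecay` with a linear (`power=1.0`) `PolynomialDecay` schedule from 5e-05 down to 0.0 over 25412 steps and `weight_decay_rate=0.01`, which matches what `transformers.create_optimizer` builds for TensorFlow training. A sketch assuming that utility was used (the card does not say how the optimizer was actually constructed):

```python
from transformers import create_optimizer

# AdamWeightDecay + linear PolynomialDecay, per the removed optimizer dict.
# num_warmup_steps=0 is an assumption; the dict shows no warmup schedule.
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-05,          # initial_learning_rate
    num_train_steps=25412,  # decay_steps
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```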
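
The Bleu line added at the top of the hunk looks like a corpus-level score on the 0-100 scale. The evaluation setup is not documented in this commit, so the following is only a sketch of how such a score can be computed, assuming the `evaluate` library's sacrebleu metric and hypothetical toy data:

```python
import evaluate

bleu = evaluate.load("sacrebleu")

# Hypothetical examples: model outputs paired with reference translations.
predictions = ["Tôi tự tin rằng bạn tôi sẽ vượt qua kỳ thi."]
references = [["Tôi tin rằng bạn tôi sẽ vượt qua kỳ thi."]]

print(bleu.compute(predictions=predictions, references=references)["score"])
```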