Update README.md
README.md CHANGED
````diff
@@ -1,5 +1,5 @@
 ---
-license:
+license: apache-2.0
 inference:
   parameters:
     num_beams: 2
@@ -40,10 +40,11 @@ T5 model expects a task related prefix: since it is a description generation tas
 
 ```python
 from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+import torch
 
-device = "cuda"
-tokenizer = AutoTokenizer.from_pretrained("Ateeqq/product-description-generator"
-model = AutoModelForSeq2SeqLM.from_pretrained("Ateeqq/product-description-generator"
+device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+tokenizer = AutoTokenizer.from_pretrained("Ateeqq/product-description-generator")
+model = AutoModelForSeq2SeqLM.from_pretrained("Ateeqq/product-description-generator").to(device)
 
 def generate_description(title):
     input_ids = tokenizer(f'description: {title}', return_tensors="pt", padding="longest", truncation=True, max_length=128).input_ids.to(device)
@@ -100,8 +101,4 @@ generate_description(title)
 
 - **Update Training Data**: Retrain the model using the latest 0.5 million cleaned examples.
 - **Optimize Training Parameters**: Experiment with different batch sizes, learning rates, and epochs to further improve model performance.
-- **Expand Dataset**: Incorporate more diverse product datasets to enhance the model's versatility and robustness.
-
-## License
-
-Limited Use: It grants a non-exclusive, non-transferable license to use the this model. This means you can't freely share it with others or sell the model itself.
+- **Expand Dataset**: Incorporate more diverse product datasets to enhance the model's versatility and robustness.
````
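For quick verification, the updated loading code can be exercised end to end roughly as below. The hunk cuts off inside `generate_description`, so the decoding step (`model.generate` with `num_beams=2`, mirroring the front-matter default) and the sample title are illustrative assumptions, not the exact body of the README.

```python
# Sketch of the updated snippet from the diff above. The decoding settings
# (num_beams=2, max_length=256) and the sample title are assumptions; only the
# loading code and the tokenizer call are taken from the hunk itself.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

# Fall back to CPU when no GPU is available, as in the updated README.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("Ateeqq/product-description-generator")
model = AutoModelForSeq2SeqLM.from_pretrained("Ateeqq/product-description-generator").to(device)

def generate_description(title):
    # T5-style task prefix, as noted in the hunk context above.
    input_ids = tokenizer(f"description: {title}", return_tensors="pt",
                          padding="longest", truncation=True,
                          max_length=128).input_ids.to(device)
    outputs = model.generate(input_ids, num_beams=2, max_length=256)  # assumed decoding settings
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(generate_description("Stainless Steel Insulated Water Bottle, 750 ml"))
```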