Commit b94f375 · Parent: b76709a · update readme

README.md (changed):
datasets:
- uspto
---

# Adapter `doncamilom/OChemSegm-flan-T5-large` for google/flan-t5-large

An [adapter](https://adapterhub.ml) for the `google/flan-t5-large` model that was trained on the [USPTO-segment](www.tobedone.undone) dataset.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement …_
Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

adapter = 'doncamilom/OChemSegm-flan-T5-large'

model = AutoModelForSeq2SeqLM.from_pretrained('google/flan-t5-large')

# Load adapter
adapter_name = model.load_adapter(adapter, source='hf', set_active=True)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(adapter)
```
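The loading steps above can be wrapped into a small helper for end-to-end use. This is a hedged sketch, not part of the model card: the `segment` helper name, the prompt handling, and the generation settings are assumptions, and calling the helper downloads the base model and adapter weights.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

ADAPTER = 'doncamilom/OChemSegm-flan-T5-large'
BASE_MODEL = 'google/flan-t5-large'

def segment(paragraph, max_new_tokens=256):
    """Run the adapter-equipped model on one paragraph of text (illustrative helper)."""
    model = AutoModelForSeq2SeqLM.from_pretrained(BASE_MODEL)
    # load_adapter / set_active come from adapter-transformers, not plain transformers
    model.load_adapter(ADAPTER, source='hf', set_active=True)
    tokenizer = AutoTokenizer.from_pretrained(ADAPTER)
    inputs = tokenizer(paragraph, return_tensors='pt')
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Defining the helper is cheap; the expensive model download only happens when `segment` is actually called.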

## Architecture & Training

## Citation

<!-- Add some description here -->