The method for obtaining `P_c` is based on the Partial Least Squares (PLS) algorithm.
For more details, please refer to the [paper](https://openreview.net/pdf?id=XIZEFyVGC9).
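
Conceptually, the projection removes the directions in the representation space that are most predictive of gender. The sketch below illustrates this idea with scikit-learn's `PLSRegression`; the inputs `U` (stacked representations) and `g` (gender signal) and the number of components are hypothetical placeholders, and the exact procedure is the one described in the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical inputs: U stacks model representations (n_samples x dim),
# g holds the corresponding gender signal (n_samples,); random stand-ins here.
rng = np.random.default_rng(0)
U = rng.standard_normal((1000, 1024))
g = rng.standard_normal(1000)

# PLS finds the directions of U that best covary with g.
pls = PLSRegression(n_components=8)
pls.fit(U, g)

# Orthonormalize the gender-predictive directions ...
V, _ = np.linalg.qr(pls.x_weights_)  # shape: (dim, n_components)

# ... and project onto their orthogonal complement.
P_c = np.eye(U.shape[1]) - V @ V.T
```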
## Use

The following snippet shows the basic usage of DAMA for text generation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the debiased DAMA model from the Hugging Face Hub.
DAMA_SIZE = '7B'
OUTPUT_DIR = 'output'
model = AutoModelForCausalLM.from_pretrained(f"ufal/DAMA-{DAMA_SIZE}", offload_folder=OUTPUT_DIR,
                                             torch_dtype=torch.float16, low_cpu_mem_usage=True,
                                             device_map='auto')

tokenizer = AutoTokenizer.from_pretrained(f"ufal/DAMA-{DAMA_SIZE}", use_fast=True, return_token_type_ids=False)

# Generate a continuation of a prompt mentioning a profession.
prompt = "The lifeguard laughed because"
inputs = tokenizer(prompt, return_tensors="pt")

generate_ids = model.generate(inputs.input_ids, max_length=30)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True)[0])
```
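
In this call, `device_map='auto'` (which requires the `accelerate` package) places the model's weights on the available devices automatically, `torch_dtype=torch.float16` halves the memory footprint relative to float32, and `offload_folder` gives Accelerate a directory to spill weights to when they do not fit in memory.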
## Evaluation
We evaluate the models on multiple benchmarks to assess gender bias and language understanding capabilities.
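
As a quick, informal check of the debiasing effect (not one of the benchmarks above), one can compare the model's next-token probabilities for gendered pronouns after a prompt mentioning a profession, reusing `model` and `tokenizer` from the usage snippet; the choice of prompt and pronouns here is purely illustrative.

```python
import torch

prompt = "The lifeguard laughed because"
inputs = tokenizer(prompt, return_tensors="pt")

# Next-token distribution after the prompt.
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]
probs = logits.softmax(dim=-1)

for word in ("he", "she"):
    # Heuristic: take the last piece of the word's encoding as its token id;
    # the exact tokenization depends on the tokenizer in use.
    token_id = tokenizer.encode(word, add_special_tokens=False)[-1]
    print(word, probs[token_id].item())
```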