datasets:
- StanfordAIMI/interpret-cxr-test-hidden
---

<Gallery />

# CXRMate-RRG24: Entropy-Augmented Self-Critical Sequence Training for Radiology Report Generation

This is an evolution of https://huggingface.co/aehrc/cxrmate, developed for the Radiology Report Generation task of BioNLP @ ACL 2024.
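The generation examples below assume `model`, `tokenizer`, and a preprocessed image mini-batch `batch` are already in scope. A minimal setup sketch follows; the `AutoModel`/`AutoTokenizer` loading path with `trust_remote_code=True` is an assumption based on typical custom Hugging Face models, not something this excerpt confirms:

```python
import torch
import transformers

# Load the tokenizer and model from the Hub (assumed entry point; the card's
# own quickstart section, not shown in this excerpt, may differ).
tokenizer = transformers.AutoTokenizer.from_pretrained('aehrc/cxrmate-rrg24')
model = transformers.AutoModel.from_pretrained('aehrc/cxrmate-rrg24', trust_remote_code=True)
model.eval()

# Move the model to a GPU if one is available.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = model.to(device)

# 'batch' is assumed to be a dict whose 'images' entry holds preprocessed
# chest X-ray tensors on the same device; see the demo notebook linked at
# the end of this card for the exact preprocessing.
```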

## Generate findings and impression:

```python
output_ids = model.generate(
    pixel_values=batch['images'],
    max_length=512,
    num_beams=4,
    bad_words_ids=[[tokenizer.convert_tokens_to_ids('[NF]')], [tokenizer.convert_tokens_to_ids('[NI]')]],
)
findings, impression = model.split_and_decode_sections(output_ids, tokenizer)
```
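Here, `[NF]` and `[NI]` appear to act as section-omission flags ("no findings" / "no impression"), so banning them via `bad_words_ids` forces both sections to be generated; this reading is inferred from the examples rather than stated in the card. A quick usage sketch for inspecting the output:

```python
# Print the decoded sections for each study in the mini-batch.
for f, i in zip(findings, impression):
    print(f'Findings: {f}')
    print(f'Impression: {i}')
```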

## Generate findings only:

```python
output_ids = model.generate(
    pixel_values=batch['images'],
    max_length=512,
    num_beams=4,
    bad_words_ids=[[tokenizer.convert_tokens_to_ids('[NF]')]],
    eos_token_id=tokenizer.sep_token_id,
)
findings, _ = model.split_and_decode_sections(output_ids, tokenizer)
```
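Setting `eos_token_id=tokenizer.sep_token_id` ends generation at the separator between the two sections, which is what restricts the output to findings; the implied report layout (findings, then a separator token, then the impression) is an inference from these examples, not a documented guarantee. To check the raw layout:

```python
# Decode the first sequence with special tokens kept, to see where the
# separator token ends the findings section.
print(tokenizer.decode(output_ids[0], skip_special_tokens=False))
```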

## Generate impression only:

```python
output_ids = model.generate(
    pixel_values=batch['images'],
    max_length=512,
    num_beams=4,
    bad_words_ids=[[tokenizer.convert_tokens_to_ids('[NI]')]],
    input_ids=torch.tensor(
        [[tokenizer.bos_token_id, tokenizer.convert_tokens_to_ids('[NF]'), tokenizer.sep_token_id]] * mbatch_size,
        device=device,
        dtype=torch.long,
    ),
)
_, impression = model.split_and_decode_sections(output_ids, tokenizer)
```
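The `input_ids` prompt seeds the decoder with `[BOS][NF][SEP]`, i.e., a report whose findings section is flagged as absent, so generation proceeds directly to the impression. `mbatch_size` and `device` are not defined in the snippet; a minimal assumption (with `device` as in the setup sketch above) is one prompt per study:

```python
# Assumed definition, not given in the card: one decoder prompt per study.
mbatch_size = batch['images'].shape[0]
```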

## Notebook example:
https://huggingface.co/aehrc/cxrmate-rrg24/blob/main/demo.ipynb