---
[Evaluation on chexpert-plus](https://github.com/Stanford-AIMI/chexpert-plus)

Usage:

```python
import torch
import requests
from PIL import Image
from transformers import BertTokenizer, ViTImageProcessor, VisionEncoderDecoderModel, GenerationConfig

mode = "findings"

# Model, tokenizer, and image processor
model = VisionEncoderDecoderModel.from_pretrained(f"IAMJB/chexpert-mimic-cxr-{mode}-baseline").eval()
tokenizer = BertTokenizer.from_pretrained(f"IAMJB/chexpert-mimic-cxr-{mode}-baseline")
image_processor = ViTImageProcessor.from_pretrained(f"IAMJB/chexpert-mimic-cxr-{mode}-baseline")

# Generation settings
generation_args = {
    "bos_token_id": model.config.bos_token_id,
    "eos_token_id": model.config.eos_token_id,
    "pad_token_id": model.config.pad_token_id,
    "num_return_sequences": 1,
    "max_length": 128,
    "use_cache": True,
    "num_beams": 2,  # GenerationConfig uses "num_beams"; "beam_width" is not a valid key
}

# Inference
with torch.no_grad():
    url = "https://huggingface.co/IAMJB/interpret-cxr-impression-baseline/resolve/main/effusions-bibasal.jpg"
    image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
    pixel_values = image_processor(image, return_tensors="pt").pixel_values
    # Generate predictions with beam search
    generated_ids = model.generate(
        pixel_values,
        generation_config=GenerationConfig(
            **{**generation_args, "decoder_start_token_id": tokenizer.cls_token_id}
        ),
    )
    generated_texts = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
    print(generated_texts)
```

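The `generate` call above builds its `GenerationConfig` by merging `generation_args` with the decoder start token through nested dict unpacking, where a key listed later overrides one unpacked earlier. A minimal sketch of that pattern in plain Python (the token id `101` here is illustrative, not the model's real `cls_token_id`):

```python
# Later keys in a dict literal override earlier unpacked ones,
# so the merge injects decoder_start_token_id without mutating the base dict.
generation_args = {"max_length": 128, "num_beams": 2}
merged = {**generation_args, "decoder_start_token_id": 101}

print(merged["decoder_start_token_id"])  # 101
print("decoder_start_token_id" in generation_args)  # False: base dict unchanged
```

The same pattern also lets you override any base setting per call, e.g. `{**generation_args, "num_beams": 4}`.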
If you are using this model, please be sure to cite:
```
@misc{chambon2024chexpertplusaugmentinglarge,