Irena Gao committed
Commit 452d974 · 1 Parent(s): 54d3e79

update readme

Files changed (1)
  1. README.md +19 -0
README.md CHANGED
@@ -20,7 +20,26 @@ The [MPT-1B](https://huggingface.co/mosaicml/mpt-1b-redpajama-200b) modeling code
 
 ## Uses
 OpenFlamingo models process arbitrarily interleaved sequences of images and text to output text. This allows the models to accept in-context examples and undertake tasks like captioning, visual question answering, and image classification.
+ ### Initialization
 
+ ``` python
+ from open_flamingo import create_model_and_transforms
+
+ model, image_processor, tokenizer = create_model_and_transforms(
+     clip_vision_encoder_path="ViT-L-14",
+     clip_vision_encoder_pretrained="openai",
+     lang_encoder_path="anas-awadalla/mpt-1b-redpajama-200b",
+     tokenizer_path="anas-awadalla/mpt-1b-redpajama-200b",
+     cross_attn_every_n_layers=1
+ )
+
+ # grab model checkpoint from huggingface hub
+ from huggingface_hub import hf_hub_download
+ import torch
+
+ checkpoint_path = hf_hub_download("openflamingo/OpenFlamingo-3B-vitl-mpt1b", "checkpoint.pt")
+ model.load_state_dict(torch.load(checkpoint_path), strict=False)
+ ```
 ### Generation example
 Below is an example of generating text conditioned on interleaved images/text. In particular, let's try few-shot image captioning.
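
The generation code itself falls outside this hunk. As a rough sketch of what such a few-shot captioning example might look like, assuming the `model`, `image_processor`, and `tokenizer` produced by the initialization block above, OpenFlamingo's `<image>` and `<|endofchunk|>` special tokens, and placeholder image URLs:

``` python
from PIL import Image
import requests
import torch

# Load two demo images (placeholder URLs) plus the query image to caption.
demo_image_one = Image.open(
    requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw
)
demo_image_two = Image.open(
    requests.get("http://images.cocodataset.org/test-stuff2017/000000028137.jpg", stream=True).raw
)
query_image = Image.open(
    requests.get("http://images.cocodataset.org/test-stuff2017/000000028352.jpg", stream=True).raw
)

# Preprocess into a tensor of shape
# (batch_size, num_media, num_frames, channels, height, width) = (1, 3, 1, 3, 224, 224).
vision_x = torch.cat(
    [image_processor(img).unsqueeze(0) for img in (demo_image_one, demo_image_two, query_image)],
    dim=0,
).unsqueeze(1).unsqueeze(0)

# Build the interleaved prompt: <image> marks where an image appears and
# <|endofchunk|> ends the text associated with that image.
tokenizer.padding_side = "left"  # pad on the left for generation
lang_x = tokenizer(
    ["<image>An image of two cats.<|endofchunk|>"
     "<image>An image of a bathroom sink.<|endofchunk|>"
     "<image>An image of"],
    return_tensors="pt",
)

# Generate a caption for the query image, conditioned on the two in-context examples.
generated_text = model.generate(
    vision_x=vision_x,
    lang_x=lang_x["input_ids"],
    attention_mask=lang_x["attention_mask"],
    max_new_tokens=20,
    num_beams=3,
)
print("Generated text:", tokenizer.decode(generated_text[0]))
```

The two captioned images act as in-context examples; the model then completes the caption for the third, unannotated image.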