BGLab committed (verified) · commit d93b7c0 · parent 4b2c447

updated readme

Files changed (1): README.md (+169, -3) — the previous license-only front matter is replaced with the full model card below.
---
license: mit
language:
- en
tags:
- zero-shot-image-classification
- OpenCLIP
- clip
- biology
- biodiversity
- agronomy
- CV
- images
- animals
- species
- taxonomy
- rare species
- endangered species
- evolutionary biology
- multimodal
- knowledge-guided
datasets:
- ChihHsuan-Yang/Arboretum
- EOL
base_model:
- openai/clip-vit-base-patch16
- openai/clip-vit-large-patch14
pipeline_tag: zero-shot-image-classification
---

# Model Card for BioTrove-CLIP

<!-- Banner links -->
<div style="text-align:center;">
  <a href="https://baskargroup.github.io/BioTrove/" target="_blank">
    <img src="https://img.shields.io/badge/Project%20Page-Visit-blue" alt="Project Page" style="margin-right:10px;">
  </a>
  <a href="https://github.com/baskargroup/BioTrove" target="_blank">
    <img src="https://img.shields.io/badge/GitHub-Visit-lightgrey" alt="GitHub" style="margin-right:10px;">
  </a>
  <a href="https://pypi.org/project/arbor-process/" target="_blank">
    <img src="https://img.shields.io/badge/PyPI-arbor--process%200.1.0-orange" alt="PyPI arbor-process 0.1.0">
  </a>
</div>

BioTrove-CLIP is a suite of vision-language foundation models for biodiversity. These CLIP-style models were trained on [BioTrove-Train](https://huggingface.co/BGLab/BioTrove-Train), a large-scale dataset of `40 million` images covering `33K species` of plants and animals, and are evaluated on zero-shot image classification tasks.

- **Model type:** Vision Transformer (ViT-B/16, ViT-L/14)
- **License:** MIT
- **Fine-tuned from model:** [OpenAI CLIP](https://github.com/mlfoundations/open_clip), [MetaCLIP](https://github.com/facebookresearch/MetaCLIP), [BioCLIP](https://github.com/Imageomics/BioCLIP)

These models were developed for the benefit of the AI community as an open-source product, so we request that any derivative products also be open-source.

### Model Description

BioTrove-CLIP is based on OpenAI's [CLIP](https://openai.com/research/clip) model.
The models were trained on [BioTrove-Train](https://huggingface.co/BGLab/BioTrove-Train) in the following configurations:

- **BioTrove-CLIP-O:** a ViT-B/16 backbone initialized from the [OpenCLIP](https://github.com/mlfoundations/open_clip) checkpoint and trained for 40 epochs.
- **BioTrove-CLIP-B:** a ViT-B/16 backbone initialized from the [BioCLIP](https://github.com/Imageomics/BioCLIP) checkpoint and trained for 8 epochs.
- **BioTrove-CLIP-M:** a ViT-L/14 backbone initialized from the [MetaCLIP](https://github.com/facebookresearch/MetaCLIP) checkpoint and trained for 12 epochs.

To access the checkpoints of the above models, go to the `Files and versions` tab and download the weights. These weights can be used directly for zero-shot classification and fine-tuning (a usage sketch follows the list below). The filenames correspond to the specific model weights:

- **BioTrove-CLIP-O:** `biotroveclip-vit-b-16-from-openai-epoch-40.pt`
- **BioTrove-CLIP-B:** `biotroveclip-vit-b-16-from-bioclip-epoch-8.pt`
- **BioTrove-CLIP-M:** `biotroveclip-vit-l-14-from-metaclip-epoch-12.pt`

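Since the models were trained with a modified OpenCLIP codebase, the checkpoints should load with the [open_clip](https://github.com/mlfoundations/open_clip) library. The snippet below is a minimal, illustrative zero-shot sketch only; the image path, species labels, and prompt template are placeholders rather than the evaluation setup from the paper.

```python
# Minimal zero-shot classification sketch (assumes an OpenCLIP-compatible ViT-B-16 checkpoint).
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-16", pretrained="biotroveclip-vit-b-16-from-openai-epoch-40.pt"
)
tokenizer = open_clip.get_tokenizer("ViT-B-16")
model.eval()

labels = ["Danaus plexippus", "Apis mellifera", "Quercus alba"]  # placeholder species names
image = preprocess(Image.open("example.jpg")).unsqueeze(0)       # placeholder image path
text = tokenizer([f"a photo of {name}" for name in labels])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(labels, probs.squeeze(0).tolist())))
```

For BioTrove-CLIP-M, swap the architecture name to `ViT-L-14` and point `pretrained` at the corresponding `.pt` file.
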
### Model Training
**See the [Model Training](https://github.com/baskargroup/BioTrove/tree/main/model_training) section on [GitHub](https://github.com/baskargroup/BioTrove) for examples of how to use BioTrove-CLIP models in zero-shot image classification tasks.**

We train three models using a modified version of the [BioCLIP / OpenCLIP](https://github.com/Imageomics/bioclip/tree/main/src/training) codebase. Each model is trained on BioTrove-Train (40M) on 2 nodes with 8xH100 GPUs on NYU's [Greene](https://sites.google.com/nyu.edu/nyu-hpc/hpc-systems/greene) high-performance computing cluster. We publicly release all code needed to reproduce our results on the [GitHub](https://github.com/baskargroup/Arboretum) page.

We optimize our hyperparameters prior to training with [Ray](https://docs.ray.io/en/latest/index.html). Our standard training parameters are as follows:

```
--dataset-type webdataset
--pretrained openai
--text_type random
--dataset-resampled
--warmup 5000
--batch-size 4096
--accum-freq 1
--epochs 40
--workers 8
--model ViT-B-16
--lr 0.0005
--wd 0.0004
--precision bf16
--beta1 0.98
--beta2 0.99
--eps 1.0e-6
--local-loss
--gather-with-grad
--ddp-static-graph
--grad-checkpointing
```

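As a rough guide to what the optimizer-related flags mean, they correspond to a standard PyTorch `AdamW` configuration like the one sketched below. This is purely illustrative; the actual optimizer is constructed inside the modified BioCLIP/OpenCLIP training code, which may apply weight decay selectively.

```python
# Illustrative only: how the optimizer flags above translate to PyTorch AdamW settings.
import torch

params = torch.nn.Linear(512, 512).parameters()  # stand-in for the CLIP model's parameters
optimizer = torch.optim.AdamW(
    params,
    lr=5e-4,             # --lr 0.0005
    betas=(0.98, 0.99),  # --beta1 0.98, --beta2 0.99
    eps=1e-6,            # --eps 1.0e-6
    weight_decay=4e-4,   # --wd 0.0004
)
```
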
For more extensive documentation of the training process and the significance of each hyperparameter, we recommend the [OpenCLIP](https://github.com/mlfoundations/open_clip) and [BioCLIP](https://github.com/Imageomics/BioCLIP) documentation.

### Model Validation

To validate the zero-shot accuracy of our trained models and compare them against other benchmarks, we use the [VLHub](https://github.com/penfever/vlhub) repository with some slight modifications.

#### Pre-Run

After cloning the [GitHub](https://github.com/baskargroup/BioTrove) repository and navigating to the `BioTrove/model_validation` directory, we recommend installing the project requirements into a conda environment with `pip install -r requirements.txt`. Before executing a command in VLHub, also add `BioTrove/model_validation/src` to your PYTHONPATH:

```bash
export PYTHONPATH="$PYTHONPATH:$PWD/src"
```

#### Base Command

A basic model evaluation command can be launched as follows. This example evaluates a CLIP-ResNet50 checkpoint, whose weights reside at the path passed via the `--resume` flag, on the ImageNet validation set, and reports the results to Weights & Biases.

```bash
python src/training/main.py \
  --batch-size=32 \
  --workers=8 \
  --imagenet-val "/imagenet/val/" \
  --model="resnet50" \
  --zeroshot-frequency=1 \
  --image-size=224 \
  --resume "/PATH/TO/WEIGHTS.pth" \
  --report-to wandb
```

### Training Links
- **Main Dataset Repository:** [BioTrove](https://github.com/baskargroup/BioTrove)
- **Dataset Paper:** BioTrove: A Large Curated Image Dataset Enabling AI for Biodiversity ([arXiv](https://arxiv.org/abs/2406.17720))
- **HF Dataset card:** [BioTrove-Train (40M)](https://huggingface.co/datasets/BGLab/BioTrove-Train)

### Model Limitations
All the `BioTrove-CLIP` models were evaluated on the challenging [CONFOUNDING-SPECIES](https://arxiv.org/abs/2306.02507) benchmark, and all of them performed at or below random chance. This is an interesting avenue for follow-up work that could further expand the models' capabilities.

In general, we found that models trained on web-scraped data performed better with common names, whereas models trained on specialist datasets performed better when using scientific names. Additionally, models trained on web-scraped data excel at classifying at the highest taxonomic level (kingdom), while models begin to benefit from specialist datasets like [BioTrove-Train (40M)](https://huggingface.co/datasets/BGLab/BioTrove-Train) and [Tree-of-Life-10M](https://huggingface.co/datasets/imageomics/TreeOfLife-10M) at the lower taxonomic levels (order and species). From a practical standpoint, `BioTrove-CLIP` is highly accurate at the species level, and higher-level taxa can be deterministically derived from lower ones (see the sketch below).

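As a hypothetical illustration of that last point, species-level zero-shot probabilities can be rolled up to a higher taxonomic rank with a simple lookup table. The species-to-family mapping and probabilities below are placeholders; in practice the mapping would come from the dataset's taxonomic metadata.

```python
# Hypothetical roll-up of species-level probabilities to a higher taxonomic rank.
species_to_family = {
    "Danaus plexippus": "Nymphalidae",
    "Apis mellifera": "Apidae",
    "Bombus terrestris": "Apidae",
}
species_probs = {"Danaus plexippus": 0.10, "Apis mellifera": 0.55, "Bombus terrestris": 0.35}

family_probs: dict[str, float] = {}
for species, prob in species_probs.items():
    family = species_to_family[species]
    family_probs[family] = family_probs.get(family, 0.0) + prob

print(max(family_probs, key=family_probs.get))  # -> "Apidae"
```
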
Addressing these limitations will further enhance the applicability of models like `BioTrove-CLIP` in real-world biodiversity monitoring tasks.

### Acknowledgements
This work was supported by the AI Research Institutes program of the NSF and USDA-NIFA under the [AI Institute for Resilient Agriculture](https://aiira.iastate.edu/), Award No. 2021-67021-35329, and partly by the NSF under CPS Frontier grant CNS-1954556. We also gratefully acknowledge the support of NYU IT [High Performance Computing](https://www.nyu.edu/life/information-technology/research-computing-services/high-performance-computing.html) resources, services, and staff expertise.

<!--BibTeX citation -->
<section class="section" id="BibTeX">
  <div class="container is-max-widescreen content">
    <h2 class="title">Citation</h2>
    If you find the models and datasets useful in your research, please consider citing our paper:
    <pre><code>@misc{yang2024arboretumlargemultimodaldataset,
  title={Arboretum: A Large Multimodal Dataset Enabling AI for Biodiversity},
  author={Chih-Hsuan Yang and Benjamin Feuer and Zaki Jubery and Zi K. Deng and Andre Nakkab and
  Md Zahid Hasan and Shivani Chiranjeevi and Kelly Marshall and Nirmal Baishnab and Asheesh K Singh and
  Arti Singh and Soumik Sarkar and Nirav Merchant and Chinmay Hegde and Baskar Ganapathysubramanian},
  year={2024},
  eprint={2406.17720},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2406.17720},
}</code></pre>
  </div>
</section>
<!--End BibTeX citation -->


---

For more details and access to the Arboretum dataset, please visit the [Project Page](https://baskargroup.github.io/Arboretum/).