---
library_name: transformers.js
tags:
- background-removal
- mask-generation
- Dichotomous Image Segmentation
- Camouflaged Object Detection
- Salient Object Detection
repo_url: https://github.com/ZhengPeng7/BiRefNet
pipeline_tag: image-segmentation
license: mit
base_model:
- ZhengPeng7/BiRefNet-DIS5K-TR_TEs
---
<h1 align="center">Bilateral Reference for High-Resolution Dichotomous Image Segmentation</h1>

<div align='center'>
<a href='https://scholar.google.com/citations?user=TZRzWOsAAAAJ' target='_blank'><strong>Peng Zheng</strong></a><sup> 1,4,5,6</sup>,&thinsp;
<a href='https://scholar.google.com/citations?user=0uPb8MMAAAAJ' target='_blank'><strong>Dehong Gao</strong></a><sup> 2</sup>,&thinsp;
<a href='https://scholar.google.com/citations?user=kakwJ5QAAAAJ' target='_blank'><strong>Deng-Ping Fan</strong></a><sup> 1*</sup>,&thinsp;
<a href='https://scholar.google.com/citations?user=9cMQrVsAAAAJ' target='_blank'><strong>Li Liu</strong></a><sup> 3</sup>,&thinsp;
<a href='https://scholar.google.com/citations?user=qQP6WXIAAAAJ' target='_blank'><strong>Jorma Laaksonen</strong></a><sup> 4</sup>,&thinsp;
<a href='https://scholar.google.com/citations?user=pw_0Z_UAAAAJ' target='_blank'><strong>Wanli Ouyang</strong></a><sup> 5</sup>,&thinsp;
<a href='https://scholar.google.com/citations?user=stFCYOAAAAAJ' target='_blank'><strong>Nicu Sebe</strong></a><sup> 6</sup>
</div>

<div align='center'>
<sup>1 </sup>Nankai University&ensp; <sup>2 </sup>Northwestern Polytechnical University&ensp; <sup>3 </sup>National University of Defense Technology&ensp; <sup>4 </sup>Aalto University&ensp; <sup>5 </sup>Shanghai AI Laboratory&ensp; <sup>6 </sup>University of Trento&ensp;
</div>

| *DIS-Sample_1* | *DIS-Sample_2* |
| :------------------------------: | :-------------------------------: |
| <img src="https://drive.google.com/thumbnail?id=1ItXaA26iYnE8XQ_GgNLy71MOWePoS2-g&sz=w400" /> | <img src="https://drive.google.com/thumbnail?id=1Z-esCujQF_uEa_YJjkibc3NUrW4aR_d4&sz=w400" /> |

For more information, check out the official [repository](https://github.com/ZhengPeng7/BiRefNet).

## Usage (Transformers.js)

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
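If you would rather skip a build step, the library can also be loaded in the browser as an ES module from a CDN. A minimal sketch (the jsDelivr URL and the version pin are illustrative; use whichever release you need inside a `<script type="module">` block):

```js
// Browser-only alternative (no bundler): import Transformers.js from a CDN.
// The version pin below is an example, not a requirement of this model.
import { AutoModel, AutoProcessor, RawImage } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0';
```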

You can then use the model for image matting, as follows:

```js
import { AutoModel, AutoProcessor, RawImage } from '@huggingface/transformers';

// Load model and processor
const model_id = 'onnx-community/BiRefNet-DIS5K-TR_TEs-ONNX';
const model = await AutoModel.from_pretrained(model_id, { dtype: 'fp32' });
const processor = await AutoProcessor.from_pretrained(model_id);

// Load image from URL
const url = 'https://images.pexels.com/photos/5965592/pexels-photo-5965592.jpeg?auto=compress&cs=tinysrgb&w=1024';
const image = await RawImage.fromURL(url);

// Pre-process image
const { pixel_values } = await processor(image);

// Predict alpha matte
const { output_image } = await model({ input_image: pixel_values });

// Save output mask
const mask = await RawImage.fromTensor(output_image[0].sigmoid().mul(255).to('uint8')).resize(image.width, image.height);
mask.save('mask.png');
```
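The snippet above saves the predicted mask on its own. If your goal is background removal, one possible follow-up (not part of the original example) is to write the mask into the input image's alpha channel and save the resulting cutout. A minimal sketch, assuming the `image` and `mask` variables from the snippet above and that `RawImage` exposes `clone()`, `rgba()`, and a writable `data` buffer:

```js
// Apply the predicted mask as an alpha channel to obtain a transparent cutout.
// `image` and `mask` come from the example above; `mask` has already been
// resized to the original image dimensions, so the two line up pixel-for-pixel.
const cutout = image.clone().rgba(); // work on an RGBA copy of the input
for (let i = 0; i < mask.data.length; ++i) {
  cutout.data[4 * i + 3] = mask.data[i]; // mask value becomes the pixel's alpha
}
cutout.save('cutout.png');
```
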

| Input image | Output mask |
|--------|--------|
| ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/cRw4xmlhgkCZ72qJckrps.png) | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/pcUeTxkZKPRVfT5oDn0Un.png) |

## Citation

```bibtex
@article{BiRefNet,
  title={Bilateral Reference for High-Resolution Dichotomous Image Segmentation},
  author={Zheng, Peng and Gao, Dehong and Fan, Deng-Ping and Liu, Li and Laaksonen, Jorma and Ouyang, Wanli and Sebe, Nicu},
  journal={CAAI Artificial Intelligence Research},
  year={2024}
}
```

---

Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
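For reference, a typical conversion looks something like the sketch below; the model ID is a placeholder, and the Optimum exporter documentation covers task-specific options:

```bash
# Install the ONNX exporter extras and convert a PyTorch checkpoint to ONNX.
# "your-username/your-model" is a placeholder for the checkpoint you want to export;
# the exported files can then be placed in an `onnx/` subfolder of your model repo.
pip install "optimum[exporters]"
optimum-cli export onnx --model your-username/your-model ./onnx/
```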