Commit 5282e0e (parent 713ec89) by Matthijs Hollemans: add model card

README.md (added)
---
license: other
tags:
- vision
- image-segmentation
datasets:
- pascal-voc
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-2.jpg
  example_title: Cat
---

# MobileNetV2 with DeepLabV3+

MobileNet V2 model pre-trained on PASCAL VOC at resolution 513x513. It was introduced in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen, and first released in [this repository](https://github.com/tensorflow/models/tree/master/research/deeplab).

Disclaimer: The team releasing MobileNet V2 did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):

> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.

The model in this repo adds a [DeepLabV3+](https://arxiv.org/abs/1802.02611) head to the MobileNetV2 backbone for semantic segmentation.
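Atrous (dilated) convolution, which the DeepLabV3+ head builds on, enlarges a kernel's receptive field without adding parameters. As a rough illustration (not code from this repository), the effective extent of a dilated kernel follows `k_eff = k + (k - 1) * (d - 1)`:

```python
def effective_kernel_size(kernel_size: int, dilation: int) -> int:
    """Effective spatial extent of a dilated (atrous) convolution kernel:
    k_eff = k + (k - 1) * (d - 1)."""
    return kernel_size + (kernel_size - 1) * (dilation - 1)

# A 3x3 kernel with dilation 2 covers a 5x5 region, and with
# dilation 4 a 9x9 region, at the same parameter count.
for d in (1, 2, 4):
    print(d, effective_kernel_size(3, d))
```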
## Intended uses & limitations

You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model:

```python
from transformers import AutoImageProcessor, AutoModelForSemanticSegmentation
from PIL import Image
import requests

# Load a test image.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

preprocessor = AutoImageProcessor.from_pretrained("google/deeplabv3_mobilenet_v2_1.0_513")
model = AutoModelForSemanticSegmentation.from_pretrained("google/deeplabv3_mobilenet_v2_1.0_513")

inputs = preprocessor(images=image, return_tensors="pt")

outputs = model(**inputs)
# Returns one segmentation map (per-pixel class ids) per input image.
predicted_mask = preprocessor.post_process_semantic_segmentation(outputs)
```
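Each pixel of the predicted mask holds one of the 21 PASCAL VOC class ids. As a small sketch of mapping ids to names (the list below is the standard VOC 2012 ordering, an assumption rather than something read from this checkpoint; in practice, prefer `model.config.id2label`):

```python
# Standard PASCAL VOC 2012 class ordering (assumption: matches this
# checkpoint's id2label mapping; verify against model.config.id2label).
VOC_LABELS = [
    "background", "aeroplane", "bicycle", "bird", "boat", "bottle",
    "bus", "car", "cat", "chair", "cow", "diningtable", "dog",
    "horse", "motorbike", "person", "pottedplant", "sheep", "sofa",
    "train", "tvmonitor",
]

def label_for(class_id: int) -> str:
    """Map a predicted per-pixel class index to its human-readable label."""
    return VOC_LABELS[class_id]

print(label_for(8))   # a pixel predicted as class 8 is "cat"
```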
Currently, both the image processor and model support PyTorch.

### BibTeX entry and citation info

```bibtex
@inproceedings{deeplabv3plus2018,
  title={Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation},
  author={Liang-Chieh Chen and Yukun Zhu and George Papandreou and Florian Schroff and Hartwig Adam},
  booktitle={ECCV},
  year={2018}
}
```