| model_id | model_card | model_labels |
| --- | --- | --- |
nvidia/segformer-b1-finetuned-ade-512-512 |
# SegFormer (b1-sized) model fine-tuned on ADE20k
SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to segment an image from the COCO 2017 dataset into one of the 150 ADE20k classes:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b1-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b1-finetuned-ade-512-512")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
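The logits come out at 1/4 of the input resolution. A common follow-up step, sketched below by reusing the `outputs` and `image` objects from the snippet above (the interpolation step is not part of the original example), is to upsample them to the original image size and take the per-pixel argmax:
```python
from torch import nn

# upsample the low-resolution logits back to the input image size
# (PIL's image.size is (width, height), so reverse it for torch)
upsampled_logits = nn.functional.interpolate(
    outputs.logits,
    size=image.size[::-1],  # (height, width)
    mode="bilinear",
    align_corners=False,
)

# per-pixel ADE20k class indices, shape (height, width)
predicted_segmentation = upsampled_logits.argmax(dim=1)[0]
```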
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
jonathandinu/face-parsing |
# Face Parsing

[Semantic segmentation](https://huggingface.co/docs/transformers/tasks/semantic_segmentation) model fine-tuned from [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) with [CelebAMask-HQ](https://github.com/switchablenorms/CelebAMask-HQ) for face parsing. For additional options, see the Transformers [Segformer docs](https://huggingface.co/docs/transformers/model_doc/segformer).
> ONNX model for web inference contributed by [Xenova](https://huggingface.co/Xenova).
## Usage in Python
An exhaustive list of labels can be extracted from [config.json](https://huggingface.co/jonathandinu/face-parsing/blob/65972ac96180b397f86fda0980bbe68e6ee01b8f/config.json#L30).
| id | label | note |
| :-: | :--------- | :---------------- |
| 0 | background | |
| 1 | skin | |
| 2 | nose | |
| 3 | eye_g | eyeglasses |
| 4 | l_eye | left eye |
| 5 | r_eye | right eye |
| 6 | l_brow | left eyebrow |
| 7 | r_brow | right eyebrow |
| 8 | l_ear | left ear |
| 9 | r_ear | right ear |
| 10 | mouth | area between lips |
| 11 | u_lip | upper lip |
| 12 | l_lip | lower lip |
| 13 | hair | |
| 14 | hat | |
| 15 | ear_r | earring |
| 16 | neck_l | necklace |
| 17 | neck | |
| 18 | cloth | clothing |
```python
import torch
from torch import nn
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation
from PIL import Image
import matplotlib.pyplot as plt
import requests
# convenience expression for automatically determining device
device = (
"cuda"
# Device for NVIDIA or AMD GPUs
if torch.cuda.is_available()
else "mps"
# Device for Apple Silicon (Metal Performance Shaders)
if torch.backends.mps.is_available()
else "cpu"
)
# load models
image_processor = SegformerImageProcessor.from_pretrained("jonathandinu/face-parsing")
model = SegformerForSemanticSegmentation.from_pretrained("jonathandinu/face-parsing")
model.to(device)
# expects a PIL.Image or torch.Tensor
url = "https://images.unsplash.com/photo-1539571696357-5a69c17a67c6"
image = Image.open(requests.get(url, stream=True).raw)
# run inference on image
inputs = image_processor(images=image, return_tensors="pt").to(device)
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, ~height/4, ~width/4)
# resize output to match input image dimensions
upsampled_logits = nn.functional.interpolate(logits,
size=image.size[::-1], # H x W
mode='bilinear',
align_corners=False)
# get label masks
labels = upsampled_logits.argmax(dim=1)[0]
# move to CPU to visualize in matplotlib
labels_viz = labels.cpu().numpy()
plt.imshow(labels_viz)
plt.show()
```
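To isolate a single facial region, you can threshold the `labels` tensor above on an id from the table (a minimal sketch; the value 1 for skin comes from the table, and `skin_mask.png` is only an illustrative output path):
```python
import numpy as np

# binary mask for one class, using the id -> label table above (1 == skin)
skin_mask = (labels == 1).cpu().numpy().astype(np.uint8) * 255

# save as a grayscale PNG
Image.fromarray(skin_mask).save("skin_mask.png")
```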
## Usage in the browser (Transformers.js)
```js
import {
pipeline,
env,
} from "https://cdn.jsdelivr.net/npm/@xenova/[email protected]";
// important to prevent errors since the model files are likely remote on HF hub
env.allowLocalModels = false;
// instantiate image segmentation pipeline with pretrained face parsing model
const model = await pipeline("image-segmentation", "jonathandinu/face-parsing");

// image to segment (same example image as the Python snippet above)
const url = "https://images.unsplash.com/photo-1539571696357-5a69c17a67c6";

// async inference since it could take a few seconds
const output = await model(url);
// each label is a separate mask object
// [
// { score: null, label: 'background', mask: transformers.js RawImage { ... }}
// { score: null, label: 'hair', mask: transformers.js RawImage { ... }}
// ...
// ]
for (const m of output) {
  console.log(`Found ${m.label}`);
  m.mask.save(`${m.label}.png`);
}
```
### p5.js
Since [p5.js](https://p5js.org/) uses an animation loop abstraction, we need to take care when loading the model and making predictions.
```js
// ...
// asynchronously load transformers.js and instantiate model
async function preload() {
// load transformers.js library with a dynamic import
const { pipeline, env } = await import(
"https://cdn.jsdelivr.net/npm/@xenova/[email protected]"
);
// important to prevent errors since the model files are remote on HF hub
env.allowLocalModels = false;
// instantiate image segmentation pipeline with pretrained face parsing model
model = await pipeline("image-segmentation", "jonathandinu/face-parsing");
print("face-parsing model loaded");
}
// ...
```
[full p5.js example](https://editor.p5js.org/jonathan.ai/sketches/wZn15Dvgh)
### Model Description
- **Developed by:** [Jonathan Dinu](https://twitter.com/jonathandinu)
- **Model type:** Transformer-based semantic segmentation image model
- **License:** non-commercial research and educational purposes
- **Resources for more information:** Transformers docs on [Segformer](https://huggingface.co/docs/transformers/model_doc/segformer) and/or the [original research paper](https://arxiv.org/abs/2105.15203).
## Limitations and Bias
### Bias
While the capabilities of computer vision models are impressive, they can also reinforce or exacerbate social biases. The [CelebAMask-HQ](https://github.com/switchablenorms/CelebAMask-HQ) dataset used for fine-tuning is large but not necessarily perfectly diverse or representative. Also, the images are exclusively of celebrities.
| [
"background",
"skin",
"nose",
"eye_g",
"l_eye",
"r_eye",
"l_brow",
"r_brow",
"l_ear",
"r_ear",
"mouth",
"u_lip",
"l_lip",
"hair",
"hat",
"ear_r",
"neck_l",
"neck",
"cloth"
] |
facebook/mask2former-swin-tiny-coco-instance |
# Mask2Former
Mask2Former model trained on COCO instance segmentation (tiny-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA, [MaskFormer](https://arxiv.org/abs/2107.06278), in terms of both performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for instance segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on COCO instance segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-tiny-coco-instance")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-tiny-coco-instance")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_instance_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_instance_map = result["segmentation"]
```
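Besides the instance map, `result` also contains per-instance metadata under `segments_info`; the short sketch below assumes a recent `transformers` version, where each entry carries an `id`, `label_id` and `score`:
```python
# print one line per detected instance
for segment in result["segments_info"]:
    label = model.config.id2label[segment["label_id"]]
    print(f"instance {segment['id']}: {label} (score {segment['score']:.2f})")
```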
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"person",
"bicycle",
"car",
"motorbike",
"aeroplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"sofa",
"pottedplant",
"bed",
"diningtable",
"toilet",
"tvmonitor",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
facebook/mask2former-swin-large-ade-semantic |
# Mask2Former
Mask2Former model trained on ADE20k semantic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA, [MaskFormer](https://arxiv.org/abs/2107.06278), in terms of both performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on ADE20k semantic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-ade-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-large-ade-semantic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
```
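The returned `predicted_semantic_map` is a 2D tensor of per-pixel ADE20k class indices; a quick way to inspect it visually is sketched below using matplotlib, which the original example does not import:
```python
import matplotlib.pyplot as plt

# each pixel value is an ADE20k class index
plt.imshow(predicted_semantic_map.cpu().numpy())
plt.axis("off")
plt.show()
```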
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
isjackwild/segformer-b0-finetuned-segments-skin-hair-clothing | # Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"background",
"skin",
"hair",
"clothing"
] |
facebook/detr-resnet-101-panoptic |
# DETR (End-to-End Object Detection) model with ResNet-101 backbone
DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).
Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and an MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.
The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
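The Hungarian matching step can be illustrated with a toy cost matrix; the sketch below uses SciPy's `linear_sum_assignment` purely as an illustration of the one-to-one assignment and is not DETR's actual loss code (the real cost combines classification, L1 and generalized IoU terms):
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# toy cost matrix: rows = object queries, columns = (padded) ground-truth annotations,
# lower cost = better match
cost_matrix = np.array([
    [0.2, 0.9, 0.8],
    [0.7, 0.1, 0.9],
    [0.6, 0.8, 0.3],
])

# Hungarian algorithm: optimal one-to-one assignment between queries and annotations
query_idx, annotation_idx = linear_sum_assignment(cost_matrix)
for q, a in zip(query_idx, annotation_idx):
    print(f"query {q} matched to annotation {a} (cost {cost_matrix[q, a]:.1f})")
```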
DETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs.
## Intended uses & limitations
You can use the raw model for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models.
### How to use
Here is how to use this model:
```python
import io
import requests
import torch
import numpy
from PIL import Image
from transformers import DetrFeatureExtractor, DetrForSegmentation
from transformers.models.detr.feature_extraction_detr import rgb_to_id
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-101-panoptic')
model = DetrForSegmentation.from_pretrained('facebook/detr-resnet-101-panoptic')
# prepare inputs for the model
inputs = feature_extractor(images=image, return_tensors="pt")
# forward pass
outputs = model(**inputs)
# use the `post_process_panoptic` method of `DetrFeatureExtractor` to convert to COCO format
processed_sizes = torch.as_tensor(inputs["pixel_values"].shape[-2:]).unsqueeze(0)
result = feature_extractor.post_process_panoptic(outputs, processed_sizes)[0]
# the segmentation is stored in a special-format png
panoptic_seg = Image.open(io.BytesIO(result["png_string"]))
panoptic_seg = numpy.array(panoptic_seg, dtype=numpy.uint8)
# retrieve the ids corresponding to each mask
panoptic_seg_id = rgb_to_id(panoptic_seg)
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The DETR model was trained on [COCO 2017 panoptic](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/detr/blob/master/datasets/coco_panoptic.py).
Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
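For reference, a rough torchvision equivalent of that preprocessing for a single image might look like the sketch below; this is an approximation and not the repository's exact transform pipeline, and `example.jpg` is only a placeholder path:
```python
from PIL import Image
import torchvision.transforms as T

# resize so the shorter side is ~800 px while capping the longer side at 1333 px,
# then normalize with the ImageNet mean and standard deviation
preprocess = T.Compose([
    T.Resize(800, max_size=1333),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

pixel_values = preprocess(Image.open("example.jpg").convert("RGB"))
```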
### Training
The model was trained for 300 epochs on 16 V100 GPUs. Training took 3 days, with 4 images per GPU (hence a total batch size of 64).
## Evaluation results
This model achieves the following results on COCO 2017 validation: a box AP (average precision) of **40.1**, a segmentation AP (average precision) of **33** and a PQ (panoptic quality) of **45.1**.
For more details regarding evaluation results, we refer to table 5 of the original paper.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2005-12872,
author = {Nicolas Carion and
Francisco Massa and
Gabriel Synnaeve and
Nicolas Usunier and
Alexander Kirillov and
Sergey Zagoruyko},
title = {End-to-End Object Detection with Transformers},
journal = {CoRR},
volume = {abs/2005.12872},
year = {2020},
url = {https://arxiv.org/abs/2005.12872},
archivePrefix = {arXiv},
eprint = {2005.12872},
timestamp = {Thu, 28 May 2020 17:38:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
"n/a",
"person",
"traffic light",
"cardboard",
"carpet",
"ceiling-other",
"ceiling-tile",
"cloth",
"clothes",
"clouds",
"counter",
"cupboard",
"curtain",
"fire hydrant",
"desk-stuff",
"dirt",
"door-stuff",
"fence",
"floor-marble",
"floor-other",
"floor-stone",
"floor-tile",
"floor-wood",
"flower",
"street sign",
"fog",
"food-other",
"fruit",
"furniture-other",
"grass",
"gravel",
"ground-other",
"hill",
"house",
"leaves",
"stop sign",
"light",
"mat",
"metal",
"mirror-stuff",
"moss",
"mountain",
"mud",
"napkin",
"net",
"paper",
"parking meter",
"pavement",
"pillow",
"plant-other",
"plastic",
"platform",
"playingfield",
"railing",
"railroad",
"river",
"road",
"bench",
"rock",
"roof",
"rug",
"salad",
"sand",
"sea",
"shelf",
"sky-other",
"skyscraper",
"snow",
"bird",
"solid-other",
"stairs",
"stone",
"straw",
"structural-other",
"table",
"tent",
"textile-other",
"towel",
"tree",
"cat",
"vegetable",
"wall-brick",
"wall-concrete",
"wall-other",
"wall-panel",
"wall-stone",
"wall-tile",
"wall-wood",
"water-other",
"waterdrops",
"dog",
"window-blind",
"window-other",
"wood",
"label_183",
"label_184",
"label_185",
"label_186",
"label_187",
"label_188",
"label_189",
"horse",
"label_190",
"label_191",
"label_192",
"label_193",
"label_194",
"label_195",
"label_196",
"label_197",
"label_198",
"label_199",
"bicycle",
"sheep",
"label_200",
"label_201",
"label_202",
"label_203",
"label_204",
"label_205",
"label_206",
"label_207",
"label_208",
"label_209",
"cow",
"label_210",
"label_211",
"label_212",
"label_213",
"label_214",
"label_215",
"label_216",
"label_217",
"label_218",
"label_219",
"elephant",
"label_220",
"label_221",
"label_222",
"label_223",
"label_224",
"label_225",
"label_226",
"label_227",
"label_228",
"label_229",
"bear",
"label_230",
"label_231",
"label_232",
"label_233",
"label_234",
"label_235",
"label_236",
"label_237",
"label_238",
"label_239",
"zebra",
"label_240",
"label_241",
"label_242",
"label_243",
"label_244",
"label_245",
"label_246",
"label_247",
"label_248",
"label_249",
"giraffe",
"hat",
"backpack",
"umbrella",
"shoe",
"car",
"eye glasses",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"motorcycle",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"plate",
"wine glass",
"cup",
"fork",
"knife",
"airplane",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"bus",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"mirror",
"dining table",
"window",
"desk",
"train",
"toilet",
"door",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"truck",
"toaster",
"sink",
"refrigerator",
"blender",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"boat",
"toothbrush",
"hair brush",
"banner",
"blanket",
"branch",
"bridge",
"building-other",
"bush",
"cabinet",
"cage"
] |
facebook/detr-resnet-50-dc5-panoptic |
# DETR (End-to-End Object Detection) model with ResNet-50 backbone (dilated C5 stage)
DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).
Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and an MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.
The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
DETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs.
## Intended uses & limitations
You can use the raw model for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models.
### How to use
Here is how to use this model:
```python
from transformers import DetrFeatureExtractor, DetrForSegmentation
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-50-dc5-panoptic')
model = DetrForSegmentation.from_pretrained('facebook/detr-resnet-50-dc5-panoptic')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts COCO classes, bounding boxes, and masks
logits = outputs.logits
bboxes = outputs.pred_boxes
masks = outputs.pred_masks
```
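As with the other DETR panoptic checkpoints on this page, the raw outputs can be turned into a COCO-format panoptic segmentation via `post_process_panoptic`; a sketch reusing the objects defined in the snippet above:
```python
import io
import torch
import numpy
from transformers.models.detr.feature_extraction_detr import rgb_to_id

# convert raw outputs to COCO panoptic format
processed_sizes = torch.as_tensor(inputs["pixel_values"].shape[-2:]).unsqueeze(0)
result = feature_extractor.post_process_panoptic(outputs, processed_sizes)[0]

# the segmentation is stored in a special-format png
panoptic_seg = Image.open(io.BytesIO(result["png_string"]))
panoptic_seg = numpy.array(panoptic_seg, dtype=numpy.uint8)

# retrieve the ids corresponding to each mask
panoptic_seg_id = rgb_to_id(panoptic_seg)
```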
Currently, both the feature extractor and model support PyTorch.
## Training data
The DETR model was trained on [COCO 2017 panoptic](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/detr/blob/master/datasets/coco_panoptic.py).
Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
### Training
The model was trained for 300 epochs on 16 V100 GPUs. Training took 3 days, with 4 images per GPU (hence a total batch size of 64).
## Evaluation results
This model achieves the following results on COCO 2017 validation: a box AP (average precision) of **40.2**, a segmentation AP (average precision) of **31.9** and a PQ (panoptic quality) of **44.6**.
For more details regarding evaluation results, we refer to table 5 of the original paper.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2005-12872,
author = {Nicolas Carion and
Francisco Massa and
Gabriel Synnaeve and
Nicolas Usunier and
Alexander Kirillov and
Sergey Zagoruyko},
title = {End-to-End Object Detection with Transformers},
journal = {CoRR},
volume = {abs/2005.12872},
year = {2020},
url = {https://arxiv.org/abs/2005.12872},
archivePrefix = {arXiv},
eprint = {2005.12872},
timestamp = {Thu, 28 May 2020 17:38:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
"n/a",
"person",
"traffic light",
"cardboard",
"carpet",
"ceiling-other",
"ceiling-tile",
"cloth",
"clothes",
"clouds",
"counter",
"cupboard",
"curtain",
"fire hydrant",
"desk-stuff",
"dirt",
"door-stuff",
"fence",
"floor-marble",
"floor-other",
"floor-stone",
"floor-tile",
"floor-wood",
"flower",
"street sign",
"fog",
"food-other",
"fruit",
"furniture-other",
"grass",
"gravel",
"ground-other",
"hill",
"house",
"leaves",
"stop sign",
"light",
"mat",
"metal",
"mirror-stuff",
"moss",
"mountain",
"mud",
"napkin",
"net",
"paper",
"parking meter",
"pavement",
"pillow",
"plant-other",
"plastic",
"platform",
"playingfield",
"railing",
"railroad",
"river",
"road",
"bench",
"rock",
"roof",
"rug",
"salad",
"sand",
"sea",
"shelf",
"sky-other",
"skyscraper",
"snow",
"bird",
"solid-other",
"stairs",
"stone",
"straw",
"structural-other",
"table",
"tent",
"textile-other",
"towel",
"tree",
"cat",
"vegetable",
"wall-brick",
"wall-concrete",
"wall-other",
"wall-panel",
"wall-stone",
"wall-tile",
"wall-wood",
"water-other",
"waterdrops",
"dog",
"window-blind",
"window-other",
"wood",
"label_183",
"label_184",
"label_185",
"label_186",
"label_187",
"label_188",
"label_189",
"horse",
"label_190",
"label_191",
"label_192",
"label_193",
"label_194",
"label_195",
"label_196",
"label_197",
"label_198",
"label_199",
"bicycle",
"sheep",
"label_200",
"label_201",
"label_202",
"label_203",
"label_204",
"label_205",
"label_206",
"label_207",
"label_208",
"label_209",
"cow",
"label_210",
"label_211",
"label_212",
"label_213",
"label_214",
"label_215",
"label_216",
"label_217",
"label_218",
"label_219",
"elephant",
"label_220",
"label_221",
"label_222",
"label_223",
"label_224",
"label_225",
"label_226",
"label_227",
"label_228",
"label_229",
"bear",
"label_230",
"label_231",
"label_232",
"label_233",
"label_234",
"label_235",
"label_236",
"label_237",
"label_238",
"label_239",
"zebra",
"label_240",
"label_241",
"label_242",
"label_243",
"label_244",
"label_245",
"label_246",
"label_247",
"label_248",
"label_249",
"giraffe",
"hat",
"backpack",
"umbrella",
"shoe",
"car",
"eye glasses",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"motorcycle",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"plate",
"wine glass",
"cup",
"fork",
"knife",
"airplane",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"bus",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"mirror",
"dining table",
"window",
"desk",
"train",
"toilet",
"door",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"truck",
"toaster",
"sink",
"refrigerator",
"blender",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"boat",
"toothbrush",
"hair brush",
"banner",
"blanket",
"branch",
"bridge",
"building-other",
"bush",
"cabinet",
"cage"
] |
facebook/detr-resnet-50-panoptic |
# DETR (End-to-End Object Detection) model with ResNet-50 backbone
DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).
Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and an MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.
The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
DETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs.

## Intended uses & limitations
You can use the raw model for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models.
### How to use
Here is how to use this model:
```python
import io
import requests
from PIL import Image
import torch
import numpy
from transformers import DetrFeatureExtractor, DetrForSegmentation
from transformers.models.detr.feature_extraction_detr import rgb_to_id
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50-panoptic")
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")
# prepare image for the model
inputs = feature_extractor(images=image, return_tensors="pt")
# forward pass
outputs = model(**inputs)
# use the `post_process_panoptic` method of `DetrFeatureExtractor` to convert to COCO format
processed_sizes = torch.as_tensor(inputs["pixel_values"].shape[-2:]).unsqueeze(0)
result = feature_extractor.post_process_panoptic(outputs, processed_sizes)[0]
# the segmentation is stored in a special-format png
panoptic_seg = Image.open(io.BytesIO(result["png_string"]))
panoptic_seg = numpy.array(panoptic_seg, dtype=numpy.uint8)
# retrieve the ids corresponding to each mask
panoptic_seg_id = rgb_to_id(panoptic_seg)
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The DETR model was trained on [COCO 2017 panoptic](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/detr/blob/master/datasets/coco_panoptic.py).
Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
### Training
The model was trained for 300 epochs on 16 V100 GPUs. Training took 3 days, with 4 images per GPU (hence a total batch size of 64).
## Evaluation results
This model achieves the following results on COCO 2017 validation: a box AP (average precision) of **38.8**, a segmentation AP (average precision) of **31.1** and a PQ (panoptic quality) of **43.4**.
For more details regarding evaluation results, we refer to table 5 of the original paper.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2005-12872,
author = {Nicolas Carion and
Francisco Massa and
Gabriel Synnaeve and
Nicolas Usunier and
Alexander Kirillov and
Sergey Zagoruyko},
title = {End-to-End Object Detection with Transformers},
journal = {CoRR},
volume = {abs/2005.12872},
year = {2020},
url = {https://arxiv.org/abs/2005.12872},
archivePrefix = {arXiv},
eprint = {2005.12872},
timestamp = {Thu, 28 May 2020 17:38:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
"n/a",
"person",
"traffic light",
"cardboard",
"carpet",
"ceiling-other",
"ceiling-tile",
"cloth",
"clothes",
"clouds",
"counter",
"cupboard",
"curtain",
"fire hydrant",
"desk-stuff",
"dirt",
"door-stuff",
"fence",
"floor-marble",
"floor-other",
"floor-stone",
"floor-tile",
"floor-wood",
"flower",
"street sign",
"fog",
"food-other",
"fruit",
"furniture-other",
"grass",
"gravel",
"ground-other",
"hill",
"house",
"leaves",
"stop sign",
"light",
"mat",
"metal",
"mirror-stuff",
"moss",
"mountain",
"mud",
"napkin",
"net",
"paper",
"parking meter",
"pavement",
"pillow",
"plant-other",
"plastic",
"platform",
"playingfield",
"railing",
"railroad",
"river",
"road",
"bench",
"rock",
"roof",
"rug",
"salad",
"sand",
"sea",
"shelf",
"sky-other",
"skyscraper",
"snow",
"bird",
"solid-other",
"stairs",
"stone",
"straw",
"structural-other",
"table",
"tent",
"textile-other",
"towel",
"tree",
"cat",
"vegetable",
"wall-brick",
"wall-concrete",
"wall-other",
"wall-panel",
"wall-stone",
"wall-tile",
"wall-wood",
"water-other",
"waterdrops",
"dog",
"window-blind",
"window-other",
"wood",
"label_183",
"label_184",
"label_185",
"label_186",
"label_187",
"label_188",
"label_189",
"horse",
"label_190",
"label_191",
"label_192",
"label_193",
"label_194",
"label_195",
"label_196",
"label_197",
"label_198",
"label_199",
"bicycle",
"sheep",
"label_200",
"label_201",
"label_202",
"label_203",
"label_204",
"label_205",
"label_206",
"label_207",
"label_208",
"label_209",
"cow",
"label_210",
"label_211",
"label_212",
"label_213",
"label_214",
"label_215",
"label_216",
"label_217",
"label_218",
"label_219",
"elephant",
"label_220",
"label_221",
"label_222",
"label_223",
"label_224",
"label_225",
"label_226",
"label_227",
"label_228",
"label_229",
"bear",
"label_230",
"label_231",
"label_232",
"label_233",
"label_234",
"label_235",
"label_236",
"label_237",
"label_238",
"label_239",
"zebra",
"label_240",
"label_241",
"label_242",
"label_243",
"label_244",
"label_245",
"label_246",
"label_247",
"label_248",
"label_249",
"giraffe",
"hat",
"backpack",
"umbrella",
"shoe",
"car",
"eye glasses",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"motorcycle",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"plate",
"wine glass",
"cup",
"fork",
"knife",
"airplane",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"bus",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"mirror",
"dining table",
"window",
"desk",
"train",
"toilet",
"door",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"truck",
"toaster",
"sink",
"refrigerator",
"blender",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"boat",
"toothbrush",
"hair brush",
"banner",
"blanket",
"branch",
"bridge",
"building-other",
"bush",
"cabinet",
"cage"
] |
facebook/maskformer-swin-base-ade |
# MaskFormer
MaskFormer model trained on ADE20k semantic segmentation (base-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-base-ade")
inputs = feature_extractor(images=image, return_tensors="pt")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_semantic_map = feature_extractor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road, route",
"bed",
"window ",
"grass",
"cabinet",
"sidewalk, pavement",
"person",
"earth, ground",
"door",
"table",
"mountain, mount",
"plant",
"curtain",
"chair",
"car",
"water",
"painting, picture",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock, stone",
"wardrobe, closet, press",
"lamp",
"tub",
"rail",
"cushion",
"base, pedestal, stand",
"box",
"column, pillar",
"signboard, sign",
"chest of drawers, chest, bureau, dresser",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator, icebox",
"grandstand, covered stand",
"path",
"stairs",
"runway",
"case, display case, showcase, vitrine",
"pool table, billiard table, snooker table",
"pillow",
"screen door, screen",
"stairway, staircase",
"river",
"bridge, span",
"bookcase",
"blind, screen",
"coffee table",
"toilet, can, commode, crapper, pot, potty, stool, throne",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm, palm tree",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel, hut, hutch, shack, shanty",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning, sunshade, sunblind",
"street lamp",
"booth",
"tv",
"plane",
"dirt track",
"clothes",
"pole",
"land, ground, soil",
"bannister, banister, balustrade, balusters, handrail",
"escalator, moving staircase, moving stairway",
"ottoman, pouf, pouffe, puff, hassock",
"bottle",
"buffet, counter, sideboard",
"poster, posting, placard, notice, bill, card",
"stage",
"van",
"ship",
"fountain",
"conveyer belt, conveyor belt, conveyer, conveyor, transporter",
"canopy",
"washer, automatic washer, washing machine",
"plaything, toy",
"pool",
"stool",
"barrel, cask",
"basket, handbasket",
"falls",
"tent",
"bag",
"minibike, motorbike",
"cradle",
"oven",
"ball",
"food, solid food",
"step, stair",
"tank, storage tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket, cover",
"sculpture",
"hood, exhaust hood",
"sconce",
"vase",
"traffic light",
"tray",
"trash can",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass, drinking glass",
"clock",
"flag"
] |
facebook/maskformer-swin-base-coco |
# MaskFormer
MaskFormer model trained on COCO panoptic segmentation (base-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
# load MaskFormer fine-tuned on COCO panoptic segmentation
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-base-coco")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
result = feature_extractor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_panoptic_map = result["segmentation"]
```
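To interpret the ids in `predicted_panoptic_map`, the `segments_info` entries in `result` can be mapped to COCO label names; this short sketch assumes the keys returned by recent `transformers` versions:
```python
# map each segment id in the panoptic map to a human-readable label
id_to_label = {
    segment["id"]: model.config.id2label[segment["label_id"]]
    for segment in result["segments_info"]
}
print(id_to_label)
```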
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). | [
"person",
"bicycle",
"car",
"motorcycle",
"airplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"dining table",
"toilet",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush",
"banner",
"blanket",
"bridge",
"cardboard",
"counter",
"curtain",
"door-stuff",
"floor-wood",
"flower",
"fruit",
"gravel",
"house",
"light",
"mirror-stuff",
"net",
"pillow",
"platform",
"playingfield",
"railroad",
"river",
"road",
"roof",
"sand",
"sea",
"shelf",
"snow",
"stairs",
"tent",
"towel",
"wall-brick",
"wall-stone",
"wall-tile",
"wall-wood",
"water-other",
"window-blind",
"window-other",
"tree-merged",
"fence-merged",
"ceiling-merged",
"sky-other-merged",
"cabinet-merged",
"table-merged",
"floor-other-merged",
"pavement-merged",
"mountain-merged",
"grass-merged",
"dirt-merged",
"paper-merged",
"food-other-merged",
"building-other-merged",
"rock-merged",
"wall-other-merged",
"rug-merged"
] |
facebook/maskformer-swin-large-ade |
# MaskFormer
MaskFormer model trained on ADE20k semantic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerImageProcessor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processor = MaskFormerImageProcessor.from_pretrained("facebook/maskformer-swin-large-ade")
inputs = processor(images=image, return_tensors="pt")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-large-ade")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```
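As a quick sanity check, you can map the predicted ids back to ADE20k class names via the model config. The following is a minimal sketch (it is not taken from the original repository) that reuses `predicted_semantic_map` and `model` from the snippet above:
```python
# minimal sketch: list the ADE20k classes present in the predicted semantic map
import torch

present_ids = torch.unique(predicted_semantic_map).tolist()
print([model.config.id2label[int(i)] for i in present_ids])
```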
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road, route",
"bed",
"window ",
"grass",
"cabinet",
"sidewalk, pavement",
"person",
"earth, ground",
"door",
"table",
"mountain, mount",
"plant",
"curtain",
"chair",
"car",
"water",
"painting, picture",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock, stone",
"wardrobe, closet, press",
"lamp",
"tub",
"rail",
"cushion",
"base, pedestal, stand",
"box",
"column, pillar",
"signboard, sign",
"chest of drawers, chest, bureau, dresser",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator, icebox",
"grandstand, covered stand",
"path",
"stairs",
"runway",
"case, display case, showcase, vitrine",
"pool table, billiard table, snooker table",
"pillow",
"screen door, screen",
"stairway, staircase",
"river",
"bridge, span",
"bookcase",
"blind, screen",
"coffee table",
"toilet, can, commode, crapper, pot, potty, stool, throne",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm, palm tree",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel, hut, hutch, shack, shanty",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning, sunshade, sunblind",
"street lamp",
"booth",
"tv",
"plane",
"dirt track",
"clothes",
"pole",
"land, ground, soil",
"bannister, banister, balustrade, balusters, handrail",
"escalator, moving staircase, moving stairway",
"ottoman, pouf, pouffe, puff, hassock",
"bottle",
"buffet, counter, sideboard",
"poster, posting, placard, notice, bill, card",
"stage",
"van",
"ship",
"fountain",
"conveyer belt, conveyor belt, conveyer, conveyor, transporter",
"canopy",
"washer, automatic washer, washing machine",
"plaything, toy",
"pool",
"stool",
"barrel, cask",
"basket, handbasket",
"falls",
"tent",
"bag",
"minibike, motorbike",
"cradle",
"oven",
"ball",
"food, solid food",
"step, stair",
"tank, storage tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket, cover",
"sculpture",
"hood, exhaust hood",
"sconce",
"vase",
"traffic light",
"tray",
"trash can",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass, drinking glass",
"clock",
"flag"
] |
facebook/maskformer-swin-large-coco |
# MaskFormer
MaskFormer model trained on COCO panoptic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerImageProcessor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
# load MaskFormer fine-tuned on COCO panoptic segmentation
processor = MaskFormerImageProcessor.from_pretrained("facebook/maskformer-swin-large-coco")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-large-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_panoptic_map = result["segmentation"]
```
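Besides the panoptic map itself, the post-processing result also carries per-segment metadata. The sketch below is not taken from the original repository; it assumes the result dictionary exposes a `segments_info` list with `id`, `label_id` and `score` fields, as in recent transformers releases:
```python
# minimal sketch: print one line per predicted segment
for segment in result["segments_info"]:
    label = model.config.id2label[segment["label_id"]]
    print(f"segment {segment['id']}: {label} (score {segment['score']:.2f})")
```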
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). | [
"person",
"bicycle",
"car",
"motorcycle",
"airplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"dining table",
"toilet",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush",
"banner",
"blanket",
"bridge",
"cardboard",
"counter",
"curtain",
"door-stuff",
"floor-wood",
"flower",
"fruit",
"gravel",
"house",
"light",
"mirror-stuff",
"net",
"pillow",
"platform",
"playingfield",
"railroad",
"river",
"road",
"roof",
"sand",
"sea",
"shelf",
"snow",
"stairs",
"tent",
"towel",
"wall-brick",
"wall-stone",
"wall-tile",
"wall-wood",
"water-other",
"window-blind",
"window-other",
"tree-merged",
"fence-merged",
"ceiling-merged",
"sky-other-merged",
"cabinet-merged",
"table-merged",
"floor-other-merged",
"pavement-merged",
"mountain-merged",
"grass-merged",
"dirt-merged",
"paper-merged",
"food-other-merged",
"building-other-merged",
"rock-merged",
"wall-other-merged",
"rug-merged"
] |
facebook/maskformer-swin-small-ade |
# MaskFormer
MaskFormer model trained on ADE20k semantic segmentation (small-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-small-ade")
inputs = feature_extractor(images=image, return_tensors="pt")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-small-ade")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_semantic_map = feature_extractor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road, route",
"bed",
"window ",
"grass",
"cabinet",
"sidewalk, pavement",
"person",
"earth, ground",
"door",
"table",
"mountain, mount",
"plant",
"curtain",
"chair",
"car",
"water",
"painting, picture",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock, stone",
"wardrobe, closet, press",
"lamp",
"tub",
"rail",
"cushion",
"base, pedestal, stand",
"box",
"column, pillar",
"signboard, sign",
"chest of drawers, chest, bureau, dresser",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator, icebox",
"grandstand, covered stand",
"path",
"stairs",
"runway",
"case, display case, showcase, vitrine",
"pool table, billiard table, snooker table",
"pillow",
"screen door, screen",
"stairway, staircase",
"river",
"bridge, span",
"bookcase",
"blind, screen",
"coffee table",
"toilet, can, commode, crapper, pot, potty, stool, throne",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm, palm tree",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel, hut, hutch, shack, shanty",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning, sunshade, sunblind",
"street lamp",
"booth",
"tv",
"plane",
"dirt track",
"clothes",
"pole",
"land, ground, soil",
"bannister, banister, balustrade, balusters, handrail",
"escalator, moving staircase, moving stairway",
"ottoman, pouf, pouffe, puff, hassock",
"bottle",
"buffet, counter, sideboard",
"poster, posting, placard, notice, bill, card",
"stage",
"van",
"ship",
"fountain",
"conveyer belt, conveyor belt, conveyer, conveyor, transporter",
"canopy",
"washer, automatic washer, washing machine",
"plaything, toy",
"pool",
"stool",
"barrel, cask",
"basket, handbasket",
"falls",
"tent",
"bag",
"minibike, motorbike",
"cradle",
"oven",
"ball",
"food, solid food",
"step, stair",
"tank, storage tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket, cover",
"sculpture",
"hood, exhaust hood",
"sconce",
"vase",
"traffic light",
"tray",
"trash can",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass, drinking glass",
"clock",
"flag"
] |
facebook/maskformer-swin-small-coco |
# MaskFormer
MaskFormer model trained on COCO panoptic segmentation (small-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
# load MaskFormer fine-tuned on COCO panoptic segmentation
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-small-coco")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-small-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
result = feature_extractor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_panoptic_map = result["segmentation"]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). | [
"person",
"bicycle",
"car",
"motorcycle",
"airplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"dining table",
"toilet",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush",
"banner",
"blanket",
"bridge",
"cardboard",
"counter",
"curtain",
"door-stuff",
"floor-wood",
"flower",
"fruit",
"gravel",
"house",
"light",
"mirror-stuff",
"net",
"pillow",
"platform",
"playingfield",
"railroad",
"river",
"road",
"roof",
"sand",
"sea",
"shelf",
"snow",
"stairs",
"tent",
"towel",
"wall-brick",
"wall-stone",
"wall-tile",
"wall-wood",
"water-other",
"window-blind",
"window-other",
"tree-merged",
"fence-merged",
"ceiling-merged",
"sky-other-merged",
"cabinet-merged",
"table-merged",
"floor-other-merged",
"pavement-merged",
"mountain-merged",
"grass-merged",
"dirt-merged",
"paper-merged",
"food-other-merged",
"building-other-merged",
"rock-merged",
"wall-other-merged",
"rug-merged"
] |
facebook/maskformer-swin-tiny-ade |
# MaskFormer
MaskFormer model trained on ADE20k semantic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-tiny-ade")
inputs = feature_extractor(images=image, return_tensors="pt")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-tiny-ade")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_semantic_map = feature_extractor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road, route",
"bed",
"window ",
"grass",
"cabinet",
"sidewalk, pavement",
"person",
"earth, ground",
"door",
"table",
"mountain, mount",
"plant",
"curtain",
"chair",
"car",
"water",
"painting, picture",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock, stone",
"wardrobe, closet, press",
"lamp",
"tub",
"rail",
"cushion",
"base, pedestal, stand",
"box",
"column, pillar",
"signboard, sign",
"chest of drawers, chest, bureau, dresser",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator, icebox",
"grandstand, covered stand",
"path",
"stairs",
"runway",
"case, display case, showcase, vitrine",
"pool table, billiard table, snooker table",
"pillow",
"screen door, screen",
"stairway, staircase",
"river",
"bridge, span",
"bookcase",
"blind, screen",
"coffee table",
"toilet, can, commode, crapper, pot, potty, stool, throne",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm, palm tree",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel, hut, hutch, shack, shanty",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning, sunshade, sunblind",
"street lamp",
"booth",
"tv",
"plane",
"dirt track",
"clothes",
"pole",
"land, ground, soil",
"bannister, banister, balustrade, balusters, handrail",
"escalator, moving staircase, moving stairway",
"ottoman, pouf, pouffe, puff, hassock",
"bottle",
"buffet, counter, sideboard",
"poster, posting, placard, notice, bill, card",
"stage",
"van",
"ship",
"fountain",
"conveyer belt, conveyor belt, conveyer, conveyor, transporter",
"canopy",
"washer, automatic washer, washing machine",
"plaything, toy",
"pool",
"stool",
"barrel, cask",
"basket, handbasket",
"falls",
"tent",
"bag",
"minibike, motorbike",
"cradle",
"oven",
"ball",
"food, solid food",
"step, stair",
"tank, storage tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket, cover",
"sculpture",
"hood, exhaust hood",
"sconce",
"vase",
"traffic light",
"tray",
"trash can",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass, drinking glass",
"clock",
"flag"
] |
facebook/maskformer-swin-tiny-coco |
# MaskFormer
MaskFormer model trained on COCO panoptic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
# load MaskFormer fine-tuned on COCO panoptic segmentation
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-tiny-coco")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-tiny-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
result = feature_extractor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_panoptic_map = result["segmentation"]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). | [
"person",
"bicycle",
"car",
"motorcycle",
"airplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"dining table",
"toilet",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush",
"banner",
"blanket",
"bridge",
"cardboard",
"counter",
"curtain",
"door-stuff",
"floor-wood",
"flower",
"fruit",
"gravel",
"house",
"light",
"mirror-stuff",
"net",
"pillow",
"platform",
"playingfield",
"railroad",
"river",
"road",
"roof",
"sand",
"sea",
"shelf",
"snow",
"stairs",
"tent",
"towel",
"wall-brick",
"wall-stone",
"wall-tile",
"wall-wood",
"water-other",
"window-blind",
"window-other",
"tree-merged",
"fence-merged",
"ceiling-merged",
"sky-other-merged",
"cabinet-merged",
"table-merged",
"floor-other-merged",
"pavement-merged",
"mountain-merged",
"grass-merged",
"dirt-merged",
"paper-merged",
"food-other-merged",
"building-other-merged",
"rock-merged",
"wall-other-merged",
"rug-merged"
] |
microsoft/beit-base-finetuned-ade-640-640 |
# BEiT (base-sized model, fine-tuned on ADE20k)
BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on [ADE20k](http://sceneparsing.csail.mit.edu/) (an important benchmark for semantic segmentation of images) at resolution 640x640. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit).
Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches.
Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: for semantic segmentation, one can just add one of the decode heads available in the [mmseg library](https://github.com/open-mmlab/mmsegmentation) for example, and fine-tune the model in a supervised fashion on annotated images. This is what the authors did: they fine-tuned BEiT with an UperHead segmentation decode head, allowing it to obtain SOTA results on important benchmarks such as ADE20k and CityScapes.
## Intended uses & limitations
You can use the raw model for semantic segmentation of images. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model for semantic segmentation:
```python
from transformers import BeitFeatureExtractor, BeitForSemanticSegmentation
from datasets import load_dataset
from PIL import Image
# load ADE20k image
ds = load_dataset("hf-internal-testing/fixtures_ade20k", split="test")
image = Image.open(ds[0]['file'])
feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-finetuned-ade-640-640')
model = BeitForSemanticSegmentation.from_pretrained('microsoft/beit-base-finetuned-ade-640-640')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# logits are of shape (batch_size, num_labels, height/4, width/4)
logits = outputs.logits
```
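Since the logits come out at a quarter of the input resolution, a common follow-up step is to upsample them and take a per-pixel argmax. This is a minimal sketch (not part of the original card) that reuses the `image` and `logits` variables from the snippet above:
```python
# minimal sketch: upsample the logits to the input resolution and take the
# argmax to obtain a per-pixel ADE20k class map
import torch

upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation_map = upsampled.argmax(dim=1)[0]  # shape (height, width)
```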
Currently, both the feature extractor and model support PyTorch.
## Training data
This BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ADE20k](http://sceneparsing.csail.mit.edu/), a dataset consisting of thousands of annotated images and 150 classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py).
Images are cropped and padded to the same resolution (640x640) and normalized across the RGB channels with the ImageNet mean and standard deviation.
### Pretraining
For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254).
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-08254,
author = {Hangbo Bao and
Li Dong and
Furu Wei},
title = {BEiT: {BERT} Pre-Training of Image Transformers},
journal = {CoRR},
volume = {abs/2106.08254},
year = {2021},
url = {https://arxiv.org/abs/2106.08254},
archivePrefix = {arXiv},
eprint = {2106.08254},
timestamp = {Tue, 29 Jun 2021 16:55:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
microsoft/beit-large-finetuned-ade-640-640 |
# BEiT (large-sized model, fine-tuned on ADE20k)
BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on [ADE20k](https://huggingface.co/datasets/scene_parse_150) (an important benchmark for semantic segmentation of images) at resolution 640x640. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit).
Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches.
Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: for semantic segmentation, one can just add one of the decode heads available in the [mmseg library](https://github.com/open-mmlab/mmsegmentation) for example, and fine-tune the model in a supervised fashion on annotated images. This is what the authors did: they fine-tuned BEiT with an UperHead segmentation decode head, allowing it to obtain SOTA results on important benchmarks such as ADE20k and CityScapes.
## Intended uses & limitations
You can use the raw model for semantic segmentation of images. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model for semantic segmentation:
```python
from transformers import BeitFeatureExtractor, BeitForSemanticSegmentation
from datasets import load_dataset
from PIL import Image
# load ADE20k image
ds = load_dataset("hf-internal-testing/fixtures_ade20k", split="test")
image = Image.open(ds[0]['file'])
feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-finetuned-ade-640-640')
model = BeitForSemanticSegmentation.from_pretrained('microsoft/beit-large-finetuned-ade-640-640')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# logits are of shape (batch_size, num_labels, height/4, width/4)
logits = outputs.logits
```
Currently, both the feature extractor and model support PyTorch.
## Training data
This BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ADE20k](http://sceneparsing.csail.mit.edu/), a dataset consisting of thousands of annotated images and 150 classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py).
Images are cropped and padded to the same resolution (640x640) and normalized across the RGB channels with the ImageNet mean and standard deviation.
### Pretraining
For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254).
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-08254,
author = {Hangbo Bao and
Li Dong and
Furu Wei},
title = {BEiT: {BERT} Pre-Training of Image Transformers},
journal = {CoRR},
volume = {abs/2106.08254},
year = {2021},
url = {https://arxiv.org/abs/2106.08254},
archivePrefix = {arXiv},
eprint = {2106.08254},
timestamp = {Tue, 29 Jun 2021 16:55:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
Intel/dpt-large-ade |
# DPT (large-sized model) fine-tuned on ADE20k
The model performs semantic segmentation of input images, as illustrated in the table below:
| Input Image | Output Segmented Image |
| --- | --- |
|  | |
## Model description
The MiDaS 3.0-based Dense Prediction Transformer (DPT) model was trained on ADE20k for semantic segmentation. It was introduced in the paper
[Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by Ranftl et al. and first released in [this repository](https://github.com/isl-org/DPT).
The MiDaS v3.0 DPT uses the Vision Transformer (ViT) as backbone and adds a neck + head on top for semantic segmentation.

Disclaimer: The team releasing DPT did not write a model card for this model so this model card has been written by the Hugging Face team and the Intel AI Community.
## Results
According to the authors, at the time of publication, when applied to semantic segmentation, dense vision transformers set a new state of the art on
**ADE20K with 49.02% mIoU.**
The authors further show that the architecture can be fine-tuned on smaller datasets such as NYUv2, KITTI, and Pascal Context, where it also sets a new state of the art. The original models are available in the
[Intel DPT GitHub Repository](https://github.com/intel-isl/DPT).
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=dpt) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import DPTImageProcessor, DPTForSemanticSegmentation
from PIL import Image
import torch
import requests
url = "http://images.cocodataset.org/val2017/000000026204.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = DPTImageProcessor.from_pretrained("Intel/dpt-large-ade")
model = DPTForSemanticSegmentation.from_pretrained("Intel/dpt-large-ade")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
print(logits.shape)
prediction = torch.nn.functional.interpolate(
logits,
size=image.size[::-1], # Reverse the size of the original image (width, height)
mode="bicubic",
align_corners=False
)
# Convert logits to class predictions
prediction = torch.argmax(prediction, dim=1) + 1
# Squeeze the prediction tensor to remove dimensions
prediction = prediction.squeeze()
# Move the prediction tensor to the CPU and convert it to a numpy array
prediction = prediction.cpu().numpy()
# Convert the prediction array to an image
predicted_seg = Image.fromarray(prediction.astype('uint8'))
# Define the ADE20K palette
adepallete = [0,0,0,120,120,120,180,120,120,6,230,230,80,50,50,4,200,3,120,120,80,140,140,140,204,5,255,230,230,230,4,250,7,224,5,255,235,255,7,150,5,61,120,120,70,8,255,51,255,6,82,143,255,140,204,255,4,255,51,7,204,70,3,0,102,200,61,230,250,255,6,51,11,102,255,255,7,71,255,9,224,9,7,230,220,220,220,255,9,92,112,9,255,8,255,214,7,255,224,255,184,6,10,255,71,255,41,10,7,255,255,224,255,8,102,8,255,255,61,6,255,194,7,255,122,8,0,255,20,255,8,41,255,5,153,6,51,255,235,12,255,160,150,20,0,163,255,140,140,140,250,10,15,20,255,0,31,255,0,255,31,0,255,224,0,153,255,0,0,0,255,255,71,0,0,235,255,0,173,255,31,0,255,11,200,200,255,82,0,0,255,245,0,61,255,0,255,112,0,255,133,255,0,0,255,163,0,255,102,0,194,255,0,0,143,255,51,255,0,0,82,255,0,255,41,0,255,173,10,0,255,173,255,0,0,255,153,255,92,0,255,0,255,255,0,245,255,0,102,255,173,0,255,0,20,255,184,184,0,31,255,0,255,61,0,71,255,255,0,204,0,255,194,0,255,82,0,10,255,0,112,255,51,0,255,0,194,255,0,122,255,0,255,163,255,153,0,0,255,10,255,112,0,143,255,0,82,0,255,163,255,0,255,235,0,8,184,170,133,0,255,0,255,92,184,0,255,255,0,31,0,184,255,0,214,255,255,0,112,92,255,0,0,224,255,112,224,255,70,184,160,163,0,255,153,0,255,71,255,0,255,0,163,255,204,0,255,0,143,0,255,235,133,255,0,255,0,235,245,0,255,255,0,122,255,245,0,10,190,212,214,255,0,0,204,255,20,0,255,255,255,0,0,153,255,0,41,255,0,255,204,41,0,255,41,255,0,173,0,255,0,245,255,71,0,255,122,0,255,0,255,184,0,92,255,184,255,0,0,133,255,255,214,0,25,194,194,102,255,0,92,0,255]
# Apply the color map to the predicted segmentation image
predicted_seg.putpalette(adepallete)
# Blend the original image and the predicted segmentation image
out = Image.blend(image, predicted_seg.convert("RGB"), alpha=0.5)
out
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/dpt).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2103-13413,
author = {Ren{\'{e}} Ranftl and
Alexey Bochkovskiy and
Vladlen Koltun},
title = {Vision Transformers for Dense Prediction},
journal = {CoRR},
volume = {abs/2103.13413},
year = {2021},
url = {https://arxiv.org/abs/2103.13413},
eprinttype = {arXiv},
eprint = {2103.13413},
timestamp = {Wed, 07 Apr 2021 15:31:46 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2103-13413.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
nvidia/segformer-b0-finetuned-ade-512-512 |
# SegFormer (b0-sized) model fine-tuned on ADE20k
SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation
from PIL import Image
import requests
processor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
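If you prefer not to handle the upsampling yourself, the image processor can rescale the logits back to the input resolution. This is a minimal sketch (not part of the original card) and assumes `post_process_semantic_segmentation` is available on `SegformerImageProcessor` in your transformers version:
```python
# minimal sketch: let the processor rescale the logits to the input resolution
predicted_map = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]  # tensor of shape (height, width) holding ADE20k class ids
```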
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
nvidia/segformer-b0-finetuned-cityscapes-1024-1024 |
# SegFormer (b0-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 1024x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-1024-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-1024-1024")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
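To see which Cityscapes classes the model predicts for this image, you can take the per-pixel argmax of the logits and look the indices up in the model config. This is a minimal sketch (not part of the original card) that reuses `logits` and `model` from the snippet above:
```python
# minimal sketch: list the Cityscapes classes predicted for this image
# (still at 1/4 of the input resolution)
import torch

pred_ids = logits.argmax(dim=1)[0]
present = torch.unique(pred_ids).tolist()
print([model.config.id2label[i] for i in present])
```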
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
nvidia/segformer-b0-finetuned-cityscapes-512-1024 |
# SegFormer (b0-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 512x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-512-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-512-1024")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
nvidia/segformer-b0-finetuned-cityscapes-640-1280 |
# SegFormer (b0-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 640x1280. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-640-1280")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-640-1280")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
nvidia/segformer-b0-finetuned-cityscapes-768-768 |
# SegFormer (b0-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 768x768. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to segment an image from the COCO 2017 dataset into the 19 Cityscapes classes:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-768-768")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-768-768")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
nvidia/segformer-b1-finetuned-cityscapes-1024-1024 |
# SegFormer (b1-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 1024x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to segment an image from the COCO 2017 dataset into the 19 Cityscapes classes:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b1-finetuned-cityscapes-1024-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b1-finetuned-cityscapes-1024-1024")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
nvidia/segformer-b2-finetuned-ade-512-512 |
# SegFormer (b2-sized) model fine-tuned on ADE20k
SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to segment an image from the COCO 2017 dataset into the 150 ADE20k classes:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b2-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b2-finetuned-ade-512-512")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
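To turn the raw logits into human-readable classes, take the per-pixel argmax and look the indices up in the `id2label` mapping stored in the model config. A small sketch that reuses `logits` and `model` from the snippet above:
```python
import torch

# Per-pixel class indices over the 150 ADE20k classes, at logits resolution.
predictions = logits.argmax(dim=1)[0]

# List the label names that appear anywhere in the predicted map.
present_ids = torch.unique(predictions).tolist()
print([model.config.id2label[class_id] for class_id in present_ids])
```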
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
nvidia/segformer-b2-finetuned-cityscapes-1024-1024 |
# SegFormer (b2-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 1024x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to segment an image from the COCO 2017 dataset into the 19 Cityscapes classes:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b2-finetuned-cityscapes-1024-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b2-finetuned-cityscapes-1024-1024")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
nvidia/segformer-b3-finetuned-ade-512-512 |
# SegFormer (b3-sized) model fine-tuned on ADE20k
SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to segment an image from the COCO 2017 dataset into the 150 ADE20k classes:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b3-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b3-finetuned-ade-512-512")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
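If you only need a rough idea of what the model sees, you can compute how much of the image each predicted class covers. The sketch below works at the logits' quarter resolution, so the exact fractions shift slightly after upsampling:
```python
import torch

# Fraction of pixels assigned to each predicted ADE20k class.
predictions = logits.argmax(dim=1)[0]
class_ids, counts = torch.unique(predictions, return_counts=True)
for class_id, count in zip(class_ids.tolist(), counts.tolist()):
    print(f"{model.config.id2label[class_id]}: {count / predictions.numel():.1%}")
```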
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
nvidia/segformer-b3-finetuned-cityscapes-1024-1024 |
# SegFormer (b3-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 1024x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to segment an image from the COCO 2017 dataset into the 19 Cityscapes classes:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b3-finetuned-cityscapes-1024-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b3-finetuned-cityscapes-1024-1024")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
nvidia/segformer-b4-finetuned-ade-512-512 |
# SegFormer (b4-sized) model fine-tuned on ADE20k
SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to segment an image from the COCO 2017 dataset into the 150 ADE20k classes:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b4-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b4-finetuned-ade-512-512")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
nvidia/segformer-b4-finetuned-cityscapes-1024-1024 |
# SegFormer (b4-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 1024x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to segment an image from the COCO 2017 dataset into the 19 Cityscapes classes:
```python
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation
from PIL import Image
import requests
processor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b4-finetuned-cityscapes-1024-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b4-finetuned-cityscapes-1024-1024")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
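Since this snippet already uses `SegformerImageProcessor`, you can let it handle the upsampling and argmax; `post_process_semantic_segmentation` (available in recent versions of transformers) returns one segmentation map per image. A minimal sketch:
```python
# Resize the logits to the input resolution and take the per-pixel argmax.
segmentation = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]  # (height, width) for each image
)[0]
print(segmentation.shape)  # (height, width), values are Cityscapes class ids
```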
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
nvidia/segformer-b5-finetuned-ade-640-640 |
# SegFormer (b5-sized) model fine-tuned on ADE20k
SegFormer model fine-tuned on ADE20k at resolution 640x640. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to segment an image from the COCO 2017 dataset into the 150 ADE20k classes:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b5-finetuned-ade-640-640")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b5-finetuned-ade-640-640")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
nvidia/segformer-b5-finetuned-cityscapes-1024-1024 |
# SegFormer (b5-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 1024x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to segment an image from the COCO 2017 dataset into the 19 Cityscapes classes:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b5-finetuned-cityscapes-1024-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b5-finetuned-cityscapes-1024-1024")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
tobiasc/segformer-b0-finetuned-segments-sidewalk |
# SegFormer (b0-sized) model fine-tuned on Segments.ai sidewalk-semantic
SegFormer model fine-tuned on [Segments.ai](https://segments.ai) [`sidewalk-semantic`](https://huggingface.co/datasets/segments/sidewalk-semantic). It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
### How to use
Here is how to use this model to segment an image from the sidewalk dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")  # preprocessing config reused from the nvidia base checkpoint
model = SegformerForSemanticSegmentation.from_pretrained("segments-tobias/segformer-b0-finetuned-segments-sidewalk")
url = "https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
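A quick way to eyeball the prediction is to colour the per-pixel argmax with an arbitrary palette and save the mask; the palette and file name below are placeholders, not part of the original card:
```python
import numpy as np
import torch
from PIL import Image

# Upsample to the input size and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
seg = upsampled.argmax(dim=1)[0].numpy().astype(np.uint8)

# Colour each class with a fixed pseudo-random palette and save for inspection.
palette = np.random.RandomState(0).randint(0, 255, (256, 3), dtype=np.uint8)
Image.fromarray(palette[seg]).save("sidewalk_mask.png")
```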
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |