model_id | model_card | model_labels
---|---|---|
nvidia/segformer-b1-finetuned-ade-512-512 |
# SegFormer (b1-sized) model fine-tuned on ADE20k
SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and the whole model is fine-tuned on a downstream dataset.
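To make the pre-train-then-fine-tune recipe above concrete, here is a minimal sketch (not from the original card) that loads an ImageNet-1k pre-trained MiT-b1 encoder and attaches a freshly initialized decode head for a hypothetical two-class dataset; the `nvidia/mit-b1` checkpoint name and the label mapping are assumptions:
```python
from transformers import SegformerForSemanticSegmentation

# hypothetical label mapping for a downstream two-class dataset
id2label = {0: "background", 1: "foreground"}
label2id = {v: k for k, v in id2label.items()}

# load the ImageNet-1k pre-trained hierarchical encoder (assumed checkpoint name)
# and attach a randomly initialized all-MLP decode head sized for the new labels;
# the whole model is then fine-tuned on the downstream dataset
model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b1",
    num_labels=len(id2label),
    id2label=id2label,
    label2id=label2id,
)
```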
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to segment an image of the COCO 2017 dataset into one of the 150 ADE20k classes:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b1-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b1-finetuned-ade-512-512")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
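The logits come out at a quarter of the input resolution; a common follow-up (mirroring the face-parsing example further down this page, and an addition to the original card) is to upsample them to the input size and take the per-pixel argmax:
```python
import torch.nn.functional as F

# upsample to the original (height, width) and pick the most likely ADE20k class per pixel
upsampled_logits = F.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation_map = upsampled_logits.argmax(dim=1)[0]  # shape (height, width)
```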
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
jonathandinu/face-parsing |
# Face Parsing

[Semantic segmentation](https://huggingface.co/docs/transformers/tasks/semantic_segmentation) model fine-tuned from [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) with [CelebAMask-HQ](https://github.com/switchablenorms/CelebAMask-HQ) for face parsing. For additional options, see the Transformers [Segformer docs](https://huggingface.co/docs/transformers/model_doc/segformer).
> ONNX model for web inference contributed by [Xenova](https://huggingface.co/Xenova).
## Usage in Python
An exhaustive list of labels can be extracted from [config.json](https://huggingface.co/jonathandinu/face-parsing/blob/65972ac96180b397f86fda0980bbe68e6ee01b8f/config.json#L30).
| id | label | note |
| :-: | :--------- | :---------------- |
| 0 | background | |
| 1 | skin | |
| 2 | nose | |
| 3 | eye_g | eyeglasses |
| 4 | l_eye | left eye |
| 5 | r_eye | right eye |
| 6 | l_brow | left eyebrow |
| 7 | r_brow | right eyebrow |
| 8 | l_ear | left ear |
| 9 | r_ear | right ear |
| 10 | mouth | area between lips |
| 11 | u_lip | upper lip |
| 12 | l_lip | lower lip |
| 13 | hair | |
| 14 | hat | |
| 15 | ear_r | earring |
| 16 | neck_l | necklace |
| 17 | neck | |
| 18 | cloth | clothing |
```python
import torch
from torch import nn
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation
from PIL import Image
import matplotlib.pyplot as plt
import requests
# convenience expression for automatically determining device
device = (
"cuda"
# Device for NVIDIA or AMD GPUs
if torch.cuda.is_available()
else "mps"
# Device for Apple Silicon (Metal Performance Shaders)
if torch.backends.mps.is_available()
else "cpu"
)
# load models
image_processor = SegformerImageProcessor.from_pretrained("jonathandinu/face-parsing")
model = SegformerForSemanticSegmentation.from_pretrained("jonathandinu/face-parsing")
model.to(device)
# expects a PIL.Image or torch.Tensor
url = "https://images.unsplash.com/photo-1539571696357-5a69c17a67c6"
image = Image.open(requests.get(url, stream=True).raw)
# run inference on image
inputs = image_processor(images=image, return_tensors="pt").to(device)
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, ~height/4, ~width/4)
# resize output to match input image dimensions
upsampled_logits = nn.functional.interpolate(logits,
size=image.size[::-1], # H x W
mode='bilinear',
align_corners=False)
# get label masks
labels = upsampled_logits.argmax(dim=1)[0]
# move to CPU to visualize in matplotlib
labels_viz = labels.cpu().numpy()
plt.imshow(labels_viz)
plt.show()
```
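As a small follow-up sketch (not part of the original card), a single class from the table above can be isolated as a binary mask, e.g. `skin` with id 1:
```python
# boolean mask for the "skin" class (id 1 in the label table above)
skin_mask = labels == 1
coverage = skin_mask.float().mean().item()
print(f"skin covers {coverage:.1%} of the image")
```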
## Usage in the browser (Transformers.js)
```js
import {
pipeline,
env,
} from "https://cdn.jsdelivr.net/npm/@xenova/[email protected]";
// important to prevent errors since the model files are likely remote on HF hub
env.allowLocalModels = false;
// instantiate image segmentation pipeline with pretrained face parsing model
const model = await pipeline("image-segmentation", "jonathandinu/face-parsing");
// image to segment (here the same Unsplash photo used in the Python example above)
const url = "https://images.unsplash.com/photo-1539571696357-5a69c17a67c6";
// async inference since it could take a few seconds
const output = await model(url);
// each label is a separate mask object
// [
// { score: null, label: 'background', mask: transformers.js RawImage { ... }}
// { score: null, label: 'hair', mask: transformers.js RawImage { ... }}
// ...
// ]
for (const m of output) {
console.log(`Found ${m.label}`);
m.mask.save(`${m.label}.png`);
}
```
### p5.js
Since [p5.js](https://p5js.org/) uses an animation loop abstraction, we need to take care when loading the model and making predictions.
```js
// ...
// asynchronously load transformers.js and instantiate model
async function preload() {
// load transformers.js library with a dynamic import
const { pipeline, env } = await import(
"https://cdn.jsdelivr.net/npm/@xenova/[email protected]"
);
// important to prevent errors since the model files are remote on HF hub
env.allowLocalModels = false;
// instantiate image segmentation pipeline with pretrained face parsing model
model = await pipeline("image-segmentation", "jonathandinu/face-parsing");
print("face-parsing model loaded");
}
// ...
```
[full p5.js example](https://editor.p5js.org/jonathan.ai/sketches/wZn15Dvgh)
### Model Description
- **Developed by:** [Jonathan Dinu](https://twitter.com/jonathandinu)
- **Model type:** Transformer-based semantic segmentation image model
- **License:** non-commercial research and educational purposes
- **Resources for more information:** Transformers docs on [Segformer](https://huggingface.co/docs/transformers/model_doc/segformer) and/or the [original research paper](https://arxiv.org/abs/2105.15203).
## Limitations and Bias
### Bias
While the capabilities of computer vision models are impressive, they can also reinforce or exacerbate social biases. The [CelebAMask-HQ](https://github.com/switchablenorms/CelebAMask-HQ) dataset used for fine-tuning is large but not necessarily perfectly diverse or representative. Also, they are images of... just celebrities.
| [
"background",
"skin",
"nose",
"eye_g",
"l_eye",
"r_eye",
"l_brow",
"r_brow",
"l_ear",
"r_ear",
"mouth",
"u_lip",
"l_lip",
"hair",
"hat",
"ear_r",
"neck_l",
"neck",
"cloth"
] |
facebook/mask2former-swin-tiny-coco-instance |
# Mask2Former
Mask2Former model trained on COCO instance segmentation (tiny-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency, by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for instance segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on COCO instance segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-tiny-coco-instance")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-tiny-coco-instance")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_instance_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_instance_map = result["segmentation"]
```
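Assuming the post-processed result also carries a `segments_info` list (as described in the Transformers Mask2Former docs), the detected instances can be listed roughly as below; the exact keys are an assumption:
```python
# each entry pairs a segment id in `predicted_instance_map` with a class id and a score
for segment in result["segments_info"]:
    label = model.config.id2label[segment["label_id"]]
    print(f"instance {segment['id']}: {label} (score {segment['score']:.2f})")
```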
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"person",
"bicycle",
"car",
"motorbike",
"aeroplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"sofa",
"pottedplant",
"bed",
"diningtable",
"toilet",
"tvmonitor",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
facebook/mask2former-swin-large-ade-semantic |
# Mask2Former
Mask2Former model trained on ADE20k semantic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency, by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on ADE20k semantic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-ade-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-large-ade-semantic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
```
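A quick way to inspect the result (an illustrative addition, not from the original card) is to plot the class-id map directly:
```python
import matplotlib.pyplot as plt

# `predicted_semantic_map` is a (height, width) tensor of ADE20k class ids
plt.imshow(predicted_semantic_map.cpu().numpy())
plt.axis("off")
plt.show()
```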
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
isjackwild/segformer-b0-finetuned-segments-skin-hair-clothing | # Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"background",
"skin",
"hair",
"clothing"
] |
facebook/detr-resnet-101-panoptic |
# DETR (End-to-End Object Detection) model with ResNet-101 backbone
DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).
Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and an MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.
The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
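The bipartite matching step can be illustrated with a toy example; the sketch below uses SciPy's Hungarian solver on a made-up cost matrix rather than DETR's actual matcher, where each entry would combine the classification cost with the L1 and generalized IoU box costs:
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# toy cost matrix: rows = object queries, columns = (padded) ground-truth annotations
cost = np.array([
    [0.2, 0.9, 0.8],
    [0.7, 0.1, 0.9],
    [0.8, 0.8, 0.3],
])
query_idx, target_idx = linear_sum_assignment(cost)  # Hungarian algorithm
# optimal one-to-one matching, here [(0, 0), (1, 1), (2, 2)]
print(list(zip(query_idx.tolist(), target_idx.tolist())))
```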
DETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs.
## Intended uses & limitations
You can use the raw model for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models.
### How to use
Here is how to use this model:
```python
import io
import requests
import torch
import numpy
from PIL import Image
from transformers import DetrFeatureExtractor, DetrForSegmentation
from transformers.models.detr.feature_extraction_detr import rgb_to_id
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-101-panoptic')
model = DetrForSegmentation.from_pretrained('facebook/detr-resnet-101-panoptic')
# prepare inputs for the model
inputs = feature_extractor(images=image, return_tensors="pt")
# forward pass
outputs = model(**inputs)
# use the `post_process_panoptic` method of `DetrFeatureExtractor` to convert to COCO format
processed_sizes = torch.as_tensor(inputs["pixel_values"].shape[-2:]).unsqueeze(0)
result = feature_extractor.post_process_panoptic(outputs, processed_sizes)[0]
# the segmentation is stored in a special-format png
panoptic_seg = Image.open(io.BytesIO(result["png_string"]))
panoptic_seg = numpy.array(panoptic_seg, dtype=numpy.uint8)
# retrieve the ids corresponding to each mask
panoptic_seg_id = rgb_to_id(panoptic_seg)
```
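As a small follow-up (not part of the original card), each distinct value in `panoptic_seg_id` corresponds to one predicted segment, so counting the unique ids gives the number of segments in the image:
```python
# count the predicted panoptic segments (things and stuff alike)
num_segments = len(numpy.unique(panoptic_seg_id))
print(f"{num_segments} panoptic segments predicted")
```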
Currently, both the feature extractor and model support PyTorch.
## Training data
The DETR model was trained on [COCO 2017 panoptic](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/detr/blob/master/datasets/coco_panoptic.py).
Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
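For illustration only, the resizing and normalization described above can be approximated with torchvision (the reference implementation linked above is the authoritative pipeline; `max_size` requires torchvision ≥ 0.9):
```python
import torchvision.transforms as T

# resize so the shorter side is 800 px while capping the longer side at 1333 px,
# then normalize with the ImageNet mean and standard deviation
transform = T.Compose([
    T.Resize(800, max_size=1333),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
pixel_values = transform(image)  # `image` is a PIL.Image, as in the usage snippet above
```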
### Training
The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).
## Evaluation results
This model achieves the following results on COCO 2017 validation: a box AP (average precision) of **40.1**, a segmentation AP (average precision) of **33** and a PQ (panoptic quality) of **45.1**.
For more details regarding evaluation results, we refer to table 5 of the original paper.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2005-12872,
author = {Nicolas Carion and
Francisco Massa and
Gabriel Synnaeve and
Nicolas Usunier and
Alexander Kirillov and
Sergey Zagoruyko},
title = {End-to-End Object Detection with Transformers},
journal = {CoRR},
volume = {abs/2005.12872},
year = {2020},
url = {https://arxiv.org/abs/2005.12872},
archivePrefix = {arXiv},
eprint = {2005.12872},
timestamp = {Thu, 28 May 2020 17:38:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
"n/a",
"person",
"traffic light",
"cardboard",
"carpet",
"ceiling-other",
"ceiling-tile",
"cloth",
"clothes",
"clouds",
"counter",
"cupboard",
"curtain",
"fire hydrant",
"desk-stuff",
"dirt",
"door-stuff",
"fence",
"floor-marble",
"floor-other",
"floor-stone",
"floor-tile",
"floor-wood",
"flower",
"street sign",
"fog",
"food-other",
"fruit",
"furniture-other",
"grass",
"gravel",
"ground-other",
"hill",
"house",
"leaves",
"stop sign",
"light",
"mat",
"metal",
"mirror-stuff",
"moss",
"mountain",
"mud",
"napkin",
"net",
"paper",
"parking meter",
"pavement",
"pillow",
"plant-other",
"plastic",
"platform",
"playingfield",
"railing",
"railroad",
"river",
"road",
"bench",
"rock",
"roof",
"rug",
"salad",
"sand",
"sea",
"shelf",
"sky-other",
"skyscraper",
"snow",
"bird",
"solid-other",
"stairs",
"stone",
"straw",
"structural-other",
"table",
"tent",
"textile-other",
"towel",
"tree",
"cat",
"vegetable",
"wall-brick",
"wall-concrete",
"wall-other",
"wall-panel",
"wall-stone",
"wall-tile",
"wall-wood",
"water-other",
"waterdrops",
"dog",
"window-blind",
"window-other",
"wood",
"label_183",
"label_184",
"label_185",
"label_186",
"label_187",
"label_188",
"label_189",
"horse",
"label_190",
"label_191",
"label_192",
"label_193",
"label_194",
"label_195",
"label_196",
"label_197",
"label_198",
"label_199",
"bicycle",
"sheep",
"label_200",
"label_201",
"label_202",
"label_203",
"label_204",
"label_205",
"label_206",
"label_207",
"label_208",
"label_209",
"cow",
"label_210",
"label_211",
"label_212",
"label_213",
"label_214",
"label_215",
"label_216",
"label_217",
"label_218",
"label_219",
"elephant",
"label_220",
"label_221",
"label_222",
"label_223",
"label_224",
"label_225",
"label_226",
"label_227",
"label_228",
"label_229",
"bear",
"label_230",
"label_231",
"label_232",
"label_233",
"label_234",
"label_235",
"label_236",
"label_237",
"label_238",
"label_239",
"zebra",
"label_240",
"label_241",
"label_242",
"label_243",
"label_244",
"label_245",
"label_246",
"label_247",
"label_248",
"label_249",
"giraffe",
"hat",
"backpack",
"umbrella",
"shoe",
"car",
"eye glasses",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"motorcycle",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"plate",
"wine glass",
"cup",
"fork",
"knife",
"airplane",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"bus",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"mirror",
"dining table",
"window",
"desk",
"train",
"toilet",
"door",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"truck",
"toaster",
"sink",
"refrigerator",
"blender",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"boat",
"toothbrush",
"hair brush",
"banner",
"blanket",
"branch",
"bridge",
"building-other",
"bush",
"cabinet",
"cage"
] |
facebook/detr-resnet-50-dc5-panoptic |
# DETR (End-to-End Object Detection) model with ResNet-50 backbone (dilated C5 stage)
DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).
Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and an MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.
The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
DETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs.
## Intended uses & limitations
You can use the raw model for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models.
### How to use
Here is how to use this model:
```python
from transformers import DetrFeatureExtractor, DetrForSegmentation
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-50-dc5-panoptic')
model = DetrForSegmentation.from_pretrained('facebook/detr-resnet-50-dc5-panoptic')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts COCO classes, bounding boxes, and masks
logits = outputs.logits
bboxes = outputs.pred_boxes
masks = outputs.pred_masks
```
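The raw outputs can be converted to COCO panoptic format in the same way as the other DETR panoptic checkpoints on this page; a sketch:
```python
import io
import torch
import numpy
from transformers.models.detr.feature_extraction_detr import rgb_to_id

processed_sizes = torch.as_tensor(inputs["pixel_values"].shape[-2:]).unsqueeze(0)
result = feature_extractor.post_process_panoptic(outputs, processed_sizes)[0]
# the segmentation is stored in a special-format png
panoptic_seg = Image.open(io.BytesIO(result["png_string"]))
panoptic_seg_id = rgb_to_id(numpy.array(panoptic_seg, dtype=numpy.uint8))
```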
Currently, both the feature extractor and model support PyTorch.
## Training data
The DETR model was trained on [COCO 2017 panoptic](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/detr/blob/master/datasets/coco_panoptic.py).
Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
### Training
The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).
## Evaluation results
This model achieves the following results on COCO 2017 validation: a box AP (average precision) of **40.2**, a segmentation AP (average precision) of **31.9** and a PQ (panoptic quality) of **44.6**.
For more details regarding evaluation results, we refer to table 5 of the original paper.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2005-12872,
author = {Nicolas Carion and
Francisco Massa and
Gabriel Synnaeve and
Nicolas Usunier and
Alexander Kirillov and
Sergey Zagoruyko},
title = {End-to-End Object Detection with Transformers},
journal = {CoRR},
volume = {abs/2005.12872},
year = {2020},
url = {https://arxiv.org/abs/2005.12872},
archivePrefix = {arXiv},
eprint = {2005.12872},
timestamp = {Thu, 28 May 2020 17:38:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
"n/a",
"person",
"traffic light",
"cardboard",
"carpet",
"ceiling-other",
"ceiling-tile",
"cloth",
"clothes",
"clouds",
"counter",
"cupboard",
"curtain",
"fire hydrant",
"desk-stuff",
"dirt",
"door-stuff",
"fence",
"floor-marble",
"floor-other",
"floor-stone",
"floor-tile",
"floor-wood",
"flower",
"street sign",
"fog",
"food-other",
"fruit",
"furniture-other",
"grass",
"gravel",
"ground-other",
"hill",
"house",
"leaves",
"stop sign",
"light",
"mat",
"metal",
"mirror-stuff",
"moss",
"mountain",
"mud",
"napkin",
"net",
"paper",
"parking meter",
"pavement",
"pillow",
"plant-other",
"plastic",
"platform",
"playingfield",
"railing",
"railroad",
"river",
"road",
"bench",
"rock",
"roof",
"rug",
"salad",
"sand",
"sea",
"shelf",
"sky-other",
"skyscraper",
"snow",
"bird",
"solid-other",
"stairs",
"stone",
"straw",
"structural-other",
"table",
"tent",
"textile-other",
"towel",
"tree",
"cat",
"vegetable",
"wall-brick",
"wall-concrete",
"wall-other",
"wall-panel",
"wall-stone",
"wall-tile",
"wall-wood",
"water-other",
"waterdrops",
"dog",
"window-blind",
"window-other",
"wood",
"label_183",
"label_184",
"label_185",
"label_186",
"label_187",
"label_188",
"label_189",
"horse",
"label_190",
"label_191",
"label_192",
"label_193",
"label_194",
"label_195",
"label_196",
"label_197",
"label_198",
"label_199",
"bicycle",
"sheep",
"label_200",
"label_201",
"label_202",
"label_203",
"label_204",
"label_205",
"label_206",
"label_207",
"label_208",
"label_209",
"cow",
"label_210",
"label_211",
"label_212",
"label_213",
"label_214",
"label_215",
"label_216",
"label_217",
"label_218",
"label_219",
"elephant",
"label_220",
"label_221",
"label_222",
"label_223",
"label_224",
"label_225",
"label_226",
"label_227",
"label_228",
"label_229",
"bear",
"label_230",
"label_231",
"label_232",
"label_233",
"label_234",
"label_235",
"label_236",
"label_237",
"label_238",
"label_239",
"zebra",
"label_240",
"label_241",
"label_242",
"label_243",
"label_244",
"label_245",
"label_246",
"label_247",
"label_248",
"label_249",
"giraffe",
"hat",
"backpack",
"umbrella",
"shoe",
"car",
"eye glasses",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"motorcycle",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"plate",
"wine glass",
"cup",
"fork",
"knife",
"airplane",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"bus",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"mirror",
"dining table",
"window",
"desk",
"train",
"toilet",
"door",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"truck",
"toaster",
"sink",
"refrigerator",
"blender",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"boat",
"toothbrush",
"hair brush",
"banner",
"blanket",
"branch",
"bridge",
"building-other",
"bush",
"cabinet",
"cage"
] |
facebook/detr-resnet-50-panoptic |
# DETR (End-to-End Object Detection) model with ResNet-50 backbone
DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).
Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and an MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.
The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
DETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs.

## Intended uses & limitations
You can use the raw model for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models.
### How to use
Here is how to use this model:
```python
import io
import requests
from PIL import Image
import torch
import numpy
from transformers import DetrFeatureExtractor, DetrForSegmentation
from transformers.models.detr.feature_extraction_detr import rgb_to_id
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50-panoptic")
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")
# prepare image for the model
inputs = feature_extractor(images=image, return_tensors="pt")
# forward pass
outputs = model(**inputs)
# use the `post_process_panoptic` method of `DetrFeatureExtractor` to convert to COCO format
processed_sizes = torch.as_tensor(inputs["pixel_values"].shape[-2:]).unsqueeze(0)
result = feature_extractor.post_process_panoptic(outputs, processed_sizes)[0]
# the segmentation is stored in a special-format png
panoptic_seg = Image.open(io.BytesIO(result["png_string"]))
panoptic_seg = numpy.array(panoptic_seg, dtype=numpy.uint8)
# retrieve the ids corresponding to each mask
panoptic_seg_id = rgb_to_id(panoptic_seg)
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The DETR model was trained on [COCO 2017 panoptic](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/detr/blob/master/datasets/coco_panoptic.py).
Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
### Training
The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).
## Evaluation results
This model achieves the following results on COCO 2017 validation: a box AP (average precision) of **38.8**, a segmentation AP (average precision) of **31.1** and a PQ (panoptic quality) of **43.4**.
For more details regarding evaluation results, we refer to table 5 of the original paper.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2005-12872,
author = {Nicolas Carion and
Francisco Massa and
Gabriel Synnaeve and
Nicolas Usunier and
Alexander Kirillov and
Sergey Zagoruyko},
title = {End-to-End Object Detection with Transformers},
journal = {CoRR},
volume = {abs/2005.12872},
year = {2020},
url = {https://arxiv.org/abs/2005.12872},
archivePrefix = {arXiv},
eprint = {2005.12872},
timestamp = {Thu, 28 May 2020 17:38:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
"n/a",
"person",
"traffic light",
"cardboard",
"carpet",
"ceiling-other",
"ceiling-tile",
"cloth",
"clothes",
"clouds",
"counter",
"cupboard",
"curtain",
"fire hydrant",
"desk-stuff",
"dirt",
"door-stuff",
"fence",
"floor-marble",
"floor-other",
"floor-stone",
"floor-tile",
"floor-wood",
"flower",
"street sign",
"fog",
"food-other",
"fruit",
"furniture-other",
"grass",
"gravel",
"ground-other",
"hill",
"house",
"leaves",
"stop sign",
"light",
"mat",
"metal",
"mirror-stuff",
"moss",
"mountain",
"mud",
"napkin",
"net",
"paper",
"parking meter",
"pavement",
"pillow",
"plant-other",
"plastic",
"platform",
"playingfield",
"railing",
"railroad",
"river",
"road",
"bench",
"rock",
"roof",
"rug",
"salad",
"sand",
"sea",
"shelf",
"sky-other",
"skyscraper",
"snow",
"bird",
"solid-other",
"stairs",
"stone",
"straw",
"structural-other",
"table",
"tent",
"textile-other",
"towel",
"tree",
"cat",
"vegetable",
"wall-brick",
"wall-concrete",
"wall-other",
"wall-panel",
"wall-stone",
"wall-tile",
"wall-wood",
"water-other",
"waterdrops",
"dog",
"window-blind",
"window-other",
"wood",
"label_183",
"label_184",
"label_185",
"label_186",
"label_187",
"label_188",
"label_189",
"horse",
"label_190",
"label_191",
"label_192",
"label_193",
"label_194",
"label_195",
"label_196",
"label_197",
"label_198",
"label_199",
"bicycle",
"sheep",
"label_200",
"label_201",
"label_202",
"label_203",
"label_204",
"label_205",
"label_206",
"label_207",
"label_208",
"label_209",
"cow",
"label_210",
"label_211",
"label_212",
"label_213",
"label_214",
"label_215",
"label_216",
"label_217",
"label_218",
"label_219",
"elephant",
"label_220",
"label_221",
"label_222",
"label_223",
"label_224",
"label_225",
"label_226",
"label_227",
"label_228",
"label_229",
"bear",
"label_230",
"label_231",
"label_232",
"label_233",
"label_234",
"label_235",
"label_236",
"label_237",
"label_238",
"label_239",
"zebra",
"label_240",
"label_241",
"label_242",
"label_243",
"label_244",
"label_245",
"label_246",
"label_247",
"label_248",
"label_249",
"giraffe",
"hat",
"backpack",
"umbrella",
"shoe",
"car",
"eye glasses",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"motorcycle",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"plate",
"wine glass",
"cup",
"fork",
"knife",
"airplane",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"bus",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"mirror",
"dining table",
"window",
"desk",
"train",
"toilet",
"door",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"truck",
"toaster",
"sink",
"refrigerator",
"blender",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"boat",
"toothbrush",
"hair brush",
"banner",
"blanket",
"branch",
"bridge",
"building-other",
"bush",
"cabinet",
"cage"
] |
facebook/maskformer-swin-base-ade |
# MaskFormer
MaskFormer model trained on ADE20k semantic segmentation (base-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-base-ade")
inputs = feature_extractor(images=image, return_tensors="pt")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_semantic_map = feature_extractor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road, route",
"bed",
"window ",
"grass",
"cabinet",
"sidewalk, pavement",
"person",
"earth, ground",
"door",
"table",
"mountain, mount",
"plant",
"curtain",
"chair",
"car",
"water",
"painting, picture",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock, stone",
"wardrobe, closet, press",
"lamp",
"tub",
"rail",
"cushion",
"base, pedestal, stand",
"box",
"column, pillar",
"signboard, sign",
"chest of drawers, chest, bureau, dresser",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator, icebox",
"grandstand, covered stand",
"path",
"stairs",
"runway",
"case, display case, showcase, vitrine",
"pool table, billiard table, snooker table",
"pillow",
"screen door, screen",
"stairway, staircase",
"river",
"bridge, span",
"bookcase",
"blind, screen",
"coffee table",
"toilet, can, commode, crapper, pot, potty, stool, throne",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm, palm tree",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel, hut, hutch, shack, shanty",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning, sunshade, sunblind",
"street lamp",
"booth",
"tv",
"plane",
"dirt track",
"clothes",
"pole",
"land, ground, soil",
"bannister, banister, balustrade, balusters, handrail",
"escalator, moving staircase, moving stairway",
"ottoman, pouf, pouffe, puff, hassock",
"bottle",
"buffet, counter, sideboard",
"poster, posting, placard, notice, bill, card",
"stage",
"van",
"ship",
"fountain",
"conveyer belt, conveyor belt, conveyer, conveyor, transporter",
"canopy",
"washer, automatic washer, washing machine",
"plaything, toy",
"pool",
"stool",
"barrel, cask",
"basket, handbasket",
"falls",
"tent",
"bag",
"minibike, motorbike",
"cradle",
"oven",
"ball",
"food, solid food",
"step, stair",
"tank, storage tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket, cover",
"sculpture",
"hood, exhaust hood",
"sconce",
"vase",
"traffic light",
"tray",
"trash can",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass, drinking glass",
"clock",
"flag"
] |
facebook/maskformer-swin-base-coco |
# MaskFormer
MaskFormer model trained on COCO panoptic segmentation (base-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
# load MaskFormer fine-tuned on COCO panoptic segmentation
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-base-coco")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
result = feature_extractor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_panoptic_map = result["segmentation"]
```
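As an illustrative follow-up (not from the original card), the area covered by each predicted segment can be inspected directly from the id map:
```python
import torch

# `predicted_panoptic_map` is a (height, width) tensor of segment ids
seg_ids, counts = torch.unique(predicted_panoptic_map, return_counts=True)
for seg_id, count in zip(seg_ids.tolist(), counts.tolist()):
    print(f"segment {seg_id}: {count} pixels")
```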
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). | [
"person",
"bicycle",
"car",
"motorcycle",
"airplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"dining table",
"toilet",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush",
"banner",
"blanket",
"bridge",
"cardboard",
"counter",
"curtain",
"door-stuff",
"floor-wood",
"flower",
"fruit",
"gravel",
"house",
"light",
"mirror-stuff",
"net",
"pillow",
"platform",
"playingfield",
"railroad",
"river",
"road",
"roof",
"sand",
"sea",
"shelf",
"snow",
"stairs",
"tent",
"towel",
"wall-brick",
"wall-stone",
"wall-tile",
"wall-wood",
"water-other",
"window-blind",
"window-other",
"tree-merged",
"fence-merged",
"ceiling-merged",
"sky-other-merged",
"cabinet-merged",
"table-merged",
"floor-other-merged",
"pavement-merged",
"mountain-merged",
"grass-merged",
"dirt-merged",
"paper-merged",
"food-other-merged",
"building-other-merged",
"rock-merged",
"wall-other-merged",
"rug-merged"
] |
facebook/maskformer-swin-large-ade |
# MaskFormer
MaskFormer model trained on ADE20k semantic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerImageProcessor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processor = MaskFormerImageProcessor.from_pretrained("facebook/maskformer-swin-large-ade")
inputs = processor(images=image, return_tensors="pt")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-large-ade")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```
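The returned `predicted_semantic_map` is a `(height, width)` tensor of ADE20k class indices. As a minimal follow-up sketch (assuming the variables from the snippet above are still in scope), you can translate those indices into human-readable labels via `model.config.id2label`:
```python
import torch

# list the ADE20k classes that appear anywhere in the predicted map
predicted_classes = torch.unique(predicted_semantic_map)
for class_id in predicted_classes.tolist():
    print(class_id, model.config.id2label[class_id])
```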
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road, route",
"bed",
"window ",
"grass",
"cabinet",
"sidewalk, pavement",
"person",
"earth, ground",
"door",
"table",
"mountain, mount",
"plant",
"curtain",
"chair",
"car",
"water",
"painting, picture",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock, stone",
"wardrobe, closet, press",
"lamp",
"tub",
"rail",
"cushion",
"base, pedestal, stand",
"box",
"column, pillar",
"signboard, sign",
"chest of drawers, chest, bureau, dresser",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator, icebox",
"grandstand, covered stand",
"path",
"stairs",
"runway",
"case, display case, showcase, vitrine",
"pool table, billiard table, snooker table",
"pillow",
"screen door, screen",
"stairway, staircase",
"river",
"bridge, span",
"bookcase",
"blind, screen",
"coffee table",
"toilet, can, commode, crapper, pot, potty, stool, throne",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm, palm tree",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel, hut, hutch, shack, shanty",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning, sunshade, sunblind",
"street lamp",
"booth",
"tv",
"plane",
"dirt track",
"clothes",
"pole",
"land, ground, soil",
"bannister, banister, balustrade, balusters, handrail",
"escalator, moving staircase, moving stairway",
"ottoman, pouf, pouffe, puff, hassock",
"bottle",
"buffet, counter, sideboard",
"poster, posting, placard, notice, bill, card",
"stage",
"van",
"ship",
"fountain",
"conveyer belt, conveyor belt, conveyer, conveyor, transporter",
"canopy",
"washer, automatic washer, washing machine",
"plaything, toy",
"pool",
"stool",
"barrel, cask",
"basket, handbasket",
"falls",
"tent",
"bag",
"minibike, motorbike",
"cradle",
"oven",
"ball",
"food, solid food",
"step, stair",
"tank, storage tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket, cover",
"sculpture",
"hood, exhaust hood",
"sconce",
"vase",
"traffic light",
"tray",
"trash can",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass, drinking glass",
"clock",
"flag"
] |
facebook/maskformer-swin-large-coco |
# MaskFormer
MaskFormer model trained on COCO panoptic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerImageProcessor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
# load MaskFormer fine-tuned on COCO panoptic segmentation
processor = MaskFormerImageProcessor.from_pretrained("facebook/maskformer-swin-large-coco")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-large-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_panoptic_map = result["segmentation"]
```
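Besides the segmentation map, `post_process_panoptic_segmentation` also returns per-segment metadata under `result["segments_info"]`. A minimal sketch (reusing `result` and `model` from above; the exact dictionary keys are assumed to follow the current `transformers` output format):
```python
# each entry describes one panoptic segment in result["segmentation"]
for segment in result["segments_info"]:
    label_name = model.config.id2label[segment["label_id"]]
    print(f"segment {segment['id']}: {label_name} (score={segment['score']:.2f})")
```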
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). | [
"person",
"bicycle",
"car",
"motorcycle",
"airplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"dining table",
"toilet",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush",
"banner",
"blanket",
"bridge",
"cardboard",
"counter",
"curtain",
"door-stuff",
"floor-wood",
"flower",
"fruit",
"gravel",
"house",
"light",
"mirror-stuff",
"net",
"pillow",
"platform",
"playingfield",
"railroad",
"river",
"road",
"roof",
"sand",
"sea",
"shelf",
"snow",
"stairs",
"tent",
"towel",
"wall-brick",
"wall-stone",
"wall-tile",
"wall-wood",
"water-other",
"window-blind",
"window-other",
"tree-merged",
"fence-merged",
"ceiling-merged",
"sky-other-merged",
"cabinet-merged",
"table-merged",
"floor-other-merged",
"pavement-merged",
"mountain-merged",
"grass-merged",
"dirt-merged",
"paper-merged",
"food-other-merged",
"building-other-merged",
"rock-merged",
"wall-other-merged",
"rug-merged"
] |
facebook/maskformer-swin-small-ade |
# MaskFormer
MaskFormer model trained on ADE20k semantic segmentation (small-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-small-ade")
inputs = feature_extractor(images=image, return_tensors="pt")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-small-ade")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_semantic_map = feature_extractor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```
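To quickly inspect the result, the predicted semantic map can be displayed with matplotlib (a sketch that assumes matplotlib is installed; the map is a plain `(height, width)` tensor of ADE20k class indices):
```python
import matplotlib.pyplot as plt

# a categorical colormap keeps neighbouring class indices visually distinct
plt.imshow(predicted_semantic_map.numpy(), cmap="tab20")
plt.axis("off")
plt.show()
```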
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road, route",
"bed",
"window ",
"grass",
"cabinet",
"sidewalk, pavement",
"person",
"earth, ground",
"door",
"table",
"mountain, mount",
"plant",
"curtain",
"chair",
"car",
"water",
"painting, picture",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock, stone",
"wardrobe, closet, press",
"lamp",
"tub",
"rail",
"cushion",
"base, pedestal, stand",
"box",
"column, pillar",
"signboard, sign",
"chest of drawers, chest, bureau, dresser",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator, icebox",
"grandstand, covered stand",
"path",
"stairs",
"runway",
"case, display case, showcase, vitrine",
"pool table, billiard table, snooker table",
"pillow",
"screen door, screen",
"stairway, staircase",
"river",
"bridge, span",
"bookcase",
"blind, screen",
"coffee table",
"toilet, can, commode, crapper, pot, potty, stool, throne",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm, palm tree",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel, hut, hutch, shack, shanty",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning, sunshade, sunblind",
"street lamp",
"booth",
"tv",
"plane",
"dirt track",
"clothes",
"pole",
"land, ground, soil",
"bannister, banister, balustrade, balusters, handrail",
"escalator, moving staircase, moving stairway",
"ottoman, pouf, pouffe, puff, hassock",
"bottle",
"buffet, counter, sideboard",
"poster, posting, placard, notice, bill, card",
"stage",
"van",
"ship",
"fountain",
"conveyer belt, conveyor belt, conveyer, conveyor, transporter",
"canopy",
"washer, automatic washer, washing machine",
"plaything, toy",
"pool",
"stool",
"barrel, cask",
"basket, handbasket",
"falls",
"tent",
"bag",
"minibike, motorbike",
"cradle",
"oven",
"ball",
"food, solid food",
"step, stair",
"tank, storage tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket, cover",
"sculpture",
"hood, exhaust hood",
"sconce",
"vase",
"traffic light",
"tray",
"trash can",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass, drinking glass",
"clock",
"flag"
] |
facebook/maskformer-swin-small-coco |
# MaskFormer
MaskFormer model trained on COCO panoptic segmentation (small-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
# load MaskFormer fine-tuned on COCO panoptic segmentation
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-small-coco")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-small-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
result = feature_extractor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_panoptic_map = result["segmentation"]
```
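As a small follow-up sketch (assuming `result`, `predicted_panoptic_map` and `model` from the snippet above), you can measure how much of the image each panoptic segment covers:
```python
total_pixels = predicted_panoptic_map.numel()
for segment in result["segments_info"]:
    # pixels whose segment id matches this segment
    area = (predicted_panoptic_map == segment["id"]).sum().item() / total_pixels
    label_name = model.config.id2label[segment["label_id"]]
    print(f"{label_name}: {area:.1%} of the image")
```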
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). | [
"person",
"bicycle",
"car",
"motorcycle",
"airplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"dining table",
"toilet",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush",
"banner",
"blanket",
"bridge",
"cardboard",
"counter",
"curtain",
"door-stuff",
"floor-wood",
"flower",
"fruit",
"gravel",
"house",
"light",
"mirror-stuff",
"net",
"pillow",
"platform",
"playingfield",
"railroad",
"river",
"road",
"roof",
"sand",
"sea",
"shelf",
"snow",
"stairs",
"tent",
"towel",
"wall-brick",
"wall-stone",
"wall-tile",
"wall-wood",
"water-other",
"window-blind",
"window-other",
"tree-merged",
"fence-merged",
"ceiling-merged",
"sky-other-merged",
"cabinet-merged",
"table-merged",
"floor-other-merged",
"pavement-merged",
"mountain-merged",
"grass-merged",
"dirt-merged",
"paper-merged",
"food-other-merged",
"building-other-merged",
"rock-merged",
"wall-other-merged",
"rug-merged"
] |
facebook/maskformer-swin-tiny-ade |
# MaskFormer
MaskFormer model trained on ADE20k semantic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-tiny-ade")
inputs = feature_extractor(images=image, return_tensors="pt")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-tiny-ade")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_semantic_map = feature_extractor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```
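The feature extractor also accepts a list of images, so several pictures can be segmented in a single forward pass. A minimal sketch (assuming `feature_extractor`, `model` and `image` from above; the same image is simply passed twice for illustration):
```python
import torch

batch = feature_extractor(images=[image, image], return_tensors="pt")
with torch.no_grad():
    batch_outputs = model(**batch)

# one (height, width) semantic map per image in the batch
semantic_maps = feature_extractor.post_process_semantic_segmentation(
    batch_outputs, target_sizes=[image.size[::-1]] * 2
)
print(len(semantic_maps), semantic_maps[0].shape)
```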
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road, route",
"bed",
"window ",
"grass",
"cabinet",
"sidewalk, pavement",
"person",
"earth, ground",
"door",
"table",
"mountain, mount",
"plant",
"curtain",
"chair",
"car",
"water",
"painting, picture",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock, stone",
"wardrobe, closet, press",
"lamp",
"tub",
"rail",
"cushion",
"base, pedestal, stand",
"box",
"column, pillar",
"signboard, sign",
"chest of drawers, chest, bureau, dresser",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator, icebox",
"grandstand, covered stand",
"path",
"stairs",
"runway",
"case, display case, showcase, vitrine",
"pool table, billiard table, snooker table",
"pillow",
"screen door, screen",
"stairway, staircase",
"river",
"bridge, span",
"bookcase",
"blind, screen",
"coffee table",
"toilet, can, commode, crapper, pot, potty, stool, throne",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm, palm tree",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel, hut, hutch, shack, shanty",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning, sunshade, sunblind",
"street lamp",
"booth",
"tv",
"plane",
"dirt track",
"clothes",
"pole",
"land, ground, soil",
"bannister, banister, balustrade, balusters, handrail",
"escalator, moving staircase, moving stairway",
"ottoman, pouf, pouffe, puff, hassock",
"bottle",
"buffet, counter, sideboard",
"poster, posting, placard, notice, bill, card",
"stage",
"van",
"ship",
"fountain",
"conveyer belt, conveyor belt, conveyer, conveyor, transporter",
"canopy",
"washer, automatic washer, washing machine",
"plaything, toy",
"pool",
"stool",
"barrel, cask",
"basket, handbasket",
"falls",
"tent",
"bag",
"minibike, motorbike",
"cradle",
"oven",
"ball",
"food, solid food",
"step, stair",
"tank, storage tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket, cover",
"sculpture",
"hood, exhaust hood",
"sconce",
"vase",
"traffic light",
"tray",
"trash can",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass, drinking glass",
"clock",
"flag"
] |
facebook/maskformer-swin-tiny-coco |
# MaskFormer
MaskFormer model trained on COCO panoptic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
# load MaskFormer fine-tuned on COCO panoptic segmentation
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-tiny-coco")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-tiny-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
result = feature_extractor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_panoptic_map = result["segmentation"]
```
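To save the panoptic map for later inspection, it can be converted to a PIL image. A minimal sketch (assuming `predicted_panoptic_map` from above; casting to 8-bit is safe as long as fewer than 256 segments are predicted):
```python
import numpy as np
from PIL import Image

panoptic_array = predicted_panoptic_map.numpy().astype(np.uint8)
Image.fromarray(panoptic_array).save("panoptic_map.png")
```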
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). | [
"person",
"bicycle",
"car",
"motorcycle",
"airplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"dining table",
"toilet",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush",
"banner",
"blanket",
"bridge",
"cardboard",
"counter",
"curtain",
"door-stuff",
"floor-wood",
"flower",
"fruit",
"gravel",
"house",
"light",
"mirror-stuff",
"net",
"pillow",
"platform",
"playingfield",
"railroad",
"river",
"road",
"roof",
"sand",
"sea",
"shelf",
"snow",
"stairs",
"tent",
"towel",
"wall-brick",
"wall-stone",
"wall-tile",
"wall-wood",
"water-other",
"window-blind",
"window-other",
"tree-merged",
"fence-merged",
"ceiling-merged",
"sky-other-merged",
"cabinet-merged",
"table-merged",
"floor-other-merged",
"pavement-merged",
"mountain-merged",
"grass-merged",
"dirt-merged",
"paper-merged",
"food-other-merged",
"building-other-merged",
"rock-merged",
"wall-other-merged",
"rug-merged"
] |
microsoft/beit-base-finetuned-ade-640-640 |
# BEiT (base-sized model, fine-tuned on ADE20k)
BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on [ADE20k](http://sceneparsing.csail.mit.edu/) (an important benchmark for semantic segmentation of images) at resolution 640x640. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit).
Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches.
Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: for semantic segmentation, one can just add one of the decode heads available in the [mmseg library](https://github.com/open-mmlab/mmsegmentation) for example, and fine-tune the model in a supervised fashion on annotated images. This is what the authors did: they fine-tuned BEiT with an UperHead segmentation decode head, allowing it to obtain SOTA results on important benchmarks such as ADE20k and CityScapes.
## Intended uses & limitations
You can use the raw model for semantic segmentation of images. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model for semantic segmentation:
```python
from transformers import BeitFeatureExtractor, BeitForSemanticSegmentation
from datasets import load_dataset
from PIL import Image
# load ADE20k image
ds = load_dataset("hf-internal-testing/fixtures_ade20k", split="test")
image = Image.open(ds[0]['file'])
feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-finetuned-ade-640-640')
model = BeitForSemanticSegmentation.from_pretrained('microsoft/beit-base-finetuned-ade-640-640')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# logits are of shape (batch_size, num_labels, height/4, width/4)
logits = outputs.logits
```
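The logits are four times smaller than the input image, so for a per-pixel prediction at the original resolution they need to be upsampled first. A minimal sketch (assuming `logits` and `image` from the snippet above):
```python
import torch

# rescale the logits to the original (height, width) and take the most likely class per pixel
upsampled_logits = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled_logits.argmax(dim=1)[0]  # (height, width) ADE20k class indices
print(segmentation.shape)
```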
Currently, both the feature extractor and model support PyTorch.
## Training data
This BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ADE20k](http://sceneparsing.csail.mit.edu/), a dataset consisting of thousands of annotated images and 150 classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py).
Images are cropped and padded to the same resolution (640x640) and normalized across the RGB channels with the ImageNet mean and standard deviation.
### Pretraining
For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254).
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-08254,
author = {Hangbo Bao and
Li Dong and
Furu Wei},
title = {BEiT: {BERT} Pre-Training of Image Transformers},
journal = {CoRR},
volume = {abs/2106.08254},
year = {2021},
url = {https://arxiv.org/abs/2106.08254},
archivePrefix = {arXiv},
eprint = {2106.08254},
timestamp = {Tue, 29 Jun 2021 16:55:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
microsoft/beit-large-finetuned-ade-640-640 |
# BEiT (large-sized model, fine-tuned on ADE20k)
BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on [ADE20k](https://huggingface.co/datasets/scene_parse_150) (an important benchmark for semantic segmentation of images) at resolution 640x640. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit).
Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches.
Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: for semantic segmentation, one can just add one of the decode heads available in the [mmseg library](https://github.com/open-mmlab/mmsegmentation) for example, and fine-tune the model in a supervised fashion on annotated images. This is what the authors did: they fine-tuned BEiT with an UperHead segmentation decode head, allowing it to obtain SOTA results on important benchmarks such as ADE20k and CityScapes.
## Intended uses & limitations
You can use the raw model for semantic segmentation of images. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model for semantic segmentation:
```python
from transformers import BeitFeatureExtractor, BeitForSemanticSegmentation
from datasets import load_dataset
from PIL import Image
# load ADE20k image
ds = load_dataset("hf-internal-testing/fixtures_ade20k", split="test")
image = Image.open(ds[0]['file'])
feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-finetuned-ade-640-640')
model = BeitForSemanticSegmentation.from_pretrained('microsoft/beit-large-finetuned-ade-640-640')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# logits are of shape (batch_size, num_labels, height/4, width/4)
logits = outputs.logits
```
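As a quick sanity check (a sketch assuming `logits` and `model` from the snippet above), you can list which ADE20k classes the model predicts anywhere in the image:
```python
# most likely class per (downsampled) pixel, then the set of unique classes
predicted_classes = logits.argmax(dim=1).unique()
for class_id in predicted_classes.tolist():
    print(class_id, model.config.id2label[class_id])
```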
Currently, both the feature extractor and model support PyTorch.
## Training data
This BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ADE20k](http://sceneparsing.csail.mit.edu/), a dataset consisting of thousands of annotated images and 150 classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py).
Images are cropped and padded to the same resolution (640x640) and normalized across the RGB channels with the ImageNet mean and standard deviation.
### Pretraining
For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254).
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-08254,
author = {Hangbo Bao and
Li Dong and
Furu Wei},
title = {BEiT: {BERT} Pre-Training of Image Transformers},
journal = {CoRR},
volume = {abs/2106.08254},
year = {2021},
url = {https://arxiv.org/abs/2106.08254},
archivePrefix = {arXiv},
eprint = {2106.08254},
timestamp = {Tue, 29 Jun 2021 16:55:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
Intel/dpt-large-ade |
# DPT (large-sized model) fine-tuned on ADE20k
The model is used for semantic segmentation of input images, as shown in the table below:
| Input Image | Output Segmented Image |
| --- | --- |
|  | |
## Model description
The MiDaS 3.0 based Dense Prediction Transformer (DPT) model was trained on ADE20k for semantic segmentation. It was introduced in the paper
[Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by Ranftl et al. and first released in [this repository](https://github.com/isl-org/DPT).
The MiDaS v3.0 DPT uses the Vision Transformer (ViT) as backbone and adds a neck + head on top for semantic segmentation.

Disclaimer: The team releasing DPT did not write a model card for this model so this model card has been written by the Hugging Face and the Intel AI Community team.
## Results:
According to the authors, at the time of publication, when applied to semantic segmentation, dense vision transformers set a new state of the art on
**ADE20K with 49.02% mIoU.**
The authors further show that the architecture can be fine-tuned on smaller datasets such as NYUv2, KITTI, and Pascal Context, where it also sets a new state of the art. The original models are available at the
[Intel DPT GitHub Repository](https://github.com/intel-isl/DPT).
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=dpt) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import DPTImageProcessor, DPTForSemanticSegmentation
from PIL import Image
import requests
import torch
url = "http://images.cocodataset.org/val2017/000000026204.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = DPTImageProcessor.from_pretrained("Intel/dpt-large-ade")
model = DPTForSemanticSegmentation.from_pretrained("Intel/dpt-large-ade")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
print(logits.shape)
prediction = torch.nn.functional.interpolate(
logits,
size=image.size[::-1], # Reverse the size of the original image (width, height)
mode="bicubic",
align_corners=False
)
# Convert logits to class predictions
prediction = torch.argmax(prediction, dim=1) + 1
# Squeeze the prediction tensor to remove dimensions
prediction = prediction.squeeze()
# Move the prediction tensor to the CPU and convert it to a numpy array
prediction = prediction.cpu().numpy()
# Convert the prediction array to an image
predicted_seg = Image.fromarray(prediction.squeeze().astype('uint8'))
# Define the ADE20K palette
adepallete = [0,0,0,120,120,120,180,120,120,6,230,230,80,50,50,4,200,3,120,120,80,140,140,140,204,5,255,230,230,230,4,250,7,224,5,255,235,255,7,150,5,61,120,120,70,8,255,51,255,6,82,143,255,140,204,255,4,255,51,7,204,70,3,0,102,200,61,230,250,255,6,51,11,102,255,255,7,71,255,9,224,9,7,230,220,220,220,255,9,92,112,9,255,8,255,214,7,255,224,255,184,6,10,255,71,255,41,10,7,255,255,224,255,8,102,8,255,255,61,6,255,194,7,255,122,8,0,255,20,255,8,41,255,5,153,6,51,255,235,12,255,160,150,20,0,163,255,140,140,140,250,10,15,20,255,0,31,255,0,255,31,0,255,224,0,153,255,0,0,0,255,255,71,0,0,235,255,0,173,255,31,0,255,11,200,200,255,82,0,0,255,245,0,61,255,0,255,112,0,255,133,255,0,0,255,163,0,255,102,0,194,255,0,0,143,255,51,255,0,0,82,255,0,255,41,0,255,173,10,0,255,173,255,0,0,255,153,255,92,0,255,0,255,255,0,245,255,0,102,255,173,0,255,0,20,255,184,184,0,31,255,0,255,61,0,71,255,255,0,204,0,255,194,0,255,82,0,10,255,0,112,255,51,0,255,0,194,255,0,122,255,0,255,163,255,153,0,0,255,10,255,112,0,143,255,0,82,0,255,163,255,0,255,235,0,8,184,170,133,0,255,0,255,92,184,0,255,255,0,31,0,184,255,0,214,255,255,0,112,92,255,0,0,224,255,112,224,255,70,184,160,163,0,255,153,0,255,71,255,0,255,0,163,255,204,0,255,0,143,0,255,235,133,255,0,255,0,235,245,0,255,255,0,122,255,245,0,10,190,212,214,255,0,0,204,255,20,0,255,255,255,0,0,153,255,0,41,255,0,255,204,41,0,255,41,255,0,173,0,255,0,245,255,71,0,255,122,0,255,0,255,184,0,92,255,184,255,0,0,133,255,255,214,0,25,194,194,102,255,0,92,0,255]
# Apply the color map to the predicted segmentation image
predicted_seg.putpalette(adepallete)
# Blend the original image and the predicted segmentation image
out = Image.blend(image, predicted_seg.convert("RGB"), alpha=0.5)
out
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/dpt).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2103-13413,
author = {Ren{\'{e}} Ranftl and
Alexey Bochkovskiy and
Vladlen Koltun},
title = {Vision Transformers for Dense Prediction},
journal = {CoRR},
volume = {abs/2103.13413},
year = {2021},
url = {https://arxiv.org/abs/2103.13413},
eprinttype = {arXiv},
eprint = {2103.13413},
timestamp = {Wed, 07 Apr 2021 15:31:46 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2103-13413.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
nvidia/segformer-b0-finetuned-ade-512-512 |
# SegFormer (b0-sized) model fine-tuned on ADE20k
SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to segment an image of the COCO 2017 dataset into one of the 150 ADE20k classes:
```python
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation
from PIL import Image
import requests
processor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
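Instead of handling the low-resolution logits yourself, the image processor can rescale them back to the original image size. A minimal sketch (assuming `processor`, `outputs` and `image` from above; `post_process_semantic_segmentation` is the helper exposed by recent `transformers` versions):
```python
# one (height, width) tensor of ADE20k class indices per input image
predicted_segmentation = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(predicted_segmentation.shape)  # matches the original image resolution
```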
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
nvidia/segformer-b0-finetuned-cityscapes-1024-1024 |
# SegFormer (b0-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 1024x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to segment an image of the COCO 2017 dataset into one of the 19 Cityscapes classes:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-1024-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-1024-1024")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
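To turn the logits into a per-pixel Cityscapes prediction, they can be upsampled to the input resolution and argmaxed; the class frequencies then give a rough summary of the scene. A minimal sketch (assuming `logits`, `image` and `model` from the snippet above):
```python
import torch

upsampled_logits = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_seg = upsampled_logits.argmax(dim=1)[0]  # one Cityscapes class index (0-18) per pixel

# fraction of pixels assigned to each predicted class
for class_id, count in zip(*pred_seg.unique(return_counts=True)):
    print(f"{model.config.id2label[class_id.item()]}: {count.item() / pred_seg.numel():.1%}")
```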
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
nvidia/segformer-b0-finetuned-cityscapes-512-1024 |
# SegFormer (b0-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 512x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to segment an image of the COCO 2017 dataset into one of the 19 Cityscapes classes:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-512-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-512-1024")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
nvidia/segformer-b0-finetuned-cityscapes-640-1280 |
# SegFormer (b0-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 640x1280. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to segment an image of the COCO 2017 dataset into one of the 19 Cityscapes classes:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-640-1280")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-640-1280")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
nvidia/segformer-b0-finetuned-cityscapes-768-768 |
# SegFormer (b0-sized) model fine-tuned on Cityscapes
SegFormer model fine-tuned on Cityscapes at resolution 768x768. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-768-768")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-768-768")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
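To see which of the 19 Cityscapes classes the model actually found in the image, you can map the predicted indices through `model.config.id2label`; a small sketch continuing from the snippet above:
```python
import torch
predicted = logits.argmax(dim=1)[0]  # (height/4, width/4) class indices
# Translate the class ids that occur in the prediction into human-readable label names.
found_labels = [model.config.id2label[idx] for idx in torch.unique(predicted).tolist()]
print(found_labels)
```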
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
nvidia/segformer-b1-finetuned-cityscapes-1024-1024 |
# SegFormer (b1-sized) model fine-tuned on Cityscapes
SegFormer model fine-tuned on Cityscapes at resolution 1024x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b1-finetuned-cityscapes-1024-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b1-finetuned-cityscapes-1024-1024")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
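If you only need ready-made masks rather than raw logits, the high-level `pipeline` API wraps the same pre- and post-processing; a sketch, assuming a recent `transformers` release that ships the `image-segmentation` pipeline:
```python
from transformers import pipeline
segmenter = pipeline("image-segmentation", model="nvidia/segformer-b1-finetuned-cityscapes-1024-1024")
results = segmenter("http://images.cocodataset.org/val2017/000000039769.jpg")
for result in results:
    # Each entry pairs a label name with a PIL mask for that class.
    print(result["label"], result["mask"].size)
```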
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
nvidia/segformer-b2-finetuned-ade-512-512 |
# SegFormer (b2-sized) model fine-tuned on ADE20k
SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b2-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b2-finetuned-ade-512-512")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
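For pure inference it is worth putting the model in eval mode and disabling gradient tracking; a minimal variation on the snippet above (no additional dependencies assumed):
```python
import torch
model.eval()  # switch off dropout and other training-time behaviour
with torch.no_grad():  # gradients are not needed for prediction, which saves memory
    outputs = model(**inputs)
logits = outputs.logits  # same (batch_size, num_labels, height/4, width/4) shape as before
```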
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
nvidia/segformer-b2-finetuned-cityscapes-1024-1024 |
# SegFormer (b2-sized) model fine-tuned on Cityscapes
SegFormer model fine-tuned on Cityscapes at resolution 1024x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b2-finetuned-cityscapes-1024-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b2-finetuned-cityscapes-1024-1024")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
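To turn a prediction into something you can look at, each class index can be painted with a colour; a sketch that uses an arbitrary random palette (not the official Cityscapes colours) on top of the `logits` and `image` from the snippet above:
```python
import numpy as np
import torch
from PIL import Image
upsampled = torch.nn.functional.interpolate(
    logits.detach(), size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0].numpy()  # (height, width) class indices
# One arbitrary colour per label; substitute your own palette if you have one.
palette = np.random.default_rng(0).integers(0, 255, size=(model.config.num_labels, 3), dtype=np.uint8)
Image.fromarray(palette[segmentation]).save("segmentation.png")
```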
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
nvidia/segformer-b3-finetuned-ade-512-512 |
# SegFormer (b3-sized) model fine-tuned on ADE20k
SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b3-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b3-finetuned-ade-512-512")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
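The feature extractor also accepts a list of images, so several inputs can be segmented in a single forward pass; a small sketch that reuses the same COCO image twice purely for illustration:
```python
images = [image, image]  # any list of PIL images works here
inputs = feature_extractor(images=images, return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (2, 150, height/4, width/4) for this 150-class ADE20k checkpoint
```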
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
nvidia/segformer-b3-finetuned-cityscapes-1024-1024 |
# SegFormer (b3-sized) model fine-tuned on Cityscapes
SegFormer model fine-tuned on Cityscapes at resolution 1024x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b3-finetuned-cityscapes-1024-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b3-finetuned-cityscapes-1024-1024")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
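A quick sanity check on a prediction is to look at how much of the image each class covers; a minimal sketch building on the `logits` from the snippet above:
```python
import torch
predicted = logits.argmax(dim=1)[0]  # (height/4, width/4) class indices
class_ids, pixel_counts = torch.unique(predicted, return_counts=True)
for class_id, count in zip(class_ids.tolist(), pixel_counts.tolist()):
    # Fraction of pixels assigned to each predicted Cityscapes class.
    print(f"{model.config.id2label[class_id]}: {count / predicted.numel():.2%}")
```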
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
nvidia/segformer-b4-finetuned-ade-512-512 |
# SegFormer (b4-sized) model fine-tuned on ADE20k
SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b4-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b4-finetuned-ade-512-512")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
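Besides the hard argmax, the logits can be turned into per-pixel confidences; a minimal sketch that looks at how sure the model is about its winning class:
```python
import torch
probabilities = logits.softmax(dim=1)             # per-pixel class probabilities
confidence, predicted = probabilities.max(dim=1)  # winning probability and class index per pixel
print(f"mean confidence: {confidence.mean().item():.3f}")
print(f"least confident pixel: {confidence.min().item():.3f}")
```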
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
nvidia/segformer-b4-finetuned-cityscapes-1024-1024 |
# SegFormer (b4-sized) model fine-tuned on Cityscapes
SegFormer model fine-tuned on Cityscapes at resolution 1024x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation
from PIL import Image
import requests
processor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b4-finetuned-cityscapes-1024-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b4-finetuned-cityscapes-1024-1024")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
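Recent `transformers` versions also expose a helper on the image processor that does the upsampling and argmax in one call; a sketch, assuming your installed version already ships `post_process_semantic_segmentation`:
```python
# target_sizes expects (height, width); PIL's image.size is (width, height).
segmentation_maps = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)
predicted_map = segmentation_maps[0]  # torch.Tensor of shape (height, width) with class indices
```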
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
nvidia/segformer-b5-finetuned-ade-640-640 |
# SegFormer (b5-sized) model fine-tuned on ADE20k
SegFormer model fine-tuned on ADE20k at resolution 640x640. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b5-finetuned-ade-640-640")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b5-finetuned-ade-640-640")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
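On a GPU the same snippet only needs the model and the inputs moved to the device; a minimal sketch (assuming a CUDA-enabled PyTorch build):
```python
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
inputs = {name: tensor.to(device) for name, tensor in inputs.items()}
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits.cpu()  # bring the predictions back to the CPU for post-processing
```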
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
nvidia/segformer-b5-finetuned-cityscapes-1024-1024 |
# SegFormer (b5-sized) model fine-tuned on Cityscapes
SegFormer model fine-tuned on Cityscapes at resolution 1024x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b5-finetuned-cityscapes-1024-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b5-finetuned-cityscapes-1024-1024")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
tobiasc/segformer-b0-finetuned-segments-sidewalk |
# SegFormer (b0-sized) model fine-tuned on Segments.ai sidewalk-semantic
SegFormer model fine-tuned on [Segments.ai](https://segments.ai) [`sidewalk-semantic`](https://huggingface.co/datasets/segments/sidewalk-semantic). It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the sidewalk dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("segments-tobias/segformer-b0-finetuned-segments-sidewalk")
url = "https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
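For street scenes like the sidewalk dataset it is often useful to blend the predicted classes over the photo; a sketch with an arbitrary random colour map (substitute the dataset's own palette if you have one), reusing `logits`, `image` and `model` from the snippet above:
```python
import numpy as np
import torch
from PIL import Image
upsampled = torch.nn.functional.interpolate(
    logits.detach(), size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0].numpy()
# Arbitrary colours, one per class, blended 50/50 with the original image.
palette = np.random.default_rng(0).integers(0, 255, size=(model.config.num_labels, 3), dtype=np.uint8)
overlay = Image.blend(image.convert("RGB"), Image.fromarray(palette[segmentation]), alpha=0.5)
overlay.save("overlay.png")
```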
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
tobiasc/segformer-b3-finetuned-segments-sidewalk |
# segformer-b3-finetuned-segments-sidewalk
This model is a fine-tuned version of [nvidia/mit-b3](https://huggingface.co/nvidia/mit-b3) on the [`sidewalk-semantic`](https://huggingface.co/datasets/segments/sidewalk-semantic) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8527
- Miou: 0.4345
- Macc: 0.5079
- Overall Accuracy: 0.8871
- Per Category Iou: [nan, 0.8382620833593052, 0.8876413942052827, 0.6261839847460975, 0.6590417473673477, 0.48228357004057837, 0.0, 0.6202905105623743, 0.748344409080285, 0.39096811362981676, 0.8848513296576286, 0.2415092028297553, 0.0, 0.07068982339740462, 0.41356382978723405, 0.6474134903246308, 0.0, 0.3062052505966587, 0.7704161510118073, 0.16108765491481541, 0.49752934863906867, 0.4734664813860761, 0.09820294554789893, nan, 0.17153699720635862, 0.514555863370054, 0.4660696051735875, 0.08826901031715705, 0.8991007829081079, 0.829742650939299, 0.9612781430019607, 0.01112666737555973, 0.1861992251927429, 0.391388886866003, 0.0]
- Per Category Accuracy: [nan, 0.9255583122183136, 0.9555184973850358, 0.8927561553139153, 0.7130378697969978, 0.6275811980710011, 0.0, 0.7474676455043131, 0.8545937449541465, 0.43523520560447965, 0.9672661630501664, 0.28627436744473084, 0.0, 0.0707036205718747, 0.47675012774655084, 0.7689381524189783, 0.0, 0.31600985221674877, 0.9278457312029238, 0.2055231456928555, 0.6363063556709445, 0.5255962863991213, 0.10240946878962942, nan, 0.30514996921453075, 0.6575213496395762, 0.6054551483999336, 0.08830275229357798, 0.9550074747938649, 0.8984159398975186, 0.9823971352874257, 0.013025497748978224, 0.3256981066248004, 0.49491941043060034, 0.0]
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head. Here the encoder is the ImageNet-1k pre-trained `nvidia/mit-b3` checkpoint, on top of which a decode head was added and the whole model fine-tuned on the downstream dataset.
## Intended uses & limitations
The model can be used for semantic segmentation of street-level imagery into the `sidewalk-semantic` label classes; the results above were obtained on that dataset's evaluation set, so performance on other domains is untested.
## Training and evaluation data
The model was fine-tuned and evaluated on the [`sidewalk-semantic`](https://huggingface.co/datasets/segments/sidewalk-semantic) dataset from [Segments.ai](https://segments.ai).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` follows the list):
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
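These settings map onto `transformers.TrainingArguments` roughly as follows; a minimal sketch (the output directory name is made up, and dataset loading, model instantiation and metric computation are omitted):
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="segformer-b3-finetuned-segments-sidewalk",  # hypothetical output path
    learning_rate=6e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=200,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
# A transformers.Trainer built with these arguments (plus model, datasets and a
# compute_metrics function) would then reproduce the schedule described above.
```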
### Training results
| Training Loss | Epoch | Step | Validation Loss | Miou | Macc | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:----------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 0.4111 | 5.0 | 250 | 0.5342 | 0.3203 | 0.3895 | 0.8534 | [nan, 0.7411544992329885, 0.8587185188919024, 0.5322704558305212, 0.6145803724062279, 0.4207354824823325, 0.0, 0.4207652960849892, 0.6330214639515686, 0.0, 0.8090628889518269, 0.0, 0.0, 0.0, 0.0, 0.5525831345173927, 0.0, 0.0, 0.7449180731329554, 0.0, 0.39030048997684846, 0.5341813036240857, 0.0, nan, 0.0, 0.33603046089798805, 0.0, 0.0, 0.8611153164212525, 0.7580460497843906, 0.9307216449484303, 0.0, 0.12255543837545918, 0.30973651706611804, 0.0] | [nan, 0.865566426722594, 0.9394823497202754, 0.7339862219054845, 0.6960293899277608, 0.579233048631689, 0.0, 0.5226808772686938, 0.8148925583341846, 0.0, 0.954498711658196, 0.0, 0.0, 0.0, 0.0, 0.7453393323599813, 0.0, 0.0, 0.8609332075296946, 0.0, 0.5752897519263941, 0.6257335170644275, 0.0, nan, 0.0, 0.48320796165623753, 0.0, 0.0, 0.9563707209678979, 0.8591391181347248, 0.9690236728180618, 0.0, 0.23234437690377469, 0.43908949309871237, 0.0] |
| 0.2527 | 10.0 | 500 | 0.5899 | 0.3521 | 0.4258 | 0.8567 | [nan, 0.7536144931874272, 0.8666514611419747, 0.5791278186302583, 0.5507597043116981, 0.38697553330878387, 0.0, 0.49981379939131665, 0.6547462641660816, 0.006951340615690168, 0.8411064971463371, 0.21915505349651998, 0.0, 0.0, 0.0, 0.5704538365564567, 0.0, 0.0, 0.7601855085224487, 0.12506138175041864, 0.39942757047955846, 0.4668252406895441, 0.0, nan, 0.1030902538148915, 0.3805134719351324, 0.3639179515418502, 0.0, 0.8767798800448732, 0.7800121144818535, 0.9401348565379605, 0.00018008110081004338, 0.16755112790045706, 0.3264804931974313, 0.0] | [nan, 0.90406121258153, 0.933431363952898, 0.7264726392177598, 0.5740020955021516, 0.6563755737609668, 0.0, 0.6039363626224962, 0.8605186066359769, 0.0072522755864722855, 0.9522222704681134, 0.25318546484190657, 0.0, 0.0, 0.0, 0.7265874080033372, 0.0, 0.0, 0.9034801649669348, 0.15050382604742785, 0.6282930136175867, 0.4977144779061467, 0.0, nan, 0.1478142316826458, 0.4757332103391217, 0.43831868678494446, 0.0, 0.9461766367056283, 0.8685344399584078, 0.9696726615409282, 0.00019363628190676414, 0.39697811413925904, 0.4314488757452496, 0.0] |
| 0.1643 | 15.0 | 750 | 0.5756 | 0.3745 | 0.4534 | 0.8670 | [nan, 0.7726733036696652, 0.8671375594955328, 0.6103086102682944, 0.6314757371793478, 0.4273275344315441, 0.0, 0.5317600409405491, 0.6720224116289428, 0.16158774132109774, 0.8523694222801956, 0.24038155802861685, 0.0, 0.0, 0.04680851063829787, 0.5899459811865512, 0.0, 0.0, 0.7737178234025645, 0.14913933159903917, 0.4521741438458425, 0.5380504294958312, 0.0, nan, 0.14005003894540563, 0.40247802412573747, 0.41014102702120786, 0.0, 0.8822285387940414, 0.7982006290565458, 0.9485248204807992, 0.0010217644126931384, 0.12182141082818915, 0.3359618308006764, 0.0] | [nan, 0.8685068344016257, 0.9339337963085826, 0.7830275791780654, 0.71311646057369, 0.6411881935971181, 0.0, 0.7043771304992945, 0.8750572549898341, 0.18416833172993907, 0.9605602195211583, 0.301884052709914, 0.0, 0.0, 0.047777210015329585, 0.7549536664580913, 0.0, 0.0, 0.9068618510561295, 0.22672929767406622, 0.5668210000093578, 0.6053490157566916, 0.0, nan, 0.26095083120767, 0.5263161648629628, 0.5264190570939037, 0.0, 0.9540034951620896, 0.8931918202171408, 0.9742561443961733, 0.0012759247861356422, 0.2862606175274747, 0.465761930571415, 0.0] |
| 0.1302 | 20.0 | 1000 | 0.6021 | 0.3949 | 0.4802 | 0.8703 | [nan, 0.7801307689107726, 0.8786287731596124, 0.5996414476192669, 0.5791044393247451, 0.40685088294894184, 0.0, 0.5532316603013168, 0.7004119209771223, 0.3567117426846971, 0.8682022390275189, 0.4354632088736135, 0.0, 0.08566271525440265, 0.0, 0.592928092042186, 0.0, 0.07216748768472907, 0.7775334326155094, 0.16241710128130835, 0.46182139479806994, 0.562496456296332, 0.0, nan, 0.17592232145836345, 0.4180433534862313, 0.4040778498609824, 0.0, 0.8871669760617459, 0.8059650048666752, 0.9507085299921569, 0.0116151761673367, 0.16524860560484375, 0.34088472074456944, 0.0] | [nan, 0.901160937374861, 0.9425971578567806, 0.7984110745840901, 0.6312022008440196, 0.6127889140665853, 0.0, 0.6839893129548904, 0.8679197408614445, 0.4606921729692395, 0.9554783385950772, 0.5059353105601336, 0.0, 0.08568361471650239, 0.0, 0.7677860214733371, 0.0, 0.07216748768472907, 0.9086920613558305, 0.26755814834457153, 0.6342091828512193, 0.6617058325161462, 0.0, nan, 0.347809833758466, 0.541995549384712, 0.5421986403581496, 0.0, 0.9485582664128994, 0.9007181197365832, 0.9752496697792675, 0.013976390204770367, 0.39040296284368586, 0.42825081431510703, 0.0] |
| 0.1124 | 25.0 | 1250 | 0.5783 | 0.4085 | 0.4818 | 0.8809 | [nan, 0.8123818380472958, 0.8869254012115516, 0.5989965500806077, 0.6513288286982387, 0.45923979621249245, 0.0, 0.551056327882726, 0.7019146834355392, 0.2950008215576734, 0.8706733575298916, 0.3601874581566615, 0.0, 0.10517468206402572, 0.08712413261372398, 0.6136850006388144, 0.0, 0.2600985221674877, 0.7849825834204975, 0.17919511788917702, 0.45289730566932423, 0.5903637402399543, 0.0, nan, 0.18690435558822757, 0.42362687815353783, 0.43259719089833193, 0.0, 0.8841707465292419, 0.8032936112469397, 0.952030831872504, 0.008140849441390317, 0.16554455213884192, 0.3617462711649899, 0.0] | [nan, 0.9178324592492587, 0.9561686622912909, 0.7680310658482571, 0.7215460770544782, 0.5924548254023589, 0.0, 0.6491584679315913, 0.8452550030151549, 0.35430079676361037, 0.9581720479074639, 0.410135404944277, 0.0, 0.10532350931980092, 0.11548288196218702, 0.763316547977315, 0.0, 0.2600985221674877, 0.9177799037685564, 0.22825214031366012, 0.572804752898559, 0.6994932257437348, 0.0, nan, 0.31308822235904654, 0.5407402476367994, 0.5353727961089925, 0.0, 0.9583768797437656, 0.8894811289823983, 0.976516152184038, 0.010158989218608448, 0.379761952685748, 0.458744875997832, 0.0] |
| 0.1 | 30.0 | 1500 | 0.6125 | 0.4071 | 0.4817 | 0.8777 | [nan, 0.7976347312880722, 0.8842065126488408, 0.6220522211975981, 0.5992989007197456, 0.4754131699628208, 0.0, 0.5620206554196702, 0.7103054176260091, 0.3001608040201005, 0.8696414262339918, 0.2710134279595442, 0.0, 0.10693402202514375, 0.19945219123505975, 0.6229581109493774, 0.0, 0.21330049261083744, 0.7784639440974739, 0.1842071699891868, 0.4662422580117327, 0.5517361225824782, 0.001549819657348963, nan, 0.17199259716224552, 0.43358794468966694, 0.4268464617063853, 0.0, 0.8891718707035294, 0.8054920070330026, 0.9535609872146814, 0.01007383935063937, 0.16253665133576994, 0.3658318614584579, 0.0] | [nan, 0.8697241860632949, 0.9524319715036934, 0.8257718568242948, 0.7468530628299254, 0.5881267793852769, 0.0, 0.7164141181490659, 0.8437754352203041, 0.3683613310639138, 0.9597225061081064, 0.31468036446800013, 0.0, 0.10708012101102762, 0.20464997445068983, 0.7651242017023728, 0.0, 0.21330049261083744, 0.9302847679052355, 0.2516781574361694, 0.5984553495582629, 0.5925001383659759, 0.0015499506833873467, nan, 0.27588178379804734, 0.5562888715598076, 0.518736527938982, 0.0, 0.9534904946715259, 0.896063924459724, 0.9793106212730868, 0.011784150870325931, 0.3741529460703407, 0.47874361308587277, 0.0] |
| 0.0886 | 35.0 | 1750 | 0.6327 | 0.4115 | 0.4892 | 0.8822 | [nan, 0.8188467619727383, 0.8891141466002311, 0.6466411212625193, 0.6192450697021801, 0.4878651026475247, 0.0, 0.5804609572704323, 0.6873373994573425, 0.24242875689020368, 0.8707606811583432, 0.23605331403413546, 0.0, 0.13050222997866978, 0.2175902389425521, 0.6145514015738078, 0.0, 0.21711822660098523, 0.7803908730722577, 0.17679611946673174, 0.4549480658658346, 0.5467616324171395, 0.03352848701685911, nan, 0.19210202055245182, 0.44554925412112634, 0.43457869634340224, 0.06254767353165523, 0.8901464405497997, 0.8074494955970959, 0.9551576666105007, 0.009091384084852917, 0.16846681832699967, 0.3645371672657186, 0.0] | [nan, 0.9119479474792671, 0.9590241346815159, 0.845415986574404, 0.6953594791245139, 0.6061356109464877, 0.0, 0.7276180593606199, 0.892360619111798, 0.28098867756974766, 0.9616378091517278, 0.2841688750136131, 0.0, 0.1313555186883966, 0.21870209504343383, 0.7725733241957431, 0.0, 0.21711822660098523, 0.9161171536509721, 0.21953178271081142, 0.5994171169644333, 0.6121438495259369, 0.034859799915457235, nan, 0.3531093323951095, 0.6003076440268559, 0.6043221135245676, 0.06269113149847094, 0.9560401237295135, 0.884834427780536, 0.9792357012514029, 0.010829800623785451, 0.34680568415120167, 0.46030641401411304, 0.0] |
| 0.0795 | 40.0 | 2000 | 0.6240 | 0.4282 | 0.5017 | 0.8835 | [nan, 0.8168639361241289, 0.8876591799999074, 0.6570827724213207, 0.6202745367509233, 0.48734716072991435, 0.0, 0.5833200793037147, 0.7249773695346732, 0.31780959887896304, 0.8734250949568915, 0.2279957413675295, 0.0, 0.19478847928505513, 0.2588001983143282, 0.6266940289672047, 0.0, 0.3150246305418719, 0.7870743183835168, 0.18024107181885737, 0.48180217328687497, 0.5880553963585522, 0.042404523149135905, nan, 0.17844859516376527, 0.45068592007174485, 0.44004742517113327, 0.19134396355353075, 0.892022331516544, 0.8143712718909341, 0.9551036492731949, 0.016888403579096854, 0.15958069694966476, 0.36017381107545093, 0.0] | [nan, 0.8991722677575189, 0.9610711923215693, 0.8649585814233277, 0.7118098889111815, 0.594659810586253, 0.0, 0.7184642464033051, 0.8603538440753031, 0.3580502725770246, 0.9623214298952487, 0.26042763277307873, 0.0, 0.1956914218795745, 0.26673479816044965, 0.788603835801476, 0.0, 0.3150246305418719, 0.9230146256606502, 0.2499204485188272, 0.6278490409879275, 0.6625473814771242, 0.04480766521065239, nan, 0.29663998592664265, 0.6117266104950834, 0.5436080252031172, 0.1926605504587156, 0.9509771523653007, 0.887874399303051, 0.9819309132416605, 0.02064301076756039, 0.34012318344672116, 0.46386756263254, 0.0] |
| 0.0754 | 45.0 | 2250 | 0.6471 | 0.4302 | 0.5116 | 0.8840 | [nan, 0.8281984303346407, 0.8897375767546668, 0.6335678497580041, 0.6461049225195123, 0.4896718508137295, 0.0, 0.5769963172973805, 0.7160045601555046, 0.31492773499314275, 0.8789298786291031, 0.41197707824430413, 0.0, 0.19778300628229073, 0.19288119288119288, 0.6158351667955045, 0.0, 0.26785714285714285, 0.7860686941589031, 0.17655380387956127, 0.40860437517167547, 0.5549189258475934, 0.060120717954148355, nan, 0.1768935762224353, 0.45137771772158236, 0.44662611174687306, 0.24400299850074963, 0.8917308479385957, 0.8178316117483762, 0.9546822647246874, 0.0181622066651208, 0.17782411648425822, 0.3692233084050129, 0.0] | [nan, 0.9127907293988842, 0.9579732772469148, 0.8546614098408393, 0.7189306666878257, 0.617758410318982, 0.0, 0.7117038660531152, 0.8630197023070054, 0.3681886578356644, 0.9609314187010253, 0.48673176752459435, 0.0, 0.19893627403142383, 0.2049054675523761, 0.8228995957609527, 0.0, 0.26785714285714285, 0.9313026975574736, 0.22976740662171377, 0.482567055983181, 0.7372479787923986, 0.06399887276313936, nan, 0.30833846424487643, 0.6003932327823953, 0.6147681423755044, 0.24885321100917432, 0.947219534571164, 0.890392783205778, 0.9840024279813396, 0.0241042593066438, 0.3959314574024127, 0.47575603698227187, 0.0] |
| 0.0746 | 50.0 | 2500 | 0.6936 | 0.4117 | 0.4867 | 0.8749 | [nan, 0.7957936899551392, 0.8814366206724774, 0.5436114176098814, 0.6151632247714599, 0.4361122655202057, 0.0, 0.5671206613898421, 0.7141211613500584, 0.3419340943355589, 0.870823541579283, 0.1755482015278508, 0.0, 0.14690036810414178, 0.3004324599338591, 0.6098619199234538, 0.0, 0.16824615384615385, 0.7756330550603614, 0.17781881780267358, 0.4502871856554716, 0.45687245610992666, 0.06802365130029826, nan, 0.19639260088210125, 0.4534812252031405, 0.42577189666036547, 0.27414561664190196, 0.8856918914231561, 0.8034178358523514, 0.9553431034562543, 0.003146721773436032, 0.12501083138368427, 0.36698838817524204, 0.0] | [nan, 0.9180616367888037, 0.9472426408781908, 0.8013172716614175, 0.7302314913997886, 0.524968334204869, 0.0, 0.7028351702702309, 0.8539511709675187, 0.39162288166950343, 0.9630569443900208, 0.20604784550041746, 0.0, 0.14701863960183467, 0.30173735309146654, 0.7343444275597668, 0.0, 0.16834975369458127, 0.9160513108532854, 0.22030456852791877, 0.6251207408000449, 0.5084687072928094, 0.0732703959419473, nan, 0.3280191749494239, 0.6051695608345855, 0.5601890233792074, 0.28211009174311924, 0.9506023739291599, 0.8749006566683216, 0.9851772884487643, 0.003475079702076749, 0.25166727050709176, 0.48520419707741125, 0.0] |
| 0.067 | 55.0 | 2750 | 0.6778 | 0.4277 | 0.5121 | 0.8816 | [nan, 0.8194645919335458, 0.8855287302003849, 0.6053350056000855, 0.654773528870809, 0.4697667824136534, 0.0, 0.5710052174119353, 0.7267313389676074, 0.3551371282700238, 0.8755668722529796, 0.19964417520695182, 0.0, 0.13208006623484148, 0.3486218302094818, 0.6180969846096706, 0.0, 0.20360432519022828, 0.7807972584967618, 0.18003748362164762, 0.4432680689367132, 0.45560830868332836, 0.08040790777737207, nan, 0.1822721323375752, 0.45364137665335047, 0.45602216206006424, 0.36234396671289876, 0.8940119674114063, 0.8166972645181432, 0.9573128637395036, 0.03034622884202592, 0.18678678678678678, 0.3686953575810984, 0.0] | [nan, 0.9128102675762187, 0.9552788883754972, 0.8469619991264167, 0.7317413411289339, 0.5563598861193423, 0.0, 0.746876075856685, 0.8778384470140081, 0.40151459088778707, 0.9621413903500922, 0.24645151922169384, 0.0, 0.13233141407241145, 0.4039345937659683, 0.7933911385238819, 0.0, 0.31305418719211825, 0.9172193620842494, 0.246723236608834, 0.6190652452610861, 0.49203224849677785, 0.08688178103423982, nan, 0.36441199753716247, 0.5990761169332598, 0.6004808489471066, 0.39946483180428133, 0.9529255570362644, 0.9017875242386488, 0.9826782046681377, 0.03842297079549934, 0.3380298699730285, 0.48104842741150405, 0.0] |
| 0.0687 | 60.0 | 3000 | 0.6879 | 0.4291 | 0.5100 | 0.8823 | [nan, 0.8196948326057852, 0.8831657666830767, 0.6467890499563872, 0.6516417841503617, 0.4681981224281317, 0.0, 0.5880231738461575, 0.724187852815783, 0.2984189272432753, 0.8789400109991544, 0.2520251481078467, 0.013058335367341958, 0.10452562571588721, 0.3387726959319697, 0.616015263989506, 0.0, 0.29198813056379824, 0.784720416853429, 0.1792090810910177, 0.44576935641947074, 0.48202529113784476, 0.09516336506303061, nan, 0.18832282614869086, 0.45650264775637484, 0.4556005895357406, 0.2745886654478976, 0.8952007693743541, 0.8138636450290234, 0.9572388978933325, 0.010588595444700982, 0.1924104882672224, 0.35264721130282095, 0.0] | [nan, 0.9097946995213146, 0.9621478252989295, 0.8651175889100899, 0.7142492257108215, 0.566678868165708, 0.0, 0.7143145307931398, 0.867036613536233, 0.32265226078590986, 0.9605385088701248, 0.3026826877699931, 0.020373191165270373, 0.10464038255099053, 0.3765968318855391, 0.820913163096054, 0.0, 0.30295566502463056, 0.9221738131341896, 0.23635123873020683, 0.6270817065600855, 0.5335377838453821, 0.1042412286881781, nan, 0.364565924883455, 0.6191561899689979, 0.615072127342066, 0.2870795107033639, 0.9511700815454721, 0.8878140719993255, 0.9816183488199858, 0.012797283559588108, 0.43857601009084446, 0.4393736482895436, 0.0] |
| 0.0629 | 65.0 | 3250 | 0.6960 | 0.4222 | 0.4985 | 0.8831 | [nan, 0.8385937711298211, 0.8847888472425782, 0.6149328797554199, 0.6525561252288588, 0.48169461209819614, 0.0, 0.5971548536892575, 0.7135824408049566, 0.27369317672375143, 0.8823153606699299, 0.36233237512950345, 0.0, 0.14186935456382538, 0.24867603039373704, 0.6160654277501113, 0.0, 0.08275862068965517, 0.7805731162375585, 0.17752144045477705, 0.44269702931252913, 0.48059292296084216, 0.08923905090414969, nan, 0.17907819011708673, 0.4873286783174559, 0.4527498618417013, 0.22163588390501318, 0.8943575787945166, 0.8201429759960558, 0.9581956395009911, 0.019349515805194163, 0.1776665799886147, 0.3600628431614535, 0.0] | [nan, 0.9267115927398949, 0.9487909172436692, 0.8034188525406715, 0.7109193193887602, 0.6603282784265877, 0.0, 0.725064612012743, 0.8812655082760235, 0.3011914452749204, 0.9638660632870772, 0.45070243583693326, 0.0, 0.14216356006636088, 0.27593254982115484, 0.7556489178908058, 0.0, 0.08275862068965517, 0.9325111323710189, 0.2190847791499356, 0.625637495828009, 0.527886775476724, 0.09623784697759617, nan, 0.31748614653883367, 0.6121141373604427, 0.6112861327585254, 0.22477064220183487, 0.9484829229283243, 0.8975837228691066, 0.9804554182923197, 0.025625687235911233, 0.3392241321471224, 0.46249546141014647, 0.0] |
| 0.0629 | 70.0 | 3500 | 0.7101 | 0.4217 | 0.4989 | 0.8789 | [nan, 0.786640982710835, 0.8880498247990368, 0.6213814597589751, 0.6071277471550605, 0.4592909171926279, 0.0, 0.5867507688789444, 0.7333167906428527, 0.3791430524621254, 0.8814043667546686, 0.28100956352915796, 0.0, 0.0984556925025576, 0.3509064388414253, 0.6349712777519019, 0.0, 0.0, 0.7875471953847744, 0.1780650489932298, 0.4450250049891278, 0.4999114269705531, 0.08133596346637123, nan, 0.18414185986147352, 0.4677542129328365, 0.45241313162139773, 0.28850405305821664, 0.8909480603280158, 0.8200828649597152, 0.9579545152813692, 0.007145844060159359, 0.17539286131557424, 0.37038789587688453, 0.0] | [nan, 0.8598412127047438, 0.9543510233299178, 0.8082120661777665, 0.7612604902628672, 0.5904985183894021, 0.0, 0.7265619620716575, 0.8569696210790629, 0.424110116184415, 0.9631369031291932, 0.35466656986241696, 0.0, 0.09861422855469894, 0.43025038323965253, 0.8124211634536118, 0.0, 0.0, 0.9232874343190659, 0.2156072429729525, 0.6237066827758375, 0.5406547619892345, 0.08482457376356208, nan, 0.2963981000967543, 0.6442741122544078, 0.57353672691096, 0.2993119266055046, 0.9601885858498842, 0.8969635881631085, 0.9813282126850572, 0.008685970359817705, 0.3223031815681065, 0.45102269558033437, 0.0] |
| 0.056 | 75.0 | 3750 | 0.6888 | 0.4319 | 0.5074 | 0.8864 | [nan, 0.846983759179929, 0.8871265021170364, 0.6327919532904038, 0.6690289787883766, 0.4809385638926465, 0.0, 0.5929931910773564, 0.7319858245513943, 0.3873577190849818, 0.8821459096044979, 0.31863963925997724, 0.0, 0.23505840639191783, 0.3168200047180939, 0.6339963432877168, 0.0, 0.0, 0.7891815340906951, 0.16853589090364154, 0.44962094152977145, 0.5116482092488317, 0.10324211857041271, nan, 0.19139417066912298, 0.46438574150773454, 0.4679743443307121, 0.26584176977877766, 0.893033114012553, 0.8167232339927487, 0.958758389465055, 0.00683255888015518, 0.17629150606516764, 0.37230474365117394, 0.0] | [nan, 0.9383299434889024, 0.9547491546521122, 0.8273944994904098, 0.7246575916990003, 0.6112474580210331, 0.0, 0.7317551677487866, 0.879380624581915, 0.4323244283282765, 0.9640948194150409, 0.41171452426761534, 0.0, 0.23614228554698935, 0.34312723556463975, 0.7955762144552705, 0.0, 0.0, 0.9198500013278298, 0.21301613758618076, 0.6485306793405083, 0.5580917132262262, 0.11046921234324363, nan, 0.3198390359750198, 0.6043089183483272, 0.5887636102360029, 0.27102446483180426, 0.9532878705576775, 0.8931910708096411, 0.9816932688416696, 0.008160386166070774, 0.350442145377937, 0.47428658629635284, 0.0] |
| 0.0545 | 80.0 | 4000 | 0.7242 | 0.4313 | 0.5097 | 0.8839 | [nan, 0.8315018755718794, 0.8905184158955881, 0.5801625429382188, 0.6532970384376523, 0.4694179481073208, 0.0, 0.5983799840636467, 0.7235855215136249, 0.3640520350334879, 0.8784869607735561, 0.3143670199951819, 0.0, 0.2781527188584651, 0.3326551373346897, 0.6281559683282705, 0.0, 0.08645320197044334, 0.7821189057727206, 0.19111444811384393, 0.4452253857934852, 0.4994405348435919, 0.10157298545122671, nan, 0.17629709076283684, 0.46700401281623927, 0.4615519817207136, 0.2734785875281743, 0.8899163053914229, 0.8095455355998507, 0.9581430685733312, 0.005790762673569464, 0.17969789570113207, 0.36411010043900494, 0.0] | [nan, 0.9123876444791007, 0.9612296601404773, 0.8930262764661256, 0.7126551176008956, 0.5714955551682064, 0.0, 0.7523059093928652, 0.865652608026573, 0.41833789684007994, 0.9665973690927172, 0.37897048680437073, 0.0, 0.2815458182882795, 0.41773122125702605, 0.7517058490509818, 0.0, 0.08645320197044334, 0.9215057282136607, 0.24684445791347828, 0.6202401611194349, 0.5561602661167979, 0.10663660701704945, nan, 0.2891635148210045, 0.6095369648325313, 0.5805560161388382, 0.2782874617737003, 0.9568961863731891, 0.870963644368671, 0.9845664755331252, 0.007178373593543613, 0.36061350187190533, 0.46088130206223, 0.0] |
| 0.05 | 85.0 | 4250 | 0.7236 | 0.4310 | 0.5096 | 0.8865 | [nan, 0.8344804679717858, 0.891480804753714, 0.6039392215856049, 0.6561901191296589, 0.5040396418009069, 0.0, 0.5972644983662688, 0.7352912849624103, 0.4166594809002328, 0.882374306124748, 0.291759692976696, 0.0, 0.11696789594193015, 0.4100259636508888, 0.6420473687097001, 0.0, 0.0, 0.7822126517859589, 0.18499892874997023, 0.45949977357159744, 0.5246592278602004, 0.10855595092676192, nan, 0.18756695799266987, 0.4678528011435098, 0.4557543571262987, 0.2325056433408578, 0.8913224348625648, 0.8136362687377343, 0.9598605495290813, 0.008994566889922168, 0.1923180020267399, 0.3698758474475382, 0.0] | [nan, 0.9238238149259353, 0.9605341564359651, 0.8564066606895178, 0.714878329764632, 0.6240479925628958, 0.0, 0.7253836717079392, 0.8553615384868866, 0.47677545080046374, 0.96226053416674, 0.36290703161868804, 0.0, 0.11715624085098078, 0.5245273377618804, 0.8139308522789349, 0.0, 0.0, 0.9272880427065164, 0.23551026592923707, 0.5960753651336961, 0.5733261619548913, 0.11520360715795407, nan, 0.29259389568123845, 0.634985354812941, 0.5976344442602112, 0.23623853211009174, 0.9580478059949592, 0.8761671553428071, 0.9800020805814939, 0.011116797255897263, 0.39472377655220536, 0.470034782700211, 0.0] |
| 0.0483 | 90.0 | 4500 | 0.7448 | 0.4348 | 0.5119 | 0.8858 | [nan, 0.8389020217362697, 0.8904583684155554, 0.6053893552299984, 0.6609445788027536, 0.48826307798392343, 0.0, 0.5990805851530085, 0.741553407283815, 0.3904125924159313, 0.8810578364409596, 0.24072208997131173, 0.007595345830639948, 0.11408382066276804, 0.3854978354978355, 0.6358003169572107, 0.0, 0.3205665024630542, 0.7799325512458637, 0.18157179971658008, 0.44179222083868513, 0.4810432700260739, 0.10200241902970031, nan, 0.17958766620104505, 0.47953821940837715, 0.46267085062022195, 0.20652173913043478, 0.8936165310088457, 0.8196186094828226, 0.9601551959806593, 0.007783159441927215, 0.17946660884648744, 0.3712830781592127, 0.0] | [nan, 0.9268645537858738, 0.9579552943101062, 0.8624259561522487, 0.7130170885820071, 0.6134222299692057, 0.0, 0.7456444472460493, 0.8743388902252963, 0.44418954586940973, 0.9629775151789223, 0.28632881983519076, 0.00894897182025895, 0.11422855469893628, 0.45503321410321923, 0.7969369208307261, 0.0, 0.3205665024630542, 0.9184567677287768, 0.2329797711947875, 0.6319321335328264, 0.5199750799329599, 0.10694659715372692, nan, 0.30532588618172224, 0.6374674287235863, 0.6071132482175426, 0.2106269113149847, 0.9560636685684433, 0.8940191660968048, 0.9818139998320264, 0.009301457113021348, 0.3331991465721992, 0.46443061088103893, 0.0] |
| 0.0488 | 95.0 | 4750 | 0.7572 | 0.4392 | 0.5164 | 0.8870 | [nan, 0.8412265993316759, 0.8902791647105773, 0.6166091899398941, 0.6573127590169391, 0.49795139519110443, 0.0, 0.6045930992650757, 0.740872213808363, 0.3893914038172305, 0.8838233368096821, 0.33872329970362863, 0.004128819157720892, 0.1232210193407128, 0.36835222319093286, 0.6420211202135859, 0.0, 0.2602216748768473, 0.7833929304386752, 0.17934607063412256, 0.4671484042901698, 0.5449281805918343, 0.09757754723390911, nan, 0.1862480907024973, 0.4739074459454693, 0.46393408427200666, 0.20655861289106672, 0.8908646555131348, 0.8077701092850268, 0.959734031170495, 0.015509419333207602, 0.19220623899538222, 0.36528917777672343, 0.0] | [nan, 0.9329796512523355, 0.9594185059351048, 0.832704966397695, 0.7156041609282175, 0.6057294753355412, 0.0, 0.740442513492152, 0.8672541001163223, 0.4534398973827672, 0.964824509100999, 0.4003702762551276, 0.00476009139375476, 0.1235727530008783, 0.4317833418497701, 0.8025088644557671, 0.0, 0.2602216748768473, 0.9244890653768502, 0.22295628456701266, 0.6153075940114643, 0.6122502848919965, 0.10522756094124278, nan, 0.32980033424223765, 0.6388606234665348, 0.6146299673907036, 0.20948012232415902, 0.9577606974590687, 0.8682935054472558, 0.9823331908103197, 0.02047357902089197, 0.388175462608859, 0.4557849260933397, 0.0] |
| 0.0466 | 100.0 | 5000 | 0.7516 | 0.4340 | 0.5089 | 0.8868 | [nan, 0.8369914869418346, 0.8917253025027853, 0.63431934846412, 0.6595590976640465, 0.490185886416082, 0.0, 0.6019878455204862, 0.7389529158865543, 0.34824032232931906, 0.8841782288939659, 0.3149823779040495, 0.0, 0.1793690267212795, 0.3540386803185438, 0.6423088361774469, 0.0, 0.145935960591133, 0.7781632167836338, 0.18123317726357693, 0.45431638450718936, 0.5090139572607015, 0.10249373268241192, nan, 0.1875506294119916, 0.501633275054173, 0.45008636966215404, 0.17736422331940752, 0.8917030821290204, 0.8118398661365593, 0.9594706627009374, 0.014780075321537696, 0.20062550586608202, 0.37857391883524044, 0.0] | [nan, 0.9373271597386813, 0.9596797489625617, 0.8314003387051043, 0.7185675621858967, 0.5884759746673639, 0.0, 0.7444904015400207, 0.8778911710334237, 0.3858999975332396, 0.9637834569075349, 0.3974298471702908, 0.0, 0.17949155850492826, 0.397547266223812, 0.7936692390969677, 0.0, 0.145935960591133, 0.9165776142827953, 0.24282142586559588, 0.6377640831341348, 0.5628898195281933, 0.10945469916866281, nan, 0.3207406104318761, 0.6268758202255739, 0.6192450118830487, 0.17851681957186544, 0.9569449380396788, 0.8769881312587235, 0.9830475556030632, 0.01869973236699608, 0.34259221985158944, 0.47854628309223995, 0.0] |
| 0.0681 | 105.0 | 5250 | 0.7608 | 0.4243 | 0.4961 | 0.8801 | [nan, 0.8053305712022708, 0.8888831373349202, 0.6063781727951514, 0.6458484552441548, 0.4450952774354321, 0.0, 0.5835976764940738, 0.7449298281412959, 0.38801677910396126, 0.8805089961159074, 0.14255831144524309, 0.0, 0.1778948138395143, 0.3797164667393675, 0.6438507708603036, 0.0, 0.2848522167487685, 0.7757003332539172, 0.14560873446405273, 0.46351390150988186, 0.47026329896747027, 0.08670882625524723, nan, 0.16717484516436398, 0.49040240585388206, 0.4269185360094451, 0.09782193351165457, 0.8929769955183823, 0.8046204535691968, 0.9590862138793831, 0.04553666153467317, 0.1919049851539303, 0.36759942734721646, 0.0] | [nan, 0.8461725854729251, 0.9657024524747764, 0.8717211889928504, 0.7386199232908679, 0.5728516646330835, 0.0, 0.7229524174348182, 0.8661468957085944, 0.44266015441920126, 0.9636971438314745, 0.16451882237630233, 0.0, 0.17800331804430566, 0.44481349003576903, 0.8150531867346027, 0.0, 0.2848522167487685, 0.9260951906884237, 0.2249185544359421, 0.6512735360080518, 0.5153941017777545, 0.0896435113428209, nan, 0.23148473920309615, 0.6005358807082946, 0.49964074503951805, 0.09785932721712538, 0.9555801683760682, 0.8920875682663394, 0.9854006169210447, 0.0684193055373061, 0.28012828254364425, 0.47628225029862603, 0.0] |
| 0.0668 | 110.0 | 5500 | 0.7138 | 0.4340 | 0.5140 | 0.8742 | [nan, 0.7871483106350147, 0.8799748398030087, 0.6039422540580079, 0.58793837643889, 0.4164255041075429, 0.0, 0.6184209066896527, 0.7402801021253262, 0.3308593247243554, 0.8857427628712552, 0.35066959646049234, 0.0, 0.16199673226522301, 0.42935960591133004, 0.6284724323670036, 0.0, 0.3552955665024631, 0.7640465559057021, 0.1673140841039061, 0.4603793394796352, 0.4502083383450174, 0.08286035553651745, nan, 0.19144741314841254, 0.494703324736749, 0.49196363166286033, 0.21928518242740133, 0.8942953842754613, 0.8018772529737324, 0.9608524553067362, 0.025030461104976583, 0.16785196891874093, 0.3735661360500572, 0.0] | [nan, 0.8648334810431274, 0.9433503159465763, 0.7861368460577638, 0.8401580732564278, 0.456157108825751, 0.0, 0.7569977355489718, 0.8541785433012485, 0.38047312464540317, 0.9656267441330937, 0.428703670091117, 0.0, 0.1620718259002635, 0.5567194685743485, 0.8251045360189903, 0.0, 0.3552955665024631, 0.9128087725432023, 0.21700886430790212, 0.6164003697345833, 0.5046228427325222, 0.08721995209243343, nan, 0.3096138622570147, 0.6316283736234475, 0.6310175205880727, 0.22515290519877676, 0.9574614010065557, 0.8952916600312878, 0.9807011750513465, 0.036369043090988304, 0.3078378487178455, 0.47336308192615123, 0.0] |
| 0.0456 | 115.0 | 5750 | 0.7481 | 0.4396 | 0.5149 | 0.8874 | [nan, 0.8535949387776991, 0.889196790918221, 0.6590754161484988, 0.6643237184774637, 0.46255227979529023, 0.0, 0.6160656034941906, 0.7414819627132849, 0.33609977221984166, 0.881638905287202, 0.26364535016348567, 0.0, 0.11007294284111147, 0.47720425788310905, 0.6368556033975671, 0.0, 0.32869458128078816, 0.7703600738384895, 0.17442321190028753, 0.46530941552214283, 0.48260002610416075, 0.09418922868453915, nan, 0.20518864654252, 0.4743353551385976, 0.4722508031833358, 0.20610399397136397, 0.8954748076190832, 0.8187194150221221, 0.9605552926063987, 0.012601025462761798, 0.17920223292081403, 0.3762309075548745, 0.0] | [nan, 0.9413675139957597, 0.9627770101122414, 0.853864456654176, 0.7242582145309057, 0.5528162221834872, 0.0, 0.7381053284908671, 0.8687863919305888, 0.3676213029428452, 0.9679646105797591, 0.3146622136711802, 0.0, 0.11008099931687323, 0.6070516096065406, 0.8065015941122136, 0.0, 0.32869458128078816, 0.912257229374579, 0.23346465641336464, 0.6230433232027166, 0.5299729086514923, 0.09990136677469354, nan, 0.3254024100624505, 0.6366091637027598, 0.621511081633781, 0.2090978593272171, 0.9563050724169996, 0.8984035746737735, 0.9820063104609347, 0.01504138975525757, 0.32565785059646013, 0.47864626362234725, 0.0] |
| 0.0432 | 120.0 | 6000 | 0.7519 | 0.4416 | 0.5185 | 0.8876 | [nan, 0.8517831570119985, 0.8901004311397058, 0.6339355013970817, 0.6606286462755991, 0.4746063751504886, 0.0, 0.6132450026307165, 0.7426311341925447, 0.3602046617396248, 0.8859214231639748, 0.3273784162152292, 0.0, 0.15872087354977088, 0.4255713403335392, 0.6326264779996124, 0.0, 0.35557744397931546, 0.7741301715457662, 0.17043647800201933, 0.46161159879531216, 0.5113488607281433, 0.11327498751609766, nan, 0.19760381654559253, 0.47813157752711966, 0.46921250159026334, 0.1416030534351145, 0.8955479192568264, 0.8197854779969181, 0.9604275470620833, 0.010892456172159384, 0.18561124493594658, 0.3689976212003217, 0.0] | [nan, 0.9296893165774394, 0.9616835385667785, 0.87624044997203, 0.7260692029572803, 0.5797304049735634, 0.0, 0.7494101274784102, 0.8745695578102397, 0.39073484792422114, 0.9642129041755406, 0.3904962427850583, 0.0, 0.15887576851761492, 0.528104241185488, 0.8103950021354152, 0.0, 0.3556650246305419, 0.9162409381106233, 0.22253201000075765, 0.6204044413898943, 0.5625662560153721, 0.12145977173453572, nan, 0.3206086727064825, 0.6318803849592027, 0.6115348477311667, 0.14181957186544342, 0.9576213674122256, 0.8924536538299407, 0.9825164346850114, 0.013502672872248463, 0.3639547522241456, 0.4569004983240106, 0.0] |
| 0.0446 | 125.0 | 6250 | 0.7468 | 0.4334 | 0.5064 | 0.8877 | [nan, 0.8499567507325978, 0.8871076417101389, 0.6330569753090723, 0.6639770881242221, 0.4871746836767682, 0.0, 0.5980424732505424, 0.7360705192073508, 0.30519138810716817, 0.8812845049064242, 0.23256457139345144, 0.0, 0.13761825807080855, 0.4344916900496439, 0.6344221105527639, 0.0, 0.31022167487684726, 0.7799696347321634, 0.17147761834567948, 0.4735415094048958, 0.5082152629506022, 0.10032137118371719, nan, 0.19083052625766195, 0.477693792160024, 0.4774453072902102, 0.10550458715596331, 0.8982375671163275, 0.8273146135730871, 0.9607895023001171, 0.016035198543508544, 0.15227804315598747, 0.37272481048329426, 0.0] | [nan, 0.9294944628629415, 0.9603275161439091, 0.8696425971478271, 0.7134799429158917, 0.6058991342745919, 0.0, 0.7261197395153978, 0.8763951269825055, 0.32904117023113544, 0.9650643853185165, 0.2747304606672233, 0.0, 0.13769883868449304, 0.5143076136944302, 0.7674085992670063, 0.0, 0.31022167487684726, 0.9269199814674473, 0.20887946056519432, 0.6072557812618596, 0.5566839281178112, 0.10556573199943638, nan, 0.3039625296859882, 0.6508858436198338, 0.6133587575305367, 0.10550458715596331, 0.9551001306600062, 0.9014341786025424, 0.9824066792392325, 0.020100137620071783, 0.3101324423332394, 0.48336771260333516, 0.0] |
| 0.0401 | 130.0 | 6500 | 0.7766 | 0.4379 | 0.5140 | 0.8867 | [nan, 0.8468760227965516, 0.8886795707269431, 0.622437352951649, 0.6682970140214559, 0.4786959592750148, 0.0, 0.6085294389146897, 0.7427519649223919, 0.3908760790623845, 0.8822040839218181, 0.20753357844976364, 0.0, 0.17475089531512655, 0.47288964490750585, 0.6415406446381512, 0.0, 0.2750554050726422, 0.778568992850166, 0.17143968092188597, 0.46392364840506783, 0.4823894964669603, 0.09554546178978404, nan, 0.20017632982073136, 0.47654683547891147, 0.4713058003824428, 0.1655881233346022, 0.8956585893822123, 0.8232044008477167, 0.9608808597268595, 0.012288627559172788, 0.18044196123782585, 0.37141827889613904, 0.0] | [nan, 0.9354963797165556, 0.9559979333791044, 0.8707192502509636, 0.7183888437369763, 0.6083772006275057, 0.0, 0.7415955894118731, 0.866331429776549, 0.4434248501443055, 0.9662129317110005, 0.2510073692235089, 0.0, 0.17500243973846002, 0.5682166581502299, 0.7858128979072931, 0.0, 0.2751231527093596, 0.9165205505248, 0.2204712478218047, 0.6489507377535817, 0.5256828538301831, 0.10136677469353247, nan, 0.3145395373383763, 0.6396451870589802, 0.5925772398165036, 0.16628440366972477, 0.9558582744735443, 0.8949720377326676, 0.9824620341597123, 0.014785513239880775, 0.3475571300135529, 0.47879491888421727, 0.0] |
| 0.0532 | 135.0 | 6750 | 0.8100 | 0.4370 | 0.5099 | 0.8867 | [nan, 0.8418463475820702, 0.8855647993577028, 0.6407052153749961, 0.6672622261373646, 0.48550215050970236, 0.0, 0.6013553074721314, 0.7358587165510544, 0.41406543029797876, 0.8806464817122883, 0.20844846800909883, 0.0, 0.10624649381692236, 0.46624287593160896, 0.6367459896871661, 0.0, 0.2729064039408867, 0.7800250020493483, 0.16987653185041204, 0.47226725829848964, 0.5354231045094412, 0.10532085561497326, nan, 0.19529110166632935, 0.4793455617996517, 0.4643273310907372, 0.1317799847211612, 0.8929265734089717, 0.8098728542013477, 0.9610867606622594, 0.009269971902267766, 0.1905821312686735, 0.3815049812671639, 0.0] | [nan, 0.9263081557808802, 0.9609817135875093, 0.8755450316865522, 0.7097842872099934, 0.608116901981291, 0.0, 0.7151553355218178, 0.871465431167145, 0.49016995979180544, 0.9649383576369068, 0.24783097978001234, 0.0, 0.10627500731921538, 0.5434338272866632, 0.7518349671742002, 0.0, 0.2729064039408867, 0.918908888272893, 0.2238048336995227, 0.6329937167995292, 0.5943152161418457, 0.11100464985205016, nan, 0.31827777289119535, 0.6406199478859578, 0.5836235008014149, 0.13188073394495411, 0.9580930951851359, 0.8802967653698794, 0.9799166622128225, 0.011248193304333996, 0.3184117654952162, 0.4786317927561475, 0.0] |
| 0.039 | 140.0 | 7000 | 0.7955 | 0.4374 | 0.5145 | 0.8873 | [nan, 0.8453406127060666, 0.8894584400292076, 0.618765500137779, 0.6661462422914772, 0.48188110711842147, 0.0, 0.608878748711235, 0.7435697628283624, 0.3796956629902977, 0.8857966705291055, 0.3616908539636749, 0.0, 0.12437204311564161, 0.5013698630136987, 0.6370300461309403, 0.0, 0.18285784554845055, 0.7737808450225561, 0.16547070030804295, 0.47332405936901073, 0.47251187823235086, 0.09493722374379694, nan, 0.19320955193290454, 0.47309349183647703, 0.4585451464536432, 0.13724742661075104, 0.8963119205284326, 0.8287376073022066, 0.9613351708673005, 0.00971653416847346, 0.18365372022293688, 0.38471762753712496, 0.0] | [nan, 0.9325931121764209, 0.9570000189093305, 0.8775718982045564, 0.7170735817481989, 0.6105484864330951, 0.0, 0.7451100949905688, 0.8584129411105655, 0.45178716791238066, 0.9654472341160111, 0.4460376810541983, 0.0, 0.12442666146189128, 0.5610628513030148, 0.7776685239812083, 0.0, 0.183128078817734, 0.9154620178139884, 0.2112205470111372, 0.6293701931124976, 0.5103334549061737, 0.09866140622798365, nan, 0.31508927786084967, 0.6503865758791867, 0.6281711159011772, 0.13761467889908258, 0.9610215191517875, 0.9003190602429954, 0.981970520641659, 0.01171499505535923, 0.3387008037786992, 0.48129837873677234, 0.0] |
| 0.0406 | 145.0 | 7250 | 0.8306 | 0.4360 | 0.5141 | 0.8867 | [nan, 0.8435997939171356, 0.886366406157634, 0.6223465646375345, 0.6631770897769883, 0.4788596814657396, 0.0, 0.6085666309373553, 0.7410466976722848, 0.31492224002889196, 0.8837966051190714, 0.22238290725881693, 0.0, 0.13819236298949727, 0.5232347616173808, 0.6307999909800885, 0.0, 0.3076828669612175, 0.7764942343062243, 0.16667183036627153, 0.4750608982109485, 0.4864866269041335, 0.08490179473871118, nan, 0.1946730634021258, 0.47966615140417673, 0.46086619157494946, 0.12857687905379625, 0.8998584935109988, 0.8307591913787293, 0.9614240003370637, 0.006127383872241452, 0.19595372863270513, 0.37590210909466404, 0.0] | [nan, 0.9378495859578592, 0.954765284801492, 0.8295152378981893, 0.7149554091802339, 0.6165097902504213, 0.0, 0.7444147582080288, 0.8516346093644449, 0.3441624115049705, 0.9674811514482063, 0.25957454532253965, 0.0, 0.13835756806870303, 0.6645375574859479, 0.8335170783548365, 0.0, 0.308743842364532, 0.922605741887015, 0.2037881657701341, 0.6393476201715377, 0.5315353798252473, 0.08758630407214316, nan, 0.31051543671387105, 0.6595398177910493, 0.623721881390593, 0.12882262996941896, 0.9541870064066892, 0.8996719468670082, 0.9800173509043849, 0.007351263130960367, 0.3720998886249883, 0.47889753048090633, 0.0] |
| 0.037 | 150.0 | 7500 | 0.8222 | 0.4343 | 0.5077 | 0.8875 | [nan, 0.844207269702504, 0.8878295221933561, 0.6214984234657922, 0.6643742580050236, 0.48557575036716316, 0.0, 0.6097768299571183, 0.7465852256395515, 0.3695119182746879, 0.884482746916304, 0.229786147654232, 0.0, 0.10648001365753726, 0.45458553791887124, 0.6442341311464989, 0.0, 0.258520979451212, 0.7755187494699113, 0.17377147325464898, 0.4744249539706051, 0.5001041209924736, 0.08993947946915624, nan, 0.19405005327880656, 0.4817597684924271, 0.45507234290956095, 0.1162079510703364, 0.898116706658797, 0.8266099378191127, 0.9613809600564381, 0.008963162954562462, 0.1934702763543734, 0.37436200278398785, 0.0] | [nan, 0.9305498764782347, 0.9581999167444519, 0.848117198096508, 0.7216662302611518, 0.6072343268839695, 0.0, 0.7464716749664212, 0.8558986644346832, 0.40151459088778707, 0.9658946853385326, 0.27772534214252004, 0.0, 0.10651898116521909, 0.5268267756770567, 0.8052302772066784, 0.0, 0.258743842364532, 0.9231421412121703, 0.21550875066292902, 0.6330883339173254, 0.5487097904926255, 0.09339157390446667, nan, 0.29633213123405755, 0.648311048557354, 0.6032167136461615, 0.1162079510703364, 0.9569935512071162, 0.8966612022369814, 0.9819461835645514, 0.01076756039031542, 0.35058975081518456, 0.467014318264338, 0.0] |
| 0.0359 | 155.0 | 7750 | 0.8264 | 0.4336 | 0.5100 | 0.8876 | [nan, 0.8425150819450634, 0.8887259579748503, 0.6062849127025877, 0.6661436167605636, 0.477463082002611, 0.0, 0.608982398838398, 0.74429892821273, 0.3660286553193368, 0.8814051326012079, 0.18797448685125717, 0.0, 0.206084945843982, 0.4612220916568743, 0.6472122569202, 0.0, 0.1635491016490278, 0.7777400139827546, 0.16735784151426214, 0.4777184910568181, 0.5271252583451728, 0.1026913327220754, nan, 0.20569207071077533, 0.49218430887769665, 0.4574078290930921, 0.0779816513761468, 0.8958569293152772, 0.8268185544245148, 0.961547775435119, 0.016675747796079745, 0.1920671902330555, 0.3826628162758937, 0.0007393715341959334] | [nan, 0.9373452897590907, 0.9575174915394369, 0.8346226350031035, 0.7189249990837373, 0.592023705769566, 0.0, 0.743514796514588, 0.8620311269429625, 0.40898887491057995, 0.9669574481830303, 0.230551421207391, 0.0, 0.2065726554113399, 0.6016862544711293, 0.8009495148138216, 0.0, 0.16366995073891627, 0.9155667078623104, 0.2259489355254186, 0.6286330113925248, 0.5697087786470788, 0.10731294913343667, nan, 0.3083164746239775, 0.6503009871236473, 0.6106228928314817, 0.0779816513761468, 0.9592060735712507, 0.8998802821519236, 0.9815085933742069, 0.02063609518606372, 0.36394133354803215, 0.48287175655267134, 0.0011527377521613833] |
| 0.0335 | 160.0 | 8000 | 0.8518 | 0.4340 | 0.5059 | 0.8886 | [nan, 0.8436530764368111, 0.8895900440620743, 0.6082310506714941, 0.6647265197368698, 0.48458344251575175, 0.0, 0.6090840245108227, 0.7404627804506331, 0.38335284631867284, 0.8815549567555062, 0.18294506042107886, 0.0, 0.07282879016921051, 0.4207551435677142, 0.6530114804312678, 0.0, 0.3558657849620377, 0.7775443898061408, 0.17116698280457718, 0.4806890482304907, 0.4933879226304321, 0.09181473293485085, nan, 0.17767671317351422, 0.4911045514027132, 0.4719998327724242, 0.08830275229357798, 0.9007817953005852, 0.8305455831626325, 0.9611232513095775, 0.006788911045474309, 0.20454109523352834, 0.3848491020278139, 0.0] | [nan, 0.9335459063558043, 0.9603316031750019, 0.8547916810348131, 0.7148900428130813, 0.5963366451687874, 0.0, 0.7503401525473862, 0.8677483877983438, 0.42808160043414983, 0.9664967587586591, 0.2132355610411297, 0.0, 0.0728749878013077, 0.47547266223811957, 0.7982678307162083, 0.0, 0.3578817733990148, 0.9317873005484486, 0.20920524282142586, 0.6235621577277751, 0.5465768257567909, 0.09480061998027335, nan, 0.25343038086023395, 0.6593662628145387, 0.6239982313601945, 0.08830275229357798, 0.9526813832066575, 0.902749388764508, 0.9824663289380254, 0.007828438254230607, 0.3434241777706212, 0.48473192062598336, 0.0] |
| 0.0346 | 165.0 | 8250 | 0.8438 | 0.4379 | 0.5103 | 0.8883 | [nan, 0.8459468636033894, 0.8888331369606564, 0.6143356921364396, 0.6654980544147341, 0.48167853831328056, 0.0, 0.6135617243950853, 0.7453493425593741, 0.36501505490612823, 0.8871093023776453, 0.28924392439243923, 0.0, 0.11610167426217922, 0.44053678852383155, 0.6419692508995748, 0.0, 0.31108930323846906, 0.7764850703242182, 0.17769648792669843, 0.48261405652354455, 0.5041534749331448, 0.09703109762704519, nan, 0.1935639159166168, 0.4981157384329542, 0.45534552215680196, 0.08371559633027523, 0.8969250693293208, 0.8249491172270096, 0.9618063555393217, 0.009535384030237478, 0.19902344047093898, 0.3833148309593847, 0.0] | [nan, 0.9345559069102661, 0.95845124190979, 0.8289156072553392, 0.7178118816407789, 0.6027575387833363, 0.0, 0.7548031091349021, 0.8646673279137435, 0.4066947877352673, 0.9652041807300498, 0.34996551348604205, 0.0, 0.11622914023616668, 0.4864588656106285, 0.7796748209727561, 0.0, 0.31231527093596056, 0.925766196175982, 0.22965376164860973, 0.6295864608103177, 0.5586281474711666, 0.10094406087079047, nan, 0.2917802797079778, 0.6533227456872777, 0.6091029679986735, 0.08371559633027523, 0.9575749702296287, 0.8960586786072262, 0.9801156536079956, 0.011096050511407251, 0.3511399165358346, 0.48239553350137077, 0.0] |
| 0.0359 | 170.0 | 8500 | 0.8588 | 0.4298 | 0.5008 | 0.8882 | [nan, 0.843094419260262, 0.8900013429866321, 0.6133301326394077, 0.6661149601220273, 0.4853624310010443, 0.0, 0.6120054866295084, 0.7375298943289792, 0.3408351470819216, 0.8829721413070726, 0.22209681464760472, 0.0, 0.03861163959217523, 0.4175319971021492, 0.6376489814784245, 0.0, 0.28027511667894867, 0.7789104093843366, 0.17390202354217138, 0.47461354628029206, 0.516023356843965, 0.08927792321116929, nan, 0.18421222487575034, 0.4871304688103021, 0.45871426798494186, 0.05387848681696599, 0.8994123394635088, 0.8242101331834862, 0.9615335975044262, 0.007916133605582808, 0.22646747269605874, 0.37908474344043297, 0.0] | [nan, 0.9397440850808414, 0.9577213526503497, 0.8272086714637118, 0.7156158739766668, 0.6048939631630934, 0.0, 0.7494343721360998, 0.8668388984634243, 0.3747009053010681, 0.9680392740381917, 0.25716048934548225, 0.0, 0.03862105982238704, 0.441747572815534, 0.7673092776337614, 0.0, 0.2810344827586207, 0.9210972833920102, 0.22352450943253277, 0.6446638544934293, 0.5682428088718845, 0.09226433704382134, nan, 0.31379189022781245, 0.6451870589801624, 0.6128613275852539, 0.05389908256880734, 0.9556559273578009, 0.9013431255913293, 0.9796174593236774, 0.009692187467583211, 0.33278316761268334, 0.4804472286975694, 0.0] |
| 0.0342 | 175.0 | 8750 | 0.8689 | 0.4339 | 0.5051 | 0.8880 | [nan, 0.842207631443645, 0.8893284445771101, 0.6225399576081035, 0.6646476520665043, 0.48347573182283166, 0.0, 0.6145921797450942, 0.7331767170008916, 0.3267635558167394, 0.8840148558277702, 0.2103112515380292, 0.0, 0.10012921471584953, 0.3746216530849825, 0.6392775627666964, 0.0, 0.4631879914224446, 0.7770691785507862, 0.1792685215596115, 0.48551142385802487, 0.48582005237755577, 0.08915524176996963, nan, 0.18459143368114972, 0.48183353534471146, 0.4823333617820261, 0.029434250764525993, 0.897290929740743, 0.8192668128466759, 0.9613327742988569, 0.0055269961977186316, 0.2091533037018423, 0.3819620509014621, 0.0] | [nan, 0.9364984593883142, 0.9624452521749953, 0.8451305393993732, 0.7156570585663757, 0.5934239730404973, 0.0, 0.7383128627606906, 0.8535853980828229, 0.35661955154295866, 0.965347682838101, 0.2512977819726286, 0.0, 0.10020005855372303, 0.41108840061318347, 0.7677959536366616, 0.0, 0.47881773399014776, 0.9253472165067016, 0.23157814985983788, 0.6453594462715136, 0.5296791470411678, 0.09181344229956319, nan, 0.31548509103703054, 0.6294458603571904, 0.6247167412811585, 0.029434250764525993, 0.9557927644216986, 0.8822924375415687, 0.9825359997862155, 0.006282805789724829, 0.3573930196046858, 0.48477927982445523, 0.0] |
| 0.0621 | 180.0 | 9000 | 0.7787 | 0.4015 | 0.4924 | 0.8783 | [0.0, 0.8086755425666048, 0.8830559170088975, 0.5349712025714258, 0.645925544331418, 0.4397485010784333, 0.0, 0.6035436142733216, 0.7401548966695519, 0.27901830172394715, 0.8781545312615516, 0.15653466918823716, 0.0007045974986788797, 0.12723599990265033, 0.20456217807211186, 0.629064116632701, 0.0, 0.28005299927728255, 0.7801685900058292, 0.18456300860811892, 0.45049561474148564, 0.5454936336497989, 0.09604580812445981, nan, 0.13710408411674824, 0.4796006742513984, 0.4462842458656277, 0.08326967150496563, 0.895986048178371, 0.8195021626448673, 0.9584500399303424, 0.012936392680801627, 0.2073265351363334, 0.33898081262786167, 0.001953125] | [nan, 0.9020274819425876, 0.9445349555322843, 0.7582243269960229, 0.7115816733865559, 0.6725024693509964, 0.0, 0.7246456643278654, 0.8622486135230519, 0.3110091516810972, 0.9623436700743563, 0.19680908991904744, 0.0007616146230007616, 0.12754952669073874, 0.21308124680633622, 0.7971156997705671, 0.0, 0.28633004926108374, 0.9141247505929693, 0.2378134707174786, 0.6194613894575736, 0.6931652884469377, 0.10021135691137101, nan, 0.23282610607793122, 0.6372106624569679, 0.5951196595368374, 0.08333333333333333, 0.9429497472807788, 0.9053891766821857, 0.9799066410634253, 0.015400999993084419, 0.4015941387222737, 0.4187395086220052, 0.0023054755043227667] |
| 0.0374 | 185.0 | 9250 | 0.8500 | 0.4261 | 0.5005 | 0.8835 | [nan, 0.8434716396594377, 0.8889128861529657, 0.64763139635125, 0.6591157906879173, 0.47535724026979675, 0.0, 0.6200541090314029, 0.749098883299684, 0.3885603318916056, 0.8826306979452221, 0.1625372623759957, 0.0, 0.08342478113492818, 0.39311682016480853, 0.6380806324629313, 0.0, 0.22758620689655173, 0.7521926906996731, 0.17508827683615819, 0.39885397225233327, 0.46177267841868885, 0.09434473050163783, nan, 0.14603587039096305, 0.4816513597668971, 0.4814476488755492, 0.10313216195569137, 0.9008454163938971, 0.818761674014968, 0.9607465658084764, 0.006843049110009815, 0.22781082971393046, 0.39319498274838577, 0.0] | [nan, 0.9312379371557167, 0.9615129186420878, 0.8851103856793643, 0.708138414982727, 0.5974376852013248, 0.0, 0.7408003646396516, 0.865632836519292, 0.4343471718591973, 0.9632888776864283, 0.1811086506697644, 0.0, 0.08346345271786865, 0.41440981093510476, 0.772771967462233, 0.0, 0.22758620689655173, 0.8636544903580905, 0.22540343965451928, 0.6805965245365061, 0.5255778376023376, 0.09821051148372552, nan, 0.29167033160348316, 0.6413712269623599, 0.6117282927098878, 0.10321100917431193, 0.9538809235006024, 0.8999256213056552, 0.9828299535018667, 0.007811149300488931, 0.3540249319002187, 0.5106163536574456, 0.0] |
| 0.0312 | 190.0 | 9500 | 0.8366 | 0.4271 | 0.5011 | 0.8871 | [nan, 0.8383583648435936, 0.8893585287734083, 0.6242144991822743, 0.6523942357118304, 0.4788692097394316, 0.0, 0.6222419857542325, 0.7495553204266636, 0.3855623463905866, 0.8844989483482312, 0.21960980490245122, 0.0, 0.03046415766238201, 0.39732965009208104, 0.6460657345680039, 0.0, 0.16235120873726838, 0.7700717667212197, 0.16549668505209203, 0.49368437402670146, 0.46331160358515755, 0.09818201434967902, nan, 0.17114682596121936, 0.5135764361691169, 0.4659315099786098, 0.10504201680672269, 0.9002915149364578, 0.8254822330499596, 0.9604699442360148, 0.009150900078881995, 0.18152508685955304, 0.3910305542248974, 0.0] | [nan, 0.9283969805595136, 0.9598165282698033, 0.8767078936680537, 0.7034928688316159, 0.5971018534658068, 0.0, 0.7531496234804661, 0.8637248860666893, 0.4372826167394361, 0.9671062455718215, 0.2709187933350274, 0.0, 0.03047233336586318, 0.4409810935104752, 0.7719575300696244, 0.0, 0.1629310344827586, 0.9197306063880243, 0.22505492840366695, 0.6378119115673065, 0.5240721319571477, 0.10257855431872623, nan, 0.31753012578063156, 0.6519010213591494, 0.620018791797933, 0.10512232415902141, 0.9564885836615992, 0.9020052271173103, 0.9846571430752903, 0.010750271436573745, 0.32109550071789916, 0.49397091092787193, 0.0] |
| 0.0326 | 195.0 | 9750 | 0.8707 | 0.4272 | 0.4984 | 0.8861 | [nan, 0.8261617719245659, 0.8854917604179252, 0.6200336534230758, 0.660580250534605, 0.4498640011519204, 0.0, 0.6209593550575648, 0.7414471855553728, 0.34006487158979826, 0.8877441348891416, 0.23385327442671236, 0.0, 0.0332081728190374, 0.43202489229296315, 0.6361883362956504, 0.0, 0.1902200488997555, 0.7701853795262287, 0.15860467354944288, 0.49904952690861926, 0.46916590678565206, 0.09274864326815566, nan, 0.17989392302744164, 0.5138984658207596, 0.4806735961411222, 0.12204424103737604, 0.9008746454479115, 0.8221407501198316, 0.9611822232918834, 0.00719201457815406, 0.1665572869766945, 0.3941783403071965, 0.0] | [nan, 0.9130443736650012, 0.965349714444587, 0.8908710545070002, 0.7139688682285827, 0.5282453082331068, 0.0, 0.7562413022290538, 0.8614280959708963, 0.37758701497323566, 0.9644183610682487, 0.28246270011253494, 0.0, 0.03322923782570508, 0.46116504854368934, 0.7536724173892315, 0.0, 0.1916256157635468, 0.9292328194741926, 0.19972725206455033, 0.628904385763347, 0.5198814168108274, 0.0958433140763703, nan, 0.325930160964025, 0.6750836867831942, 0.6223677665395457, 0.12232415902140673, 0.9573260874322359, 0.8965555357795243, 0.9795764203309079, 0.007942545348925665, 0.29287602485138814, 0.49881075790503954, 0.0] |
| 0.0323 | 200.0 | 10000 | 0.8527 | 0.4345 | 0.5079 | 0.8871 | [nan, 0.8382620833593052, 0.8876413942052827, 0.6261839847460975, 0.6590417473673477, 0.48228357004057837, 0.0, 0.6202905105623743, 0.748344409080285, 0.39096811362981676, 0.8848513296576286, 0.2415092028297553, 0.0, 0.07068982339740462, 0.41356382978723405, 0.6474134903246308, 0.0, 0.3062052505966587, 0.7704161510118073, 0.16108765491481541, 0.49752934863906867, 0.4734664813860761, 0.09820294554789893, nan, 0.17153699720635862, 0.514555863370054, 0.4660696051735875, 0.08826901031715705, 0.8991007829081079, 0.829742650939299, 0.9612781430019607, 0.01112666737555973, 0.1861992251927429, 0.391388886866003, 0.0] | [nan, 0.9255583122183136, 0.9555184973850358, 0.8927561553139153, 0.7130378697969978, 0.6275811980710011, 0.0, 0.7474676455043131, 0.8545937449541465, 0.43523520560447965, 0.9672661630501664, 0.28627436744473084, 0.0, 0.0707036205718747, 0.47675012774655084, 0.7689381524189783, 0.0, 0.31600985221674877, 0.9278457312029238, 0.2055231456928555, 0.6363063556709445, 0.5255962863991213, 0.10240946878962942, nan, 0.30514996921453075, 0.6575213496395762, 0.6054551483999336, 0.08830275229357798, 0.9550074747938649, 0.8984159398975186, 0.9823971352874257, 0.013025497748978224, 0.3256981066248004, 0.49491941043060034, 0.0] |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.7.1+cu110
- Datasets 1.18.3
- Tokenizers 0.10.3
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
nielsr/segformer-b0-finetuned-segments-sidewalk |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set (a short sketch of how the per-category lists aggregate into `Miou` and `Macc` follows the list):
- Loss: 0.5679
- Miou (mean IoU): 0.2769
- Macc (mean accuracy): 0.3331
- Overall Accuracy: 0.8424
- Per Category Iou: [nan, 0.7174911859423314, 0.8790751054409742, 0.6065232798410057, 0.6975274018055722, 0.3486407385349508, nan, 0.40093167116703843, 0.28779837903852556, 0.0, 0.7870339041746186, 0.0, 0.0, 0.0, 0.0, 0.1464360606454247, 0.0, 0.0, 0.6770283275082656, 0.0, 0.338555175257431, 0.14697310016578427, 0.0, nan, 0.0, 0.27163002251763635, 0.0, 0.0, 0.8257437911843676, 0.7169333376341568, 0.9108105550493353, 0.0, 0.0, 0.1016801552778885, 0.0]
- Per Category Accuracy: [nan, 0.9199960254104915, 0.9327745517652714, 0.7304629327758765, 0.7378309547498484, 0.45295941407150275, nan, 0.5188608021128075, 0.5327441812670195, 0.0, 0.9353764765979435, 0.0, 0.0, 0.0, 0.0, 0.1588525415198792, 0.0, 0.0, 0.9238854794385364, 0.0, 0.4400394213522207, 0.15130051149615126, 0.0, nan, 0.0, 0.3570096986572905, 0.0, 0.0, 0.9359897980968498, 0.8570458108260572, 0.9549583230619891, 0.0, 0.0, 0.11786971668879294, 0.0]
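The `Miou` and `Macc` values above are, up to rounding, the per-category lists averaged with `nan` entries skipped (`nan` marks categories such as `flat-railtrack` that never occur in the evaluation split), while the overall accuracy is computed over all pixels. A minimal NumPy sketch of that aggregation, shown on a short made-up list rather than the full 35-entry lists:

```python
import numpy as np

def aggregate(per_category_scores):
    # Mean of the per-category scores, ignoring nan entries
    # (categories that are absent from the evaluation split).
    return float(np.nanmean(np.asarray(per_category_scores, dtype=float)))

# Toy example: the two nan entries do not contribute to the mean.
print(aggregate([np.nan, 0.72, 0.88, 0.0, np.nan, 0.34]))  # -> 0.485
```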
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
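For reference, here is a hedged sketch of how these settings could be expressed with `transformers.TrainingArguments`; the `output_dir` and the per-device reading of the batch sizes are assumptions, and `Trainer` applies the betas/epsilon through its default AdamW optimizer (reported above simply as "Adam"):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="segformer-b0-finetuned-segments-sidewalk",  # assumed output directory
    learning_rate=6e-5,
    per_device_train_batch_size=2,   # "train_batch_size: 2"
    per_device_eval_batch_size=2,    # "eval_batch_size: 2"
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```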
### Training results
| Training Loss | Epoch | Step | Validation Loss | Miou | Macc | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:----------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 1.357 | 1.0 | 400 | 1.0006 | 0.1632 | 0.2069 | 0.7524 | [nan, 0.5642795884663824, 0.7491853309192827, 0.0, 0.40589649630192104, 0.02723606910696284, nan, 0.0002207740938439576, 0.0, 0.0, 0.6632462867093903, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5671699281129761, 0.0, 0.0009207911027492868, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.7507253434892517, 0.6157793573905029, 0.8774768871968204, 0.0, 0.0, 0.0, 0.0] | [nan, 0.6839993330882016, 0.9786792586618772, 0.0, 0.4818162160949784, 0.02785198456498826, nan, 0.00022133459131411787, 0.0, 0.0, 0.9043689536433023, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8606078323791991, 0.0, 0.0009210330367246509, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.895198618615298, 0.8549807032886052, 0.9328734839751688, 0.0, 0.0, 0.0, 0.0] |
| 1.6346 | 2.0 | 800 | 0.7856 | 0.1903 | 0.2334 | 0.7917 | [nan, 0.6276046255936906, 0.8379492348238635, 0.0, 0.5220035981992285, 0.19441920935217594, nan, 0.16135703555333, 0.0, 0.0, 0.7357165628674137, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.567598980063164, 0.0, 0.07867871139133086, 0.0, 0.0, nan, 0.0, 0.02123705398363847, 0.0, 0.0, 0.7917172051343153, 0.6589515948064048, 0.8916684207946344, 0.0, 0.0, 0.00013685918191589503, 0.0] | [nan, 0.8610263337355926, 0.9499345560017969, 0.0, 0.5908796687797819, 0.2144081438468206, nan, 0.1813236746419022, 0.0, 0.0, 0.8825551027577866, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9239907140298015, 0.0, 0.08495225520298297, 0.0, 0.0, nan, 0.0, 0.021302829364985724, 0.0, 0.0, 0.9258397010509258, 0.8834861376443207, 0.9489131468773239, 0.0, 0.0, 0.0001372777815910495, 0.0] |
| 0.659 | 3.0 | 1200 | 0.6798 | 0.2215 | 0.2687 | 0.8107 | [nan, 0.6728474586764454, 0.8404607924530816, 0.21147709475332813, 0.5407350347311378, 0.23535489130104167, nan, 0.3087159264982809, 0.0060319580742948155, 0.0, 0.7331305064022374, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6378031991744924, 0.0, 0.35289337122777764, 6.24997656258789e-05, 0.0, nan, 0.0, 0.14698390926256938, 0.0, 0.0, 0.8019042204623998, 0.669283249725758, 0.8928145424856038, 0.0, 0.0, 0.03847722460691187, 0.0] | [nan, 0.866012011452706, 0.9627112260298595, 0.21236715482371135, 0.5645869262075475, 0.2750610095322395, nan, 0.3857655597748765, 0.0060319580742948155, 0.0, 0.939196440844118, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8380282443529743, 0.0, 0.5749902063170915, 6.256068386334744e-05, 0.0, nan, 0.0, 0.1605725590139305, 0.0, 0.0, 0.9212803460870584, 0.8870298583701837, 0.959700359744241, 0.0, 0.0, 0.04453994364914478, 0.0] |
| 0.5481 | 4.0 | 1600 | 0.5999 | 0.2522 | 0.2998 | 0.8312 | [nan, 0.7078353465279917, 0.8661728761172196, 0.3857324719136883, 0.6338278880825696, 0.3440050078187208, nan, 0.35980405625532347, 0.23875867241702606, 0.0, 0.773703347865372, 0.0, 0.0, 0.0, 0.0, 0.0004931363471679884, 0.0, 0.0, 0.6554146448850521, 0.0, 0.367673493717809, 0.03089804641909161, 0.0, nan, 0.0, 0.21529017459808872, 0.0, 0.0, 0.818951849158376, 0.7007504838794707, 0.9053929635423027, 0.0, 0.0, 0.06626212301200333, 0.0] | [nan, 0.8955207784307155, 0.9536263694097721, 0.39712577675621036, 0.6989299616008556, 0.4248959179453637, nan, 0.42984959564233455, 0.26168627652468784, 0.0, 0.9055166364779607, 0.0, 0.0, 0.0, 0.0, 0.0004932058379466533, 0.0, 0.0, 0.8632164276000204, 0.0, 0.6365580872107307, 0.031401709658368616, 0.0, nan, 0.0, 0.2497286263775161, 0.0, 0.0, 0.9296676429517725, 0.8858954297713482, 0.9555756265860916, 0.0, 0.0, 0.0750792276952902, 0.0] |
| 0.7855 | 5.0 | 2000 | 0.5679 | 0.2769 | 0.3331 | 0.8424 | [nan, 0.7174911859423314, 0.8790751054409742, 0.6065232798410057, 0.6975274018055722, 0.3486407385349508, nan, 0.40093167116703843, 0.28779837903852556, 0.0, 0.7870339041746186, 0.0, 0.0, 0.0, 0.0, 0.1464360606454247, 0.0, 0.0, 0.6770283275082656, 0.0, 0.338555175257431, 0.14697310016578427, 0.0, nan, 0.0, 0.27163002251763635, 0.0, 0.0, 0.8257437911843676, 0.7169333376341568, 0.9108105550493353, 0.0, 0.0, 0.1016801552778885, 0.0] | [nan, 0.9199960254104915, 0.9327745517652714, 0.7304629327758765, 0.7378309547498484, 0.45295941407150275, nan, 0.5188608021128075, 0.5327441812670195, 0.0, 0.9353764765979435, 0.0, 0.0, 0.0, 0.0, 0.1588525415198792, 0.0, 0.0, 0.9238854794385364, 0.0, 0.4400394213522207, 0.15130051149615126, 0.0, nan, 0.0, 0.3570096986572905, 0.0, 0.0, 0.9359897980968498, 0.8570458108260572, 0.9549583230619891, 0.0, 0.0, 0.11786971668879294, 0.0] |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
nickmuchi/segformer-b4-finetuned-segments-sidewalk |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b4-finetuned-segments-sidewalk
This model is a fine-tuned version of [nvidia/mit-b4](https://huggingface.co/nvidia/mit-b4) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set (a sketch of how metrics with these names can be computed follows the list):
- Loss: 0.6463
- Mean Accuracy: 0.5168
- Mean Iou: 0.4317
- Overall Accuracy: 0.8895
- Per Category Accuracy: [nan, 0.9354022848098984, 0.9601675641402632, 0.5369719626168225, 0.8337939300328185, 0.6403441237446122, nan, 0.7582108280375539, 0.8834986003700717, 0.24187000289987157, 0.948116751458167, 0.5520704700749156, 0.0, 0.7381320949432405, 0.19649388321352, 0.888963759173865, 0.0, 0.07624433796769041, 0.9231866922167408, 0.1182221559959602, 0.6801081993642044, 0.5121910497873957, 0.04447175819878205, nan, 0.19406837841548813, 0.5788088135238394, 0.5379894086104895, 0.008460918614020952, 0.9391146435745414, 0.9050362370798539, 0.9765451034803329, 0.015450806083965353, 0.41939482614968804, 0.4941702933568719, 0.0]
- Per Category Iou: [nan, 0.8640678937775673, 0.895377615265056, 0.442350332594235, 0.7643727945096741, 0.4849891658522591, nan, 0.6340492784936108, 0.6910083381883088, 0.21346568681218236, 0.8895978581938467, 0.46446072065520405, 0.0, 0.601404187337089, 0.08586860670194003, 0.6029780227646933, 0.0, 0.07410800631139614, 0.7995575849393181, 0.09964415294445995, 0.4716975388811325, 0.4492564945882909, 0.04216548363174065, nan, 0.13932260862707987, 0.43292556418938755, 0.4516033033256454, 0.00821917808219178, 0.8889508587805682, 0.7461158390782254, 0.954070468766836, 0.012555965083260888, 0.23512657506778772, 0.3742610137901782, 0.0]
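Metrics with exactly these names (mean IoU, mean accuracy, overall accuracy plus the two per-category lists) match what the `mean_iou` metric of the `evaluate` library returns; the sketch below is only an illustration of that call, with the 35 sidewalk labels and an `ignore_index` of 255 assumed rather than taken from the card:

```python
import numpy as np
import evaluate

metric = evaluate.load("mean_iou")

# Toy label maps standing in for argmax'ed logits and ground-truth segmentation maps.
rng = np.random.default_rng(0)
predictions = [rng.integers(0, 35, size=(64, 64)) for _ in range(2)]
references = [rng.integers(0, 35, size=(64, 64)) for _ in range(2)]

results = metric.compute(
    predictions=predictions,
    references=references,
    num_labels=35,     # number of sidewalk-semantic categories (assumption)
    ignore_index=255,  # assumed index for pixels to ignore
    reduce_labels=False,
)
print(results["mean_iou"], results["mean_accuracy"], results["overall_accuracy"])
print(results["per_category_iou"])  # one entry per label, nan where a label never occurs
```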
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a model-initialization sketch follows the list):
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
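As a hedged illustration (the card itself does not show the training code), the `nvidia/mit-b4` encoder can be loaded into `SegformerForSemanticSegmentation` with a freshly initialized decode head for the 35 sidewalk-semantic categories before fine-tuning with the settings above:

```python
from transformers import SegformerForSemanticSegmentation

# The 35 sidewalk-semantic categories (same list as shown with the sidewalk model cards above).
labels = [
    "unlabeled", "flat-road", "flat-sidewalk", "flat-crosswalk", "flat-cyclinglane",
    "flat-parkingdriveway", "flat-railtrack", "flat-curb", "human-person", "human-rider",
    "vehicle-car", "vehicle-truck", "vehicle-bus", "vehicle-tramtrain", "vehicle-motorcycle",
    "vehicle-bicycle", "vehicle-caravan", "vehicle-cartrailer", "construction-building",
    "construction-door", "construction-wall", "construction-fenceguardrail",
    "construction-bridge", "construction-tunnel", "construction-stairs", "object-pole",
    "object-trafficsign", "object-trafficlight", "nature-vegetation", "nature-terrain",
    "sky", "void-ground", "void-dynamic", "void-static", "void-unclear",
]
id2label = {i: name for i, name in enumerate(labels)}
label2id = {name: i for i, name in id2label.items()}

# The decode head is newly initialized, so transformers will warn about untrained weights.
model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b4",
    num_labels=len(labels),
    id2label=id2label,
    label2id=label2id,
)
```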
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Accuracy | Mean Iou | Overall Accuracy | Per Category Accuracy | Per Category Iou |
|:-------------:|:-----:|:-----:|:---------------:|:-------------:|:--------:|:----------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 1.0086 | 0.25 | 100 | 0.9195 | 0.2302 | 0.1742 | 0.7405 | [nan, 0.754391784765388, 0.8738098328493714, 0.0, 0.6095047025690915, 0.04406067496837279, nan, 0.11344860810198232, 0.03344878303363856, 0.0, 0.9451322667227594, 0.0, 0.0, 0.0, 0.0, 8.118464635968046e-06, 0.0, 0.0, 0.8406900175689528, 0.0, 0.33313290995723815, 0.007980320315659196, 0.0, nan, 0.0, 0.01001465431517245, 0.0, 0.0, 0.9094842682836028, 0.9104621468677264, 0.9500069670140131, 0.0, 0.0, 0.030522857924993155, 0.0] | [nan, 0.5181348731869903, 0.7666613623083653, 0.0, 0.3145052392920833, 0.040279298027504136, nan, 0.09896279300890763, 0.0332534621335044, 0.0, 0.707185048053476, 0.0, 0.0, 0.0, 0.0, 8.11839872703508e-06, 0.0, 0.0, 0.6129636976206597, 0.0, 0.21304181051016494, 0.007979819175153202, 0.0, nan, 0.0, 0.009972716399085856, 0.0, 0.0, 0.8032595523715207, 0.5644424403160349, 0.8548000615746258, 0.0, 0.0, 0.02810796628175876, 0.0] |
| 0.6465 | 0.5 | 200 | 0.7250 | 0.2963 | 0.2416 | 0.7963 | [nan, 0.8965158332325365, 0.9203420775747997, 0.0005677570093457944, 0.42947876549598557, 0.20108992228390948, nan, 0.6149826174335852, 0.6106893770460692, 0.0, 0.9320756176369465, 0.0, 0.0, 0.0, 0.0, 0.23413652010131844, 0.0, 0.0, 0.9437607244807804, 0.0, 0.2033741348512844, 0.2597617238717267, 0.0, nan, 0.0, 0.21746480347516617, 0.0, 0.0, 0.8793454644762622, 0.8380851985041863, 0.9445753860505853, 0.0, 0.0, 0.35629926758549024, 0.0] | [nan, 0.6645359168510458, 0.8064416600263559, 0.000566105647428005, 0.4116417722563792, 0.17504073239500048, nan, 0.34611894249410324, 0.4768988514264542, 0.0, 0.7872815412923856, 0.0, 0.0, 0.0, 0.0, 0.22760454893418883, 0.0, 0.0, 0.6497218142931416, 0.0, 0.16433182458127107, 0.24025960226620707, 0.0, nan, 0.0, 0.1865917623179034, 0.0, 0.0, 0.8237045305017561, 0.6485287252686867, 0.8916263487480074, 0.0, 0.0, 0.23161660227979464, 0.0] |
| 0.6777 | 1.0 | 400 | 0.6645 | 0.3343 | 0.2755 | 0.8205 | [nan, 0.8955600256602996, 0.9528284776336102, 0.20619042056074766, 0.4578573681184769, 0.34171859852352976, nan, 0.5150824142204389, 0.8000759121317076, 0.0, 0.9308408861203066, 0.0, 0.0, 0.0, 0.0, 0.8202247191011236, 0.0, 0.0, 0.931415684238172, 0.0, 0.22729327499111263, 0.2807173404242283, 0.0, nan, 0.0, 0.3332993143873973, 0.0, 0.0, 0.904612735522824, 0.9085503237620377, 0.9531456202767545, 0.0, 0.0, 0.2395403274915222, 0.0] | [nan, 0.7091852218081763, 0.8215012473174504, 0.20316384883142716, 0.449169741519482, 0.2820828827399737, nan, 0.4034439348068946, 0.5801054036574794, 0.0, 0.8406284073872154, 0.0, 0.0, 0.0, 0.0, 0.5491287380561565, 0.0, 0.0, 0.6833033543785748, 0.0, 0.196701947180513, 0.26816266986235426, 0.0, nan, 0.0, 0.2624543573765898, 0.0, 0.0, 0.8319417451247856, 0.6328739755697549, 0.9148380247362377, 0.0, 0.0, 0.18610354253000033, 0.0] |
| 0.4931 | 1.25 | 500 | 0.6513 | 0.3693 | 0.2930 | 0.8232 | [nan, 0.8195930838546497, 0.9565826472101743, 0.3660338785046729, 0.502483997738174, 0.5101274819814215, nan, 0.6120499018406388, 0.8168524932390757, 0.0, 0.9680832750475287, 0.0, 0.0, 0.0, 0.0, 0.7678687406637656, 0.0, 0.0, 0.9132467503439181, 0.07463699730127982, 0.3080053777834345, 0.3700341269744017, 0.0, nan, 0.0, 0.3144554351808238, 0.0, 0.0, 0.8719933435243034, 0.9280312013943278, 0.9461371807749148, 0.0, 0.3623930581804142, 0.40862556355693114, 0.0] | [nan, 0.7255301419742964, 0.8322765227346863, 0.3328323011716717, 0.4866977152337555, 0.31646114214929966, nan, 0.4116248877039441, 0.584768070212383, 0.0, 0.7940437031847611, 0.0, 0.0, 0.0, 0.0, 0.5384221282312557, 0.0, 0.0, 0.7148576049798162, 0.06922710729587371, 0.23689839512021127, 0.330131038978254, 0.0, nan, 0.0, 0.25964434649208096, 0.0, 0.0, 0.8276496500163791, 0.5924934568973941, 0.9145898275185997, 0.0, 0.10460157785142388, 0.3046522912622977, 0.0] |
| 0.1718 | 2.0 | 800 | 0.5338 | 0.3766 | 0.3117 | 0.8521 | [nan, 0.9149980619048741, 0.9439616375983239, 0.49970093457943926, 0.7343188057936092, 0.4654595153245685, nan, 0.4401632944315461, 0.7951368790624852, 0.0, 0.9516775700030986, 0.0, 0.0, 0.0, 0.0, 0.7842599207637851, 0.0, 0.0, 0.9120325078402151, 0.0, 0.5436783980174178, 0.289193941696178, 0.0, nan, 0.0, 0.4040691893023499, 0.04438191043850125, 0.0, 0.9289921718405059, 0.9105179916825697, 0.9579859465374478, 0.0, 0.00014225040134934668, 0.5310102962619485, 0.0] | [nan, 0.7682867926029272, 0.863978713337328, 0.3619354489331745, 0.619807980106986, 0.4001297195410576, nan, 0.37693255173950874, 0.6055069405805374, 0.0, 0.8443884543167844, 0.0, 0.0, 0.0, 0.0, 0.5757144134211389, 0.0, 0.0, 0.7512958252799772, 0.0, 0.35684944134400076, 0.2822025918120264, 0.0, nan, 0.0, 0.3086991377431782, 0.04423000485801351, 0.0, 0.8578322873273115, 0.6920597473565505, 0.9258143343645202, 0.0, 0.00013209541062801931, 0.3399454223242722, 0.0] |
| 1.7925 | 2.25 | 900 | 0.5745 | 0.3877 | 0.3157 | 0.8463 | [nan, 0.9373443718928436, 0.8936817705653165, 0.5237184579439252, 0.785620810686892, 0.5932309765570626, nan, 0.5731998228133042, 0.7751909664563268, 0.0, 0.9330254836699918, 0.0, 0.0, 0.0, 0.0, 0.8874780801454829, 0.0, 0.0, 0.9253989025665076, 0.0, 0.49743326413606553, 0.3720606075459213, 0.0, nan, 0.0, 0.362670748940179, 0.2263189382021227, 0.0, 0.9355852115710428, 0.9121195658169062, 0.9653801272784691, 0.0, 0.09587677050945966, 0.21074794549629322, 0.0] | [nan, 0.7666762008063966, 0.8459820722288737, 0.35589376130270695, 0.6602856629180212, 0.391087786259542, nan, 0.4283483218139711, 0.618615992154992, 0.0, 0.8563419873974479, 0.0, 0.0, 0.0, 0.0, 0.4695442264821982, 0.0, 0.0, 0.7387838557909564, 0.0, 0.3568544684209477, 0.3548962568907604, 0.0, nan, 0.0, 0.28509334019028026, 0.21794051124482566, 0.0, 0.8588025306782998, 0.6960344960020876, 0.927551192360457, 0.0, 0.09183812508516147, 0.18221393560509547, 0.0] |
| 0.4287 | 2.5 | 1000 | 0.5140 | 0.4156 | 0.3337 | 0.8596 | [nan, 0.9114284539509796, 0.9599424299786812, 0.3729602803738318, 0.6955020648206622, 0.6337076451002155, nan, 0.648796319756489, 0.9076149357119134, 0.0, 0.9333320442069727, 0.0, 0.0, 0.0, 0.0, 0.837638825745275, 0.0, 0.0, 0.8487128760410935, 0.14962168247818672, 0.7450834097721757, 0.4416333770387344, 0.0, nan, 0.005162707675408485, 0.4304364892447794, 0.29855310097272386, 0.0, 0.9243997842101966, 0.9100753698167738, 0.9780073694330464, 0.0, 0.3377837387469772, 0.3283183517042185, 0.0] | [nan, 0.8056652041667661, 0.868478873207236, 0.36872340720413566, 0.648560287656455, 0.4227995307199668, nan, 0.5211383920382058, 0.5417303836612635, 0.0, 0.8614512323591124, 0.0, 0.0, 0.0, 0.0, 0.4902451772308277, 0.0, 0.0, 0.7414797203702529, 0.1034994187677877, 0.37103542329614997, 0.38941938864817555, 0.0, nan, 0.004775330844065127, 0.3339817219387496, 0.27392303157209946, 0.0, 0.8695462814099766, 0.7123344518279238, 0.9249476057387171, 0.0, 0.15441354067963511, 0.2686663032210652, 0.0] |
| 0.2477 | 2.75 | 1100 | 0.5852 | 0.3976 | 0.3245 | 0.8501 | [nan, 0.9240898770490549, 0.9130342916084687, 0.5360268691588785, 0.6767027987344469, 0.5151102302165186, nan, 0.6523417772790812, 0.8782321962328604, 0.0, 0.9459085723287141, 0.01212233473285585, 0.0, 0.0, 0.0, 0.8298613366240176, 0.0, 0.0, 0.8996769125664682, 0.0046441166244474245, 0.58637589184745, 0.4359797566385237, 0.0, nan, 0.0, 0.4451038886272047, 0.26994748620682013, 0.0, 0.9522730369995648, 0.9058973503358962, 0.9744264856283144, 0.024141075054913176, 0.024040317828039587, 0.315675681715336, 0.0] | [nan, 0.7635041179698989, 0.8504428879888529, 0.32134395517814934, 0.5814428391874907, 0.4398125968608028, nan, 0.5183108660060791, 0.5876442483214019, 0.0, 0.8637126471579993, 0.010904378413403684, 0.0, 0.0, 0.0, 0.5582717546245474, 0.0, 0.0, 0.7543635882159604, 0.004548919124920941, 0.3707771520336274, 0.37139606254827867, 0.0, nan, 0.0, 0.32640450731902027, 0.25674365674787153, 0.0, 0.8589069009951039, 0.7216899081490464, 0.9303705560523882, 0.023933704665274814, 0.02273469779955799, 0.24717820737291407, 0.0] |
| 0.2092 | 3.5 | 1400 | 0.5305 | 0.4215 | 0.3450 | 0.8615 | [nan, 0.8854690236777607, 0.9752597083363964, 0.4837301401869159, 0.7543174059151941, 0.32120495047431574, nan, 0.6121067808383275, 0.8640129050623903, 0.006110443680351299, 0.9472197081638014, 0.22567300568041493, 0.0, 0.0, 0.0, 0.849337533285705, 0.0, 0.0, 0.9323370763681338, 0.09924833192602527, 0.4992824257958052, 0.5897763059541461, 0.0, nan, 0.005025401620211451, 0.5194038833935207, 0.26516141898030177, 0.0, 0.9098213390526053, 0.9140251839431679, 0.9696367307434691, 0.0, 0.46129773009002417, 0.39953043905763785, 0.0] | [nan, 0.8279523588823188, 0.8503094621684615, 0.4166789099025304, 0.6531647345358885, 0.2970569371138754, nan, 0.4891076127233826, 0.6267720763107083, 0.0060749588138385505, 0.8628731375345856, 0.1638621555382868, 0.0, 0.0, 0.0, 0.5868382377688277, 0.0, 0.0, 0.766351782387915, 0.08906272053962098, 0.3548571571167739, 0.42844759670807536, 0.0, nan, 0.004661470273574813, 0.3559905085937402, 0.24649831094998764, 0.0, 0.8706735405566627, 0.7172875061476175, 0.937101627261161, 0.0, 0.18277266944717308, 0.30403604315996224, 0.0] |
| 0.1763 | 3.75 | 1500 | 0.5284 | 0.4184 | 0.3549 | 0.8725 | [nan, 0.9155522786024052, 0.9647682266779387, 0.44949532710280377, 0.7917047766525447, 0.5148885009996292, nan, 0.6544609508444807, 0.8639037813730607, 0.006400430838062886, 0.9591118988406824, 0.21581460442907713, 0.0, 0.0, 0.0, 0.8629440800155874, 0.0, 0.0, 0.9189088001847752, 0.0, 0.553022223587637, 0.46456492702831864, 0.0, nan, 0.09048469037484554, 0.4453708065107029, 0.3956482240588509, 0.0, 0.9463804808607508, 0.8827003794689641, 0.9646183286805874, 0.0, 0.10191225182385336, 0.42574316887992536, 0.0] | [nan, 0.8411073731152799, 0.8690976727110442, 0.4122661523625844, 0.6761261173524866, 0.4325420396336731, nan, 0.5235010874548043, 0.6267662599177323, 0.006377182482354398, 0.8589461626478264, 0.21441570391575504, 0.0, 0.0, 0.0, 0.5785872529434498, 0.0, 0.0, 0.7644870697544361, 0.0, 0.3931242258826368, 0.4137160566746283, 0.0, nan, 0.07477420233286435, 0.3486446014515762, 0.35308773803167826, 0.0, 0.8775350307334798, 0.7615382190401359, 0.9362335277343948, 0.0, 0.08161239401780339, 0.3123361865981938, 0.0] |
| 0.227 | 4.0 | 1600 | 0.5923 | 0.4426 | 0.3538 | 0.8544 | [nan, 0.9577374173182539, 0.9166854278467985, 0.1959217289719626, 0.7810987315371373, 0.5809225413617377, nan, 0.5835888579214346, 0.8662428239312995, 0.024607481668668958, 0.960621119945819, 0.44992590763151397, 0.0, 0.0, 0.0, 0.890757939858414, 0.0, 0.0, 0.8824976680624833, 0.23107998476795974, 0.6677916708726317, 0.5485129952087443, 0.0, nan, 0.13447755045997528, 0.4840215627780395, 0.4094524827723738, 0.0, 0.9258667409261705, 0.8784809934585728, 0.9680485743444954, 0.0, 0.5403279887825397, 0.2843078375615234, 0.0] | [nan, 0.732742632898181, 0.85248637631468, 0.1937195271972472, 0.6916132972252533, 0.4613544304478555, nan, 0.5019837033874182, 0.6339381818434339, 0.024391746227286727, 0.8507334888775837, 0.3399262956570416, 0.0, 0.0, 0.0, 0.5118086361876507, 0.0, 0.0, 0.7596215991272331, 0.14059847786558677, 0.3924964359231432, 0.4511581321221818, 0.0, nan, 0.11381225741975969, 0.3543174804464886, 0.36413975210357263, 0.0, 0.8783724167054704, 0.7445500851078998, 0.9377100490542223, 0.0, 0.1494074611014649, 0.24185599444907813, 0.0] |
| 0.3219 | 4.75 | 1900 | 0.5306 | 0.4360 | 0.3684 | 0.8771 | [nan, 0.9383015101174155, 0.9581139041020363, 0.4607803738317757, 0.811509517207101, 0.6291153866526402, nan, 0.6505845609717001, 0.814323670351568, 0.021541903144289325, 0.9406027168809682, 0.41314727916357946, 0.0, 0.0, 0.0, 0.8354955510813795, 0.0, 0.0, 0.9418887586641801, 0.05121773539297008, 0.6343575406735104, 0.518250578994449, 0.0, nan, 0.027131676506933957, 0.4585466059559324, 0.39812988854667525, 0.0, 0.9202410996786, 0.895342680330491, 0.9736189575948254, 0.00016059513448547392, 0.336889593367067, 0.32415208076113006, 0.0] | [nan, 0.8286943759948178, 0.8911330146359255, 0.44085585238189445, 0.7563455702043241, 0.44281982228819555, nan, 0.5389345827619121, 0.6390151642075557, 0.02125355077350663, 0.8721853143259732, 0.34406869718732325, 0.0, 0.0, 0.0, 0.6106328062420269, 0.0, 0.0, 0.7642481786905918, 0.04822404265103627, 0.40217085841005906, 0.4365575304022451, 0.0, nan, 0.02300777793302594, 0.35943746679548483, 0.36207556675062974, 0.0, 0.8758467465629671, 0.7286601531442717, 0.9422882468777368, 0.00016028416831905857, 0.18664925297515172, 0.274341743647937, 0.0] |
| 0.3758 | 5.25 | 2100 | 0.5413 | 0.4400 | 0.3618 | 0.8749 | [nan, 0.9446099997724584, 0.9535776804748952, 0.5333586448598131, 0.7118822151738956, 0.5725146926401914, nan, 0.637704053404208, 0.8958248327560848, 0.02011268072413936, 0.9449676672959805, 0.4536305260558163, 0.0, 0.0, 0.0, 0.8527716438267194, 0.0, 0.0, 0.9263943868758329, 0.13527541846719315, 0.6231382204452325, 0.5343291629394538, 0.0, nan, 0.07845667993958534, 0.48360548490082167, 0.39496133478097095, 0.0, 0.9342636737434504, 0.9081380373512183, 0.9754223113378334, 0.0, 0.0686053364221992, 0.4949887428280921, 0.0] | [nan, 0.8421459412186475, 0.884886678991681, 0.3243137842681656, 0.6975183850797184, 0.4470212561315764, nan, 0.5491953906967838, 0.5880944000946866, 0.01971493543409405, 0.8720965863289499, 0.2829941580535405, 0.0, 0.0, 0.0, 0.5648458841496203, 0.0, 0.0, 0.7876641278543601, 0.11773309221380866, 0.4507472099997672, 0.4306682617343027, 0.0, nan, 0.053795025325274436, 0.35687388479928317, 0.3506028598965402, 0.0, 0.8763044901374653, 0.7342806685419377, 0.9417441335611155, 0.0, 0.05263732322996086, 0.3527909231538019, 0.0] |
| 0.1962 | 6.0 | 2400 | 0.5252 | 0.4591 | 0.3755 | 0.8678 | [nan, 0.8788767058796604, 0.9301585587737999, 0.5368457943925233, 0.8328600223823257, 0.6594750437607246, nan, 0.7274099889861577, 0.8314845566257058, 0.20671941671154564, 0.9452567774639331, 0.5536552235119783, 0.0, 0.0, 0.0, 0.8969685653049295, 0.0, 0.0, 0.9273548947094251, 0.04859351976026093, 0.6165535079211122, 0.5024186037962429, 0.0, nan, 0.07840175751750653, 0.49256293504998166, 0.4105160532671556, 0.0, 0.928572042963352, 0.9119196275909236, 0.976082967184019, 0.09759262712918065, 0.23430673250828102, 0.4679128700481014, 0.0] | [nan, 0.8020983983063393, 0.8683865888896747, 0.4544978013913642, 0.6680523786513721, 0.4517445785165809, nan, 0.5857034011566181, 0.6746845091894639, 0.18334129404416358, 0.8638403093611754, 0.3497406295097313, 0.0, 0.0, 0.0, 0.5136113874503752, 0.0, 0.0, 0.7818072530904586, 0.04626054062573883, 0.40338464571865573, 0.41853055526845995, 0.0, nan, 0.05885020509966401, 0.3764221220090192, 0.37385233165849424, 0.0, 0.8760216287329546, 0.7184759765101966, 0.9447723343539753, 0.07888984275215143, 0.17396158662623154, 0.3506487661563549, 0.0] |
| 0.2721 | 6.25 | 2500 | 0.5120 | 0.4726 | 0.3905 | 0.8834 | [nan, 0.9352277032235452, 0.9553332100455781, 0.5201098130841122, 0.8315588432600179, 0.6507746356557826, nan, 0.7171028251625792, 0.8676946434502064, 0.12399022329011143, 0.9414992885437384, 0.5631225817074175, 0.0, 0.0, 0.0, 0.8815434824965902, 0.0, 0.0, 0.9265160801760165, 0.12371893574396928, 0.6983379489227609, 0.496123187961817, 0.0, nan, 0.1353837704242757, 0.5335426806929398, 0.5267111298220735, 0.0, 0.9267000099723489, 0.9157963608485102, 0.9708294620227798, 0.0039371710389987154, 0.44802779979272084, 0.43061615557802646, 0.0] | [nan, 0.847290915944923, 0.8918843187400161, 0.4215259288995603, 0.7694117638497967, 0.498788432969163, nan, 0.5567520477680967, 0.6726198795136411, 0.11618337797445752, 0.8753637372987935, 0.42321077786886513, 0.0, 0.0, 0.0, 0.581673157378788, 0.0, 0.0, 0.7933263418076343, 0.10532064834390416, 0.437053368284101, 0.4288208971032145, 0.0, nan, 0.09955372468245795, 0.3973712316699539, 0.442531089433316, 0.0, 0.880946087123613, 0.7345359613309864, 0.9452321649786941, 0.003849095209395844, 0.23329171252010497, 0.3386007935784502, 0.0] |
| 0.2409 | 6.5 | 2600 | 0.5224 | 0.4636 | 0.3840 | 0.8786 | [nan, 0.8731382676849351, 0.9738163801183563, 0.5331343457943926, 0.8196854363098576, 0.6540081867354192, nan, 0.6300072908533401, 0.8875978554822792, 0.13449190107295247, 0.955765201040042, 0.6083600889108421, 0.0, 0.03281733746130031, 0.0, 0.8703400012989544, 0.0, 0.0, 0.9262836625295774, 0.08389211741916257, 0.6663345782989761, 0.5452994228436286, 0.0, nan, 0.13288480021968968, 0.47811535039514313, 0.4147924929649243, 0.0, 0.9382028859601423, 0.8756597961457425, 0.965266610679491, 0.010467176426706453, 0.4342701538336483, 0.3917412023665201, 0.0] | [nan, 0.8209592404927408, 0.8860938595226477, 0.41218836114746504, 0.7196016259460952, 0.4954368536125842, nan, 0.545313357840212, 0.6491223200313668, 0.12371625097650668, 0.8633659080664855, 0.4708871648638746, 0.0, 0.03281733746130031, 0.0, 0.5802203868677137, 0.0, 0.0, 0.7907500494259085, 0.06952381605757291, 0.447113968783744, 0.44327869995554786, 0.0, nan, 0.08728984775236309, 0.38119151688382136, 0.37855655092920265, 0.0, 0.8832564638909316, 0.7526222693644393, 0.9416404778849121, 0.009589327157183334, 0.18190330268981955, 0.32252322488728213, 0.0] |
| 0.1524 | 10.5 | 4200 | 0.5353 | 0.5128 | 0.4237 | 0.8872 | [nan, 0.9268517790355991, 0.9602839791773874, 0.537267523364486, 0.8456677302072528, 0.6567083558655384, nan, 0.7076703913792123, 0.8633391848934858, 0.3143875056961763, 0.9515964493686976, 0.6206264921379765, 0.0, 0.7490196078431373, 0.08954470929499306, 0.8721747743066831, 0.0, 0.005131830440133009, 0.9147190737070242, 0.11450520703985165, 0.6915674424660561, 0.5259122991900205, 0.0019833510251969382, nan, 0.2044761773994233, 0.5593918459203433, 0.4851432496510159, 0.0, 0.9463960710558084, 0.8834918590669917, 0.9670624325154579, 0.012832069294210286, 0.5599179011969355, 0.44183701402816805, 0.0] | [nan, 0.8497898154944094, 0.8911284588944798, 0.4558941463477496, 0.7715538102169041, 0.5041805687956784, nan, 0.5916295134976238, 0.6664176289411136, 0.25352865518566153, 0.8836310493548173, 0.5013133395398324, 0.0, 0.6053882725832013, 0.05452311472892029, 0.5946321429362145, 0.0, 0.005111887747118043, 0.802846410488875, 0.09434940383618455, 0.47282749487636766, 0.44441582446257716, 0.001977936260307555, nan, 0.14078808047194072, 0.4107132907440319, 0.42875046507529324, 0.0, 0.8865359213150946, 0.7513094837462199, 0.9478585417349973, 0.011508324602586469, 0.19474424489161243, 0.34180230893483227, 0.0] |
| 0.052 | 10.75 | 4300 | 0.5611 | 0.5030 | 0.4222 | 0.8855 | [nan, 0.932148839850802, 0.9568949634271852, 0.5225233644859814, 0.8511642191077112, 0.6031687568751455, nan, 0.7201923889006668, 0.8793424111590834, 0.1743029951530718, 0.9511564170902311, 0.5728369144644768, 0.018116900290928325, 0.7155830753353973, 0.08790515827973262, 0.8945492628434111, 0.0, 0.0, 0.9018928482213427, 0.19409261742744086, 0.6978142148450815, 0.5187192887865012, 0.004106374657802112, nan, 0.18591239873678428, 0.5679096666143298, 0.48372515565797347, 0.0, 0.9465148790940053, 0.8887757437702006, 0.9729464658947179, 0.03061668531642422, 0.3269727082444268, 0.4968253657882534, 0.0] | [nan, 0.8544673632153686, 0.8915093314898118, 0.4824501321862451, 0.7281104549174552, 0.4796578889108752, nan, 0.5955885392390377, 0.6806501724220245, 0.15806082007550856, 0.8869557339277052, 0.5018390970394144, 0.017487873372478938, 0.5719234576047509, 0.08299595141700405, 0.5743453150410742, 0.0, 0.0, 0.7988127196821454, 0.14769412965284384, 0.4636640495670947, 0.44194705232908676, 0.004079706927175844, nan, 0.14373978216098007, 0.4138202592132837, 0.4263783910470499, 0.0, 0.8825003483580057, 0.7459231292221788, 0.9497549296351595, 0.022555788364877087, 0.19864442770898405, 0.36609089056617755, 0.0] |
| 0.0897 | 11.0 | 4400 | 0.5797 | 0.4966 | 0.4137 | 0.8864 | [nan, 0.9266090680496935, 0.9675701132103213, 0.5286179906542056, 0.8135055236213754, 0.6141498963415911, nan, 0.7310209435363914, 0.8153911847037054, 0.24547412900285845, 0.9446611067589995, 0.6598542850086441, 0.0, 0.5599071207430341, 0.13658721150208097, 0.8912937585243879, 0.0, 0.004870002356452753, 0.9252981123672058, 0.10847033891289591, 0.6586394910124014, 0.4795176884335903, 0.01181630258673669, nan, 0.18618701084717837, 0.5559088292248914, 0.4992355587068755, 0.0, 0.9406880436912528, 0.9118086274033954, 0.9573602602596679, 0.003960483235940155, 0.3327033672702148, 0.4804871031358067, 0.0] | [nan, 0.8565575968459415, 0.8928102104157912, 0.43275555700074025, 0.7654702047573079, 0.47074416606474334, nan, 0.6054622841435586, 0.6863363711152467, 0.21403286978508218, 0.8828456438079144, 0.4322928605137194, 0.0, 0.4530688935281837, 0.09709521247982786, 0.5749041704195555, 0.0, 0.004865289040020926, 0.7951008940737603, 0.09395592969976839, 0.4548604901862724, 0.41665801557197046, 0.011736958934517204, nan, 0.1216732767438939, 0.41094472698150475, 0.430227229329769, 0.0, 0.8867287999971621, 0.7466484878252573, 0.9415279772911855, 0.0036285882442284325, 0.19204917359734425, 0.36246293958863207, 0.0] |
| 0.0936 | 11.25 | 4500 | 0.5731 | 0.5011 | 0.4193 | 0.8864 | [nan, 0.9324196276009762, 0.9569564158641476, 0.5246004672897197, 0.8364710008894733, 0.6578250088383729, nan, 0.7038215792022807, 0.8665369834416663, 0.21309913418120055, 0.9410960435297098, 0.49318761834197744, 0.028167151547209734, 0.5808565531475748, 0.11010215664018161, 0.8849288822497889, 0.0, 0.0565548660749352, 0.9216694582309478, 0.11269226311693903, 0.6871508134702065, 0.5262584704743466, 0.01969383764456115, nan, 0.2076616778799945, 0.571397916993772, 0.476856262879174, 0.0, 0.9377623285515337, 0.907275545210859, 0.973954665451519, 0.050830950308757096, 0.38818102379646, 0.4678081196891568, 0.0] | [nan, 0.858380886499719, 0.8914561596816896, 0.45129869803574746, 0.786844102694609, 0.48464472942061587, nan, 0.6094618696875397, 0.6854209198991233, 0.18657623184200503, 0.8857526637100221, 0.394797106941035, 0.023946037099494097, 0.49684424239749303, 0.062077792789589706, 0.5615273263032089, 0.0, 0.055464256368118324, 0.7962485307269822, 0.09311408578835408, 0.4733745462314789, 0.44196131097098196, 0.019312422955759485, nan, 0.14722087024238295, 0.4185961804636968, 0.4181839379748557, 0.0, 0.8886792481667263, 0.7473472827679579, 0.9501856968302422, 0.031198480139267574, 0.2030701847638892, 0.3556589318498682, 0.0] |
| 0.033 | 14.25 | 5700 | 0.5935 | 0.5181 | 0.4292 | 0.8880 | [nan, 0.9232290780535377, 0.9550432923803572, 0.5331775700934579, 0.8469649770868216, 0.6796985960845084, nan, 0.7591958688611619, 0.8564643924657209, 0.21028211607771655, 0.9524029393967549, 0.6051700008232486, 0.0, 0.6860681114551084, 0.21654685332324378, 0.8960592972657011, 0.0, 0.03558243657214673, 0.9155229117646998, 0.140697693670425, 0.711005584058588, 0.5227324249145294, 0.037180848092072186, nan, 0.2080186736235068, 0.5726225990474695, 0.5346435930956549, 0.0, 0.9410130186192625, 0.9154633602859255, 0.9760592954761752, 0.01645064030834266, 0.4608913003718832, 0.4701447510293469, 0.0] | [nan, 0.8573293198744064, 0.8916240779976521, 0.48186665258934697, 0.7676170029872194, 0.4823511054134466, nan, 0.6260715377125842, 0.6901341142647419, 0.1894206549118388, 0.8862935130575381, 0.49201833941300493, 0.0, 0.5435813573180703, 0.1092586700604518, 0.5822497006272321, 0.0, 0.035439538946984116, 0.8016860332567224, 0.11209233305853257, 0.4701563285996208, 0.45173968006036097, 0.03573442156415282, nan, 0.1250185671139278, 0.43006031638093856, 0.44816121842496287, 0.0, 0.8878007481353359, 0.7386750898148962, 0.9519721480330992, 0.013876810802543318, 0.25855582662623405, 0.3720678838361397, 0.0] |
| 0.0548 | 14.5 | 5800 | 0.5902 | 0.5151 | 0.4174 | 0.8882 | [nan, 0.9249082282350853, 0.9577153821767257, 0.5438259345794393, 0.8625692959476665, 0.6265525664540941, nan, 0.7491911978889274, 0.8432461925321441, 0.249306102158333, 0.951930364538209, 0.6013830575450728, 0.0, 0.7704850361197111, 0.20002522386177324, 0.8704780151977658, 0.0, 0.0013615060351373288, 0.9208633435979287, 0.11193893938641368, 0.6970564096712325, 0.4979168453686571, 0.03908039555282418, nan, 0.18904297679527668, 0.5623985973726906, 0.5131506060136048, 0.0, 0.9399214361687687, 0.9123994793332818, 0.9756660223299524, 0.04515831571967342, 0.4303481070535878, 0.49404040291178064, 0.0] | [0.0, 0.8607762479438139, 0.8922939816555095, 0.45337232891467816, 0.7416336434657338, 0.4957900790517687, nan, 0.6227225352163122, 0.6905205002583658, 0.2142437565638406, 0.8883435707029895, 0.4944664432937354, 0.0, 0.5822804554671658, 0.1227364185110664, 0.6143083859952676, 0.0, 0.0013572770933389015, 0.7986526753983755, 0.09318127002721979, 0.47663610300281495, 0.44101175423554057, 0.037423427761281866, nan, 0.14246983588236511, 0.42780903014161104, 0.4432599000899573, 0.0, 0.8868797486244817, 0.7354235169834137, 0.9525392249964284, 0.03855126495647117, 0.2526545610728006, 0.37165059315614124, 0.0] |
| 0.1047 | 14.75 | 5900 | 0.5997 | 0.5159 | 0.4159 | 0.8881 | [nan, 0.9210892560336101, 0.9617335675034919, 0.5317464953271028, 0.8683264925417152, 0.6381114337134347, nan, 0.7416693813461018, 0.862755610380984, 0.2719665271966527, 0.9489817238040484, 0.570408331275212, 0.0005289605924358636, 0.6938596491228071, 0.22575356287047546, 0.8948821198934858, 0.0, 0.011022962322938758, 0.9258684979714679, 0.17593834335005545, 0.6548460763101033, 0.4725421838812847, 0.04097994301357618, nan, 0.22218865851984074, 0.5752629926205056, 0.5366821032106535, 0.0, 0.936931478673554, 0.9021336855923136, 0.9725860103434604, 0.020141738157403954, 0.43632262391026033, 0.4934216774582814, 0.0] | [0.0, 0.8607109591035689, 0.8928295853674818, 0.4670190706507743, 0.7523185639791471, 0.4845338501499847, nan, 0.6282224979925543, 0.6928170564904808, 0.23142272983643541, 0.8873278318309525, 0.46953884728763595, 0.0005215803885773895, 0.5542412002308136, 0.10845198424719782, 0.5869154300379641, 0.0, 0.010907018316536697, 0.793456051943224, 0.12649239962384984, 0.4589822701689517, 0.42143872921678477, 0.03893105461493551, nan, 0.13440869146302972, 0.4245448084603441, 0.46174816509389, 0.0, 0.8878226827336242, 0.7447736277446672, 0.951929183073613, 0.018382891806658124, 0.25878028202964926, 0.37484668044597425, 0.0] |
| 0.1363 | 15.0 | 6000 | 0.6052 | 0.5193 | 0.4155 | 0.8887 | [nan, 0.9281772418265013, 0.9663767872895684, 0.5342161214953272, 0.8447924129735698, 0.6015187219527939, nan, 0.7291077408868643, 0.8812164919106135, 0.23211400637971746, 0.9479408328730995, 0.633386844488351, 0.0030415234065062154, 0.789422084623323, 0.21314163198385672, 0.8954179385594596, 0.0, 0.0066242505171104655, 0.9164480291997693, 0.1360949684597427, 0.6964961019847766, 0.4960711090960334, 0.03860550868763618, nan, 0.19802279280516272, 0.5609541005914063, 0.5661075535662848, 0.0, 0.9376398917610389, 0.9059173441584945, 0.9782134208899593, 0.041454266650089104, 0.43892377410636263, 0.49969692229478707, 0.0] | [0.0, 0.8633930449091305, 0.8952460293484353, 0.42706756384454103, 0.7593774610091322, 0.47377891058119026, nan, 0.6217821374684249, 0.6898326802726141, 0.20124995510218743, 0.8868864734587292, 0.4952526552944963, 0.0028388052332757345, 0.6066698390038862, 0.10356026717323365, 0.5863739068024136, 0.0, 0.00656256484747873, 0.7990222508044155, 0.11130896362146828, 0.4768559231889487, 0.4358850122678166, 0.03689958080794596, nan, 0.14020726799012267, 0.42208907144066693, 0.46374312526092243, 0.0, 0.889531203939725, 0.7432560391610733, 0.952160090573041, 0.03558025789239662, 0.21245893254116582, 0.3712419453581397, 0.0] |
| 0.0804 | 15.25 | 6100 | 0.6205 | 0.5110 | 0.4268 | 0.8877 | [nan, 0.9338093608996594, 0.9656453309931633, 0.5360116822429907, 0.8032054069910557, 0.6059132718486427, nan, 0.7301936126609202, 0.8766143189258433, 0.22587928248891834, 0.9574923159422327, 0.619350456902939, 0.0011901613329806928, 0.7703818369453045, 0.07655442048177576, 0.8504335260115607, 0.0, 0.020239310868483754, 0.9198111518664089, 0.12485306048113379, 0.7319227623900414, 0.495000428884777, 0.03547684228169171, nan, 0.1875600713991487, 0.5538912440466844, 0.5455451906671689, 0.0, 0.9362906678973961, 0.9101525873385327, 0.9729007364591106, 0.02293143105806291, 0.4597532971610884, 0.48345782331547454, 0.0] | [nan, 0.856464729269542, 0.8942823604125036, 0.4347924144963024, 0.7282825257603309, 0.4836585626064097, nan, 0.6163747573889081, 0.6892970262677814, 0.20072891932188414, 0.888225522138808, 0.5066929332727181, 0.0011893749174045195, 0.6024777046931117, 0.05147557666214383, 0.6220782459974346, 0.0, 0.020031615227137266, 0.7981944383082095, 0.09975989363883506, 0.476298280003313, 0.4345003764655265, 0.03419217618393775, nan, 0.1330243066375818, 0.42041703246719714, 0.45861972618049734, 0.0, 0.8892991369897043, 0.7440154875361404, 0.9524152608652374, 0.021443727473549588, 0.22949422815524131, 0.36944182958821886, 0.0] |
| 0.0627 | 15.5 | 6200 | 0.6244 | 0.5088 | 0.4226 | 0.8864 | [nan, 0.9363099227676078, 0.9557843398515034, 0.5258376168224299, 0.8250218829308421, 0.6537759869721766, nan, 0.7370216777925434, 0.8573990605873701, 0.24421061352997225, 0.944441326435564, 0.6453651107269285, 0.0, 0.574406604747162, 0.202547610039097, 0.9001834773007729, 0.0, 0.08682219254837274, 0.9295308868150898, 0.08372655176410206, 0.6741101275248591, 0.4846229490117269, 0.03799094921503995, nan, 0.18766991624330634, 0.5747971947453813, 0.5357957944650019, 0.0, 0.9393777953152539, 0.9065412893119918, 0.9711350422513085, 0.01408833768494343, 0.423479444817005, 0.43092900998340755, 0.0] | [nan, 0.8597774723874926, 0.8905873458192073, 0.4468008441348313, 0.7358981742624778, 0.4808541172889169, nan, 0.6284059730270303, 0.6908370828825592, 0.2063894967177243, 0.8877064612239235, 0.5085303752716421, 0.0, 0.4786515887689728, 0.07696731524968849, 0.5910784632525015, 0.0, 0.08625308882819613, 0.7927730663764808, 0.07191564097641445, 0.4573643410852713, 0.43199170940310977, 0.036449399656946824, nan, 0.12474672799956191, 0.42888997799442735, 0.45055805027110624, 0.0, 0.8884059722861457, 0.7421115189770542, 0.9513756980737487, 0.012830765528906378, 0.21910649885920366, 0.3464300992446894, 0.0] |
| 0.0906 | 15.75 | 6300 | 0.6277 | 0.5077 | 0.4232 | 0.8874 | [nan, 0.9291486180310576, 0.9587963707454238, 0.5362032710280373, 0.8561640657502444, 0.6342631999714216, nan, 0.7070024940578683, 0.8671632585282536, 0.2429056713202701, 0.9448969225566771, 0.5583271589692929, 0.0010579211848717272, 0.6710010319917441, 0.23294236347584815, 0.9067513151912711, 0.0, 0.020684418610740187, 0.9250756288677204, 0.07677279425156046, 0.6503387447644879, 0.5319197495312902, 0.03860550868763618, nan, 0.18569270904846905, 0.5416470403517035, 0.5072344951363807, 0.0, 0.9414354322663816, 0.9037269864207472, 0.9731874869200364, 0.013277591280202247, 0.39988619967892053, 0.4915501377118052, 0.0] | [nan, 0.8573471144295101, 0.892101583588469, 0.4449642809016976, 0.7400242676373722, 0.48442379031764893, nan, 0.6140014998720169, 0.6924650683478314, 0.21178574008524165, 0.8871035802257583, 0.4782118177972077, 0.00099601593625498, 0.5315565729234794, 0.08438028233359221, 0.5871221081515825, 0.0, 0.020441960358122443, 0.7966462351239197, 0.06850549580427845, 0.4652701824381677, 0.4532145005879428, 0.03686906413403052, nan, 0.1488673139158576, 0.4142177021859072, 0.4423489401170992, 0.0, 0.888882064716084, 0.7468477974750474, 0.9515378343546987, 0.012387656809223801, 0.2237051521076804, 0.3671609871108074, 0.0] |
| 0.0798 | 16.0 | 6400 | 0.6190 | 0.5286 | 0.4172 | 0.8869 | [nan, 0.926680657145317, 0.9583277241233551, 0.5414509345794393, 0.8395448350384849, 0.6163055970613488, nan, 0.729106879083869, 0.8763296484319401, 0.26653962467376446, 0.94462856417892, 0.6354449658351856, 0.0, 0.7736326109391125, 0.21591625677891285, 0.8849045268558811, 0.34363411619283063, 0.10316026497002069, 0.9218656576332847, 0.10944717627775294, 0.7009902670312324, 0.5122599776979916, 0.038968657466897594, nan, 0.1919538651654538, 0.5525226356832574, 0.538875717356141, 0.0, 0.9457572762531493, 0.901183634297817, 0.9780756945897774, 0.023115338389489825, 0.3853969802271942, 0.4585034944719744, 0.0] | [0.0, 0.8564334135192141, 0.8938306198574103, 0.41026489890361634, 0.7353951913707414, 0.47809949912634986, nan, 0.6215698951590981, 0.6951678039270297, 0.23431724238396126, 0.8861469346690092, 0.5033256170323759, 0.0, 0.5823655078656049, 0.06725329981143935, 0.60684460181721, 0.013995167136528394, 0.10232968859569384, 0.80017144909153, 0.09089721553798556, 0.48491411153457703, 0.44620918590626235, 0.03736540418921091, nan, 0.14435885256397019, 0.42539846918525115, 0.4624629192971781, 0.0, 0.8873440144497453, 0.7475156108906514, 0.9524719380738451, 0.01972869725160058, 0.22189851053623036, 0.35861227450389216, 0.0] |
| 0.0901 | 16.25 | 6500 | 0.5917 | 0.5200 | 0.4299 | 0.8896 | [nan, 0.9258199912150333, 0.9603701848856869, 0.5186892523364486, 0.8721793039773063, 0.647948819969426, nan, 0.7465402918754385, 0.8815201404374436, 0.21442478975931065, 0.9491194402298921, 0.6424219972009549, 0.00039672044432689763, 0.7311661506707946, 0.1943498549627948, 0.8921543157758005, 0.15327564894932014, 0.07967428586390177, 0.9293905669893677, 0.12015927416016821, 0.6698895330720515, 0.5201315450880439, 0.040560925191351474, nan, 0.17654812577234655, 0.5835060449050087, 0.5231215794021847, 0.0, 0.9400508616673928, 0.8957790972168599, 0.9722137189382809, 0.011464420406979153, 0.38557987360035767, 0.46186248931546336, 0.0] | [nan, 0.866351138156412, 0.8939541036386832, 0.46360912979965524, 0.7507890322152613, 0.48660598648618647, nan, 0.6225598103833513, 0.6911588008377322, 0.19347001326929186, 0.887840691207522, 0.5082802755206722, 0.00036527456471447707, 0.5638678869876641, 0.0832837918175431, 0.6045529063562446, 0.006450606044842116, 0.07925304719241588, 0.7975401296695107, 0.09911841629051973, 0.4713279486495917, 0.45141671341630396, 0.03856573705179283, nan, 0.12819285757013818, 0.4279405668488608, 0.45535903716704923, 0.0, 0.8891564381205536, 0.7534260714863522, 0.9520390401591446, 0.010587073054631307, 0.21693992819738858, 0.3621346900827125, 0.0] |
| 0.0653 | 16.5 | 6600 | 0.6069 | 0.5188 | 0.4270 | 0.8875 | [nan, 0.9290124922971863, 0.9589720557965155, 0.5377873831775701, 0.8408719669628694, 0.6464453726960179, nan, 0.7621001449552638, 0.8857807088295299, 0.2068851236588094, 0.9480908117204224, 0.6177862846793447, 0.0, 0.7590299277605779, 0.18791777021061926, 0.9075956355134117, 0.0, 0.058230565810488834, 0.9227427600247443, 0.14023410983625556, 0.6694696680432973, 0.503836987023172, 0.03972288954690206, nan, 0.19629273650968007, 0.5403046004082274, 0.5528350801001529, 0.0, 0.9376581699207615, 0.901014031526811, 0.9752275577414824, 0.015813440258609972, 0.5130362332093723, 0.44827147941026946, 0.0] | [nan, 0.8616804147441266, 0.8938918495590652, 0.4436595217282778, 0.7588707802865634, 0.4758728817247983, nan, 0.628730181301102, 0.688001179245283, 0.18745190773792766, 0.8877420745200684, 0.49290617097441625, 0.0, 0.5890833366705378, 0.07141145458902469, 0.5823605098793022, 0.0, 0.05773773981671383, 0.7947286013642479, 0.11004573329175761, 0.45664170004530313, 0.44804481905654414, 0.037985842126352344, nan, 0.1362925675933341, 0.4181863845162963, 0.46249953657361065, 0.0, 0.888743313770925, 0.7487091113564399, 0.952506386954324, 0.013629087889199198, 0.23068137169799252, 0.34552559761867596, 0.0] |
| 0.0946 | 16.75 | 6700 | 0.6065 | 0.5143 | 0.4299 | 0.8883 | [nan, 0.9366806425081413, 0.9542471674446813, 0.5289754672897197, 0.8420186089455377, 0.6348452391657562, nan, 0.7554582292706217, 0.8872989514636808, 0.24603338994987364, 0.95065695923075, 0.5426442743064132, 0.0, 0.6714138286893705, 0.17089166351368396, 0.8694632071182697, 0.0, 0.019113450108658656, 0.9217120922782911, 0.13903375883706684, 0.6740194249750934, 0.5118203708015244, 0.03178948544611431, nan, 0.20950157901963476, 0.5704453865075627, 0.5623407413972658, 0.0, 0.9411122045154043, 0.9100815747962009, 0.9743145830094165, 0.0857785237680799, 0.4308967871730781, 0.48645508025274165, 0.0] | [nan, 0.8651947384722789, 0.8930717543250574, 0.4526545293143849, 0.7524401466986995, 0.4887861010723328, nan, 0.6214073859834178, 0.6850152009083916, 0.21553648224427951, 0.8870252213407757, 0.45774305555555556, 0.0, 0.5674414547991802, 0.07292395457725634, 0.6296601151175575, 0.0, 0.018957592126106943, 0.7990749594007368, 0.11146433406780111, 0.4733450112755498, 0.44892412444043184, 0.03086520206129645, nan, 0.14343460931037075, 0.423674789416196, 0.4623610858079796, 0.0, 0.8878002154581935, 0.7401265142858424, 0.9527410923966566, 0.060905676756307404, 0.2440383021821195, 0.37124052036090577, 0.0] |
| 0.0849 | 17.0 | 6800 | 0.6239 | 0.5140 | 0.4277 | 0.8874 | [nan, 0.9305970330977147, 0.9554562297838712, 0.5320046728971962, 0.8489963736857462, 0.6542095907740937, nan, 0.7229605001215142, 0.8664610713099588, 0.28969717055387545, 0.9528962660454964, 0.4980859471474438, 0.0, 0.7176470588235294, 0.20759238239374447, 0.8862034811976359, 0.0, 0.031864477783887096, 0.9191836449171626, 0.12003509991887283, 0.6955934653201726, 0.5165258494982048, 0.04092407397061288, nan, 0.19217355485376905, 0.5895090804417229, 0.503489840686003, 0.0, 0.9408365537389992, 0.904218558679801, 0.9778653391859837, 0.011972108251481619, 0.48105021439167633, 0.4599672061542931, 0.0] | [nan, 0.8636437394553574, 0.8929500733790351, 0.4345244853931126, 0.7599993804727837, 0.46696218452852767, nan, 0.6206510046358703, 0.6983976442693793, 0.2497009515987931, 0.8874926753329814, 0.43156730923551545, 0.0, 0.5706314364255529, 0.11078207026517702, 0.6145475017593244, 0.0, 0.03131271548397056, 0.8003820861050736, 0.10237293400828867, 0.4670301606353909, 0.4459244664251144, 0.038865601952565394, nan, 0.13528195016335132, 0.4290314962729347, 0.43912572952498746, 0.0, 0.8877216097613865, 0.738180307717246, 0.9528556585267144, 0.010467599586006663, 0.24685847767824554, 0.3594826033565289, 0.0] |
| 0.0623 | 17.25 | 6900 | 0.6172 | 0.5119 | 0.4289 | 0.8887 | [nan, 0.9328785695913208, 0.9578098581195325, 0.5317383177570093, 0.8561058685577084, 0.6304827168234579, nan, 0.7396010541574238, 0.8636618114532428, 0.2868801524503915, 0.9518605630620964, 0.4947929529925084, 0.0009256810367627612, 0.7112487100103199, 0.18766553159288688, 0.8812836916282393, 0.0, 0.01743775037310502, 0.9291997485832975, 0.11260120200665574, 0.6826961479212292, 0.49109604568235565, 0.042125258394323704, nan, 0.18536317451599615, 0.5637959909980635, 0.5345549622210897, 0.0, 0.9375897612200349, 0.9104269853176398, 0.9785152351649676, 0.016857308632765553, 0.471885224247597, 0.4792468588859031, 0.0] | [nan, 0.8649230898296971, 0.8934913832615394, 0.4476893494179728, 0.7525214888224941, 0.47904609433387446, nan, 0.6239313691633799, 0.6925921698436251, 0.24592492631130367, 0.887597908356459, 0.43200359389038634, 0.000914435009797518, 0.5808680994521702, 0.10441372535260683, 0.6200052546206393, 0.0, 0.01701975415910659, 0.7967171468468032, 0.09773096322694678, 0.46324810420871126, 0.4373241271317872, 0.03999681722939819, nan, 0.13242564545240523, 0.42549338304851775, 0.45084188297733174, 0.0, 0.888754441570771, 0.7411121674604253, 0.9532170914369867, 0.015176070871411481, 0.2681904277926638, 0.37097400203468917, 0.0] |
| 0.087 | 17.5 | 7000 | 0.5958 | 0.5165 | 0.4323 | 0.8903 | [nan, 0.9358029442279695, 0.9581817889436154, 0.5173516355140186, 0.8565989717971686, 0.667348278703771, nan, 0.7453587599689061, 0.8783982540209707, 0.2597456398359501, 0.9499820544177967, 0.5674240553223018, 0.0, 0.7777605779153767, 0.14150586454786226, 0.8944761966616873, 0.0, 0.04935459377372817, 0.9190064859631538, 0.13516780079140384, 0.6902990697136872, 0.5223050718688348, 0.039750824068383706, nan, 0.1931621584511877, 0.5658763803841524, 0.501960958099754, 0.0, 0.9402762475045608, 0.9019702878007346, 0.9759436269037568, 0.012736230262339924, 0.4254506289499888, 0.5057514930417828, 0.0] | [nan, 0.8672982432946728, 0.8947683772895187, 0.45221659685446863, 0.7622893195763734, 0.4902560352855047, nan, 0.6223052874324095, 0.6932109212359029, 0.22966612333107453, 0.8909383965244376, 0.46376665320952765, 0.0, 0.5938460326215428, 0.08434187777193114, 0.602773750581284, 0.0, 0.048440150074523305, 0.8000458716174862, 0.11235893201211121, 0.479082966550413, 0.45730325325150806, 0.03797907547774101, nan, 0.13441877352901832, 0.42968388297967464, 0.43185024209844064, 0.0, 0.8885136898541194, 0.7448990572757507, 0.9530770665482792, 0.011476439106252173, 0.27282086031874275, 0.3826734258440253, 0.0] |
| 0.0493 | 17.75 | 7100 | 0.6044 | 0.5187 | 0.4325 | 0.8897 | [nan, 0.9240685866116948, 0.9622943353488201, 0.5353317757009346, 0.853514520592762, 0.6373741840672775, nan, 0.7478235165354141, 0.8836883806993405, 0.21751108165209826, 0.9509281473980792, 0.5420474191158311, 0.0, 0.7930340557275541, 0.22083490982469417, 0.8908310060401377, 0.0, 0.0858534286387558, 0.9207060529378274, 0.1411447209390884, 0.681761326480902, 0.5542661781464825, 0.03930387172467736, nan, 0.1931621584511877, 0.5752080389386088, 0.49312002836187985, 0.0, 0.9390712329452002, 0.9078367511279274, 0.9729394719810368, 0.022296821252434828, 0.4083602593021602, 0.5050154471862657, 0.0] | [nan, 0.8665364871726114, 0.892965816013915, 0.4547348114599635, 0.7642413653965189, 0.4857421136997843, nan, 0.6253954022706847, 0.6870444418213474, 0.19578268327242895, 0.8874360309454634, 0.462182366980205, 0.0, 0.6077345881608605, 0.08939146416173167, 0.6003337345442609, 0.0, 0.0839241381075478, 0.8010272384750775, 0.11626241894020498, 0.4793339806464354, 0.46760060321222136, 0.03759519038076152, nan, 0.13732648718299134, 0.4276941756073643, 0.42612058896739236, 0.0, 0.8882284916106664, 0.7388891943971531, 0.9525770980335972, 0.01913195000088903, 0.25993428881875097, 0.3840528604415517, 0.0] |
| 0.0609 | 18.0 | 7200 | 0.6040 | 0.5216 | 0.4331 | 0.8892 | [nan, 0.9227158454479248, 0.9619075870212453, 0.5316542056074767, 0.8629644863429278, 0.6514016366079864, nan, 0.7428586694795917, 0.8715519286425962, 0.2045030862918928, 0.9466966687245525, 0.5841977442990038, 0.005950806664903465, 0.7702786377708978, 0.22789759112120064, 0.8969036175878418, 0.0, 0.10873720315241013, 0.9154051507310187, 0.16112021722213943, 0.6850397847716271, 0.5074181749114659, 0.04494664506397005, nan, 0.19590827955512838, 0.5833045480713874, 0.5258912942323458, 0.0, 0.940934664449275, 0.8882331527914135, 0.9774381724580755, 0.014391396245182146, 0.43477819098132453, 0.5255548975681157, 0.0] | [nan, 0.8627327541149343, 0.8943888286230383, 0.44826842363954605, 0.7637335274754071, 0.48244240753868006, nan, 0.625331534198079, 0.6944541055496749, 0.18654700047236655, 0.8893611006867107, 0.4845014167207183, 0.005280450598451068, 0.5995903120857935, 0.10169968482665466, 0.5777541863213714, 0.0, 0.10625831542319107, 0.8006913747953047, 0.12712606139777924, 0.4783386384345389, 0.44333322627096416, 0.042293134265587215, nan, 0.148674558186062, 0.4270657907089471, 0.4375414792419438, 0.0, 0.8881646826265218, 0.746841100561318, 0.9521439225045568, 0.01294715575036877, 0.24666520631333802, 0.38409386690619945, 0.0] |
| 0.0594 | 18.25 | 7300 | 0.6184 | 0.5184 | 0.4328 | 0.8884 | [nan, 0.9404973526006469, 0.9537239028155554, 0.5275303738317757, 0.8254461719223712, 0.6778219046293364, nan, 0.7472383523016173, 0.8659581534373962, 0.2943783918140768, 0.9543757743601257, 0.5650160533465053, 0.0, 0.7537667698658411, 0.19283642325640055, 0.8840439696044684, 0.0, 0.053517660304244236, 0.9223867864255677, 0.14299077799301313, 0.6933990487935829, 0.5170742093202789, 0.040644728755796417, nan, 0.19868186187010847, 0.5769927251792537, 0.5184906162061554, 0.005237711522965351, 0.936523983230326, 0.8965774712364731, 0.9780089834131267, 0.013717932777984998, 0.4056981446483367, 0.5054707620798113, 0.0] | [nan, 0.8646951423015076, 0.8916557550473645, 0.4456280068092665, 0.7798208455321158, 0.4668012972723517, nan, 0.6275296552822227, 0.693191442493572, 0.24416726797924612, 0.8882015249296725, 0.4734908589168679, 0.0, 0.6010533245556287, 0.10449699289229086, 0.6037870806764625, 0.0, 0.0522041170761608, 0.8024731726060429, 0.12131790023739622, 0.47577199080928667, 0.44858497899759875, 0.038707102952913006, nan, 0.1414826837710464, 0.42720162129381883, 0.43218883327484625, 0.005164878823996822, 0.8886286814206171, 0.7396195316490108, 0.952706951959097, 0.011655776057680246, 0.24503522596165647, 0.3835704565398948, 0.0] |
| 0.0616 | 18.5 | 7400 | 0.6177 | 0.5082 | 0.4272 | 0.8887 | [nan, 0.9388723599691342, 0.9564944313754319, 0.5251226635514019, 0.8417103211148066, 0.6482573931295971, nan, 0.7321895483979944, 0.8855861839920293, 0.2417250093210158, 0.9506753528629689, 0.5459990121017535, 0.0, 0.656656346749226, 0.11275066212637155, 0.8765912190686498, 0.0, 0.07320713219699945, 0.9230813488667519, 0.11395056209539893, 0.703570900866502, 0.5234722511549255, 0.043466115425442764, nan, 0.1751201427982974, 0.5677919087245512, 0.4888879041013937, 0.00040290088638195, 0.9391572478144832, 0.8977247029883181, 0.9766107386702634, 0.018289713622611795, 0.4217114755430917, 0.4846827041793997, 0.0] | [nan, 0.8641564182971058, 0.8921133993393542, 0.4501424016407233, 0.7647378890792713, 0.4769587373086239, nan, 0.6209624017506187, 0.6859163987138264, 0.20884410959394406, 0.8903311694707657, 0.45434149683164926, 0.0, 0.5354933726067747, 0.07164035579774021, 0.6122940826221327, 0.0, 0.06951938138690669, 0.8003213370838211, 0.09716584900998836, 0.4828652554046836, 0.45382137270368395, 0.04121417598135297, nan, 0.13381035314854062, 0.43221966358833797, 0.42342013855571975, 0.00040160642570281126, 0.8881950211846364, 0.7398417591158966, 0.9530845970447974, 0.014810386777414213, 0.2365547272188405, 0.37402163767775426, 0.0] |
| 0.0611 | 18.75 | 7500 | 0.6099 | 0.5177 | 0.4324 | 0.8902 | [nan, 0.9345079533755389, 0.9638643589649342, 0.5356553738317757, 0.8422997643013702, 0.6257334001805861, nan, 0.7471220088972541, 0.8814537173221996, 0.2763370479307345, 0.9466207360377004, 0.6049436074750967, 0.0, 0.7059855521155831, 0.14970361962416445, 0.8782149119958433, 0.0, 0.0958028958186055, 0.9234898906602255, 0.14089637245649764, 0.6854742792438918, 0.5173606430820885, 0.04232080004469523, nan, 0.19343677056158176, 0.5813811692050034, 0.5071015488245331, 0.00040290088638195, 0.9400356746670351, 0.8951641148114238, 0.9764509546423178, 0.03372756848605413, 0.4723729399093662, 0.4701335776577261, 0.0] | [nan, 0.8647971283970989, 0.8977857991553266, 0.4345779290016539, 0.7684148484664771, 0.4855945598832977, nan, 0.6259089780170273, 0.686933822387541, 0.2366516479228013, 0.8888089337936385, 0.48289741736216074, 0.0, 0.5985650538104821, 0.061681563084597796, 0.6094675222969052, 0.0, 0.09345866005976859, 0.7993214394154491, 0.11438556403104944, 0.4762232900770807, 0.45242021144786737, 0.04009209272785011, nan, 0.14212501513256123, 0.43339055459103054, 0.4277836968915307, 0.00040032025620496394, 0.8873505568836287, 0.7422385564869821, 0.9528040989243474, 0.029041136219678652, 0.23652292476444373, 0.3661642120469451, 0.0] |
| 0.0526 | 19.0 | 7600 | 0.6228 | 0.5108 | 0.4297 | 0.8909 | [nan, 0.9405315503656566, 0.9623814025398809, 0.5330642523364486, 0.8317861268903274, 0.6622725273804787, nan, 0.7263120519701678, 0.8674004839398396, 0.27552922656282364, 0.9455175897361646, 0.5819338108174859, 0.0, 0.6111971104231166, 0.16710808424769832, 0.8864145612781711, 0.0, 0.0827900400596968, 0.930233313789279, 0.11843739134753886, 0.6995346374019279, 0.5042107294717365, 0.042153192915805354, nan, 0.18371550185363175, 0.5630920605013869, 0.5005871795439941, 0.0056406124093473006, 0.9407823912509976, 0.8985265242187241, 0.9751204970628252, 0.012990074184591156, 0.42681216850576115, 0.4687243361620586, 0.0] | [nan, 0.8642299686902748, 0.8983701844671692, 0.4505770666371748, 0.7744797343632894, 0.49247659714013137, nan, 0.623426329007179, 0.696151825084343, 0.23867367627796818, 0.8898312419634539, 0.48430193720774883, 0.0, 0.5244863620262132, 0.07708866651151966, 0.5993412927130506, 0.0, 0.08080962968642183, 0.7977044198782267, 0.10166926045153175, 0.47672785170429793, 0.4451483954200063, 0.04006265597621197, nan, 0.1264172335600907, 0.43160647951283304, 0.42598284151975113, 0.00554016620498615, 0.8878311660408268, 0.74270285241124, 0.9536917187049466, 0.011887351052557973, 0.24007269734586106, 0.3689853153957455, 0.0] |
| 0.054 | 19.25 | 7700 | 0.6199 | 0.5112 | 0.4157 | 0.8897 | [nan, 0.9383711032345364, 0.9577791893332354, 0.532998831775701, 0.8352225138198671, 0.6740592830016223, nan, 0.7513879337239024, 0.8669212886084358, 0.21351340154935997, 0.9451751851979368, 0.5077796986910348, 0.0, 0.7028895768833849, 0.18400807163576743, 0.8914236539585634, 0.0, 0.1072709658838007, 0.9291372462420467, 0.11183132171062435, 0.6577470949582549, 0.5160479493180732, 0.04262807978099335, nan, 0.1900590416037347, 0.5664154498351389, 0.5106689415257805, 0.0012087026591458502, 0.9410463493811095, 0.8949234994980861, 0.9775344732695309, 0.011246839902192383, 0.42160986811355644, 0.47790186427705494, 0.0] | [0.0, 0.8647432445871411, 0.896112476860621, 0.45036567465468447, 0.76789556797279, 0.4910576591298745, nan, 0.6249728507663073, 0.6958387758910245, 0.19385049365303245, 0.8887827463711233, 0.4413911550021468, 0.0, 0.5792159197210647, 0.08409221902017291, 0.5936591009850886, 0.0, 0.10176353700943865, 0.7979000623472865, 0.09749989173896098, 0.46787846117983983, 0.45133395403669296, 0.04032236755185625, nan, 0.1322593590552084, 0.4340972401884397, 0.4265909006774516, 0.0011904761904761906, 0.8880726081330668, 0.743872268803543, 0.953516990645358, 0.009541850530053972, 0.23069652626428858, 0.3703797514940341, 0.0] |
| 0.0671 | 19.5 | 7800 | 0.6217 | 0.5094 | 0.4146 | 0.8892 | [nan, 0.9331891438463118, 0.9574927175990591, 0.5350619158878505, 0.834028291700058, 0.6744756411977813, nan, 0.7431025597272566, 0.8738719931679082, 0.2327354074319566, 0.9446516741270925, 0.5379723388490986, 0.0, 0.669969040247678, 0.18249463992937318, 0.8913668247061116, 0.0, 0.09954703741523316, 0.9238793920053711, 0.0888259739399659, 0.6886532573187448, 0.5368212898403323, 0.03941560981060394, nan, 0.18061238500617877, 0.5652404877793479, 0.5268662338525626, 0.0060435132957292505, 0.9420171078199074, 0.9042006331836784, 0.9732816357580515, 0.009485473911061379, 0.3114064500396269, 0.49469125180868956, 0.0] | [0.0, 0.8617017485872825, 0.8957626230741332, 0.4508312580591182, 0.7683050299189929, 0.4878950714613818, nan, 0.624948812708509, 0.6911476098809349, 0.20973251451290761, 0.8882723484572987, 0.46124933827421916, 0.0, 0.5501928047798635, 0.07156988821841923, 0.5965012359764214, 0.0, 0.09680704791974334, 0.7988314631673791, 0.07901907356948229, 0.4711932405689982, 0.46080549284533756, 0.03769502030348365, nan, 0.13494050061551088, 0.43071416464770335, 0.43780380026513477, 0.005912495072920773, 0.8877312783085815, 0.7390862578001592, 0.9533931934816451, 0.008087813065948142, 0.20454363437358178, 0.3783462459982845, 0.0] |
| 0.0512 | 19.75 | 7900 | 0.6300 | 0.5080 | 0.4263 | 0.8887 | [nan, 0.9391756156362827, 0.957153465687716, 0.531875, 0.8363349452907067, 0.6442373192444947, nan, 0.7406369413577534, 0.8858234094036154, 0.26463399478023114, 0.9530349257345309, 0.5036634559973656, 0.0, 0.6101651186790505, 0.1925841846386682, 0.8746996168084692, 0.0, 0.0674207315476658, 0.9178750280173988, 0.11324690806139175, 0.6909895794473874, 0.5175153479480927, 0.042963294038773116, nan, 0.2016476726623644, 0.5813497671010625, 0.5020052735370366, 0.008058017727639, 0.9412167663408764, 0.897734355178538, 0.9747767193057303, 0.01633407932363546, 0.3496514865166941, 0.49998742995692663, 0.0] | [nan, 0.8625082043880324, 0.8957494129402008, 0.43782876705742063, 0.7496431303023787, 0.48514174134060595, nan, 0.6274006504670441, 0.6871961161760971, 0.2302687309626372, 0.8882991958037961, 0.4373045513839996, 0.0, 0.5170981283890153, 0.08045310853530031, 0.6189258899694966, 0.0, 0.06474078543772313, 0.7999986290910134, 0.09763826734899257, 0.47261393142851427, 0.4453505921742053, 0.040873817370043586, nan, 0.1437999373335422, 0.43193558986563074, 0.42771380026430056, 0.007840062720501764, 0.887320160440498, 0.7455157136812743, 0.9534156947680599, 0.013436060460141392, 0.21404224616226705, 0.3788044726196485, 0.0] |
| 0.0535 | 20.0 | 8000 | 0.6326 | 0.5129 | 0.4292 | 0.8889 | [nan, 0.9375849538350132, 0.9591767441005661, 0.5300221962616822, 0.8259597228240738, 0.6596635135950806, nan, 0.7492101575548236, 0.8658110736822129, 0.2693152160404325, 0.9484445354169388, 0.5863176092862435, 0.0, 0.6744066047471621, 0.20784462101147685, 0.883142820029876, 0.0, 0.07781530646977194, 0.9271092315337143, 0.10147518998658918, 0.678314629589805, 0.497267391277709, 0.043242639253589586, nan, 0.18442949334065634, 0.576354215732454, 0.5145022268507234, 0.007252215954875101, 0.939646591781763, 0.9018448093278766, 0.9767371671098836, 0.012725869285921506, 0.41707817675628445, 0.45857891473041446, 0.0] | [nan, 0.8619435562270654, 0.8965635233177199, 0.4407369269775891, 0.7663725441548623, 0.48239880840583743, nan, 0.6305089171096815, 0.6940516487277982, 0.23291892085557667, 0.8902205646366161, 0.48581173260572985, 0.0, 0.5452649144764289, 0.09688988182726792, 0.6044686963431372, 0.0, 0.07672845562038519, 0.7962772336784573, 0.08572747363415112, 0.4690486788330029, 0.43758222088032955, 0.04117568825641708, nan, 0.13543326140878018, 0.4322105242501251, 0.4339781328847771, 0.007067137809187279, 0.8877484539815808, 0.7395098273111396, 0.9530623665306688, 0.010661406489721605, 0.2371072088724584, 0.3613527133617203, 0.0] |
| 0.0467 | 20.25 | 8100 | 0.6268 | 0.5170 | 0.4303 | 0.8886 | [nan, 0.9395265086570245, 0.956900821509961, 0.5300023364485982, 0.8314043061203785, 0.6477819071422676, nan, 0.7464739330448017, 0.8916828770697918, 0.24499772152947513, 0.9451416993546665, 0.549950605087676, 0.0, 0.687203302373581, 0.1523521251103544, 0.8917889848671819, 0.0, 0.08004084518105412, 0.915062008738324, 0.1551515753572079, 0.6881485415176292, 0.526278382981852, 0.04472316889211688, nan, 0.18451187697377455, 0.5879677605066206, 0.549156898805699, 0.007655116841257051, 0.940224100990058, 0.9054685173132715, 0.9762965505479732, 0.02776741680135936, 0.449734804608913, 0.49033782689095345, 0.0] | [nan, 0.8644696780108341, 0.8944980656632955, 0.440104340976533, 0.7641389998117053, 0.4770745740308388, nan, 0.6297284505666034, 0.6844286473848664, 0.21773065311832707, 0.8890008282328474, 0.46004855121119775, 0.0, 0.5750680081177943, 0.06133536430566133, 0.6000371448704572, 0.0, 0.07885979620791951, 0.8006806868947128, 0.1252363801594355, 0.4706566275608475, 0.45444853884552, 0.04241284306453322, nan, 0.13328969033307544, 0.4323046138453842, 0.45063456852976475, 0.007448059584476676, 0.888463849852071, 0.7450400534159003, 0.9535229169698916, 0.021638336996913712, 0.23653075402126864, 0.371412309599829, 0.0] |
| 0.0566 | 20.5 | 8200 | 0.6333 | 0.5121 | 0.4287 | 0.8890 | [nan, 0.9382327153916955, 0.9575874232706021, 0.5340771028037383, 0.8342787755625269, 0.6541523107263972, nan, 0.7406429739787204, 0.8870285144944726, 0.2079415054476159, 0.9479172512933317, 0.5500535111550177, 0.0, 0.7218266253869969, 0.17152226005801488, 0.8854728193803988, 0.0, 0.06920116251669153, 0.9246219694901651, 0.12077186708389212, 0.6759797704055135, 0.5097310892447952, 0.045561204536566285, nan, 0.1750377591651792, 0.5736405505835558, 0.5156101127827879, 0.00684931506849315, 0.9398823262828916, 0.9029458484550981, 0.9765633952545758, 0.017017903767251024, 0.4133390233493873, 0.48943837047548283, 0.0] | [nan, 0.8643736263008805, 0.8951902105356352, 0.44089650982245326, 0.7609522214327652, 0.4848458703216258, nan, 0.6265179780801705, 0.6811413623628766, 0.1878590542487696, 0.887796763348636, 0.46558542236468475, 0.0, 0.5934331650617232, 0.06971498872257535, 0.6047629609093429, 0.0, 0.06810626948746361, 0.7983954196511591, 0.10178182731484066, 0.4720678124715856, 0.44954610542241913, 0.0431413003227001, nan, 0.12741374485267662, 0.432512153928718, 0.4367328553732968, 0.006685017695635077, 0.8879940574069723, 0.7494547941207608, 0.9536808104413358, 0.013580974233357105, 0.23932508912918143, 0.374424364423531, 0.0] |
| 0.0445 | 20.75 | 8300 | 0.6446 | 0.5134 | 0.4274 | 0.8856 | [nan, 0.9405399334753671, 0.9458917035764169, 0.5273960280373832, 0.8282526135651365, 0.6846166732980127, nan, 0.7372879749180856, 0.8847701285761731, 0.2182567629147852, 0.9486374327394391, 0.565180703054252, 0.0, 0.6657378740970072, 0.14856854584436877, 0.8831509384945119, 0.0, 0.06705417223051345, 0.9206841150299712, 0.12586301097700292, 0.6806553405515008, 0.5199094440427905, 0.04444382367730041, nan, 0.17805849237951393, 0.5833280996493432, 0.5248720391748466, 0.007252215954875101, 0.9356924613611799, 0.9010464353082633, 0.9759161892423923, 0.023617845745783083, 0.4449998983925705, 0.5172488924395381, 0.0] | [nan, 0.8666434932726657, 0.8860462410088557, 0.4516813574923211, 0.7742782740775649, 0.4555874524449895, nan, 0.6267926037830955, 0.6896407624091181, 0.1957204153277486, 0.8882182070612508, 0.46149838666308146, 0.0, 0.5469962267350659, 0.06421718273004798, 0.6011771207515888, 0.0, 0.06543011164763292, 0.79986647852113, 0.10526898843730527, 0.4713830230218466, 0.45188595346756627, 0.04203767801939388, nan, 0.1276553855846278, 0.42972506139948413, 0.441923808813104, 0.007075471698113208, 0.8884781477624152, 0.7456781431206605, 0.9535186762124032, 0.016432559463950374, 0.2430653450400151, 0.37996353686275436, 0.0] |
| 0.0523 | 21.0 | 8400 | 0.6334 | 0.5087 | 0.4256 | 0.8903 | [nan, 0.933221079502352, 0.9637948085900169, 0.5297546728971962, 0.8356436570172051, 0.6448230539257773, nan, 0.7465713167832686, 0.8749679745694359, 0.2327354074319566, 0.9465962111947419, 0.5354408495924919, 0.0, 0.6270897832817337, 0.14024467145920042, 0.8939972072481652, 0.009888751545117428, 0.05998481397114654, 0.9259419692666467, 0.10259275815824766, 0.6911110038285254, 0.5109028637249255, 0.044248282026928876, nan, 0.19286008512975422, 0.5704035170356414, 0.5006314949812767, 0.0, 0.9387582194599503, 0.9072224581646499, 0.9775237134023292, 0.011000766712254964, 0.4426019630555386, 0.48799979887931083, 0.0] | [nan, 0.8627899844290204, 0.898045292380419, 0.4429741700156492, 0.7733528050732301, 0.48122023215814036, nan, 0.6285033134107889, 0.6922586045743415, 0.2067303269489062, 0.888126363728484, 0.4555339601828019, 0.0, 0.512374046123361, 0.062230678829257376, 0.5926462119703566, 0.00044943820224719103, 0.05796624750145485, 0.8002256522783529, 0.08795100349163994, 0.4798915494731881, 0.45172247073689, 0.0420103434557751, nan, 0.13598869181318254, 0.4315342675118884, 0.4297071129707113, 0.0, 0.8889534278458562, 0.7430008362351238, 0.9537407288817968, 0.009678051537276564, 0.23964350552896518, 0.3711983987778357, 0.0] |
| 0.0715 | 21.25 | 8500 | 0.6366 | 0.5151 | 0.4287 | 0.8894 | [nan, 0.9370145031789949, 0.9615540919282511, 0.5349906542056074, 0.8234293246215806, 0.6427307923986297, nan, 0.7520265297434068, 0.877506286473407, 0.2407929077426571, 0.9458038701145451, 0.5871614390384458, 0.0, 0.6843137254901961, 0.1972505990667171, 0.8854890563096707, 0.054388133498145856, 0.06252454638284502, 0.9220868993644009, 0.11473699895693637, 0.6793299129694406, 0.505244648130675, 0.04341024638247947, nan, 0.19102018399011397, 0.5753257968283875, 0.5107132569630631, 0.0, 0.9400241164189752, 0.9050651936505135, 0.9789779094546415, 0.014533859670935389, 0.41945579060740923, 0.49523735034665384, 0.0] | [nan, 0.8636190041686136, 0.8961979040679402, 0.44008160621637177, 0.7735135302856915, 0.47552992149378714, nan, 0.6295369121222396, 0.6946632262523146, 0.2137970353477765, 0.8882677382290695, 0.4793581450054608, 0.0, 0.555406650473239, 0.08438545376065609, 0.5980720618958058, 0.002378506946321423, 0.06108823002737203, 0.7997681127577295, 0.0970839783417272, 0.47365876347968716, 0.44734126160727244, 0.041260653691952316, nan, 0.13688871396241267, 0.4310366799265186, 0.42952982613070945, 0.0, 0.8887487055026462, 0.7433844306901257, 0.9533070831491001, 0.012093141544284045, 0.23472485984284203, 0.3736148179836323, 0.0] |
| 0.0856 | 21.5 | 8600 | 0.6332 | 0.5104 | 0.4282 | 0.8891 | [nan, 0.9354302285089335, 0.9598914301992207, 0.5326285046728972, 0.8348257505275104, 0.6418013774311685, nan, 0.7519851631996333, 0.8757413294112065, 0.2316790256431501, 0.9473149777460632, 0.5441672841030707, 0.0, 0.6676986584107327, 0.19119687224114013, 0.8908797168279535, 0.0, 0.05576938182389443, 0.9230974918555517, 0.1150019040050332, 0.6832652332737915, 0.5057945396840957, 0.04410860941952064, nan, 0.19250308938624194, 0.5698984665305908, 0.50395515277747, 0.0040290088638195, 0.9408126308534799, 0.8986623443239606, 0.9766785258336341, 0.01867306975009325, 0.40035359385478264, 0.4951898635172656, 0.0] | [nan, 0.8652175117062043, 0.8949487144681932, 0.4437434730009742, 0.7611759319446382, 0.47865894832193984, nan, 0.6331643341293494, 0.6931150372692965, 0.2068423485899214, 0.8889820786499946, 0.4611976486594917, 0.0, 0.5675936485656636, 0.08603859250851305, 0.595085736597217, 0.0, 0.05421502748930971, 0.799696203512091, 0.09667497111998775, 0.4707822447654798, 0.4485026865801383, 0.041887733446519526, nan, 0.13581323258742614, 0.4329091328339933, 0.42695701145109816, 0.003957261574990107, 0.8887286680634571, 0.7476012702986532, 0.953293396822863, 0.014771330218834523, 0.23667139184546263, 0.3740649694565481, 0.0] |
| 0.0426 | 22.25 | 8900 | 0.6388 | 0.5153 | 0.4321 | 0.8907 | [nan, 0.9365843032790866, 0.9619280328787767, 0.5323341121495327, 0.832118008177492, 0.6589330390083284, nan, 0.7530012289310712, 0.8876025999905109, 0.2356145656406645, 0.9495151391383951, 0.5967728657281633, 0.0, 0.6851909184726522, 0.16698196493883213, 0.8856433071377541, 0.0, 0.046160291152829054, 0.9249913955800083, 0.14087981589099158, 0.6780864102710397, 0.5070796622838727, 0.043214704732107936, nan, 0.19390361114925167, 0.577557963050191, 0.5263122908865303, 0.009266720386784852, 0.9401577082628303, 0.9045005405226523, 0.9759350190099954, 0.014261884039951924, 0.44343514397772765, 0.48190053464583205, 0.0] | [nan, 0.8638275353000382, 0.8975929370440341, 0.44847327680807825, 0.7680456934961463, 0.4896127563059361, nan, 0.6344922288860472, 0.6906430201049919, 0.21071058091286307, 0.8908914064913077, 0.4893922260291313, 0.0, 0.5741773684438103, 0.0915502696722445, 0.6133303348044865, 0.0, 0.045543787135107205, 0.799706519605589, 0.11493135050077327, 0.47303106132662764, 0.44896719237169413, 0.04119511090991399, nan, 0.13769769301273427, 0.43323479414732197, 0.4435750434181777, 0.008966861598440545, 0.8892865533176849, 0.7464162172003368, 0.9537521470921787, 0.012501163611760084, 0.24370386088743454, 0.37164396457569027, 0.0] |
| 0.0544 | 22.5 | 9000 | 0.6275 | 0.5126 | 0.4297 | 0.8902 | [nan, 0.9362912936349177, 0.962198079008307, 0.5305654205607476, 0.829452734049054, 0.6501778145136554, nan, 0.7606583485441561, 0.8785880343502396, 0.2379137495339492, 0.9477460490242178, 0.5748332921709064, 0.0, 0.6779153766769865, 0.15399167612561482, 0.8968792621939339, 0.0, 0.062053255832220565, 0.9268894385323623, 0.11712114438980778, 0.6830882170073133, 0.515366328868847, 0.046119894966199226, nan, 0.1939585335713305, 0.5666535824566913, 0.5097161596242051, 0.0064464141821112, 0.9399919952412273, 0.8983810519232679, 0.9745475341343337, 0.015694289029798168, 0.43490011989676686, 0.47604289457365206, 0.0] | [nan, 0.8648796447130465, 0.8972780355218145, 0.44448663694053075, 0.7723828909831303, 0.4856595115662902, nan, 0.6367705951823552, 0.693571040656192, 0.2097133467226584, 0.8885713515050402, 0.47493538294109644, 0.0, 0.5753448653382964, 0.07485745815707191, 0.589861603519713, 0.0, 0.060925449871465295, 0.7986432258569581, 0.09907840555757864, 0.4719490094091225, 0.45171147174755927, 0.04363338442835245, nan, 0.13716960245479792, 0.4304074481173985, 0.4370060790273556, 0.00631163708086785, 0.8878797422918536, 0.748175287257327, 0.9535688641919678, 0.013234083170064194, 0.2360317635381052, 0.36728912241605793, 0.0] |
| 0.0701 | 22.75 | 9100 | 0.6508 | 0.5132 | 0.4302 | 0.8902 | [nan, 0.9420095059141509, 0.9626173339520694, 0.5384521028037383, 0.8237863722622742, 0.6345902505663333, nan, 0.7493342571861443, 0.8728092233240025, 0.24462488089813164, 0.9462424874982255, 0.5649748909195687, 0.0, 0.6890092879256966, 0.18148568545844368, 0.8978859518087939, 0.0, 0.06417406331003063, 0.926905788482557, 0.10334608188877299, 0.6837845785184178, 0.5068636881640055, 0.044555561763226996, nan, 0.19329946450638474, 0.5856309206050139, 0.5353969555294587, 0.008058017727639, 0.9389002783925003, 0.9000722535382172, 0.9752872750044519, 0.01801255750341912, 0.4159604950313967, 0.4749814242696805, 0.0] | [nan, 0.8667971887550201, 0.8964523921395798, 0.43883250929953793, 0.7789739251684871, 0.4822597903246794, nan, 0.6338344499902683, 0.6949882507612449, 0.21506355392067597, 0.8897027195058894, 0.47454492022058187, 0.0, 0.5744214058332616, 0.09034404821697639, 0.5890266504761296, 0.0, 0.06334315397736083, 0.7983683031468644, 0.08797806890816708, 0.47160166966502776, 0.4468892814313033, 0.04230993686667728, nan, 0.13598253612549263, 0.43447527412791603, 0.442910823939144, 0.007836990595611285, 0.8890303591865106, 0.7479650947941834, 0.9538041433738902, 0.014260666277030976, 0.23761100470137558, 0.3677322595225377, 0.0] |
| 0.0588 | 23.0 | 9200 | 0.6510 | 0.5156 | 0.4306 | 0.8898 | [nan, 0.9386450845503147, 0.9615407102293612, 0.5321039719626168, 0.8252994992682097, 0.646236577683447, nan, 0.7500099107344458, 0.8891493096740523, 0.2356145656406645, 0.948320024675765, 0.5611467852144563, 0.0, 0.7061919504643963, 0.15790137470046664, 0.8929012145223095, 0.0, 0.06268164323305318, 0.9247904360655894, 0.12226195797943674, 0.6746470281016981, 0.5158947761834156, 0.04522599027878652, nan, 0.1926953178635178, 0.5791620871931753, 0.5486694289955906, 0.014504431909750202, 0.9393220200484532, 0.9030809791181759, 0.9764800062837624, 0.014337001118985454, 0.46371598691296306, 0.476005184444432, 0.0] | [nan, 0.8636880663267268, 0.8963496684957871, 0.4393286431075093, 0.7694031519559503, 0.48618816019454364, nan, 0.6323091767222339, 0.6843731284418411, 0.20910695246148756, 0.8901931512501616, 0.4713865836791148, 0.0, 0.594294150853272, 0.07763859605605854, 0.5971841386537511, 0.0, 0.061455525606469004, 0.799169285452784, 0.10285033809898536, 0.4708681854568623, 0.4517361674617981, 0.04280237937871778, nan, 0.1379100253532753, 0.432983014903532, 0.45285296269202635, 0.013830195927775643, 0.8892098290384068, 0.7459428984706676, 0.9536680185853351, 0.012051498108992573, 0.23353802067342136, 0.36591936147117593, 0.0] |
| 0.067 | 23.25 | 9300 | 0.6275 | 0.5128 | 0.4311 | 0.8905 | [nan, 0.9372797021893622, 0.9638153118797325, 0.5312441588785046, 0.8278251787794161, 0.6422768634184979, nan, 0.7515353020360958, 0.8786212459078616, 0.24139359542648825, 0.9490656742280216, 0.5420885815427677, 0.0, 0.7038183694530443, 0.17707150964812712, 0.8822822627784633, 0.0, 0.06734218312256172, 0.9252767953435341, 0.10501829500488419, 0.6879495810858851, 0.5059293320425944, 0.04416447846248394, nan, 0.19404091720444872, 0.5719029674988224, 0.5293478983403869, 0.008058017727639, 0.9393905631474131, 0.9031768115782158, 0.9770540451989742, 0.01500269385386879, 0.4205734723322969, 0.4884174036436365, 0.0] | [nan, 0.8641485198316792, 0.897149130251509, 0.4431534355853929, 0.7712457425720085, 0.4882715323914724, nan, 0.6318488634618116, 0.69528994349434, 0.21461061083181407, 0.890398769558611, 0.46117346313448776, 0.0, 0.5855585129217824, 0.08629909644108427, 0.608788204714529, 0.0, 0.0658912742737101, 0.7992632312490636, 0.09043857647998176, 0.47160302909046053, 0.44752081120336445, 0.04198645598194131, nan, 0.13798894682367646, 0.43383933729163815, 0.44664223751121745, 0.007836990595611285, 0.8889539638268134, 0.7463182889742939, 0.9538402391601662, 0.01284986599932556, 0.2406063988095238, 0.3716953276213374, 0.0] |
| 0.0513 | 23.5 | 9400 | 0.6472 | 0.5144 | 0.4306 | 0.8897 | [nan, 0.938401309042541, 0.9600648179629494, 0.5333469626168225, 0.832045261686822, 0.6450022850427629, nan, 0.7455948939896135, 0.883593490534706, 0.23551099879862464, 0.9506135691239773, 0.5523380258500041, 0.0, 0.6968524251805985, 0.18312523647370413, 0.8904413197376112, 0.0, 0.06160814808996413, 0.9256348385566595, 0.12978691700193712, 0.6801915871922148, 0.5208407367015084, 0.04416447846248394, nan, 0.1951942880681038, 0.5735463442717329, 0.5357736367463606, 0.010072522159548751, 0.9380115028759878, 0.9056712133078884, 0.9770508172388136, 0.017681006258029756, 0.4195573980369445, 0.4783152790270228, 0.0] | [nan, 0.8645788687513425, 0.8959992534632647, 0.44551363683824813, 0.7647562903055005, 0.48403962995403316, nan, 0.6342904860496079, 0.6900071507171095, 0.2094308344078099, 0.8896775711392028, 0.4683431642874594, 0.0, 0.5778034484233945, 0.08829968377523717, 0.5990191205946445, 0.0, 0.060376680693831467, 0.7987594181280973, 0.10780592458123607, 0.47080665968645763, 0.45253694794349175, 0.04196862307876085, nan, 0.13750677087363616, 0.4326699094290159, 0.44833404409174343, 0.009754194303550527, 0.8891644113783483, 0.7456061236432407, 0.9539508207140677, 0.014409173235161254, 0.23587072008774035, 0.3678274990977986, 0.0] |
| 0.0514 | 23.75 | 9500 | 0.6439 | 0.5126 | 0.4298 | 0.8893 | [nan, 0.9377822895762951, 0.9605358193045652, 0.5385, 0.8340916008081545, 0.6271635536295225, nan, 0.7452691324573968, 0.884822318166722, 0.22701851775135673, 0.9488086350085531, 0.537766526714415, 0.0, 0.6666150670794634, 0.20002522386177324, 0.8838085341300254, 0.0, 0.05781164087660042, 0.9238019884436897, 0.11829666054073742, 0.6694155391023081, 0.5142496967171933, 0.043549918989887706, nan, 0.19379376630509407, 0.5833176322813628, 0.5375905696749462, 0.014101531023368252, 0.9389680151020606, 0.9049790133806934, 0.9761012589582619, 0.02082556260101952, 0.414029953870227, 0.5005852053386369, 0.0] | [nan, 0.863411965165267, 0.894931428278196, 0.4402552004737254, 0.7611011560258087, 0.4837046157587918, nan, 0.6314089786667951, 0.6898753375504013, 0.2022476056909819, 0.8895664124405706, 0.4596777031068576, 0.0, 0.5673444293179922, 0.08523215821152193, 0.6083079089415631, 0.0, 0.056674965989886805, 0.7993862287218525, 0.09987768652804473, 0.4710007534678047, 0.450200875376809, 0.041379127295891285, nan, 0.1393342283999368, 0.4316562226473846, 0.44881423656073105, 0.013539651837524178, 0.8892954904899649, 0.7457058534465373, 0.9537927510495554, 0.016624966398544282, 0.24126375122858124, 0.37717282181124784, 0.0] |
| 0.0396 | 24.0 | 9600 | 0.6535 | 0.5114 | 0.4293 | 0.8894 | [nan, 0.9355970923117436, 0.9613217787436595, 0.5374941588785047, 0.8288621111896686, 0.642493049404965, nan, 0.7527694039253403, 0.878070882952982, 0.22343510501677782, 0.9446323372316829, 0.5478719025273731, 0.0, 0.6478844169246646, 0.1983856728465128, 0.8865769305708905, 0.0, 0.07386170240620009, 0.92611209153323, 0.1052169737909568, 0.6754384809956214, 0.5089943264670923, 0.04279568690988323, nan, 0.19272277907455718, 0.5795022766525357, 0.533735126631362, 0.008058017727639, 0.9392768622420797, 0.9018779025514876, 0.9758392561919, 0.014779932860872808, 0.4110833384137048, 0.4900487159002665, 0.0] | [nan, 0.8639528354166897, 0.8950065886128323, 0.44207385913246505, 0.7660355663095111, 0.48472638815638147, nan, 0.632634318964356, 0.6931134697057083, 0.20094633110411506, 0.8905903659512103, 0.4648726053472574, 0.0, 0.5535911115030201, 0.08658556723729839, 0.604755865918694, 0.0, 0.0724857392466211, 0.7980282230680995, 0.09017126154632008, 0.4707250951496855, 0.44738482499754295, 0.04074793201585233, nan, 0.13850404578646142, 0.43285457950063133, 0.4469182529964006, 0.007840062720501764, 0.8885988668670501, 0.746866946124605, 0.9537924535842215, 0.012023161337086795, 0.24114295250810605, 0.37191019096397804, 0.0] |
| 0.0572 | 24.25 | 9700 | 0.6468 | 0.5169 | 0.4312 | 0.8893 | [nan, 0.9401996856733055, 0.9583929096522826, 0.5344988317757009, 0.8275082400146594, 0.6494017622545427, nan, 0.7543103076809053, 0.8711154338852778, 0.24802187331703882, 0.9453213909924968, 0.5670947559068082, 0.0, 0.7040763673890609, 0.20204313280363223, 0.8891017730726765, 0.0, 0.06668761291336109, 0.9255172844843733, 0.1113677378764549, 0.6754443327730256, 0.5202249807001851, 0.044248282026928876, nan, 0.19305231360703007, 0.5827890301983566, 0.55261350291374, 0.014101531023368252, 0.9394324953961886, 0.9048990380903004, 0.9755035483352065, 0.0154197231547101, 0.45343331504399603, 0.47399118420979125, 0.0] | [nan, 0.863689319961114, 0.895499199129711, 0.4429491151299229, 0.765606502579043, 0.48571154804691785, nan, 0.6324972973597951, 0.6956526681114833, 0.21654760828284655, 0.8900625950293436, 0.47545424740738185, 0.0, 0.5803666368933691, 0.08725014977397745, 0.5992339680455242, 0.0, 0.06544361365913821, 0.7982999807741021, 0.09452243441114062, 0.4717078672807595, 0.4521680319629779, 0.04200588718873478, nan, 0.13927135130851676, 0.4339583670272156, 0.4507663389242337, 0.01348747591522158, 0.8884945203133995, 0.7465496843182982, 0.9537005332798949, 0.012399112712579277, 0.24028127759471044, 0.3662329926099869, 0.0] |
| 0.1 | 24.5 | 9800 | 0.6434 | 0.5135 | 0.4300 | 0.8895 | [nan, 0.9377224102212196, 0.9606645248290818, 0.5361588785046729, 0.8331230894215592, 0.6375564947567199, nan, 0.7494747310743753, 0.8814869288798216, 0.23789303616554125, 0.9491298161249899, 0.5208281880299662, 0.0, 0.7291537667698659, 0.1923319460209358, 0.8872670000649477, 0.0, 0.058754221977849345, 0.9251466166261608, 0.10029967383565953, 0.684280516653427, 0.5108906098741529, 0.04338231186099782, nan, 0.1931896196622271, 0.581302663945151, 0.5429748953047794, 0.014101531023368252, 0.939044218900316, 0.9053540699149504, 0.9762874046608516, 0.016517986655062374, 0.4174033205307972, 0.4717006430275368, 0.0] | [nan, 0.8641608155359141, 0.8958643122776131, 0.4417664033758718, 0.7644541831979321, 0.4846296892790795, nan, 0.6335999382179972, 0.6905137105945841, 0.21054850773630565, 0.8890883354259757, 0.44958072768618534, 0.0, 0.6023700925018117, 0.08546290069491146, 0.6030192343768966, 0.0, 0.057282891713891865, 0.7981027891830667, 0.08634672672073433, 0.470738722708764, 0.44815859378883993, 0.04122753457750405, nan, 0.1376066035521477, 0.4340720968586592, 0.4532255678035067, 0.01352918438345574, 0.888563607775072, 0.7458284701692807, 0.9538944088343424, 0.01350879014029907, 0.2349899322716456, 0.3667384437299315, 0.0] |
| 0.0547 | 24.75 | 9900 | 0.6482 | 0.5155 | 0.4313 | 0.8898 | [nan, 0.9397340904212859, 0.9603330836947732, 0.5307733644859813, 0.8309005858255233, 0.6429241895489165, nan, 0.7515697741559071, 0.8821369265075675, 0.23520029827250508, 0.948613379528076, 0.5628961883592657, 0.0, 0.7383384932920537, 0.19170134947660486, 0.8888176268104176, 0.0, 0.06747309716440185, 0.9241314709843229, 0.1176757893342605, 0.6804680836745651, 0.509839842170402, 0.04290742499580982, nan, 0.19313469724014828, 0.5775631967341812, 0.5366821032106535, 0.009669621273166801, 0.9403802717370998, 0.9035215326574961, 0.9734618635336802, 0.012358054623067678, 0.41701721229856326, 0.48626373626373626, 0.0] | [nan, 0.8640778611527823, 0.8958137823018933, 0.4460626314967881, 0.7641756445447411, 0.4858917928580605, nan, 0.6328187132466054, 0.6908867956078256, 0.20850548118768247, 0.8893168906380365, 0.47044860327507915, 0.0, 0.6030682345007797, 0.08536927829261444, 0.6011740028114567, 0.0, 0.06583048076431819, 0.7992350659678636, 0.09887388797306791, 0.4713607906006725, 0.44755617108819296, 0.040873892333484124, nan, 0.13801020408163264, 0.4335135793399971, 0.45185060816356987, 0.0093603744149766, 0.8886009280250379, 0.7464543006342957, 0.9536265277974683, 0.010431767147039596, 0.2352570275599578, 0.3719794479055262, 0.0] |
| 0.0627 | 25.0 | 10000 | 0.6463 | 0.5168 | 0.4317 | 0.8895 | [nan, 0.9354022848098984, 0.9601675641402632, 0.5369719626168225, 0.8337939300328185, 0.6403441237446122, nan, 0.7582108280375539, 0.8834986003700717, 0.24187000289987157, 0.948116751458167, 0.5520704700749156, 0.0, 0.7381320949432405, 0.19649388321352, 0.888963759173865, 0.0, 0.07624433796769041, 0.9231866922167408, 0.1182221559959602, 0.6801081993642044, 0.5121910497873957, 0.04447175819878205, nan, 0.19406837841548813, 0.5788088135238394, 0.5379894086104895, 0.008460918614020952, 0.9391146435745414, 0.9050362370798539, 0.9765451034803329, 0.015450806083965353, 0.41939482614968804, 0.4941702933568719, 0.0] | [nan, 0.8640678937775673, 0.895377615265056, 0.442350332594235, 0.7643727945096741, 0.4849891658522591, nan, 0.6340492784936108, 0.6910083381883088, 0.21346568681218236, 0.8895978581938467, 0.46446072065520405, 0.0, 0.601404187337089, 0.08586860670194003, 0.6029780227646933, 0.0, 0.07410800631139614, 0.7995575849393181, 0.09964415294445995, 0.4716975388811325, 0.4492564945882909, 0.04216548363174065, nan, 0.13932260862707987, 0.43292556418938755, 0.4516033033256454, 0.00821917808219178, 0.8889508587805682, 0.7461158390782254, 0.954070468766836, 0.012555965083260888, 0.23512657506778772, 0.3742610137901782, 0.0] |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
nielsr/segformer-finetuned-sidewalk |
# Segformer-b0, fine-tuned on Sidewalk
This repository contains the weights of a `SegformerForSemanticSegmentation` model.
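A minimal inference sketch for this checkpoint (assuming the standard Transformers SegFormer API, that the repository id is `nielsr/segformer-finetuned-sidewalk`, and that a feature extractor and `id2label` mapping were saved with the weights; this is illustrative, not the exact code used for this repository):

```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

ckpt = "nielsr/segformer-finetuned-sidewalk"  # this repository
# if no preprocessor config was pushed here, nvidia/mit-b0's feature extractor can be used instead
feature_extractor = SegformerFeatureExtractor.from_pretrained(ckpt)
model = SegformerForSemanticSegmentation.from_pretrained(ckpt)

image = Image.open("street.jpg")  # placeholder path: any RGB street-scene image
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, height/4, width/4)

# the logits come out at 1/4 of the input resolution, so upsample before taking the argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]  # (height, width) map of label ids

# label ids map back to the class names listed below, e.g. 2 -> "flat-sidewalk"
print(model.config.id2label[int(segmentation[0, 0])])
```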
The model was trained using the example script. | [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
nielsr/sidewalk-semantic-demo |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sidewalk-semantic-demo
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unspecified dataset (the Trainer did not record the dataset name).
It achieves the following results on the evaluation set:
- Loss: 1.7591
- Mean Iou: 0.1135
- Mean Accuracy: 0.1608
- Overall Accuracy: 0.6553
- Per Category Iou: [nan, 0.38512238586129177, 0.723869670479682, 3.007496184239216e-05, 0.04329871029371091, 0.0006725029325634934, nan, 0.0, 0.0, 0.0, 0.5420712902837528, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4939727049879936, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.5630706428968278, 0.2911849732223226, 0.5899473333836793, 0.0, 0.0, 1.723395088323998e-05, 0.0]
- Per Category Accuracy: [nan, 0.6995968221991989, 0.8870903675336742, 3.007496184239216e-05, 0.043772127605383085, 0.0006731284624713075, nan, 0.0, 0.0, 0.0, 0.8074880705716012, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8257698903048035, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.9746918606102934, 0.3057553223999185, 0.6001142624744604, 0.0, 0.0, 1.7275073149137866e-05, 0.0]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
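Expressed in code, the hyperparameters above correspond roughly to the following `TrainingArguments` (a hedged sketch; the actual training script is not part of this card, and `output_dir` plus the model/dataset objects are placeholders):

```python
from transformers import Trainer, TrainingArguments

# Hedged reconstruction of the settings listed above; not the exact script used.
training_args = TrainingArguments(
    output_dir="sidewalk-semantic-demo",  # placeholder output directory
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,        # 4 x 4 = total train batch size 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
    # the default optimizer (AdamW, betas=(0.9, 0.999), eps=1e-08) matches the card
)

# trainer = Trainer(
#     model=model,                      # a SegformerForSemanticSegmentation instance
#     args=training_args,
#     train_dataset=train_ds,           # preprocessed dataset splits (placeholders)
#     eval_dataset=eval_ds,
#     compute_metrics=compute_metrics,  # e.g. a mean-IoU function
# )
# trainer.train()
```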
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 2.3589 | 1.0 | 53 | 1.9020 | 0.1014 | 0.1491 | 0.6442 | [0.0, 0.3612513514640175, 0.6751826209974531, 0.0, 0.030376890155720412, 0.0008039971158010613, nan, 2.235273737210043e-05, 0.0, 0.0, 0.5369771616036864, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4924640887729494, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.5705205266526164, 0.07944837262494953, 0.5986634961452602, 0.0, 0.0, 0.00011218284533795612, 0.0] | [nan, 0.523053840654786, 0.9469253318772407, 0.0, 0.030589314463641413, 0.0008054985216698098, nan, 2.2371239534454507e-05, 0.0, 0.0, 0.8528562962514211, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7547252442297603, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.9698553453075568, 0.08054302832748386, 0.6107703679316233, 0.0, 0.0, 0.00011444735961303836, 0.0] |
| 2.1214 | 2.0 | 106 | 1.7800 | 0.1158 | 0.1627 | 0.6622 | [nan, 0.3912271306195065, 0.7114203717790301, 0.0001503748092119608, 0.04491329385698775, 0.0008871978593462472, nan, 1.3975654410017748e-06, 0.0, 0.0, 0.5167420849064452, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.49676247687874375, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.5965069148571663, 0.3115535309159788, 0.636016670211685, 0.0, 0.0, 0.0, 0.0] | [nan, 0.6306423988442347, 0.9198450793635351, 0.0001503748092119608, 0.045391490029595895, 0.0008886008009872551, nan, 1.3982024709034067e-06, 0.0, 0.0, 0.8587918189550764, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8103648148965297, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.9600035488335386, 0.3307256120335472, 0.6505175702762634, 0.0, 0.0, 0.0, 0.0] |
| 1.9022 | 3.0 | 159 | 1.7591 | 0.1135 | 0.1608 | 0.6553 | [nan, 0.38512238586129177, 0.723869670479682, 3.007496184239216e-05, 0.04329871029371091, 0.0006725029325634934, nan, 0.0, 0.0, 0.0, 0.5420712902837528, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4939727049879936, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.5630706428968278, 0.2911849732223226, 0.5899473333836793, 0.0, 0.0, 1.723395088323998e-05, 0.0] | [nan, 0.6995968221991989, 0.8870903675336742, 3.007496184239216e-05, 0.043772127605383085, 0.0006731284624713075, nan, 0.0, 0.0, 0.0, 0.8074880705716012, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8257698903048035, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.9746918606102934, 0.3057553223999185, 0.6001142624744604, 0.0, 0.0, 1.7275073149137866e-05, 0.0] |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
hufanyoung/segformer-b0-finetuned-segments-sidewalk-2 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9327
- Mean Iou: 0.0763
- Mean Accuracy: 0.1260
- Overall Accuracy: 0.5923
- Per Category Iou: [nan, 0.15598158400203022, 0.6233750625153907, 0.0037560777123078824, 0.026995519273962765, 0.027599075064035524, 0.0, 0.0010671752114502803, 0.0, 0.0, 0.503652156236298, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.42226922942999406, 0.0, 0.0005751844669974061, 0.0, 0.0, 0.0, 0.015053303500921295, 0.0, 0.0, 0.0, 0.5380260834627074, 0.2004924888392474, 0.07113330974397604, 7.792680075848753e-05, 0.000328515111695138, 0.0025085129486024, 0.0]
- Per Category Accuracy: [nan, 0.17282441039529764, 0.9228726118961177, 0.00408103876916878, 0.028255152590055656, 0.029544523907019265, nan, 0.0010791707371488259, 0.0, 0.0, 0.8681646650418041, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7122996003019028, 0.0, 0.0005801259615003622, 0.0, 0.0, nan, 0.02304960072549563, 0.0, 0.0, 0.0, 0.9348363685365858, 0.2596289024956107, 0.07122958643730157, 8.48216389425569e-05, 0.0005356047133214773, 0.0026059641588056346, 0.0]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
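A hedged sketch of how the segments/sidewalk-semantic dataset is commonly loaded and preprocessed for SegFormer fine-tuning; the single `train` split, the 90/10 train/validation split, and the `pixel_values`/`label` column names are assumptions rather than facts recorded in this card:

```python
from datasets import load_dataset
from transformers import SegformerFeatureExtractor

# The dataset ships a single "train" split (accessing it may require accepting its terms on the Hub);
# a 90/10 train/validation split is a common choice, not one documented in this card.
ds = load_dataset("segments/sidewalk-semantic", split="train").train_test_split(test_size=0.1, seed=42)
train_ds, eval_ds = ds["train"], ds["test"]

feature_extractor = SegformerFeatureExtractor()  # defaults: 512x512 inputs, labels kept as-is

def preprocess(batch):
    # assumed column names: "pixel_values" holds the image, "label" the segmentation mask
    images = [image.convert("RGB") for image in batch["pixel_values"]]
    return feature_extractor(images, batch["label"], return_tensors="pt")

train_ds.set_transform(preprocess)
eval_ds.set_transform(preprocess)
```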
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.05
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 3.0624 | 0.03 | 10 | 3.1628 | 0.0726 | 0.1219 | 0.5758 | [nan, 0.0878087898079964, 0.611982872765419, 0.0001999765816897758, 0.006930751650791711, 0.0208104329339671, 0.0, 0.0010631316774049914, 0.0, 0.0, 0.4839157481183621, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.39292052415275885, 0.0, 0.0003268797082673576, 0.0011424188270622699, 0.0, 0.0, 0.004317032040472175, 3.142508260307427e-05, 0.0, 0.0, 0.5537894233680722, 0.28184052017073197, 0.015966383939961543, 0.0002995587926924772, 0.0005713078253519804, 0.0035316933149879015, 0.0] | [nan, 0.09656561651317118, 0.9239613003877697, 0.00021265611687132485, 0.007163978434475801, 0.0222089828684614, nan, 0.0010774805715464, 0.0, 0.0, 0.8583517795809614, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.705533848895072, 0.0, 0.00033222625115695, 0.0011495555325644448, 0.0, nan, 0.008061062548807214, 3.244014792707455e-05, 0.0, 0.0, 0.8715627360179777, 0.3828074002074446, 0.01597238073499201, 0.0003298619292210546, 0.0011388100215281895, 0.003805890022240969, 0.0] |
| 2.6259 | 0.05 | 20 | 2.9327 | 0.0763 | 0.1260 | 0.5923 | [nan, 0.15598158400203022, 0.6233750625153907, 0.0037560777123078824, 0.026995519273962765, 0.027599075064035524, 0.0, 0.0010671752114502803, 0.0, 0.0, 0.503652156236298, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.42226922942999406, 0.0, 0.0005751844669974061, 0.0, 0.0, 0.0, 0.015053303500921295, 0.0, 0.0, 0.0, 0.5380260834627074, 0.2004924888392474, 0.07113330974397604, 7.792680075848753e-05, 0.000328515111695138, 0.0025085129486024, 0.0] | [nan, 0.17282441039529764, 0.9228726118961177, 0.00408103876916878, 0.028255152590055656, 0.029544523907019265, nan, 0.0010791707371488259, 0.0, 0.0, 0.8681646650418041, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7122996003019028, 0.0, 0.0005801259615003622, 0.0, 0.0, nan, 0.02304960072549563, 0.0, 0.0, 0.0, 0.9348363685365858, 0.2596289024956107, 0.07122958643730157, 8.48216389425569e-05, 0.0005356047133214773, 0.0026059641588056346, 0.0] |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
nielsr/segformer-trainer-test |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-trainer-test
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3886
- Mean Iou: 0.1391
- Mean Accuracy: 0.1905
- Overall Accuracy: 0.7192
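Like the other sidewalk checkpoints in this collection, fine-tuning presumably starts from the pre-trained MiT-b0 encoder with a decode head re-initialized for the 35 classes listed at the end of this card; a minimal sketch of that setup (the exact training script is not included here):

```python
from transformers import SegformerForSemanticSegmentation

# the 35 sidewalk classes, in the order listed at the end of this card
labels = [
    "unlabeled", "flat-road", "flat-sidewalk", "flat-crosswalk", "flat-cyclinglane",
    "flat-parkingdriveway", "flat-railtrack", "flat-curb", "human-person", "human-rider",
    "vehicle-car", "vehicle-truck", "vehicle-bus", "vehicle-tramtrain", "vehicle-motorcycle",
    "vehicle-bicycle", "vehicle-caravan", "vehicle-cartrailer", "construction-building",
    "construction-door", "construction-wall", "construction-fenceguardrail", "construction-bridge",
    "construction-tunnel", "construction-stairs", "object-pole", "object-trafficsign",
    "object-trafficlight", "nature-vegetation", "nature-terrain", "sky", "void-ground",
    "void-dynamic", "void-static", "void-unclear",
]
id2label = dict(enumerate(labels))
label2id = {name: idx for idx, name in id2label.items()}

# the pre-trained encoder is reused; the decode head is randomly initialized for 35 classes
model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0",
    id2label=id2label,
    label2id=label2id,
)
```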
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
nielsr/segformer-trainer-test-bis |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-trainer-test-bis
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3784
- Mean Iou: 0.1424
- Mean Accuracy: 0.1896
- Overall Accuracy: 0.7288
- Accuracy Unlabeled: nan
- Accuracy Flat-road: 0.6651
- Accuracy Flat-sidewalk: 0.9129
- Accuracy Flat-crosswalk: 0.0
- Accuracy Flat-cyclinglane: 0.5829
- Accuracy Flat-parkingdriveway: 0.0184
- Accuracy Flat-railtrack: 0.0
- Accuracy Flat-curb: 0.0
- Accuracy Human-person: 0.0
- Accuracy Human-rider: 0.0
- Accuracy Vehicle-car: 0.8322
- Accuracy Vehicle-truck: 0.0
- Accuracy Vehicle-bus: 0.0
- Accuracy Vehicle-tramtrain: 0.0
- Accuracy Vehicle-motorcycle: 0.0
- Accuracy Vehicle-bicycle: 0.0
- Accuracy Vehicle-caravan: 0.0
- Accuracy Vehicle-cartrailer: 0.0
- Accuracy Construction-building: 0.8930
- Accuracy Construction-door: 0.0
- Accuracy Construction-wall: 0.0025
- Accuracy Construction-fenceguardrail: 0.0
- Accuracy Construction-bridge: 0.0
- Accuracy Construction-tunnel: 0.0
- Accuracy Construction-stairs: 0.0
- Accuracy Object-pole: 0.0008
- Accuracy Object-trafficsign: 0.0
- Accuracy Object-trafficlight: 0.0
- Accuracy Nature-vegetation: 0.8552
- Accuracy Nature-terrain: 0.8507
- Accuracy Sky: 0.8336
- Accuracy Void-ground: 0.0
- Accuracy Void-dynamic: 0.0
- Accuracy Void-static: 0.0
- Accuracy Void-unclear: 0.0
- Iou Unlabeled: nan
- Iou Flat-road: 0.4712
- Iou Flat-sidewalk: 0.7651
- Iou Flat-crosswalk: 0.0
- Iou Flat-cyclinglane: 0.5216
- Iou Flat-parkingdriveway: 0.0178
- Iou Flat-railtrack: 0.0
- Iou Flat-curb: 0.0
- Iou Human-person: 0.0
- Iou Human-rider: 0.0
- Iou Vehicle-car: 0.5696
- Iou Vehicle-truck: 0.0
- Iou Vehicle-bus: 0.0
- Iou Vehicle-tramtrain: 0.0
- Iou Vehicle-motorcycle: 0.0
- Iou Vehicle-bicycle: 0.0
- Iou Vehicle-caravan: 0.0
- Iou Vehicle-cartrailer: 0.0
- Iou Construction-building: 0.4716
- Iou Construction-door: 0.0
- Iou Construction-wall: 0.0024
- Iou Construction-fenceguardrail: 0.0
- Iou Construction-bridge: 0.0
- Iou Construction-tunnel: 0.0
- Iou Construction-stairs: 0.0
- Iou Object-pole: 0.0008
- Iou Object-trafficsign: 0.0
- Iou Object-trafficlight: 0.0
- Iou Nature-vegetation: 0.6813
- Iou Nature-terrain: 0.5513
- Iou Sky: 0.7873
- Iou Void-ground: 0.0
- Iou Void-dynamic: 0.0
- Iou Void-static: 0.0
- Iou Void-unclear: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
nielsr/segformer-finetuned-sidewalk-10k-steps |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-finetuned-sidewalk-10k-steps
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6350
- Mean Iou: 0.3022
- Mean Accuracy: 0.3724
- Overall Accuracy: 0.8117
- Accuracy Unlabeled: nan
- Accuracy Flat-road: 0.8240
- Accuracy Flat-sidewalk: 0.8308
- Accuracy Flat-crosswalk: 0.7789
- Accuracy Flat-cyclinglane: 0.9052
- Accuracy Flat-parkingdriveway: 0.3152
- Accuracy Flat-railtrack: nan
- Accuracy Flat-curb: 0.4703
- Accuracy Human-person: 0.6444
- Accuracy Human-rider: 0.0
- Accuracy Vehicle-car: 0.9424
- Accuracy Vehicle-truck: 0.0
- Accuracy Vehicle-bus: 0.0
- Accuracy Vehicle-tramtrain: 0.0
- Accuracy Vehicle-motorcycle: 0.0
- Accuracy Vehicle-bicycle: 0.7116
- Accuracy Vehicle-caravan: 0.0
- Accuracy Vehicle-cartrailer: 0.0
- Accuracy Construction-building: 0.8716
- Accuracy Construction-door: 0.0
- Accuracy Construction-wall: 0.4736
- Accuracy Construction-fenceguardrail: 0.5408
- Accuracy Construction-bridge: 0.0
- Accuracy Construction-tunnel: nan
- Accuracy Construction-stairs: 0.0048
- Accuracy Object-pole: 0.4202
- Accuracy Object-trafficsign: 0.0754
- Accuracy Object-trafficlight: 0.0
- Accuracy Nature-vegetation: 0.9437
- Accuracy Nature-terrain: 0.8196
- Accuracy Sky: 0.9525
- Accuracy Void-ground: 0.0
- Accuracy Void-dynamic: 0.1041
- Accuracy Void-static: 0.2872
- Accuracy Void-unclear: 0.0
- Iou Unlabeled: nan
- Iou Flat-road: 0.7413
- Iou Flat-sidewalk: 0.7520
- Iou Flat-crosswalk: 0.7629
- Iou Flat-cyclinglane: 0.4453
- Iou Flat-parkingdriveway: 0.2976
- Iou Flat-railtrack: nan
- Iou Flat-curb: 0.3701
- Iou Human-person: 0.4953
- Iou Human-rider: 0.0
- Iou Vehicle-car: 0.7962
- Iou Vehicle-truck: 0.0
- Iou Vehicle-bus: 0.0
- Iou Vehicle-tramtrain: 0.0
- Iou Vehicle-motorcycle: 0.0
- Iou Vehicle-bicycle: 0.4152
- Iou Vehicle-caravan: 0.0
- Iou Vehicle-cartrailer: 0.0
- Iou Construction-building: 0.6712
- Iou Construction-door: 0.0
- Iou Construction-wall: 0.3749
- Iou Construction-fenceguardrail: 0.4613
- Iou Construction-bridge: 0.0
- Iou Construction-tunnel: nan
- Iou Construction-stairs: 0.0048
- Iou Object-pole: 0.2337
- Iou Object-trafficsign: 0.0753
- Iou Object-trafficlight: 0.0
- Iou Nature-vegetation: 0.8324
- Iou Nature-terrain: 0.7277
- Iou Sky: 0.9234
- Iou Void-ground: 0.0
- Iou Void-dynamic: 0.0913
- Iou Void-static: 0.1997
- Iou Void-unclear: 0.0
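The Mean IoU, Mean Accuracy, Overall Accuracy and per-category values above are the standard semantic-segmentation metrics. A hedged sketch of how such numbers are typically computed with the `mean_iou` metric (the exact evaluation code for this run is not shown; treating label 0, "unlabeled", as the ignore index is an assumption):

```python
import evaluate  # older scripts used datasets.load_metric("mean_iou") instead
import torch

metric = evaluate.load("mean_iou")

def compute_metrics(eval_pred):
    logits, labels = eval_pred  # logits: (N, 35, H/4, W/4), labels: (N, H, W)
    # SegFormer logits are 1/4 resolution, so upsample before comparing with the ground-truth masks
    upsampled = torch.nn.functional.interpolate(
        torch.from_numpy(logits), size=labels.shape[-2:], mode="bilinear", align_corners=False
    )
    preds = upsampled.argmax(dim=1).numpy()
    # returns mean_iou, mean_accuracy, overall_accuracy, per_category_iou, per_category_accuracy
    return metric.compute(
        predictions=preds,
        references=labels,
        num_labels=35,
        ignore_index=0,      # assumption: label 0 ("unlabeled") is ignored, hence its nan row above
        reduce_labels=False,
    )
```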
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- training_steps: 10000
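The `polynomial` scheduler over 10,000 optimizer steps can be reproduced outside the `Trainer` with `get_polynomial_decay_schedule_with_warmup`; a hedged sketch in which the warmup length, `power` and `lr_end` are library defaults rather than values stated in this card:

```python
import torch
from transformers import SegformerForSemanticSegmentation, get_polynomial_decay_schedule_with_warmup

# stand-in model: MiT-b0 backbone with a freshly initialized 35-class decode head
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/mit-b0", num_labels=35)

# matches the card: Adam(W) with betas=(0.9, 0.999), eps=1e-08, lr=6e-05
optimizer = torch.optim.AdamW(model.parameters(), lr=6e-5, betas=(0.9, 0.999), eps=1e-8)

# Trainer's lr_scheduler_type="polynomial" corresponds to this schedule; power=1.0 and
# lr_end=1e-7 are the library defaults (assumed, the card does not state them)
lr_scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,          # no warmup is listed in the card
    num_training_steps=10_000,   # training_steps from the hyperparameters above
)
```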
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Flat-road | Accuracy Flat-sidewalk | Accuracy Flat-crosswalk | Accuracy Flat-cyclinglane | Accuracy Flat-parkingdriveway | Accuracy Flat-railtrack | Accuracy Flat-curb | Accuracy Human-person | Accuracy Human-rider | Accuracy Vehicle-car | Accuracy Vehicle-truck | Accuracy Vehicle-bus | Accuracy Vehicle-tramtrain | Accuracy Vehicle-motorcycle | Accuracy Vehicle-bicycle | Accuracy Vehicle-caravan | Accuracy Vehicle-cartrailer | Accuracy Construction-building | Accuracy Construction-door | Accuracy Construction-wall | Accuracy Construction-fenceguardrail | Accuracy Construction-bridge | Accuracy Construction-tunnel | Accuracy Construction-stairs | Accuracy Object-pole | Accuracy Object-trafficsign | Accuracy Object-trafficlight | Accuracy Nature-vegetation | Accuracy Nature-terrain | Accuracy Sky | Accuracy Void-ground | Accuracy Void-dynamic | Accuracy Void-static | Accuracy Void-unclear | Iou Unlabeled | Iou Flat-road | Iou Flat-sidewalk | Iou Flat-crosswalk | Iou Flat-cyclinglane | Iou Flat-parkingdriveway | Iou Flat-railtrack | Iou Flat-curb | Iou Human-person | Iou Human-rider | Iou Vehicle-car | Iou Vehicle-truck | Iou Vehicle-bus | Iou Vehicle-tramtrain | Iou Vehicle-motorcycle | Iou Vehicle-bicycle | Iou Vehicle-caravan | Iou Vehicle-cartrailer | Iou Construction-building | Iou Construction-door | Iou Construction-wall | Iou Construction-fenceguardrail | Iou Construction-bridge | Iou Construction-tunnel | Iou Construction-stairs | Iou Object-pole | Iou Object-trafficsign | Iou Object-trafficlight | Iou Nature-vegetation | Iou Nature-terrain | Iou Sky | Iou Void-ground | Iou Void-dynamic | Iou Void-static | Iou Void-unclear |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:------------------:|:----------------------:|:-----------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------:|:---------------------:|:--------------------:|:--------------------:|:----------------------:|:--------------------:|:--------------------------:|:---------------------------:|:------------------------:|:------------------------:|:---------------------------:|:------------------------------:|:--------------------------:|:--------------------------:|:------------------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|:--------------------:|:---------------------------:|:----------------------------:|:--------------------------:|:-----------------------:|:------------:|:--------------------:|:---------------------:|:--------------------:|:---------------------:|:-------------:|:-------------:|:-----------------:|:------------------:|:--------------------:|:------------------------:|:------------------:|:-------------:|:----------------:|:---------------:|:---------------:|:-----------------:|:---------------:|:---------------------:|:----------------------:|:-------------------:|:-------------------:|:----------------------:|:-------------------------:|:---------------------:|:---------------------:|:-------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:---------------:|:----------------------:|:-----------------------:|:---------------------:|:------------------:|:-------:|:---------------:|:----------------:|:---------------:|:----------------:|
| 2.4745 | 1.85 | 100 | 1.7861 | 0.1056 | 0.1555 | 0.6397 | nan | 0.2287 | 0.9278 | 0.0 | 0.1406 | 0.0032 | nan | 0.0 | 0.0 | 0.0 | 0.7757 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8764 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8387 | 0.8794 | 0.3057 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.1931 | 0.6432 | 0.0 | 0.1380 | 0.0031 | nan | 0.0 | 0.0 | 0.0 | 0.5312 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4482 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6323 | 0.4860 | 0.3053 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7294 | 3.7 | 200 | 1.3129 | 0.1517 | 0.1996 | 0.7410 | nan | 0.7928 | 0.8830 | 0.0 | 0.6053 | 0.0089 | nan | 0.0 | 0.0 | 0.0 | 0.7837 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8530 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9138 | 0.7742 | 0.7740 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5519 | 0.7788 | 0.0 | 0.5131 | 0.0088 | nan | 0.0 | 0.0 | 0.0 | 0.5804 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5005 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6747 | 0.5247 | 0.7209 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4479 | 5.56 | 300 | 1.1309 | 0.1608 | 0.2113 | 0.7588 | nan | 0.7973 | 0.9008 | 0.0 | 0.7721 | 0.0269 | nan | 0.0 | 0.0 | 0.0 | 0.8744 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8581 | 0.0 | 0.0007 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8622 | 0.8707 | 0.7985 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5861 | 0.7816 | 0.0 | 0.5877 | 0.0261 | nan | 0.0 | 0.0 | 0.0 | 0.6119 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5582 | 0.0 | 0.0007 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7024 | 0.5206 | 0.7706 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.2348 | 7.41 | 400 | 0.9644 | 0.1707 | 0.2170 | 0.7736 | nan | 0.8125 | 0.9218 | 0.0 | 0.7596 | 0.1081 | nan | 0.0000 | 0.0 | 0.0 | 0.9080 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8280 | 0.0 | 0.0334 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8856 | 0.8260 | 0.8612 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6003 | 0.7937 | 0.0 | 0.6538 | 0.0997 | nan | 0.0000 | 0.0 | 0.0 | 0.6189 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5731 | 0.0 | 0.0330 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7147 | 0.5601 | 0.8139 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.0762 | 9.26 | 500 | 0.8819 | 0.1722 | 0.2159 | 0.7748 | nan | 0.7512 | 0.9353 | 0.0 | 0.7565 | 0.1204 | nan | 0.0016 | 0.0 | 0.0 | 0.9115 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8689 | 0.0 | 0.0565 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9098 | 0.7664 | 0.8303 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5993 | 0.7850 | 0.0 | 0.6536 | 0.1052 | nan | 0.0016 | 0.0 | 0.0 | 0.6377 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5767 | 0.0 | 0.0547 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7285 | 0.5709 | 0.7984 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.9933 | 11.11 | 600 | 0.8347 | 0.1814 | 0.2263 | 0.7822 | nan | 0.8064 | 0.9111 | 0.0 | 0.7880 | 0.1443 | nan | 0.0436 | 0.0 | 0.0 | 0.8944 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8970 | 0.0 | 0.1914 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9053 | 0.8080 | 0.8526 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6088 | 0.8045 | 0.0 | 0.6845 | 0.1255 | nan | 0.0419 | 0.0 | 0.0 | 0.6594 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5548 | 0.0 | 0.1585 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7440 | 0.6068 | 0.8176 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.9424 | 12.96 | 700 | 0.8428 | 0.1824 | 0.2271 | 0.7704 | nan | 0.6767 | 0.9270 | 0.0475 | 0.7655 | 0.1322 | nan | 0.2020 | 0.0189 | 0.0 | 0.8410 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.9205 | 0.0 | 0.2568 | 0.0 | 0.0 | nan | 0.0 | 0.0023 | 0.0 | 0.0 | 0.8994 | 0.7347 | 0.8413 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5838 | 0.7914 | 0.0475 | 0.6091 | 0.1095 | nan | 0.1597 | 0.0185 | 0.0 | 0.6706 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5131 | 0.0 | 0.1872 | 0.0 | 0.0 | nan | 0.0 | 0.0023 | 0.0 | 0.0 | 0.7525 | 0.5837 | 0.8077 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8673 | 14.81 | 800 | 0.7934 | 0.2089 | 0.2509 | 0.7818 | nan | 0.6854 | 0.9394 | 0.7072 | 0.7240 | 0.1504 | nan | 0.2013 | 0.0186 | 0.0 | 0.9071 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.9037 | 0.0 | 0.3110 | 0.0 | 0.0 | nan | 0.0 | 0.0108 | 0.0 | 0.0 | 0.8990 | 0.7171 | 0.8513 | 0.0 | 0.0 | 0.0013 | 0.0 | nan | 0.5914 | 0.7755 | 0.6900 | 0.6673 | 0.1340 | nan | 0.1542 | 0.0183 | 0.0 | 0.6792 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5639 | 0.0 | 0.2172 | 0.0 | 0.0 | nan | 0.0 | 0.0100 | 0.0 | 0.0 | 0.7615 | 0.6014 | 0.8192 | 0.0 | 0.0 | 0.0013 | 0.0 |
| 0.8126 | 16.67 | 900 | 0.7484 | 0.2268 | 0.2784 | 0.7940 | nan | 0.6791 | 0.9397 | 0.7812 | 0.8009 | 0.1532 | nan | 0.3244 | 0.2962 | 0.0 | 0.9018 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8567 | 0.0 | 0.4772 | 0.0002 | 0.0 | nan | 0.0 | 0.0834 | 0.0 | 0.0 | 0.8992 | 0.8280 | 0.8837 | 0.0 | 0.0 | 0.0032 | 0.0 | nan | 0.6303 | 0.7968 | 0.7079 | 0.6095 | 0.1396 | nan | 0.2196 | 0.2638 | 0.0 | 0.7100 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6016 | 0.0 | 0.2860 | 0.0002 | 0.0 | nan | 0.0 | 0.0570 | 0.0 | 0.0 | 0.7678 | 0.6211 | 0.8416 | 0.0 | 0.0 | 0.0032 | 0.0 |
| 0.7989 | 18.52 | 1000 | 0.7241 | 0.2279 | 0.2803 | 0.8018 | nan | 0.7224 | 0.9402 | 0.7875 | 0.8234 | 0.1793 | nan | 0.3763 | 0.1974 | 0.0 | 0.9259 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8911 | 0.0 | 0.3994 | 0.0029 | 0.0 | nan | 0.0 | 0.0758 | 0.0 | 0.0 | 0.8619 | 0.8774 | 0.8854 | 0.0 | 0.0 | 0.0225 | 0.0 | nan | 0.6579 | 0.8292 | 0.7198 | 0.6924 | 0.1660 | nan | 0.2392 | 0.1794 | 0.0 | 0.6748 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5766 | 0.0 | 0.2654 | 0.0029 | 0.0 | nan | 0.0 | 0.0636 | 0.0 | 0.0 | 0.7582 | 0.5994 | 0.8455 | 0.0 | 0.0 | 0.0220 | 0.0 |
| 0.7429 | 20.37 | 1100 | 0.7321 | 0.2276 | 0.2862 | 0.7876 | nan | 0.8321 | 0.8491 | 0.7958 | 0.8572 | 0.2216 | nan | 0.3030 | 0.2864 | 0.0 | 0.9456 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8668 | 0.0 | 0.3757 | 0.0040 | 0.0 | nan | 0.0 | 0.1140 | 0.0 | 0.0 | 0.8839 | 0.8499 | 0.9228 | 0.0 | 0.0 | 0.0505 | 0.0 | nan | 0.6678 | 0.7848 | 0.7342 | 0.5048 | 0.1995 | nan | 0.2316 | 0.2463 | 0.0 | 0.6379 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5916 | 0.0 | 0.2668 | 0.0040 | 0.0 | nan | 0.0 | 0.0820 | 0.0 | 0.0 | 0.7827 | 0.6428 | 0.8583 | 0.0 | 0.0 | 0.0465 | 0.0 |
| 0.7131 | 22.22 | 1200 | 0.7231 | 0.2377 | 0.2995 | 0.7870 | nan | 0.8306 | 0.8458 | 0.7952 | 0.8505 | 0.2218 | nan | 0.3614 | 0.5001 | 0.0 | 0.9504 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7598 | 0.0 | 0.5317 | 0.0405 | 0.0 | nan | 0.0 | 0.1381 | 0.0 | 0.0 | 0.9284 | 0.7938 | 0.9110 | 0.0 | 0.0 | 0.1262 | 0.0 | nan | 0.7038 | 0.7740 | 0.7537 | 0.4538 | 0.1996 | nan | 0.2521 | 0.3853 | 0.0 | 0.6576 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6157 | 0.0 | 0.3046 | 0.0404 | 0.0 | nan | 0.0 | 0.0921 | 0.0 | 0.0 | 0.7846 | 0.6383 | 0.8588 | 0.0 | 0.0 | 0.0911 | 0.0 |
| 0.6919 | 24.07 | 1300 | 0.6775 | 0.2361 | 0.2885 | 0.8013 | nan | 0.7728 | 0.9073 | 0.8010 | 0.8366 | 0.1547 | nan | 0.3070 | 0.3428 | 0.0 | 0.9272 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8568 | 0.0 | 0.5009 | 0.0736 | 0.0 | nan | 0.0 | 0.0975 | 0.0 | 0.0 | 0.9297 | 0.7567 | 0.8978 | 0.0 | 0.0 | 0.0682 | 0.0 | nan | 0.6564 | 0.7929 | 0.6932 | 0.6396 | 0.1438 | nan | 0.2385 | 0.2888 | 0.0 | 0.6807 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6085 | 0.0 | 0.3114 | 0.0729 | 0.0 | nan | 0.0 | 0.0803 | 0.0 | 0.0 | 0.7857 | 0.6403 | 0.8601 | 0.0 | 0.0 | 0.0610 | 0.0 |
| 0.68 | 25.93 | 1400 | 0.6321 | 0.2575 | 0.3109 | 0.8181 | nan | 0.7851 | 0.9362 | 0.8041 | 0.8438 | 0.1694 | nan | 0.3956 | 0.5626 | 0.0 | 0.9306 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8313 | 0.0 | 0.5073 | 0.2728 | 0.0 | nan | 0.0 | 0.1741 | 0.0 | 0.0 | 0.9221 | 0.7899 | 0.9071 | 0.0 | 0.0 | 0.1157 | 0.0 | nan | 0.6781 | 0.8336 | 0.7386 | 0.7047 | 0.1564 | nan | 0.2789 | 0.4291 | 0.0 | 0.6934 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6062 | 0.0 | 0.3305 | 0.2579 | 0.0 | nan | 0.0 | 0.1228 | 0.0 | 0.0 | 0.7952 | 0.6651 | 0.8631 | 0.0 | 0.0 | 0.0865 | 0.0 |
| 0.6644 | 27.78 | 1500 | 0.6568 | 0.2555 | 0.3132 | 0.8074 | nan | 0.7687 | 0.9014 | 0.7631 | 0.8302 | 0.1869 | nan | 0.4841 | 0.4880 | 0.0 | 0.9294 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.8139 | 0.0 | 0.5482 | 0.3042 | 0.0 | nan | 0.0 | 0.1974 | 0.0 | 0.0 | 0.9225 | 0.8543 | 0.9042 | 0.0 | 0.0 | 0.1259 | 0.0 | nan | 0.6723 | 0.8030 | 0.7443 | 0.5873 | 0.1742 | nan | 0.3013 | 0.3813 | 0.0 | 0.7117 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.6159 | 0.0 | 0.3289 | 0.2810 | 0.0 | nan | 0.0 | 0.1295 | 0.0 | 0.0 | 0.8015 | 0.6848 | 0.8665 | 0.0 | 0.0 | 0.0931 | 0.0 |
| 0.6153 | 29.63 | 1600 | 0.6157 | 0.2586 | 0.3131 | 0.8188 | nan | 0.8000 | 0.9242 | 0.7980 | 0.8445 | 0.1758 | nan | 0.4143 | 0.6256 | 0.0 | 0.9155 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.8792 | 0.0 | 0.4465 | 0.2182 | 0.0 | nan | 0.0 | 0.1970 | 0.0 | 0.0 | 0.9111 | 0.8171 | 0.9368 | 0.0 | 0.0 | 0.1136 | 0.0 | nan | 0.6844 | 0.8212 | 0.7565 | 0.6537 | 0.1636 | nan | 0.2857 | 0.4354 | 0.0 | 0.7222 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.6274 | 0.0 | 0.3217 | 0.2147 | 0.0 | nan | 0.0 | 0.1313 | 0.0 | 0.0 | 0.8082 | 0.6809 | 0.8737 | 0.0 | 0.0 | 0.0926 | 0.0 |
| 0.6154 | 31.48 | 1700 | 0.6397 | 0.2621 | 0.3204 | 0.8117 | nan | 0.8357 | 0.8840 | 0.7908 | 0.8465 | 0.2590 | nan | 0.4050 | 0.5401 | 0.0 | 0.9393 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0105 | 0.0 | 0.0 | 0.8169 | 0.0 | 0.4733 | 0.3188 | 0.0 | nan | 0.0 | 0.2505 | 0.0 | 0.0 | 0.9181 | 0.8473 | 0.9287 | 0.0 | 0.0 | 0.1890 | 0.0 | nan | 0.6774 | 0.8042 | 0.7524 | 0.5662 | 0.2300 | nan | 0.2971 | 0.4050 | 0.0 | 0.6970 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0105 | 0.0 | 0.0 | 0.6489 | 0.0 | 0.3454 | 0.3058 | 0.0 | nan | 0.0 | 0.1441 | 0.0 | 0.0 | 0.8074 | 0.6913 | 0.8820 | 0.0 | 0.0 | 0.1224 | 0.0 |
| 0.6305 | 33.33 | 1800 | 0.6131 | 0.2641 | 0.3212 | 0.8194 | nan | 0.8171 | 0.8984 | 0.8212 | 0.8462 | 0.2582 | nan | 0.5051 | 0.5504 | 0.0 | 0.9421 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0221 | 0.0 | 0.0 | 0.8777 | 0.0 | 0.3528 | 0.3169 | 0.0 | nan | 0.0 | 0.2249 | 0.0 | 0.0 | 0.9203 | 0.8499 | 0.9175 | 0.0 | 0.0 | 0.1587 | 0.0 | nan | 0.7209 | 0.8195 | 0.7546 | 0.6166 | 0.2267 | nan | 0.3408 | 0.4000 | 0.0 | 0.6906 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0221 | 0.0 | 0.0 | 0.6055 | 0.0 | 0.2823 | 0.3044 | 0.0 | nan | 0.0 | 0.1545 | 0.0 | 0.0 | 0.8124 | 0.6994 | 0.8799 | 0.0 | 0.0 | 0.1204 | 0.0 |
| 0.6083 | 35.19 | 1900 | 0.6224 | 0.2646 | 0.3182 | 0.8171 | nan | 0.7473 | 0.9297 | 0.7826 | 0.8269 | 0.2162 | nan | 0.4556 | 0.4982 | 0.0 | 0.9169 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0865 | 0.0 | 0.0 | 0.9031 | 0.0 | 0.3618 | 0.3583 | 0.0 | nan | 0.0 | 0.2603 | 0.0 | 0.0 | 0.8966 | 0.8828 | 0.9016 | 0.0 | 0.0 | 0.1587 | 0.0 | nan | 0.6824 | 0.8210 | 0.7645 | 0.5950 | 0.2019 | nan | 0.3166 | 0.3895 | 0.0 | 0.7307 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0853 | 0.0 | 0.0 | 0.6063 | 0.0 | 0.2860 | 0.3200 | 0.0 | nan | 0.0 | 0.1659 | 0.0 | 0.0 | 0.8188 | 0.7017 | 0.8695 | 0.0 | 0.0 | 0.1113 | 0.0 |
| 0.5847 | 37.04 | 2000 | 0.5906 | 0.2713 | 0.3209 | 0.8281 | nan | 0.7374 | 0.9612 | 0.7764 | 0.8195 | 0.2033 | nan | 0.4219 | 0.4950 | 0.0 | 0.9339 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0960 | 0.0 | 0.0 | 0.8434 | 0.0 | 0.4552 | 0.4437 | 0.0 | nan | 0.0 | 0.2250 | 0.0 | 0.0 | 0.9315 | 0.8612 | 0.9071 | 0.0 | 0.0 | 0.1567 | 0.0 | nan | 0.6883 | 0.8311 | 0.7525 | 0.6838 | 0.1851 | nan | 0.3228 | 0.3780 | 0.0 | 0.7236 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0944 | 0.0 | 0.0 | 0.6338 | 0.0 | 0.3408 | 0.3853 | 0.0 | nan | 0.0 | 0.1586 | 0.0 | 0.0 | 0.8104 | 0.6978 | 0.8800 | 0.0 | 0.0 | 0.1162 | 0.0 |
| 0.5764 | 38.89 | 2100 | 0.6088 | 0.2752 | 0.3225 | 0.8255 | nan | 0.7525 | 0.9472 | 0.7709 | 0.8441 | 0.2134 | nan | 0.3932 | 0.5383 | 0.0 | 0.9030 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3470 | 0.0 | 0.0 | 0.9195 | 0.0 | 0.3310 | 0.3215 | 0.0 | nan | 0.0 | 0.2234 | 0.0 | 0.0 | 0.9289 | 0.7964 | 0.9280 | 0.0 | 0.0 | 0.1604 | 0.0 | nan | 0.6993 | 0.8276 | 0.7546 | 0.7234 | 0.1997 | nan | 0.3005 | 0.4222 | 0.0 | 0.7348 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3123 | 0.0 | 0.0 | 0.5918 | 0.0 | 0.2787 | 0.3037 | 0.0 | nan | 0.0 | 0.1585 | 0.0 | 0.0 | 0.8124 | 0.6781 | 0.8844 | 0.0 | 0.0 | 0.1247 | 0.0 |
| 0.5787 | 40.74 | 2200 | 0.5706 | 0.2824 | 0.3351 | 0.8347 | nan | 0.8178 | 0.9369 | 0.8003 | 0.8511 | 0.2352 | nan | 0.4838 | 0.5417 | 0.0 | 0.9025 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3689 | 0.0 | 0.0 | 0.8739 | 0.0 | 0.4493 | 0.4040 | 0.0 | nan | 0.0 | 0.2524 | 0.0 | 0.0 | 0.9422 | 0.8182 | 0.9183 | 0.0 | 0.0 | 0.1276 | 0.0 | nan | 0.7292 | 0.8432 | 0.7669 | 0.6897 | 0.2161 | nan | 0.3484 | 0.4230 | 0.0 | 0.7519 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3045 | 0.0 | 0.0 | 0.6407 | 0.0 | 0.3373 | 0.3491 | 0.0 | nan | 0.0 | 0.1557 | 0.0 | 0.0 | 0.8080 | 0.6803 | 0.8850 | 0.0 | 0.0 | 0.1068 | 0.0 |
| 0.5724 | 42.59 | 2300 | 0.7562 | 0.2740 | 0.3479 | 0.7662 | nan | 0.8734 | 0.7169 | 0.7809 | 0.8847 | 0.2838 | nan | 0.3742 | 0.6758 | 0.0 | 0.9339 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6048 | 0.0 | 0.0 | 0.8535 | 0.0 | 0.4435 | 0.4729 | 0.0 | nan | 0.0 | 0.2817 | 0.0 | 0.0 | 0.9149 | 0.8765 | 0.9329 | 0.0 | 0.0 | 0.2292 | 0.0 | nan | 0.7041 | 0.6683 | 0.7628 | 0.3371 | 0.2575 | nan | 0.2878 | 0.4639 | 0.0 | 0.7454 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4190 | 0.0 | 0.0 | 0.6387 | 0.0 | 0.3357 | 0.3997 | 0.0 | nan | 0.0 | 0.1776 | 0.0 | 0.0 | 0.8183 | 0.7106 | 0.8911 | 0.0 | 0.0 | 0.1516 | 0.0 |
| 0.556 | 44.44 | 2400 | 0.7350 | 0.2665 | 0.3366 | 0.7813 | nan | 0.7897 | 0.7888 | 0.8022 | 0.8878 | 0.2389 | nan | 0.4270 | 0.4859 | 0.0 | 0.9401 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4618 | 0.0 | 0.0 | 0.8866 | 0.0 | 0.3979 | 0.5050 | 0.0 | nan | 0.0 | 0.2580 | 0.0 | 0.0 | 0.9097 | 0.8627 | 0.9337 | 0.0 | 0.0 | 0.1948 | 0.0 | nan | 0.6902 | 0.7286 | 0.7779 | 0.3964 | 0.2231 | nan | 0.3011 | 0.3626 | 0.0 | 0.7078 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3485 | 0.0 | 0.0 | 0.6171 | 0.0 | 0.3044 | 0.3372 | 0.0 | nan | 0.0 | 0.1812 | 0.0 | 0.0 | 0.8195 | 0.7011 | 0.8947 | 0.0 | 0.0 | 0.1378 | 0.0 |
| 0.5599 | 46.3 | 2500 | 0.5949 | 0.2846 | 0.3464 | 0.8215 | nan | 0.7919 | 0.9145 | 0.7935 | 0.8679 | 0.2189 | nan | 0.3795 | 0.5589 | 0.0 | 0.9334 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5627 | 0.0 | 0.0 | 0.8536 | 0.0 | 0.4394 | 0.4730 | 0.0 | nan | 0.0 | 0.3260 | 0.0 | 0.0 | 0.9098 | 0.8344 | 0.9487 | 0.0 | 0.0 | 0.2801 | 0.0 | nan | 0.6901 | 0.8199 | 0.7749 | 0.5729 | 0.2084 | nan | 0.3034 | 0.4321 | 0.0 | 0.7422 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4230 | 0.0 | 0.0 | 0.6491 | 0.0 | 0.3237 | 0.3989 | 0.0 | nan | 0.0 | 0.1963 | 0.0 | 0.0 | 0.8232 | 0.7048 | 0.8949 | 0.0 | 0.0 | 0.1489 | 0.0 |
| 0.5368 | 48.15 | 2600 | 0.6125 | 0.2829 | 0.3502 | 0.8211 | nan | 0.7798 | 0.9034 | 0.7913 | 0.9079 | 0.2587 | nan | 0.3407 | 0.6423 | 0.0 | 0.9351 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6794 | 0.0 | 0.0 | 0.8554 | 0.0 | 0.3996 | 0.4884 | 0.0 | nan | 0.0 | 0.2870 | 0.0 | 0.0 | 0.9271 | 0.8698 | 0.9424 | 0.0 | 0.0 | 0.1992 | 0.0 | nan | 0.6878 | 0.8122 | 0.7578 | 0.5597 | 0.2427 | nan | 0.2680 | 0.4737 | 0.0 | 0.7517 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3649 | 0.0 | 0.0 | 0.6557 | 0.0 | 0.3130 | 0.4117 | 0.0 | nan | 0.0 | 0.1847 | 0.0 | 0.0 | 0.8236 | 0.7137 | 0.8969 | 0.0 | 0.0 | 0.1361 | 0.0 |
| 0.5391 | 50.0 | 2700 | 0.5993 | 0.2877 | 0.3507 | 0.8242 | nan | 0.8174 | 0.8948 | 0.8094 | 0.8896 | 0.2730 | nan | 0.4105 | 0.5570 | 0.0 | 0.9164 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5439 | 0.0 | 0.0 | 0.8772 | 0.0 | 0.5070 | 0.5443 | 0.0 | nan | 0.0 | 0.2691 | 0.0 | 0.0 | 0.9205 | 0.8660 | 0.8975 | 0.0 | 0.0 | 0.2294 | 0.0 | nan | 0.7059 | 0.8214 | 0.7578 | 0.5803 | 0.2537 | nan | 0.2892 | 0.4308 | 0.0 | 0.7548 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4363 | 0.0 | 0.0 | 0.6490 | 0.0 | 0.3579 | 0.4224 | 0.0 | nan | 0.0 | 0.1927 | 0.0 | 0.0 | 0.8239 | 0.7040 | 0.8748 | 0.0 | 0.0 | 0.1516 | 0.0 |
| 0.5041 | 51.85 | 2800 | 0.5912 | 0.2859 | 0.3493 | 0.8264 | nan | 0.7593 | 0.9248 | 0.8029 | 0.8780 | 0.2945 | nan | 0.3718 | 0.6308 | 0.0 | 0.9078 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6667 | 0.0 | 0.0 | 0.8945 | 0.0 | 0.3362 | 0.4834 | 0.0 | nan | 0.0 | 0.3167 | 0.0 | 0.0 | 0.9255 | 0.8641 | 0.9382 | 0.0 | 0.0 | 0.1836 | 0.0 | nan | 0.6993 | 0.8205 | 0.7232 | 0.5789 | 0.2712 | nan | 0.2852 | 0.4872 | 0.0 | 0.7747 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3825 | 0.0 | 0.0 | 0.6382 | 0.0 | 0.2862 | 0.4138 | 0.0 | nan | 0.0 | 0.2019 | 0.0 | 0.0 | 0.8284 | 0.7271 | 0.8984 | 0.0 | 0.0 | 0.1316 | 0.0 |
| 0.5007 | 53.7 | 2900 | 0.6220 | 0.2839 | 0.3577 | 0.8134 | nan | 0.7302 | 0.8903 | 0.8180 | 0.9098 | 0.3134 | nan | 0.3521 | 0.6870 | 0.0 | 0.9429 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7288 | 0.0 | 0.0 | 0.8340 | 0.0 | 0.5169 | 0.4700 | 0.0 | nan | 0.0 | 0.3105 | 0.0 | 0.0 | 0.9356 | 0.8318 | 0.9437 | 0.0 | 0.0003 | 0.2298 | 0.0 | nan | 0.6722 | 0.8034 | 0.7257 | 0.4922 | 0.2900 | nan | 0.2639 | 0.4741 | 0.0 | 0.7434 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4082 | 0.0 | 0.0 | 0.6635 | 0.0 | 0.3690 | 0.4172 | 0.0 | nan | 0.0 | 0.1981 | 0.0 | 0.0 | 0.8205 | 0.6936 | 0.9015 | 0.0 | 0.0003 | 0.1483 | 0.0 |
| 0.4992 | 55.56 | 3000 | 0.5669 | 0.2928 | 0.3647 | 0.8317 | nan | 0.7826 | 0.9171 | 0.8018 | 0.9165 | 0.2758 | nan | 0.5273 | 0.6986 | 0.0 | 0.9410 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6836 | 0.0 | 0.0 | 0.8296 | 0.0 | 0.4717 | 0.4595 | 0.0 | nan | 0.0 | 0.3613 | 0.0 | 0.0 | 0.9272 | 0.8671 | 0.9424 | 0.0 | 0.0017 | 0.2669 | 0.0 | nan | 0.7196 | 0.8377 | 0.7464 | 0.6016 | 0.2573 | nan | 0.3367 | 0.4767 | 0.0 | 0.7565 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4237 | 0.0 | 0.0 | 0.6653 | 0.0 | 0.3438 | 0.4034 | 0.0 | nan | 0.0 | 0.1974 | 0.0 | 0.0 | 0.8287 | 0.7120 | 0.9031 | 0.0 | 0.0017 | 0.1565 | 0.0 |
| 0.5151 | 57.41 | 3100 | 0.6131 | 0.2864 | 0.3598 | 0.8169 | nan | 0.7793 | 0.9005 | 0.7894 | 0.8762 | 0.2508 | nan | 0.3852 | 0.6197 | 0.0 | 0.9316 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6506 | 0.0 | 0.0 | 0.7819 | 0.0 | 0.5348 | 0.5782 | 0.0 | nan | 0.0 | 0.3853 | 0.0 | 0.0 | 0.9211 | 0.8624 | 0.9390 | 0.0 | 0.0 | 0.3278 | 0.0 | nan | 0.6967 | 0.8145 | 0.7436 | 0.5453 | 0.2362 | nan | 0.2992 | 0.4656 | 0.0 | 0.7549 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4221 | 0.0 | 0.0 | 0.6246 | 0.0 | 0.3873 | 0.3923 | 0.0 | nan | 0.0 | 0.1937 | 0.0 | 0.0 | 0.8257 | 0.7204 | 0.8994 | 0.0 | 0.0 | 0.1417 | 0.0 |
| 0.4688 | 59.26 | 3200 | 0.7342 | 0.2674 | 0.3425 | 0.7758 | nan | 0.6724 | 0.8138 | 0.8211 | 0.8881 | 0.2106 | nan | 0.3435 | 0.4240 | 0.0 | 0.9345 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6881 | 0.0 | 0.0 | 0.8684 | 0.0 | 0.4808 | 0.5494 | 0.0 | nan | 0.0 | 0.2968 | 0.0 | 0.0 | 0.9269 | 0.8322 | 0.9291 | 0.0 | 0.0 | 0.2817 | 0.0 | nan | 0.6227 | 0.7395 | 0.7654 | 0.4008 | 0.1990 | nan | 0.2434 | 0.3473 | 0.0 | 0.7526 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3733 | 0.0 | 0.0 | 0.5567 | 0.0 | 0.3425 | 0.4056 | 0.0 | nan | 0.0 | 0.2033 | 0.0 | 0.0 | 0.8238 | 0.7088 | 0.8978 | 0.0 | 0.0 | 0.1748 | 0.0 |
| 0.4657 | 61.11 | 3300 | 0.7162 | 0.2737 | 0.3487 | 0.7884 | nan | 0.6859 | 0.8395 | 0.7919 | 0.8974 | 0.2306 | nan | 0.4086 | 0.6012 | 0.0 | 0.9212 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7186 | 0.0 | 0.0 | 0.8738 | 0.0 | 0.4323 | 0.5271 | 0.0 | nan | 0.0 | 0.3163 | 0.0 | 0.0 | 0.9373 | 0.8107 | 0.9381 | 0.0 | 0.0 | 0.2280 | 0.0 | nan | 0.6253 | 0.7668 | 0.7584 | 0.4350 | 0.2180 | nan | 0.2835 | 0.4646 | 0.0 | 0.7649 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3505 | 0.0 | 0.0 | 0.5817 | 0.0 | 0.3184 | 0.4275 | 0.0 | nan | 0.0 | 0.1989 | 0.0 | 0.0 | 0.8181 | 0.6916 | 0.9021 | 0.0 | 0.0 | 0.1529 | 0.0 |
| 0.4789 | 62.96 | 3400 | 0.6510 | 0.2824 | 0.3535 | 0.8065 | nan | 0.7245 | 0.8835 | 0.7760 | 0.8886 | 0.2720 | nan | 0.3709 | 0.6675 | 0.0 | 0.9351 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6668 | 0.0 | 0.0 | 0.8450 | 0.0 | 0.4917 | 0.5508 | 0.0 | nan | 0.0 | 0.3585 | 0.0 | 0.0 | 0.9367 | 0.7684 | 0.9321 | 0.0 | 0.0022 | 0.2404 | 0.0 | nan | 0.6754 | 0.7938 | 0.7682 | 0.4856 | 0.2514 | nan | 0.2841 | 0.4779 | 0.0 | 0.7566 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3801 | 0.0 | 0.0 | 0.6118 | 0.0 | 0.3623 | 0.4464 | 0.0 | nan | 0.0 | 0.1990 | 0.0 | 0.0 | 0.8150 | 0.6727 | 0.9029 | 0.0 | 0.0022 | 0.1516 | 0.0 |
| 0.4718 | 64.81 | 3500 | 0.7369 | 0.2741 | 0.3491 | 0.7687 | nan | 0.7886 | 0.7455 | 0.8159 | 0.8865 | 0.2585 | nan | 0.3583 | 0.6014 | 0.0 | 0.9362 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6741 | 0.0 | 0.0 | 0.8728 | 0.0 | 0.4488 | 0.5138 | 0.0 | nan | 0.0 | 0.3533 | 0.0 | 0.0 | 0.9343 | 0.8363 | 0.9345 | 0.0 | 0.0002 | 0.2111 | 0.0 | nan | 0.6800 | 0.6730 | 0.7173 | 0.3412 | 0.2406 | nan | 0.2736 | 0.4651 | 0.0 | 0.7688 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3688 | 0.0 | 0.0 | 0.6494 | 0.0 | 0.3507 | 0.4403 | 0.0 | nan | 0.0 | 0.1950 | 0.0 | 0.0 | 0.8287 | 0.7216 | 0.9039 | 0.0 | 0.0002 | 0.1536 | 0.0 |
| 0.4586 | 66.67 | 3600 | 0.7463 | 0.2799 | 0.3515 | 0.7620 | nan | 0.8497 | 0.6965 | 0.7931 | 0.9041 | 0.2737 | nan | 0.3983 | 0.5616 | 0.0 | 0.9365 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5892 | 0.0 | 0.0 | 0.8439 | 0.0 | 0.5213 | 0.4720 | 0.0 | nan | 0.0 | 0.3429 | 0.0 | 0.0 | 0.9332 | 0.8690 | 0.9431 | 0.0 | 0.0 | 0.3213 | 0.0 | nan | 0.7435 | 0.6450 | 0.7808 | 0.3120 | 0.2517 | nan | 0.3134 | 0.4378 | 0.0 | 0.7305 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4349 | 0.0 | 0.0 | 0.6399 | 0.0 | 0.3813 | 0.4243 | 0.0 | nan | 0.0 | 0.2097 | 0.0 | 0.0 | 0.8287 | 0.7225 | 0.9085 | 0.0 | 0.0 | 0.1926 | 0.0 |
| 0.4506 | 68.52 | 3700 | 0.6409 | 0.2859 | 0.3587 | 0.8030 | nan | 0.7887 | 0.8394 | 0.8054 | 0.8912 | 0.2518 | nan | 0.3799 | 0.6292 | 0.0 | 0.9273 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7090 | 0.0 | 0.0 | 0.8655 | 0.0 | 0.4989 | 0.5447 | 0.0 | nan | 0.0 | 0.3519 | 0.0 | 0.0 | 0.9335 | 0.8362 | 0.9278 | 0.0 | 0.0 | 0.2975 | 0.0 | nan | 0.7248 | 0.7574 | 0.7649 | 0.4118 | 0.2326 | nan | 0.2996 | 0.4840 | 0.0 | 0.7856 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3424 | 0.0 | 0.0 | 0.6639 | 0.0 | 0.3766 | 0.4576 | 0.0 | nan | 0.0 | 0.2055 | 0.0 | 0.0 | 0.8284 | 0.7274 | 0.9032 | 0.0 | 0.0 | 0.1823 | 0.0 |
| 0.4659 | 70.37 | 3800 | 0.6466 | 0.2884 | 0.3577 | 0.8081 | nan | 0.8256 | 0.8420 | 0.7982 | 0.8692 | 0.3484 | nan | 0.4035 | 0.4964 | 0.0 | 0.9489 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6461 | 0.0 | 0.0 | 0.8281 | 0.0 | 0.5593 | 0.5404 | 0.0 | nan | 0.0 | 0.3533 | 0.0 | 0.0 | 0.9345 | 0.7861 | 0.9426 | 0.0 | 0.0 | 0.3225 | 0.0 | nan | 0.7403 | 0.7665 | 0.7649 | 0.4456 | 0.2991 | nan | 0.3198 | 0.3976 | 0.0 | 0.7512 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4217 | 0.0 | 0.0 | 0.6537 | 0.0 | 0.3859 | 0.4470 | 0.0 | nan | 0.0 | 0.2219 | 0.0 | 0.0 | 0.8223 | 0.6908 | 0.9109 | 0.0 | 0.0 | 0.1898 | 0.0 |
| 0.4416 | 72.22 | 3900 | 0.6944 | 0.2824 | 0.3648 | 0.7953 | nan | 0.8073 | 0.8044 | 0.8200 | 0.9039 | 0.2713 | nan | 0.4385 | 0.6632 | 0.0 | 0.9435 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7130 | 0.0 | 0.0 | 0.8448 | 0.0 | 0.5050 | 0.5552 | 0.0 | nan | 0.0 | 0.3791 | 0.0 | 0.0 | 0.9316 | 0.8332 | 0.9378 | 0.0 | 0.0047 | 0.3183 | 0.0 | nan | 0.7045 | 0.7445 | 0.6571 | 0.4107 | 0.2536 | nan | 0.3089 | 0.4711 | 0.0 | 0.7504 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3814 | 0.0 | 0.0 | 0.6468 | 0.0 | 0.3800 | 0.4413 | 0.0 | nan | 0.0 | 0.2243 | 0.0 | 0.0 | 0.8294 | 0.7257 | 0.9078 | 0.0 | 0.0047 | 0.1964 | 0.0 |
| 0.4347 | 74.07 | 4000 | 0.5742 | 0.2960 | 0.3615 | 0.8319 | nan | 0.8135 | 0.9088 | 0.8067 | 0.8959 | 0.3006 | nan | 0.3611 | 0.6055 | 0.0 | 0.9354 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6851 | 0.0 | 0.0 | 0.8692 | 0.0 | 0.4956 | 0.5065 | 0.0 | nan | 0.0 | 0.3493 | 0.0 | 0.0 | 0.9264 | 0.8500 | 0.9368 | 0.0 | 0.0018 | 0.3210 | 0.0 | nan | 0.7436 | 0.8254 | 0.7615 | 0.5609 | 0.2797 | nan | 0.3045 | 0.4733 | 0.0 | 0.7745 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4006 | 0.0 | 0.0 | 0.6424 | 0.0 | 0.3800 | 0.4600 | 0.0 | nan | 0.0 | 0.2126 | 0.0 | 0.0 | 0.8296 | 0.7251 | 0.9085 | 0.0 | 0.0018 | 0.1876 | 0.0 |
| 0.4191 | 75.93 | 4100 | 0.6454 | 0.2879 | 0.3671 | 0.8068 | nan | 0.7757 | 0.8432 | 0.8171 | 0.8803 | 0.3169 | nan | 0.4971 | 0.6474 | 0.0 | 0.9274 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7272 | 0.0 | 0.0 | 0.8520 | 0.0 | 0.4847 | 0.5414 | 0.0 | nan | 0.0 | 0.4113 | 0.0 | 0.0 | 0.9400 | 0.8335 | 0.9348 | 0.0 | 0.0167 | 0.3000 | 0.0 | nan | 0.7112 | 0.7615 | 0.6876 | 0.4533 | 0.2904 | nan | 0.3375 | 0.4768 | 0.0 | 0.7857 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3483 | 0.0 | 0.0 | 0.6544 | 0.0 | 0.3636 | 0.4546 | 0.0 | nan | 0.0 | 0.2086 | 0.0 | 0.0 | 0.8293 | 0.7293 | 0.9093 | 0.0 | 0.0165 | 0.1938 | 0.0 |
| 0.4355 | 77.78 | 4200 | 0.5871 | 0.2915 | 0.3601 | 0.8236 | nan | 0.6673 | 0.9324 | 0.8063 | 0.8730 | 0.2988 | nan | 0.5014 | 0.5734 | 0.0 | 0.9480 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6629 | 0.0 | 0.0 | 0.8653 | 0.0 | 0.4649 | 0.5559 | 0.0 | nan | 0.0 | 0.3890 | 0.0 | 0.0 | 0.9183 | 0.8681 | 0.9537 | 0.0 | 0.0088 | 0.2359 | 0.0 | nan | 0.6266 | 0.8175 | 0.7309 | 0.5730 | 0.2746 | nan | 0.3471 | 0.4465 | 0.0 | 0.7567 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4103 | 0.0 | 0.0 | 0.6684 | 0.0 | 0.3482 | 0.4615 | 0.0 | nan | 0.0 | 0.2062 | 0.0 | 0.0 | 0.8356 | 0.7347 | 0.9131 | 0.0 | 0.0088 | 0.1686 | 0.0 |
| 0.431 | 79.63 | 4300 | 0.5778 | 0.2902 | 0.3540 | 0.8266 | nan | 0.8325 | 0.9042 | 0.7971 | 0.8575 | 0.2707 | nan | 0.4318 | 0.5731 | 0.0 | 0.9428 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6701 | 0.0 | 0.0 | 0.8781 | 0.0 | 0.4081 | 0.5480 | 0.0 | nan | 0.0 | 0.3573 | 0.0 | 0.0 | 0.9299 | 0.7480 | 0.9397 | 0.0 | 0.0343 | 0.2046 | 0.0 | nan | 0.7428 | 0.8112 | 0.7719 | 0.5907 | 0.2545 | nan | 0.3259 | 0.4272 | 0.0 | 0.7505 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4255 | 0.0 | 0.0 | 0.6496 | 0.0 | 0.3209 | 0.4384 | 0.0 | nan | 0.0 | 0.2061 | 0.0 | 0.0 | 0.8142 | 0.6646 | 0.9118 | 0.0 | 0.0338 | 0.1477 | 0.0 |
| 0.4105 | 81.48 | 4400 | 0.7355 | 0.2837 | 0.3547 | 0.7802 | nan | 0.8194 | 0.7548 | 0.8125 | 0.9004 | 0.2421 | nan | 0.4411 | 0.5260 | 0.0 | 0.9344 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6628 | 0.0 | 0.0 | 0.9003 | 0.0 | 0.4114 | 0.5457 | 0.0 | nan | 0.0 | 0.3720 | 0.0 | 0.0 | 0.9386 | 0.8336 | 0.9269 | 0.0 | 0.0905 | 0.2364 | 0.0 | nan | 0.7295 | 0.6964 | 0.7754 | 0.3477 | 0.2325 | nan | 0.3336 | 0.4069 | 0.0 | 0.7641 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4284 | 0.0 | 0.0 | 0.6483 | 0.0 | 0.3512 | 0.4444 | 0.0 | nan | 0.0 | 0.2140 | 0.0 | 0.0 | 0.8260 | 0.7200 | 0.9047 | 0.0 | 0.0883 | 0.1667 | 0.0 |
| 0.4102 | 83.33 | 4500 | 0.6431 | 0.2832 | 0.3550 | 0.8023 | nan | 0.6173 | 0.8926 | 0.8233 | 0.8684 | 0.3015 | nan | 0.4774 | 0.5853 | 0.0 | 0.9435 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7118 | 0.0 | 0.0 | 0.8678 | 0.0 | 0.4544 | 0.5288 | 0.0 | nan | 0.0 | 0.3435 | 0.0 | 0.0 | 0.9438 | 0.7934 | 0.9323 | 0.0 | 0.0264 | 0.2495 | 0.0 | nan | 0.5793 | 0.7784 | 0.7849 | 0.5220 | 0.2750 | nan | 0.3433 | 0.4263 | 0.0 | 0.7478 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3651 | 0.0 | 0.0 | 0.6236 | 0.0 | 0.3489 | 0.4347 | 0.0 | nan | 0.0 | 0.2243 | 0.0 | 0.0 | 0.8184 | 0.6879 | 0.9082 | 0.0 | 0.0258 | 0.1674 | 0.0 |
| 0.4172 | 85.19 | 4600 | 0.6988 | 0.2875 | 0.3537 | 0.7940 | nan | 0.7505 | 0.8194 | 0.8168 | 0.9128 | 0.2640 | nan | 0.4022 | 0.4961 | 0.0 | 0.9391 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6453 | 0.0 | 0.0 | 0.8769 | 0.0 | 0.4600 | 0.5182 | 0.0 | nan | 0.0 | 0.3740 | 0.0 | 0.0 | 0.9378 | 0.8263 | 0.9455 | 0.0 | 0.0900 | 0.2436 | 0.0 | nan | 0.7048 | 0.7401 | 0.7654 | 0.3938 | 0.2454 | nan | 0.2874 | 0.3973 | 0.0 | 0.7572 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4779 | 0.0 | 0.0 | 0.6427 | 0.0 | 0.3531 | 0.4565 | 0.0 | nan | 0.0 | 0.2402 | 0.0 | 0.0 | 0.8333 | 0.7320 | 0.9149 | 0.0 | 0.0880 | 0.1706 | 0.0 |
| 0.3885 | 87.04 | 4700 | 0.5978 | 0.2953 | 0.3647 | 0.8175 | nan | 0.8142 | 0.8718 | 0.8027 | 0.8554 | 0.3059 | nan | 0.3787 | 0.5867 | 0.0 | 0.9403 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6845 | 0.0 | 0.0 | 0.8471 | 0.0 | 0.5315 | 0.5788 | 0.0 | nan | 0.0 | 0.3874 | 0.0 | 0.0 | 0.9354 | 0.8156 | 0.9494 | 0.0 | 0.1221 | 0.2636 | 0.0 | nan | 0.7263 | 0.7825 | 0.7874 | 0.4784 | 0.2859 | nan | 0.2981 | 0.4480 | 0.0 | 0.7604 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3820 | 0.0 | 0.0 | 0.6694 | 0.0 | 0.3781 | 0.4545 | 0.0 | nan | 0.0 | 0.2385 | 0.0 | 0.0 | 0.8301 | 0.7216 | 0.9144 | 0.0 | 0.1131 | 0.1798 | 0.0 |
| 0.3949 | 88.89 | 4800 | 0.5747 | 0.2961 | 0.3643 | 0.8282 | nan | 0.8129 | 0.8976 | 0.8121 | 0.8713 | 0.2894 | nan | 0.4694 | 0.5562 | 0.0 | 0.9391 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6947 | 0.0 | 0.0 | 0.8395 | 0.0 | 0.5260 | 0.5481 | 0.0 | nan | 0.0 | 0.3852 | 0.0 | 0.0 | 0.9428 | 0.8221 | 0.9365 | 0.0 | 0.0559 | 0.2580 | 0.0 | nan | 0.7394 | 0.8130 | 0.7924 | 0.5533 | 0.2658 | nan | 0.3447 | 0.4378 | 0.0 | 0.7620 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3851 | 0.0 | 0.0 | 0.6633 | 0.0 | 0.3722 | 0.4533 | 0.0 | nan | 0.0 | 0.2184 | 0.0 | 0.0 | 0.8217 | 0.7122 | 0.9124 | 0.0 | 0.0534 | 0.1742 | 0.0 |
| 0.4158 | 90.74 | 4900 | 0.6449 | 0.2916 | 0.3657 | 0.8070 | nan | 0.8043 | 0.8271 | 0.8157 | 0.9192 | 0.3073 | nan | 0.4380 | 0.6344 | 0.0 | 0.9340 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7171 | 0.0 | 0.0 | 0.8572 | 0.0 | 0.5188 | 0.5406 | 0.0 | nan | 0.0 | 0.3852 | 0.0 | 0.0 | 0.9420 | 0.8552 | 0.9459 | 0.0 | 0.0450 | 0.2148 | 0.0 | nan | 0.6975 | 0.7564 | 0.7902 | 0.4563 | 0.2853 | nan | 0.3171 | 0.4654 | 0.0 | 0.7879 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3571 | 0.0 | 0.0 | 0.6623 | 0.0 | 0.3819 | 0.4583 | 0.0 | nan | 0.0 | 0.2243 | 0.0 | 0.0 | 0.8302 | 0.7431 | 0.9150 | 0.0 | 0.0421 | 0.1602 | 0.0 |
| 0.3856 | 92.59 | 5000 | 0.7492 | 0.2796 | 0.3559 | 0.7680 | nan | 0.8020 | 0.7250 | 0.8248 | 0.9139 | 0.2500 | nan | 0.3621 | 0.5930 | 0.0 | 0.9411 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6964 | 0.0 | 0.0 | 0.9036 | 0.0 | 0.3460 | 0.5234 | 0.0 | nan | 0.0 | 0.4271 | 0.0 | 0.0 | 0.9255 | 0.8871 | 0.9524 | 0.0 | 0.0666 | 0.2471 | 0.0 | nan | 0.6954 | 0.6697 | 0.7878 | 0.3256 | 0.2365 | nan | 0.2864 | 0.4452 | 0.0 | 0.7724 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3838 | 0.0 | 0.0 | 0.6413 | 0.0 | 0.2968 | 0.4239 | 0.0 | nan | 0.0 | 0.2271 | 0.0 | 0.0 | 0.8382 | 0.7554 | 0.9171 | 0.0 | 0.0624 | 0.1808 | 0.0 |
| 0.3915 | 94.44 | 5100 | 0.6402 | 0.2893 | 0.3608 | 0.8012 | nan | 0.7614 | 0.8406 | 0.7898 | 0.9029 | 0.3080 | nan | 0.3857 | 0.6328 | 0.0 | 0.9373 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7010 | 0.0 | 0.0 | 0.8626 | 0.0 | 0.5045 | 0.5235 | 0.0 | nan | 0.0 | 0.3802 | 0.0 | 0.0 | 0.9442 | 0.7561 | 0.9401 | 0.0 | 0.1133 | 0.2603 | 0.0 | nan | 0.6850 | 0.7546 | 0.7750 | 0.4451 | 0.2827 | nan | 0.3049 | 0.4715 | 0.0 | 0.7694 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3810 | 0.0 | 0.0 | 0.6626 | 0.0 | 0.3832 | 0.4394 | 0.0 | nan | 0.0 | 0.2214 | 0.0 | 0.0 | 0.8125 | 0.6725 | 0.9138 | 0.0 | 0.1034 | 0.1797 | 0.0 |
| 0.3732 | 96.3 | 5200 | 0.7308 | 0.2840 | 0.3598 | 0.7795 | nan | 0.7534 | 0.7741 | 0.8137 | 0.9035 | 0.2614 | nan | 0.4308 | 0.6431 | 0.0 | 0.9315 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7293 | 0.0 | 0.0 | 0.8884 | 0.0 | 0.4166 | 0.5225 | 0.0 | nan | 0.0 | 0.3992 | 0.0 | 0.0 | 0.9329 | 0.8517 | 0.9519 | 0.0 | 0.0756 | 0.2354 | 0.0 | nan | 0.6723 | 0.6942 | 0.7836 | 0.3665 | 0.2474 | nan | 0.3333 | 0.4669 | 0.0 | 0.7857 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3545 | 0.0 | 0.0 | 0.6375 | 0.0 | 0.3443 | 0.4311 | 0.0 | nan | 0.0 | 0.2377 | 0.0 | 0.0 | 0.8346 | 0.7428 | 0.9173 | 0.0 | 0.0659 | 0.1722 | 0.0 |
| 0.3843 | 98.15 | 5300 | 0.6580 | 0.2864 | 0.3556 | 0.7962 | nan | 0.7254 | 0.8440 | 0.7996 | 0.8889 | 0.2696 | nan | 0.4320 | 0.6399 | 0.0 | 0.9285 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6708 | 0.0 | 0.0 | 0.8872 | 0.0 | 0.4070 | 0.5262 | 0.0 | nan | 0.0 | 0.3791 | 0.0 | 0.0 | 0.9423 | 0.7462 | 0.9487 | 0.0 | 0.1269 | 0.2159 | 0.0 | nan | 0.6660 | 0.7540 | 0.7836 | 0.4484 | 0.2521 | nan | 0.3307 | 0.4691 | 0.0 | 0.7963 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3896 | 0.0 | 0.0 | 0.6071 | 0.0 | 0.3185 | 0.4568 | 0.0 | nan | 0.0 | 0.2206 | 0.0 | 0.0 | 0.8138 | 0.6608 | 0.9170 | 0.0 | 0.1163 | 0.1644 | 0.0 |
| 0.3903 | 100.0 | 5400 | 0.6288 | 0.2881 | 0.3541 | 0.8086 | nan | 0.7763 | 0.8567 | 0.8240 | 0.8951 | 0.2446 | nan | 0.4334 | 0.5553 | 0.0 | 0.9354 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6738 | 0.0 | 0.0 | 0.8901 | 0.0 | 0.4777 | 0.5458 | 0.0 | nan | 0.0 | 0.3297 | 0.0 | 0.0 | 0.9417 | 0.7702 | 0.9457 | 0.0 | 0.0457 | 0.1907 | 0.0 | nan | 0.6906 | 0.7727 | 0.7923 | 0.4705 | 0.2358 | nan | 0.3295 | 0.4509 | 0.0 | 0.7755 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3981 | 0.0 | 0.0 | 0.6528 | 0.0 | 0.3644 | 0.4573 | 0.0 | nan | 0.0 | 0.2197 | 0.0 | 0.0 | 0.8176 | 0.6797 | 0.9157 | 0.0 | 0.0444 | 0.1500 | 0.0 |
| 0.355 | 101.85 | 5500 | 0.7112 | 0.2860 | 0.3563 | 0.7844 | nan | 0.7834 | 0.7947 | 0.8123 | 0.8807 | 0.2262 | nan | 0.3408 | 0.6020 | 0.0 | 0.9382 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6759 | 0.0 | 0.0 | 0.8838 | 0.0 | 0.4491 | 0.5845 | 0.0 | nan | 0.0 | 0.4029 | 0.0 | 0.0 | 0.9295 | 0.7890 | 0.9477 | 0.0 | 0.1045 | 0.2564 | 0.0 | nan | 0.7086 | 0.7078 | 0.7825 | 0.3607 | 0.2168 | nan | 0.2792 | 0.4624 | 0.0 | 0.7767 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4366 | 0.0 | 0.0 | 0.6667 | 0.0 | 0.3443 | 0.4351 | 0.0 | nan | 0.0 | 0.2386 | 0.0 | 0.0 | 0.8283 | 0.7060 | 0.9167 | 0.0 | 0.1000 | 0.1847 | 0.0 |
| 0.3729 | 103.7 | 5600 | 0.6849 | 0.2835 | 0.3591 | 0.7887 | nan | 0.8150 | 0.7790 | 0.8122 | 0.8834 | 0.2787 | nan | 0.4506 | 0.6270 | 0.0 | 0.9253 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7408 | 0.0 | 0.0 | 0.9180 | 0.0 | 0.3273 | 0.5197 | 0.0 | nan | 0.0 | 0.4167 | 0.0 | 0.0 | 0.9358 | 0.8379 | 0.9406 | 0.0 | 0.0480 | 0.2345 | 0.0 | nan | 0.6989 | 0.7189 | 0.7862 | 0.3939 | 0.2648 | nan | 0.3292 | 0.4851 | 0.0 | 0.7976 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3286 | 0.0 | 0.0 | 0.6202 | 0.0 | 0.2779 | 0.4371 | 0.0 | nan | 0.0 | 0.2402 | 0.0 | 0.0 | 0.8321 | 0.7297 | 0.9140 | 0.0 | 0.0437 | 0.1749 | 0.0 |
| 0.3895 | 105.56 | 5700 | 0.6917 | 0.2909 | 0.3669 | 0.7881 | nan | 0.8520 | 0.7575 | 0.8037 | 0.9006 | 0.2858 | nan | 0.4909 | 0.6331 | 0.0 | 0.9365 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6811 | 0.0 | 0.0 | 0.8525 | 0.0 | 0.5087 | 0.5374 | 0.0 | nan | 0.0 | 0.3766 | 0.0 | 0.0 | 0.9432 | 0.8426 | 0.9479 | 0.0 | 0.0982 | 0.2931 | 0.0 | nan | 0.7338 | 0.7000 | 0.7834 | 0.3764 | 0.2683 | nan | 0.3430 | 0.4719 | 0.0 | 0.7841 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3792 | 0.0 | 0.0 | 0.6627 | 0.0 | 0.3815 | 0.4454 | 0.0 | nan | 0.0 | 0.2245 | 0.0 | 0.0 | 0.8273 | 0.7311 | 0.9183 | 0.0 | 0.0894 | 0.1885 | 0.0 |
| 0.3602 | 107.41 | 5800 | 0.5475 | 0.3042 | 0.3685 | 0.8353 | nan | 0.7641 | 0.9319 | 0.8055 | 0.8737 | 0.3132 | nan | 0.4868 | 0.6244 | 0.0 | 0.9407 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6873 | 0.0 | 0.0 | 0.8810 | 0.0 | 0.4631 | 0.5387 | 0.0 | nan | 0.0 | 0.4382 | 0.0 | 0.0 | 0.9298 | 0.7866 | 0.9486 | 0.0 | 0.1344 | 0.2454 | 0.0 | nan | 0.7121 | 0.8270 | 0.7806 | 0.6491 | 0.2900 | nan | 0.3497 | 0.4700 | 0.0 | 0.7753 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4480 | 0.0 | 0.0 | 0.6577 | 0.0 | 0.3509 | 0.4582 | 0.0 | nan | 0.0 | 0.2281 | 0.0 | 0.0 | 0.8267 | 0.6946 | 0.9179 | 0.0 | 0.1213 | 0.1782 | 0.0 |
| 0.3674 | 109.26 | 5900 | 0.6421 | 0.2919 | 0.3540 | 0.8016 | nan | 0.6932 | 0.8577 | 0.8144 | 0.9018 | 0.3136 | nan | 0.3961 | 0.5655 | 0.0 | 0.9370 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6563 | 0.0 | 0.0 | 0.9140 | 0.0 | 0.3656 | 0.4891 | 0.0 | nan | 0.0 | 0.3775 | 0.0 | 0.0 | 0.9373 | 0.8204 | 0.9427 | 0.0 | 0.1378 | 0.2090 | 0.0 | nan | 0.6366 | 0.7503 | 0.7829 | 0.4541 | 0.2884 | nan | 0.3050 | 0.4442 | 0.0 | 0.7727 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4780 | 0.0 | 0.0 | 0.6644 | 0.0 | 0.3163 | 0.4511 | 0.0 | nan | 0.0 | 0.2316 | 0.0 | 0.0 | 0.8321 | 0.7257 | 0.9157 | 0.0 | 0.1268 | 0.1636 | 0.0 |
| 0.3657 | 111.11 | 6000 | 0.5813 | 0.2955 | 0.3637 | 0.8277 | nan | 0.7870 | 0.8975 | 0.7014 | 0.8566 | 0.3741 | nan | 0.4469 | 0.6219 | 0.0 | 0.9403 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7185 | 0.0 | 0.0 | 0.8827 | 0.0 | 0.4503 | 0.5681 | 0.0 | nan | 0.0 | 0.3815 | 0.0 | 0.0 | 0.9397 | 0.8275 | 0.9484 | 0.0 | 0.0968 | 0.1999 | 0.0 | nan | 0.7203 | 0.8097 | 0.6881 | 0.5693 | 0.3405 | nan | 0.3293 | 0.4754 | 0.0 | 0.7846 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3863 | 0.0 | 0.0 | 0.6346 | 0.0 | 0.3557 | 0.4385 | 0.0 | nan | 0.0 | 0.2181 | 0.0 | 0.0 | 0.8287 | 0.7172 | 0.9189 | 0.0 | 0.0846 | 0.1578 | 0.0 |
| 0.367 | 112.96 | 6100 | 0.6609 | 0.2897 | 0.3661 | 0.7984 | nan | 0.7903 | 0.8284 | 0.8039 | 0.9016 | 0.2212 | nan | 0.4163 | 0.6816 | 0.0 | 0.9453 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7209 | 0.0 | 0.0 | 0.8372 | 0.0 | 0.4577 | 0.5511 | 0.0 | nan | 0.0 | 0.4283 | 0.0 | 0.0 | 0.9390 | 0.7875 | 0.9493 | 0.0 | 0.1399 | 0.3157 | 0.0 | nan | 0.7203 | 0.7408 | 0.7738 | 0.4105 | 0.2117 | nan | 0.3182 | 0.4784 | 0.0 | 0.7828 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3859 | 0.0 | 0.0 | 0.6672 | 0.0 | 0.3588 | 0.4378 | 0.0 | nan | 0.0 | 0.2244 | 0.0 | 0.0 | 0.8282 | 0.7032 | 0.9187 | 0.0 | 0.1137 | 0.1958 | 0.0 |
| 0.3638 | 114.81 | 6200 | 0.7997 | 0.2803 | 0.3592 | 0.7547 | nan | 0.8092 | 0.6782 | 0.8102 | 0.9284 | 0.2905 | nan | 0.3691 | 0.6185 | 0.0 | 0.9403 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7520 | 0.0 | 0.0 | 0.8609 | 0.0 | 0.4178 | 0.5567 | 0.0 | nan | 0.0 | 0.3931 | 0.0 | 0.0 | 0.9474 | 0.8770 | 0.9435 | 0.0000 | 0.0667 | 0.2347 | 0.0 | nan | 0.7091 | 0.6261 | 0.7837 | 0.2942 | 0.2753 | nan | 0.2928 | 0.4552 | 0.0 | 0.7808 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3801 | 0.0 | 0.0 | 0.6648 | 0.0 | 0.3421 | 0.4315 | 0.0 | nan | 0.0 | 0.2152 | 0.0 | 0.0 | 0.8297 | 0.7448 | 0.9168 | 0.0000 | 0.0595 | 0.1680 | 0.0 |
| 0.3654 | 116.67 | 6300 | 0.6019 | 0.2956 | 0.3645 | 0.8175 | nan | 0.8244 | 0.8533 | 0.6788 | 0.8927 | 0.3058 | nan | 0.4950 | 0.6003 | 0.0 | 0.9396 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6930 | 0.0 | 0.0 | 0.8964 | 0.0 | 0.3647 | 0.5196 | 0.0 | nan | 0.0 | 0.4113 | 0.0 | 0.0 | 0.9257 | 0.8551 | 0.9594 | 0.0 | 0.1310 | 0.3167 | 0.0 | nan | 0.7337 | 0.7732 | 0.6601 | 0.4748 | 0.2853 | nan | 0.3520 | 0.4685 | 0.0 | 0.7868 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4121 | 0.0 | 0.0 | 0.6708 | 0.0 | 0.3117 | 0.4434 | 0.0 | nan | 0.0 | 0.2326 | 0.0 | 0.0 | 0.8405 | 0.7541 | 0.9187 | 0.0 | 0.1205 | 0.2201 | 0.0 |
| 0.3652 | 118.52 | 6400 | 0.5981 | 0.2967 | 0.3649 | 0.8205 | nan | 0.7551 | 0.8909 | 0.6342 | 0.9054 | 0.3093 | nan | 0.4234 | 0.6313 | 0.0 | 0.9387 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6751 | 0.0 | 0.0 | 0.8700 | 0.0 | 0.4187 | 0.5633 | 0.0 | nan | 0.0 | 0.4465 | 0.0 | 0.0 | 0.9262 | 0.8528 | 0.9534 | 0.0002 | 0.1437 | 0.3398 | 0.0 | nan | 0.6956 | 0.7948 | 0.6246 | 0.4963 | 0.2861 | nan | 0.3171 | 0.4870 | 0.0 | 0.7941 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4467 | 0.0 | 0.0 | 0.6719 | 0.0 | 0.3338 | 0.4473 | 0.0 | nan | 0.0 | 0.2377 | 0.0 | 0.0 | 0.8417 | 0.7531 | 0.9198 | 0.0002 | 0.1302 | 0.2180 | 0.0 |
| 0.3559 | 120.37 | 6500 | 0.5780 | 0.3026 | 0.3668 | 0.8256 | nan | 0.7517 | 0.9024 | 0.8103 | 0.8905 | 0.3788 | nan | 0.3990 | 0.5648 | 0.0 | 0.9522 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6491 | 0.0 | 0.0 | 0.8623 | 0.0 | 0.5208 | 0.5227 | 0.0 | nan | 0.0 | 0.4095 | 0.0 | 0.0 | 0.9315 | 0.8073 | 0.9531 | 0.0 | 0.1367 | 0.2937 | 0.0 | nan | 0.6917 | 0.8084 | 0.7831 | 0.5645 | 0.3365 | nan | 0.3195 | 0.4446 | 0.0 | 0.7603 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4620 | 0.0 | 0.0 | 0.6310 | 0.0 | 0.3859 | 0.4599 | 0.0 | nan | 0.0 | 0.2286 | 0.0 | 0.0 | 0.8329 | 0.7236 | 0.9192 | 0.0 | 0.1259 | 0.2064 | 0.0 |
| 0.3348 | 122.22 | 6600 | 0.5522 | 0.3023 | 0.3735 | 0.8379 | nan | 0.8289 | 0.9088 | 0.6882 | 0.8947 | 0.3594 | nan | 0.4373 | 0.6918 | 0.0 | 0.9448 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7098 | 0.0 | 0.0 | 0.8356 | 0.0 | 0.5156 | 0.5832 | 0.0 | nan | 0.0 | 0.4059 | 0.0 | 0.0 | 0.9417 | 0.8359 | 0.9578 | 0.0009 | 0.1308 | 0.2812 | 0.0 | nan | 0.7433 | 0.8257 | 0.6716 | 0.5930 | 0.3306 | nan | 0.3517 | 0.4956 | 0.0 | 0.7897 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3747 | 0.0 | 0.0 | 0.6736 | 0.0 | 0.3802 | 0.4271 | 0.0 | nan | 0.0 | 0.2180 | 0.0 | 0.0 | 0.8323 | 0.7373 | 0.9200 | 0.0008 | 0.1171 | 0.1906 | 0.0 |
| 0.3653 | 124.07 | 6700 | 0.6070 | 0.2986 | 0.3679 | 0.8216 | nan | 0.6919 | 0.9133 | 0.8114 | 0.8786 | 0.3306 | nan | 0.4558 | 0.6517 | 0.0 | 0.9455 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7183 | 0.0 | 0.0 | 0.8672 | 0.0 | 0.5019 | 0.5472 | 0.0 | nan | 0.0 | 0.4162 | 0.0 | 0.0 | 0.9390 | 0.8019 | 0.9414 | 0.0 | 0.0957 | 0.2664 | 0.0 | nan | 0.6394 | 0.8000 | 0.7821 | 0.6011 | 0.3025 | nan | 0.3359 | 0.4969 | 0.0 | 0.7887 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3803 | 0.0 | 0.0 | 0.6386 | 0.0 | 0.3855 | 0.4427 | 0.0 | nan | 0.0 | 0.2268 | 0.0 | 0.0 | 0.8298 | 0.7136 | 0.9170 | 0.0 | 0.0886 | 0.1861 | 0.0 |
| 0.3216 | 125.93 | 6800 | 0.6091 | 0.3003 | 0.3729 | 0.8176 | nan | 0.8300 | 0.8429 | 0.8233 | 0.9193 | 0.3587 | nan | 0.4900 | 0.6837 | 0.0 | 0.9439 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7272 | 0.0 | 0.0 | 0.8781 | 0.0 | 0.4143 | 0.5307 | 0.0 | nan | 0.0 | 0.4051 | 0.0116 | 0.0 | 0.9314 | 0.8400 | 0.9539 | 0.0 | 0.0921 | 0.2558 | 0.0 | nan | 0.7584 | 0.7706 | 0.7892 | 0.4626 | 0.3268 | nan | 0.3678 | 0.5054 | 0.0 | 0.7811 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3947 | 0.0 | 0.0 | 0.6604 | 0.0 | 0.3306 | 0.4515 | 0.0 | nan | 0.0 | 0.2265 | 0.0116 | 0.0 | 0.8386 | 0.7409 | 0.9204 | 0.0 | 0.0850 | 0.1887 | 0.0 |
| 0.358 | 127.78 | 6900 | 0.5287 | 0.3110 | 0.3729 | 0.8465 | nan | 0.8062 | 0.9359 | 0.8173 | 0.8927 | 0.3346 | nan | 0.4527 | 0.6392 | 0.0 | 0.9354 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6945 | 0.0 | 0.0 | 0.8722 | 0.0 | 0.4896 | 0.5317 | 0.0 | nan | 0.0 | 0.4070 | 0.0 | 0.0 | 0.9436 | 0.8467 | 0.9449 | 0.0 | 0.1243 | 0.2646 | 0.0 | nan | 0.7567 | 0.8356 | 0.7873 | 0.6388 | 0.3087 | nan | 0.3575 | 0.4948 | 0.0 | 0.7958 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4146 | 0.0 | 0.0 | 0.6798 | 0.0 | 0.3797 | 0.4630 | 0.0 | nan | 0.0 | 0.2283 | 0.0 | 0.0 | 0.8356 | 0.7467 | 0.9182 | 0.0 | 0.1175 | 0.1940 | 0.0 |
| 0.3402 | 129.63 | 7000 | 0.6208 | 0.2946 | 0.3637 | 0.8141 | nan | 0.7658 | 0.8754 | 0.8158 | 0.9118 | 0.2322 | nan | 0.4017 | 0.6637 | 0.0 | 0.9438 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6933 | 0.0 | 0.0 | 0.8763 | 0.0 | 0.3895 | 0.5601 | 0.0 | nan | 0.0 | 0.4252 | 0.0043 | 0.0 | 0.9423 | 0.7810 | 0.9448 | 0.0000 | 0.1253 | 0.2865 | 0.0 | nan | 0.7060 | 0.7779 | 0.7885 | 0.4813 | 0.2236 | nan | 0.3133 | 0.4921 | 0.0 | 0.7863 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4236 | 0.0 | 0.0 | 0.6817 | 0.0 | 0.3292 | 0.4440 | 0.0 | nan | 0.0 | 0.2236 | 0.0043 | 0.0 | 0.8247 | 0.6964 | 0.9178 | 0.0000 | 0.1163 | 0.1976 | 0.0 |
| 0.3218 | 131.48 | 7100 | 0.5444 | 0.3108 | 0.3748 | 0.8443 | nan | 0.8296 | 0.9244 | 0.8276 | 0.8878 | 0.2774 | nan | 0.4782 | 0.6750 | 0.0 | 0.9366 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6983 | 0.0 | 0.0 | 0.8664 | 0.0 | 0.4743 | 0.5451 | 0.0 | nan | 0.0 | 0.4187 | 0.0113 | 0.0 | 0.9391 | 0.8642 | 0.9558 | 0.0 | 0.1166 | 0.2684 | 0.0 | nan | 0.7636 | 0.8260 | 0.7984 | 0.6281 | 0.2647 | nan | 0.3705 | 0.5066 | 0.0 | 0.8001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4217 | 0.0 | 0.0 | 0.6783 | 0.0 | 0.3686 | 0.4581 | 0.0 | nan | 0.0 | 0.2178 | 0.0113 | 0.0 | 0.8396 | 0.7666 | 0.9213 | 0.0 | 0.1113 | 0.1943 | 0.0 |
| 0.3413 | 133.33 | 7200 | 0.5473 | 0.3063 | 0.3680 | 0.8412 | nan | 0.8038 | 0.9272 | 0.7396 | 0.8885 | 0.2742 | nan | 0.4489 | 0.5761 | 0.0 | 0.9434 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6970 | 0.0 | 0.0 | 0.8722 | 0.0 | 0.5185 | 0.5545 | 0.0 | nan | 0.0 | 0.4060 | 0.0241 | 0.0 | 0.9384 | 0.8611 | 0.9453 | 0.0 | 0.1082 | 0.2489 | 0.0 | nan | 0.7450 | 0.8245 | 0.7280 | 0.6104 | 0.2595 | nan | 0.3532 | 0.4660 | 0.0 | 0.7846 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4313 | 0.0 | 0.0 | 0.6807 | 0.0 | 0.3896 | 0.4684 | 0.0 | nan | 0.0 | 0.2284 | 0.0241 | 0.0 | 0.8397 | 0.7610 | 0.9186 | 0.0 | 0.1022 | 0.1871 | 0.0 |
| 0.3463 | 135.19 | 7300 | 0.6341 | 0.2922 | 0.3603 | 0.8106 | nan | 0.8087 | 0.8519 | 0.8052 | 0.9145 | 0.2425 | nan | 0.3711 | 0.5676 | 0.0 | 0.9336 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7046 | 0.0 | 0.0 | 0.8888 | 0.0 | 0.3923 | 0.5815 | 0.0 | nan | 0.0 | 0.4055 | 0.0319 | 0.0 | 0.9344 | 0.8036 | 0.9503 | 0.0 | 0.1152 | 0.2276 | 0.0 | nan | 0.7410 | 0.7674 | 0.7870 | 0.4522 | 0.2330 | nan | 0.3152 | 0.4495 | 0.0 | 0.7851 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4247 | 0.0 | 0.0 | 0.6553 | 0.0 | 0.3108 | 0.4330 | 0.0 | nan | 0.0 | 0.2290 | 0.0319 | 0.0 | 0.8273 | 0.7106 | 0.9198 | 0.0 | 0.1051 | 0.1720 | 0.0 |
| 0.317 | 137.04 | 7400 | 0.5689 | 0.2996 | 0.3673 | 0.8346 | nan | 0.8380 | 0.9048 | 0.7202 | 0.8874 | 0.2300 | nan | 0.4682 | 0.6001 | 0.0 | 0.9282 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7278 | 0.0 | 0.0 | 0.8811 | 0.0 | 0.4430 | 0.5714 | 0.0 | nan | 0.0 | 0.4115 | 0.0148 | 0.0 | 0.9311 | 0.8477 | 0.9517 | 0.0 | 0.1019 | 0.2961 | 0.0 | nan | 0.7600 | 0.8107 | 0.7092 | 0.5843 | 0.2243 | nan | 0.3634 | 0.4741 | 0.0 | 0.7839 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3683 | 0.0 | 0.0 | 0.6667 | 0.0 | 0.3433 | 0.4519 | 0.0 | nan | 0.0 | 0.2331 | 0.0148 | 0.0 | 0.8387 | 0.7448 | 0.9201 | 0.0 | 0.0930 | 0.2020 | 0.0 |
| 0.3241 | 138.89 | 7500 | 0.5921 | 0.3030 | 0.3698 | 0.8264 | nan | 0.7560 | 0.9038 | 0.8054 | 0.8993 | 0.2921 | nan | 0.4358 | 0.6497 | 0.0 | 0.9426 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6843 | 0.0 | 0.0 | 0.8596 | 0.0 | 0.4666 | 0.5531 | 0.0 | nan | 0.0014 | 0.4125 | 0.0280 | 0.0 | 0.9419 | 0.8345 | 0.9468 | 0.0005 | 0.1478 | 0.2726 | 0.0 | nan | 0.6935 | 0.8021 | 0.7869 | 0.5437 | 0.2719 | nan | 0.3428 | 0.4933 | 0.0 | 0.7917 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4134 | 0.0 | 0.0 | 0.6707 | 0.0 | 0.3632 | 0.4528 | 0.0 | nan | 0.0014 | 0.2150 | 0.0280 | 0.0 | 0.8367 | 0.7422 | 0.9203 | 0.0005 | 0.1346 | 0.1914 | 0.0 |
| 0.3341 | 140.74 | 7600 | 0.5641 | 0.3038 | 0.3702 | 0.8325 | nan | 0.7624 | 0.9172 | 0.8114 | 0.8959 | 0.2940 | nan | 0.5063 | 0.6105 | 0.0 | 0.9434 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7179 | 0.0 | 0.0 | 0.8732 | 0.0 | 0.5230 | 0.5420 | 0.0 | nan | 0.0 | 0.4148 | 0.0425 | 0.0 | 0.9411 | 0.7719 | 0.9528 | 0.0 | 0.0840 | 0.2431 | 0.0 | nan | 0.7064 | 0.8174 | 0.7877 | 0.6132 | 0.2760 | nan | 0.3594 | 0.4823 | 0.0 | 0.7859 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4116 | 0.0 | 0.0 | 0.6715 | 0.0 | 0.3953 | 0.4613 | 0.0 | nan | 0.0 | 0.2236 | 0.0425 | 0.0 | 0.8241 | 0.6840 | 0.9219 | 0.0 | 0.0790 | 0.1794 | 0.0 |
| 0.3135 | 142.59 | 7700 | 0.5712 | 0.3062 | 0.3709 | 0.8300 | nan | 0.7952 | 0.8986 | 0.8100 | 0.8619 | 0.3084 | nan | 0.4715 | 0.6006 | 0.0 | 0.9439 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6837 | 0.0 | 0.0 | 0.8669 | 0.0 | 0.5083 | 0.5475 | 0.0 | nan | 0.0 | 0.4053 | 0.0384 | 0.0 | 0.9443 | 0.8124 | 0.9524 | 0.0 | 0.1181 | 0.3029 | 0.0 | nan | 0.7270 | 0.8042 | 0.7907 | 0.5385 | 0.2877 | nan | 0.3610 | 0.4689 | 0.0 | 0.7784 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4431 | 0.0 | 0.0 | 0.6764 | 0.0 | 0.3905 | 0.4659 | 0.0 | nan | 0.0 | 0.2280 | 0.0384 | 0.0 | 0.8312 | 0.7224 | 0.9227 | 0.0 | 0.1114 | 0.2117 | 0.0 |
| 0.2985 | 144.44 | 7800 | 0.5705 | 0.3063 | 0.3739 | 0.8331 | nan | 0.7844 | 0.9061 | 0.8011 | 0.8987 | 0.3105 | nan | 0.4674 | 0.6336 | 0.0 | 0.9448 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7174 | 0.0 | 0.0 | 0.8645 | 0.0 | 0.4836 | 0.5414 | 0.0 | nan | 0.0 | 0.4277 | 0.0445 | 0.0 | 0.9390 | 0.8448 | 0.9518 | 0.0003 | 0.1004 | 0.3014 | 0.0 | nan | 0.7238 | 0.8110 | 0.7871 | 0.5506 | 0.2869 | nan | 0.3545 | 0.4901 | 0.0 | 0.7879 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4047 | 0.0 | 0.0 | 0.6872 | 0.0 | 0.3776 | 0.4572 | 0.0 | nan | 0.0 | 0.2263 | 0.0445 | 0.0 | 0.8392 | 0.7464 | 0.9226 | 0.0003 | 0.0950 | 0.2101 | 0.0 |
| 0.3083 | 146.3 | 7900 | 0.6255 | 0.3029 | 0.3735 | 0.8173 | nan | 0.7919 | 0.8576 | 0.8118 | 0.9101 | 0.3017 | nan | 0.4374 | 0.6462 | 0.0 | 0.9461 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7137 | 0.0 | 0.0 | 0.8706 | 0.0 | 0.5111 | 0.5445 | 0.0 | nan | 0.0001 | 0.4282 | 0.0589 | 0.0 | 0.9317 | 0.8537 | 0.9628 | 0.0000 | 0.1030 | 0.2713 | 0.0 | nan | 0.7389 | 0.7675 | 0.7857 | 0.4623 | 0.2774 | nan | 0.3477 | 0.4815 | 0.0 | 0.7777 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4220 | 0.0 | 0.0 | 0.6797 | 0.0 | 0.3926 | 0.4652 | 0.0 | nan | 0.0001 | 0.2292 | 0.0588 | 0.0 | 0.8421 | 0.7549 | 0.9219 | 0.0000 | 0.0939 | 0.1926 | 0.0 |
| 0.3132 | 148.15 | 8000 | 0.6407 | 0.2987 | 0.3697 | 0.8084 | nan | 0.8056 | 0.8366 | 0.8045 | 0.9187 | 0.2881 | nan | 0.3901 | 0.6494 | 0.0 | 0.9456 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7065 | 0.0 | 0.0 | 0.8674 | 0.0 | 0.4835 | 0.5578 | 0.0 | nan | 0.0 | 0.4107 | 0.0690 | 0.0 | 0.9364 | 0.8069 | 0.9579 | 0.0 | 0.1392 | 0.2549 | 0.0 | nan | 0.7400 | 0.7511 | 0.7860 | 0.4288 | 0.2705 | nan | 0.3211 | 0.4907 | 0.0 | 0.7845 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4064 | 0.0 | 0.0 | 0.6776 | 0.0 | 0.3750 | 0.4463 | 0.0 | nan | 0.0 | 0.2323 | 0.0689 | 0.0 | 0.8346 | 0.7221 | 0.9215 | 0.0 | 0.1189 | 0.1827 | 0.0 |
| 0.3227 | 150.0 | 8100 | 0.6215 | 0.3010 | 0.3747 | 0.8154 | nan | 0.8072 | 0.8523 | 0.7987 | 0.9122 | 0.3387 | nan | 0.4049 | 0.6521 | 0.0 | 0.9464 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7268 | 0.0 | 0.0 | 0.8526 | 0.0 | 0.5301 | 0.5632 | 0.0 | nan | 0.0015 | 0.4353 | 0.0597 | 0.0 | 0.9352 | 0.8036 | 0.9574 | 0.0 | 0.1202 | 0.2916 | 0.0 | nan | 0.7319 | 0.7712 | 0.7839 | 0.4639 | 0.3115 | nan | 0.3235 | 0.4815 | 0.0 | 0.7813 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3954 | 0.0 | 0.0 | 0.6800 | 0.0 | 0.3930 | 0.4522 | 0.0 | nan | 0.0015 | 0.2349 | 0.0596 | 0.0 | 0.8319 | 0.7106 | 0.9225 | 0.0 | 0.1071 | 0.1947 | 0.0 |
| 0.3041 | 151.85 | 8200 | 0.6365 | 0.2982 | 0.3695 | 0.8091 | nan | 0.7813 | 0.8516 | 0.8100 | 0.9057 | 0.2989 | nan | 0.4138 | 0.6557 | 0.0 | 0.9422 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7155 | 0.0 | 0.0 | 0.8717 | 0.0 | 0.5273 | 0.5454 | 0.0 | nan | 0.0 | 0.4293 | 0.0595 | 0.0 | 0.9354 | 0.7484 | 0.9557 | 0.0 | 0.1301 | 0.2483 | 0.0 | nan | 0.7117 | 0.7612 | 0.7891 | 0.4543 | 0.2787 | nan | 0.3305 | 0.4950 | 0.0 | 0.7874 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4007 | 0.0 | 0.0 | 0.6772 | 0.0 | 0.3923 | 0.4632 | 0.0 | nan | 0.0 | 0.2342 | 0.0594 | 0.0 | 0.8230 | 0.6691 | 0.9227 | 0.0 | 0.1142 | 0.1800 | 0.0 |
| 0.3295 | 153.7 | 8300 | 0.5763 | 0.3064 | 0.3745 | 0.8319 | nan | 0.8091 | 0.9000 | 0.8155 | 0.8927 | 0.3048 | nan | 0.4385 | 0.6734 | 0.0 | 0.9391 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7114 | 0.0 | 0.0 | 0.8707 | 0.0 | 0.4884 | 0.5694 | 0.0 | nan | 0.0032 | 0.4179 | 0.0581 | 0.0 | 0.9385 | 0.8107 | 0.9552 | 0.0006 | 0.1316 | 0.2550 | 0.0 | nan | 0.7460 | 0.8059 | 0.7926 | 0.5582 | 0.2844 | nan | 0.3545 | 0.5009 | 0.0 | 0.7892 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4184 | 0.0 | 0.0 | 0.6741 | 0.0 | 0.3769 | 0.4455 | 0.0 | nan | 0.0032 | 0.2317 | 0.0581 | 0.0 | 0.8317 | 0.7120 | 0.9232 | 0.0005 | 0.1162 | 0.1807 | 0.0 |
| 0.3057 | 155.56 | 8400 | 0.6602 | 0.2967 | 0.3669 | 0.8053 | nan | 0.7862 | 0.8400 | 0.8012 | 0.9083 | 0.2761 | nan | 0.3977 | 0.6548 | 0.0 | 0.9399 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7262 | 0.0 | 0.0 | 0.8830 | 0.0 | 0.4582 | 0.5390 | 0.0 | nan | 0.0 | 0.4382 | 0.0696 | 0.0 | 0.9380 | 0.7676 | 0.9517 | 0.0 | 0.1204 | 0.2454 | 0.0 | nan | 0.7257 | 0.7493 | 0.7832 | 0.4331 | 0.2603 | nan | 0.3344 | 0.4909 | 0.0 | 0.7899 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4164 | 0.0 | 0.0 | 0.6631 | 0.0 | 0.3619 | 0.4610 | 0.0 | nan | 0.0 | 0.2358 | 0.0695 | 0.0 | 0.8268 | 0.6858 | 0.9224 | 0.0 | 0.1038 | 0.1798 | 0.0 |
| 0.3152 | 157.41 | 8500 | 0.6195 | 0.2986 | 0.3661 | 0.8115 | nan | 0.7876 | 0.8570 | 0.7994 | 0.8920 | 0.2891 | nan | 0.4035 | 0.6056 | 0.0 | 0.9417 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7090 | 0.0 | 0.0 | 0.8719 | 0.0 | 0.4959 | 0.5413 | 0.0 | nan | 0.0 | 0.4136 | 0.0566 | 0.0 | 0.9414 | 0.7717 | 0.9517 | 0.0 | 0.1198 | 0.2672 | 0.0 | nan | 0.7263 | 0.7633 | 0.7814 | 0.4550 | 0.2715 | nan | 0.3352 | 0.4721 | 0.0 | 0.7820 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4233 | 0.0 | 0.0 | 0.6671 | 0.0 | 0.3757 | 0.4677 | 0.0 | nan | 0.0 | 0.2407 | 0.0565 | 0.0 | 0.8255 | 0.6891 | 0.9216 | 0.0 | 0.1083 | 0.1912 | 0.0 |
| 0.3041 | 159.26 | 8600 | 0.5761 | 0.3071 | 0.3735 | 0.8297 | nan | 0.8077 | 0.8910 | 0.8053 | 0.8839 | 0.3353 | nan | 0.4603 | 0.6015 | 0.0 | 0.9489 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6966 | 0.0 | 0.0 | 0.8701 | 0.0 | 0.4933 | 0.5427 | 0.0 | nan | 0.0082 | 0.4481 | 0.0761 | 0.0 | 0.9301 | 0.8454 | 0.9544 | 0.0005 | 0.1062 | 0.2469 | 0.0 | nan | 0.7406 | 0.7982 | 0.7855 | 0.5184 | 0.3024 | nan | 0.3652 | 0.4669 | 0.0 | 0.7807 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4413 | 0.0 | 0.0 | 0.6853 | 0.0 | 0.3815 | 0.4553 | 0.0 | nan | 0.0082 | 0.2312 | 0.0759 | 0.0 | 0.8414 | 0.7507 | 0.9229 | 0.0005 | 0.0961 | 0.1775 | 0.0 |
| 0.3185 | 161.11 | 8700 | 0.5760 | 0.3058 | 0.3698 | 0.8296 | nan | 0.8094 | 0.8946 | 0.7956 | 0.8887 | 0.2897 | nan | 0.4223 | 0.5895 | 0.0 | 0.9357 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6889 | 0.0 | 0.0 | 0.8908 | 0.0 | 0.4640 | 0.5538 | 0.0 | nan | 0.0 | 0.4239 | 0.0692 | 0.0 | 0.9305 | 0.8418 | 0.9519 | 0.0001 | 0.1431 | 0.2510 | 0.0 | nan | 0.7455 | 0.7997 | 0.7789 | 0.5321 | 0.2717 | nan | 0.3473 | 0.4756 | 0.0 | 0.8013 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4311 | 0.0 | 0.0 | 0.6576 | 0.0 | 0.3605 | 0.4511 | 0.0 | nan | 0.0 | 0.2412 | 0.0691 | 0.0 | 0.8410 | 0.7459 | 0.9223 | 0.0001 | 0.1284 | 0.1839 | 0.0 |
| 0.2908 | 162.96 | 8800 | 0.5655 | 0.3075 | 0.3717 | 0.8316 | nan | 0.8548 | 0.8841 | 0.7997 | 0.8745 | 0.3118 | nan | 0.4610 | 0.6024 | 0.0 | 0.9410 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6931 | 0.0 | 0.0 | 0.8861 | 0.0 | 0.4534 | 0.5383 | 0.0 | nan | 0.0015 | 0.4266 | 0.0689 | 0.0 | 0.9366 | 0.8053 | 0.9554 | 0.0 | 0.1346 | 0.2641 | 0.0 | nan | 0.7595 | 0.8021 | 0.7817 | 0.5396 | 0.2919 | nan | 0.3717 | 0.4720 | 0.0 | 0.7905 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4462 | 0.0 | 0.0 | 0.6634 | 0.0 | 0.3562 | 0.4639 | 0.0 | nan | 0.0015 | 0.2393 | 0.0688 | 0.0 | 0.8346 | 0.7212 | 0.9232 | 0.0 | 0.1193 | 0.1923 | 0.0 |
| 0.3137 | 164.81 | 8900 | 0.5829 | 0.3094 | 0.3784 | 0.8279 | nan | 0.8476 | 0.8674 | 0.8118 | 0.9018 | 0.3237 | nan | 0.4801 | 0.6610 | 0.0 | 0.9387 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6851 | 0.0 | 0.0 | 0.8696 | 0.0 | 0.5109 | 0.5681 | 0.0 | nan | 0.0260 | 0.4276 | 0.0709 | 0.0 | 0.9330 | 0.8416 | 0.9554 | 0.0012 | 0.1333 | 0.2547 | 0.0 | nan | 0.7562 | 0.7893 | 0.7902 | 0.5123 | 0.3055 | nan | 0.3768 | 0.4921 | 0.0 | 0.7978 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 0.0 | 0.0 | 0.6754 | 0.0 | 0.3867 | 0.4408 | 0.0 | nan | 0.0260 | 0.2316 | 0.0708 | 0.0 | 0.8396 | 0.7418 | 0.9237 | 0.0010 | 0.1173 | 0.1797 | 0.0 |
| 0.3219 | 166.67 | 9000 | 0.5812 | 0.3065 | 0.3750 | 0.8278 | nan | 0.8354 | 0.8788 | 0.8041 | 0.8834 | 0.2990 | nan | 0.4594 | 0.6655 | 0.0 | 0.9395 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6980 | 0.0 | 0.0 | 0.8601 | 0.0 | 0.5069 | 0.5685 | 0.0 | nan | 0.0113 | 0.4156 | 0.0664 | 0.0 | 0.9440 | 0.8108 | 0.9521 | 0.0001 | 0.1291 | 0.2716 | 0.0 | nan | 0.7565 | 0.7902 | 0.7828 | 0.5219 | 0.2845 | nan | 0.3688 | 0.4922 | 0.0 | 0.7966 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4240 | 0.0 | 0.0 | 0.6768 | 0.0 | 0.3877 | 0.4481 | 0.0 | nan | 0.0113 | 0.2327 | 0.0664 | 0.0 | 0.8308 | 0.7154 | 0.9230 | 0.0001 | 0.1124 | 0.1869 | 0.0 |
| 0.3181 | 168.52 | 9100 | 0.5632 | 0.3112 | 0.3765 | 0.8367 | nan | 0.8125 | 0.9072 | 0.8124 | 0.8963 | 0.3044 | nan | 0.4647 | 0.6697 | 0.0 | 0.9359 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6879 | 0.0 | 0.0 | 0.8771 | 0.0 | 0.5085 | 0.5560 | 0.0 | nan | 0.0039 | 0.4244 | 0.0703 | 0.0 | 0.9367 | 0.8280 | 0.9532 | 0.0 | 0.1309 | 0.2672 | 0.0 | nan | 0.7474 | 0.8113 | 0.7892 | 0.5707 | 0.2882 | nan | 0.3704 | 0.5031 | 0.0 | 0.7988 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4314 | 0.0 | 0.0 | 0.6778 | 0.0 | 0.3900 | 0.4604 | 0.0 | nan | 0.0039 | 0.2372 | 0.0702 | 0.0 | 0.8390 | 0.7407 | 0.9234 | 0.0 | 0.1173 | 0.1872 | 0.0 |
| 0.3009 | 170.37 | 9200 | 0.5671 | 0.3095 | 0.3743 | 0.8326 | nan | 0.7939 | 0.9018 | 0.7926 | 0.8902 | 0.3160 | nan | 0.4603 | 0.6415 | 0.0 | 0.9414 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6804 | 0.0 | 0.0 | 0.8815 | 0.0 | 0.4974 | 0.5528 | 0.0 | nan | 0.0000 | 0.4233 | 0.0749 | 0.0 | 0.9339 | 0.8322 | 0.9566 | 0.0 | 0.1296 | 0.2770 | 0.0 | nan | 0.7279 | 0.8041 | 0.7736 | 0.5652 | 0.2951 | nan | 0.3698 | 0.4960 | 0.0 | 0.7938 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4395 | 0.0 | 0.0 | 0.6714 | 0.0 | 0.3837 | 0.4627 | 0.0 | nan | 0.0000 | 0.2368 | 0.0747 | 0.0 | 0.8379 | 0.7389 | 0.9235 | 0.0 | 0.1161 | 0.1946 | 0.0 |
| 0.2873 | 172.22 | 9300 | 0.6113 | 0.3047 | 0.3720 | 0.8176 | nan | 0.8107 | 0.8536 | 0.7603 | 0.8949 | 0.3232 | nan | 0.4761 | 0.6422 | 0.0 | 0.9415 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6799 | 0.0 | 0.0 | 0.8720 | 0.0 | 0.5023 | 0.5457 | 0.0 | nan | 0.0034 | 0.4146 | 0.0717 | 0.0 | 0.9439 | 0.8035 | 0.9521 | 0.0 | 0.1299 | 0.2839 | 0.0 | nan | 0.7355 | 0.7675 | 0.7422 | 0.4826 | 0.3027 | nan | 0.3715 | 0.4933 | 0.0 | 0.7896 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4421 | 0.0 | 0.0 | 0.6666 | 0.0 | 0.3881 | 0.4723 | 0.0 | nan | 0.0034 | 0.2350 | 0.0716 | 0.0 | 0.8305 | 0.7183 | 0.9229 | 0.0 | 0.1152 | 0.1992 | 0.0 |
| 0.2856 | 174.07 | 9400 | 0.6091 | 0.3045 | 0.3713 | 0.8183 | nan | 0.8177 | 0.8508 | 0.7884 | 0.9070 | 0.3274 | nan | 0.4412 | 0.5971 | 0.0 | 0.9437 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6904 | 0.0 | 0.0 | 0.8760 | 0.0 | 0.5037 | 0.5471 | 0.0 | nan | 0.0023 | 0.4093 | 0.0729 | 0.0 | 0.9395 | 0.8289 | 0.9513 | 0.0000 | 0.1123 | 0.2745 | 0.0 | nan | 0.7401 | 0.7694 | 0.7705 | 0.4745 | 0.3070 | nan | 0.3570 | 0.4797 | 0.0 | 0.7901 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4370 | 0.0 | 0.0 | 0.6642 | 0.0 | 0.3879 | 0.4663 | 0.0 | nan | 0.0023 | 0.2356 | 0.0728 | 0.0 | 0.8358 | 0.7333 | 0.9230 | 0.0000 | 0.1034 | 0.1937 | 0.0 |
| 0.2803 | 175.93 | 9500 | 0.6404 | 0.3009 | 0.3704 | 0.8084 | nan | 0.8365 | 0.8208 | 0.7833 | 0.9062 | 0.3050 | nan | 0.4405 | 0.6203 | 0.0 | 0.9443 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6940 | 0.0 | 0.0 | 0.8667 | 0.0 | 0.5055 | 0.5494 | 0.0 | nan | 0.0084 | 0.4148 | 0.0772 | 0.0 | 0.9424 | 0.8074 | 0.9551 | 0.0001 | 0.1077 | 0.2664 | 0.0 | nan | 0.7454 | 0.7459 | 0.7680 | 0.4316 | 0.2897 | nan | 0.3571 | 0.4866 | 0.0 | 0.7930 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4255 | 0.0 | 0.0 | 0.6652 | 0.0 | 0.3877 | 0.4601 | 0.0 | nan | 0.0084 | 0.2306 | 0.0771 | 0.0 | 0.8314 | 0.7178 | 0.9235 | 0.0001 | 0.0969 | 0.1889 | 0.0 |
| 0.2924 | 177.78 | 9600 | 0.6156 | 0.3045 | 0.3723 | 0.8156 | nan | 0.8293 | 0.8420 | 0.8051 | 0.8964 | 0.3365 | nan | 0.4651 | 0.6281 | 0.0 | 0.9443 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6806 | 0.0 | 0.0 | 0.8777 | 0.0 | 0.4957 | 0.5434 | 0.0 | nan | 0.0043 | 0.4293 | 0.0774 | 0.0 | 0.9387 | 0.7942 | 0.9562 | 0.0 | 0.1178 | 0.2514 | 0.0 | nan | 0.7508 | 0.7606 | 0.7848 | 0.4617 | 0.3134 | nan | 0.3712 | 0.4903 | 0.0 | 0.7912 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4384 | 0.0 | 0.0 | 0.6666 | 0.0 | 0.3850 | 0.4648 | 0.0 | nan | 0.0043 | 0.2308 | 0.0773 | 0.0 | 0.8320 | 0.7126 | 0.9232 | 0.0 | 0.1028 | 0.1836 | 0.0 |
| 0.2911 | 179.63 | 9700 | 0.6039 | 0.3051 | 0.3743 | 0.8197 | nan | 0.8161 | 0.8573 | 0.8009 | 0.9013 | 0.3091 | nan | 0.4597 | 0.6407 | 0.0 | 0.9406 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7191 | 0.0 | 0.0 | 0.8787 | 0.0 | 0.5007 | 0.5561 | 0.0 | nan | 0.0046 | 0.4187 | 0.0825 | 0.0 | 0.9325 | 0.8335 | 0.9578 | 0.0000 | 0.1036 | 0.2642 | 0.0 | nan | 0.7434 | 0.7687 | 0.7825 | 0.4751 | 0.2917 | nan | 0.3667 | 0.4994 | 0.0 | 0.7998 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4127 | 0.0 | 0.0 | 0.6761 | 0.0 | 0.3878 | 0.4561 | 0.0 | nan | 0.0046 | 0.2352 | 0.0823 | 0.0 | 0.8393 | 0.7401 | 0.9235 | 0.0000 | 0.0883 | 0.1885 | 0.0 |
| 0.3093 | 181.48 | 9800 | 0.6244 | 0.3021 | 0.3707 | 0.8132 | nan | 0.8240 | 0.8367 | 0.7819 | 0.9031 | 0.3158 | nan | 0.4523 | 0.6336 | 0.0 | 0.9419 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7047 | 0.0 | 0.0 | 0.8782 | 0.0 | 0.5024 | 0.5478 | 0.0 | nan | 0.0 | 0.4039 | 0.0761 | 0.0 | 0.9422 | 0.8036 | 0.9524 | 0.0 | 0.0992 | 0.2629 | 0.0 | nan | 0.7414 | 0.7575 | 0.7666 | 0.4537 | 0.2990 | nan | 0.3642 | 0.4913 | 0.0 | 0.7906 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4261 | 0.0 | 0.0 | 0.6655 | 0.0 | 0.3892 | 0.4639 | 0.0 | nan | 0.0 | 0.2339 | 0.0760 | 0.0 | 0.8311 | 0.7168 | 0.9226 | 0.0 | 0.0873 | 0.1892 | 0.0 |
| 0.3194 | 183.33 | 9900 | 0.6384 | 0.3015 | 0.3707 | 0.8106 | nan | 0.8269 | 0.8295 | 0.7809 | 0.9036 | 0.3169 | nan | 0.4373 | 0.6407 | 0.0 | 0.9394 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7004 | 0.0 | 0.0 | 0.8774 | 0.0 | 0.4936 | 0.5511 | 0.0 | nan | 0.0004 | 0.4210 | 0.0726 | 0.0 | 0.9434 | 0.8072 | 0.9462 | 0.0 | 0.1149 | 0.2605 | 0.0 | nan | 0.7423 | 0.7508 | 0.7639 | 0.4418 | 0.2988 | nan | 0.3584 | 0.4963 | 0.0 | 0.7976 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4212 | 0.0 | 0.0 | 0.6662 | 0.0 | 0.3830 | 0.4618 | 0.0 | nan | 0.0004 | 0.2347 | 0.0725 | 0.0 | 0.8311 | 0.7208 | 0.9214 | 0.0 | 0.0993 | 0.1875 | 0.0 |
| 0.3174 | 185.19 | 10000 | 0.6350 | 0.3022 | 0.3724 | 0.8117 | nan | 0.8240 | 0.8308 | 0.7789 | 0.9052 | 0.3152 | nan | 0.4703 | 0.6444 | 0.0 | 0.9424 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7116 | 0.0 | 0.0 | 0.8716 | 0.0 | 0.4736 | 0.5408 | 0.0 | nan | 0.0048 | 0.4202 | 0.0754 | 0.0 | 0.9437 | 0.8196 | 0.9525 | 0.0 | 0.1041 | 0.2872 | 0.0 | nan | 0.7413 | 0.7520 | 0.7629 | 0.4453 | 0.2976 | nan | 0.3701 | 0.4953 | 0.0 | 0.7962 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4152 | 0.0 | 0.0 | 0.6712 | 0.0 | 0.3749 | 0.4613 | 0.0 | nan | 0.0048 | 0.2337 | 0.0753 | 0.0 | 0.8324 | 0.7277 | 0.9234 | 0.0 | 0.0913 | 0.1997 | 0.0 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
Matthijs/deeplabv3-mobilevit-small |
# MobileViT + DeepLabV3 (small-sized model)
MobileViT model pre-trained on PASCAL VOC at resolution 512x512. It was introduced in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE).
Disclaimer: The team releasing MobileViT did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
MobileViT is a light-weight, low-latency convolutional neural network that combines MobileNetV2-style layers with a new block that replaces local processing in convolutions with global processing using transformers. As with ViT (Vision Transformer), the image data is converted into flattened patches before it is processed by the transformer layers. Afterwards, however, the patches are "unflattened" back into feature maps. This allows the MobileViT-block to be placed anywhere inside a CNN. MobileViT does not require any positional embeddings.
The model in this repo adds a [DeepLabV3](https://arxiv.org/abs/1706.05587) head to the MobileViT backbone for semantic segmentation.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilevit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MobileViTFeatureExtractor, MobileViTForSemanticSegmentation
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileViTFeatureExtractor.from_pretrained('Matthijs/deeplabv3-mobilevit-small')
model = MobileViTForSemanticSegmentation.from_pretrained('Matthijs/deeplabv3-mobilevit-small')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_mask = logits.argmax(1).squeeze(0)
```
Currently, both the feature extractor and model support PyTorch.
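The segmentation logits usually have a lower spatial resolution than the input image. As a minimal follow-up sketch (reusing `logits` and `image` from the snippet above, and assuming bilinear upsampling is acceptable for your use case), they can be upsampled to the original image size before taking the per-pixel argmax:
```python
import torch

# Upsample the logits to the original (height, width) of the input image, then
# take the per-pixel argmax to obtain a full-resolution class-index mask.
upsampled_logits = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
predicted_mask = upsampled_logits.argmax(1).squeeze(0)
```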
## Training data
The MobileViT + DeepLabV3 model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes, and then fine-tuned on the [PASCAL VOC2012](http://host.robots.ox.ac.uk/pascal/VOC/) dataset.
## Training procedure
### Preprocessing
At inference time, images are center-cropped at 512x512. Pixels are normalized to the range [0, 1]. Images are expected to be in BGR pixel order, not RGB.
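For illustration only, the preprocessing described above corresponds roughly to the following hand-rolled steps (a sketch with a placeholder file name; in practice `MobileViTFeatureExtractor` takes care of this):
```python
import numpy as np
from PIL import Image
from torchvision import transforms

image = Image.open("example.jpg").convert("RGB")      # "example.jpg" is a placeholder
image = transforms.CenterCrop(512)(image)             # center-crop to 512x512
pixels = np.asarray(image, dtype=np.float32) / 255.0  # normalize pixels to [0, 1]
pixels = pixels[:, :, ::-1]                           # reorder channels from RGB to BGR
pixels = np.ascontiguousarray(pixels.transpose(2, 0, 1))[None]  # shape (1, 3, 512, 512)
```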
### Pretraining
The MobileViT networks are trained from scratch for 300 epochs on ImageNet-1k on 8 NVIDIA GPUs with an effective batch size of 1024 and learning rate warmup for 3k steps, followed by cosine annealing. Also used were label smoothing cross-entropy loss and L2 weight decay. Training resolution varies from 160x160 to 320x320, using multi-scale sampling.
To obtain the DeepLabV3 model, MobileViT was fine-tuned on the PASCAL VOC dataset using 4 NVIDIA A100 GPUs.
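As an illustration of the schedule described above (linear warmup for 3k steps followed by cosine annealing), a minimal sketch is given below; the base and minimum learning rates are assumptions, not the values used to train MobileViT:
```python
import math

def lr_at_step(step, total_steps, warmup_steps=3000, base_lr=2e-3, min_lr=2e-4):
    """Linear warmup followed by cosine annealing (illustrative values only)."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps  # linear warmup
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```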
## Evaluation results
| Model | PASCAL VOC mIOU | # params | URL |
|------------------|-----------------|-----------|--------------------------------------------------------------|
| MobileViT-XXS | 73.6 | 1.9 M | https://huggingface.co/Matthijs/deeplabv3-mobilevit-xx-small |
| MobileViT-XS | 77.1 | 2.9 M | https://huggingface.co/Matthijs/deeplabv3-mobilevit-x-small |
| **MobileViT-S** | **79.1** | **6.4 M** | https://huggingface.co/Matthijs/deeplabv3-mobilevit-small |
### BibTeX entry and citation info
```bibtex
@inproceedings{vision-transformer,
title = {MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer},
author = {Sachin Mehta and Mohammad Rastegari},
year = {2022},
URL = {https://arxiv.org/abs/2110.02178}
}
```
| [
"background",
"aeroplane",
"bicycle",
"bird",
"boat",
"bottle",
"bus",
"car",
"cat",
"chair",
"cow",
"diningtable",
"dog",
"horse",
"motorbike",
"person",
"pottedplant",
"sheep",
"sofa",
"train",
"tvmonitor"
] |
reannayang/segformer-b0-pavement |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-pavement
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the reannayang/FL_pavement dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4165
- Mean Iou: 0.6318
- Mean Accuracy: 0.9700
- Overall Accuracy: 0.9738
- Per Category Iou: [0.0, 0.964166382973358, 0.9809231860559384, 0.0, 0.9295139919583345, 0.9164463823409184]
- Per Category Accuracy: [nan, 0.9643001261034048, 0.9983497924348297, nan, 0.995031342981772, 0.9223532638507954]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` follows the list):
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
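For illustration, these settings map roughly onto the following `TrainingArguments`; the `output_dir` and the assumption of a single training device are not taken from the original run:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="segformer-b0-pavement",  # assumed output directory
    learning_rate=6e-5,
    per_device_train_batch_size=2,       # assuming a single training device
    per_device_eval_batch_size=2,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
)
```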
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------:|
| 1.0651 | 10.0 | 20 | 1.3005 | 0.5967 | 0.9512 | 0.9534 | [0.0, 0.9462421185372005, 0.9681701711239586, 0.0, 0.7994398965962947, 0.8662896799897185] | [nan, 0.9462421185372005, 0.9693809143181291, nan, 0.9648149753011526, 0.9243828853538124] |
| 0.5732 | 20.0 | 40 | 0.6626 | 0.6287 | 0.9702 | 0.9760 | [0.0, 0.975246652572234, 0.985446932366533, 0.0, 0.9010974339804011, 0.9103918683964157] | [nan, 0.9772635561160151, 0.9952040842637238, nan, 0.9748678395008233, 0.9334887547997806] |
| 0.6987 | 30.0 | 60 | 0.4319 | 0.6317 | 0.9705 | 0.9758 | [0.0, 0.9709705045212967, 0.9798115236227942, 0.0, 0.9255918522130127, 0.9139245313729214] | [nan, 0.9722194199243379, 0.9986205296134905, nan, 0.9871161568015715, 0.924026330224904] |
| 0.6915 | 40.0 | 80 | 0.4382 | 0.6237 | 0.9634 | 0.9692 | [0.0, 0.9611727616645649, 0.9725125142706595, 0.0, 0.9147983251179308, 0.8937433316006894] | [nan, 0.9611727616645649, 0.9993811721630611, nan, 0.9971690210012422, 0.896023038946791] |
| 0.4373 | 50.0 | 100 | 0.4165 | 0.6318 | 0.9700 | 0.9738 | [0.0, 0.964166382973358, 0.9809231860559384, 0.0, 0.9295139919583345, 0.9164463823409184] | [nan, 0.9643001261034048, 0.9983497924348297, nan, 0.995031342981772, 0.9223532638507954] |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.7.1
- Datasets 2.2.1
- Tokenizers 0.12.1
| [
"rumble strips",
"tree",
"through lane",
"car",
"grass",
"curb"
] |
jakka/segformer-b0-finetuned-segments-sidewalk-4 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-4
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set (a sketch of how such metrics are typically computed follows the list):
- Loss: 1.6258
- Mean Iou: 0.1481
- Mean Accuracy: 0.1991
- Overall Accuracy: 0.7316
- Per Category Iou: [nan, 0.4971884694242825, 0.7844619900838784, 0.0, 0.10165655377640956, 0.007428563507709108, nan, 4.566798099115959e-06, 0.0, 0.0, 0.5570746278221521, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.534278997386317, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.7557693923373933, 0.5270379031768208, 0.8254522211471568, 0.0, 0.0, 0.0, 0.0]
- Per Category Accuracy: [nan, 0.8698779680369205, 0.9122325676343133, 0.0, 0.10179229832932858, 0.007508413919135004, nan, 4.566798099115959e-06, 0.0, 0.0, 0.8968168359562617, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.8492049383357001, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.9388033874781816, 0.6627890453030717, 0.9334458854084583, 0.0, 0.0, 0.0, 0.0]
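For illustration, metrics of this kind are typically obtained with the `mean_iou` metric. A minimal sketch follows, assuming the standalone `evaluate` library is available; the tiny label maps are placeholders, `num_labels=35` matches the label list at the end of this card, and `ignore_index=0` is an assumption:
```python
import numpy as np
import evaluate  # assumption: the `evaluate` library is installed

metric = evaluate.load("mean_iou")
predictions = [np.array([[1, 1], [2, 2]])]  # placeholder predicted label map
references = [np.array([[1, 1], [2, 3]])]   # placeholder ground-truth label map
results = metric.compute(
    predictions=predictions,
    references=references,
    num_labels=35,    # number of classes in the label list below
    ignore_index=0,   # assumption: the "unlabeled" class is ignored
    reduce_labels=False,
)
print(results["mean_iou"], results["mean_accuracy"], results["overall_accuracy"])
```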
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 1.7912 | 1.0 | 25 | 1.6392 | 0.1412 | 0.1911 | 0.7210 | [nan, 0.48942576059104514, 0.7754689525048201, 0.0, 0.031932013148008094, 0.004348266117522573, nan, 1.5527099355168697e-05, 0.0, 0.0, 0.5356571432088642, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.5243044552616699, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.7355207837531991, 0.4479559177066271, 0.8315839315332364, 0.0, 0.0, 0.0, 0.0] | [nan, 0.8476069713517648, 0.9129050708992534, 0.0, 0.03194435645315849, 0.004370669306327572, nan, 1.552711353699426e-05, 0.0, 0.0, 0.897824434787493, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.8555478632753987, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.9510113270409175, 0.5116786406550935, 0.9122706949370997, 0.0, 0.0, 0.0, 0.0] |
| 1.7531 | 2.0 | 50 | 1.6258 | 0.1481 | 0.1991 | 0.7316 | [nan, 0.4971884694242825, 0.7844619900838784, 0.0, 0.10165655377640956, 0.007428563507709108, nan, 4.566798099115959e-06, 0.0, 0.0, 0.5570746278221521, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.534278997386317, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.7557693923373933, 0.5270379031768208, 0.8254522211471568, 0.0, 0.0, 0.0, 0.0] | [nan, 0.8698779680369205, 0.9122325676343133, 0.0, 0.10179229832932858, 0.007508413919135004, nan, 4.566798099115959e-06, 0.0, 0.0, 0.8968168359562617, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.8492049383357001, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.9388033874781816, 0.6627890453030717, 0.9334458854084583, 0.0, 0.0, 0.0, 0.0] |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
apple/deeplabv3-mobilevit-small |
# MobileViT + DeepLabV3 (small-sized model)
MobileViT model pre-trained on PASCAL VOC at resolution 512x512. It was introduced in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE).
Disclaimer: The team releasing MobileViT did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
MobileViT is a light-weight, low-latency convolutional neural network that combines MobileNetV2-style layers with a new block that replaces local processing in convolutions with global processing using transformers. As with ViT (Vision Transformer), the image data is converted into flattened patches before it is processed by the transformer layers. Afterwards, the patches are "unflattened" back into feature maps. This allows the MobileViT-block to be placed anywhere inside a CNN. MobileViT does not require any positional embeddings.
The model in this repo adds a [DeepLabV3](https://arxiv.org/abs/1706.05587) head to the MobileViT backbone for semantic segmentation.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilevit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MobileViTFeatureExtractor, MobileViTForSemanticSegmentation
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileViTFeatureExtractor.from_pretrained("apple/deeplabv3-mobilevit-small")
model = MobileViTForSemanticSegmentation.from_pretrained("apple/deeplabv3-mobilevit-small")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_mask = logits.argmax(1).squeeze(0)
```
Currently, both the feature extractor and model support PyTorch.
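As a small follow-up sketch (assuming `predicted_mask` and `model` from the snippet above), you can list which PASCAL VOC classes were predicted and how many pixels each covers:
```python
import torch

values, counts = torch.unique(predicted_mask, return_counts=True)
for class_id, num_pixels in zip(values.tolist(), counts.tolist()):
    print(model.config.id2label[class_id], num_pixels)
```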
## Training data
The MobileViT + DeepLabV3 model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes, and then fine-tuned on the [PASCAL VOC2012](http://host.robots.ox.ac.uk/pascal/VOC/) dataset.
## Training procedure
### Preprocessing
At inference time, images are center-cropped at 512x512. Pixels are normalized to the range [0, 1]. Images are expected to be in BGR pixel order, not RGB.
### Pretraining
The MobileViT networks are trained from scratch for 300 epochs on ImageNet-1k on 8 NVIDIA GPUs with an effective batch size of 1024 and learning rate warmup for 3k steps, followed by cosine annealing. Also used were label smoothing cross-entropy loss and L2 weight decay. Training resolution varies from 160x160 to 320x320, using multi-scale sampling.
To obtain the DeepLabV3 model, MobileViT was fine-tuned on the PASCAL VOC dataset using 4 NVIDIA A100 GPUs.
## Evaluation results
| Model | PASCAL VOC mIOU | # params | URL |
|------------------|-----------------|-----------|-----------------------------------------------------------|
| MobileViT-XXS | 73.6 | 1.9 M | https://huggingface.co/apple/deeplabv3-mobilevit-xx-small |
| MobileViT-XS | 77.1 | 2.9 M | https://huggingface.co/apple/deeplabv3-mobilevit-x-small |
| **MobileViT-S** | **79.1** | **6.4 M** | https://huggingface.co/apple/deeplabv3-mobilevit-small |
### BibTeX entry and citation info
```bibtex
@inproceedings{vision-transformer,
title = {MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer},
author = {Sachin Mehta and Mohammad Rastegari},
year = {2022},
URL = {https://arxiv.org/abs/2110.02178}
}
```
| [
"background",
"aeroplane",
"bicycle",
"bird",
"boat",
"bottle",
"bus",
"car",
"cat",
"chair",
"cow",
"diningtable",
"dog",
"horse",
"motorbike",
"person",
"pottedplant",
"sheep",
"sofa",
"train",
"tvmonitor"
] |
apple/deeplabv3-mobilevit-x-small |
# MobileViT + DeepLabV3 (extra small-sized model)
MobileViT model pre-trained on PASCAL VOC at resolution 512x512. It was introduced in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE).
Disclaimer: The team releasing MobileViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MobileViT is a light-weight, low latency convolutional neural network that combines MobileNetV2-style layers with a new block that replaces local processing in convolutions with global processing using transformers. As with ViT (Vision Transformer), the image data is converted into flattened patches before it is processed by the transformer layers. Afterwards, the patches are "unflattened" back into feature maps. This allows the MobileViT-block to be placed anywhere inside a CNN. MobileViT does not require any positional embeddings.
The model in this repo adds a [DeepLabV3](https://arxiv.org/abs/1706.05587) head to the MobileViT backbone for semantic segmentation.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilevit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MobileViTFeatureExtractor, MobileViTForSemanticSegmentation
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileViTFeatureExtractor.from_pretrained("apple/deeplabv3-mobilevit-x-small")
model = MobileViTForSemanticSegmentation.from_pretrained("apple/deeplabv3-mobilevit-x-small")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_mask = logits.argmax(1).squeeze(0)
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The MobileViT + DeepLabV3 model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes, and then fine-tuned on the [PASCAL VOC2012](http://host.robots.ox.ac.uk/pascal/VOC/) dataset.
## Training procedure
### Preprocessing
At inference time, images are center-cropped at 512x512. Pixels are normalized to the range [0, 1]. Images are expected to be in BGR pixel order, not RGB.
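For reference, the documented preprocessing can be approximated by hand as in the sketch below. It only mirrors the steps described above (crop to 512x512, scale to [0, 1], RGB to BGR); the exact resize rule applied by `MobileViTFeatureExtractor` may differ, so prefer the feature extractor itself in practice.
```python
import numpy as np
import requests
import torch
from PIL import Image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Resize so the shorter side is 512, then center-crop to 512x512.
width, height = image.size
scale = 512 / min(width, height)
image = image.resize((round(width * scale), round(height * scale)))
width, height = image.size
left, top = (width - 512) // 2, (height - 512) // 2
image = image.crop((left, top, left + 512, top + 512))

# Scale pixels to [0, 1] and flip the channel order from RGB to BGR.
pixels = np.asarray(image, dtype=np.float32) / 255.0
pixels = pixels[..., ::-1]

# Channel-first tensor with a batch dimension, as the model expects.
pixel_values = torch.from_numpy(pixels.copy()).permute(2, 0, 1).unsqueeze(0)
print(pixel_values.shape)  # torch.Size([1, 3, 512, 512])
```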
### Pretraining
The MobileViT networks are trained from scratch for 300 epochs on ImageNet-1k on 8 NVIDIA GPUs with an effective batch size of 1024 and learning rate warmup for 3k steps, followed by cosine annealing. Also used were label smoothing cross-entropy loss and L2 weight decay. Training resolution varies from 160x160 to 320x320, using multi-scale sampling.
To obtain the DeepLabV3 model, MobileViT was fine-tuned on the PASCAL VOC dataset using 4 NVIDIA A100 GPUs.
## Evaluation results
| Model | PASCAL VOC mIOU | # params | URL |
|------------------|-----------------|-----------|-----------------------------------------------------------|
| MobileViT-XXS | 73.6 | 1.9 M | https://huggingface.co/apple/deeplabv3-mobilevit-xx-small |
| **MobileViT-XS** | **77.1** | **2.9 M** | https://huggingface.co/apple/deeplabv3-mobilevit-x-small |
| MobileViT-S | 79.1 | 6.4 M | https://huggingface.co/apple/deeplabv3-mobilevit-small |
### BibTeX entry and citation info
```bibtex
@inproceedings{vision-transformer,
title = {MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer},
author = {Sachin Mehta and Mohammad Rastegari},
year = {2022},
URL = {https://arxiv.org/abs/2110.02178}
}
```
| [
"background",
"aeroplane",
"bicycle",
"bird",
"boat",
"bottle",
"bus",
"car",
"cat",
"chair",
"cow",
"diningtable",
"dog",
"horse",
"motorbike",
"person",
"pottedplant",
"sheep",
"sofa",
"train",
"tvmonitor"
] |
apple/deeplabv3-mobilevit-xx-small |
# MobileViT + DeepLabV3 (extra extra small-sized model)
MobileViT model pre-trained on PASCAL VOC at resolution 512x512. It was introduced in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE).
Disclaimer: The team releasing MobileViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MobileViT is a light-weight, low latency convolutional neural network that combines MobileNetV2-style layers with a new block that replaces local processing in convolutions with global processing using transformers. As with ViT (Vision Transformer), the image data is converted into flattened patches before it is processed by the transformer layers. Afterwards, the patches are "unflattened" back into feature maps. This allows the MobileViT-block to be placed anywhere inside a CNN. MobileViT does not require any positional embeddings.
The model in this repo adds a [DeepLabV3](https://arxiv.org/abs/1706.05587) head to the MobileViT backbone for semantic segmentation.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilevit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MobileViTFeatureExtractor, MobileViTForSemanticSegmentation
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileViTFeatureExtractor.from_pretrained("apple/deeplabv3-mobilevit-xx-small")
model = MobileViTForSemanticSegmentation.from_pretrained("apple/deeplabv3-mobilevit-xx-small")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_mask = logits.argmax(1).squeeze(0)
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The MobileViT + DeepLabV3 model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes, and then fine-tuned on the [PASCAL VOC2012](http://host.robots.ox.ac.uk/pascal/VOC/) dataset.
## Training procedure
### Preprocessing
At inference time, images are center-cropped at 512x512. Pixels are normalized to the range [0, 1]. Images are expected to be in BGR pixel order, not RGB.
### Pretraining
The MobileViT networks are trained from scratch for 300 epochs on ImageNet-1k on 8 NVIDIA GPUs with an effective batch size of 1024 and learning rate warmup for 3k steps, followed by cosine annealing. Also used were label smoothing cross-entropy loss and L2 weight decay. Training resolution varies from 160x160 to 320x320, using multi-scale sampling.
To obtain the DeepLabV3 model, MobileViT was fine-tuned on the PASCAL VOC dataset using 4 NVIDIA A100 GPUs.
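The ingredients of the ImageNet-1k recipe above (learning rate warmup for 3k steps followed by cosine annealing, label-smoothed cross-entropy, L2 weight decay) can be sketched in plain PyTorch as follows. This illustrates the schedule only and is not the authors' training code; the optimizer choice, smoothing factor, and total step count are assumptions.
```python
import math
import torch

model = torch.nn.Linear(10, 2)  # stand-in for the network being trained

# L2 weight decay via the optimizer, label smoothing via the loss.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)
criterion = torch.nn.CrossEntropyLoss(label_smoothing=0.1)

warmup_steps = 3_000
total_steps = 100_000  # placeholder for the full schedule length

def lr_scale(step):
    # Linear warmup for the first 3k steps, cosine annealing afterwards.
    if step < warmup_steps:
        return (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_scale)

for step in range(5):  # one optimizer/scheduler step per batch
    inputs, targets = torch.randn(8, 10), torch.randint(0, 2, (8,))
    loss = criterion(model(inputs), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```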
## Evaluation results
| Model | PASCAL VOC mIOU | # params | URL |
|-------------------|-----------------|-----------|-----------------------------------------------------------|
| **MobileViT-XXS** | **73.6** | **1.9 M** | https://huggingface.co/apple/deeplabv3-mobilevit-xx-small |
| MobileViT-XS | 77.1 | 2.9 M | https://huggingface.co/apple/deeplabv3-mobilevit-x-small |
| MobileViT-S | 79.1 | 6.4 M | https://huggingface.co/apple/deeplabv3-mobilevit-small |
### BibTeX entry and citation info
```bibtex
@inproceedings{vision-transformer,
title = {MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer},
author = {Sachin Mehta and Mohammad Rastegari},
year = {2022},
URL = {https://arxiv.org/abs/2110.02178}
}
```
| [
"background",
"aeroplane",
"bicycle",
"bird",
"boat",
"bottle",
"bus",
"car",
"cat",
"chair",
"cow",
"diningtable",
"dog",
"horse",
"motorbike",
"person",
"pottedplant",
"sheep",
"sofa",
"train",
"tvmonitor"
] |
malra/segformer-b0-finetuned-segments-sidewalk-4 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-4
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5207
- Mean Iou: 0.1023
- Mean Accuracy: 0.1567
- Overall Accuracy: 0.6612
- Per Category Iou: [0.0, 0.37997208823402434, 0.7030895600821837, 0.0, 0.0020740824048893942, 0.0006611109803275343, 0.0, 0.0009644717061794479, 0.0, 0.0, 0.44780560238339745, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4962679673706645, 0.0, 0.008267299447856608, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6719286019431624, 0.1932540547332544, 0.6762198255750292, 0.0, 0.0, 0.0003312368464636427, 0.0]
- Per Category Accuracy: [nan, 0.7085417733756095, 0.8643251797889624, 0.0, 0.0020922282164545967, 0.0006691672739475508, nan, 0.0009725011389865425, 0.0, 0.0, 0.9224475476880146, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7984415122785299, 0.0, 0.008394275137866055, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9294223049507054, 0.2306496542338313, 0.7045666997791757, 0.0, 0.0, 0.0003315891206418271, 0.0]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):
- learning_rate: 6e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
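As an illustration only, the listed settings map roughly onto the `transformers` setup below. Dataset loading, the feature extractor, and the metric computation are omitted; the base checkpoint is the one named above, and `num_labels=35` corresponds to the sidewalk-semantic label set.
```python
from transformers import SegformerForSemanticSegmentation, Trainer, TrainingArguments

# Hyperparameters taken from the list above; everything else stays at Trainer defaults
# (which already use Adam-style betas=(0.9, 0.999) and epsilon=1e-08).
training_args = TrainingArguments(
    output_dir="segformer-b0-finetuned-segments-sidewalk-4",
    learning_rate=6e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)

model = SegformerForSemanticSegmentation.from_pretrained("nvidia/mit-b0", num_labels=35)

trainer = Trainer(
    model=model,
    args=training_args,
    # train_dataset=...,  # preprocessed segments/sidewalk-semantic train split
    # eval_dataset=...,   # preprocessed validation split
)
```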
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 2.8255 | 1.0 | 25 | 3.0220 | 0.0892 | 0.1429 | 0.6352 | [0.0, 0.3631053229188519, 0.6874502125236047, 0.0, 0.012635239862746197, 0.001133215250040838, 0.0, 0.00463024415429387, 2.6557099661207286e-05, 0.0, 0.3968535016422742, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4820466790242289, 0.0, 0.00693999220077067, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6134928158666486, 0.05160593984758798, 0.5016270369795023, 0.0, 0.0, 0.00023524914354608678, 0.0] | [nan, 0.6625398055826, 0.851744092156527, 0.0, 0.01307675614921835, 0.001170877257777663, nan, 0.004771009467501389, 2.6941417811356193e-05, 0.0, 0.9316713675735513, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7310221003907382, 0.0, 0.0070371168820434, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.948375993368795, 0.056265031783493576, 0.5061367774453964, 0.0, 0.0, 0.00023723449281691698, 0.0] |
| 2.5443 | 2.0 | 50 | 2.5207 | 0.1023 | 0.1567 | 0.6612 | [0.0, 0.37997208823402434, 0.7030895600821837, 0.0, 0.0020740824048893942, 0.0006611109803275343, 0.0, 0.0009644717061794479, 0.0, 0.0, 0.44780560238339745, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4962679673706645, 0.0, 0.008267299447856608, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6719286019431624, 0.1932540547332544, 0.6762198255750292, 0.0, 0.0, 0.0003312368464636427, 0.0] | [nan, 0.7085417733756095, 0.8643251797889624, 0.0, 0.0020922282164545967, 0.0006691672739475508, nan, 0.0009725011389865425, 0.0, 0.0, 0.9224475476880146, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7984415122785299, 0.0, 0.008394275137866055, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9294223049507054, 0.2306496542338313, 0.7045666997791757, 0.0, 0.0, 0.0003315891206418271, 0.0] |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
malra/segformer-b5-segments-warehouse1 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b5-segments-warehouse1
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the jakka/warehouse_part1 dataset.
It achieves the following results on the evaluation set (a short check of how the aggregate metrics follow from the per-category values comes after the list):
- Loss: 0.1610
- Mean Iou: 0.6952
- Mean Accuracy: 0.8014
- Overall Accuracy: 0.9648
- Per Category Iou: [0.0, 0.47153295365063086, 0.9293854681828234, 0.9766069961659746, 0.927007550222462, 0.9649404794739765, 0.9824606440795911, 0.8340592613982738, 0.9706739467997174, 0.653761891900003, 0.0, 0.8080046149867717, 0.75033588410538, 0.6921465280057791, 0.7522124809345331, 0.7548461579766955, 0.3057219434101416, 0.5087799410519325, 0.84829211455404, 0.7730356409704979]
- Per Category Accuracy: [nan, 0.9722884260421271, 0.9720560851996344, 0.9881427437833682, 0.9650114633107388, 0.9828538231066912, 0.9897027752946145, 0.9071521422402136, 0.9848998109819413, 0.6895634832705517, 0.0, 0.8704126720181029, 0.8207667731629393, 0.7189631369929214, 0.8238982104266324, 0.8620090549531412, 0.3522998155172771, 0.5387075151368637, 0.9081104400345125, 0.8794092789466661]
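The aggregate scores are plain averages of the per-category values above: Mean Iou averages the 20 per-category IoU entries, and Mean Accuracy averages the per-category accuracies while skipping the `nan` entry for the unlabeled class. A minimal check:
```python
import numpy as np

per_category_iou = [
    0.0, 0.47153295365063086, 0.9293854681828234, 0.9766069961659746, 0.927007550222462,
    0.9649404794739765, 0.9824606440795911, 0.8340592613982738, 0.9706739467997174,
    0.653761891900003, 0.0, 0.8080046149867717, 0.75033588410538, 0.6921465280057791,
    0.7522124809345331, 0.7548461579766955, 0.3057219434101416, 0.5087799410519325,
    0.84829211455404, 0.7730356409704979,
]
per_category_accuracy = [
    np.nan, 0.9722884260421271, 0.9720560851996344, 0.9881427437833682, 0.9650114633107388,
    0.9828538231066912, 0.9897027752946145, 0.9071521422402136, 0.9848998109819413,
    0.6895634832705517, 0.0, 0.8704126720181029, 0.8207667731629393, 0.7189631369929214,
    0.8238982104266324, 0.8620090549531412, 0.3522998155172771, 0.5387075151368637,
    0.9081104400345125, 0.8794092789466661,
]

print(round(float(np.nanmean(per_category_iou)), 4))       # 0.6952 -> Mean Iou
print(round(float(np.nanmean(per_category_accuracy)), 4))  # 0.8014 -> Mean Accuracy
```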
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 0.1656 | 1.0 | 787 | 0.1917 | 0.5943 | 0.6937 | 0.9348 | [0.0, 0.8760430595457738, 0.8113714411434076, 0.9533787339343942, 0.8499988352439646, 0.9330256290984922, 0.964368918196211, 0.6984009498117659, 0.9341093239597545, 0.288411561596369, 0.0, 0.6496866199024376, 0.4510074387900882, 0.5206343319728309, 0.6377305875444397, 0.5391733301507737, 0.1395685713288422, 0.390702947845805, 0.6999919374344916, 0.548023343373494] | [nan, 0.9502542152644661, 0.9516900451328754, 0.9788975544390225, 0.921821413759201, 0.9534230318615367, 0.9778020069070933, 0.8108538425970355, 0.970571911491369, 0.2993067645848501, 0.0, 0.7454496363566233, 0.5849840255591054, 0.5858306866277158, 0.7137540570947559, 0.6925710548100606, 0.16576498144808574, 0.4165357186026834, 0.8142326593390103, 0.6474578532983408] |
| 0.0948 | 2.0 | 1574 | 0.2058 | 0.6310 | 0.7305 | 0.9442 | [0.0, 0.904077233776714, 0.8616556242304713, 0.9604692135700761, 0.8306854004041632, 0.9459690932012119, 0.9714777936344227, 0.7463801249809481, 0.9197830038961162, 0.4759644364074744, 0.0, 0.7133768631713745, 0.4878118726699168, 0.5403469048526253, 0.6267211124010835, 0.6280780328151242, 0.11116434156063161, 0.4757211293446132, 0.7386220435315599, 0.6814722192019137] | [nan, 0.9530795697109564, 0.9481439135801821, 0.9753750826203033, 0.9328161802391284, 0.9783733696392768, 0.9831560736299451, 0.8544532947139754, 0.9700176894451403, 0.5598936405938401, 0.0, 0.8212854589792271, 0.5434504792332269, 0.5765256977221256, 0.7602586827898242, 0.745275787709383, 0.12024542420662065, 0.5128732019823522, 0.8080522939565592, 0.8363729371469241] |
| 0.0595 | 3.0 | 2361 | 0.1363 | 0.6578 | 0.7540 | 0.9494 | [0.0, 0.9109388123768081, 0.8466263269727539, 0.965583073696094, 0.8848508600101197, 0.9507919193853351, 0.9742807972055659, 0.7672266040033193, 0.9571650494933543, 0.5580972230045627, 0.0, 0.7572676505482382, 0.5338298840118263, 0.5743160573368553, 0.6964399439112182, 0.6369583059750492, 0.19255896751223853, 0.49017131449756574, 0.7563405327946686, 0.7018448645266491] | [nan, 0.9587813659877967, 0.9568298005631468, 0.9842947615263231, 0.9380059570384915, 0.9734457175747111, 0.9839202800499454, 0.863077218359317, 0.9757816512090675, 0.6272609287455287, 0.0, 0.8589569413670591, 0.5999361022364217, 0.6161844118746441, 0.7983763527021668, 0.793146442915981, 0.2242190576871256, 0.5288397085810358, 0.8216978654762351, 0.8232729860771318] |
| 0.0863 | 4.0 | 3148 | 0.1706 | 0.6597 | 0.7678 | 0.9537 | [0.0, 0.5911845175607978, 0.8922572171811833, 0.9657396689703207, 0.8726664918778465, 0.948172990516989, 0.9741643734457509, 0.7832072821045744, 0.9578631876788363, 0.5869565217391305, 0.0, 0.7602876424039574, 0.5747447162194254, 0.6642950791717092, 0.6978602093118107, 0.7122118073263809, 0.21745086578505152, 0.5091171801864137, 0.763416879968237, 0.7220314268720861] | [nan, 0.9656626144746107, 0.9588916966191391, 0.9766109980050623, 0.9234167566678667, 0.9783156758536367, 0.9891284919047324, 0.8876447135391675, 0.9773653302095363, 0.6623721946123896, 0.0, 0.8391697702425289, 0.6185942492012779, 0.6961703584876796, 0.8060121894956657, 0.8277923697200732, 0.24677155234956366, 0.5498060503499884, 0.8475353565667555, 0.8369956852453183] |
| 0.0849 | 5.0 | 3935 | 0.1529 | 0.6489 | 0.7616 | 0.9535 | [0.0, 0.34717493700692625, 0.9200786785121082, 0.9707860061715432, 0.9064316496153364, 0.9571373496125165, 0.9765647396031262, 0.7914886053951578, 0.9636858999629485, 0.5253852888123762, 0.0, 0.7668434757450091, 0.6228696113699357, 0.5646135260344276, 0.7194371537530142, 0.7276571750775304, 0.13134474327628362, 0.5398065590178835, 0.8087983436006237, 0.7371620697069805] | [nan, 0.9673995855258336, 0.9622823082917784, 0.9832096263122092, 0.9590923200613435, 0.9794833291868915, 0.9849481430590119, 0.8741570190973889, 0.9814726613968338, 0.5661042702035389, 0.0, 0.8519369313384734, 0.674888178913738, 0.5955861885708164, 0.7973710835377057, 0.8440933293815855, 0.139191177994735, 0.5807830511082053, 0.8902258318640507, 0.8387304835194164] |
| 0.0652 | 6.0 | 4722 | 0.1776 | 0.6701 | 0.7802 | 0.9598 | [0.0, 0.442020662403383, 0.9221209597093164, 0.9723970198449976, 0.9094898951877407, 0.958969887541612, 0.9774286126326331, 0.8043337900190548, 0.9641322534475246, 0.524194500874002, 0.0, 0.7732021981650511, 0.6714277552419585, 0.6791383524722951, 0.7265590222386986, 0.7252668038047013, 0.25612624095650144, 0.512317443386938, 0.8223912256195354, 0.7602526763224181] | [nan, 0.9667776521571092, 0.968306375662177, 0.9871287057126554, 0.9515142073239339, 0.9800501491032743, 0.9870913605013194, 0.8911998464531551, 0.9789458602211063, 0.5619638504637396, 0.0, 0.8429926328466184, 0.750926517571885, 0.7091730161871252, 0.8058454540303847, 0.8431735260151052, 0.2957320232987169, 0.5489159698031933, 0.8944742469145065, 0.8592366887593968] |
| 0.0516 | 7.0 | 5509 | 0.2204 | 0.6782 | 0.7854 | 0.9562 | [0.0, 0.5972965874238374, 0.9024890361234837, 0.9727685140940331, 0.915582953759141, 0.9598962357171329, 0.9798718588278901, 0.8112726586102719, 0.9047252363294271, 0.6408527982442389, 0.0, 0.7886848740988032, 0.676712646342877, 0.5672950158399087, 0.7336613818739761, 0.7298649456617311, 0.3028603088856569, 0.5060868673401364, 0.8269845785168136, 0.7471687598272396] | [nan, 0.9698273468544609, 0.9632905651879291, 0.9861640741314249, 0.9551792854314081, 0.9817079843391511, 0.9899518141518776, 0.8996100259110301, 0.9832172012468946, 0.6987812984710835, 0.0, 0.8565569379384828, 0.7460702875399361, 0.593452450290354, 0.8111955580377016, 0.848355084979611, 0.3625810998486827, 0.5422458600265925, 0.8997261507296395, 0.834927271918509] |
| 0.1051 | 8.0 | 6296 | 0.1860 | 0.6731 | 0.7789 | 0.9575 | [0.0, 0.44805540920356957, 0.9045125103512419, 0.9742941726927242, 0.9171717803896707, 0.9608739687771942, 0.9806696534895757, 0.8165927346840907, 0.9677688538979997, 0.6195552331193943, 0.0, 0.795984684169727, 0.6862710467443778, 0.573071397774824, 0.7390593444665892, 0.746059006435751, 0.2037963564144674, 0.5303406505500898, 0.8387988518436741, 0.7590468131997875] | [nan, 0.9709112878685233, 0.966379770128131, 0.9872427322752713, 0.9529925896087971, 0.9834568092767589, 0.9900317817435064, 0.8913394344939497, 0.9851288999243455, 0.6704124592447216, 0.0, 0.871338387626268, 0.7448562300319489, 0.5994265432176736, 0.8121846392929121, 0.8435414473616973, 0.2212134402918558, 0.5609595288067426, 0.8906947518475448, 0.8579244695520661] |
| 0.0619 | 9.0 | 7083 | 0.2919 | 0.6996 | 0.7903 | 0.9579 | [0.0, 0.934913158921961, 0.9053172937262943, 0.9749731654503406, 0.8705131863049136, 0.9625421596476281, 0.9801264786114002, 0.8223383305806123, 0.9066864104553713, 0.6468175775129386, 0.0, 0.7950479182280621, 0.7176821075997429, 0.5689160215594734, 0.7424713897302829, 0.7480081111150989, 0.3071719253739231, 0.5035704204000125, 0.8359422295252097, 0.7696666024282135] | [nan, 0.9682325320018036, 0.9702179964865137, 0.9871538608460199, 0.9606411126417358, 0.9816951395784177, 0.9890656141613147, 0.9035010425481796, 0.9836680314909386, 0.689949669209585, 0.0, 0.8547140781629688, 0.7850479233226837, 0.5903872774743949, 0.8138309496636962, 0.8520138583707216, 0.3614203096822337, 0.5292682658813446, 0.9065161120906329, 0.8882611983452693] |
| 0.081 | 10.0 | 7870 | 0.2470 | 0.6804 | 0.7921 | 0.9583 | [0.0, 0.4404433924045006, 0.9318621565838054, 0.9751204660574527, 0.8701648407446415, 0.9625333515302946, 0.9811772580795882, 0.8257730976318673, 0.9694596723226286, 0.6262599628453287, 0.0, 0.8035308913444122, 0.7247258740455824, 0.5731919576321138, 0.7446832704519876, 0.7540709586972932, 0.2964031339031339, 0.5176075672651548, 0.8402309249924604, 0.7699341552529259] | [nan, 0.9683524762943433, 0.9703483634609842, 0.9874040565137937, 0.9560906426120769, 0.9828287794111833, 0.9897414692905638, 0.9071739528715878, 0.9809845681174846, 0.6616061536513564, 0.0, 0.8707555296507566, 0.8066453674121405, 0.5982298533423343, 0.8269010675926151, 0.8575633386818196, 0.3450448769769707, 0.5489928903442743, 0.9145158870090407, 0.8764289844757795] |
| 0.0595 | 11.0 | 8657 | 0.1520 | 0.6754 | 0.7803 | 0.9583 | [0.0, 0.43998949915443775, 0.9316636729918347, 0.974311900634481, 0.90408659589869, 0.9621039259469353, 0.9814528086580536, 0.8173484866921386, 0.9299168519752622, 0.5981595278841879, 0.0, 0.79896542666047, 0.7130791649318979, 0.5767892232828117, 0.7434904893608313, 0.7476740572849074, 0.2818679619421856, 0.5013427236914975, 0.8417679322268942, 0.7636900967723242] | [nan, 0.9604694708457627, 0.9682111157218825, 0.9850226034689381, 0.9629913194164226, 0.9838887233262218, 0.9906282066977372, 0.8790295141463755, 0.9828138682520776, 0.6217973473457631, 0.0, 0.8472869246956067, 0.7660702875399361, 0.601589754313674, 0.8233235396482367, 0.8360910400932068, 0.3211657649814481, 0.5272243772183335, 0.8880687999399782, 0.8793425559361239] |
| 0.0607 | 12.0 | 9444 | 0.1907 | 0.6792 | 0.7814 | 0.9611 | [0.0, 0.4394265102382861, 0.9325678358934418, 0.9751503005414947, 0.9213536629526586, 0.9630218995457999, 0.9808145244188059, 0.8160516650442948, 0.9402095421968347, 0.5678403556289702, 0.0, 0.7897903639847522, 0.717973174366617, 0.6351749265433101, 0.7451406149738536, 0.7539060338307724, 0.2810049109433409, 0.5169863186167534, 0.8447414560224139, 0.7628612943763745] | [nan, 0.964392093449931, 0.9699039597844642, 0.9860071181495944, 0.9689476561441872, 0.9817555601847723, 0.9915172012546744, 0.8703445207331861, 0.9829836512368835, 0.5919660662847014, 0.0, 0.8320126171608817, 0.7695846645367412, 0.6606869598697208, 0.8177192854656857, 0.8353858575122385, 0.31786995004456603, 0.541465665967056, 0.8991915819484563, 0.8640852275254659] |
| 0.054 | 13.0 | 10231 | 0.1756 | 0.6845 | 0.7854 | 0.9633 | [0.0, 0.44063089620853896, 0.9319015227980866, 0.9747420439658205, 0.9230841377589553, 0.9626774348954341, 0.9806204202647846, 0.824089995398513, 0.9682449901582629, 0.6269069221957562, 0.0, 0.7878031759942226, 0.7230044147476434, 0.6870255399578931, 0.7273836360818303, 0.7465091396254238, 0.25750268946841265, 0.5202245077135331, 0.8455619310735664, 0.7623883906475817] | [nan, 0.9684613146338701, 0.9659761462687484, 0.985573907589379, 0.969242630837417, 0.9846717514218756, 0.9904148523034052, 0.8905935109009535, 0.9873657317056209, 0.6548320724256909, 0.0, 0.8321711888159841, 0.7743769968051119, 0.7167465941354711, 0.7672955669410517, 0.8485288256155018, 0.28777231930020936, 0.5469380130325374, 0.8955527628765427, 0.8564788043236511] |
| 0.0908 | 14.0 | 11018 | 0.1677 | 0.6922 | 0.7956 | 0.9641 | [0.0, 0.4710389646938612, 0.9277225664822271, 0.9753445134184554, 0.9250469473155007, 0.9640090632546157, 0.9817333061419466, 0.8297056239192101, 0.970059681920668, 0.647379308685926, 0.0, 0.79693329490141, 0.7458423929012165, 0.6895638439061885, 0.7486849253355593, 0.7520096317485606, 0.30687537928818764, 0.49287677819238446, 0.848826224760963, 0.7700556938025832] | [nan, 0.9666066204807101, 0.9697912533607226, 0.9863864033340946, 0.9658514745108883, 0.9826761492096202, 0.9913739259863396, 0.9020659030037601, 0.9838249561044068, 0.6815485423063531, 0.0, 0.8412997732853904, 0.8109904153354632, 0.7185046709734403, 0.8232134618653327, 0.8490091673735526, 0.35638330949567815, 0.5181697306682197, 0.9016768578609746, 0.8671989680174369] |
| 0.0584 | 15.0 | 11805 | 0.1610 | 0.6952 | 0.8014 | 0.9648 | [0.0, 0.47153295365063086, 0.9293854681828234, 0.9766069961659746, 0.927007550222462, 0.9649404794739765, 0.9824606440795911, 0.8340592613982738, 0.9706739467997174, 0.653761891900003, 0.0, 0.8080046149867717, 0.75033588410538, 0.6921465280057791, 0.7522124809345331, 0.7548461579766955, 0.3057219434101416, 0.5087799410519325, 0.84829211455404, 0.7730356409704979] | [nan, 0.9722884260421271, 0.9720560851996344, 0.9881427437833682, 0.9650114633107388, 0.9828538231066912, 0.9897027752946145, 0.9071521422402136, 0.9848998109819413, 0.6895634832705517, 0.0, 0.8704126720181029, 0.8207667731629393, 0.7189631369929214, 0.8238982104266324, 0.8620090549531412, 0.3522998155172771, 0.5387075151368637, 0.9081104400345125, 0.8794092789466661] |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
| [
"unlabeled",
"klt_bin",
"rack",
"ceiling",
"pillar",
"wall",
"floor",
"sign",
"box",
"bracket",
"barcode",
"floor_decal",
"fuse_box",
"pallet",
"lamp",
"not-known-2",
"wire",
"fire_extinguisher",
"crate",
"cart"
] |
jakka/segformer-b0-finetuned-warehouse-part-1-V2 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-warehouse-part-1-V2
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the jakka/warehouse_part1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2737
- Mean Iou: 0.7224
- Mean Accuracy: 0.8119
- Overall Accuracy: 0.9668
- Per Category Iou: [0.0, 0.9392313580983768, 0.9322932027111482, 0.9772249946988713, 0.8749950826812657, 0.9591121585348171, 0.9803780030124933, 0.8554852055380204, 0.9661475962866876, 0.5609089467958914, 0.0, 0.8095003013989066, 0.7113799121381718, 0.8927260044840537, 0.6133653057361015, 0.8420100377966416, 0.33841086205511367, 0.553361761785151, 0.8141592920353983, 0.8270316181708587]
- Per Category Accuracy: [nan, 0.9727824725573769, 0.9676994291705018, 0.9882968957337019, 0.9679484011220059, 0.9772700079950366, 0.9882492205666621, 0.9252107983136135, 0.9825945071781523, 0.6062795795494159, 0.0, 0.894776445179671, 0.7968855332344613, 0.9522349792248335, 0.6544510171692397, 0.9276157710790738, 0.42203029817249116, 0.5863404454740788, 0.8963814834175524, 0.9193914381006046]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 0.7008 | 1.0 | 787 | 0.2473 | 0.5595 | 0.6448 | 0.9325 | [0.0, 0.8572456184869756, 0.8403481284744914, 0.9524827531570127, 0.7992052152702355, 0.9196710216877864, 0.9471503664300267, 0.6193304552041781, 0.9133086982125345, 0.17558267725303728, 0.0, 0.6344520667741999, 0.3360920970752956, 0.7642426437536942, 0.510575871022846, 0.6056988833269157, 0.021209386281588447, 0.27355691497341356, 0.6138181818181818, 0.40645271873846317] | [nan, 0.9155298033269351, 0.9463379226245591, 0.978836265135544, 0.9240214201112357, 0.9448111967681583, 0.9643622308798924, 0.6930912552699579, 0.9497575640760723, 0.18632531152693993, 0.0, 0.7500919033177098, 0.36409599568558715, 0.8900647437729461, 0.5728964730263244, 0.6549871668851026, 0.02166159025328631, 0.2902301645548354, 0.7353197421153511, 0.4694729147312794] |
| 0.1321 | 2.0 | 1574 | 0.2331 | 0.6221 | 0.7115 | 0.9457 | [0.0, 0.8970560279823083, 0.8791120244598839, 0.9603620467193393, 0.8160602187615088, 0.934767875213888, 0.9616837752836253, 0.7419391385825133, 0.9351874201394574, 0.26717521084051926, 0.0, 0.6985475965645938, 0.43481867741170893, 0.8134984418163408, 0.5459611126448698, 0.7401712453141447, 0.13175924760380514, 0.355121624272543, 0.7060811650388926, 0.6229231428877693] | [nan, 0.951233770160613, 0.9409053657605947, 0.9843213861494523, 0.9219686102230917, 0.9665968250506056, 0.9829729958024298, 0.8238168094655243, 0.9620596605954946, 0.29986351309033543, 0.0, 0.8030913978494624, 0.49467439665633006, 0.909599171191769, 0.5931253087796156, 0.8208142201834863, 0.14682189804424495, 0.3841705499014086, 0.8251147122030551, 0.70800907664895] |
| 0.1085 | 3.0 | 2361 | 0.2457 | 0.6542 | 0.7530 | 0.9521 | [0.0, 0.9079405116712079, 0.8959028018194484, 0.9654330936322201, 0.8358564096747072, 0.942169826126924, 0.967131589172387, 0.7785683188874377, 0.942506044201895, 0.3544242514524058, 0.0, 0.7247706422018348, 0.5044915351836923, 0.8273089178892802, 0.5630444261421442, 0.7399785788281565, 0.21738423517169614, 0.46725284186024263, 0.7218755768875762, 0.7280122150607375] | [nan, 0.9545620491089126, 0.9497321958018098, 0.9837544714508515, 0.9402501375924134, 0.9686463320401577, 0.9809467909731419, 0.8694886440908473, 0.9735407105395524, 0.3936199755387097, 0.0, 0.8558151824280856, 0.5906026695429419, 0.9157369138435157, 0.6097401660523865, 0.8630406290956749, 0.2679143956396281, 0.5182902566913956, 0.8517163268862171, 0.8205229733639949] |
| 0.8409 | 4.0 | 3148 | 0.2533 | 0.6749 | 0.7760 | 0.9559 | [0.0, 0.912375840411698, 0.904072054206276, 0.9676067299522242, 0.900289256120933, 0.9448264254043457, 0.9706472863960092, 0.7942658684379895, 0.9498265874428659, 0.5556284571729604, 0.0, 0.743214707471828, 0.529188361408882, 0.7269154778675782, 0.5697874335729916, 0.7702618169892564, 0.2288491765188273, 0.5089612784265519, 0.757448678510892, 0.7646070737475812] | [nan, 0.9601569621727435, 0.9525397945710891, 0.9830820784511696, 0.9462795897530819, 0.9732812778343284, 0.9810361205428978, 0.8895280837753298, 0.9743959070958451, 0.6854951638729194, 0.0, 0.8531327543424317, 0.5823783200755023, 0.9177828280607646, 0.6184135395216047, 0.8657506006989952, 0.26841535748637385, 0.5491586570344761, 0.8759801359121798, 0.8665306184609293] |
| 0.0655 | 5.0 | 3935 | 0.2164 | 0.6815 | 0.7909 | 0.9577 | [0.0, 0.9195724102825147, 0.8817887152896982, 0.9692666162636345, 0.90446655617651, 0.9477266300807918, 0.972197851990263, 0.8006212298550464, 0.9526181996158507, 0.48675750740382695, 0.0, 0.7544064333927534, 0.589975775752682, 0.8568833610473964, 0.5739430151581254, 0.7804109001873066, 0.2738491187715644, 0.46180522107696753, 0.7493122891746226, 0.754828899421902] | [nan, 0.9629768162749704, 0.9511904548979574, 0.9855793956741679, 0.9532853326979632, 0.9705567416728694, 0.9856702233410021, 0.9070277437780497, 0.9761803883026475, 0.7497090051817757, 0.0, 0.8653903593419723, 0.689564513954429, 0.9349779882164135, 0.6119830537374903, 0.9072670926168632, 0.3530779095864059, 0.5086786980626564, 0.8741215078120462, 0.8391483788434887] |
| 0.0568 | 6.0 | 4722 | 0.2803 | 0.6876 | 0.7839 | 0.9591 | [0.0, 0.9166100071412383, 0.913602419181271, 0.9710201737288663, 0.8563050555469198, 0.9497657746314072, 0.9730697054916811, 0.8143688646719719, 0.9549812903957364, 0.460486150973965, 0.0, 0.7634781269254467, 0.6136748147716002, 0.8542174198928293, 0.5922937831600485, 0.8066394260877113, 0.28399126278134795, 0.5207639813581891, 0.7629174644376197, 0.7438457521999924] | [nan, 0.9601927982852421, 0.9660710264704008, 0.982455068550298, 0.957830657460364, 0.9688535013815731, 0.9819961506837456, 0.893842649258806, 0.9749506995826178, 0.5071640856263331, 0.0, 0.8540977391783844, 0.7091141971147364, 0.9317785850902456, 0.653052819349169, 0.8880378986456968, 0.35953029817249116, 0.553305686470427, 0.862098507289307, 0.8895268263710157] |
| 0.8994 | 7.0 | 5509 | 0.2743 | 0.6868 | 0.7764 | 0.9606 | [0.0, 0.92180556388016, 0.9171201062365498, 0.9721111956032598, 0.8587950800137758, 0.9513526631552707, 0.9756092701000854, 0.819792597945916, 0.9576544961199075, 0.4512109977539036, 0.0, 0.7723053199691596, 0.61351217088922, 0.8696959538394335, 0.5947007494875557, 0.8068989910272162, 0.2400942828140323, 0.49048112386556714, 0.772383338067815, 0.7496112574696395] | [nan, 0.9644998510561574, 0.9609472275076806, 0.9854828942497743, 0.9565172529563908, 0.9753485051500238, 0.9840922427646661, 0.8947674418604651, 0.974328764760461, 0.49258184783186704, 0.0, 0.8630410807830162, 0.6660374814615073, 0.9410600831006661, 0.6446391486645419, 0.8876351572739187, 0.2796369028534787, 0.5232773027508334, 0.8685891851077423, 0.8883389427836073] |
| 0.0757 | 8.0 | 6296 | 0.2245 | 0.7038 | 0.8009 | 0.9625 | [0.0, 0.9246349181813107, 0.9204571437331909, 0.9735757462990084, 0.8677796689121399, 0.9529629595462734, 0.9762280475446855, 0.8249549577060494, 0.9591099123245741, 0.6276133447390932, 0.0, 0.7755030368136181, 0.6490189248809939, 0.8729206918730364, 0.598100700980074, 0.8000277974172574, 0.27374031814774713, 0.5049971433066432, 0.7770387696167466, 0.7981819415236415] | [nan, 0.964623037692871, 0.9637122903759715, 0.9863849456780516, 0.9537638293913148, 0.974798022498043, 0.985726579790157, 0.9184958520331837, 0.980103295010109, 0.7586190597174544, 0.0, 0.8624896608767576, 0.7536739921801268, 0.9379994558884956, 0.6446181625809385, 0.9037175076452599, 0.32931227957678744, 0.5392729877180727, 0.863477957832375, 0.8959383518876689] |
| 0.0638 | 9.0 | 7083 | 0.2660 | 0.7091 | 0.8064 | 0.9632 | [0.0, 0.9247942993361187, 0.9227547653133065, 0.9737952169757659, 0.8675395458562903, 0.954005651357167, 0.9771936329793919, 0.832432130071599, 0.960664758331238, 0.6439555818513429, 0.0, 0.7800093558353167, 0.6503190735050816, 0.8771838558892437, 0.6000063410406786, 0.8135397086825815, 0.29345229389108285, 0.5278915956856804, 0.7979207701237885, 0.7849771726504039] | [nan, 0.9696983271254734, 0.9626331855239437, 0.9865491477141318, 0.9580933383611586, 0.9736782563602464, 0.9877136372491695, 0.9107507139942881, 0.9774734570720269, 0.778129006717992, 0.0, 0.8715651135005974, 0.7419441822839423, 0.9522322311869326, 0.6453719127503574, 0.9070076998689384, 0.36183472266752165, 0.5638987382066087, 0.8882354649474357, 0.8850494190030915] |
| 0.1028 | 10.0 | 7870 | 0.2753 | 0.7045 | 0.7986 | 0.9632 | [0.0, 0.9310677916035094, 0.9231154731835156, 0.9742966471140867, 0.8659672807905657, 0.9548025101399095, 0.9761885400996432, 0.8359586760218701, 0.9606324687638941, 0.536304571449891, 0.0, 0.7861687315154533, 0.6648749707875672, 0.8782393648813203, 0.6028230645967004, 0.8034017821150734, 0.2798240884275797, 0.5292981433685788, 0.7976529535864979, 0.7897882016975595] | [nan, 0.9671696414372969, 0.9640722977320454, 0.9864307028133905, 0.9566418983913256, 0.9766712626661613, 0.984078186494131, 0.917516659866721, 0.9804665003157427, 0.5945275248601157, 0.0, 0.8886304108078301, 0.7671565322906836, 0.945889759711566, 0.6500072139662386, 0.9114992900830057, 0.33277893555626803, 0.5621391244374099, 0.8784050647615729, 0.9097665351872439] |
| 0.098 | 11.0 | 8657 | 0.2029 | 0.7052 | 0.8014 | 0.9640 | [0.0, 0.9288737885707921, 0.9265083379180753, 0.9747097980123621, 0.8738478537660755, 0.9558379241305062, 0.9781696214462526, 0.8391837240652649, 0.9626716931455067, 0.507780252899168, 0.0, 0.7878061172645057, 0.6769843155893536, 0.8815102118136605, 0.6056046400027283, 0.8269347543218291, 0.3132485690006253, 0.5154277002618235, 0.7927511930865472, 0.7569567975718071] | [nan, 0.9711631282238503, 0.964815472153087, 0.9853689377873769, 0.9652020663968313, 0.9754185940822899, 0.9867780413729902, 0.9206854345165238, 0.9811350296034029, 0.5495104787677182, 0.0, 0.8906350519253745, 0.7681677227989753, 0.9430888220810342, 0.65217140383783, 0.9110078090869376, 0.3914916639948702, 0.5500605696196935, 0.8924609397688331, 0.9267167202229566] |
| 0.0734 | 12.0 | 9444 | 0.2171 | 0.7126 | 0.8001 | 0.9648 | [0.0, 0.9309643707918894, 0.9277494647914695, 0.9750904306170505, 0.8777832954332417, 0.9566409475731096, 0.9780693213049435, 0.8436550838167809, 0.9635515941347027, 0.527304314900299, 0.0, 0.7909202018197202, 0.6909584834347133, 0.8836639196984207, 0.6084447805077513, 0.8287813112544289, 0.31069205419260343, 0.5403587067765045, 0.7955642033577429, 0.8211277996631356] | [nan, 0.9680901815771025, 0.9655377799057193, 0.9852963747008175, 0.9662340833391586, 0.9756774116913669, 0.9890014280908129, 0.9132224942200462, 0.9813789993824062, 0.5595195188097869, 0.0, 0.8697959746346843, 0.7887285964675745, 0.9477302580957196, 0.6557731404362482, 0.9149260048055919, 0.374058191728118, 0.5695666398450833, 0.8786809548701865, 0.8983598068927706] |
| 0.0839 | 13.0 | 10231 | 0.2606 | 0.7139 | 0.8056 | 0.9651 | [0.0, 0.932934590872574, 0.928599894716927, 0.9759876131918817, 0.8695983139625728, 0.9571779321732448, 0.979228463067019, 0.8446447574729073, 0.9630766038435438, 0.47072541703248466, 0.0, 0.7968195631480623, 0.6967972782731112, 0.8867456411969523, 0.6076684496270689, 0.8274634197517912, 0.3560522933191209, 0.5582305522639651, 0.8036840005319856, 0.8219356251968073] | [nan, 0.970161956830923, 0.9673467595439784, 0.9869340313021197, 0.9654732145230638, 0.9756083312329464, 0.9874815117348184, 0.9121141030871753, 0.9832381474966617, 0.50686275089071, 0.0, 0.8991361088135281, 0.8007954698665228, 0.9482970409127882, 0.6487891466970965, 0.9152673110528615, 0.4551538954793203, 0.5915043371384613, 0.8774612301794738, 0.914289630385453] |
| 0.0797 | 14.0 | 11018 | 0.2504 | 0.7153 | 0.8044 | 0.9655 | [0.0, 0.9353593794015038, 0.9288667661318105, 0.9762064564453578, 0.8718886319160292, 0.9576685946960725, 0.9788546612617008, 0.8472608735210976, 0.9642969355331718, 0.5361721760842425, 0.0, 0.8004189668257286, 0.696640611014977, 0.8853084044449696, 0.6099045788314064, 0.8344863725117123, 0.3254310344827586, 0.5323734971095841, 0.8050435956126539, 0.8204823185898129] | [nan, 0.9668112803123117, 0.9681903691382433, 0.9879581433175818, 0.9650443397090228, 0.9762644155033261, 0.9866578405548627, 0.9181626546987625, 0.9814820281384267, 0.5836381147080894, 0.0, 0.8844717856814631, 0.7870432789537549, 0.9470982093785038, 0.6547561898016377, 0.9131239078200087, 0.39335524206476435, 0.5610603662472479, 0.8835162920369403, 0.9243561823249014] |
| 0.0606 | 15.0 | 11805 | 0.2363 | 0.7209 | 0.8122 | 0.9661 | [0.0, 0.9354450021238048, 0.9300759788666999, 0.9766100423179009, 0.8739351769905989, 0.9580569741305669, 0.9795622398211299, 0.8496875639431477, 0.9646763306438436, 0.6043151650835981, 0.0, 0.8018012422360249, 0.7004677380666826, 0.889289794511031, 0.610767874342205, 0.8325289843013258, 0.33953698039089414, 0.5566040090865972, 0.7993623498974272, 0.8161583186067531] | [nan, 0.966786642984969, 0.965287953144928, 0.9879603875367537, 0.9664012618135025, 0.9766460508200225, 0.9889968302453108, 0.9177070583435333, 0.9825186826442273, 0.650711681743251, 0.0, 0.8897849462365591, 0.7874477551570715, 0.9497445698771078, 0.655411130494091, 0.9220183486238532, 0.42261141391471624, 0.5914689680174724, 0.8883080676075972, 0.9213864733563804] |
| 0.0532 | 16.0 | 12592 | 0.2531 | 0.7201 | 0.8074 | 0.9662 | [0.0, 0.9383203952011292, 0.9288414046194093, 0.9769141389017822, 0.8756205335515858, 0.9582358666094781, 0.979632260873732, 0.8522102747909199, 0.9655114623669192, 0.6115704722763623, 0.0, 0.8053745416448402, 0.7045095417527653, 0.8906375387790608, 0.6007837805741991, 0.8399368744136342, 0.33049747893639037, 0.5151462046865611, 0.8091001625973271, 0.8195206947575124] | [nan, 0.9678438083036752, 0.9684728717259394, 0.9879746009248427, 0.9684402878462824, 0.9766889829923047, 0.9883229174617107, 0.9215762273901809, 0.9820408723178519, 0.6655775287006565, 0.0, 0.8831104677878872, 0.7814480248078738, 0.9439503319629784, 0.6414396453351872, 0.9228033529925732, 0.40323420968259055, 0.5458428019417647, 0.8887436835685659, 0.9025173994487001] |
| 0.0862 | 17.0 | 13379 | 0.2458 | 0.7201 | 0.8087 | 0.9665 | [0.0, 0.9368370402512427, 0.9309393106006786, 0.9769932787053442, 0.8747985979138234, 0.95879411739136, 0.9800136137207117, 0.8526248910947767, 0.9651962916423883, 0.5741264468224503, 0.0, 0.8066815029500052, 0.7084107667406031, 0.8910943581653369, 0.6137487567405265, 0.843379759286757, 0.32885159559677446, 0.5243792475829478, 0.8126121336965911, 0.8231331714477782] | [nan, 0.9768073159423666, 0.9678409097683983, 0.9877789798203552, 0.9673405331004518, 0.977145821644341, 0.9876622727465598, 0.9216680266557867, 0.9832398839363699, 0.6213226822336585, 0.0, 0.8952934013417885, 0.7966158824322502, 0.946850198957944, 0.6577528276561605, 0.9188715050240279, 0.4028735171529336, 0.5553570954877843, 0.887857931114596, 0.9137413764220337] |
| 0.057 | 18.0 | 14166 | 0.2807 | 0.7169 | 0.8024 | 0.9665 | [0.0, 0.9391255338059006, 0.9316246290236013, 0.9771178536356643, 0.8736374236266327, 0.9587095139235466, 0.9802820999385629, 0.8534991833144867, 0.965491782119557, 0.5173244886677723, 0.0, 0.8079528780010615, 0.7036495460915129, 0.8919428858888571, 0.6128251272343798, 0.8423749359527112, 0.3030539267193167, 0.5387041043962495, 0.8154057368308808, 0.8249477907232359] | [nan, 0.9703254590941974, 0.967385397276143, 0.9883638482723315, 0.9660909281555922, 0.9783173801174915, 0.987878896953218, 0.9238406092751258, 0.9828454227159885, 0.5529433313441302, 0.0, 0.8918872346291701, 0.7785492786841041, 0.9525571866687186, 0.6544903660759959, 0.9202435561380515, 0.3583279897403014, 0.5679750294005819, 0.8882935470755648, 0.9144114645995461] |
| 0.27 | 19.0 | 14953 | 0.2799 | 0.7210 | 0.8089 | 0.9668 | [0.0, 0.9392661644355319, 0.932096490765189, 0.9772444850416163, 0.8748583460799624, 0.959030800837604, 0.9803660417493171, 0.8549763601588193, 0.9661359625948338, 0.5489573339508828, 0.0, 0.8082856800928263, 0.707609022556391, 0.8930480213758131, 0.6125057936760998, 0.8439663143164156, 0.3240623821315535, 0.5560068921314832, 0.813374539715939, 0.8289533147998521] | [nan, 0.9703971313191945, 0.9680462515437895, 0.9881404237858805, 0.9683475421909045, 0.9777759016962746, 0.988822374850258, 0.9210152318781449, 0.9816258632275899, 0.588252672130082, 0.0, 0.8922778237294366, 0.7930430093029527, 0.9508458460659089, 0.6517263239814098, 0.9221548711227611, 0.3959802821417121, 0.5906377936742327, 0.8980803856653308, 0.9218433516592297] |
| 0.0369 | 20.0 | 15740 | 0.2737 | 0.7224 | 0.8119 | 0.9668 | [0.0, 0.9392313580983768, 0.9322932027111482, 0.9772249946988713, 0.8749950826812657, 0.9591121585348171, 0.9803780030124933, 0.8554852055380204, 0.9661475962866876, 0.5609089467958914, 0.0, 0.8095003013989066, 0.7113799121381718, 0.8927260044840537, 0.6133653057361015, 0.8420100377966416, 0.33841086205511367, 0.553361761785151, 0.8141592920353983, 0.8270316181708587] | [nan, 0.9727824725573769, 0.9676994291705018, 0.9882968957337019, 0.9679484011220059, 0.9772700079950366, 0.9882492205666621, 0.9252107983136135, 0.9825945071781523, 0.6062795795494159, 0.0, 0.894776445179671, 0.7968855332344613, 0.9522349792248335, 0.6544510171692397, 0.9276157710790738, 0.42203029817249116, 0.5863404454740788, 0.8963814834175524, 0.9193914381006046] |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
| [
"unlabeled",
"klt_bin",
"rack",
"ceiling",
"pillar",
"wall",
"floor",
"sign",
"box",
"bracket",
"barcode",
"floor_decal",
"fuse_box",
"pallet",
"lamp",
"not-known-2",
"wire",
"fire_extinguisher",
"crate",
"cart"
] |
q2-jlbar/segformer-b0-finetuned-brooks-or-dunn |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-brooks-or-dunn
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the q2-jlbar/BrooksOrDunn dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1158
- Mean Iou: nan
- Mean Accuracy: nan
- Overall Accuracy: nan
- Per Category Iou: [nan, nan]
- Per Category Accuracy: [nan, nan]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------:|:---------------------:|
| 0.5153 | 4.0 | 20 | 0.5276 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.4082 | 8.0 | 40 | 0.3333 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.3157 | 12.0 | 60 | 0.2773 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.2911 | 16.0 | 80 | 0.2389 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.2395 | 20.0 | 100 | 0.1982 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.2284 | 24.0 | 120 | 0.1745 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.1818 | 28.0 | 140 | 0.1595 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.1549 | 32.0 | 160 | 0.1556 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.1351 | 36.0 | 180 | 0.1387 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.1254 | 40.0 | 200 | 0.1263 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.1412 | 44.0 | 220 | 0.1190 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.1179 | 48.0 | 240 | 0.1158 | nan | nan | nan | [nan, nan] | [nan, nan] |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
| [
"dunn",
"other"
] |
chainyo/segformer-sidewalk |
# SegFormer (b0-sized) model fine-tuned on sidewalk-semantic dataset
SegFormer model fine-tuned on segments/sidewalk-semantic at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor(reduce_labels=True)
model = SegformerForSemanticSegmentation.from_pretrained("ChainYo/segformer-sidewalk")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
chainyo/segformer-b1-sidewalk |
# SegFormer (b1-sized) model fine-tuned on sidewalk-semantic dataset
SegFormer model fine-tuned on segments/sidewalk-semantic at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor(reduce_labels=True)
model = SegformerForSemanticSegmentation.from_pretrained("chainyo/segformer-b1-sidewalk")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
Matthijs/deeplabv3_mobilenet_v2_1.0_513 |
# MobileNetV2 with DeepLabV3+
MobileNet V2 model pre-trained on PASCAL VOC at resolution 513x513. It was introduced in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. It was first released in [this repository](https://github.com/tensorflow/models/tree/master/research/deeplab).
Disclaimer: The team releasing MobileNet V2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):
> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.
The model in this repo adds a [DeepLabV3+](https://arxiv.org/abs/1802.02611) head to the MobileNetV2 backbone for semantic segmentation.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MobileNetV2FeatureExtractor, MobileNetV2ForSemanticSegmentation
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileNetV2FeatureExtractor.from_pretrained("Matthijs/deeplabv3_mobilenet_v2_1.0_513")
model = MobileNetV2ForSemanticSegmentation.from_pretrained("Matthijs/deeplabv3_mobilenet_v2_1.0_513")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_mask = logits.argmax(1).squeeze(0)
```
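As a hedged follow-up (not part of the original card), the integer ids in `predicted_mask` can be translated back to PASCAL VOC label names through the `id2label` mapping that segmentation checkpoints normally ship in their config:
```python
import torch

# Continuing from the snippet above: report which classes were predicted,
# assuming the checkpoint's config carries an id2label mapping.
for class_id in torch.unique(predicted_mask).tolist():
    print(class_id, model.config.id2label[class_id])
```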
Currently, both the feature extractor and model support PyTorch.
### BibTeX entry and citation info
```bibtex
@inproceedings{deeplabv3plus2018,
title={Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation},
author={Liang-Chieh Chen and Yukun Zhu and George Papandreou and Florian Schroff and Hartwig Adam},
booktitle={ECCV},
year={2018}
}
```
| [
"background",
"aeroplane",
"bicycle",
"bird",
"boat",
"bottle",
"bus",
"car",
"cat",
"chair",
"cow",
"diningtable",
"dog",
"horse",
"motorbike",
"person",
"pottedplant",
"sheep",
"sofa",
"train",
"tvmonitor"
] |
userGagan/segformer-b0-finetuned-segments-sidewalk-2 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the userGagan/ResizedSample dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3429
- Mean Iou: 0.8143
- Mean Accuracy: 0.9007
- Overall Accuracy: 0.9061
- Per Category Iou: [0.8822819675417668, 0.7774253195321242, 0.7832033563111727]
- Per Category Accuracy: [0.9319684170082266, 0.8657193844491432, 0.9044945609610779]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
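For reference, the list above corresponds roughly to a `TrainingArguments` setup like the following sketch; `output_dir` is a placeholder and the Adam betas/epsilon shown are the library defaults:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the configuration listed above
# (not taken verbatim from the training script); output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="segformer-b0-finetuned-segments-sidewalk-2",  # placeholder
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```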
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------------------------------------------------:|:------------------------------------------------------------:|
| 0.7949 | 0.5 | 20 | 0.8960 | 0.7129 | 0.8533 | 0.8427 | [0.7978191889735743, 0.6994730230171242, 0.6413103816527537] | [0.826874349660607, 0.8237981626592454, 0.9091007880329902] |
| 0.4881 | 1.0 | 40 | 0.6195 | 0.7364 | 0.8610 | 0.8552 | [0.8041892620489134, 0.6981663805103046, 0.7069887055480671] | [0.8308827565320059, 0.887905283397269, 0.8642919506720577] |
| 0.3115 | 1.5 | 60 | 0.4767 | 0.7352 | 0.8536 | 0.8588 | [0.8276338695141907, 0.7016825436162023, 0.6763414045904438] | [0.8633649830215921, 0.8776778472775076, 0.8196451790592317] |
| 0.5863 | 2.0 | 80 | 0.4895 | 0.7543 | 0.8748 | 0.8668 | [0.8156517914197925, 0.7259786638902507, 0.7213518497027839] | [0.8402281798360435, 0.8932153836673491, 0.8909222571543128] |
| 0.5182 | 2.5 | 100 | 0.4058 | 0.7904 | 0.8866 | 0.8919 | [0.860991170688589, 0.7583876635226005, 0.7518265397248736] | [0.9088903949664655, 0.8761789935147187, 0.8746304338865427] |
| 0.4755 | 3.0 | 120 | 0.3683 | 0.7896 | 0.8861 | 0.8895 | [0.8547537413009911, 0.7465075384127533, 0.7674680941571024] | [0.8979683913158062, 0.8865259395690547, 0.8738060532025316] |
| 0.6616 | 3.5 | 140 | 0.3697 | 0.7915 | 0.8874 | 0.8898 | [0.8551700094228354, 0.7431970428539307, 0.7761922571371438] | [0.8899387313627766, 0.903193218309171, 0.8690639906770039] |
| 0.5087 | 4.0 | 160 | 0.3367 | 0.8061 | 0.8987 | 0.8987 | [0.8640367246398447, 0.7643869962764198, 0.7899951558528526] | [0.9012200396208266, 0.8918889478830869, 0.902900133774502] |
| 0.5478 | 4.5 | 180 | 0.3297 | 0.8131 | 0.8991 | 0.9040 | [0.8775309087721331, 0.7692790103652185, 0.792538025793261] | [0.9196387801394476, 0.8895118205906903, 0.8882327151727265] |
| 0.389 | 5.0 | 200 | 0.3429 | 0.8143 | 0.9007 | 0.9061 | [0.8822819675417668, 0.7774253195321242, 0.7832033563111727] | [0.9319684170082266, 0.8657193844491432, 0.9044945609610779] |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| [
"sand",
"soil",
"bigrock"
] |
imadd/segformer-b0-finetuned-segments-water-2 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-water-2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the imadd/water_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5845
- Mean Iou: nan
- Mean Accuracy: nan
- Overall Accuracy: nan
- Per Category Iou: [nan, nan]
- Per Category Accuracy: [nan, nan]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------:|:---------------------:|
| 0.5241 | 6.67 | 20 | 0.5845 | nan | nan | nan | [nan, nan] | [nan, nan] |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| [
"water",
"unlabeled"
] |
sayakpaul/mit-b0-finetuned-sidewalk-semantic |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mit-b0-finetuned-sidewalk-semantic
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2125
- Validation Loss: 0.5151
- Epoch: 49
## Model description
The model was fine-tuned from [this model](https://huggingface.co/nvidia/mit-b0). More information about the model is available
[here](https://huggingface.co/docs/transformers/model_doc/segformer).
## Intended uses & limitations
This fine-tuned model is just for demonstration purposes. Before using it in production, it should be thoroughly inspected and adjusted
if needed.
## Training and evaluation data
[`segments/sidewalk-semantic`](https://huggingface.co/datasets/segments/sidewalk-semantic)
## Training procedure
More information is available here: [deep-diver/segformer-tf-transformers](https://github.com/deep-diver/segformer-tf-transformers).
### Training hyperparameters
The following hyperparameters were used during training (a short Keras sketch of this setup follows the list):
- optimizer: {'name': 'Adam', 'learning_rate': 6e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
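A hedged Keras sketch of the optimizer settings above; the checkpoint name, `num_labels`, and the compile call are illustrative assumptions rather than an excerpt of the actual training code:
```python
import tensorflow as tf
from transformers import TFSegformerForSemanticSegmentation

# Hypothetical sketch: rebuild the Adam optimizer listed above and compile the
# TensorFlow SegFormer model; num_labels=35 assumes the sidewalk label set.
model = TFSegformerForSemanticSegmentation.from_pretrained("nvidia/mit-b0", num_labels=35)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=6e-5, beta_1=0.9, beta_2=0.999, epsilon=1e-7, amsgrad=False
)
model.compile(optimizer=optimizer)  # the model computes its own loss when labels are passed
```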
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.0785 | 1.1753 | 0 |
| 1.1312 | 0.8807 | 1 |
| 0.9315 | 0.7585 | 2 |
| 0.7952 | 0.7261 | 3 |
| 0.7273 | 0.6701 | 4 |
| 0.6603 | 0.6396 | 5 |
| 0.6198 | 0.6238 | 6 |
| 0.5958 | 0.5925 | 7 |
| 0.5378 | 0.5714 | 8 |
| 0.5236 | 0.5786 | 9 |
| 0.4960 | 0.5588 | 10 |
| 0.4633 | 0.5624 | 11 |
| 0.4562 | 0.5450 | 12 |
| 0.4167 | 0.5438 | 13 |
| 0.4100 | 0.5248 | 14 |
| 0.3947 | 0.5354 | 15 |
| 0.3867 | 0.5069 | 16 |
| 0.3803 | 0.5285 | 17 |
| 0.3696 | 0.5318 | 18 |
| 0.3386 | 0.5162 | 19 |
| 0.3349 | 0.5312 | 20 |
| 0.3233 | 0.5304 | 21 |
| 0.3328 | 0.5178 | 22 |
| 0.3140 | 0.5131 | 23 |
| 0.3081 | 0.5049 | 24 |
| 0.3046 | 0.5011 | 25 |
| 0.3209 | 0.5197 | 26 |
| 0.2966 | 0.5151 | 27 |
| 0.2829 | 0.5166 | 28 |
| 0.2968 | 0.5210 | 29 |
| 0.2818 | 0.5300 | 30 |
| 0.2739 | 0.5221 | 31 |
| 0.2602 | 0.5340 | 32 |
| 0.2570 | 0.5124 | 33 |
| 0.2557 | 0.5234 | 34 |
| 0.2593 | 0.5098 | 35 |
| 0.2582 | 0.5329 | 36 |
| 0.2439 | 0.5373 | 37 |
| 0.2413 | 0.5141 | 38 |
| 0.2423 | 0.5210 | 39 |
| 0.2340 | 0.5043 | 40 |
| 0.2244 | 0.5300 | 41 |
| 0.2246 | 0.4978 | 42 |
| 0.2270 | 0.5385 | 43 |
| 0.2254 | 0.5125 | 44 |
| 0.2176 | 0.5510 | 45 |
| 0.2194 | 0.5384 | 46 |
| 0.2136 | 0.5186 | 47 |
| 0.2121 | 0.5356 | 48 |
| 0.2125 | 0.5151 | 49 |
### Framework versions
- Transformers 4.21.0.dev0
- TensorFlow 2.8.0
- Datasets 2.3.2
- Tokenizers 0.12.1
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
koushikn/segformer-finetuned-Maize-10k-steps-sem |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-finetuned-Maize-10k-steps-sem
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the koushikn/Maize_sem_seg dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0756
- Mean Iou: 0.9172
- Mean Accuracy: 0.9711
- Overall Accuracy: 0.9804
- Accuracy Background: 0.9834
- Accuracy Maize: 0.9588
- Iou Background: 0.9779
- Iou Maize: 0.8566
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Background | Accuracy Maize | Iou Background | Iou Maize |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------:|:--------------:|:--------------:|:---------:|
| 0.0529 | 1.0 | 678 | 69.3785 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.3755 | 2.0 | 1356 | 0.9455 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.0603 | 3.0 | 2034 | 0.0920 | 0.8356 | 0.8602 | 0.9641 | 0.9976 | 0.7227 | 0.9607 | 0.7106 |
| 0.0341 | 4.0 | 2712 | 24.6203 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.0332 | 5.0 | 3390 | 101.5635 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.0331 | 6.0 | 4068 | 9.6824 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.0302 | 7.0 | 4746 | 260.7923 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.0305 | 8.0 | 5424 | 172.8153 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.0313 | 9.0 | 6102 | 304.2714 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.0301 | 10.0 | 6780 | 547.2355 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.03 | 11.0 | 7458 | 224.2607 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.0285 | 12.0 | 8136 | 116.3474 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.0284 | 13.0 | 8814 | 96.8429 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.0281 | 14.0 | 9492 | 54.2593 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.028 | 14.75 | 10000 | 0.0756 | 0.9172 | 0.9711 | 0.9804 | 0.9834 | 0.9588 | 0.9779 | 0.8566 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| [
"background",
"maize"
] |
plant/segformer-b5-finetuned-segments-instryde-foot-test |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b5-finetuned-segments-instryde-foot-test
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the inStryde/inStrydeSegmentationFoot dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0496
- Mean Iou: 0.4672
- Mean Accuracy: 0.9344
- Overall Accuracy: 0.9344
- Per Category Iou: [0.0, 0.9343870058298716]
- Per Category Accuracy: [nan, 0.9343870058298716]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------------:|:-------------------------:|
| 0.1392 | 0.23 | 20 | 0.2371 | 0.4064 | 0.8128 | 0.8128 | [0.0, 0.8127920708469037] | [nan, 0.8127920708469037] |
| 0.2273 | 0.45 | 40 | 0.0993 | 0.4449 | 0.8898 | 0.8898 | [0.0, 0.889800913515142] | [nan, 0.889800913515142] |
| 0.0287 | 0.68 | 60 | 0.0607 | 0.4190 | 0.8379 | 0.8379 | [0.0, 0.8379005425233161] | [nan, 0.8379005425233161] |
| 0.03 | 0.91 | 80 | 0.0572 | 0.4072 | 0.8144 | 0.8144 | [0.0, 0.8144304164916533] | [nan, 0.8144304164916533] |
| 0.0239 | 1.14 | 100 | 0.0577 | 0.3973 | 0.7946 | 0.7946 | [0.0, 0.7946284254068925] | [nan, 0.7946284254068925] |
| 0.0196 | 1.36 | 120 | 0.0425 | 0.4227 | 0.8455 | 0.8455 | [0.0, 0.8454754171184029] | [nan, 0.8454754171184029] |
| 0.0295 | 1.59 | 140 | 0.0368 | 0.4479 | 0.8958 | 0.8958 | [0.0, 0.895802316554768] | [nan, 0.895802316554768] |
| 0.0297 | 1.82 | 160 | 0.0441 | 0.4561 | 0.9121 | 0.9121 | [0.0, 0.9121241975954804] | [nan, 0.9121241975954804] |
| 0.0276 | 2.05 | 180 | 0.0332 | 0.4629 | 0.9258 | 0.9258 | [0.0, 0.925774145806165] | [nan, 0.925774145806165] |
| 0.0148 | 2.27 | 200 | 0.0395 | 0.4310 | 0.8621 | 0.8621 | [0.0, 0.8620666905637888] | [nan, 0.8620666905637888] |
| 0.012 | 2.5 | 220 | 0.0372 | 0.4381 | 0.8761 | 0.8761 | [0.0, 0.8761025846276997] | [nan, 0.8761025846276997] |
| 0.0117 | 2.73 | 240 | 0.0339 | 0.4471 | 0.8941 | 0.8941 | [0.0, 0.8941320836457919] | [nan, 0.8941320836457919] |
| 0.0198 | 2.95 | 260 | 0.0297 | 0.4485 | 0.8969 | 0.8969 | [0.0, 0.8969491585060927] | [nan, 0.8969491585060927] |
| 0.0247 | 3.18 | 280 | 0.0303 | 0.4565 | 0.9130 | 0.9130 | [0.0, 0.9130423308930413] | [nan, 0.9130423308930413] |
| 0.0115 | 3.41 | 300 | 0.0307 | 0.4533 | 0.9066 | 0.9066 | [0.0, 0.9065626188900153] | [nan, 0.9065626188900153] |
| 0.0164 | 3.64 | 320 | 0.0330 | 0.4549 | 0.9097 | 0.9097 | [0.0, 0.9097436483868343] | [nan, 0.9097436483868343] |
| 0.0114 | 3.86 | 340 | 0.0362 | 0.4425 | 0.8850 | 0.8850 | [0.0, 0.8849727418868903] | [nan, 0.8849727418868903] |
| 0.012 | 4.09 | 360 | 0.0321 | 0.4582 | 0.9164 | 0.9164 | [0.0, 0.9164498699219532] | [nan, 0.9164498699219532] |
| 0.0153 | 4.32 | 380 | 0.0321 | 0.4572 | 0.9144 | 0.9144 | [0.0, 0.9144310762281544] | [nan, 0.9144310762281544] |
| 0.0115 | 4.55 | 400 | 0.0307 | 0.4573 | 0.9145 | 0.9145 | [0.0, 0.9145300367033407] | [nan, 0.9145300367033407] |
| 0.0139 | 4.77 | 420 | 0.0330 | 0.4678 | 0.9357 | 0.9357 | [0.0, 0.935664695520609] | [nan, 0.935664695520609] |
| 0.014 | 5.0 | 440 | 0.0317 | 0.4635 | 0.9271 | 0.9271 | [0.0, 0.9270562337402442] | [nan, 0.9270562337402442] |
| 0.0197 | 5.23 | 460 | 0.0320 | 0.4678 | 0.9356 | 0.9356 | [0.0, 0.9355745315321061] | [nan, 0.9355745315321061] |
| 0.0086 | 5.45 | 480 | 0.0337 | 0.4607 | 0.9214 | 0.9214 | [0.0, 0.9213528116870122] | [nan, 0.9213528116870122] |
| 0.3103 | 5.68 | 500 | 0.0338 | 0.4548 | 0.9096 | 0.9096 | [0.0, 0.9095853116265363] | [nan, 0.9095853116265363] |
| 0.0088 | 5.91 | 520 | 0.0305 | 0.4635 | 0.9270 | 0.9270 | [0.0, 0.9270243464760175] | [nan, 0.9270243464760175] |
| 0.0119 | 6.14 | 540 | 0.0299 | 0.4680 | 0.9359 | 0.9359 | [0.0, 0.9359494817769782] | [nan, 0.9359494817769782] |
| 0.0114 | 6.36 | 560 | 0.0314 | 0.4574 | 0.9148 | 0.9148 | [0.0, 0.914796130425508] | [nan, 0.914796130425508] |
| 0.0122 | 6.59 | 580 | 0.0289 | 0.4613 | 0.9227 | 0.9227 | [0.0, 0.9226920767845322] | [nan, 0.9226920767845322] |
| 0.0164 | 6.82 | 600 | 0.0312 | 0.4620 | 0.9240 | 0.9240 | [0.0, 0.9239807620836238] | [nan, 0.9239807620836238] |
| 0.0062 | 7.05 | 620 | 0.0335 | 0.4605 | 0.9210 | 0.9210 | [0.0, 0.9209954544155065] | [nan, 0.9209954544155065] |
| 0.0089 | 7.27 | 640 | 0.0309 | 0.4659 | 0.9317 | 0.9317 | [0.0, 0.9317029778306545] | [nan, 0.9317029778306545] |
| 0.0251 | 7.5 | 660 | 0.0291 | 0.4734 | 0.9468 | 0.9468 | [0.0, 0.9467878529315391] | [nan, 0.9467878529315391] |
| 0.0065 | 7.73 | 680 | 0.0326 | 0.4598 | 0.9195 | 0.9195 | [0.0, 0.9195297398219151] | [nan, 0.9195297398219151] |
| 0.0056 | 7.95 | 700 | 0.0310 | 0.4606 | 0.9213 | 0.9213 | [0.0, 0.9212714441851925] | [nan, 0.9212714441851925] |
| 0.0099 | 8.18 | 720 | 0.0345 | 0.4503 | 0.9006 | 0.9006 | [0.0, 0.9006183930138303] | [nan, 0.9006183930138303] |
| 0.0103 | 8.41 | 740 | 0.0335 | 0.4539 | 0.9078 | 0.9078 | [0.0, 0.9077512441530853] | [nan, 0.9077512441530853] |
| 0.0065 | 8.64 | 760 | 0.0334 | 0.4544 | 0.9088 | 0.9088 | [0.0, 0.9087936278250467] | [nan, 0.9087936278250467] |
| 0.0047 | 8.86 | 780 | 0.0341 | 0.4557 | 0.9114 | 0.9114 | [0.0, 0.9114215782216583] | [nan, 0.9114215782216583] |
| 0.0105 | 9.09 | 800 | 0.0315 | 0.4597 | 0.9195 | 0.9195 | [0.0, 0.9194703635368034] | [nan, 0.9194703635368034] |
| 0.0087 | 9.32 | 820 | 0.0329 | 0.4583 | 0.9166 | 0.9166 | [0.0, 0.9165708216138474] | [nan, 0.9165708216138474] |
| 0.0122 | 9.55 | 840 | 0.0357 | 0.4537 | 0.9073 | 0.9073 | [0.0, 0.9073004242105703] | [nan, 0.9073004242105703] |
| 0.0057 | 9.77 | 860 | 0.0319 | 0.4621 | 0.9241 | 0.9241 | [0.0, 0.9241050124580242] | [nan, 0.9241050124580242] |
| 0.0068 | 10.0 | 880 | 0.0342 | 0.4539 | 0.9078 | 0.9078 | [0.0, 0.907799624829843] | [nan, 0.907799624829843] |
| 0.0095 | 10.23 | 900 | 0.0340 | 0.4578 | 0.9156 | 0.9156 | [0.0, 0.9155933120311748] | [nan, 0.9155933120311748] |
| 0.0043 | 10.45 | 920 | 0.0319 | 0.4636 | 0.9272 | 0.9272 | [0.0, 0.9271771854321385] | [nan, 0.9271771854321385] |
| 0.0049 | 10.68 | 940 | 0.0308 | 0.4659 | 0.9319 | 0.9319 | [0.0, 0.9318525181042692] | [nan, 0.9318525181042692] |
| 0.005 | 10.91 | 960 | 0.0319 | 0.4640 | 0.9281 | 0.9281 | [0.0, 0.9280612323438019] | [nan, 0.9280612323438019] |
| 0.0043 | 11.14 | 980 | 0.0313 | 0.4653 | 0.9306 | 0.9306 | [0.0, 0.930638602941985] | [nan, 0.930638602941985] |
| 0.0084 | 11.36 | 1000 | 0.0321 | 0.4632 | 0.9264 | 0.9264 | [0.0, 0.9264294840640648] | [nan, 0.9264294840640648] |
| 0.0044 | 11.59 | 1020 | 0.0320 | 0.4643 | 0.9285 | 0.9285 | [0.0, 0.9285241474555063] | [nan, 0.9285241474555063] |
| 0.0044 | 11.82 | 1040 | 0.0321 | 0.4661 | 0.9321 | 0.9321 | [0.0, 0.9321098153397533] | [nan, 0.9321098153397533] |
| 0.0057 | 12.05 | 1060 | 0.0338 | 0.4626 | 0.9253 | 0.9253 | [0.0, 0.9252518544093489] | [nan, 0.9252518544093489] |
| 0.0064 | 12.27 | 1080 | 0.0348 | 0.4616 | 0.9231 | 0.9231 | [0.0, 0.9231450958487181] | [nan, 0.9231450958487181] |
| 0.0075 | 12.5 | 1100 | 0.0331 | 0.4618 | 0.9237 | 0.9237 | [0.0, 0.9236706859280404] | [nan, 0.9236706859280404] |
| 0.0103 | 12.73 | 1120 | 0.0317 | 0.4704 | 0.9408 | 0.9408 | [0.0, 0.9408425274945187] | [nan, 0.9408425274945187] |
| 0.0053 | 12.95 | 1140 | 0.0320 | 0.4704 | 0.9407 | 0.9407 | [0.0, 0.9407292727284723] | [nan, 0.9407292727284723] |
| 0.0073 | 13.18 | 1160 | 0.0331 | 0.4652 | 0.9305 | 0.9305 | [0.0, 0.9304681710124976] | [nan, 0.9304681710124976] |
| 0.0052 | 13.41 | 1180 | 0.0342 | 0.4664 | 0.9328 | 0.9328 | [0.0, 0.9328047377877275] | [nan, 0.9328047377877275] |
| 0.0089 | 13.64 | 1200 | 0.0322 | 0.4676 | 0.9353 | 0.9353 | [0.0, 0.9352996413232555] | [nan, 0.9352996413232555] |
| 0.0054 | 13.86 | 1220 | 0.0332 | 0.4655 | 0.9311 | 0.9311 | [0.0, 0.9310509382552609] | [nan, 0.9310509382552609] |
| 0.0057 | 14.09 | 1240 | 0.0333 | 0.4661 | 0.9321 | 0.9321 | [0.0, 0.9321439017256508] | [nan, 0.9321439017256508] |
| 0.0047 | 14.32 | 1260 | 0.0346 | 0.4639 | 0.9278 | 0.9278 | [0.0, 0.9277522557490538] | [nan, 0.9277522557490538] |
| 0.0092 | 14.55 | 1280 | 0.0380 | 0.4583 | 0.9166 | 0.9166 | [0.0, 0.9166290983381238] | [nan, 0.9166290983381238] |
| 0.0066 | 14.77 | 1300 | 0.0338 | 0.4638 | 0.9277 | 0.9277 | [0.0, 0.927687381659765] | [nan, 0.927687381659765] |
| 0.0076 | 15.0 | 1320 | 0.0347 | 0.4640 | 0.9280 | 0.9280 | [0.0, 0.9279897608895007] | [nan, 0.9279897608895007] |
| 0.0054 | 15.23 | 1340 | 0.0345 | 0.4647 | 0.9295 | 0.9295 | [0.0, 0.9294664710914461] | [nan, 0.9294664710914461] |
| 0.0036 | 15.45 | 1360 | 0.0349 | 0.4666 | 0.9332 | 0.9332 | [0.0, 0.9331950818842955] | [nan, 0.9331950818842955] |
| 0.004 | 15.68 | 1380 | 0.0352 | 0.4617 | 0.9234 | 0.9234 | [0.0, 0.9234408777134413] | [nan, 0.9234408777134413] |
| 0.0042 | 15.91 | 1400 | 0.0357 | 0.4622 | 0.9244 | 0.9244 | [0.0, 0.9244282833436326] | [nan, 0.9244282833436326] |
| 0.0048 | 16.14 | 1420 | 0.0370 | 0.4586 | 0.9172 | 0.9172 | [0.0, 0.9171546884174461] | [nan, 0.9171546884174461] |
| 0.0043 | 16.36 | 1440 | 0.0345 | 0.4647 | 0.9294 | 0.9294 | [0.0, 0.9294411811922318] | [nan, 0.9294411811922318] |
| 0.0027 | 16.59 | 1460 | 0.0354 | 0.4667 | 0.9334 | 0.9334 | [0.0, 0.9333754098613014] | [nan, 0.9333754098613014] |
| 0.0057 | 16.82 | 1480 | 0.0364 | 0.4689 | 0.9379 | 0.9379 | [0.0, 0.9378913062122988] | [nan, 0.9378913062122988] |
| 0.0035 | 17.05 | 1500 | 0.0363 | 0.4662 | 0.9325 | 0.9325 | [0.0, 0.9324682721720945] | [nan, 0.9324682721720945] |
| 0.0029 | 17.27 | 1520 | 0.0348 | 0.4674 | 0.9347 | 0.9347 | [0.0, 0.9347212723238338] | [nan, 0.9347212723238338] |
| 0.0043 | 17.5 | 1540 | 0.0362 | 0.4648 | 0.9295 | 0.9295 | [0.0, 0.9295390421065827] | [nan, 0.9295390421065827] |
| 0.0041 | 17.73 | 1560 | 0.0347 | 0.4664 | 0.9328 | 0.9328 | [0.0, 0.9328487202211436] | [nan, 0.9328487202211436] |
| 0.003 | 17.95 | 1580 | 0.0364 | 0.4649 | 0.9297 | 0.9297 | [0.0, 0.9297237683269303] | [nan, 0.9297237683269303] |
| 0.0121 | 18.18 | 1600 | 0.0364 | 0.4650 | 0.9300 | 0.9300 | [0.0, 0.9299920611707684] | [nan, 0.9299920611707684] |
| 0.004 | 18.41 | 1620 | 0.0369 | 0.4667 | 0.9334 | 0.9334 | [0.0, 0.9334259896597299] | [nan, 0.9334259896597299] |
| 0.0035 | 18.64 | 1640 | 0.0368 | 0.4636 | 0.9272 | 0.9272 | [0.0, 0.9272475573256042] | [nan, 0.9272475573256042] |
| 0.0031 | 18.86 | 1660 | 0.0358 | 0.4665 | 0.9330 | 0.9330 | [0.0, 0.9329784683997212] | [nan, 0.9329784683997212] |
| 0.0032 | 19.09 | 1680 | 0.0357 | 0.4661 | 0.9322 | 0.9322 | [0.0, 0.9321515986514985] | [nan, 0.9321515986514985] |
| 0.0047 | 19.32 | 1700 | 0.0371 | 0.4621 | 0.9243 | 0.9243 | [0.0, 0.9242886391175364] | [nan, 0.9242886391175364] |
| 0.0056 | 19.55 | 1720 | 0.0359 | 0.4663 | 0.9326 | 0.9326 | [0.0, 0.9326277084932278] | [nan, 0.9326277084932278] |
| 0.0033 | 19.77 | 1740 | 0.0348 | 0.4694 | 0.9389 | 0.9389 | [0.0, 0.9388523223824404] | [nan, 0.9388523223824404] |
| 0.0049 | 20.0 | 1760 | 0.0394 | 0.4612 | 0.9224 | 0.9224 | [0.0, 0.9223918966764674] | [nan, 0.9223918966764674] |
| 0.0058 | 20.23 | 1780 | 0.0368 | 0.4660 | 0.9321 | 0.9321 | [0.0, 0.9320724302713497] | [nan, 0.9320724302713497] |
| 0.003 | 20.45 | 1800 | 0.0370 | 0.4686 | 0.9372 | 0.9372 | [0.0, 0.9371787907909581] | [nan, 0.9371787907909581] |
| 0.0058 | 20.68 | 1820 | 0.0363 | 0.4665 | 0.9330 | 0.9330 | [0.0, 0.9329949618122522] | [nan, 0.9329949618122522] |
| 0.0083 | 20.91 | 1840 | 0.0351 | 0.4661 | 0.9322 | 0.9322 | [0.0, 0.9321834859157253] | [nan, 0.9321834859157253] |
| 0.0036 | 21.14 | 1860 | 0.0353 | 0.4667 | 0.9333 | 0.9333 | [0.0, 0.9333149340153543] | [nan, 0.9333149340153543] |
| 0.0032 | 21.36 | 1880 | 0.0373 | 0.4657 | 0.9314 | 0.9314 | [0.0, 0.93137640826254] | [nan, 0.93137640826254] |
| 0.005 | 21.59 | 1900 | 0.0391 | 0.4647 | 0.9294 | 0.9294 | [0.0, 0.929370809298766] | [nan, 0.929370809298766] |
| 0.0049 | 21.82 | 1920 | 0.0364 | 0.4701 | 0.9403 | 0.9403 | [0.0, 0.9402795523467927] | [nan, 0.9402795523467927] |
| 0.0044 | 22.05 | 1940 | 0.0368 | 0.4672 | 0.9343 | 0.9343 | [0.0, 0.9343111361322288] | [nan, 0.9343111361322288] |
| 0.0038 | 22.27 | 1960 | 0.0367 | 0.4663 | 0.9325 | 0.9325 | [0.0, 0.932513354166346] | [nan, 0.932513354166346] |
| 0.0032 | 22.5 | 1980 | 0.0378 | 0.4679 | 0.9358 | 0.9358 | [0.0, 0.9358483221801213] | [nan, 0.9358483221801213] |
| 0.0039 | 22.73 | 2000 | 0.0381 | 0.4653 | 0.9306 | 0.9306 | [0.0, 0.9305517376359882] | [nan, 0.9305517376359882] |
| 0.0032 | 22.95 | 2020 | 0.0385 | 0.4651 | 0.9301 | 0.9301 | [0.0, 0.9301262075926875] | [nan, 0.9301262075926875] |
| 0.0058 | 23.18 | 2040 | 0.0381 | 0.4654 | 0.9309 | 0.9309 | [0.0, 0.9308673115957486] | [nan, 0.9308673115957486] |
| 0.0049 | 23.41 | 2060 | 0.0377 | 0.4658 | 0.9316 | 0.9316 | [0.0, 0.9316194112071639] | [nan, 0.9316194112071639] |
| 0.0032 | 23.64 | 2080 | 0.0373 | 0.4692 | 0.9384 | 0.9384 | [0.0, 0.9384256927783043] | [nan, 0.9384256927783043] |
| 0.0056 | 23.86 | 2100 | 0.0390 | 0.4646 | 0.9292 | 0.9292 | [0.0, 0.9292465589243656] | [nan, 0.9292465589243656] |
| 0.003 | 24.09 | 2120 | 0.0383 | 0.4658 | 0.9317 | 0.9317 | [0.0, 0.9316765883706047] | [nan, 0.9316765883706047] |
| 0.0037 | 24.32 | 2140 | 0.0376 | 0.4668 | 0.9337 | 0.9337 | [0.0, 0.9336755899693663] | [nan, 0.9336755899693663] |
| 0.0025 | 24.55 | 2160 | 0.0390 | 0.4663 | 0.9326 | 0.9326 | [0.0, 0.9326145137632029] | [nan, 0.9326145137632029] |
| 0.0039 | 24.77 | 2180 | 0.0381 | 0.4688 | 0.9376 | 0.9376 | [0.0, 0.937613117320942] | [nan, 0.937613117320942] |
| 0.0031 | 25.0 | 2200 | 0.0395 | 0.4645 | 0.9291 | 0.9291 | [0.0, 0.9290629322648534] | [nan, 0.9290629322648534] |
| 0.0026 | 25.23 | 2220 | 0.0389 | 0.4668 | 0.9336 | 0.9336 | [0.0, 0.9335678330074968] | [nan, 0.9335678330074968] |
| 0.0028 | 25.45 | 2240 | 0.0375 | 0.4680 | 0.9359 | 0.9359 | [0.0, 0.9359329883644473] | [nan, 0.9359329883644473] |
| 0.0039 | 25.68 | 2260 | 0.0404 | 0.4656 | 0.9312 | 0.9312 | [0.0, 0.9312004785288756] | [nan, 0.9312004785288756] |
| 0.004 | 25.91 | 2280 | 0.0371 | 0.4716 | 0.9431 | 0.9431 | [0.0, 0.9431021250112706] | [nan, 0.9431021250112706] |
| 0.0048 | 26.14 | 2300 | 0.0373 | 0.4700 | 0.9400 | 0.9400 | [0.0, 0.9399639783870323] | [nan, 0.9399639783870323] |
| 0.0033 | 26.36 | 2320 | 0.0385 | 0.4688 | 0.9377 | 0.9377 | [0.0, 0.9376560001935227] | [nan, 0.9376560001935227] |
| 0.0042 | 26.59 | 2340 | 0.0374 | 0.4686 | 0.9372 | 0.9372 | [0.0, 0.9371743925476165] | [nan, 0.9371743925476165] |
| 0.0048 | 26.82 | 2360 | 0.0393 | 0.4660 | 0.9320 | 0.9320 | [0.0, 0.9319789676003404] | [nan, 0.9319789676003404] |
| 0.0047 | 27.05 | 2380 | 0.0393 | 0.4650 | 0.9300 | 0.9300 | [0.0, 0.9300162515091472] | [nan, 0.9300162515091472] |
| 0.0048 | 27.27 | 2400 | 0.0389 | 0.4670 | 0.9340 | 0.9340 | [0.0, 0.9339867656857851] | [nan, 0.9339867656857851] |
| 0.004 | 27.5 | 2420 | 0.0388 | 0.4673 | 0.9346 | 0.9346 | [0.0, 0.9345750307327253] | [nan, 0.9345750307327253] |
| 0.0051 | 27.73 | 2440 | 0.0386 | 0.4655 | 0.9309 | 0.9309 | [0.0, 0.9309002984208107] | [nan, 0.9309002984208107] |
| 0.0045 | 27.95 | 2460 | 0.0395 | 0.4664 | 0.9328 | 0.9328 | [0.0, 0.932816832956917] | [nan, 0.932816832956917] |
| 0.0042 | 28.18 | 2480 | 0.0393 | 0.4642 | 0.9285 | 0.9285 | [0.0, 0.9284856628262672] | [nan, 0.9284856628262672] |
| 0.0035 | 28.41 | 2500 | 0.0396 | 0.4667 | 0.9333 | 0.9333 | [0.0, 0.9333083366503419] | [nan, 0.9333083366503419] |
| 0.0036 | 28.64 | 2520 | 0.0395 | 0.4664 | 0.9327 | 0.9327 | [0.0, 0.9327288680900848] | [nan, 0.9327288680900848] |
| 0.0035 | 28.86 | 2540 | 0.0377 | 0.4675 | 0.9349 | 0.9349 | [0.0, 0.9349378858084081] | [nan, 0.9349378858084081] |
| 0.0029 | 29.09 | 2560 | 0.0402 | 0.4658 | 0.9315 | 0.9315 | [0.0, 0.9315479397528627] | [nan, 0.9315479397528627] |
| 0.0042 | 29.32 | 2580 | 0.0398 | 0.4691 | 0.9383 | 0.9383 | [0.0, 0.9382893472347145] | [nan, 0.9382893472347145] |
| 0.0029 | 29.55 | 2600 | 0.0405 | 0.4668 | 0.9336 | 0.9336 | [0.0, 0.9336129150017483] | [nan, 0.9336129150017483] |
| 0.0023 | 29.77 | 2620 | 0.0402 | 0.4666 | 0.9332 | 0.9332 | [0.0, 0.9332071770534849] | [nan, 0.9332071770534849] |
| 0.0036 | 30.0 | 2640 | 0.0417 | 0.4648 | 0.9296 | 0.9296 | [0.0, 0.9296435003859459] | [nan, 0.9296435003859459] |
| 0.0045 | 30.23 | 2660 | 0.0395 | 0.4674 | 0.9348 | 0.9348 | [0.0, 0.9347960424606412] | [nan, 0.9347960424606412] |
| 0.0025 | 30.45 | 2680 | 0.0400 | 0.4695 | 0.9390 | 0.9390 | [0.0, 0.9390392477244589] | [nan, 0.9390392477244589] |
| 0.0032 | 30.68 | 2700 | 0.0404 | 0.4673 | 0.9347 | 0.9347 | [0.0, 0.9346926837421135] | [nan, 0.9346926837421135] |
| 0.0047 | 30.91 | 2720 | 0.0416 | 0.4651 | 0.9303 | 0.9303 | [0.0, 0.9302790465488084] | [nan, 0.9302790465488084] |
| 0.0024 | 31.14 | 2740 | 0.0403 | 0.4677 | 0.9355 | 0.9355 | [0.0, 0.9354997613952987] | [nan, 0.9354997613952987] |
| 0.0037 | 31.36 | 2760 | 0.0406 | 0.4677 | 0.9354 | 0.9354 | [0.0, 0.9354469824751994] | [nan, 0.9354469824751994] |
| 0.0031 | 31.59 | 2780 | 0.0414 | 0.4671 | 0.9343 | 0.9343 | [0.0, 0.9342858462330146] | [nan, 0.9342858462330146] |
| 0.0036 | 31.82 | 2800 | 0.0404 | 0.4670 | 0.9339 | 0.9339 | [0.0, 0.9339152942314839] | [nan, 0.9339152942314839] |
| 0.003 | 32.05 | 2820 | 0.0411 | 0.4678 | 0.9355 | 0.9355 | [0.0, 0.9355151552469944] | [nan, 0.9355151552469944] |
| 0.0038 | 32.27 | 2840 | 0.0423 | 0.4672 | 0.9344 | 0.9344 | [0.0, 0.9344221917766045] | [nan, 0.9344221917766045] |
| 0.0023 | 32.5 | 2860 | 0.0433 | 0.4657 | 0.9313 | 0.9313 | [0.0, 0.9313401227549717] | [nan, 0.9313401227549717] |
| 0.003 | 32.73 | 2880 | 0.0421 | 0.4682 | 0.9363 | 0.9363 | [0.0, 0.9363365271910399] | [nan, 0.9363365271910399] |
| 0.0031 | 32.95 | 2900 | 0.0428 | 0.4679 | 0.9357 | 0.9357 | [0.0, 0.9357086779540251] | [nan, 0.9357086779540251] |
| 0.0026 | 33.18 | 2920 | 0.0448 | 0.4656 | 0.9311 | 0.9311 | [0.0, 0.9311081154187018] | [nan, 0.9311081154187018] |
| 0.0031 | 33.41 | 2940 | 0.0456 | 0.4639 | 0.9279 | 0.9279 | [0.0, 0.9278929995359854] | [nan, 0.9278929995359854] |
| 0.0022 | 33.64 | 2960 | 0.0424 | 0.4674 | 0.9349 | 0.9349 | [0.0, 0.9348851068883088] | [nan, 0.9348851068883088] |
| 0.0025 | 33.86 | 2980 | 0.0434 | 0.4654 | 0.9308 | 0.9308 | [0.0, 0.9307782471680811] | [nan, 0.9307782471680811] |
| 0.0025 | 34.09 | 3000 | 0.0418 | 0.4675 | 0.9351 | 0.9351 | [0.0, 0.9350610366219732] | [nan, 0.9350610366219732] |
| 0.003 | 34.32 | 3020 | 0.0424 | 0.4674 | 0.9349 | 0.9349 | [0.0, 0.9348653147932716] | [nan, 0.9348653147932716] |
| 0.0021 | 34.55 | 3040 | 0.0412 | 0.4687 | 0.9374 | 0.9374 | [0.0, 0.9374437849522901] | [nan, 0.9374437849522901] |
| 0.0043 | 34.77 | 3060 | 0.0412 | 0.4676 | 0.9352 | 0.9352 | [0.0, 0.9352446632814854] | [nan, 0.9352446632814854] |
| 0.005 | 35.0 | 3080 | 0.0428 | 0.4675 | 0.9350 | 0.9350 | [0.0, 0.9349807686809888] | [nan, 0.9349807686809888] |
| 0.003 | 35.23 | 3100 | 0.0430 | 0.4672 | 0.9344 | 0.9344 | [0.0, 0.934393603194884] | [nan, 0.934393603194884] |
| 0.0027 | 35.45 | 3120 | 0.0452 | 0.4652 | 0.9303 | 0.9303 | [0.0, 0.9303428210772617] | [nan, 0.9303428210772617] |
| 0.0022 | 35.68 | 3140 | 0.0441 | 0.4653 | 0.9306 | 0.9306 | [0.0, 0.9305847244610502] | [nan, 0.9305847244610502] |
| 0.0029 | 35.91 | 3160 | 0.0425 | 0.4671 | 0.9342 | 0.9342 | [0.0, 0.9341692927844619] | [nan, 0.9341692927844619] |
| 0.0022 | 36.14 | 3180 | 0.0438 | 0.4679 | 0.9358 | 0.9358 | [0.0, 0.9358153353550592] | [nan, 0.9358153353550592] |
| 0.0028 | 36.36 | 3200 | 0.0443 | 0.4680 | 0.9359 | 0.9359 | [0.0, 0.935929689681941] | [nan, 0.935929689681941] |
| 0.0025 | 36.59 | 3220 | 0.0433 | 0.4682 | 0.9365 | 0.9365 | [0.0, 0.9364948639513379] | [nan, 0.9364948639513379] |
| 0.003 | 36.82 | 3240 | 0.0439 | 0.4680 | 0.9359 | 0.9359 | [0.0, 0.9359340879252827] | [nan, 0.9359340879252827] |
| 0.0027 | 37.05 | 3260 | 0.0462 | 0.4665 | 0.9331 | 0.9331 | [0.0, 0.9330587363407056] | [nan, 0.9330587363407056] |
| 0.004 | 37.27 | 3280 | 0.0447 | 0.4675 | 0.9350 | 0.9350 | [0.0, 0.9349917642893428] | [nan, 0.9349917642893428] |
| 0.0032 | 37.5 | 3300 | 0.0442 | 0.4683 | 0.9367 | 0.9367 | [0.0, 0.9366916853408749] | [nan, 0.9366916853408749] |
| 0.0019 | 37.73 | 3320 | 0.0454 | 0.4674 | 0.9347 | 0.9347 | [0.0, 0.9347102767154798] | [nan, 0.9347102767154798] |
| 0.0028 | 37.95 | 3340 | 0.0451 | 0.4674 | 0.9349 | 0.9349 | [0.0, 0.9348543191849176] | [nan, 0.9348543191849176] |
| 0.0023 | 38.18 | 3360 | 0.0457 | 0.4669 | 0.9337 | 0.9337 | [0.0, 0.9337228710852885] | [nan, 0.9337228710852885] |
| 0.0028 | 38.41 | 3380 | 0.0454 | 0.4675 | 0.9351 | 0.9351 | [0.0, 0.9350764304736688] | [nan, 0.9350764304736688] |
| 0.0024 | 38.64 | 3400 | 0.0467 | 0.4677 | 0.9354 | 0.9354 | [0.0, 0.9353568184866964] | [nan, 0.9353568184866964] |
| 0.0023 | 38.86 | 3420 | 0.0463 | 0.4669 | 0.9337 | 0.9337 | [0.0, 0.9337096763552637] | [nan, 0.9337096763552637] |
| 0.0029 | 39.09 | 3440 | 0.0456 | 0.4664 | 0.9328 | 0.9328 | [0.0, 0.9328289281261064] | [nan, 0.9328289281261064] |
| 0.0026 | 39.32 | 3460 | 0.0453 | 0.4686 | 0.9372 | 0.9372 | [0.0, 0.9371578991350854] | [nan, 0.9371578991350854] |
| 0.0037 | 39.55 | 3480 | 0.0458 | 0.4678 | 0.9356 | 0.9356 | [0.0, 0.9356097174788389] | [nan, 0.9356097174788389] |
| 0.0025 | 39.77 | 3500 | 0.0468 | 0.4671 | 0.9342 | 0.9342 | [0.0, 0.9342275695087382] | [nan, 0.9342275695087382] |
| 0.0048 | 40.0 | 3520 | 0.0459 | 0.4668 | 0.9335 | 0.9335 | [0.0, 0.933527149256587] | [nan, 0.933527149256587] |
| 0.0027 | 40.23 | 3540 | 0.0468 | 0.4658 | 0.9315 | 0.9315 | [0.0, 0.9315490393136981] | [nan, 0.9315490393136981] |
| 0.0019 | 40.45 | 3560 | 0.0465 | 0.4662 | 0.9324 | 0.9324 | [0.0, 0.9323792077444268] | [nan, 0.9323792077444268] |
| 0.0033 | 40.68 | 3580 | 0.0459 | 0.4674 | 0.9348 | 0.9348 | [0.0, 0.9348015402648182] | [nan, 0.9348015402648182] |
| 0.004 | 40.91 | 3600 | 0.0467 | 0.4667 | 0.9333 | 0.9333 | [0.0, 0.9333358256712269] | [nan, 0.9333358256712269] |
| 0.0022 | 41.14 | 3620 | 0.0469 | 0.4665 | 0.9331 | 0.9331 | [0.0, 0.9330521389756931] | [nan, 0.9330521389756931] |
| 0.0036 | 41.36 | 3640 | 0.0458 | 0.4676 | 0.9352 | 0.9352 | [0.0, 0.9352479619639916] | [nan, 0.9352479619639916] |
| 0.0024 | 41.59 | 3660 | 0.0468 | 0.4671 | 0.9342 | 0.9342 | [0.0, 0.9341769897103097] | [nan, 0.9341769897103097] |
| 0.0021 | 41.82 | 3680 | 0.0466 | 0.4658 | 0.9317 | 0.9317 | [0.0, 0.9316776879314402] | [nan, 0.9316776879314402] |
| 0.0032 | 42.05 | 3700 | 0.0472 | 0.4666 | 0.9332 | 0.9332 | [0.0, 0.9331807875934351] | [nan, 0.9331807875934351] |
| 0.0023 | 42.27 | 3720 | 0.0470 | 0.4673 | 0.9347 | 0.9347 | [0.0, 0.9346827876945948] | [nan, 0.9346827876945948] |
| 0.003 | 42.5 | 3740 | 0.0474 | 0.4661 | 0.9321 | 0.9321 | [0.0, 0.9321482999689924] | [nan, 0.9321482999689924] |
| 0.0025 | 42.73 | 3760 | 0.0483 | 0.4656 | 0.9313 | 0.9313 | [0.0, 0.9312851447132016] | [nan, 0.9312851447132016] |
| 0.0019 | 42.95 | 3780 | 0.0471 | 0.4669 | 0.9338 | 0.9338 | [0.0, 0.9338130350737915] | [nan, 0.9338130350737915] |
| 0.0032 | 43.18 | 3800 | 0.0463 | 0.4682 | 0.9365 | 0.9365 | [0.0, 0.9364508815179218] | [nan, 0.9364508815179218] |
| 0.0026 | 43.41 | 3820 | 0.0484 | 0.4657 | 0.9315 | 0.9315 | [0.0, 0.9314698709335492] | [nan, 0.9314698709335492] |
| 0.0019 | 43.64 | 3840 | 0.0477 | 0.4673 | 0.9345 | 0.9345 | [0.0, 0.9345486412726757] | [nan, 0.9345486412726757] |
| 0.003 | 43.86 | 3860 | 0.0472 | 0.4688 | 0.9375 | 0.9375 | [0.0, 0.9375218537716036] | [nan, 0.9375218537716036] |
| 0.0025 | 44.09 | 3880 | 0.0473 | 0.4670 | 0.9340 | 0.9340 | [0.0, 0.9339999604158099] | [nan, 0.9339999604158099] |
| 0.0019 | 44.32 | 3900 | 0.0481 | 0.4670 | 0.9340 | 0.9340 | [0.0, 0.9340263498758595] | [nan, 0.9340263498758595] |
| 0.0024 | 44.55 | 3920 | 0.0478 | 0.4671 | 0.9343 | 0.9343 | [0.0, 0.9342561580904587] | [nan, 0.9342561580904587] |
| 0.0021 | 44.77 | 3940 | 0.0479 | 0.4677 | 0.9355 | 0.9355 | [0.0, 0.9354579780835535] | [nan, 0.9354579780835535] |
| 0.0019 | 45.0 | 3960 | 0.0479 | 0.4682 | 0.9363 | 0.9363 | [0.0, 0.9363112372918256] | [nan, 0.9363112372918256] |
| 0.0024 | 45.23 | 3980 | 0.0481 | 0.4681 | 0.9362 | 0.9362 | [0.0, 0.9362133763774748] | [nan, 0.9362133763774748] |
| 0.0023 | 45.45 | 4000 | 0.0497 | 0.4670 | 0.9340 | 0.9340 | [0.0, 0.933970272273254] | [nan, 0.933970272273254] |
| 0.0027 | 45.68 | 4020 | 0.0487 | 0.4671 | 0.9343 | 0.9343 | [0.0, 0.9342781493071667] | [nan, 0.9342781493071667] |
| 0.0023 | 45.91 | 4040 | 0.0477 | 0.4672 | 0.9344 | 0.9344 | [0.0, 0.9344309882632876] | [nan, 0.9344309882632876] |
| 0.003 | 46.14 | 4060 | 0.0485 | 0.4678 | 0.9356 | 0.9356 | [0.0, 0.9355877262621309] | [nan, 0.9355877262621309] |
| 0.0017 | 46.36 | 4080 | 0.0488 | 0.4677 | 0.9354 | 0.9354 | [0.0, 0.9353678140950504] | [nan, 0.9353678140950504] |
| 0.0022 | 46.59 | 4100 | 0.0481 | 0.4668 | 0.9337 | 0.9337 | [0.0, 0.9336634948001769] | [nan, 0.9336634948001769] |
| 0.0032 | 46.82 | 4120 | 0.0487 | 0.4676 | 0.9352 | 0.9352 | [0.0, 0.935249061524827] | [nan, 0.935249061524827] |
| 0.0021 | 47.05 | 4140 | 0.0483 | 0.4675 | 0.9351 | 0.9351 | [0.0, 0.9350885256428583] | [nan, 0.9350885256428583] |
| 0.002 | 47.27 | 4160 | 0.0486 | 0.4673 | 0.9347 | 0.9347 | [0.0, 0.9346530995520389] | [nan, 0.9346530995520389] |
| 0.0028 | 47.5 | 4180 | 0.0487 | 0.4675 | 0.9349 | 0.9349 | [0.0, 0.9349224919567125] | [nan, 0.9349224919567125] |
| 0.0026 | 47.73 | 4200 | 0.0482 | 0.4667 | 0.9335 | 0.9335 | [0.0, 0.9334589764847919] | [nan, 0.9334589764847919] |
| 0.0022 | 47.95 | 4220 | 0.0490 | 0.4670 | 0.9341 | 0.9341 | [0.0, 0.9340769296742881] | [nan, 0.9340769296742881] |
| 0.0027 | 48.18 | 4240 | 0.0489 | 0.4679 | 0.9358 | 0.9358 | [0.0, 0.9358153353550592] | [nan, 0.9358153353550592] |
| 0.0021 | 48.41 | 4260 | 0.0491 | 0.4676 | 0.9353 | 0.9353 | [0.0, 0.9352864465932307] | [nan, 0.9352864465932307] |
| 0.0024 | 48.64 | 4280 | 0.0491 | 0.4672 | 0.9344 | 0.9344 | [0.0, 0.9343804084648591] | [nan, 0.9343804084648591] |
| 0.0025 | 48.86 | 4300 | 0.0493 | 0.4675 | 0.9349 | 0.9349 | [0.0, 0.9349466822950914] | [nan, 0.9349466822950914] |
| 0.0022 | 49.09 | 4320 | 0.0484 | 0.4677 | 0.9354 | 0.9354 | [0.0, 0.9353623162908734] | [nan, 0.9353623162908734] |
| 0.0027 | 49.32 | 4340 | 0.0480 | 0.4677 | 0.9354 | 0.9354 | [0.0, 0.9354117965284665] | [nan, 0.9354117965284665] |
| 0.0018 | 49.55 | 4360 | 0.0498 | 0.4675 | 0.9350 | 0.9350 | [0.0, 0.9349983616543552] | [nan, 0.9349983616543552] |
| 0.0021 | 49.77 | 4380 | 0.0493 | 0.4672 | 0.9345 | 0.9345 | [0.0, 0.9344738711358683] | [nan, 0.9344738711358683] |
| 0.0017 | 50.0 | 4400 | 0.0496 | 0.4672 | 0.9344 | 0.9344 | [0.0, 0.9343870058298716] | [nan, 0.9343870058298716] |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| [
"unlabeled",
"foot"
] |
kiheh85202/yolo |
# DPT (large-sized model) fine-tuned on ADE20k
Dense Prediction Transformer (DPT) model trained on ADE20k for semantic segmentation. It was introduced in the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by Ranftl et al. and first released in [this repository](https://github.com/isl-org/DPT).
Disclaimer: The team releasing DPT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
DPT uses the Vision Transformer (ViT) as backbone and adds a neck + head on top for semantic segmentation.

## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=dpt) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import DPTFeatureExtractor, DPTForSemanticSegmentation
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-large-ade")
model = DPTForSemanticSegmentation.from_pretrained("Intel/dpt-large-ade")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
```
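As a hedged post-processing sketch (not part of the original card), the logits can be resized to the input resolution and stored as an index image with one ADE20k class id per pixel; the output filename is a placeholder:
```python
import numpy as np
import torch
from PIL import Image

# Continuing from the snippet above: upsample the logits, take the per-pixel
# argmax, and save the result as a grayscale PNG of class indices.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
prediction = upsampled.argmax(dim=1)[0].numpy().astype(np.uint8)
Image.fromarray(prediction).save("ade20k_prediction.png")
```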
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/dpt).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2103-13413,
author = {Ren{\'{e}} Ranftl and
Alexey Bochkovskiy and
Vladlen Koltun},
title = {Vision Transformers for Dense Prediction},
journal = {CoRR},
volume = {abs/2103.13413},
year = {2021},
url = {https://arxiv.org/abs/2103.13413},
eprinttype = {arXiv},
eprint = {2103.13413},
timestamp = {Wed, 07 Apr 2021 15:31:46 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2103-13413.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
nishita/segformer-b0-finetuned-segments-sidewalk-2 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6306
- Mean Iou: 0.1027
- Mean Accuracy: 0.1574
- Overall Accuracy: 0.6552
- Per Category Iou: [0.0, 0.40932069741697885, 0.6666047315185674, 0.0015527279135260222, 0.000557997451181134, 0.004734463745284192, 0.0, 0.00024311836753505628, 0.0, 0.0, 0.5448608416905849, 0.005644290758731727, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4689142754019952, 0.0, 0.00039031599380590526, 0.010175747938072128, 0.0, 0.0, 0.0, 0.0008842445754996234, 0.0, 0.0, 0.6689560919488968, 0.10178439680971307, 0.7089823411348399, 0.0, 0.0, 0.0, 0.0]
- Per Category Accuracy: [nan, 0.6798160901382586, 0.8601972223213155, 0.001563543652833044, 0.0005586801134972854, 0.004789605465686377, nan, 0.00024743825184288725, 0.0, 0.0, 0.8407289173400536, 0.012641370267169317, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7574833533176979, 0.0, 0.00039110009377117975, 0.013959849889225483, 0.0, nan, 0.0, 0.0009309900323061499, 0.0, 0.0, 0.9337304207449932, 0.12865528611713883, 0.8019892660736478, 0.0, 0.0, 0.0, 0.0]
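The two per-category arrays follow the order of the 35 class labels of this dataset (starting with `unlabeled`, `flat-road`, `flat-sidewalk`, ...). A small illustrative sketch of pairing names with scores, using only the first three values from above:
```python
# Illustrative only: the first three sidewalk classes paired with their
# per-category IoU values reported above.
labels = ["unlabeled", "flat-road", "flat-sidewalk"]
per_category_iou = [0.0, 0.40932069741697885, 0.6666047315185674]
for name, iou in zip(labels, per_category_iou):
    print(f"{name}: {iou:.3f}")
```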
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 2.8872 | 0.5 | 20 | 3.1018 | 0.0995 | 0.1523 | 0.6415 | [0.0, 0.3982872425364927, 0.6582689116809847, 0.0, 0.00044314555867048773, 0.019651883205738383, 0.0, 0.0006528617866575068, 0.0, 0.0, 0.4861235900758522, 0.003961411405960721, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4437814560942763, 0.0, 1.1600860783870164e-06, 0.019965880301918204, 0.0, 0.0, 0.0, 0.0074026601990928, 0.0, 0.0, 0.666238976894996, 0.13012673492067245, 0.6486315429686865, 0.0, 2.0656177918545805e-05, 0.0001944735843164534, 0.0] | [nan, 0.6263716501798601, 0.8841421548179447, 0.0, 0.00044410334445801165, 0.020659891877382746, nan, 0.0006731258604635891, 0.0, 0.0, 0.8403154629142631, 0.017886412063596133, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6324385775164868, 0.0, 1.160534402881839e-06, 0.06036834410935781, 0.0, nan, 0.0, 0.010232933175604348, 0.0, 0.0, 0.9320173945724101, 0.15828224740687694, 0.6884182010535304, 0.0, 2.3169780427714147e-05, 0.00019505205451704924, 0.0] |
| 2.6167 | 1.0 | 40 | 2.6306 | 0.1027 | 0.1574 | 0.6552 | [0.0, 0.40932069741697885, 0.6666047315185674, 0.0015527279135260222, 0.000557997451181134, 0.004734463745284192, 0.0, 0.00024311836753505628, 0.0, 0.0, 0.5448608416905849, 0.005644290758731727, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4689142754019952, 0.0, 0.00039031599380590526, 0.010175747938072128, 0.0, 0.0, 0.0, 0.0008842445754996234, 0.0, 0.0, 0.6689560919488968, 0.10178439680971307, 0.7089823411348399, 0.0, 0.0, 0.0, 0.0] | [nan, 0.6798160901382586, 0.8601972223213155, 0.001563543652833044, 0.0005586801134972854, 0.004789605465686377, nan, 0.00024743825184288725, 0.0, 0.0, 0.8407289173400536, 0.012641370267169317, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7574833533176979, 0.0, 0.00039110009377117975, 0.013959849889225483, 0.0, nan, 0.0, 0.0009309900323061499, 0.0, 0.0, 0.9337304207449932, 0.12865528611713883, 0.8019892660736478, 0.0, 0.0, 0.0, 0.0] |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
shaheen1998/segformer-b0-finetuned-segments-sidewalk-2 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
kasumi222/segformer-b0-finetuned-busigt2 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-busigt2
This model is a fine-tuned version of [nvidia/mit-b1](https://huggingface.co/nvidia/mit-b1) on the kasumi222/busigt5 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2904
- Mean Iou: 0.4458
- Mean Accuracy: 0.6980
- Overall Accuracy: 0.6969
- Per Category Iou: [0.0, 0.6551336334577343, 0.6821319425157643]
- Per Category Accuracy: [nan, 0.6913100552356098, 0.70464740289276]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00013
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:---------------------------------------------:|:---------------------------------------------:|
| 0.1095 | 0.77 | 20 | 0.2086 | 0.4674 | 0.7410 | 0.7419 | [0.0, 0.6978460673452154, 0.704309291034096] | [nan, 0.7461995349612959, 0.7357650020760118] |
| 0.1156 | 1.54 | 40 | 0.1980 | 0.4186 | 0.6721 | 0.6783 | [0.0, 0.6446507442278364, 0.6112330250576428] | [nan, 0.7089917293749448, 0.635300900559587] |
| 0.1039 | 2.31 | 60 | 0.1987 | 0.3706 | 0.5810 | 0.5757 | [0.0, 0.5345322994102119, 0.5773860979625277] | [nan, 0.5495831330265778, 0.6123860258526792] |
| 0.0672 | 3.08 | 80 | 0.1960 | 0.4099 | 0.6407 | 0.6439 | [0.0, 0.6194380206711395, 0.6103561290824698] | [nan, 0.6596136450596995, 0.6218662960315686] |
| 0.0992 | 3.85 | 100 | 0.1969 | 0.4201 | 0.6684 | 0.6695 | [0.0, 0.6251984513525223, 0.6351366565306488] | [nan, 0.675036447653713, 0.661700391303438] |
| 0.085 | 4.62 | 120 | 0.2075 | 0.4383 | 0.6997 | 0.6964 | [0.0, 0.6407576836532538, 0.6742246105299582] | [nan, 0.6804532655724195, 0.718889834811138] |
| 0.0561 | 5.38 | 140 | 0.2037 | 0.4401 | 0.7033 | 0.7071 | [0.0, 0.6545188689920507, 0.665783897448558] | [nan, 0.7263735810923504, 0.6801427547189345] |
| 0.0841 | 6.15 | 160 | 0.2119 | 0.3651 | 0.5891 | 0.5934 | [0.0, 0.5494216923933923, 0.5458843877102458] | [nan, 0.6146571565924632, 0.5634664881039569] |
| 0.1034 | 6.92 | 180 | 0.2371 | 0.3684 | 0.6193 | 0.6367 | [0.0, 0.6047004430113216, 0.5003660220404046] | [nan, 0.7229919452156935, 0.5156554415186935] |
| 0.0691 | 7.69 | 200 | 0.2266 | 0.4285 | 0.6991 | 0.7117 | [0.0, 0.6730686627556878, 0.6124621276402561] | [nan, 0.7742042834577688, 0.6240342690621383] |
| 0.0601 | 8.46 | 220 | 0.2106 | 0.4198 | 0.6674 | 0.6704 | [0.0, 0.6308213023617786, 0.6287108585057931] | [nan, 0.6851880267250091, 0.6497046776895365] |
| 0.0647 | 9.23 | 240 | 0.2234 | 0.4229 | 0.6746 | 0.6777 | [0.0, 0.6338885508159525, 0.6349404984513296] | [nan, 0.6928998204597407, 0.6563077167064432] |
| 0.0626 | 10.0 | 260 | 0.2322 | 0.3991 | 0.6540 | 0.6655 | [0.0, 0.6267222060572648, 0.570544858752452] | [nan, 0.7227113522422911, 0.5852409330048426] |
| 0.0604 | 10.77 | 280 | 0.2021 | 0.4660 | 0.7283 | 0.7288 | [0.0, 0.6990308020264264, 0.6989818924111941] | [nan, 0.7310753774760368, 0.7255727204344536] |
| 0.0573 | 11.54 | 300 | 0.2227 | 0.4513 | 0.7014 | 0.6951 | [0.0, 0.6488805486358904, 0.7049138389320693] | [nan, 0.6638350976679388, 0.7389417956785915] |
| 0.0474 | 12.31 | 320 | 0.2108 | 0.4781 | 0.7468 | 0.7371 | [0.0, 0.6761855871787447, 0.7580093480444655] | [nan, 0.6890590324447889, 0.8044529075728725] |
| 0.0805 | 13.08 | 340 | 0.2257 | 0.4325 | 0.6902 | 0.6940 | [0.0, 0.6550347525850334, 0.6423545682885212] | [nan, 0.7128733309133007, 0.6675247882412931] |
| 0.0545 | 13.85 | 360 | 0.2155 | 0.4609 | 0.7230 | 0.7167 | [0.0, 0.6629649481906197, 0.7196967289093881] | [nan, 0.6853650161390015, 0.7606061073292577] |
| 0.0628 | 14.62 | 380 | 0.2397 | 0.4150 | 0.6561 | 0.6611 | [0.0, 0.6377593821077956, 0.6070948266377257] | [nan, 0.6861969841160831, 0.6259296622984148] |
| 0.0576 | 15.38 | 400 | 0.2177 | 0.4661 | 0.7274 | 0.7272 | [0.0, 0.6936915190759695, 0.7046022162863222] | [nan, 0.7263017649886684, 0.7284576609239519] |
| 0.0808 | 16.15 | 420 | 0.2263 | 0.4248 | 0.6707 | 0.6740 | [0.0, 0.6438773235874202, 0.6304024210524071] | [nan, 0.6904172594111472, 0.6510802419847774] |
| 0.0458 | 16.92 | 440 | 0.2342 | 0.4006 | 0.6449 | 0.6525 | [0.0, 0.6208902028936363, 0.5809796433249929] | [nan, 0.6898132977523129, 0.6000533044931062] |
| 0.0477 | 17.69 | 460 | 0.2683 | 0.3789 | 0.6170 | 0.6232 | [0.0, 0.5741692028709614, 0.5625631837395161] | [nan, 0.6539633266945951, 0.5800762342358019] |
| 0.0501 | 18.46 | 480 | 0.2364 | 0.4280 | 0.6700 | 0.6675 | [0.0, 0.6223049989658083, 0.6617065588280534] | [nan, 0.6552936905824757, 0.6846169180090992] |
| 0.039 | 19.23 | 500 | 0.2378 | 0.4500 | 0.7052 | 0.6986 | [0.0, 0.6391919313721981, 0.7106968345576296] | [nan, 0.665670921345669, 0.7446979100013106] |
| 0.041 | 20.0 | 520 | 0.2477 | 0.4142 | 0.6612 | 0.6659 | [0.0, 0.6273087938535062, 0.6153514032911991] | [nan, 0.6890233206118104, 0.6333526433632052] |
| 0.0331 | 20.77 | 540 | 0.2488 | 0.4353 | 0.6814 | 0.6778 | [0.0, 0.6267198588955959, 0.6791644212315564] | [nan, 0.6603973431966015, 0.7023153313193633] |
| 0.0316 | 21.54 | 560 | 0.2468 | 0.4500 | 0.7025 | 0.6974 | [0.0, 0.6405571933079939, 0.7093320446678179] | [nan, 0.6719456081313097, 0.7331179494069875] |
| 0.0333 | 22.31 | 580 | 0.2477 | 0.4384 | 0.6899 | 0.6906 | [0.0, 0.6520329743081146, 0.6630535380613215] | [nan, 0.6937796658392771, 0.6860558089232162] |
| 0.0269 | 23.08 | 600 | 0.2603 | 0.4477 | 0.7018 | 0.6996 | [0.0, 0.6514078130357787, 0.6916101875532822] | [nan, 0.6888588892050193, 0.7147725032516842] |
| 0.033 | 23.85 | 620 | 0.2424 | 0.4499 | 0.7061 | 0.6986 | [0.0, 0.6447352671115818, 0.7048670621273163] | [nan, 0.6616131152687708, 0.750523958937919] |
| 0.0555 | 24.62 | 640 | 0.2471 | 0.4342 | 0.6830 | 0.6823 | [0.0, 0.636756610371055, 0.6659104633164847] | [nan, 0.6791280033749645, 0.6868014110272018] |
| 0.0583 | 25.38 | 660 | 0.2517 | 0.4434 | 0.6922 | 0.6879 | [0.0, 0.6386719513699022, 0.6913843141331489] | [nan, 0.6666374954624388, 0.7178391636040445] |
| 0.154 | 26.15 | 680 | 0.2535 | 0.4235 | 0.6597 | 0.6487 | [0.0, 0.5750726006840868, 0.695285501846172] | [nan, 0.5943477194462704, 0.7250215035171054] |
| 0.0292 | 26.92 | 700 | 0.2768 | 0.3679 | 0.6035 | 0.6135 | [0.0, 0.5756677002657924, 0.5279750019379379] | [nan, 0.6631412677700708, 0.5438385402498483] |
| 0.0288 | 27.69 | 720 | 0.2455 | 0.4676 | 0.7235 | 0.7188 | [0.0, 0.6761224569996822, 0.7268002447671437] | [nan, 0.6954373227898398, 0.7515024928661187] |
| 0.0321 | 28.46 | 740 | 0.2618 | 0.4324 | 0.6745 | 0.6691 | [0.0, 0.6201514037000198, 0.6770266576179022] | [nan, 0.6425218048210974, 0.7064552401951121] |
| 0.0309 | 29.23 | 760 | 0.2742 | 0.3944 | 0.6348 | 0.6407 | [0.0, 0.6008533572398147, 0.5822751024176394] | [nan, 0.6701804232440864, 0.599451426280657] |
| 0.0244 | 30.0 | 780 | 0.2667 | 0.4386 | 0.6819 | 0.6750 | [0.0, 0.6224630782821559, 0.693390305711243] | [nan, 0.6412495217165226, 0.7224713681082742] |
| 0.0642 | 30.77 | 800 | 0.2501 | 0.4581 | 0.7121 | 0.7096 | [0.0, 0.6722145834845955, 0.7021141065136746] | [nan, 0.6976031865943273, 0.7265325317101161] |
| 0.0481 | 31.54 | 820 | 0.2685 | 0.4137 | 0.6689 | 0.6766 | [0.0, 0.6379976664903103, 0.6031984018650592] | [nan, 0.7145859291453688, 0.6231961550279683] |
| 0.0311 | 32.31 | 840 | 0.2570 | 0.4284 | 0.6804 | 0.6832 | [0.0, 0.6426329055663264, 0.6425854743219936] | [nan, 0.6969752862342657, 0.6639063603053335] |
| 0.0389 | 33.08 | 860 | 0.2795 | 0.3918 | 0.6456 | 0.6590 | [0.0, 0.6244554318979076, 0.5508200429573112] | [nan, 0.7254125011037311, 0.5658618862962298] |
| 0.0282 | 33.85 | 880 | 0.2568 | 0.4242 | 0.6759 | 0.6775 | [0.0, 0.6282787291971401, 0.6442735430594793] | [nan, 0.6857107537747603, 0.6660974613184492] |
| 0.0245 | 34.62 | 900 | 0.2635 | 0.4503 | 0.7043 | 0.7037 | [0.0, 0.6658605581388065, 0.6850412042515538] | [nan, 0.7008356961354695, 0.7076892832638209] |
| 0.0315 | 35.38 | 920 | 0.2769 | 0.4443 | 0.7038 | 0.7055 | [0.0, 0.6610872730365329, 0.6718978137221756] | [nan, 0.7138198907060935, 0.6938235070611933] |
| 0.0283 | 36.15 | 940 | 0.2697 | 0.4392 | 0.6920 | 0.6907 | [0.0, 0.6405508279799802, 0.6769668218170816] | [nan, 0.6841213809883544, 0.6998318265269149] |
| 0.0257 | 36.92 | 960 | 0.2712 | 0.4562 | 0.7099 | 0.7082 | [0.0, 0.6720494469697227, 0.6964887349332429] | [nan, 0.6999154296702542, 0.7197879714666775] |
| 0.0188 | 37.69 | 980 | 0.2857 | 0.4300 | 0.6763 | 0.6771 | [0.0, 0.6397832221652129, 0.6501046733477022] | [nan, 0.6811686795451647, 0.6713607293464362] |
| 0.0259 | 38.46 | 1000 | 0.2812 | 0.4368 | 0.6851 | 0.6838 | [0.0, 0.6396217765000503, 0.6707000380577134] | [nan, 0.6772780519391329, 0.6929027930893589] |
| 0.0169 | 39.23 | 1020 | 0.2795 | 0.4542 | 0.7084 | 0.7054 | [0.0, 0.6598929743362643, 0.7028156867427239] | [nan, 0.6906225043413423, 0.7260947520404938] |
| 0.0296 | 40.0 | 1040 | 0.2834 | 0.4470 | 0.7015 | 0.7013 | [0.0, 0.6608002641121026, 0.6801095152287282] | [nan, 0.7006602764723773, 0.7022773353480376] |
| 0.0183 | 40.77 | 1060 | 0.2874 | 0.4386 | 0.6909 | 0.6903 | [0.0, 0.6432231900832152, 0.6726091072738183] | [nan, 0.6874296310104291, 0.694422081276136] |
| 0.0199 | 41.54 | 1080 | 0.2741 | 0.4594 | 0.7175 | 0.7154 | [0.0, 0.6721657359810768, 0.7061664449453671] | [nan, 0.7051238631569653, 0.7298866398455491] |
| 0.0162 | 42.31 | 1100 | 0.2883 | 0.4414 | 0.6921 | 0.6913 | [0.0, 0.6492915338226911, 0.6750215527697642] | [nan, 0.6870752597447193, 0.6971930338516571] |
| 0.0179 | 43.08 | 1120 | 0.2927 | 0.4425 | 0.6936 | 0.6927 | [0.0, 0.651082790586508, 0.6764744769464034] | [nan, 0.6884633119781804, 0.6987260886947118] |
| 0.0228 | 43.85 | 1140 | 0.2954 | 0.4273 | 0.6807 | 0.6841 | [0.0, 0.6418083531582984, 0.6399672125377378] | [nan, 0.7006630235364526, 0.6608033559804007] |
| 0.0164 | 44.62 | 1160 | 0.2954 | 0.4264 | 0.6740 | 0.6756 | [0.0, 0.6356634502412776, 0.6436554266840772] | [nan, 0.6834636553611899, 0.6644801545389767] |
| 0.0158 | 45.38 | 1180 | 0.2906 | 0.4433 | 0.6956 | 0.6951 | [0.0, 0.6536928350497138, 0.6760836624911459] | [nan, 0.6927067410990219, 0.6985223421818058] |
| 0.0198 | 46.15 | 1200 | 0.2881 | 0.4441 | 0.6969 | 0.6961 | [0.0, 0.6527988151987781, 0.6794425179962712] | [nan, 0.6919179412716945, 0.7019810769049473] |
| 0.018 | 46.92 | 1220 | 0.2961 | 0.4350 | 0.6844 | 0.6839 | [0.0, 0.6395287774950378, 0.6655290939553297] | [nan, 0.6815206961845243, 0.6872821426644097] |
| 0.0179 | 47.69 | 1240 | 0.2898 | 0.4459 | 0.6987 | 0.6982 | [0.0, 0.6581945977423002, 0.6796217960953337] | [nan, 0.6955130632707722, 0.701934270273604] |
| 0.0213 | 48.46 | 1260 | 0.2902 | 0.4469 | 0.7004 | 0.6998 | [0.0, 0.6595482974648909, 0.6811920247361126] | [nan, 0.6971510983350829, 0.7036303223269834] |
| 0.0227 | 49.23 | 1280 | 0.2888 | 0.4452 | 0.6967 | 0.6953 | [0.0, 0.6532891096762087, 0.6823149709479772] | [nan, 0.6885578894699147, 0.7047801134592744] |
| 0.0266 | 50.0 | 1300 | 0.2904 | 0.4458 | 0.6980 | 0.6969 | [0.0, 0.6551336334577343, 0.6821319425157643] | [nan, 0.6913100552356098, 0.70464740289276] |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| [
"unlabeled",
"maligno",
"benigno"
] |
lapix/segformer-b3-finetuned-ccagt-400-300 |
# SegFormer (b3-sized) model fine-tuned on CCAgT dataset
SegFormer model fine-tuned on CCAgT dataset at resolution 400x300. It was introduced in the paper [Semantic Segmentation for the Detection of Very Small Objects on Cervical Cell Samples Stained with the {AgNOR} Technique](https://doi.org/10.2139/ssrn.4126881) by [J. G. A. Amorim](https://huggingface.co/johnnv) et al.
This model was trained on a subset of the [CCAgT dataset](https://huggingface.co/datasets/lapix/CCAgT/), so evaluating this model on the full dataset available on the Hugging Face Hub will give results that differ from those reported in the paper. For more information about how the model was trained, read the paper.
Disclaimer: This model card has been written based on the SegFormer [model card](https://huggingface.co/nvidia/mit-b3/blob/main/README.md) by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
This repository contains the hierarchical Transformer encoder together with the decode head fine-tuned on CCAgT, so the checkpoint can be used directly for semantic segmentation or as a starting point for further fine-tuning.
## Intended uses & limitations
You can use the raw model for semantic segmentation, or fine-tune it further. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to segment an image of the CCAgT dataset:
```python
from torch import nn
from transformers import AutoFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/lapix/segformer-b3-finetuned-ccagt-400-300/resolve/main/sampleB.png"
image = Image.open(requests.get(url, stream=True).raw)
model = SegformerForSemanticSegmentation.from_pretrained("lapix/segformer-b3-finetuned-ccagt-400-300")
feature_extractor = AutoFeatureExtractor.from_pretrained("lapix/segformer-b3-finetuned-ccagt-400-300")
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
outputs = model(pixel_values=pixel_values)
logits = outputs.logits
# Rescale logits to original image size (400, 300)
upsampled_logits = nn.functional.interpolate(
logits,
    size=image.size[::-1], # (height, width)
mode="bilinear",
align_corners=False,
)
segmentation_mask = upsampled_logits.argmax(dim=1)[0]
print("Predicted mask:", segmentation_mask)
```
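To see which CCAgT classes the model actually predicted, one option is to count pixels per class. This is a minimal sketch that reuses `segmentation_mask` and `model` from the snippet above:
```python
import torch

# Count how many pixels were assigned to each predicted class and
# map the class ids to names via the id2label mapping in the model config.
class_ids, counts = torch.unique(segmentation_mask, return_counts=True)
for class_id, count in zip(class_ids.tolist(), counts.tolist()):
    print(f"{model.config.id2label[class_id]}: {count} pixels")
```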
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{AtkinsonSegmentationAgNORSSRN2022,
author= {Jo{\~{a}}o Gustavo Atkinson Amorim and Andr{\'{e}} Vict{\'{o}}ria Matias and Allan Cerentini and Fabiana Botelho de Miranda Onofre and Alexandre Sherlley Casimiro Onofre and Aldo von Wangenheim},
doi = {10.2139/ssrn.4126881},
url = {https://doi.org/10.2139/ssrn.4126881},
year = {2022},
publisher = {Elsevier {BV}},
title = {Semantic Segmentation for the Detection of Very Small Objects on Cervical Cell Samples Stained with the {AgNOR} Technique},
journal = {{SSRN} Electronic Journal}
}
```
| [
"background",
"nucleus",
"cluster",
"satellite",
"nucleus_out_of_focus",
"overlapped_nuclei",
"non_viable_nucleus",
"leukocyte_nucleus"
] |
zoheb/mit-b5-finetuned-sidewalk-semantic |
# SegFormer (b5-sized) model fine-tuned on sidewalk-semantic dataset.
SegFormer model fine-tuned on SegmentsAI [`sidewalk-semantic`](https://huggingface.co/datasets/segments/sidewalk-semantic). It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Code and Notebook
Here is how to use this model to segment an image from the sidewalk dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
url = "https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = SegformerFeatureExtractor.from_pretrained("zoheb/mit-b5-finetuned-sidewalk-semantic")
model = SegformerForSemanticSegmentation.from_pretrained("zoheb/mit-b5-finetuned-sidewalk-semantic")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits  # shape (batch_size, num_labels, height/4, width/4)
# each spatial location is assigned one of the 35 sidewalk classes
segmentation_mask = logits.argmax(dim=1)[0]
print("Predicted mask:", segmentation_mask)
```
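The logits come out at 1/4 of the input resolution. A minimal sketch, reusing `logits`, `image`, and `model` from the snippet above, for upsampling them to the original image size before taking the argmax:
```python
import torch

# Upsample the low-resolution logits to the original image size, then take the argmax.
upsampled_logits = torch.nn.functional.interpolate(
    logits,
    size=image.size[::-1],  # PIL gives (width, height); interpolate expects (height, width)
    mode="bilinear",
    align_corners=False,
)
full_res_mask = upsampled_logits.argmax(dim=1)[0]
print("Class of the top-left pixel:", model.config.id2label[full_res_mask[0, 0].item()])
```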
You can go through its detailed notebook [here](https://github.com/ZohebAbai/Deep-Learning-Projects/blob/master/09_HF_Image_Segmentation_using_Transformers.ipynb).
For more code examples, refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
## License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
## BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
matnun/segformer-b0-finetuned-segments-sidewalk-2 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9042
- Mean Iou: 0.1600
- Mean Accuracy: 0.1997
- Overall Accuracy: 0.7338
- Per Category Iou: [nan, 0.27359520957005035, 0.6563592089876799, 0.0, 0.23344374046535918, 0.0, nan, 0.0, 0.0, 0.0, 0.5539341917024321, nan, nan, nan, nan, 0.0, 0.0, nan, 0.6213519498256361, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.8012808797206368, 0.0, 0.8609473035107046, nan, 0.0, 0.0, 0.0]
- Per Category Accuracy: [nan, 0.38598740280061317, 0.9344800917343116, 0.0, 0.23402267811135147, 0.0, nan, 0.0, 0.0, 0.0, 0.6574569071869553, nan, nan, nan, nan, 0.0, 0.0, nan, 0.889953470705536, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.9339123774958169, 0.0, 0.9562267789312698, nan, 0.0, 0.0, 0.0]
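The per-category lists above follow the order of the model's `id2label` mapping (index 0 is `unlabeled`, 1 is `flat-road`, and so on); `nan` marks categories that do not appear in the evaluation split. A minimal sketch for pairing the values with their label names, with the IoU list abbreviated to its first few entries:
```python
import math
from transformers import AutoConfig

# Assumption: the reported per-category lists follow the id2label order in the config.
config = AutoConfig.from_pretrained("matnun/segformer-b0-finetuned-segments-sidewalk-2")
per_category_iou = [float("nan"), 0.2736, 0.6564, 0.0, 0.2334]  # abbreviated; full list above
for class_id, iou in enumerate(per_category_iou):
    label = config.id2label[class_id]
    if math.isnan(iou):
        print(f"{label}: n/a")
    else:
        print(f"{label}: {iou:.4f}")
```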
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
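For reference, a minimal `TrainingArguments` sketch mirroring the hyperparameters listed above (the output directory name is a placeholder; the Adam betas and epsilon listed above are the library defaults):
```python
from transformers import TrainingArguments

# Sketch only: reproduces the listed hyperparameters; everything else is left at its default.
training_args = TrainingArguments(
    output_dir="segformer-b0-finetuned-segments-sidewalk-2",  # placeholder
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```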
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 2.8419 | 0.42 | 20 | 3.2243 | 0.1239 | 0.1973 | 0.6992 | [0.0, 0.221283072298205, 0.6482498250140304, 0.0, 0.36607695456244177, 0.013827775204570018, nan, 1.0254201659129828e-05, 0.0, 0.0, 0.5416500682753081, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5339731316050166, 0.0, 0.0006440571922786744, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7498440701547007, 0.0, 0.7659222854515146, 0.0, 0.0, 0.0, 0.0] | [nan, 0.3346613609105567, 0.8582083544770268, 0.0, 0.5101472837243907, 0.015482685970504024, nan, 1.0366454154356502e-05, 0.0, 0.0, 0.6745826026281508, nan, nan, nan, nan, 0.0, 0.0, nan, 0.8093545247364923, 0.0, 0.0006458279514337381, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.9324806212895075, 0.0, 0.797418357423677, nan, 0.0, 0.0, 0.0] |
| 2.3662 | 0.83 | 40 | 2.5147 | 0.1402 | 0.1798 | 0.6989 | [nan, 0.19549119549985344, 0.6036027201962391, 0.0, 0.0019222772099991463, 0.000300503343099692, nan, 0.0, 0.0, 0.0, 0.47853978429259575, nan, nan, nan, nan, 0.0, 0.0, nan, 0.5820555774612892, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.7898452112422248, 0.0, 0.8521568687502872, nan, 0.0, 0.0, 0.0] | [nan, 0.25107981668136076, 0.9396577375184628, 0.0, 0.0019233683746435017, 0.0003025228242666523, nan, 0.0, 0.0, 0.0, 0.5513810659584686, nan, nan, nan, nan, 0.0, 0.0, nan, 0.8953553793561865, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.9300976130892274, 0.0, 0.9250758451014455, nan, 0.0, 0.0, 0.0] |
| 2.1745 | 1.25 | 60 | 2.0428 | 0.1485 | 0.1882 | 0.7162 | [nan, 0.24240648716131, 0.6262941164542789, 0.0, 0.04440846090507781, 0.0, nan, 0.0, 0.0, 0.0, 0.522913696330921, nan, nan, nan, nan, 0.0, 0.0, nan, 0.6194890050543631, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.7947837731119848, 0.0, 0.8609570537373858, nan, 0.0, 0.0, 0.0] | [nan, 0.3318909301752965, 0.9392945927202885, 0.0, 0.04443587164684973, 0.0, nan, 0.0, 0.0, 0.0, 0.6149676720993105, nan, nan, nan, nan, 0.0, 0.0, nan, 0.8836542113759377, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.9409947331534898, 0.0, 0.9509521157666382, nan, 0.0, 0.0, 0.0] |
| 1.986 | 1.67 | 80 | 1.9042 | 0.1600 | 0.1997 | 0.7338 | [nan, 0.27359520957005035, 0.6563592089876799, 0.0, 0.23344374046535918, 0.0, nan, 0.0, 0.0, 0.0, 0.5539341917024321, nan, nan, nan, nan, 0.0, 0.0, nan, 0.6213519498256361, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.8012808797206368, 0.0, 0.8609473035107046, nan, 0.0, 0.0, 0.0] | [nan, 0.38598740280061317, 0.9344800917343116, 0.0, 0.23402267811135147, 0.0, nan, 0.0, 0.0, 0.0, 0.6574569071869553, nan, nan, nan, nan, 0.0, 0.0, nan, 0.889953470705536, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.9339123774958169, 0.0, 0.9562267789312698, nan, 0.0, 0.0, 0.0] |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
irfan-noordin/segformer-b0-finetuned-segments-sidewalk-oct-22 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-oct-22
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9249
- Mean Iou: 0.1675
- Mean Accuracy: 0.2109
- Overall Accuracy: 0.7776
- Accuracy Unlabeled: nan
- Accuracy Flat-road: 0.8631
- Accuracy Flat-sidewalk: 0.9423
- Accuracy Flat-crosswalk: 0.0
- Accuracy Flat-cyclinglane: 0.4704
- Accuracy Flat-parkingdriveway: 0.1421
- Accuracy Flat-railtrack: 0.0
- Accuracy Flat-curb: 0.0061
- Accuracy Human-person: 0.0
- Accuracy Human-rider: 0.0
- Accuracy Vehicle-car: 0.8937
- Accuracy Vehicle-truck: 0.0
- Accuracy Vehicle-bus: 0.0
- Accuracy Vehicle-tramtrain: 0.0
- Accuracy Vehicle-motorcycle: 0.0
- Accuracy Vehicle-bicycle: 0.0
- Accuracy Vehicle-caravan: 0.0
- Accuracy Vehicle-cartrailer: 0.0
- Accuracy Construction-building: 0.9143
- Accuracy Construction-door: 0.0
- Accuracy Construction-wall: 0.0055
- Accuracy Construction-fenceguardrail: 0.0
- Accuracy Construction-bridge: 0.0
- Accuracy Construction-tunnel: nan
- Accuracy Construction-stairs: 0.0
- Accuracy Object-pole: 0.0
- Accuracy Object-trafficsign: 0.0
- Accuracy Object-trafficlight: 0.0
- Accuracy Nature-vegetation: 0.9291
- Accuracy Nature-terrain: 0.8710
- Accuracy Sky: 0.9207
- Accuracy Void-ground: 0.0
- Accuracy Void-dynamic: 0.0
- Accuracy Void-static: 0.0
- Accuracy Void-unclear: 0.0
- Iou Unlabeled: nan
- Iou Flat-road: 0.6127
- Iou Flat-sidewalk: 0.8192
- Iou Flat-crosswalk: 0.0
- Iou Flat-cyclinglane: 0.4256
- Iou Flat-parkingdriveway: 0.1262
- Iou Flat-railtrack: 0.0
- Iou Flat-curb: 0.0061
- Iou Human-person: 0.0
- Iou Human-rider: 0.0
- Iou Vehicle-car: 0.6655
- Iou Vehicle-truck: 0.0
- Iou Vehicle-bus: 0.0
- Iou Vehicle-tramtrain: 0.0
- Iou Vehicle-motorcycle: 0.0
- Iou Vehicle-bicycle: 0.0
- Iou Vehicle-caravan: 0.0
- Iou Vehicle-cartrailer: 0.0
- Iou Construction-building: 0.5666
- Iou Construction-door: 0.0
- Iou Construction-wall: 0.0054
- Iou Construction-fenceguardrail: 0.0
- Iou Construction-bridge: 0.0
- Iou Construction-tunnel: nan
- Iou Construction-stairs: 0.0
- Iou Object-pole: 0.0
- Iou Object-trafficsign: 0.0
- Iou Object-trafficlight: 0.0
- Iou Nature-vegetation: 0.7875
- Iou Nature-terrain: 0.6912
- Iou Sky: 0.8218
- Iou Void-ground: 0.0
- Iou Void-dynamic: 0.0
- Iou Void-static: 0.0
- Iou Void-unclear: 0.0
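For reference, the summary Mean Iou above is simply the nan-ignoring average of the per-category IoU values (categories marked `nan` are excluded). A minimal numpy sketch that reproduces it from the numbers listed above:
```python
import numpy as np

# Per-category IoU values in the order listed above (nan = category absent from the split).
per_category_iou = np.array([
    np.nan, 0.6127, 0.8192, 0.0, 0.4256, 0.1262, 0.0, 0.0061, 0.0, 0.0,
    0.6655, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5666, 0.0,
    0.0054, 0.0, 0.0, np.nan, 0.0, 0.0, 0.0, 0.0, 0.7875, 0.6912,
    0.8218, 0.0, 0.0, 0.0, 0.0,
])
print(f"Mean IoU: {np.nanmean(per_category_iou):.4f}")  # ~0.1675, matching the summary above
```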
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Flat-road | Accuracy Flat-sidewalk | Accuracy Flat-crosswalk | Accuracy Flat-cyclinglane | Accuracy Flat-parkingdriveway | Accuracy Flat-railtrack | Accuracy Flat-curb | Accuracy Human-person | Accuracy Human-rider | Accuracy Vehicle-car | Accuracy Vehicle-truck | Accuracy Vehicle-bus | Accuracy Vehicle-tramtrain | Accuracy Vehicle-motorcycle | Accuracy Vehicle-bicycle | Accuracy Vehicle-caravan | Accuracy Vehicle-cartrailer | Accuracy Construction-building | Accuracy Construction-door | Accuracy Construction-wall | Accuracy Construction-fenceguardrail | Accuracy Construction-bridge | Accuracy Construction-tunnel | Accuracy Construction-stairs | Accuracy Object-pole | Accuracy Object-trafficsign | Accuracy Object-trafficlight | Accuracy Nature-vegetation | Accuracy Nature-terrain | Accuracy Sky | Accuracy Void-ground | Accuracy Void-dynamic | Accuracy Void-static | Accuracy Void-unclear | Iou Unlabeled | Iou Flat-road | Iou Flat-sidewalk | Iou Flat-crosswalk | Iou Flat-cyclinglane | Iou Flat-parkingdriveway | Iou Flat-railtrack | Iou Flat-curb | Iou Human-person | Iou Human-rider | Iou Vehicle-car | Iou Vehicle-truck | Iou Vehicle-bus | Iou Vehicle-tramtrain | Iou Vehicle-motorcycle | Iou Vehicle-bicycle | Iou Vehicle-caravan | Iou Vehicle-cartrailer | Iou Construction-building | Iou Construction-door | Iou Construction-wall | Iou Construction-fenceguardrail | Iou Construction-bridge | Iou Construction-tunnel | Iou Construction-stairs | Iou Object-pole | Iou Object-trafficsign | Iou Object-trafficlight | Iou Nature-vegetation | Iou Nature-terrain | Iou Sky | Iou Void-ground | Iou Void-dynamic | Iou Void-static | Iou Void-unclear |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:------------------:|:----------------------:|:-----------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------:|:---------------------:|:--------------------:|:--------------------:|:----------------------:|:--------------------:|:--------------------------:|:---------------------------:|:------------------------:|:------------------------:|:---------------------------:|:------------------------------:|:--------------------------:|:--------------------------:|:------------------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|:--------------------:|:---------------------------:|:----------------------------:|:--------------------------:|:-----------------------:|:------------:|:--------------------:|:---------------------:|:--------------------:|:---------------------:|:-------------:|:-------------:|:-----------------:|:------------------:|:--------------------:|:------------------------:|:------------------:|:-------------:|:----------------:|:---------------:|:---------------:|:-----------------:|:---------------:|:---------------------:|:----------------------:|:-------------------:|:-------------------:|:----------------------:|:-------------------------:|:---------------------:|:---------------------:|:-------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:---------------:|:----------------------:|:-----------------------:|:---------------------:|:------------------:|:-------:|:---------------:|:----------------:|:---------------:|:----------------:|
| 2.832 | 0.05 | 20 | 3.1768 | 0.0700 | 0.1095 | 0.5718 | nan | 0.1365 | 0.9472 | 0.0019 | 0.0006 | 0.0004 | 0.0 | 0.0205 | 0.0 | 0.0 | 0.2074 | 0.0 | 0.0 | 0.0 | 0.0017 | 0.0001 | 0.0 | 0.0 | 0.7360 | 0.0 | 0.0235 | 0.0050 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9559 | 0.0429 | 0.5329 | 0.0010 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1260 | 0.5906 | 0.0016 | 0.0006 | 0.0004 | 0.0 | 0.0175 | 0.0 | 0.0 | 0.2006 | 0.0 | 0.0 | 0.0 | 0.0003 | 0.0001 | 0.0 | 0.0 | 0.3729 | 0.0 | 0.0209 | 0.0044 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5778 | 0.0408 | 0.4932 | 0.0009 | 0.0 | 0.0 | 0.0 |
| 2.3224 | 0.1 | 40 | 2.4686 | 0.0885 | 0.1321 | 0.6347 | nan | 0.5225 | 0.9260 | 0.0005 | 0.0001 | 0.0006 | 0.0 | 0.0113 | 0.0 | 0.0 | 0.3738 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8191 | 0.0 | 0.0263 | 0.0012 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9649 | 0.0701 | 0.6434 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4240 | 0.6602 | 0.0005 | 0.0001 | 0.0006 | 0.0 | 0.0109 | 0.0 | 0.0 | 0.3292 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3962 | 0.0 | 0.0260 | 0.0011 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6019 | 0.0617 | 0.5862 | 0.0001 | 0.0 | 0.0 | 0.0 |
| 2.1961 | 0.15 | 60 | 1.9886 | 0.0988 | 0.1431 | 0.6500 | nan | 0.5168 | 0.9319 | 0.0 | 0.0001 | 0.0000 | 0.0 | 0.0032 | 0.0 | 0.0 | 0.5761 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8325 | 0.0 | 0.0132 | 0.0003 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9612 | 0.1260 | 0.7625 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.3929 | 0.6721 | 0.0 | 0.0001 | 0.0000 | 0.0 | 0.0032 | 0.0 | 0.0 | 0.4609 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4375 | 0.0 | 0.0131 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6342 | 0.1108 | 0.6353 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.2964 | 0.2 | 80 | 2.0597 | 0.1066 | 0.1503 | 0.6682 | nan | 0.6577 | 0.9207 | 0.0 | 0.0000 | 0.0002 | 0.0 | 0.0044 | 0.0 | 0.0 | 0.5257 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8466 | 0.0 | 0.0094 | 0.0001 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9526 | 0.2022 | 0.8392 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4276 | 0.7093 | 0.0 | 0.0000 | 0.0002 | 0.0 | 0.0044 | 0.0 | 0.0 | 0.4438 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4488 | 0.0 | 0.0093 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6560 | 0.1833 | 0.7408 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.9751 | 0.25 | 100 | 1.7493 | 0.1186 | 0.1645 | 0.6944 | nan | 0.7604 | 0.9146 | 0.0 | 0.0004 | 0.0012 | 0.0 | 0.0016 | 0.0 | 0.0 | 0.7381 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8273 | 0.0 | 0.0026 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9636 | 0.3289 | 0.8909 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4904 | 0.7490 | 0.0 | 0.0004 | 0.0012 | 0.0 | 0.0016 | 0.0 | 0.0 | 0.5465 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4913 | 0.0 | 0.0026 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6542 | 0.2761 | 0.7004 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7626 | 0.3 | 120 | 1.5608 | 0.1295 | 0.1752 | 0.7118 | nan | 0.8168 | 0.9102 | 0.0 | 0.0002 | 0.0025 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.8094 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8362 | 0.0 | 0.0030 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9492 | 0.5677 | 0.8861 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4958 | 0.7592 | 0.0 | 0.0002 | 0.0025 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.5680 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5095 | 0.0 | 0.0030 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7082 | 0.4878 | 0.7392 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.32 | 0.35 | 140 | 1.5048 | 0.1323 | 0.1797 | 0.7181 | nan | 0.7883 | 0.9260 | 0.0 | 0.0000 | 0.0037 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.8711 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8590 | 0.0 | 0.0022 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9128 | 0.7088 | 0.8576 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5141 | 0.7598 | 0.0 | 0.0000 | 0.0037 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.5287 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5016 | 0.0 | 0.0022 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7458 | 0.5602 | 0.7499 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6464 | 0.4 | 160 | 1.3886 | 0.1342 | 0.1783 | 0.7217 | nan | 0.7859 | 0.9390 | 0.0 | 0.0 | 0.0059 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7401 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8508 | 0.0 | 0.0010 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9368 | 0.7223 | 0.9025 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5173 | 0.7561 | 0.0 | 0.0 | 0.0058 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5846 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5059 | 0.0 | 0.0010 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7366 | 0.5802 | 0.7401 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4757 | 0.45 | 180 | 1.3649 | 0.1367 | 0.1840 | 0.7255 | nan | 0.8587 | 0.9185 | 0.0 | 0.0001 | 0.0039 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8588 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8337 | 0.0 | 0.0014 | 0.0001 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9036 | 0.7809 | 0.9138 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5077 | 0.7693 | 0.0 | 0.0001 | 0.0039 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5980 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5264 | 0.0 | 0.0014 | 0.0001 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7521 | 0.6078 | 0.7438 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.0018 | 0.5 | 200 | 1.3118 | 0.1353 | 0.1839 | 0.7242 | nan | 0.7797 | 0.9457 | 0.0 | 0.0029 | 0.0057 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8345 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8509 | 0.0 | 0.0018 | 0.0001 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8704 | 0.8688 | 0.9069 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5321 | 0.7602 | 0.0 | 0.0029 | 0.0057 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6060 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5276 | 0.0 | 0.0018 | 0.0001 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7133 | 0.5551 | 0.7593 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4636 | 0.55 | 220 | 1.2729 | 0.1330 | 0.1797 | 0.7249 | nan | 0.8619 | 0.9203 | 0.0 | 0.0015 | 0.0067 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8903 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8514 | 0.0 | 0.0031 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9447 | 0.5448 | 0.9040 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5249 | 0.7844 | 0.0 | 0.0015 | 0.0066 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5735 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5336 | 0.0 | 0.0031 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7136 | 0.4869 | 0.7613 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.1856 | 0.6 | 240 | 1.2551 | 0.1382 | 0.1828 | 0.7274 | nan | 0.7497 | 0.9518 | 0.0 | 0.0005 | 0.0048 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8893 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8153 | 0.0 | 0.0048 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9475 | 0.7597 | 0.9107 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5097 | 0.7477 | 0.0 | 0.0005 | 0.0047 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6172 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5527 | 0.0 | 0.0048 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7293 | 0.6250 | 0.7703 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4577 | 0.65 | 260 | 1.1862 | 0.1387 | 0.1848 | 0.7304 | nan | 0.8842 | 0.9065 | 0.0 | 0.0001 | 0.0024 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8566 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8632 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9442 | 0.7313 | 0.9080 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5121 | 0.7833 | 0.0 | 0.0001 | 0.0024 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6297 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5381 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7437 | 0.6199 | 0.7486 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.0748 | 0.7 | 280 | 1.2000 | 0.1391 | 0.1846 | 0.7301 | nan | 0.7249 | 0.9690 | 0.0 | 0.0005 | 0.0064 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8909 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8656 | 0.0 | 0.0014 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8917 | 0.8362 | 0.9065 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5306 | 0.7403 | 0.0 | 0.0005 | 0.0063 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6223 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5491 | 0.0 | 0.0014 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7566 | 0.6061 | 0.7761 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.642 | 0.75 | 300 | 1.1452 | 0.1432 | 0.1880 | 0.7409 | nan | 0.8682 | 0.9389 | 0.0 | 0.0030 | 0.0062 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8605 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8759 | 0.0 | 0.0020 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9092 | 0.8515 | 0.8892 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5333 | 0.7905 | 0.0 | 0.0030 | 0.0062 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6393 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5418 | 0.0 | 0.0020 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7655 | 0.6551 | 0.7893 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.2166 | 0.8 | 320 | 1.1450 | 0.1388 | 0.1849 | 0.7391 | nan | 0.8516 | 0.9460 | 0.0 | 0.0043 | 0.0060 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.8944 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8803 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9283 | 0.6849 | 0.9071 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5584 | 0.7932 | 0.0 | 0.0043 | 0.0060 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.5844 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5259 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7548 | 0.5985 | 0.7549 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.1346 | 0.85 | 340 | 1.1215 | 0.1428 | 0.1887 | 0.7411 | nan | 0.7956 | 0.9551 | 0.0 | 0.0145 | 0.0098 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.8646 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8884 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9131 | 0.8828 | 0.9024 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5611 | 0.7721 | 0.0 | 0.0145 | 0.0097 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.6313 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5405 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7563 | 0.6337 | 0.7917 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8351 | 0.9 | 360 | 1.1012 | 0.1433 | 0.1896 | 0.7449 | nan | 0.8723 | 0.9432 | 0.0 | 0.0025 | 0.0114 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8822 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8662 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9213 | 0.8361 | 0.9201 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5472 | 0.7989 | 0.0 | 0.0025 | 0.0113 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6277 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5416 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7666 | 0.6674 | 0.7664 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.152 | 0.95 | 380 | 1.1045 | 0.1452 | 0.1891 | 0.7453 | nan | 0.8827 | 0.9332 | 0.0 | 0.0457 | 0.0124 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8396 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8848 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9399 | 0.7910 | 0.9107 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5462 | 0.7966 | 0.0 | 0.0457 | 0.0123 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6494 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5395 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7636 | 0.6627 | 0.7763 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.2062 | 1.0 | 400 | 1.0607 | 0.1469 | 0.1897 | 0.7482 | nan | 0.8192 | 0.9644 | 0.0 | 0.0944 | 0.0198 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8406 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8821 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9193 | 0.8054 | 0.9137 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5772 | 0.7742 | 0.0 | 0.0941 | 0.0195 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6414 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5360 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7740 | 0.6591 | 0.7710 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.0116 | 1.05 | 420 | 1.0503 | 0.1493 | 0.1950 | 0.7554 | nan | 0.8686 | 0.9478 | 0.0 | 0.2033 | 0.0295 | 0.0 | 0.0 | 0.0 | 0.0 | 0.9166 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8409 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9414 | 0.7667 | 0.9196 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5809 | 0.8022 | 0.0 | 0.1995 | 0.0287 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5916 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5517 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7628 | 0.6441 | 0.7652 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.009 | 1.1 | 440 | 1.0723 | 0.1529 | 0.1958 | 0.7553 | nan | 0.7797 | 0.9670 | 0.0 | 0.2214 | 0.0547 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.8978 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8927 | 0.0 | 0.0000 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9274 | 0.8016 | 0.9176 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5898 | 0.7717 | 0.0 | 0.2157 | 0.0526 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.6389 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5499 | 0.0 | 0.0000 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7760 | 0.6697 | 0.7818 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.1496 | 1.15 | 460 | 1.0417 | 0.1571 | 0.2017 | 0.7607 | nan | 0.7736 | 0.9645 | 0.0 | 0.3606 | 0.0669 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.8775 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8801 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9098 | 0.8906 | 0.9326 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6102 | 0.7737 | 0.0 | 0.3374 | 0.0634 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.6549 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5538 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7682 | 0.6437 | 0.7772 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4669 | 1.2 | 480 | 1.0161 | 0.1566 | 0.2024 | 0.7637 | nan | 0.8236 | 0.9531 | 0.0 | 0.3507 | 0.0584 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.9165 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8675 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9263 | 0.8597 | 0.9222 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6005 | 0.7983 | 0.0 | 0.3296 | 0.0556 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.6153 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5498 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7752 | 0.6654 | 0.7770 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.075 | 1.25 | 500 | 1.0124 | 0.1556 | 0.2000 | 0.7634 | nan | 0.8521 | 0.9499 | 0.0 | 0.3154 | 0.0410 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.8944 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8618 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9442 | 0.8133 | 0.9290 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5910 | 0.8068 | 0.0 | 0.2992 | 0.0394 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.6338 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5507 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7689 | 0.6697 | 0.7737 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.888 | 1.3 | 520 | 0.9797 | 0.1597 | 0.2028 | 0.7677 | nan | 0.8590 | 0.9472 | 0.0 | 0.3534 | 0.0469 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.8900 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8807 | 0.0 | 0.0005 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9379 | 0.8578 | 0.9187 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5908 | 0.8056 | 0.0 | 0.3311 | 0.0448 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.6598 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5676 | 0.0 | 0.0005 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7712 | 0.6912 | 0.8088 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8099 | 1.35 | 540 | 0.9760 | 0.1589 | 0.2026 | 0.7678 | nan | 0.8526 | 0.9534 | 0.0 | 0.3370 | 0.0313 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.9235 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8862 | 0.0 | 0.0005 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9252 | 0.8551 | 0.9206 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5954 | 0.8014 | 0.0 | 0.3188 | 0.0303 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.6382 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5706 | 0.0 | 0.0005 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7830 | 0.6934 | 0.8122 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.1998 | 1.4 | 560 | 0.9815 | 0.1578 | 0.2030 | 0.7631 | nan | 0.8956 | 0.9250 | 0.0 | 0.3267 | 0.0461 | 0.0 | 0.0004 | 0.0 | 0.0 | 0.8929 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8956 | 0.0 | 0.0002 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9206 | 0.8669 | 0.9275 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5656 | 0.8136 | 0.0 | 0.3102 | 0.0440 | 0.0 | 0.0004 | 0.0 | 0.0 | 0.6574 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5524 | 0.0 | 0.0002 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7894 | 0.6940 | 0.7818 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5591 | 1.45 | 580 | 0.9654 | 0.1618 | 0.2043 | 0.7698 | nan | 0.8198 | 0.9655 | 0.0 | 0.3715 | 0.0848 | 0.0 | 0.0003 | 0.0 | 0.0 | 0.8935 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8965 | 0.0 | 0.0013 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9146 | 0.8730 | 0.9198 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6182 | 0.7898 | 0.0 | 0.3467 | 0.0792 | 0.0 | 0.0003 | 0.0 | 0.0 | 0.6590 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5647 | 0.0 | 0.0013 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7871 | 0.6835 | 0.8101 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.861 | 1.5 | 600 | 0.9622 | 0.1607 | 0.2045 | 0.7689 | nan | 0.8163 | 0.9648 | 0.0 | 0.3780 | 0.0907 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.9187 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8714 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9229 | 0.8485 | 0.9361 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6180 | 0.7903 | 0.0 | 0.3541 | 0.0844 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.6307 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5609 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7854 | 0.6904 | 0.7884 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8335 | 1.55 | 620 | 0.9569 | 0.1598 | 0.2050 | 0.7686 | nan | 0.8421 | 0.9561 | 0.0 | 0.3493 | 0.0928 | 0.0 | 0.0012 | 0.0 | 0.0 | 0.9261 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8753 | 0.0 | 0.0013 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9172 | 0.8688 | 0.9335 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6069 | 0.8031 | 0.0 | 0.3306 | 0.0860 | 0.0 | 0.0012 | 0.0 | 0.0 | 0.6123 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5618 | 0.0 | 0.0013 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7851 | 0.6911 | 0.7950 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.9988 | 1.6 | 640 | 0.9337 | 0.1611 | 0.2050 | 0.7711 | nan | 0.8595 | 0.9538 | 0.0 | 0.3512 | 0.0928 | 0.0 | 0.0006 | 0.0 | 0.0 | 0.8962 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8854 | 0.0 | 0.0004 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9281 | 0.8594 | 0.9367 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6062 | 0.8105 | 0.0 | 0.3310 | 0.0868 | 0.0 | 0.0006 | 0.0 | 0.0 | 0.6565 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5596 | 0.0 | 0.0004 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7819 | 0.6958 | 0.7880 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.966 | 1.65 | 660 | 0.9322 | 0.1612 | 0.2051 | 0.7707 | nan | 0.8706 | 0.9494 | 0.0 | 0.3470 | 0.0997 | 0.0 | 0.0005 | 0.0 | 0.0 | 0.8905 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8722 | 0.0 | 0.0016 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9347 | 0.8652 | 0.9364 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5953 | 0.8136 | 0.0 | 0.3281 | 0.0922 | 0.0 | 0.0005 | 0.0 | 0.0 | 0.6654 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5696 | 0.0 | 0.0016 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7756 | 0.6890 | 0.7885 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.2154 | 1.7 | 680 | 0.9373 | 0.1611 | 0.2048 | 0.7710 | nan | 0.8448 | 0.9577 | 0.0 | 0.3717 | 0.1010 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.9173 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8613 | 0.0 | 0.0026 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9411 | 0.8371 | 0.9246 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6096 | 0.8056 | 0.0 | 0.3487 | 0.0930 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.6272 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5696 | 0.0 | 0.0026 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7762 | 0.6911 | 0.7931 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7979 | 1.75 | 700 | 0.9429 | 0.1622 | 0.2067 | 0.7717 | nan | 0.8496 | 0.9548 | 0.0 | 0.3821 | 0.1182 | 0.0 | 0.0013 | 0.0 | 0.0 | 0.9071 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8803 | 0.0 | 0.0043 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9202 | 0.8812 | 0.9204 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6104 | 0.8088 | 0.0 | 0.3583 | 0.1074 | 0.0 | 0.0013 | 0.0 | 0.0 | 0.6410 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5675 | 0.0 | 0.0043 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7784 | 0.6767 | 0.7994 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8366 | 1.8 | 720 | 0.9379 | 0.1645 | 0.2075 | 0.7745 | nan | 0.8359 | 0.9580 | 0.0 | 0.4130 | 0.1275 | 0.0 | 0.0021 | 0.0 | 0.0 | 0.8998 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8704 | 0.0 | 0.0088 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.9450 | 0.8617 | 0.9251 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6227 | 0.8035 | 0.0 | 0.3850 | 0.1147 | 0.0 | 0.0021 | 0.0 | 0.0 | 0.6544 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5777 | 0.0 | 0.0088 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.7682 | 0.6867 | 0.8055 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.0448 | 1.85 | 740 | 0.9419 | 0.1659 | 0.2087 | 0.7769 | nan | 0.8483 | 0.9532 | 0.0 | 0.4442 | 0.1387 | 0.0 | 0.0028 | 0.0 | 0.0 | 0.8986 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8865 | 0.0 | 0.0042 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9458 | 0.8442 | 0.9215 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6240 | 0.8122 | 0.0 | 0.4077 | 0.1237 | 0.0 | 0.0028 | 0.0 | 0.0 | 0.6529 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5700 | 0.0 | 0.0041 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7767 | 0.6938 | 0.8070 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.9737 | 1.9 | 760 | 0.9193 | 0.1664 | 0.2082 | 0.7772 | nan | 0.8420 | 0.9586 | 0.0 | 0.4353 | 0.1193 | 0.0 | 0.0010 | 0.0 | 0.0 | 0.9082 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8955 | 0.0 | 0.0079 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9385 | 0.8464 | 0.9190 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6232 | 0.8053 | 0.0 | 0.4022 | 0.1088 | 0.0 | 0.0010 | 0.0 | 0.0 | 0.6549 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5766 | 0.0 | 0.0079 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7843 | 0.7077 | 0.8180 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.0716 | 1.95 | 780 | 0.9170 | 0.1672 | 0.2098 | 0.7785 | nan | 0.8434 | 0.9539 | 0.0 | 0.4671 | 0.1283 | 0.0 | 0.0037 | 0.0 | 0.0 | 0.9012 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8984 | 0.0 | 0.0058 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9398 | 0.8661 | 0.9157 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6242 | 0.8106 | 0.0 | 0.4232 | 0.1156 | 0.0 | 0.0037 | 0.0 | 0.0 | 0.6631 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5777 | 0.0 | 0.0057 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7811 | 0.6920 | 0.8223 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4144 | 2.0 | 800 | 0.9249 | 0.1675 | 0.2109 | 0.7776 | nan | 0.8631 | 0.9423 | 0.0 | 0.4704 | 0.1421 | 0.0 | 0.0061 | 0.0 | 0.0 | 0.8937 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.9143 | 0.0 | 0.0055 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9291 | 0.8710 | 0.9207 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6127 | 0.8192 | 0.0 | 0.4256 | 0.1262 | 0.0 | 0.0061 | 0.0 | 0.0 | 0.6655 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5666 | 0.0 | 0.0054 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7875 | 0.6912 | 0.8218 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.6.1
- Tokenizers 0.12.1
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
google/deeplabv3_mobilenet_v2_1.0_513 |
# MobileNetV2 with DeepLabV3+
MobileNet V2 model pre-trained on PASCAL VOC at resolution 513x513. It was introduced in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. It was first released in [this repository](https://github.com/tensorflow/models/tree/master/research/deeplab).
Disclaimer: The team releasing MobileNet V2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):
> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.
The model in this repo adds a [DeepLabV3+](https://arxiv.org/abs/1802.02611) head to the MobileNetV2 backbone for semantic segmentation.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import AutoImageProcessor, AutoModelForSemanticSegmentation
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
preprocessor = AutoImageProcessor.from_pretrained("google/deeplabv3_mobilenet_v2_1.0_513")
model = AutoModelForSemanticSegmentation.from_pretrained("google/deeplabv3_mobilenet_v2_1.0_513")
inputs = preprocessor(images=image, return_tensors="pt")
outputs = model(**inputs)
predicted_mask = preprocessor.post_process_semantic_segmentation(outputs)
```
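Without `target_sizes`, the post-processing returns one mask per image at the model's output resolution. A minimal sketch, reusing the objects from the snippet above, that resizes the mask to the original image size and looks up a class name:
```python
# Resize the predicted mask back to the original image size and inspect one pixel.
predicted_mask = preprocessor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]  # (height, width)
)[0]
print("Mask shape:", predicted_mask.shape)
print("Class of the top-left pixel:", model.config.id2label[predicted_mask[0, 0].item()])
```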
Currently, both the image processor and the model support PyTorch.
### BibTeX entry and citation info
```bibtex
@inproceedings{deeplabv3plus2018,
title={Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation},
author={Liang-Chieh Chen and Yukun Zhu and George Papandreou and Florian Schroff and Hartwig Adam},
booktitle={ECCV},
year={2018}
}
```
| [
"background",
"aeroplane",
"bicycle",
"bird",
"boat",
"bottle",
"bus",
"car",
"cat",
"chair",
"cow",
"diningtable",
"dog",
"horse",
"motorbike",
"person",
"pottedplant",
"sheep",
"sofa",
"train",
"tvmonitor"
] |
shi-labs/oneformer_ade20k_swin_large |
# OneFormer
OneFormer model trained on the ADE20k dataset (large-sized version, Swin backbone). It was introduced in the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jain et al. and first released in [this repository](https://github.com/SHI-Labs/OneFormer).

## Model description
OneFormer is the first multi-task universal image segmentation framework. It needs to be trained only once with a single universal architecture, a single model, and on a single dataset, to outperform existing specialized models across semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and task-dynamic for inference, all with a single model.

## Intended uses & limitations
You can use this particular checkpoint for semantic, instance and panoptic segmentation. See the [model hub](https://huggingface.co/models?search=oneformer) to look for other fine-tuned versions on a different dataset.
### How to use
Here is how to use this model:
```python
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/datasets/shi-labs/oneformer_demo/resolve/main/ade20k.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
# Loading a single model for all three tasks
processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_large")
model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_ade20k_swin_large")
# Semantic Segmentation
semantic_inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
semantic_outputs = model(**semantic_inputs)
# pass through image_processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(semantic_outputs, target_sizes=[image.size[::-1]])[0]
# Instance Segmentation
instance_inputs = processor(images=image, task_inputs=["instance"], return_tensors="pt")
instance_outputs = model(**instance_inputs)
# pass through image_processor for postprocessing
predicted_instance_map = processor.post_process_instance_segmentation(instance_outputs, target_sizes=[image.size[::-1]])[0]["segmentation"]
# Panoptic Segmentation
panoptic_inputs = processor(images=image, task_inputs=["panoptic"], return_tensors="pt")
panoptic_outputs = model(**panoptic_inputs)
# pass through image_processor for postprocessing
predicted_panoptic_map = processor.post_process_panoptic_segmentation(panoptic_outputs, target_sizes=[image.size[::-1]])[0]["segmentation"]
```
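The panoptic post-processing also returns per-segment metadata. A minimal sketch, reusing `processor`, `model`, `image`, and `panoptic_outputs` from the snippet above, that lists the detected segments with their class names and scores:
```python
# Inspect the panoptic segments: each entry carries a segment id, a label id and a score.
panoptic_result = processor.post_process_panoptic_segmentation(
    panoptic_outputs, target_sizes=[image.size[::-1]]
)[0]
for segment in panoptic_result["segments_info"]:
    label = model.config.id2label[segment["label_id"]]
    print(f"segment {segment['id']}: {label} (score {segment['score']:.2f})")
```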
For more examples, please refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/oneformer).
### Citation
```bibtex
@article{jain2022oneformer,
title={{OneFormer: One Transformer to Rule Universal Image Segmentation}},
author={Jitesh Jain and Jiachen Li and MangTik Chiu and Ali Hassani and Nikita Orlov and Humphrey Shi},
journal={arXiv},
year={2022}
}
```
| [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road, route",
"bed",
"window ",
"grass",
"cabinet",
"sidewalk, pavement",
"person",
"earth, ground",
"door",
"table",
"mountain, mount",
"plant",
"curtain",
"chair",
"car",
"water",
"painting, picture",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock, stone",
"wardrobe, closet, press",
"lamp",
"tub",
"rail",
"cushion",
"base, pedestal, stand",
"box",
"column, pillar",
"signboard, sign",
"chest of drawers, chest, bureau, dresser",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator, icebox",
"grandstand, covered stand",
"path",
"stairs",
"runway",
"case, display case, showcase, vitrine",
"pool table, billiard table, snooker table",
"pillow",
"screen door, screen",
"stairway, staircase",
"river",
"bridge, span",
"bookcase",
"blind, screen",
"coffee table",
"toilet, can, commode, crapper, pot, potty, stool, throne",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm, palm tree",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel, hut, hutch, shack, shanty",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning, sunshade, sunblind",
"street lamp",
"booth",
"tv",
"plane",
"dirt track",
"clothes",
"pole",
"land, ground, soil",
"bannister, banister, balustrade, balusters, handrail",
"escalator, moving staircase, moving stairway",
"ottoman, pouf, pouffe, puff, hassock",
"bottle",
"buffet, counter, sideboard",
"poster, posting, placard, notice, bill, card",
"stage",
"van",
"ship",
"fountain",
"conveyer belt, conveyor belt, conveyer, conveyor, transporter",
"canopy",
"washer, automatic washer, washing machine",
"plaything, toy",
"pool",
"stool",
"barrel, cask",
"basket, handbasket",
"falls",
"tent",
"bag",
"minibike, motorbike",
"cradle",
"oven",
"ball",
"food, solid food",
"step, stair",
"tank, storage tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket, cover",
"sculpture",
"hood, exhaust hood",
"sconce",
"vase",
"traffic light",
"tray",
"trash can",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass, drinking glass",
"clock",
"flag"
] |
shi-labs/oneformer_cityscapes_swin_large |
# OneFormer
OneFormer model trained on the Cityscapes dataset (large-sized version, Swin backbone). It was introduced in the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jain et al. and first released in [this repository](https://github.com/SHI-Labs/OneFormer).

## Model description
OneFormer is the first multi-task universal image segmentation framework. It needs to be trained only once with a single universal architecture, a single model, and on a single dataset, to outperform existing specialized models across semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and task-dynamic for inference, all with a single model.

## Intended uses & limitations
You can use this particular checkpoint for semantic, instance and panoptic segmentation. See the [model hub](https://huggingface.co/models?search=oneformer) to look for other fine-tuned versions on a different dataset.
### How to use
Here is how to use this model:
```python
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/datasets/shi-labs/oneformer_demo/blob/main/cityscapes.png"
image = Image.open(requests.get(url, stream=True).raw)
# Loading a single model for all three tasks
processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_cityscapes_swin_large")
model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_cityscapes_swin_large")
# Semantic Segmentation
semantic_inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
semantic_outputs = model(**semantic_inputs)
# pass through image_processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(semantic_outputs, target_sizes=[image.size[::-1]])[0]
# Instance Segmentation
instance_inputs = processor(images=image, task_inputs=["instance"], return_tensors="pt")
instance_outputs = model(**instance_inputs)
# pass through image_processor for postprocessing
predicted_instance_map = processor.post_process_instance_segmentation(instance_outputs, target_sizes=[image.size[::-1]])[0]["segmentation"]
# Panoptic Segmentation
panoptic_inputs = processor(images=image, task_inputs=["panoptic"], return_tensors="pt")
panoptic_outputs = model(**panoptic_inputs)
# pass through image_processor for postprocessing
predicted_panoptic_map = processor.post_process_panoptic_segmentation(panoptic_outputs, target_sizes=[image.size[::-1]])[0]["segmentation"]
```
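As an optional sanity check (a minimal sketch, not part of the original card; assumes `matplotlib` is installed), the predicted semantic map can be visualized directly:
```python
import matplotlib.pyplot as plt

# predicted_semantic_map is a (height, width) tensor of Cityscapes class ids
plt.imshow(predicted_semantic_map.cpu().numpy())
plt.axis("off")
plt.show()
```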
For more examples, please refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/oneformer).
### Citation
```bibtex
@article{jain2022oneformer,
title={{OneFormer: One Transformer to Rule Universal Image Segmentation}},
author={Jitesh Jain and Jiachen Li and MangTik Chiu and Ali Hassani and Nikita Orlov and Humphrey Shi},
journal={arXiv},
year={2022}
}
```
| [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
shi-labs/oneformer_coco_swin_large |
# OneFormer
OneFormer model trained on the COCO dataset (large-sized version, Swin backbone). It was introduced in the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jain et al. and first released in [this repository](https://github.com/SHI-Labs/OneFormer).

## Model description
OneFormer is the first multi-task universal image segmentation framework. It needs to be trained only once with a single universal architecture, a single model, and on a single dataset, to outperform existing specialized models across semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and task-dynamic for inference, all with a single model.

## Intended uses & limitations
You can use this particular checkpoint for semantic, instance and panoptic segmentation. See the [model hub](https://huggingface.co/models?search=oneformer) to look for other fine-tuned versions on a different dataset.
### How to use
Here is how to use this model:
```python
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/datasets/shi-labs/oneformer_demo/blob/main/coco.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
# Loading a single model for all three tasks
processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_coco_swin_large")
model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_coco_swin_large")
# Semantic Segmentation
semantic_inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
semantic_outputs = model(**semantic_inputs)
# pass through image_processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(semantic_outputs, target_sizes=[image.size[::-1]])[0]
# Instance Segmentation
instance_inputs = processor(images=image, task_inputs=["instance"], return_tensors="pt")
instance_outputs = model(**instance_inputs)
# pass through image_processor for postprocessing
predicted_instance_map = processor.post_process_instance_segmentation(instance_outputs, target_sizes=[image.size[::-1]])[0]["segmentation"]
# Panoptic Segmentation
panoptic_inputs = processor(images=image, task_inputs=["panoptic"], return_tensors="pt")
panoptic_outputs = model(**panoptic_inputs)
# pass through image_processor for postprocessing
predicted_panoptic_map = processor.post_process_panoptic_segmentation(panoptic_outputs, target_sizes=[image.size[::-1]])[0]["segmentation"]
```
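Optionally (a hedged sketch, not part of the original card), the full panoptic output also carries per-segment metadata that can be mapped to COCO label names via `model.config.id2label`:
```python
# keep the whole postprocessed result dict instead of only the segmentation map
panoptic_result = processor.post_process_panoptic_segmentation(
    panoptic_outputs, target_sizes=[image.size[::-1]]
)[0]
for segment in panoptic_result["segments_info"]:
    label = model.config.id2label[segment["label_id"]]
    print(f"segment {segment['id']}: {label} (score {segment['score']:.3f})")
```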
For more examples, please refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/oneformer).
### Citation
```bibtex
@article{jain2022oneformer,
title={{OneFormer: One Transformer to Rule Universal Image Segmentation}},
author={Jitesh Jain and Jiachen Li and MangTik Chiu and Ali Hassani and Nikita Orlov and Humphrey Shi},
journal={arXiv},
year={2022}
}
```
| [
"person",
"bicycle",
"car",
"motorcycle",
"airplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"dining table",
"toilet",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush",
"banner",
"blanket",
"bridge",
"cardboard",
"counter",
"curtain",
"door-stuff",
"floor-wood",
"flower",
"fruit",
"gravel",
"house",
"light",
"mirror-stuff",
"net",
"pillow",
"platform",
"playingfield",
"railroad",
"river",
"road",
"roof",
"sand",
"sea",
"shelf",
"snow",
"stairs",
"tent",
"towel",
"wall-brick",
"wall-stone",
"wall-tile",
"wall-wood",
"water-other",
"window-blind",
"window-other",
"tree-merged",
"fence-merged",
"ceiling-merged",
"sky-other-merged",
"cabinet-merged",
"table-merged",
"floor-other-merged",
"pavement-merged",
"mountain-merged",
"grass-merged",
"dirt-merged",
"paper-merged",
"food-other-merged",
"building-other-merged",
"rock-merged",
"wall-other-merged",
"rug-merged"
] |
shi-labs/oneformer_ade20k_dinat_large |
# OneFormer
OneFormer model trained on the ADE20k dataset (large-sized version, Dinat backbone). It was introduced in the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jain et al. and first released in [this repository](https://github.com/SHI-Labs/OneFormer).

## Model description
OneFormer is the first multi-task universal image segmentation framework. It needs to be trained only once with a single universal architecture, a single model, and on a single dataset, to outperform existing specialized models across semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and task-dynamic for inference, all with a single model.

## Intended uses & limitations
You can use this particular checkpoint for semantic, instance and panoptic segmentation. See the [model hub](https://huggingface.co/models?search=oneformer) to look for other fine-tuned versions on a different dataset.
### How to use
Here is how to use this model:
```python
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/datasets/shi-labs/oneformer_demo/blob/main/ade20k.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
# Loading a single model for all three tasks
processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_dinat_large")
model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_ade20k_dinat_large")
# Semantic Segmentation
semantic_inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
semantic_outputs = model(**semantic_inputs)
# pass through image_processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(semantic_outputs, target_sizes=[image.size[::-1]])[0]
# Instance Segmentation
instance_inputs = processor(images=image, task_inputs=["instance"], return_tensors="pt")
instance_outputs = model(**instance_inputs)
# pass through image_processor for postprocessing
predicted_instance_map = processor.post_process_instance_segmentation(instance_outputs, target_sizes=[image.size[::-1]])[0]["segmentation"]
# Panoptic Segmentation
panoptic_inputs = processor(images=image, task_inputs=["panoptic"], return_tensors="pt")
panoptic_outputs = model(**panoptic_inputs)
# pass through image_processor for postprocessing
predicted_panoptic_map = processor.post_process_panoptic_segmentation(panoptic_outputs, target_sizes=[image.size[::-1]])[0]["segmentation"]
```
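As a small optional extension (a sketch, not part of the original card; assumes `torch` is installed), the ADE20k class names present in the semantic prediction can be listed via `model.config.id2label`:
```python
import torch

# print every ADE20k class name that appears in the predicted semantic map
for class_id in torch.unique(predicted_semantic_map):
    print(model.config.id2label[int(class_id)])
```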
For more examples, please refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/oneformer).
### Citation
```bibtex
@article{jain2022oneformer,
title={{OneFormer: One Transformer to Rule Universal Image Segmentation}},
author={Jitesh Jain and Jiachen Li and MangTik Chiu and Ali Hassani and Nikita Orlov and Humphrey Shi},
journal={arXiv},
year={2022}
}
```
| [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road, route",
"bed",
"window ",
"grass",
"cabinet",
"sidewalk, pavement",
"person",
"earth, ground",
"door",
"table",
"mountain, mount",
"plant",
"curtain",
"chair",
"car",
"water",
"painting, picture",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock, stone",
"wardrobe, closet, press",
"lamp",
"tub",
"rail",
"cushion",
"base, pedestal, stand",
"box",
"column, pillar",
"signboard, sign",
"chest of drawers, chest, bureau, dresser",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator, icebox",
"grandstand, covered stand",
"path",
"stairs",
"runway",
"case, display case, showcase, vitrine",
"pool table, billiard table, snooker table",
"pillow",
"screen door, screen",
"stairway, staircase",
"river",
"bridge, span",
"bookcase",
"blind, screen",
"coffee table",
"toilet, can, commode, crapper, pot, potty, stool, throne",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm, palm tree",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel, hut, hutch, shack, shanty",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning, sunshade, sunblind",
"street lamp",
"booth",
"tv",
"plane",
"dirt track",
"clothes",
"pole",
"land, ground, soil",
"bannister, banister, balustrade, balusters, handrail",
"escalator, moving staircase, moving stairway",
"ottoman, pouf, pouffe, puff, hassock",
"bottle",
"buffet, counter, sideboard",
"poster, posting, placard, notice, bill, card",
"stage",
"van",
"ship",
"fountain",
"conveyer belt, conveyor belt, conveyer, conveyor, transporter",
"canopy",
"washer, automatic washer, washing machine",
"plaything, toy",
"pool",
"stool",
"barrel, cask",
"basket, handbasket",
"falls",
"tent",
"bag",
"minibike, motorbike",
"cradle",
"oven",
"ball",
"food, solid food",
"step, stair",
"tank, storage tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket, cover",
"sculpture",
"hood, exhaust hood",
"sconce",
"vase",
"traffic light",
"tray",
"trash can",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass, drinking glass",
"clock",
"flag"
] |
shi-labs/oneformer_coco_dinat_large |
# OneFormer
OneFormer model trained on the COCO dataset (large-sized version, Dinat backbone). It was introduced in the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jain et al. and first released in [this repository](https://github.com/SHI-Labs/OneFormer).

## Model description
OneFormer is the first multi-task universal image segmentation framework. It needs to be trained only once with a single universal architecture, a single model, and on a single dataset, to outperform existing specialized models across semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and task-dynamic for inference, all with a single model.

## Intended uses & limitations
You can use this particular checkpoint for semantic, instance and panoptic segmentation. See the [model hub](https://huggingface.co/models?search=oneformer) to look for other fine-tuned versions on a different dataset.
### How to use
Here is how to use this model:
```python
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/datasets/shi-labs/oneformer_demo/blob/main/coco.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
# Loading a single model for all three tasks
processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_coco_dinat_large")
model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_coco_dinat_large")
# Semantic Segmentation
semantic_inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
semantic_outputs = model(**semantic_inputs)
# pass through image_processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(semantic_outputs, target_sizes=[image.size[::-1]])[0]
# Instance Segmentation
instance_inputs = processor(images=image, task_inputs=["instance"], return_tensors="pt")
instance_outputs = model(**instance_inputs)
# pass through image_processor for postprocessing
predicted_instance_map = processor.post_process_instance_segmentation(instance_outputs, target_sizes=[image.size[::-1]])[0]["segmentation"]
# Panoptic Segmentation
panoptic_inputs = processor(images=image, task_inputs=["panoptic"], return_tensors="pt")
panoptic_outputs = model(**panoptic_inputs)
# pass through image_processor for postprocessing
predicted_panoptic_map = processor.post_process_panoptic_segmentation(panoptic_outputs, target_sizes=[image.size[::-1]])[0]["segmentation"]
```
For more examples, please refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/oneformer).
### Citation
```bibtex
@article{jain2022oneformer,
title={{OneFormer: One Transformer to Rule Universal Image Segmentation}},
author={Jitesh Jain and Jiachen Li and MangTik Chiu and Ali Hassani and Nikita Orlov and Humphrey Shi},
journal={arXiv},
year={2022}
}
```
| [
"person",
"bicycle",
"car",
"motorcycle",
"airplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"dining table",
"toilet",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush",
"banner",
"blanket",
"bridge",
"cardboard",
"counter",
"curtain",
"door-stuff",
"floor-wood",
"flower",
"fruit",
"gravel",
"house",
"light",
"mirror-stuff",
"net",
"pillow",
"platform",
"playingfield",
"railroad",
"river",
"road",
"roof",
"sand",
"sea",
"shelf",
"snow",
"stairs",
"tent",
"towel",
"wall-brick",
"wall-stone",
"wall-tile",
"wall-wood",
"water-other",
"window-blind",
"window-other",
"tree-merged",
"fence-merged",
"ceiling-merged",
"sky-other-merged",
"cabinet-merged",
"table-merged",
"floor-other-merged",
"pavement-merged",
"mountain-merged",
"grass-merged",
"dirt-merged",
"paper-merged",
"food-other-merged",
"building-other-merged",
"rock-merged",
"wall-other-merged",
"rug-merged"
] |
shi-labs/oneformer_cityscapes_dinat_large |
# OneFormer
OneFormer model trained on the Cityscapes dataset (large-sized version, Dinat backbone). It was introduced in the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jain et al. and first released in [this repository](https://github.com/SHI-Labs/OneFormer).

## Model description
OneFormer is the first multi-task universal image segmentation framework. It needs to be trained only once with a single universal architecture, a single model, and on a single dataset, to outperform existing specialized models across semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and task-dynamic for inference, all with a single model.

## Intended uses & limitations
You can use this particular checkpoint for semantic, instance and panoptic segmentation. See the [model hub](https://huggingface.co/models?search=oneformer) to look for other fine-tuned versions on a different dataset.
### How to use
Here is how to use this model:
```python
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/datasets/shi-labs/oneformer_demo/blob/main/cityscapes.png"
image = Image.open(requests.get(url, stream=True).raw)
# Loading a single model for all three tasks
processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_cityscapes_dinat_large")
model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_cityscapes_dinat_large")
# Semantic Segmentation
semantic_inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
semantic_outputs = model(**semantic_inputs)
# pass through image_processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(semantic_outputs, target_sizes=[image.size[::-1]])[0]
# Instance Segmentation
instance_inputs = processor(images=image, task_inputs=["instance"], return_tensors="pt")
instance_outputs = model(**instance_inputs)
# pass through image_processor for postprocessing
predicted_instance_map = processor.post_process_instance_segmentation(instance_outputs, target_sizes=[image.size[::-1]])[0]["segmentation"]
# Panoptic Segmentation
panoptic_inputs = processor(images=image, task_inputs=["panoptic"], return_tensors="pt")
panoptic_outputs = model(**panoptic_inputs)
# pass through image_processor for postprocessing
predicted_panoptic_map = processor.post_process_panoptic_segmentation(panoptic_outputs, target_sizes=[image.size[::-1]])[0]["segmentation"]
```
For more examples, please refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/oneformer).
### Citation
```bibtex
@article{jain2022oneformer,
title={{OneFormer: One Transformer to Rule Universal Image Segmentation}},
author={Jitesh Jain and Jiachen Li and MangTik Chiu and Ali Hassani and Nikita Orlov and Humphrey Shi},
journal={arXiv},
year={2022}
}
```
| [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
shi-labs/oneformer_ade20k_swin_tiny |
# OneFormer
OneFormer model trained on the ADE20k dataset (tiny-sized version, Swin backbone). It was introduced in the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jain et al. and first released in [this repository](https://github.com/SHI-Labs/OneFormer).

## Model description
OneFormer is the first multi-task universal image segmentation framework. It needs to be trained only once with a single universal architecture, a single model, and on a single dataset, to outperform existing specialized models across semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and task-dynamic for inference, all with a single model.

## Intended uses & limitations
You can use this particular checkpoint for semantic, instance and panoptic segmentation. See the [model hub](https://huggingface.co/models?search=oneformer) to look for other fine-tuned versions on a different dataset.
### How to use
Here is how to use this model:
```python
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/datasets/shi-labs/oneformer_demo/blob/main/ade20k.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
# Loading a single model for all three tasks
processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
# Semantic Segmentation
semantic_inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
semantic_outputs = model(**semantic_inputs)
# pass through image_processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(semantic_outputs, target_sizes=[image.size[::-1]])[0]
# Instance Segmentation
instance_inputs = processor(images=image, task_inputs=["instance"], return_tensors="pt")
instance_outputs = model(**instance_inputs)
# pass through image_processor for postprocessing
predicted_instance_map = processor.post_process_instance_segmentation(instance_outputs, target_sizes=[image.size[::-1]])[0]["segmentation"]
# Panoptic Segmentation
panoptic_inputs = processor(images=image, task_inputs=["panoptic"], return_tensors="pt")
panoptic_outputs = model(**panoptic_inputs)
# pass through image_processor for postprocessing
predicted_panoptic_map = processor.post_process_panoptic_segmentation(panoptic_outputs, target_sizes=[image.size[::-1]])[0]["segmentation"]
```
For more examples, please refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/oneformer).
### Citation
```bibtex
@article{jain2022oneformer,
title={{OneFormer: One Transformer to Rule Universal Image Segmentation}},
author={Jitesh Jain and Jiachen Li and MangTik Chiu and Ali Hassani and Nikita Orlov and Humphrey Shi},
journal={arXiv},
year={2022}
}
```
| [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road, route",
"bed",
"window ",
"grass",
"cabinet",
"sidewalk, pavement",
"person",
"earth, ground",
"door",
"table",
"mountain, mount",
"plant",
"curtain",
"chair",
"car",
"water",
"painting, picture",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock, stone",
"wardrobe, closet, press",
"lamp",
"tub",
"rail",
"cushion",
"base, pedestal, stand",
"box",
"column, pillar",
"signboard, sign",
"chest of drawers, chest, bureau, dresser",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator, icebox",
"grandstand, covered stand",
"path",
"stairs",
"runway",
"case, display case, showcase, vitrine",
"pool table, billiard table, snooker table",
"pillow",
"screen door, screen",
"stairway, staircase",
"river",
"bridge, span",
"bookcase",
"blind, screen",
"coffee table",
"toilet, can, commode, crapper, pot, potty, stool, throne",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm, palm tree",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel, hut, hutch, shack, shanty",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning, sunshade, sunblind",
"street lamp",
"booth",
"tv",
"plane",
"dirt track",
"clothes",
"pole",
"land, ground, soil",
"bannister, banister, balustrade, balusters, handrail",
"escalator, moving staircase, moving stairway",
"ottoman, pouf, pouffe, puff, hassock",
"bottle",
"buffet, counter, sideboard",
"poster, posting, placard, notice, bill, card",
"stage",
"van",
"ship",
"fountain",
"conveyer belt, conveyor belt, conveyer, conveyor, transporter",
"canopy",
"washer, automatic washer, washing machine",
"plaything, toy",
"pool",
"stool",
"barrel, cask",
"basket, handbasket",
"falls",
"tent",
"bag",
"minibike, motorbike",
"cradle",
"oven",
"ball",
"food, solid food",
"step, stair",
"tank, storage tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket, cover",
"sculpture",
"hood, exhaust hood",
"sconce",
"vase",
"traffic light",
"tray",
"trash can",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass, drinking glass",
"clock",
"flag"
] |
mattmdjaga/segformer_b2_clothes | # Segformer B2 fine-tuned for clothes segmentation
SegFormer model fine-tuned on the [ATR dataset](https://github.com/lemondan/HumanParsing-Dataset) for clothes segmentation; it can also be used for human segmentation.
The dataset is available on the Hugging Face Hub as "mattmdjaga/human_parsing_dataset".
**[Training code](https://github.com/mattmdjaga/segformer_b2_clothes)**.
```python
from transformers import SegformerImageProcessor, AutoModelForSemanticSegmentation
from PIL import Image
import requests
import matplotlib.pyplot as plt
import torch.nn as nn
processor = SegformerImageProcessor.from_pretrained("mattmdjaga/segformer_b2_clothes")
model = AutoModelForSemanticSegmentation.from_pretrained("mattmdjaga/segformer_b2_clothes")
url = "https://plus.unsplash.com/premium_photo-1673210886161-bfcc40f54d1f?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8MXx8cGVyc29uJTIwc3RhbmRpbmd8ZW58MHx8MHx8&w=1000&q=80"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits.cpu()
upsampled_logits = nn.functional.interpolate(
logits,
size=image.size[::-1],
mode="bilinear",
align_corners=False,
)
pred_seg = upsampled_logits.argmax(dim=1)[0]
plt.imshow(pred_seg)
```
Labels: 0: "Background", 1: "Hat", 2: "Hair", 3: "Sunglasses", 4: "Upper-clothes", 5: "Skirt", 6: "Pants", 7: "Dress", 8: "Belt", 9: "Left-shoe", 10: "Right-shoe", 11: "Face", 12: "Left-leg", 13: "Right-leg", 14: "Left-arm", 15: "Right-arm", 16: "Bag", 17: "Scarf"
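Using the label indices above, a per-class binary mask can be sliced out of `pred_seg` (a minimal sketch, not part of the original card):
```python
# boolean (height, width) mask for "Upper-clothes" (label index 4)
upper_clothes_mask = pred_seg == 4
# masks can also be combined, e.g. both shoes (label indices 9 and 10)
shoes_mask = (pred_seg == 9) | (pred_seg == 10)
```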
### Evaluation
| Label Index | Label Name | Category Accuracy | Category IoU |
|:-------------:|:----------------:|:-----------------:|:------------:|
| 0 | Background | 0.99 | 0.99 |
| 1 | Hat | 0.73 | 0.68 |
| 2 | Hair | 0.91 | 0.82 |
| 3 | Sunglasses | 0.73 | 0.63 |
| 4 | Upper-clothes | 0.87 | 0.78 |
| 5 | Skirt | 0.76 | 0.65 |
| 6 | Pants | 0.90 | 0.84 |
| 7 | Dress | 0.74 | 0.55 |
| 8 | Belt | 0.35 | 0.30 |
| 9 | Left-shoe | 0.74 | 0.58 |
| 10 | Right-shoe | 0.75 | 0.60 |
| 11 | Face | 0.92 | 0.85 |
| 12 | Left-leg | 0.90 | 0.82 |
| 13 | Right-leg | 0.90 | 0.81 |
| 14 | Left-arm | 0.86 | 0.74 |
| 15 | Right-arm | 0.82 | 0.73 |
| 16 | Bag | 0.91 | 0.84 |
| 17 | Scarf | 0.63 | 0.29 |
Overall Evaluation Metrics:
- Evaluation Loss: 0.15
- Mean Accuracy: 0.80
- Mean IoU: 0.69
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
"background",
"hat",
"hair",
"sunglasses",
"upper-clothes",
"skirt",
"pants",
"dress",
"belt",
"left-shoe",
"right-shoe",
"face",
"left-leg",
"right-leg",
"left-arm",
"right-arm",
"bag",
"scarf"
] |
facebook/mask2former-swin-base-coco-instance |
# Mask2Former
Mask2Former model trained on COCO instance segmentation (base-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for instance segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on COCO instance segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-base-coco-instance")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-base-coco-instance")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_instance_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_instance_map = result["segmentation"]
```
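As an optional follow-up (a hedged sketch, not part of the original card), `result["segments_info"]` can be used to summarize how many instances of each COCO class were detected:
```python
from collections import Counter

# count the detected instances per class name
instance_labels = [model.config.id2label[seg["label_id"]] for seg in result["segments_info"]]
print(Counter(instance_labels))
```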
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"person",
"bicycle",
"car",
"motorbike",
"aeroplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"sofa",
"pottedplant",
"bed",
"diningtable",
"toilet",
"tvmonitor",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
turcuciprian/segformer-b0-finetuned-segments-sidewalk-2 |
# segformer-b0-finetuned-segments-sidewalk-2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2794
- Mean Iou: 0.0007
- Mean Accuracy: 0.0333
- Overall Accuracy: 0.0216
- Accuracy Unlabeled: nan
- Accuracy Flat-road: 0.0
- Accuracy Flat-sidewalk: 1.0
- Accuracy Flat-crosswalk: 0.0
- Accuracy Flat-cyclinglane: 0.0
- Accuracy Flat-parkingdriveway: nan
- Accuracy Flat-railtrack: 0.0
- Accuracy Flat-curb: 0.0
- Accuracy Human-person: 0.0
- Accuracy Human-rider: 0.0
- Accuracy Vehicle-car: 0.0
- Accuracy Vehicle-truck: 0.0
- Accuracy Vehicle-bus: nan
- Accuracy Vehicle-tramtrain: 0.0
- Accuracy Vehicle-motorcycle: 0.0
- Accuracy Vehicle-bicycle: 0.0
- Accuracy Vehicle-caravan: 0.0
- Accuracy Vehicle-cartrailer: 0.0
- Accuracy Construction-building: 0.0
- Accuracy Construction-door: 0.0
- Accuracy Construction-wall: 0.0
- Accuracy Construction-fenceguardrail: 0.0
- Accuracy Construction-bridge: nan
- Accuracy Construction-tunnel: 0.0
- Accuracy Construction-stairs: 0.0
- Accuracy Object-pole: 0.0
- Accuracy Object-trafficsign: 0.0
- Accuracy Object-trafficlight: 0.0
- Accuracy Nature-vegetation: 0.0
- Accuracy Nature-terrain: 0.0
- Accuracy Sky: 0.0
- Accuracy Void-ground: 0.0
- Accuracy Void-dynamic: 0.0
- Accuracy Void-static: 0.0
- Accuracy Void-unclear: nan
- Iou Unlabeled: nan
- Iou Flat-road: 0.0
- Iou Flat-sidewalk: 0.0216
- Iou Flat-crosswalk: 0.0
- Iou Flat-cyclinglane: 0.0
- Iou Flat-parkingdriveway: nan
- Iou Flat-railtrack: 0.0
- Iou Flat-curb: 0.0
- Iou Human-person: 0.0
- Iou Human-rider: 0.0
- Iou Vehicle-car: 0.0
- Iou Vehicle-truck: 0.0
- Iou Vehicle-bus: nan
- Iou Vehicle-tramtrain: 0.0
- Iou Vehicle-motorcycle: 0.0
- Iou Vehicle-bicycle: 0.0
- Iou Vehicle-caravan: 0.0
- Iou Vehicle-cartrailer: 0.0
- Iou Construction-building: 0.0
- Iou Construction-door: 0.0
- Iou Construction-wall: 0.0
- Iou Construction-fenceguardrail: 0.0
- Iou Construction-bridge: nan
- Iou Construction-tunnel: 0.0
- Iou Construction-stairs: 0.0
- Iou Object-pole: 0.0
- Iou Object-trafficsign: 0.0
- Iou Object-trafficlight: 0.0
- Iou Nature-vegetation: 0.0
- Iou Nature-terrain: 0.0
- Iou Sky: 0.0
- Iou Void-ground: 0.0
- Iou Void-dynamic: 0.0
- Iou Void-static: 0.0
- Iou Void-unclear: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 0.5
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
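```python
from transformers import TrainingArguments

# a minimal, untested sketch of the hyperparameters listed above;
# output_dir is a placeholder, and the learning rate of 0.5 is reproduced
# exactly as reported even though it is unusually high for fine-tuning
training_args = TrainingArguments(
    output_dir="segformer-b0-finetuned-segments-sidewalk-2",  # placeholder
    learning_rate=0.5,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=3,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer setup
```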
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Flat-road | Accuracy Flat-sidewalk | Accuracy Flat-crosswalk | Accuracy Flat-cyclinglane | Accuracy Flat-parkingdriveway | Accuracy Flat-railtrack | Accuracy Flat-curb | Accuracy Human-person | Accuracy Human-rider | Accuracy Vehicle-car | Accuracy Vehicle-truck | Accuracy Vehicle-bus | Accuracy Vehicle-tramtrain | Accuracy Vehicle-motorcycle | Accuracy Vehicle-bicycle | Accuracy Vehicle-caravan | Accuracy Vehicle-cartrailer | Accuracy Construction-building | Accuracy Construction-door | Accuracy Construction-wall | Accuracy Construction-fenceguardrail | Accuracy Construction-bridge | Accuracy Construction-tunnel | Accuracy Construction-stairs | Accuracy Object-pole | Accuracy Object-trafficsign | Accuracy Object-trafficlight | Accuracy Nature-vegetation | Accuracy Nature-terrain | Accuracy Sky | Accuracy Void-ground | Accuracy Void-dynamic | Accuracy Void-static | Accuracy Void-unclear | Iou Unlabeled | Iou Flat-road | Iou Flat-sidewalk | Iou Flat-crosswalk | Iou Flat-cyclinglane | Iou Flat-parkingdriveway | Iou Flat-railtrack | Iou Flat-curb | Iou Human-person | Iou Human-rider | Iou Vehicle-car | Iou Vehicle-truck | Iou Vehicle-bus | Iou Vehicle-tramtrain | Iou Vehicle-motorcycle | Iou Vehicle-bicycle | Iou Vehicle-caravan | Iou Vehicle-cartrailer | Iou Construction-building | Iou Construction-door | Iou Construction-wall | Iou Construction-fenceguardrail | Iou Construction-bridge | Iou Construction-tunnel | Iou Construction-stairs | Iou Object-pole | Iou Object-trafficsign | Iou Object-trafficlight | Iou Nature-vegetation | Iou Nature-terrain | Iou Sky | Iou Void-ground | Iou Void-dynamic | Iou Void-static | Iou Void-unclear |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:------------------:|:----------------------:|:-----------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------:|:---------------------:|:--------------------:|:--------------------:|:----------------------:|:--------------------:|:--------------------------:|:---------------------------:|:------------------------:|:------------------------:|:---------------------------:|:------------------------------:|:--------------------------:|:--------------------------:|:------------------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|:--------------------:|:---------------------------:|:----------------------------:|:--------------------------:|:-----------------------:|:------------:|:--------------------:|:---------------------:|:--------------------:|:---------------------:|:-------------:|:-------------:|:-----------------:|:------------------:|:--------------------:|:------------------------:|:------------------:|:-------------:|:----------------:|:---------------:|:---------------:|:-----------------:|:---------------:|:---------------------:|:----------------------:|:-------------------:|:-------------------:|:----------------------:|:-------------------------:|:---------------------:|:---------------------:|:-------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:---------------:|:----------------------:|:-----------------------:|:---------------------:|:------------------:|:-------:|:---------------:|:----------------:|:---------------:|:----------------:|
| 1.8968 | 0.07 | 20 | 12.4141 | 0.0007 | 0.0333 | 0.0216 | nan | 0.0 | 1.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0216 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan |
| 2.4115 | 0.15 | 40 | 5.4547 | 0.0010 | 0.0310 | 0.0045 | nan | 0.0 | 0.0487 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.2040 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0027 | 0.0 | 0.6733 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0218 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0036 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0020 | 0.0 | 0.0035 | 0.0 | 0.0 | 0.0 | nan |
| 2.7593 | 0.22 | 60 | 2.5077 | 0.0007 | 0.0333 | 0.0216 | nan | 0.0 | 1.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0216 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan |
| 2.447 | 0.3 | 80 | 2.3171 | 0.0007 | 0.0333 | 0.0216 | nan | 0.0 | 1.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0216 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan |
| 2.0908 | 0.37 | 100 | 2.2973 | 0.0007 | 0.0333 | 0.0216 | nan | 0.0 | 1.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0216 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan |
| 2.0462 | 0.45 | 120 | 2.3021 | 0.0007 | 0.0333 | 0.0216 | nan | 0.0 | 1.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0216 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan |
| 2.111 | 0.52 | 140 | 2.3397 | 0.0007 | 0.0333 | 0.0216 | nan | 0.0 | 1.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0216 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan |
| 2.4589 | 0.6 | 160 | 2.3188 | 0.0007 | 0.0333 | 0.0216 | nan | 0.0 | 1.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0216 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan |
| 2.7771 | 0.67 | 180 | 2.2841 | 0.0007 | 0.0333 | 0.0216 | nan | 0.0 | 1.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0216 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan |
| 2.3753 | 0.75 | 200 | 2.2824 | 0.0007 | 0.0333 | 0.0216 | nan | 0.0 | 1.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0216 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan |
| 2.4775 | 0.82 | 220 | 2.2890 | 0.0007 | 0.0333 | 0.0216 | nan | 0.0 | 1.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0216 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan |
| 2.1964 | 0.9 | 240 | 2.2757 | 0.0007 | 0.0333 | 0.0216 | nan | 0.0 | 1.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0216 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan |
| 2.1973 | 0.97 | 260 | 2.2794 | 0.0007 | 0.0333 | 0.0216 | nan | 0.0 | 1.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0216 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
optimum/segformer-b0-finetuned-ade-512-512 |
# SegFormer (b0-sized) model fine-tuned on ADE20k
SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to segment an image of the COCO 2017 dataset into one of the 150 ADE20k classes:
```python
from transformers import SegformerImageProcessor
from PIL import Image
import requests
from optimum.onnxruntime import ORTModelForSemanticSegmentation
image_processor = SegformerImageProcessor.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")
model = ORTModelForSemanticSegmentation.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
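To turn the low-resolution logits into a per-pixel class map, they can be upsampled back to the input size (a minimal sketch, not part of the original card; assumes `torch` is installed):
```python
import torch

upsampled_logits = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_seg = upsampled_logits.argmax(dim=1)[0]  # (height, width) map of ADE20k class ids
```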
Alternatively, you can use the `pipeline` API:
```python
from transformers import SegformerImageProcessor, pipeline
from optimum.onnxruntime import ORTModelForSemanticSegmentation
image_processor = SegformerImageProcessor.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")
model = ORTModelForSemanticSegmentation.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
pipe = pipeline("image-segmentation", model=model, feature_extractor=image_processor)
pred = pipe(url)
```
For more code examples, we refer to the [Optimum documentation](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/models).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
facebook/mask2former-swin-small-coco-instance |
# Mask2Former
Mask2Former model trained on COCO instance segmentation (small-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for instance segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on COCO instance segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-small-coco-instance")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-small-coco-instance")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_instance_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_instance_map = result["segmentation"]
```
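Optionally (a hedged sketch, not part of the original card), the postprocessing call accepts a `threshold` argument to keep only high-confidence instances:
```python
# keep only instances whose predicted score exceeds 0.9
filtered = processor.post_process_instance_segmentation(
    outputs, threshold=0.9, target_sizes=[image.size[::-1]]
)[0]
print(len(filtered["segments_info"]), "instances above the threshold")
```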
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"person",
"bicycle",
"car",
"motorbike",
"aeroplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"sofa",
"pottedplant",
"bed",
"diningtable",
"toilet",
"tvmonitor",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
mnosouhi96/segformer-b0-finetuned-segments-sidewalk-2 |
# segformer-b0-finetuned-segments-sidewalk-2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| [
"unlabeled",
"flat-road",
"flat-sidewalk",
"flat-crosswalk",
"flat-cyclinglane",
"flat-parkingdriveway",
"flat-railtrack",
"flat-curb",
"human-person",
"human-rider",
"vehicle-car",
"vehicle-truck",
"vehicle-bus",
"vehicle-tramtrain",
"vehicle-motorcycle",
"vehicle-bicycle",
"vehicle-caravan",
"vehicle-cartrailer",
"construction-building",
"construction-door",
"construction-wall",
"construction-fenceguardrail",
"construction-bridge",
"construction-tunnel",
"construction-stairs",
"object-pole",
"object-trafficsign",
"object-trafficlight",
"nature-vegetation",
"nature-terrain",
"sky",
"void-ground",
"void-dynamic",
"void-static",
"void-unclear"
] |
facebook/mask2former-swin-large-coco-instance |
# Mask2Former
Mask2Former model trained on COCO instance segmentation (large-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for instance segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on COCO instance segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-coco-instance")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-large-coco-instance")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_instance_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_instance_map = result["segmentation"]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"person",
"bicycle",
"car",
"motorbike",
"aeroplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"sofa",
"pottedplant",
"bed",
"diningtable",
"toilet",
"tvmonitor",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
facebook/mask2former-swin-base-coco-panoptic |
# Mask2Former
Mask2Former model trained on COCO panoptic segmentation (base-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on COCO panoptic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-base-coco-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-base-coco-panoptic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_panoptic_map = result["segmentation"]
```
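As a hedged illustration (not part of the original card), the panoptic map can be displayed directly and `segments_info` lists the recognized segments by name; this assumes matplotlib is installed:
```python
# Sketch: visualize the panoptic map and list its segments.
import matplotlib.pyplot as plt

plt.imshow(predicted_panoptic_map)  # each pixel value is a segment id
plt.axis("off")
plt.show()

for segment in result["segments_info"]:
    print(segment["id"], model.config.id2label[segment["label_id"]], round(segment["score"], 2))
```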
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"person",
"bicycle",
"car",
"motorcycle",
"airplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"dining table",
"toilet",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush",
"banner",
"blanket",
"bridge",
"cardboard",
"counter",
"curtain",
"door-stuff",
"floor-wood",
"flower",
"fruit",
"gravel",
"house",
"light",
"mirror-stuff",
"net",
"pillow",
"platform",
"playingfield",
"railroad",
"river",
"road",
"roof",
"sand",
"sea",
"shelf",
"snow",
"stairs",
"tent",
"towel",
"wall-brick",
"wall-stone",
"wall-tile",
"wall-wood",
"water-other",
"window-blind",
"window-other",
"tree-merged",
"fence-merged",
"ceiling-merged",
"sky-other-merged",
"cabinet-merged",
"table-merged",
"floor-other-merged",
"pavement-merged",
"mountain-merged",
"grass-merged",
"dirt-merged",
"paper-merged",
"food-other-merged",
"building-other-merged",
"rock-merged",
"wall-other-merged",
"rug-merged"
] |
facebook/mask2former-swin-large-coco-panoptic |
# Mask2Former
Mask2Former model trained on COCO panoptic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on COCO panoptic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-coco-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-large-coco-panoptic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_panoptic_map = result["segmentation"]
```
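For quick experiments, the same checkpoint can also be driven through the high-level `pipeline` API. This is only a sketch and assumes the generic `image-segmentation` pipeline applies this checkpoint's panoptic post-processing by default:
```python
from transformers import pipeline

# Sketch: one-call segmentation with the generic image-segmentation pipeline.
segmenter = pipeline("image-segmentation", model="facebook/mask2former-swin-large-coco-panoptic")
segments = segmenter("http://images.cocodataset.org/val2017/000000039769.jpg")
for segment in segments:
    print(segment["label"], segment["score"])  # each entry also carries a PIL "mask"
```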
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"person",
"bicycle",
"car",
"motorcycle",
"airplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"dining table",
"toilet",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush",
"banner",
"blanket",
"bridge",
"cardboard",
"counter",
"curtain",
"door-stuff",
"floor-wood",
"flower",
"fruit",
"gravel",
"house",
"light",
"mirror-stuff",
"net",
"pillow",
"platform",
"playingfield",
"railroad",
"river",
"road",
"roof",
"sand",
"sea",
"shelf",
"snow",
"stairs",
"tent",
"towel",
"wall-brick",
"wall-stone",
"wall-tile",
"wall-wood",
"water-other",
"window-blind",
"window-other",
"tree-merged",
"fence-merged",
"ceiling-merged",
"sky-other-merged",
"cabinet-merged",
"table-merged",
"floor-other-merged",
"pavement-merged",
"mountain-merged",
"grass-merged",
"dirt-merged",
"paper-merged",
"food-other-merged",
"building-other-merged",
"rock-merged",
"wall-other-merged",
"rug-merged"
] |
facebook/mask2former-swin-small-coco-panoptic |
# Mask2Former
Mask2Former model trained on COCO panoptic segmentation (small-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on COCO panoptic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-small-coco-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-small-coco-panoptic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_panoptic_map = result["segmentation"]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"person",
"bicycle",
"car",
"motorcycle",
"airplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"dining table",
"toilet",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush",
"banner",
"blanket",
"bridge",
"cardboard",
"counter",
"curtain",
"door-stuff",
"floor-wood",
"flower",
"fruit",
"gravel",
"house",
"light",
"mirror-stuff",
"net",
"pillow",
"platform",
"playingfield",
"railroad",
"river",
"road",
"roof",
"sand",
"sea",
"shelf",
"snow",
"stairs",
"tent",
"towel",
"wall-brick",
"wall-stone",
"wall-tile",
"wall-wood",
"water-other",
"window-blind",
"window-other",
"tree-merged",
"fence-merged",
"ceiling-merged",
"sky-other-merged",
"cabinet-merged",
"table-merged",
"floor-other-merged",
"pavement-merged",
"mountain-merged",
"grass-merged",
"dirt-merged",
"paper-merged",
"food-other-merged",
"building-other-merged",
"rock-merged",
"wall-other-merged",
"rug-merged"
] |
facebook/mask2former-swin-tiny-coco-panoptic |
# Mask2Former
Mask2Former model trained on COCO panoptic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on COCO panoptic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-tiny-coco-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-tiny-coco-panoptic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_panoptic_map = result["segmentation"]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"person",
"bicycle",
"car",
"motorcycle",
"airplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"dining table",
"toilet",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush",
"banner",
"blanket",
"bridge",
"cardboard",
"counter",
"curtain",
"door-stuff",
"floor-wood",
"flower",
"fruit",
"gravel",
"house",
"light",
"mirror-stuff",
"net",
"pillow",
"platform",
"playingfield",
"railroad",
"river",
"road",
"roof",
"sand",
"sea",
"shelf",
"snow",
"stairs",
"tent",
"towel",
"wall-brick",
"wall-stone",
"wall-tile",
"wall-wood",
"water-other",
"window-blind",
"window-other",
"tree-merged",
"fence-merged",
"ceiling-merged",
"sky-other-merged",
"cabinet-merged",
"table-merged",
"floor-other-merged",
"pavement-merged",
"mountain-merged",
"grass-merged",
"dirt-merged",
"paper-merged",
"food-other-merged",
"building-other-merged",
"rock-merged",
"wall-other-merged",
"rug-merged"
] |
facebook/mask2former-swin-tiny-cityscapes-panoptic |
# Mask2Former
Mask2Former model trained on Cityscapes panoptic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on Cityscapes panoptic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-tiny-cityscapes-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-tiny-cityscapes-panoptic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_panoptic_map = result["segmentation"]
```
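The snippet above runs on CPU; as a minimal sketch (not part of the original card), the same forward pass can be moved to a GPU when one is available:
```python
# Sketch: repeat the forward pass on a GPU if available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
inputs = {k: v.to(device) for k, v in inputs.items()}
with torch.no_grad():
    outputs = model(**inputs)
# post-processing works the same way on the GPU outputs
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```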
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
facebook/mask2former-swin-large-cityscapes-panoptic |
# Mask2Former
Mask2Former model trained on Cityscapes panoptic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on Cityscapes panoptic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-cityscapes-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-large-cityscapes-panoptic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_panoptic_map = result["segmentation"]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
facebook/mask2former-swin-small-cityscapes-panoptic |
# Mask2Former
Mask2Former model trained on Cityscapes panoptic segmentation (small-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on Cityscapes panoptic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-small-cityscapes-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-small-cityscapes-panoptic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_panoptic_map = result["segmentation"]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
facebook/mask2former-swin-large-cityscapes-semantic |
# Mask2Former
Mask2Former model trained on Cityscapes semantic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on Cityscapes semantic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-cityscapes-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-large-cityscapes-semantic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
```
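As a hedged follow-up (not part of the original card), the predicted semantic map is a `(height, width)` tensor of class indices, so the classes present in the image can be read off via `model.config.id2label`:
```python
# Sketch: report which Cityscapes classes appear in the prediction and how many pixels each covers.
for label_id in torch.unique(predicted_semantic_map).tolist():
    pixels = (predicted_semantic_map == label_id).sum().item()
    print(f"{model.config.id2label[label_id]}: {pixels} pixels")
```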
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"road",
"sidewalk",
"building",
"wall",
"fence",
"pole",
"traffic light",
"traffic sign",
"vegetation",
"terrain",
"sky",
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
facebook/mask2former-swin-large-mapillary-vistas-semantic |
# Mask2Former
Mask2Former model trained on Mapillary Vistas semantic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on Mapillary Vistas semantic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-mapillary-vistas-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-large-mapillary-vistas-semantic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"bird",
"ground animal",
"curb",
"fence",
"guard rail",
"barrier",
"wall",
"bike lane",
"crosswalk - plain",
"curb cut",
"parking",
"pedestrian area",
"rail track",
"road",
"service lane",
"sidewalk",
"bridge",
"building",
"tunnel",
"person",
"bicyclist",
"motorcyclist",
"other rider",
"lane marking - crosswalk",
"lane marking - general",
"mountain",
"sand",
"sky",
"snow",
"terrain",
"vegetation",
"water",
"banner",
"bench",
"bike rack",
"billboard",
"catch basin",
"cctv camera",
"fire hydrant",
"junction box",
"mailbox",
"manhole",
"phone booth",
"pothole",
"street light",
"pole",
"traffic sign frame",
"utility pole",
"traffic light",
"traffic sign (back)",
"traffic sign (front)",
"trash can",
"bicycle",
"boat",
"bus",
"car",
"caravan",
"motorcycle",
"on rails",
"other vehicle",
"trailer",
"truck",
"wheeled slow",
"car mount",
"ego vehicle"
] |
facebook/mask2former-swin-large-mapillary-vistas-panoptic |
# Mask2Former
Mask2Former model trained on Mapillary Vistas panoptic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on Mapillary Vistas panoptic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-mapillary-vistas-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-large-mapillary-vistas-panoptic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_panoptic_map = result["segmentation"]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"bird",
"ground animal",
"curb",
"fence",
"guard rail",
"barrier",
"wall",
"bike lane",
"crosswalk - plain",
"curb cut",
"parking",
"pedestrian area",
"rail track",
"road",
"service lane",
"sidewalk",
"bridge",
"building",
"tunnel",
"person",
"bicyclist",
"motorcyclist",
"other rider",
"lane marking - crosswalk",
"lane marking - general",
"mountain",
"sand",
"sky",
"snow",
"terrain",
"vegetation",
"water",
"banner",
"bench",
"bike rack",
"billboard",
"catch basin",
"cctv camera",
"fire hydrant",
"junction box",
"mailbox",
"manhole",
"phone booth",
"pothole",
"street light",
"pole",
"traffic sign frame",
"utility pole",
"traffic light",
"traffic sign (back)",
"traffic sign (front)",
"trash can",
"bicycle",
"boat",
"bus",
"car",
"caravan",
"motorcycle",
"on rails",
"other vehicle",
"trailer",
"truck",
"wheeled slow",
"car mount",
"ego vehicle"
] |
facebook/mask2former-swin-small-cityscapes-instance |
# Mask2Former
Mask2Former model trained on Cityscapes instance segmentation (small-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for instance segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on Cityscapes instance segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-small-cityscapes-instance")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-small-cityscapes-instance")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_instance_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_instance_map = result["segmentation"]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
facebook/mask2former-swin-tiny-cityscapes-instance |
# Mask2Former
Mask2Former model trained on Cityscapes instance segmentation (tiny-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for instance segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on Cityscapes instance segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-tiny-cityscapes-instance")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-tiny-cityscapes-instance")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_instance_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_instance_map = result["segmentation"]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"person",
"rider",
"car",
"truck",
"bus",
"train",
"motorcycle",
"bicycle"
] |
facebook/mask2former-swin-large-ade-panoptic |
# Mask2Former
Mask2Former model trained on ADE20k panoptic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on ADE20k panoptic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-ade-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-large-ade-panoptic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_panoptic_map = result["segmentation"]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
facebook/mask2former-swin-base-ade-semantic |
# Mask2Former
Mask2Former model trained on ADE20k semantic segmentation (base-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on ADE20k semantic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-base-ade-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-base-ade-semantic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
facebook/mask2former-swin-base-IN21k-ade-semantic |
# Mask2Former
Mask2Former model trained on ADE20k semantic segmentation (base-IN21k version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency, by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on ADE20k semantic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-base-IN21k-ade-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-base-IN21k-ade-semantic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
```
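The integer ids in `predicted_semantic_map` correspond to the ADE20k classes listed below and can be turned into human-readable names via the model config. A small follow-up sketch, reusing the variables from the snippet above:

```python
import torch

# map each predicted label id to its ADE20k class name and count its pixels
id2label = model.config.id2label
label_ids, counts = torch.unique(predicted_semantic_map, return_counts=True)
for label_id, count in zip(label_ids.tolist(), counts.tolist()):
    print(f"{id2label[label_id]:>20s}: {count} pixels")
```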
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |
facebook/mask2former-swin-small-ade-semantic |
# Mask2Former
Mask2Former model trained on ADE20k semantic segmentation (small-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency, by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on ADE20k semantic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-small-ade-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-small-ade-semantic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
```
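Because all three tasks share the same output format, the same `outputs` object can in principle be handed to the other post-processing methods that the processor exposes (`post_process_instance_segmentation` and `post_process_panoptic_segmentation`) alongside the semantic variant. Note that this particular checkpoint was trained for semantic segmentation, so the sketch below only illustrates the unified interface rather than a recommended use:

```python
# same outputs, different post-processing heads (illustrative only for this semantic checkpoint)
semantic_map = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
panoptic_result = processor.post_process_panoptic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
instance_result = processor.post_process_instance_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(semantic_map.shape)  # (height, width) tensor of label ids
print(panoptic_result["segmentation"].shape, len(panoptic_result["segments_info"]))
print(instance_result["segmentation"].shape, len(instance_result["segments_info"]))
```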
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | [
"wall",
"building",
"sky",
"floor",
"tree",
"ceiling",
"road",
"bed ",
"windowpane",
"grass",
"cabinet",
"sidewalk",
"person",
"earth",
"door",
"table",
"mountain",
"plant",
"curtain",
"chair",
"car",
"water",
"painting",
"sofa",
"shelf",
"house",
"sea",
"mirror",
"rug",
"field",
"armchair",
"seat",
"fence",
"desk",
"rock",
"wardrobe",
"lamp",
"bathtub",
"railing",
"cushion",
"base",
"box",
"column",
"signboard",
"chest of drawers",
"counter",
"sand",
"sink",
"skyscraper",
"fireplace",
"refrigerator",
"grandstand",
"path",
"stairs",
"runway",
"case",
"pool table",
"pillow",
"screen door",
"stairway",
"river",
"bridge",
"bookcase",
"blind",
"coffee table",
"toilet",
"flower",
"book",
"hill",
"bench",
"countertop",
"stove",
"palm",
"kitchen island",
"computer",
"swivel chair",
"boat",
"bar",
"arcade machine",
"hovel",
"bus",
"towel",
"light",
"truck",
"tower",
"chandelier",
"awning",
"streetlight",
"booth",
"television receiver",
"airplane",
"dirt track",
"apparel",
"pole",
"land",
"bannister",
"escalator",
"ottoman",
"bottle",
"buffet",
"poster",
"stage",
"van",
"ship",
"fountain",
"conveyer belt",
"canopy",
"washer",
"plaything",
"swimming pool",
"stool",
"barrel",
"basket",
"waterfall",
"tent",
"bag",
"minibike",
"cradle",
"oven",
"ball",
"food",
"step",
"tank",
"trade name",
"microwave",
"pot",
"animal",
"bicycle",
"lake",
"dishwasher",
"screen",
"blanket",
"sculpture",
"hood",
"sconce",
"vase",
"traffic light",
"tray",
"ashcan",
"fan",
"pier",
"crt screen",
"plate",
"monitor",
"bulletin board",
"shower",
"radiator",
"glass",
"clock",
"flag"
] |