Transformers documentation

DiT


DiT is an image transformer pretrained on large-scale unlabeled document images. It learns to predict the missing visual tokens from a corrupted input image. The pretrained DiT model can be used as a backbone in other models for visual document tasks like document image classification and table detection.

You can find all the original DiT checkpoints under the Microsoft organization.

Refer to the BEiT docs for more examples of how to apply DiT to different vision tasks.

The example below demonstrates how to classify an image with Pipeline or the AutoModel class.

Pipeline
import torch
from transformers import pipeline

pipeline = pipeline(
    task="image-classification",
    model="microsoft/dit-base-finetuned-rvlcdip",
    torch_dtype=torch.float16,
    device=0
)
pipeline("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/dit-example.jpg")
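The AutoModel variant did not survive extraction on this page. A minimal sketch of the equivalent lower-level usage with AutoImageProcessor and AutoModelForImageClassification (the example image URL is the same one used above):

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Load the processor and the DiT model fine-tuned for document image
# classification on RVL-CDIP
processor = AutoImageProcessor.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")
model = AutoModelForImageClassification.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/dit-example.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Preprocess the image and run a forward pass without tracking gradients
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its class label
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```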

Notes

  • The pretrained DiT weights can be loaded in a BEiT model with a modeling head to predict visual tokens.
    from transformers import BeitForMaskedImageModeling
    
    model = BeitForMaskedImageModeling.from_pretrained("microsoft/dit-base")

Resources

  • Refer to this notebook for a document image classification inference example.