transformers

Get started

  • Quick tour
  • Installation
  • Philosophy
  • Glossary

Using 🤗 Transformers

  • Summary of the tasks
  • Summary of the models
  • Preprocessing data
  • Fine-tuning a pretrained model
  • Model sharing and uploading
  • Summary of the tokenizers
  • Multi-lingual models

Advanced guides

  • Pretrained models
  • Examples
  • Troubleshooting
  • Fine-tuning with custom datasets
  • 🤗 Transformers Notebooks
  • Run training on Amazon SageMaker
  • Community
  • Converting TensorFlow Checkpoints
  • Migrating from previous packages
  • How to contribute to transformers?
  • How to add a model to 🤗 Transformers?
  • How to add a pipeline to 🤗 Transformers?
  • Using tokenizers from 🤗 Tokenizers
  • Performance and Scalability: How To Fit a Bigger Model and Train It Faster
  • Model Parallelism
  • Testing
  • Debugging
  • Exporting transformers models

Research

  • BERTology
  • Perplexity of fixed-length models
  • Benchmarks

Main Classes

  • Callbacks
  • Configuration
  • Data Collator
  • Logging
  • Models
  • Optimization
  • Model outputs
  • Pipelines
  • Processors
  • Tokenizer
  • Trainer
  • DeepSpeed Integration
  • Feature Extractor

Models

  • ALBERT
  • Auto Classes
  • BART
  • BARThez
  • BEiT
  • BERT
  • Bertweet
  • BertGeneration
  • BertJapanese
  • BigBird
  • BigBirdPegasus
  • Blenderbot
  • Blenderbot Small
  • BORT
  • ByT5
  • CamemBERT
  • CANINE
  • CLIP
  • ConvBERT
  • CPM
  • CTRL
  • DeBERTa
  • DeBERTa-v2
  • DeiT
  • DETR
  • DialoGPT
  • DistilBERT
  • DPR
  • ELECTRA
  • Encoder Decoder Models
  • FlauBERT
  • FNet
  • FSMT
  • Funnel Transformer
  • herBERT
  • I-BERT
  • LayoutLM
  • LayoutLMV2
  • LayoutXLM
  • LED
  • Longformer
  • LUKE
  • LXMERT
  • MarianMT
  • M2M100
  • MBart and MBart-50
  • MegatronBERT
  • MegatronGPT2
  • MobileBERT
  • MPNet
  • mT5
  • OpenAI GPT
  • OpenAI GPT2
  • GPT-J
  • GPT Neo
  • Hubert
  • Pegasus
  • PhoBERT
  • ProphetNet
  • RAG
  • Reformer
  • RemBERT
  • RetriBERT
  • RoBERTa
  • RoFormer
  • Speech Encoder Decoder Models
  • Speech2Text
  • Speech2Text2
  • Splinter
  • SqueezeBERT
  • T5
  • T5v1.1
  • TAPAS
  • Transformer XL
  • Vision Transformer (ViT)
  • VisualBERT
  • Wav2Vec2
  • XLM
  • XLM-ProphetNet
  • XLM-RoBERTa
  • XLNet
  • XLSR-Wav2Vec2

Internal Helpers

  • Custom Layers and Utilities
  • Utilities for pipelines
  • Utilities for Tokenizers
  • Utilities for Trainer
  • Utilities for Generation
  • General Utilities

© Copyright 2020, The Hugging Face Team, Licenced under the Apache License, Version 2.0

Built with Sphinx using a theme provided by Read the Docs.