filename,text
perf_train_cpu_many.md,"
# Efficient Training on Multiple CPUs

When training on a single CPU is too slow, we can use multiple CPUs. This guide focuses on PyTorch-based DDP to enable distributed CPU training efficiently.

## Intel® oneCCL Bindings for PyTorch

[Intel® oneCCL](https://github.com/oneapi-src/oneCCL) (collective communications library) is a library for efficient distributed deep learning training that implements collectives such as allreduce, allgather, and alltoall. For more information on oneCCL, please refer to the [oneCCL documentation](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html) and [oneCCL specification](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html).

The module `oneccl_bindings_for_pytorch` (`torch_ccl` before version 1.12) implements the PyTorch C10D ProcessGroup API and can be dynamically loaded as an external ProcessGroup. It currently only works on Linux. Check [oneccl_bind_pt](https://github.com/intel/torch-ccl) for more detailed information.

### Intel® oneCCL Bindings for PyTorch installation

Wheel files are available for the following Python versions:

| Extension Version | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 |
| :---------------: | :--------: | :--------: | :--------: | :--------: | :---------: |
| 1.13.0            |            | √          | √          | √          | √           |
| 1.12.100          |            | √          | √          | √          | √           |
| 1.12.0            |            | √          | √          | √          | √           |
| 1.11.0            |            | √          | √          | √          | √           |
| 1.10.0            | √          | √          | √          | √          |             |

```bash
pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu
```

where `{pytorch_version}` should be your PyTorch version, for instance 1.13.0. Check [oneccl_bind_pt installation](https://github.com/intel/torch-ccl) for more installation approaches. The versions of oneCCL and PyTorch must match: the oneccl_bindings_for_pytorch 1.12.0 prebuilt wheel does not work with PyTorch 1.12.1 (it is for PyTorch 1.12.0); PyTorch 1.12.1 should be used with oneccl_bindings_for_pytorch 1.12.100.

## Intel® MPI library

Use this standards-based MPI implementation to deliver flexible, efficient, scalable cluster messaging on Intel® architecture. This component is part of the Intel® oneAPI HPC Toolkit.

oneccl_bindings_for_pytorch is installed along with the MPI tool set. The environment needs to be sourced before using it.

For Intel® oneCCL >= 1.12.0:

```bash
oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
source $oneccl_bindings_for_pytorch_path/env/setvars.sh
```

For Intel® oneCCL versions < 1.12.0:

```bash
torch_ccl_path=$(python -c "import torch; import torch_ccl; import os; print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))")
source $torch_ccl_path/env/setvars.sh
```

#### IPEX installation

IPEX provides performance optimizations for CPU training with both Float32 and BFloat16; you can refer to the [single CPU section](./perf_train_cpu).

The following "Usage in Trainer" takes mpirun in the Intel® MPI library as an example.

## Usage in Trainer

To enable multi-CPU distributed training in the Trainer with the ccl backend, users should add **`--ddp_backend ccl`** to the command arguments.

Let's see an example with the [question-answering example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering). The variables OMP_NUM_THREADS/CCL_WORKER_COUNT can be tuned for optimal performance.
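Before launching a job with mpirun, it can also be worth checking that the oneCCL bindings import cleanly in the Python environment you will launch with. The snippet below is a minimal, illustrative sanity check (not part of the official setup); it only assumes the module names already mentioned above:

```python
# Minimal sanity check (illustrative): confirm PyTorch and the oneCCL bindings load,
# so that the "ccl" distributed backend can be requested by the Trainer.
import torch
import torch.distributed as dist
import oneccl_bindings_for_pytorch  # noqa: F401  -- use `import torch_ccl` for versions < 1.12

print("torch:", torch.__version__)
print("distributed available:", dist.is_available())
```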
The following command enables training with 2 processes on one Xeon node, with one process running per socket:

```shell script
export CCL_WORKER_COUNT=1
export MASTER_ADDR=127.0.0.1
mpirun -n 2 -genv OMP_NUM_THREADS=23 \
 python3 run_qa.py \
 --model_name_or_path bert-large-uncased \
 --dataset_name squad \
 --do_train \
 --do_eval \
 --per_device_train_batch_size 12 \
 --learning_rate 3e-5 \
 --num_train_epochs 2 \
 --max_seq_length 384 \
 --doc_stride 128 \
 --output_dir /tmp/debug_squad/ \
 --no_cuda \
 --ddp_backend ccl \
 --use_ipex
```

The following command enables training with a total of four processes on two Xeons (node0 and node1, taking node0 as the main process). ppn (processes per node) is set to 2, with one process running per socket. The variables OMP_NUM_THREADS/CCL_WORKER_COUNT can be tuned for optimal performance.

On node0, you need to create a configuration file which contains the IP addresses of each node (for example hostfile) and pass that configuration file path as an argument.

```shell script
 cat hostfile
 xxx.xxx.xxx.xxx #node0 ip
 xxx.xxx.xxx.xxx #node1 ip
```

Now, run the following command in node0 and **4DDP** will be enabled in node0 and node1 with BF16 auto mixed precision:

```shell script
export CCL_WORKER_COUNT=1
export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip
mpirun -f hostfile -n 4 -ppn 2 \
 -genv OMP_NUM_THREADS=23 \
 python3 run_qa.py \
 --model_name_or_path bert-large-uncased \
 --dataset_name squad \
 --do_train \
 --do_eval \
 --per_device_train_batch_size 12 \
 --learning_rate 3e-5 \
 --num_train_epochs 2 \
 --max_seq_length 384 \
 --doc_stride 128 \
 --output_dir /tmp/debug_squad/ \
 --no_cuda \
 --ddp_backend ccl \
 --use_ipex \
 --bf16
```
"
bertology.md,"
# BERTology

There is a growing field of study concerned with investigating the inner workings of large-scale transformers like BERT (which some call "BERTology"). Some good examples of this field are:

- BERT Rediscovers the Classical NLP Pipeline by Ian Tenney, Dipanjan Das, Ellie Pavlick: https://arxiv.org/abs/1905.05950
- Are Sixteen Heads Really Better than One? by Paul Michel, Omer Levy, Graham Neubig: https://arxiv.org/abs/1905.10650
- What Does BERT Look At? An Analysis of BERT's Attention by Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning: https://arxiv.org/abs/1906.04341
- CAT-probing: A Metric-based Approach to Interpret How Pre-trained Models for Programming Language Attend Code Structure: https://arxiv.org/abs/2210.04633

In order to help this new field develop, we have included a few additional features in the BERT/GPT/GPT-2 models to help people access the inner representations, mainly adapted from the great work of Paul Michel (https://arxiv.org/abs/1905.10650):

- accessing all the hidden states of BERT/GPT/GPT-2,
- accessing all the attention weights for each head of BERT/GPT/GPT-2,
- retrieving the heads' output values and gradients to be able to compute the head importance score and prune heads as explained in https://arxiv.org/abs/1905.10650.

To help you understand and use these features, we have added a specific example script, [bertology.py](https://github.com/huggingface/transformers/tree/main/examples/research_projects/bertology/run_bertology.py), which extracts information from and prunes a model pre-trained on GLUE.
"
training.md,"
# Fine-tune a pretrained model

[[open-in-colab]]

There are significant benefits to using a pretrained model. It reduces computation costs, your carbon footprint, and allows you to use state-of-the-art models without having to train one from scratch. 🤗 Transformers provides access to thousands of pretrained models for a wide range of tasks.
When you use a pretrained model, you train it on a dataset specific to your task. This is known as fine-tuning, an incredibly powerful training technique. In this tutorial, you will fine-tune a pretrained model with a deep learning framework of your choice: * Fine-tune a pretrained model with 🤗 Transformers [`Trainer`]. * Fine-tune a pretrained model in TensorFlow with Keras. * Fine-tune a pretrained model in native PyTorch. ## Prepare a dataset Before you can fine-tune a pretrained model, download a dataset and prepare it for training. The previous tutorial showed you how to process data for training, and now you get an opportunity to put those skills to the test! Begin by loading the [Yelp Reviews](https://huggingface.co/datasets/yelp_review_full) dataset: >>> from datasets import load_dataset >>> dataset = load_dataset(""yelp_review_full"") >>> dataset[""train""][100] {'label': 0, 'text': 'My expectations for McDonalds are t rarely high. But for one to still fail so spectacularlythat takes something special!\\nThe cashier took my friends\'s order, then promptly ignored me. I had to force myself in front of a cashier who opened his register to wait on the person BEHIND me. I waited over five minutes for a gigantic order that included precisely one kid\'s meal. After watching two people who ordered after me be handed their food, I asked where mine was. The manager started yelling at the cashiers for \\""serving off their orders\\"" when they didn\'t have their food. But neither cashier was anywhere near those controls, and the manager was the one serving food to customers and clearing the boards.\\nThe manager was rude when giving me my order. She didn\'t make sure that I had everything ON MY RECEIPT, and never even had the decency to apologize that I felt I was getting poor service.\\nI\'ve eaten at various McDonalds restaurants for over 30 years. I\'ve worked at more than one location. I expect bad days, bad moods, and the occasional mistake. But I have yet to have a decent experience at this store. It will remain a place I avoid unless someone in my party needs to avoid illness from low blood sugar. Perhaps I should go back to the racially biased service of Steak n Shake instead!'} As you now know, you need a tokenizer to process the text and include a padding and truncation strategy to handle any variable sequence lengths. To process your dataset in one step, use 🤗 Datasets [`map`](https://huggingface.co/docs/datasets/process#map) method to apply a preprocessing function over the entire dataset: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""bert-base-cased"") >>> def tokenize_function(examples): return tokenizer(examples[""text""], padding=""max_length"", truncation=True) >>> tokenized_datasets = dataset.map(tokenize_function, batched=True) If you like, you can create a smaller subset of the full dataset to fine-tune on to reduce the time it takes: >>> small_train_dataset = tokenized_datasets[""train""].shuffle(seed=42).select(range(1000)) >>> small_eval_dataset = tokenized_datasets[""test""].shuffle(seed=42).select(range(1000)) ## Train At this point, you should follow the section corresponding to the framework you want to use. You can use the links in the right sidebar to jump to the one you want - and if you want to hide all of the content for a given framework, just use the button at the top-right of that framework's block! 
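Whichever framework you pick, it can help to spot-check the preprocessed data before training. This is an optional, illustrative check that reuses the `small_train_dataset` and `tokenizer` objects defined above:

```python
# Optional sanity check: look at one preprocessed example before training.
sample = small_train_dataset[0]
print(sample.keys())                                # includes input_ids, attention_mask, label, ...
print("label:", sample["label"])                    # integer class id (0-4 for Yelp stars)
print(tokenizer.decode(sample["input_ids"])[:200])  # decode the token ids back to (padded) text
```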
## Train with PyTorch Trainer 🤗 Transformers provides a [`Trainer`] class optimized for training 🤗 Transformers models, making it easier to start training without manually writing your own training loop. The [`Trainer`] API supports a wide range of training options and features such as logging, gradient accumulation, and mixed precision. Start by loading your model and specify the number of expected labels. From the Yelp Review [dataset card](https://huggingface.co/datasets/yelp_review_full#data-fields), you know there are five labels: >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(""bert-base-cased"", num_labels=5) You will see a warning about some of the pretrained weights not being used and some weights being randomly initialized. Don't worry, this is completely normal! The pretrained head of the BERT model is discarded, and replaced with a randomly initialized classification head. You will fine-tune this new model head on your sequence classification task, transferring the knowledge of the pretrained model to it. ### Training hyperparameters Next, create a [`TrainingArguments`] class which contains all the hyperparameters you can tune as well as flags for activating different training options. For this tutorial you can start with the default training [hyperparameters](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments), but feel free to experiment with these to find your optimal settings. Specify where to save the checkpoints from your training: >>> from transformers import TrainingArguments >>> training_args = TrainingArguments(output_dir=""test_trainer"") ### Evaluate [`Trainer`] does not automatically evaluate model performance during training. You'll need to pass [`Trainer`] a function to compute and report metrics. The [🤗 Evaluate](https://huggingface.co/docs/evaluate/index) library provides a simple [`accuracy`](https://huggingface.co/spaces/evaluate-metric/accuracy) function you can load with the [`evaluate.load`] (see this [quicktour](https://huggingface.co/docs/evaluate/a_quick_tour) for more information) function: >>> import numpy as np >>> import evaluate >>> metric = evaluate.load(""accuracy"") Call [`~evaluate.compute`] on `metric` to calculate the accuracy of your predictions. 
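For instance, calling `compute` on a couple of toy label lists (values made up purely for illustration) returns a dictionary keyed by the metric name:

```python
# Toy illustration of how the accuracy metric reports results (values are made up).
import evaluate

metric = evaluate.load("accuracy")
print(metric.compute(predictions=[0, 1, 1, 4], references=[0, 1, 2, 4]))
# {'accuracy': 0.75}
```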
Before passing your predictions to `compute`, you need to convert the logits to predictions (remember all 🤗 Transformers models return logits): >>> def compute_metrics(eval_pred): logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) return metric.compute(predictions=predictions, references=labels) If you'd like to monitor your evaluation metrics during fine-tuning, specify the `evaluation_strategy` parameter in your training arguments to report the evaluation metric at the end of each epoch: >>> from transformers import TrainingArguments, Trainer >>> training_args = TrainingArguments(output_dir=""test_trainer"", evaluation_strategy=""epoch"") ### Trainer Create a [`Trainer`] object with your model, training arguments, training and test datasets, and evaluation function: >>> trainer = Trainer( model=model, args=training_args, train_dataset=small_train_dataset, eval_dataset=small_eval_dataset, compute_metrics=compute_metrics, ) Then fine-tune your model by calling [`~transformers.Trainer.train`]: >>> trainer.train() ## Train a TensorFlow model with Keras You can also train 🤗 Transformers models in TensorFlow with the Keras API! ### Loading data for Keras When you want to train a 🤗 Transformers model with the Keras API, you need to convert your dataset to a format that Keras understands. If your dataset is small, you can just convert the whole thing to NumPy arrays and pass it to Keras. Let's try that first before we do anything more complicated. First, load a dataset. We'll use the CoLA dataset from the [GLUE benchmark](https://huggingface.co/datasets/glue), since it's a simple binary text classification task, and just take the training split for now. from datasets import load_dataset dataset = load_dataset(""glue"", ""cola"") dataset = dataset[""train""] # Just take the training split for now Next, load a tokenizer and tokenize the data as NumPy arrays. Note that the labels are already a list of 0 and 1s, so we can just convert that directly to a NumPy array without tokenization! from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(""bert-base-cased"") tokenized_data = tokenizer(dataset[""sentence""], return_tensors=""np"", padding=True) # Tokenizer returns a BatchEncoding, but we convert that to a dict for Keras tokenized_data = dict(tokenized_data) labels = np.array(dataset[""label""]) # Label is already an array of 0 and 1 Finally, load, [`compile`](https://keras.io/api/models/model_training_apis/#compile-method), and [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) the model. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to: from transformers import TFAutoModelForSequenceClassification from tensorflow.keras.optimizers import Adam # Load and compile our model model = TFAutoModelForSequenceClassification.from_pretrained(""bert-base-cased"") # Lower learning rates are often better for fine-tuning transformers model.compile(optimizer=Adam(3e-5)) # No loss argument! model.fit(tokenized_data, labels) You don't have to pass a loss argument to your models when you `compile()` them! Hugging Face models automatically choose a loss that is appropriate for their task and model architecture if this argument is left blank. You can always override this by specifying a loss yourself if you want to! This approach works great for smaller datasets, but for larger datasets, you might find it starts to become a problem. Why? 
Because the tokenized array and labels would have to be fully loaded into memory, and because NumPy doesn’t handle “jagged” arrays, so every tokenized sample would have to be padded to the length of the longest sample in the whole dataset. That’s going to make your array even bigger, and all those padding tokens will slow down training too! ### Loading data as a tf.data.Dataset If you want to avoid slowing down training, you can load your data as a `tf.data.Dataset` instead. Although you can write your own `tf.data` pipeline if you want, we have two convenience methods for doing this: - [`~TFPreTrainedModel.prepare_tf_dataset`]: This is the method we recommend in most cases. Because it is a method on your model, it can inspect the model to automatically figure out which columns are usable as model inputs, and discard the others to make a simpler, more performant dataset. - [`~datasets.Dataset.to_tf_dataset`]: This method is more low-level, and is useful when you want to exactly control how your dataset is created, by specifying exactly which `columns` and `label_cols` to include. Before you can use [`~TFPreTrainedModel.prepare_tf_dataset`], you will need to add the tokenizer outputs to your dataset as columns, as shown in the following code sample: def tokenize_dataset(data): # Keys of the returned dictionary will be added to the dataset as columns return tokenizer(data[""text""]) dataset = dataset.map(tokenize_dataset) Remember that Hugging Face datasets are stored on disk by default, so this will not inflate your memory usage! Once the columns have been added, you can stream batches from the dataset and add padding to each batch, which greatly reduces the number of padding tokens compared to padding the entire dataset. >>> tf_dataset = model.prepare_tf_dataset(dataset[""train""], batch_size=16, shuffle=True, tokenizer=tokenizer) Note that in the code sample above, you need to pass the tokenizer to `prepare_tf_dataset` so it can correctly pad batches as they're loaded. If all the samples in your dataset are the same length and no padding is necessary, you can skip this argument. If you need to do something more complex than just padding samples (e.g. corrupting tokens for masked language modelling), you can use the `collate_fn` argument instead to pass a function that will be called to transform the list of samples into a batch and apply any preprocessing you want. See our [examples](https://github.com/huggingface/transformers/tree/main/examples) or [notebooks](https://huggingface.co/docs/transformers/notebooks) to see this approach in action. Once you've created a `tf.data.Dataset`, you can compile and fit the model as before: model.compile(optimizer=Adam(3e-5)) # No loss argument! model.fit(tf_dataset) ## Train in native PyTorch [`Trainer`] takes care of the training loop and allows you to fine-tune a model in a single line of code. For users who prefer to write their own training loop, you can also fine-tune a 🤗 Transformers model in native PyTorch. At this point, you may need to restart your notebook or execute the following code to free some memory: del model del trainer torch.cuda.empty_cache() Next, manually postprocess `tokenized_dataset` to prepare it for training. 1. Remove the `text` column because the model does not accept raw text as an input: >>> tokenized_datasets = tokenized_datasets.remove_columns([""text""]) 2. 
Rename the `label` column to `labels` because the model expects the argument to be named `labels`: >>> tokenized_datasets = tokenized_datasets.rename_column(""label"", ""labels"") 3. Set the format of the dataset to return PyTorch tensors instead of lists: >>> tokenized_datasets.set_format(""torch"") Then create a smaller subset of the dataset as previously shown to speed up the fine-tuning: >>> small_train_dataset = tokenized_datasets[""train""].shuffle(seed=42).select(range(1000)) >>> small_eval_dataset = tokenized_datasets[""test""].shuffle(seed=42).select(range(1000)) ### DataLoader Create a `DataLoader` for your training and test datasets so you can iterate over batches of data: >>> from torch.utils.data import DataLoader >>> train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8) >>> eval_dataloader = DataLoader(small_eval_dataset, batch_size=8) Load your model with the number of expected labels: >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(""bert-base-cased"", num_labels=5) ### Optimizer and learning rate scheduler Create an optimizer and learning rate scheduler to fine-tune the model. Let's use the [`AdamW`](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html) optimizer from PyTorch: >>> from torch.optim import AdamW >>> optimizer = AdamW(model.parameters(), lr=5e-5) Create the default learning rate scheduler from [`Trainer`]: >>> from transformers import get_scheduler >>> num_epochs = 3 >>> num_training_steps = num_epochs * len(train_dataloader) >>> lr_scheduler = get_scheduler( name=""linear"", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps ) Lastly, specify `device` to use a GPU if you have access to one. Otherwise, training on a CPU may take several hours instead of a couple of minutes. >>> import torch >>> device = torch.device(""cuda"") if torch.cuda.is_available() else torch.device(""cpu"") >>> model.to(device) Get free access to a cloud GPU if you don't have one with a hosted notebook like [Colaboratory](https://colab.research.google.com/) or [SageMaker StudioLab](https://studiolab.sagemaker.aws/). Great, now you are ready to train! 🥳 ### Training loop To keep track of your training progress, use the [tqdm](https://tqdm.github.io/) library to add a progress bar over the number of training steps: >>> from tqdm.auto import tqdm >>> progress_bar = tqdm(range(num_training_steps)) >>> model.train() >>> for epoch in range(num_epochs): for batch in train_dataloader: batch = {k: v.to(device) for k, v in batch.items()} outputs = model(**batch) loss = outputs.loss loss.backward() optimizer.step() lr_scheduler.step() optimizer.zero_grad() progress_bar.update(1) ### Evaluate Just like how you added an evaluation function to [`Trainer`], you need to do the same when you write your own training loop. But instead of calculating and reporting the metric at the end of each epoch, this time you'll accumulate all the batches with [`~evaluate.add_batch`] and calculate the metric at the very end. 
>>> import evaluate >>> metric = evaluate.load(""accuracy"") >>> model.eval() >>> for batch in eval_dataloader: batch = {k: v.to(device) for k, v in batch.items()} with torch.no_grad(): outputs = model(**batch) logits = outputs.logits predictions = torch.argmax(logits, dim=-1) metric.add_batch(predictions=predictions, references=batch[""labels""]) >>> metric.compute() ## Additional resources For more fine-tuning examples, refer to: - [🤗 Transformers Examples](https://github.com/huggingface/transformers/tree/main/examples) includes scripts to train common NLP tasks in PyTorch and TensorFlow. - [🤗 Transformers Notebooks](notebooks) contains various notebooks on how to fine-tune a model for specific tasks in PyTorch and TensorFlow. " tf_xla.md," # XLA Integration for TensorFlow Models [[open-in-colab]] Accelerated Linear Algebra, dubbed XLA, is a compiler for accelerating the runtime of TensorFlow Models. From the [official documentation](https://www.tensorflow.org/xla): XLA (Accelerated Linear Algebra) is a domain-specific compiler for linear algebra that can accelerate TensorFlow models with potentially no source code changes. Using XLA in TensorFlow is simple – it comes packaged inside the `tensorflow` library, and it can be triggered with the `jit_compile` argument in any graph-creating function such as [`tf.function`](https://www.tensorflow.org/guide/intro_to_graphs). When using Keras methods like `fit()` and `predict()`, you can enable XLA simply by passing the `jit_compile` argument to `model.compile()`. However, XLA is not limited to these methods - it can also be used to accelerate any arbitrary `tf.function`. Several TensorFlow methods in 🤗 Transformers have been rewritten to be XLA-compatible, including text generation for models such as [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2), [T5](https://huggingface.co/docs/transformers/model_doc/t5) and [OPT](https://huggingface.co/docs/transformers/model_doc/opt), as well as speech processing for models such as [Whisper](https://huggingface.co/docs/transformers/model_doc/whisper). While the exact amount of speed-up is very much model-dependent, for TensorFlow text generation models inside 🤗 Transformers, we noticed a speed-up of ~100x. This document will explain how you can use XLA for these models to get the maximum amount of performance. We’ll also provide links to additional resources if you’re interested to learn more about the benchmarks and our design philosophy behind the XLA integration. ## Running TF functions with XLA Let us consider the following model in TensorFlow: import tensorflow as tf model = tf.keras.Sequential( [tf.keras.layers.Dense(10, input_shape=(10,), activation=""relu""), tf.keras.layers.Dense(5, activation=""softmax"")] ) The above model accepts inputs having a dimension of `(10, )`. We can use the model for running a forward pass like so: # Generate random inputs for the model. batch_size = 16 input_vector_dim = 10 random_inputs = tf.random.normal((batch_size, input_vector_dim)) # Run a forward pass. _ = model(random_inputs) In order to run the forward pass with an XLA-compiled function, we’d need to do: xla_fn = tf.function(model, jit_compile=True) _ = xla_fn(random_inputs) The default `call()` function of the `model` is used for compiling the XLA graph. 
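If you want a rough feel for the compilation overhead, you can time a few consecutive calls. This is a small illustrative sketch that reuses the `xla_fn` and `random_inputs` defined above; the exact numbers will vary by hardware:

```python
# Illustrative timing: the first XLA call includes tracing/compilation, later calls reuse the graph.
import time

for i in range(3):
    start = time.perf_counter()
    _ = xla_fn(random_inputs)  # same input shape, so no re-tracing after the first call
    print(f"call {i}: {(time.perf_counter() - start) * 1e3:.1f} ms")
```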
But if there’s any other model function you want to compile into XLA that’s also possible with: my_xla_fn = tf.function(model.my_xla_fn, jit_compile=True) ## Running a TF text generation model with XLA from 🤗 Transformers To enable XLA-accelerated generation within 🤗 Transformers, you need to have a recent version of `transformers` installed. You can install it by running: ```bash pip install transformers --upgrade And then you can run the following code: import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForCausalLM # Will error if the minimal version of Transformers is not installed. from transformers.utils import check_min_version check_min_version(""4.21.0"") tokenizer = AutoTokenizer.from_pretrained(""gpt2"", padding_side=""left"", pad_token="""") model = TFAutoModelForCausalLM.from_pretrained(""gpt2"") input_string = [""TensorFlow is""] # One line to create an XLA generation function xla_generate = tf.function(model.generate, jit_compile=True) tokenized_input = tokenizer(input_string, return_tensors=""tf"") generated_tokens = xla_generate(**tokenized_input, num_beams=2) decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True) print(f""Generated -- {decoded_text}"") # Generated -- TensorFlow is an open-source, open-source, distributed-source application # framework for the As you can notice, enabling XLA on `generate()` is just a single line of code. The rest of the code remains unchanged. However, there are a couple of gotchas in the above code snippet that are specific to XLA. You need to be aware of those to realize the speed-ups that XLA can bring in. We discuss these in the following section. ## Gotchas to be aware of When you are executing an XLA-enabled function (like `xla_generate()` above) for the first time, it will internally try to infer the computation graph, which is time-consuming. This process is known as [“tracing”](https://www.tensorflow.org/guide/intro_to_graphs#when_is_a_function_tracing). You might notice that the generation time is not fast. Successive calls of `xla_generate()` (or any other XLA-enabled function) won’t have to infer the computation graph, given the inputs to the function follow the same shape with which the computation graph was initially built. While this is not a problem for modalities with fixed input shapes (e.g., images), you must pay attention if you are working with variable input shape modalities (e.g., text). To ensure `xla_generate()` always operates with the same input shapes, you can specify the `padding` arguments when calling the tokenizer. import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained(""gpt2"", padding_side=""left"", pad_token="""") model = TFAutoModelForCausalLM.from_pretrained(""gpt2"") input_string = [""TensorFlow is""] xla_generate = tf.function(model.generate, jit_compile=True) # Here, we call the tokenizer with padding options. tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors=""tf"") generated_tokens = xla_generate(**tokenized_input, num_beams=2) decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True) print(f""Generated -- {decoded_text}"") This way, you can ensure that the inputs to `xla_generate()` will always receive inputs with the shape it was traced with and thus leading to speed-ups in the generation time. 
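To see how those padding options pin the input shape, you can print the shape of a tokenized batch. This is a small illustrative check reusing the `tokenizer` configured above; the exact padded length depends on the prompt:

```python
# With padding=True and pad_to_multiple_of=8, the sequence length is rounded up to a multiple of 8,
# so prompts of similar length map onto the same traced shape.
tokenized_input = tokenizer(["TensorFlow is"], pad_to_multiple_of=8, padding=True, return_tensors="tf")
print(tokenized_input["input_ids"].shape)  # e.g. (1, 8)
```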
You can verify this with the code below: import time import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained(""gpt2"", padding_side=""left"", pad_token="""") model = TFAutoModelForCausalLM.from_pretrained(""gpt2"") xla_generate = tf.function(model.generate, jit_compile=True) for input_string in [""TensorFlow is"", ""TensorFlow is a"", ""TFLite is a""]: tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors=""tf"") start = time.time_ns() generated_tokens = xla_generate(**tokenized_input, num_beams=2) end = time.time_ns() print(f""Execution time -- {(end - start) / 1e6:.1f} ms\n"") On a Tesla T4 GPU, you can expect the outputs like so: ```bash Execution time -- 30819.6 ms Execution time -- 79.0 ms Execution time -- 78.9 ms The first call to `xla_generate()` is time-consuming because of tracing, but the successive calls are orders of magnitude faster. Keep in mind that any change in the generation options at any point with trigger re-tracing and thus leading to slow-downs in the generation time. We didn’t cover all the text generation options 🤗 Transformers provides in this document. We encourage you to read the documentation for advanced use cases. ## Additional Resources Here, we leave you with some additional resources if you want to delve deeper into XLA in 🤗 Transformers and in general. * [This Colab Notebook](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/91_tf_xla_generate.ipynb) provides an interactive demonstration if you want to fiddle with the XLA-compatible encoder-decoder (like [T5](https://huggingface.co/docs/transformers/model_doc/t5)) and decoder-only (like [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2)) text generation models. * [This blog post](https://huggingface.co/blog/tf-xla-generate) provides an overview of the comparison benchmarks for XLA-compatible models along with a friendly introduction to XLA in TensorFlow. * [This blog post](https://blog.tensorflow.org/2022/11/how-hugging-face-improved-text-generation-performance-with-xla.html) discusses our design philosophy behind adding XLA support to the TensorFlow models in 🤗 Transformers. * Recommended posts for learning more about XLA and TensorFlow graphs in general: * [XLA: Optimizing Compiler for Machine Learning](https://www.tensorflow.org/xla) * [Introduction to graphs and tf.function](https://www.tensorflow.org/guide/intro_to_graphs) * [Better performance with tf.function](https://www.tensorflow.org/guide/function) " run_scripts.md," # Train with a script Along with the 🤗 Transformers [notebooks](./noteboks/README), there are also example scripts demonstrating how to train a model for a task with [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch), [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow), or [JAX/Flax](https://github.com/huggingface/transformers/tree/main/examples/flax). You will also find scripts we've used in our [research projects](https://github.com/huggingface/transformers/tree/main/examples/research_projects) and [legacy examples](https://github.com/huggingface/transformers/tree/main/examples/legacy) which are mostly community contributed. These scripts are not actively maintained and require a specific version of 🤗 Transformers that will most likely be incompatible with the latest version of the library. 
The example scripts are not expected to work out-of-the-box on every problem, and you may need to adapt the script to the problem you're trying to solve. To help you with this, most of the scripts fully expose how data is preprocessed, allowing you to edit it as necessary for your use case. For any feature you'd like to implement in an example script, please discuss it on the [forum](https://discuss.huggingface.co/) or in an [issue](https://github.com/huggingface/transformers/issues) before submitting a Pull Request. While we welcome bug fixes, it is unlikely we will merge a Pull Request that adds more functionality at the cost of readability. This guide will show you how to run an example summarization training script in [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) and [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization). All examples are expected to work with both frameworks unless otherwise specified. ## Setup To successfully run the latest version of the example scripts, you have to **install 🤗 Transformers from source** in a new virtual environment: ```bash git clone https://github.com/huggingface/transformers cd transformers pip install . For older versions of the example scripts, click on the toggle below: Examples for older versions of 🤗 Transformers v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.0 v2.3.0 v2.2.0 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 Then switch your current clone of 🤗 Transformers to a specific version, like v3.5.1 for example: ```bash git checkout tags/v3.5.1 After you've setup the correct library version, navigate to the example folder of your choice and install the example specific requirements: ```bash pip install -r requirements.txt ## Run a script The example script downloads and preprocesses a dataset from the 🤗 [Datasets](https://huggingface.co/docs/datasets/) library. Then the script fine-tunes a dataset with the [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) on an architecture that supports summarization. The following example shows how to fine-tune [T5-small](https://huggingface.co/t5-small) on the [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) dataset. The T5 model requires an additional `source_prefix` argument due to how it was trained. This prompt lets T5 know this is a summarization task. ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config ""3.0.0"" \ --source_prefix ""summarize: "" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate The example script downloads and preprocesses a dataset from the 🤗 [Datasets](https://huggingface.co/docs/datasets/) library. Then the script fine-tunes a dataset using Keras on an architecture that supports summarization. The following example shows how to fine-tune [T5-small](https://huggingface.co/t5-small) on the [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) dataset. The T5 model requires an additional `source_prefix` argument due to how it was trained. This prompt lets T5 know this is a summarization task. 
```bash python examples/tensorflow/summarization/run_summarization.py \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config ""3.0.0"" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 16 \ --num_train_epochs 3 \ --do_train \ --do_eval ## Distributed training and mixed precision The [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) supports distributed training and mixed precision, which means you can also use it in a script. To enable both of these features: - Add the `fp16` argument to enable mixed precision. - Set the number of GPUs to use with the `nproc_per_node` argument. ```bash python -m torch.distributed.launch \ --nproc_per_node 8 pytorch/summarization/run_summarization.py \ --fp16 \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config ""3.0.0"" \ --source_prefix ""summarize: "" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate TensorFlow scripts utilize a [`MirroredStrategy`](https://www.tensorflow.org/guide/distributed_training#mirroredstrategy) for distributed training, and you don't need to add any additional arguments to the training script. The TensorFlow script will use multiple GPUs by default if they are available. ## Run a script on a TPU Tensor Processing Units (TPUs) are specifically designed to accelerate performance. PyTorch supports TPUs with the [XLA](https://www.tensorflow.org/xla) deep learning compiler (see [here](https://github.com/pytorch/xla/blob/master/README.md) for more details). To use a TPU, launch the `xla_spawn.py` script and use the `num_cores` argument to set the number of TPU cores you want to use. ```bash python xla_spawn.py --num_cores 8 \ summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config ""3.0.0"" \ --source_prefix ""summarize: "" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate Tensor Processing Units (TPUs) are specifically designed to accelerate performance. TensorFlow scripts utilize a [`TPUStrategy`](https://www.tensorflow.org/guide/distributed_training#tpustrategy) for training on TPUs. To use a TPU, pass the name of the TPU resource to the `tpu` argument. ```bash python run_summarization.py \ --tpu name_of_tpu_resource \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config ""3.0.0"" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 16 \ --num_train_epochs 3 \ --do_train \ --do_eval ## Run a script with 🤗 Accelerate 🤗 [Accelerate](https://huggingface.co/docs/accelerate) is a PyTorch-only library that offers a unified method for training a model on several types of setups (CPU-only, multiple GPUs, TPUs) while maintaining complete visibility into the PyTorch training loop. Make sure you have 🤗 Accelerate installed if you don't already have it: > Note: As Accelerate is rapidly developing, the git version of accelerate must be installed to run the scripts ```bash pip install git+https://github.com/huggingface/accelerate Instead of the `run_summarization.py` script, you need to use the `run_summarization_no_trainer.py` script. 
🤗 Accelerate supported scripts will have a `task_no_trainer.py` file in the folder. Begin by running the following command to create and save a configuration file: ```bash accelerate config Test your setup to make sure it is configured correctly: ```bash accelerate test Now you are ready to launch the training: ```bash accelerate launch run_summarization_no_trainer.py \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config ""3.0.0"" \ --source_prefix ""summarize: "" \ --output_dir ~/tmp/tst-summarization ## Use a custom dataset The summarization script supports custom datasets as long as they are a CSV or JSON Line file. When you use your own dataset, you need to specify several additional arguments: - `train_file` and `validation_file` specify the path to your training and validation files. - `text_column` is the input text to summarize. - `summary_column` is the target text to output. A summarization script using a custom dataset would look like this: ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --train_file path_to_csv_or_jsonlines_file \ --validation_file path_to_csv_or_jsonlines_file \ --text_column text_column_name \ --summary_column summary_column_name \ --source_prefix ""summarize: "" \ --output_dir /tmp/tst-summarization \ --overwrite_output_dir \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --predict_with_generate ## Test a script It is often a good idea to run your script on a smaller number of dataset examples to ensure everything works as expected before committing to an entire dataset which may take hours to complete. Use the following arguments to truncate the dataset to a maximum number of samples: - `max_train_samples` - `max_eval_samples` - `max_predict_samples` ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --max_train_samples 50 \ --max_eval_samples 50 \ --max_predict_samples 50 \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config ""3.0.0"" \ --source_prefix ""summarize: "" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate Not all example scripts support the `max_predict_samples` argument. If you aren't sure whether your script supports this argument, add the `-h` argument to check: ```bash examples/pytorch/summarization/run_summarization.py -h ## Resume training from checkpoint Another helpful option to enable is resuming training from a previous checkpoint. This will ensure you can pick up where you left off without starting over if your training gets interrupted. There are two methods to resume training from a checkpoint. The first method uses the `output_dir previous_output_dir` argument to resume training from the latest checkpoint stored in `output_dir`. In this case, you should remove `overwrite_output_dir`: ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config ""3.0.0"" \ --source_prefix ""summarize: "" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --output_dir previous_output_dir \ --predict_with_generate The second method uses the `resume_from_checkpoint path_to_specific_checkpoint` argument to resume training from a specific checkpoint folder. 
```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config ""3.0.0"" \ --source_prefix ""summarize: "" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --resume_from_checkpoint path_to_specific_checkpoint \ --predict_with_generate ## Share your model All scripts can upload your final model to the [Model Hub](https://huggingface.co/models). Make sure you are logged into Hugging Face before you begin: ```bash huggingface-cli login Then add the `push_to_hub` argument to the script. This argument will create a repository with your Hugging Face username and the folder name specified in `output_dir`. To give your repository a specific name, use the `push_to_hub_model_id` argument to add it. The repository will be automatically listed under your namespace. The following example shows how to upload a model with a specific repository name: ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config ""3.0.0"" \ --source_prefix ""summarize: "" \ --push_to_hub \ --push_to_hub_model_id finetuned-t5-cnn_dailymail \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ```" generation_strategies.md," # Text generation strategies Text generation is essential to many NLP tasks, such as open-ended text generation, summarization, translation, and more. It also plays a role in a variety of mixed-modality applications that have text as an output like speech-to-text and vision-to-text. Some of the models that can generate text include GPT2, XLNet, OpenAI GPT, CTRL, TransformerXL, XLM, Bart, T5, GIT, Whisper. Check out a few examples that use [`~transformers.generation_utils.GenerationMixin.generate`] method to produce text outputs for different tasks: * [Text summarization](./tasks/summarization#inference) * [Image captioning](./model_doc/git#transformers.GitForCausalLM.forward.example) * [Audio transcription](./model_doc/whisper#transformers.WhisperForConditionalGeneration.forward.example) Note that the inputs to the generate method depend on the model's modality. They are returned by the model's preprocessor class, such as AutoTokenizer or AutoProcessor. If a model's preprocessor creates more than one kind of input, pass all the inputs to generate(). You can learn more about the individual model's preprocessor in the corresponding model's documentation. The process of selecting output tokens to generate text is known as decoding, and you can customize the decoding strategy that the `generate()` method will use. Modifying a decoding strategy does not change the values of any trainable parameters. However, it can have a noticeable impact on the quality of the generated output. It can help reduce repetition in the text and make it more coherent. This guide describes: * default generation configuration * common decoding strategies and their main parameters * saving and sharing custom generation configurations with your fine-tuned model on 🤗 Hub ## Default text generation configuration A decoding strategy for a model is defined in its generation configuration. 
When using pre-trained models for inference within a [`pipeline`], the models call the `PreTrainedModel.generate()` method that applies a default generation configuration under the hood. The default configuration is also used when no custom configuration has been saved with the model.

When you load a model explicitly, you can inspect the generation configuration that comes with it through `model.generation_config`:

```python
>>> from transformers import AutoModelForCausalLM

>>> model = AutoModelForCausalLM.from_pretrained("distilgpt2")
>>> model.generation_config
GenerationConfig {
  "bos_token_id": 50256,
  "eos_token_id": 50256,
}
```

Printing out the `model.generation_config` reveals only the values that differ from the default generation configuration, and does not list any of the default values.

The default generation configuration limits the size of the output combined with the input prompt to a maximum of 20 tokens to avoid running into resource limitations. The default decoding strategy is greedy search, the simplest decoding strategy, which picks the token with the highest probability as the next token. For many tasks and small output sizes this works well. However, when used to generate longer outputs, greedy search can start producing highly repetitive results.

## Customize text generation

You can override any `generation_config` value by passing the parameters and their values directly to the [`generate`] method:

```python
>>> my_model.generate(**inputs, num_beams=4, do_sample=True)  # doctest: +SKIP
```

Even if the default decoding strategy mostly works for your task, you can still tweak a few things. Some of the commonly adjusted parameters include:

- `max_new_tokens`: the maximum number of tokens to generate. In other words, the size of the output sequence, not including the tokens in the prompt. As an alternative to using the output's length as a stopping criterion, you can choose to stop generation whenever the full generation exceeds some amount of time. To learn more, check [`StoppingCriteria`].
- `num_beams`: by specifying a number of beams higher than 1, you are effectively switching from greedy search to beam search. This strategy evaluates several hypotheses at each time step and eventually chooses the hypothesis that has the overall highest probability for the entire sequence. It has the advantage of identifying high-probability sequences that start with lower-probability initial tokens and would have been ignored by greedy search.
- `do_sample`: if set to `True`, this parameter enables decoding strategies such as multinomial sampling, beam-search multinomial sampling, Top-K sampling and Top-p sampling. All these strategies select the next token from the probability distribution over the entire vocabulary with various strategy-specific adjustments.
- `num_return_sequences`: the number of sequence candidates to return for each input. This option is only available for the decoding strategies that support multiple sequence candidates, e.g. variations of beam search and sampling. Decoding strategies like greedy search and contrastive search return a single output sequence. A sketch combining several of these parameters is shown below.
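The following is an illustrative sketch of several of these parameters used together, reusing the distilgpt2 checkpoint from above; the prompt is arbitrary and the generated text will vary between runs:

```python
# Illustrative only: combine several commonly adjusted generation parameters.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

inputs = tokenizer("The secret to a good story is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=30,       # cap the length of the generated continuation
    num_beams=4,             # keep 4 hypotheses per step (beam search)
    do_sample=True,          # sample within beam search instead of always taking the argmax
    num_return_sequences=2,  # return two candidates (must be <= num_beams here)
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```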
## Save a custom decoding strategy with your model If you would like to share your fine-tuned model with a specific generation configuration, you can: * Create a [`GenerationConfig`] class instance * Specify the decoding strategy parameters * Save your generation configuration with [`GenerationConfig.save_pretrained`], making sure to leave its `config_file_name` argument empty * Set `push_to_hub` to `True` to upload your config to the model's repo thon >>> from transformers import AutoModelForCausalLM, GenerationConfig >>> model = AutoModelForCausalLM.from_pretrained(""my_account/my_model"") # doctest: +SKIP >>> generation_config = GenerationConfig( max_new_tokens=50, do_sample=True, top_k=50, eos_token_id=model.config.eos_token_id ) >>> generation_config.save_pretrained(""my_account/my_model"", push_to_hub=True) # doctest: +SKIP You can also store several generation configurations in a single directory, making use of the `config_file_name` argument in [`GenerationConfig.save_pretrained`]. You can later instantiate them with [`GenerationConfig.from_pretrained`]. This is useful if you want to store several generation configurations for a single model (e.g. one for creative text generation with sampling, and one for summarization with beam search). You must have the right Hub permissions to add configuration files to a model. thon >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig >>> tokenizer = AutoTokenizer.from_pretrained(""t5-small"") >>> model = AutoModelForSeq2SeqLM.from_pretrained(""t5-small"") >>> translation_generation_config = GenerationConfig( num_beams=4, early_stopping=True, decoder_start_token_id=0, eos_token_id=model.config.eos_token_id, pad_token=model.config.pad_token_id, ) >>> # Tip: add `push_to_hub=True` to push to the Hub >>> translation_generation_config.save_pretrained(""/tmp"", ""translation_generation_config.json"") >>> # You could then use the named generation config file to parameterize generation >>> generation_config = GenerationConfig.from_pretrained(""/tmp"", ""translation_generation_config.json"") >>> inputs = tokenizer(""translate English to French: Configuration files are easy to use!"", return_tensors=""pt"") >>> outputs = model.generate(**inputs, generation_config=generation_config) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['Les fichiers de configuration sont faciles à utiliser!'] ## Streaming The `generate()` supports streaming, through its `streamer` input. The `streamer` input is compatible with any instance from a class that has the following methods: `put()` and `end()`. Internally, `put()` is used to push new tokens and `end()` is used to flag the end of text generation. The API for the streamer classes is still under development and may change in the future. In practice, you can craft your own streaming class for all sorts of purposes! We also have basic streaming classes ready for you to use. For example, you can use the [`TextStreamer`] class to stream the output of `generate()` into your screen, one word at a time: thon >>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer >>> tok = AutoTokenizer.from_pretrained(""gpt2"") >>> model = AutoModelForCausalLM.from_pretrained(""gpt2"") >>> inputs = tok([""An increasing sequence: one,""], return_tensors=""pt"") >>> streamer = TextStreamer(tok) >>> # Despite returning the usual output, the streamer will also print the generated text to stdout. 
>>> _ = model.generate(**inputs, streamer=streamer, max_new_tokens=20) An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven, ## Decoding strategies Certain combinations of the `generate()` parameters, and ultimately `generation_config`, can be used to enable specific decoding strategies. If you are new to this concept, we recommend reading [this blog post that illustrates how common decoding strategies work](https://huggingface.co/blog/how-to-generate). Here, we'll show some of the parameters that control the decoding strategies and illustrate how you can use them. ### Greedy Search [`generate`] uses greedy search decoding by default so you don't have to pass any parameters to enable it. This means the parameters `num_beams` is set to 1 and `do_sample=False`. thon >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> prompt = ""I look forward to"" >>> checkpoint = ""distilgpt2"" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors=""pt"") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['I look forward to seeing you all again!\n\n\n\n\n\n\n\n\n\n\n'] ### Contrastive search The contrastive search decoding strategy was proposed in the 2022 paper [A Contrastive Framework for Neural Text Generation](https://arxiv.org/abs/2202.06417). It demonstrates superior results for generating non-repetitive yet coherent long outputs. To learn how contrastive search works, check out [this blog post](https://huggingface.co/blog/introducing-csearch). The two main parameters that enable and control the behavior of contrastive search are `penalty_alpha` and `top_k`: thon >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> checkpoint = ""gpt2-large"" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> prompt = ""Hugging Face Company is"" >>> inputs = tokenizer(prompt, return_tensors=""pt"") >>> outputs = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=100) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Hugging Face Company is a family owned and operated business. We pride ourselves on being the best in the business and our customer service is second to none.\n\nIf you have any questions about our products or services, feel free to contact us at any time. We look forward to hearing from you!'] ### Multinomial sampling As opposed to greedy search that always chooses a token with the highest probability as the next token, multinomial sampling (also called ancestral sampling) randomly selects the next token based on the probability distribution over the entire vocabulary given by the model. Every token with a non-zero probability has a chance of being selected, thus reducing the risk of repetition. To enable multinomial sampling set `do_sample=True` and `num_beams=1`. 
thon >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> set_seed(0) # For reproducibility >>> checkpoint = ""gpt2-large"" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> prompt = ""Today was an amazing day because"" >>> inputs = tokenizer(prompt, return_tensors=""pt"") >>> outputs = model.generate(**inputs, do_sample=True, num_beams=1, max_new_tokens=100) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Today was an amazing day because when you go to the World Cup and you don\'t, or when you don\'t get invited, that\'s a terrible feeling.""'] ### Beam-search decoding Unlike greedy search, beam-search decoding keeps several hypotheses at each time step and eventually chooses the hypothesis that has the overall highest probability for the entire sequence. This has the advantage of identifying high-probability sequences that start with lower probability initial tokens and would've been ignored by the greedy search. To enable this decoding strategy, specify the `num_beams` (aka number of hypotheses to keep track of) that is greater than 1. thon >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> prompt = ""It is astonishing how one can"" >>> checkpoint = ""gpt2-medium"" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors=""pt"") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs, num_beams=5, max_new_tokens=50) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['It is astonishing how one can have such a profound impact on the lives of so many people in such a short period of time.""\n\nHe added: ""I am very proud of the work I have been able to do in the last few years.\n\n""I have'] ### Beam-search multinomial sampling As the name implies, this decoding strategy combines beam search with multinomial sampling. You need to specify the `num_beams` greater than 1, and set `do_sample=True` to use this decoding strategy. thon >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, set_seed >>> set_seed(0) # For reproducibility >>> prompt = ""translate English to German: The house is wonderful."" >>> checkpoint = ""t5-small"" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors=""pt"") >>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs, num_beams=5, do_sample=True) >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'Das Haus ist wunderbar.' ### Diverse beam search decoding The diverse beam search decoding strategy is an extension of the beam search strategy that allows for generating a more diverse set of beam sequences to choose from. To learn how it works, refer to [Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models](https://arxiv.org/pdf/1610.02424.pdf). This approach has three main parameters: `num_beams`, `num_beam_groups`, and `diversity_penalty`. The diversity penalty ensures the outputs are distinct across groups, and beam search is used within each group. 
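The full Pegasus example follows this sketch. Because grouped beam search keeps several groups of beams, it is often useful to return more than one finished sequence and compare the candidates; here is a minimal sketch using `num_return_sequences` (the `t5-small` checkpoint and the parameter values are illustrative only):

```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> checkpoint = "t5-small"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

>>> inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
>>> # num_beams must be divisible by num_beam_groups, and num_return_sequences may not exceed num_beams.
>>> outputs = model.generate(
...     **inputs,
...     num_beams=4,
...     num_beam_groups=2,
...     diversity_penalty=1.0,
...     num_return_sequences=2,
...     max_new_tokens=20,
... )
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)  # doctest: +SKIP
```

Each returned string corresponds to one of the kept hypotheses, so you can inspect how much the groups actually differ.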
thon >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> checkpoint = ""google/pegasus-xsum"" >>> prompt = ( ""The Permaculture Design Principles are a set of universal design principles "" ""that can be applied to any location, climate and culture, and they allow us to design "" ""the most efficient and sustainable human habitation and food production systems. "" ""Permaculture is a design system that encompasses a wide variety of disciplines, such "" ""as ecology, landscape design, environmental science and energy conservation, and the "" ""Permaculture design principles are drawn from these various disciplines. Each individual "" ""design principle itself embodies a complete conceptual framework based on sound "" ""scientific principles. When we bring all these separate principles together, we can "" ""create a design system that both looks at whole systems, the parts that these systems "" ""consist of, and how those parts interact with each other to create a complex, dynamic, "" ""living system. Each design principle serves as a tool that allows us to integrate all "" ""the separate parts of a design, referred to as elements, into a functional, synergistic, "" ""whole system, where the elements harmoniously interact and work together in the most "" ""efficient way possible."" ) >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors=""pt"") >>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs, num_beams=5, num_beam_groups=5, max_new_tokens=30, diversity_penalty=1.0) >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'The Design Principles are a set of universal design principles that can be applied to any location, climate and culture, and they allow us to design the' This guide illustrates the main parameters that enable various decoding strategies. More advanced parameters exist for the [`generate`] method, which gives you even further control over the [`generate`] method's behavior. For the complete list of the available parameters, refer to the [API documentation](./main_classes/text_generation.md). ### Assisted Decoding Assisted decoding is a modification of the decoding strategies above that uses an assistant model with the same tokenizer (ideally a much smaller model) to greedily generate a few candidate tokens. The main model then validates the candidate tokens in a single forward pass, which speeds up the decoding process. Currently, only greedy search and sampling are supported with assisted decoding, and doesn't support batched inputs. To learn more about assisted decoding, check [this blog post](https://huggingface.co/blog/assisted-generation). To enable assisted decoding, set the `assistant_model` argument with a model. thon >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> prompt = ""Alice and Bob"" >>> checkpoint = ""EleutherAI/pythia-1.4b-deduped"" >>> assistant_checkpoint = ""EleutherAI/pythia-160m-deduped"" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors=""pt"") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint) >>> outputs = model.generate(**inputs, assistant_model=assistant_model) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Alice and Bob are sitting in a bar. 
Alice is drinking a beer and Bob is drinking a'] When using assisted decoding with sampling methods, you can use the `temperature` argument to control the randomness just like in multinomial sampling. However, in assisted decoding, reducing the temperature will help improving latency. thon >>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed >>> set_seed(42) # For reproducibility >>> prompt = ""Alice and Bob"" >>> checkpoint = ""EleutherAI/pythia-1.4b-deduped"" >>> assistant_checkpoint = ""EleutherAI/pythia-160m-deduped"" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors=""pt"") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint) >>> outputs = model.generate(**inputs, assistant_model=assistant_model, do_sample=True, temperature=0.5) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Alice and Bob are going to the same party. It is a small party, in a small'] " multilingual.md," # Multilingual models for inference [[open-in-colab]] There are several multilingual models in 🤗 Transformers, and their inference usage differs from monolingual models. Not *all* multilingual model usage is different though. Some models, like [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased), can be used just like a monolingual model. This guide will show you how to use multilingual models whose usage differs for inference. ## XLM XLM has ten different checkpoints, only one of which is monolingual. The nine remaining model checkpoints can be split into two categories: the checkpoints that use language embeddings and those that don't. ### XLM with language embeddings The following XLM models use language embeddings to specify the language used at inference: - `xlm-mlm-ende-1024` (Masked language modeling, English-German) - `xlm-mlm-enfr-1024` (Masked language modeling, English-French) - `xlm-mlm-enro-1024` (Masked language modeling, English-Romanian) - `xlm-mlm-xnli15-1024` (Masked language modeling, XNLI languages) - `xlm-mlm-tlm-xnli15-1024` (Masked language modeling + translation, XNLI languages) - `xlm-clm-enfr-1024` (Causal language modeling, English-French) - `xlm-clm-ende-1024` (Causal language modeling, English-German) Language embeddings are represented as a tensor of the same shape as the `input_ids` passed to the model. The values in these tensors depend on the language used and are identified by the tokenizer's `lang2id` and `id2lang` attributes. In this example, load the `xlm-clm-enfr-1024` checkpoint (Causal language modeling, English-French): >>> import torch >>> from transformers import XLMTokenizer, XLMWithLMHeadModel >>> tokenizer = XLMTokenizer.from_pretrained(""xlm-clm-enfr-1024"") >>> model = XLMWithLMHeadModel.from_pretrained(""xlm-clm-enfr-1024"") The `lang2id` attribute of the tokenizer displays this model's languages and their ids: >>> print(tokenizer.lang2id) {'en': 0, 'fr': 1} Next, create an example input: >>> input_ids = torch.tensor([tokenizer.encode(""Wikipedia was used to"")]) # batch size of 1 Set the language id as `""en""` and use it to define the language embedding. The language embedding is a tensor filled with `0` since that is the language id for English. This tensor should be the same size as `input_ids`. 
>>> language_id = tokenizer.lang2id[""en""] # 0 >>> langs = torch.tensor([language_id] * input_ids.shape[1]) # torch.tensor([0, 0, 0, , 0]) >>> # We reshape it to be of size (batch_size, sequence_length) >>> langs = langs.view(1, -1) # is now of shape [1, sequence_length] (we have a batch size of 1) Now you can pass the `input_ids` and language embedding to the model: >>> outputs = model(input_ids, langs=langs) The [run_generation.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation/run_generation.py) script can generate text with language embeddings using the `xlm-clm` checkpoints. ### XLM without language embeddings The following XLM models do not require language embeddings during inference: - `xlm-mlm-17-1280` (Masked language modeling, 17 languages) - `xlm-mlm-100-1280` (Masked language modeling, 100 languages) These models are used for generic sentence representations, unlike the previous XLM checkpoints. ## BERT The following BERT models can be used for multilingual tasks: - `bert-base-multilingual-uncased` (Masked language modeling + Next sentence prediction, 102 languages) - `bert-base-multilingual-cased` (Masked language modeling + Next sentence prediction, 104 languages) These models do not require language embeddings during inference. They should identify the language from the context and infer accordingly. ## XLM-RoBERTa The following XLM-RoBERTa models can be used for multilingual tasks: - `xlm-roberta-base` (Masked language modeling, 100 languages) - `xlm-roberta-large` (Masked language modeling, 100 languages) XLM-RoBERTa was trained on 2.5TB of newly created and cleaned CommonCrawl data in 100 languages. It provides strong gains over previously released multilingual models like mBERT or XLM on downstream tasks like classification, sequence labeling, and question answering. ## M2M100 The following M2M100 models can be used for multilingual translation: - `facebook/m2m100_418M` (Translation) - `facebook/m2m100_1.2B` (Translation) In this example, load the `facebook/m2m100_418M` checkpoint to translate from Chinese to English. You can set the source language in the tokenizer: >>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer >>> en_text = ""Do not meddle in the affairs of wizards, for they are subtle and quick to anger."" >>> chinese_text = ""不要插手巫師的事務, 因為他們是微妙的, 很快就會發怒."" >>> tokenizer = M2M100Tokenizer.from_pretrained(""facebook/m2m100_418M"", src_lang=""zh"") >>> model = M2M100ForConditionalGeneration.from_pretrained(""facebook/m2m100_418M"") Tokenize the text: >>> encoded_zh = tokenizer(chinese_text, return_tensors=""pt"") M2M100 forces the target language id as the first generated token to translate to the target language. Set the `forced_bos_token_id` to `en` in the `generate` method to translate to English: >>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id(""en"")) >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) 'Do not interfere with the matters of the witches, because they are delicate and will soon be angry.' 
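The same M2M100 checkpoint can translate between any of its supported language pairs; only `src_lang` and the forced target language id change. Here is a minimal sketch of the reverse direction, English to Chinese (the inputs are illustrative):

```python
>>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

>>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en")
>>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

>>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger."
>>> encoded_en = tokenizer(en_text, return_tensors="pt")
>>> # Force the first generated token to be the Chinese language id.
>>> generated_tokens = model.generate(**encoded_en, forced_bos_token_id=tokenizer.get_lang_id("zh"))
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)  # doctest: +SKIP
```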
## MBart The following MBart models can be used for multilingual translation: - `facebook/mbart-large-50-one-to-many-mmt` (One-to-many multilingual machine translation, 50 languages) - `facebook/mbart-large-50-many-to-many-mmt` (Many-to-many multilingual machine translation, 50 languages) - `facebook/mbart-large-50-many-to-one-mmt` (Many-to-one multilingual machine translation, 50 languages) - `facebook/mbart-large-50` (Multilingual translation, 50 languages) - `facebook/mbart-large-cc25` In this example, load the `facebook/mbart-large-50-many-to-many-mmt` checkpoint to translate Finnish to English. You can set the source language in the tokenizer: >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> en_text = ""Do not meddle in the affairs of wizards, for they are subtle and quick to anger."" >>> fi_text = ""Älä sekaannu velhojen asioihin, sillä ne ovat hienovaraisia ja nopeasti vihaisia."" >>> tokenizer = AutoTokenizer.from_pretrained(""facebook/mbart-large-50-many-to-many-mmt"", src_lang=""fi_FI"") >>> model = AutoModelForSeq2SeqLM.from_pretrained(""facebook/mbart-large-50-many-to-many-mmt"") Tokenize the Finnish text: >>> encoded_fi = tokenizer(fi_text, return_tensors=""pt"") MBart forces the target language id as the first generated token to translate to the target language. Set the `forced_bos_token_id` to `en_XX` in the `generate` method to translate to English: >>> generated_tokens = model.generate(**encoded_fi, forced_bos_token_id=tokenizer.lang_code_to_id[""en_XX""]) >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) ""Don't interfere with the wizard's affairs, because they are subtle, will soon get angry."" If you are using the `facebook/mbart-large-50-many-to-one-mmt` checkpoint, you don't need to force the target language id as the first generated token; otherwise, the usage is the same. " community.md," # Community This page regroups resources around 🤗 Transformers developed by the community. ## Community resources: | Resource | Description | Author | |:----------|:-------------|------:| | [Hugging Face Transformers Glossary Flashcards](https://www.darigovresearch.com/huggingface-transformers-glossary-flashcards) | A set of flashcards based on the [Transformers Docs Glossary](glossary) that has been put into a form which can be easily learned/revised using [Anki ](https://apps.ankiweb.net/) an open source, cross platform app specifically designed for long term knowledge retention. See this [Introductory video on how to use the flashcards](https://www.youtube.com/watch?v=Dji_h7PILrw). | [Darigov Research](https://www.darigovresearch.com/) | ## Community notebooks: | Notebook | Description | Author | | |:----------|:-------------|:-------------|------:| | [Fine-tune a pre-trained Transformer to generate lyrics](https://github.com/AlekseyKorshuk/huggingartists) | How to generate lyrics in the style of your favorite artist by fine-tuning a GPT-2 model | [Aleksey Korshuk](https://github.com/AlekseyKorshuk) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb) | | [Train T5 in Tensorflow 2 ](https://github.com/snapthat/TF-T5-text-to-text) | How to train T5 for any task using Tensorflow 2.
This notebook demonstrates a Question & Answer task implemented in Tensorflow 2 using SQUAD | [Muhammad Harris](https://github.com/HarrisDePerceptron) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-Datasets%20Training.ipynb) | | [Train T5 on TPU](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) | How to train T5 on SQUAD with Transformers and Nlp | [Suraj Patil](https://github.com/patil-suraj) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb#scrollTo=QLGiFCDqvuil) | | [Fine-tune T5 for Classification and Multiple Choice](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | How to fine-tune T5 for classification and multiple choice tasks using a text-to-text format with PyTorch Lightning | [Suraj Patil](https://github.com/patil-suraj) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | | [Fine-tune DialoGPT on New Datasets and Languages](https://github.com/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | How to fine-tune the DialoGPT model on a new dataset for open-dialog conversational chatbots | [Nathan Cooper](https://github.com/ncoop57) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | | [Long Sequence Modeling with Reformer](https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | How to train on sequences as long as 500,000 tokens with Reformer | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | | [Fine-tune BART for Summarization](https://github.com/ohmeow/ohmeow_website/blob/master/posts/2021-05-25-mbart-sequence-classification-with-blurr.ipynb) | How to fine-tune BART for summarization with fastai using blurr | [Wayde Gilliam](https://ohmeow.com/) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ohmeow/ohmeow_website/blob/master/posts/2021-05-25-mbart-sequence-classification-with-blurr.ipynb) | | [Fine-tune a pre-trained Transformer on anyone's tweets](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | How to generate tweets in the style of your favorite Twitter account by fine-tuning a GPT-2 model | [Boris Dayma](https://github.com/borisdayma) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | | [Optimize 🤗 Hugging Face models with Weights & Biases](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | A complete tutorial showcasing W&B integration with Hugging Face | [Boris Dayma](https://github.com/borisdayma) | [![Open In 
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | | [Pretrain Longformer](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | How to build a ""long"" version of existing pretrained models | [Iz Beltagy](https://beltagy.net) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | | [Fine-tune Longformer for QA](https://github.com/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | How to fine-tune longformer model for QA task | [Suraj Patil](https://github.com/patil-suraj) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | | [Evaluate Model with 🤗nlp](https://github.com/patrickvonplaten/notebooks/blob/master/How_to_evaluate_Longformer_on_TriviaQA_using_NLP.ipynb) | How to evaluate longformer on TriviaQA with `nlp` | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1m7eTGlPmLRgoPkkA7rkhQdZ9ydpmsdLE?usp=sharing) | | [Fine-tune T5 for Sentiment Span Extraction](https://github.com/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | How to fine-tune T5 for sentiment span extraction using a text-to-text format with PyTorch Lightning | [Lorenzo Ampil](https://github.com/enzoampil) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | | [Fine-tune DistilBert for Multiclass Classification](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb) | How to fine-tune DistilBert for multiclass classification with PyTorch | [Abhishek Kumar Mishra](https://github.com/abhimishra91) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb)| |[Fine-tune BERT for Multi-label Classification](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)|How to fine-tune BERT for multi-label classification using PyTorch|[Abhishek Kumar Mishra](https://github.com/abhimishra91) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)| |[Fine-tune T5 for Summarization](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb)|How to fine-tune T5 for summarization in PyTorch and track experiments with WandB|[Abhishek Kumar Mishra](https://github.com/abhimishra91) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb)| |[Speed up Fine-Tuning in Transformers with Dynamic Padding / 
Bucketing](https://github.com/ELS-RD/transformers-notebook/blob/master/Divide_Hugging_Face_Transformers_training_time_by_2_or_more.ipynb)|How to speed up fine-tuning by a factor of 2 using dynamic padding / bucketing|[Michael Benesty](https://github.com/pommedeterresautee) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1CBfRU1zbfu7-ijiOqAAQUA-RJaxfcJoO?usp=sharing)| |[Pretrain Reformer for Masked Language Modeling](https://github.com/patrickvonplaten/notebooks/blob/master/Reformer_For_Masked_LM.ipynb)| How to train a Reformer model with bi-directional self-attention layers | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1tzzh0i8PgDQGV3SMFUGxM7_gGae3K-uW?usp=sharing)| |[Expand and Fine Tune Sci-BERT](https://github.com/lordtt13/word-embeddings/blob/master/COVID-19%20Research%20Data/COVID-SciBERT.ipynb)| How to increase vocabulary of a pretrained SciBERT model from AllenAI on the CORD dataset and pipeline it. | [Tanmay Thakur](https://github.com/lordtt13) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1rqAR40goxbAfez1xvF3hBJphSCsvXmh8)| |[Fine Tune BlenderBotSmall for Summarization using the Trainer API](https://github.com/lordtt13/transformers-experiments/blob/master/Custom%20Tasks/fine-tune-blenderbot_small-for-summarization.ipynb)| How to fine-tune BlenderBotSmall for summarization on a custom dataset, using the Trainer API. | [Tanmay Thakur](https://github.com/lordtt13) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/19Wmupuls7mykSGyRN_Qo6lPQhgp56ymq?usp=sharing)| |[Fine-tune Electra and interpret with Integrated Gradients](https://github.com/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb) | How to fine-tune Electra for sentiment analysis and interpret predictions with Captum Integrated Gradients | [Eliza Szczechla](https://elsanns.github.io) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb)| |[fine-tune a non-English GPT-2 Model with Trainer class](https://github.com/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb) | How to fine-tune a non-English GPT-2 Model with Trainer class | [Philipp Schmid](https://www.philschmid.de) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb)| |[Fine-tune a DistilBERT Model for Multi Label Classification task](https://github.com/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb) | How to fine-tune a DistilBERT Model for Multi Label Classification task | [Dhaval Taunk](https://github.com/DhavalTaunk08) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb)| |[Fine-tune ALBERT for sentence-pair classification](https://github.com/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb) | How to fine-tune 
an ALBERT model or another BERT-based model for the sentence-pair classification task | [Nadir El Manouzi](https://github.com/NadirEM) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb)| |[Fine-tune Roberta for sentiment analysis](https://github.com/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb) | How to fine-tune a Roberta model for sentiment analysis | [Dhaval Taunk](https://github.com/DhavalTaunk08) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb)| |[Evaluating Question Generation Models](https://github.com/flexudy-pipe/qugeev) | How accurate are the answers to questions generated by your seq2seq transformer model? | [Pascal Zoleko](https://github.com/zolekode) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1bpsSqCQU-iw_5nNoRm_crPq6FRuJthq_?usp=sharing)| |[Classify text with DistilBERT and Tensorflow](https://github.com/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb) | How to fine-tune DistilBERT for text classification in TensorFlow | [Peter Bayerle](https://github.com/peterbayerle) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb)| |[Leverage BERT for Encoder-Decoder Summarization on CNN/Dailymail](https://github.com/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb) | How to warm-start a *EncoderDecoderModel* with a *bert-base-uncased* checkpoint for summarization on CNN/Dailymail | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb)| |[Leverage RoBERTa for Encoder-Decoder Summarization on BBC XSum](https://github.com/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb) | How to warm-start a shared *EncoderDecoderModel* with a *roberta-base* checkpoint for summarization on BBC/XSum | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb)| |[Fine-tune TAPAS on Sequential Question Answering (SQA)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) | How to fine-tune *TapasForQuestionAnswering* with a *tapas-base* checkpoint on the Sequential Question Answering (SQA) dataset | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb)| |[Evaluate TAPAS on Table Fact Checking (TabFact)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb) | How to evaluate a fine-tuned *TapasForSequenceClassification* with a *tapas-base-finetuned-tabfact* checkpoint using a 
combination of the 🤗 datasets and 🤗 transformers libraries | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb)| |[Fine-tuning mBART for translation](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb) | How to fine-tune mBART using Seq2SeqTrainer for Hindi to English translation | [Vasudev Gupta](https://github.com/vasudevgupta7) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb)| |[Fine-tune LayoutLM on FUNSD (a form understanding dataset)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb) | How to fine-tune *LayoutLMForTokenClassification* on the FUNSD dataset for information extraction from scanned documents | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb)| |[Fine-Tune DistilGPT2 and Generate Text](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb) | How to fine-tune DistilGPT2 and generate text | [Aakash Tripathi](https://github.com/tripathiaakash) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb)| |[Fine-Tune LED on up to 8K tokens](https://github.com/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb) | How to fine-tune LED on pubmed for long-range summarization | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb)| |[Evaluate LED on Arxiv](https://github.com/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb) | How to effectively evaluate LED on long-range summarization | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb)| |[Fine-tune LayoutLM on RVL-CDIP (a document image classification dataset)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb) | How to fine-tune *LayoutLMForSequenceClassification* on the RVL-CDIP dataset for scanned document classification | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb)| |[Wav2Vec2 CTC decoding with GPT2 adjustment](https://github.com/voidful/huggingface_notebook/blob/main/xlsr_gpt.ipynb) | How to decode 
CTC sequence with language model adjustment | [Eric Lam](https://github.com/voidful) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1e_z5jQHYbO2YKEaUgzb1ww1WwiAyydAj?usp=sharing)| |[Fine-tune BART for summarization in two languages with Trainer class](https://github.com/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb) | How to fine-tune BART for summarization in two languages with Trainer class | [Eliza Szczechla](https://github.com/elsanns) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb)| |[Evaluate Big Bird on Trivia QA](https://github.com/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb) | How to evaluate BigBird on long document question answering on Trivia QA | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb)| | [Create video captions using Wav2Vec2](https://github.com/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | How to create YouTube captions from any video by transcribing the audio with Wav2Vec | [Niklas Muennighoff](https://github.com/Muennighoff) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | | [Fine-tune the Vision Transformer on CIFAR-10 using PyTorch Lightning](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using HuggingFace Transformers, Datasets and PyTorch Lightning | [Niels Rogge](https://github.com/nielsrogge) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | | [Fine-tune the Vision Transformer on CIFAR-10 using the 🤗 Trainer](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using HuggingFace Transformers, Datasets and the 🤗 Trainer | [Niels Rogge](https://github.com/nielsrogge) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | | [Evaluate LUKE on Open Entity, an entity typing dataset](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | How to evaluate *LukeForEntityClassification* on the Open Entity dataset | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | | [Evaluate LUKE on TACRED, a relation extraction 
dataset](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | How to evaluate *LukeForEntityPairClassification* on the TACRED dataset | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | | [Evaluate LUKE on CoNLL-2003, an important NER benchmark](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | How to evaluate *LukeForEntitySpanClassification* on the CoNLL-2003 dataset | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | | [Evaluate BigBird-Pegasus on PubMed dataset](https://github.com/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) | How to evaluate *BigBirdPegasusForConditionalGeneration* on PubMed dataset | [Vasudev Gupta](https://github.com/vasudevgupta7) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) | | [Speech Emotion Classification with Wav2Vec2](https://github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) | How to leverage a pretrained Wav2Vec2 model for Emotion Classification on the MEGA dataset | [Mehrdad Farahani](https://github.com/m3hrdadfi) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) | | [Detect objects in an image with DETR](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) | How to use a trained *DetrForObjectDetection* model to detect objects in an image and visualize attention | [Niels Rogge](https://github.com/NielsRogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) | | [Fine-tune DETR on a custom object detection dataset](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) | How to fine-tune *DetrForObjectDetection* on a custom object detection dataset | [Niels Rogge](https://github.com/NielsRogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) | | [Finetune T5 for Named Entity Recognition](https://github.com/ToluClassics/Notebooks/blob/main/T5_Ner_Finetuning.ipynb) | How to fine-tune *T5* on a Named Entity Recognition Task | [Ogundepo Odunayo](https://github.com/ToluClassics) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1obr78FY_cBmWY5ODViCmzdY6O1KB65Vc?usp=sharing) | " task_summary.md," # What 🤗 Transformers can do 🤗 Transformers is a library of pretrained state-of-the-art models for natural language processing (NLP), computer vision, and audio 
and speech processing tasks. Not only does the library contain Transformer models, but it also has non-Transformer models like modern convolutional networks for computer vision tasks. If you look at some of the most popular consumer products today, like smartphones, apps, and televisions, odds are that some kind of deep learning technology is behind it. Want to remove a background object from a picture taken by your smartphone? This is an example of a panoptic segmentation task (don't worry if you don't know what this means yet, we'll describe it in the following sections!). This page provides an overview of the different speech and audio, computer vision, and NLP tasks that can be solved with the 🤗 Transformers library in just three lines of code! ## Audio Audio and speech processing tasks are a little different from the other modalities mainly because audio as an input is a continuous signal. Unlike text, a raw audio waveform can't be neatly split into discrete chunks the way a sentence can be divided into words. To get around this, the raw audio signal is typically sampled at regular intervals. If you take more samples within an interval, the sampling rate is higher, and the audio more closely resembles the original audio source. Previous approaches preprocessed the audio to extract useful features from it. It is now more common to start audio and speech processing tasks by directly feeding the raw audio waveform to a feature encoder to extract an audio representation. This simplifies the preprocessing step and allows the model to learn the most essential features. ### Audio classification Audio classification is a task that labels audio data from a predefined set of classes. It is a broad category with many specific applications, some of which include: * acoustic scene classification: label audio with a scene label (""office"", ""beach"", ""stadium"") * acoustic event detection: label audio with a sound event label (""car horn"", ""whale calling"", ""glass breaking"") * tagging: label audio containing multiple sounds (birdsongs, speaker identification in a meeting) * music classification: label music with a genre label (""metal"", ""hip-hop"", ""country"") >>> from transformers import pipeline >>> classifier = pipeline(task=""audio-classification"", model=""superb/hubert-base-superb-er"") >>> preds = classifier(""https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac"") >>> preds = [{""score"": round(pred[""score""], 4), ""label"": pred[""label""]} for pred in preds] >>> preds [{'score': 0.4532, 'label': 'hap'}, {'score': 0.3622, 'label': 'sad'}, {'score': 0.0943, 'label': 'neu'}, {'score': 0.0903, 'label': 'ang'}] ### Automatic speech recognition Automatic speech recognition (ASR) transcribes speech into text. It is one of the most common audio tasks due partly to speech being such a natural form of human communication. Today, ASR systems are embedded in ""smart"" technology products like speakers, phones, and cars. We can ask our virtual assistants to play music, set reminders, and tell us the weather. But one of the key challenges Transformer architectures have helped with is in low-resource languages. By pretraining on large amounts of speech data, finetuning the model on only one hour of labeled speech data in a low-resource language can still produce high-quality results compared to previous ASR systems trained on 100x more labeled data. 
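The example that follows this sketch transcribes a short clip with Whisper. For recordings longer than the model's input window, the same pipeline can also transcribe audio in fixed-length chunks via `chunk_length_s`; the sketch below is illustrative (the 30-second chunk length is not a recommendation):

```python
>>> from transformers import pipeline

>>> # Split long audio into 30-second windows and stitch the transcriptions back together.
>>> transcriber = pipeline(task="automatic-speech-recognition", model="openai/whisper-small", chunk_length_s=30)
>>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")  # doctest: +SKIP
```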
>>> from transformers import pipeline >>> transcriber = pipeline(task=""automatic-speech-recognition"", model=""openai/whisper-small"") >>> transcriber(""https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac"") {'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'} ## Computer vision One of the first and earliest successful computer vision tasks was recognizing images of zip code numbers using a [convolutional neural network (CNN)](glossary#convolution). An image is composed of pixels, and each pixel has a numerical value. This makes it easy to represent an image as a matrix of pixel values. Each particular combination of pixel values describes the colors of an image. Two general ways computer vision tasks can be solved are: 1. Use convolutions to learn the hierarchical features of an image from low-level features to high-level abstract things. 2. Split an image into patches and use a Transformer to gradually learn how each image patch is related to each other to form an image. Unlike the bottom-up approach favored by a CNN, this is kind of like starting out with a blurry image and then gradually bringing it into focus. ### Image classification Image classification labels an entire image from a predefined set of classes. Like most classification tasks, there are many practical use cases for image classification, some of which include: * healthcare: label medical images to detect disease or monitor patient health * environment: label satellite images to monitor deforestation, inform wildland management or detect wildfires * agriculture: label images of crops to monitor plant health or satellite images for land use monitoring * ecology: label images of animal or plant species to monitor wildlife populations or track endangered species >>> from transformers import pipeline >>> classifier = pipeline(task=""image-classification"") >>> preds = classifier( ""https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"" ) >>> preds = [{""score"": round(pred[""score""], 4), ""label"": pred[""label""]} for pred in preds] >>> print(*preds, sep=""\n"") {'score': 0.4335, 'label': 'lynx, catamount'} {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'} {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'} {'score': 0.0239, 'label': 'Egyptian cat'} {'score': 0.0229, 'label': 'tiger cat'} ### Object detection Unlike image classification, object detection identifies multiple objects within an image and the objects' positions in an image (defined by the bounding box). 
Some example applications of object detection include: * self-driving vehicles: detect everyday traffic objects such as other vehicles, pedestrians, and traffic lights * remote sensing: disaster monitoring, urban planning, and weather forecasting * defect detection: detect cracks or structural damage in buildings, and manufacturing defects >>> from transformers import pipeline >>> detector = pipeline(task=""object-detection"") >>> preds = detector( ""https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"" ) >>> preds = [{""score"": round(pred[""score""], 4), ""label"": pred[""label""], ""box"": pred[""box""]} for pred in preds] >>> preds [{'score': 0.9865, 'label': 'cat', 'box': {'xmin': 178, 'ymin': 154, 'xmax': 882, 'ymax': 598}}] ### Image segmentation Image segmentation is a pixel-level task that assigns every pixel in an image to a class. It differs from object detection, which uses bounding boxes to label and predict objects in an image because segmentation is more granular. Segmentation can detect objects at a pixel-level. There are several types of image segmentation: * instance segmentation: in addition to labeling the class of an object, it also labels each distinct instance of an object (""dog-1"", ""dog-2"") * panoptic segmentation: a combination of semantic and instance segmentation; it labels each pixel with a semantic class **and** each distinct instance of an object Segmentation tasks are helpful in self-driving vehicles to create a pixel-level map of the world around them so they can navigate safely around pedestrians and other vehicles. It is also useful for medical imaging, where the task's finer granularity can help identify abnormal cells or organ features. Image segmentation can also be used in ecommerce to virtually try on clothes or create augmented reality experiences by overlaying objects in the real world through your camera. >>> from transformers import pipeline >>> segmenter = pipeline(task=""image-segmentation"") >>> preds = segmenter( ""https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"" ) >>> preds = [{""score"": round(pred[""score""], 4), ""label"": pred[""label""]} for pred in preds] >>> print(*preds, sep=""\n"") {'score': 0.9879, 'label': 'LABEL_184'} {'score': 0.9973, 'label': 'snow'} {'score': 0.9972, 'label': 'cat'} ### Depth estimation Depth estimation predicts the distance of each pixel in an image from the camera. This computer vision task is especially important for scene understanding and reconstruction. For example, in self-driving cars, vehicles need to understand how far objects like pedestrians, traffic signs, and other vehicles are to avoid obstacles and collisions. Depth information is also helpful for constructing 3D representations from 2D images and can be used to create high-quality 3D representations of biological structures or buildings. 
There are two approaches to depth estimation: * stereo: depths are estimated by comparing two images of the same scene from slightly different angles * monocular: depths are estimated from a single image >>> from transformers import pipeline >>> depth_estimator = pipeline(task=""depth-estimation"") >>> preds = depth_estimator( ""https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"" ) ## Natural language processing NLP tasks are among the most common types of tasks because text is such a natural way for us to communicate. To get text into a format recognized by a model, it needs to be tokenized. This means dividing a sequence of text into separate words or subwords (tokens) and then converting these tokens into numbers. As a result, you can represent a sequence of text as a sequence of numbers, and once you have a sequence of numbers, it can be input into a model to solve all sorts of NLP tasks! ### Text classification Like classification tasks in any modality, text classification labels a sequence of text (it can be sentence-level, a paragraph, or a document) from a predefined set of classes. There are many practical applications for text classification, some of which include: * sentiment analysis: label text according to some polarity like `positive` or `negative` which can inform and support decision-making in fields like politics, finance, and marketing * content classification: label text according to some topic to help organize and filter information in news and social media feeds (`weather`, `sports`, `finance`, etc.) >>> from transformers import pipeline >>> classifier = pipeline(task=""sentiment-analysis"") >>> preds = classifier(""Hugging Face is the best thing since sliced bread!"") >>> preds = [{""score"": round(pred[""score""], 4), ""label"": pred[""label""]} for pred in preds] >>> preds [{'score': 0.9991, 'label': 'POSITIVE'}] ### Token classification In any NLP task, text is preprocessed by separating the sequence of text into individual words or subwords. These are known as [tokens](/glossary#token). Token classification assigns each token a label from a predefined set of classes. Two common types of token classification are: * named entity recognition (NER): label a token according to an entity category like organization, person, location or date. NER is especially popular in biomedical settings, where it can label genes, proteins, and drug names. * part-of-speech tagging (POS): label a token according to its part-of-speech like noun, verb, or adjective. POS is useful for helping translation systems understand how two identical words are grammatically different (bank as a noun versus bank as a verb).
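The example that follows this sketch returns one prediction per subword token (notice how `Hu` and `##gging` appear as separate entries). As a minimal sketch, the `aggregation_strategy` option of the token classification pipeline groups subwords back into whole entities; the pipeline's default NER model is assumed here:

```python
>>> from transformers import pipeline

>>> # Group subword pieces into whole entities (e.g. "Hugging Face" instead of "Hu", "##gging", "Face").
>>> classifier = pipeline(task="ner", aggregation_strategy="simple")
>>> classifier("Hugging Face is a French company based in New York City.")  # doctest: +SKIP
```

With aggregation enabled, each returned dictionary describes a full entity span (for example an `entity_group` such as `ORG`) rather than an individual subword.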
>>> from transformers import pipeline >>> classifier = pipeline(task=""ner"") >>> preds = classifier(""Hugging Face is a French company based in New York City."") >>> preds = [ { ""entity"": pred[""entity""], ""score"": round(pred[""score""], 4), ""index"": pred[""index""], ""word"": pred[""word""], ""start"": pred[""start""], ""end"": pred[""end""], } for pred in preds ] >>> print(*preds, sep=""\n"") {'entity': 'I-ORG', 'score': 0.9968, 'index': 1, 'word': 'Hu', 'start': 0, 'end': 2} {'entity': 'I-ORG', 'score': 0.9293, 'index': 2, 'word': '##gging', 'start': 2, 'end': 7} {'entity': 'I-ORG', 'score': 0.9763, 'index': 3, 'word': 'Face', 'start': 8, 'end': 12} {'entity': 'I-MISC', 'score': 0.9983, 'index': 6, 'word': 'French', 'start': 18, 'end': 24} {'entity': 'I-LOC', 'score': 0.999, 'index': 10, 'word': 'New', 'start': 42, 'end': 45} {'entity': 'I-LOC', 'score': 0.9987, 'index': 11, 'word': 'York', 'start': 46, 'end': 50} {'entity': 'I-LOC', 'score': 0.9992, 'index': 12, 'word': 'City', 'start': 51, 'end': 55} ### Question answering Question answering is another token-level task that returns an answer to a question, sometimes with context (open-domain) and other times without context (closed-domain). This task happens whenever we ask a virtual assistant something like whether a restaurant is open. It can also provide customer or technical support and help search engines retrieve the relevant information you're asking for. There are two common types of question answering: * extractive: given a question and some context, the answer is a span of text from the context the model must extract * abstractive: given a question and some context, the answer is generated from the context; this approach is handled by the [`Text2TextGenerationPipeline`] instead of the [`QuestionAnsweringPipeline`] shown below >>> from transformers import pipeline >>> question_answerer = pipeline(task=""question-answering"") >>> preds = question_answerer( question=""What is the name of the repository?"", context=""The name of the repository is huggingface/transformers"", ) >>> print( f""score: {round(preds['score'], 4)}, start: {preds['start']}, end: {preds['end']}, answer: {preds['answer']}"" ) score: 0.9327, start: 30, end: 54, answer: huggingface/transformers ### Summarization Summarization creates a shorter version of a text from a longer one while trying to preserve most of the meaning of the original document. Summarization is a sequence-to-sequence task; it outputs a shorter text sequence than the input. There are a lot of long-form documents that can be summarized to help readers quickly understand the main points. Legislative bills, legal and financial documents, patents, and scientific papers are a few examples of documents that could be summarized to save readers time and serve as a reading aid. Like question answering, there are two types of summarization: * extractive: identify and extract the most important sentences from the original text * abstractive: generate the target summary (which may include new words not in the input document) from the original text; the [`SummarizationPipeline`] uses the abstractive approach >>> from transformers import pipeline >>> summarizer = pipeline(task=""summarization"") >>> summarizer( ""In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. 
For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles."" ) [{'summary_text': ' The Transformer is the first sequence transduction model based entirely on attention . It replaces the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention . For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers .'}] ### Translation Translation converts a sequence of text in one language to another. It is important in helping people from different backgrounds communicate with each other, help translate content to reach wider audiences, and even be a learning tool to help people learn a new language. Along with summarization, translation is a sequence-to-sequence task, meaning the model receives an input sequence and returns a target output sequence. In the early days, translation models were mostly monolingual, but recently, there has been increasing interest in multilingual models that can translate between many pairs of languages. >>> from transformers import pipeline >>> text = ""translate English to French: Hugging Face is a community-based open-source platform for machine learning."" >>> translator = pipeline(task=""translation"", model=""t5-small"") >>> translator(text) [{'translation_text': ""Hugging Face est une tribune communautaire de l'apprentissage des machines.""}] ### Language modeling Language modeling is a task that predicts a word in a sequence of text. It has become a very popular NLP task because a pretrained language model can be finetuned for many other downstream tasks. Lately, there has been a lot of interest in large language models (LLMs) which demonstrate zero- or few-shot learning. This means the model can solve tasks it wasn't explicitly trained to do! Language models can be used to generate fluent and convincing text, though you need to be careful since the text may not always be accurate. There are two types of language modeling: * causal: the model's objective is to predict the next token in a sequence, and future tokens are masked >>> from transformers import pipeline >>> prompt = ""Hugging Face is a community-based open-source platform for machine learning."" >>> generator = pipeline(task=""text-generation"") >>> generator(prompt) # doctest: +SKIP * masked: the model's objective is to predict a masked token in a sequence with full access to the tokens in the sequence >>> text = ""Hugging Face is a community-based open-source for machine learning."" >>> fill_mask = pipeline(task=""fill-mask"") >>> preds = fill_mask(text, top_k=1) >>> preds = [ { ""score"": round(pred[""score""], 4), ""token"": pred[""token""], ""token_str"": pred[""token_str""], ""sequence"": pred[""sequence""], } for pred in preds ] >>> preds [{'score': 0.2236, 'token': 1761, 'token_str': ' platform', 'sequence': 'Hugging Face is a community-based open-source platform for machine learning.'}] ## Multimodal Multimodal tasks require a model to process multiple data modalities (text, image, audio, video) to solve a particular problem. 
Image captioning is an example of a multimodal task where the model takes an image as input and outputs a sequence of text describing the image or some properties of the image. Although multimodal models work with different data types or modalities, internally, the preprocessing steps help the model convert all the data types into embeddings (vectors or list of numbers that holds meaningful information about the data). For a task like image captioning, the model learns relationships between image embeddings and text embeddings. ### Document question answering Document question answering is a task that answers natural language questions from a document. Unlike a token-level question answering task which takes text as input, document question answering takes an image of a document as input along with a question about the document and returns an answer. Document question answering can be used to parse structured documents and extract key information from it. In the example below, the total amount and change due can be extracted from a receipt. >>> from transformers import pipeline >>> from PIL import Image >>> import requests >>> url = ""https://datasets-server.huggingface.co/assets/hf-internal-testing/example-documents/--/hf-internal-testing--example-documents/test/2/image/image.jpg"" >>> image = Image.open(requests.get(url, stream=True).raw) >>> doc_question_answerer = pipeline(""document-question-answering"", model=""magorshunov/layoutlm-invoices"") >>> preds = doc_question_answerer( question=""What is the total amount?"", image=image, ) >>> preds [{'score': 0.8531, 'answer': '17,000', 'start': 4, 'end': 4}] Hopefully, this page has given you some more background information about all the types of tasks in each modality and the practical importance of each one. In the next [section](tasks_explained), you'll learn **how** 🤗 Transformers work to solve these tasks." chat_templating.md," # Templates for Chat Models ## Introduction An increasingly common use case for LLMs is **chat**. In a chat context, rather than continuing a single string of text (as is the case with a standard language model), the model instead continues a conversation that consists of one or more **messages**, each of which includes a **role**, like ""user"" or ""assistant"", as well as message text. Much like tokenization, different models expect very different input formats for chat. This is the reason we added **chat templates** as a feature. Chat templates are part of the tokenizer. They specify how to convert conversations, represented as lists of messages, into a single tokenizable string in the format that the model expects. Let's make this concrete with a quick example using the `BlenderBot` model. BlenderBot has an extremely simple default template, which mostly just adds whitespace between rounds of dialogue: thon >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""facebook/blenderbot-400M-distill"") >>> chat = [ {""role"": ""user"", ""content"": ""Hello, how are you?""}, {""role"": ""assistant"", ""content"": ""I'm doing great. How can I help you today?""}, {""role"": ""user"", ""content"": ""I'd like to show off how chat templating works!""}, ] >>> tokenizer.apply_chat_template(chat, tokenize=False) "" Hello, how are you? I'm doing great. How can I help you today? I'd like to show off how chat templating works!"" Notice how the entire chat is condensed into a single string. If we use `tokenize=True`, which is the default setting, that string will also be tokenized for us. 
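As a quick sketch (reusing the `chat` list from above), calling the method with its default `tokenize=True` returns the token IDs of that formatted string rather than the string itself:

```python
>>> token_ids = tokenizer.apply_chat_template(chat)  # tokenize=True is the default
>>> # token_ids is a list of input IDs; decoding it recovers the formatted chat string
>>> tokenizer.decode(token_ids)
```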
To see a more complex template in action, though, let's use the `mistralai/Mistral-7B-Instruct-v0.1` model. thon >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""mistralai/Mistral-7B-Instruct-v0.1"") >>> chat = [ {""role"": ""user"", ""content"": ""Hello, how are you?""}, {""role"": ""assistant"", ""content"": ""I'm doing great. How can I help you today?""}, {""role"": ""user"", ""content"": ""I'd like to show off how chat templating works!""}, ] >>> tokenizer.apply_chat_template(chat, tokenize=False) ""[INST] Hello, how are you? [/INST]I'm doing great. How can I help you today? [INST] I'd like to show off how chat templating works! [/INST]"" Note that this time, the tokenizer has added the control tokens [INST] and [/INST] to indicate the start and end of user messages (but not assistant messages!). Mistral-instruct was trained with these tokens, but BlenderBot was not. ## How do I use chat templates? As you can see in the example above, chat templates are easy to use. Simply build a list of messages, with `role` and `content` keys, and then pass it to the [`~PreTrainedTokenizer.apply_chat_template`] method. Once you do that, you'll get output that's ready to go! When using chat templates as input for model generation, it's also a good idea to use `add_generation_prompt=True` to add a [generation prompt](#what-are-generation-prompts). Here's an example of preparing input for `model.generate()`, using the `Zephyr` assistant model: thon from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = ""HuggingFaceH4/zephyr-7b-beta"" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint) # You may want to use bfloat16 and/or move to GPU here messages = [ { ""role"": ""system"", ""content"": ""You are a friendly chatbot who always responds in the style of a pirate"", }, {""role"": ""user"", ""content"": ""How many helicopters can a human eat in one sitting?""}, ] tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors=""pt"") print(tokenizer.decode(tokenized_chat[0])) This will yield a string in the input format that Zephyr expects. ```text <|system|> You are a friendly chatbot who always responds in the style of a pirate <|user|> How many helicopters can a human eat in one sitting? <|assistant|> Now that our input is formatted correctly for Zephyr, we can use the model to generate a response to the user's question: thon outputs = model.generate(tokenized_chat, max_new_tokens=128) print(tokenizer.decode(outputs[0])) This will yield: ```text <|system|> You are a friendly chatbot who always responds in the style of a pirate <|user|> How many helicopters can a human eat in one sitting? <|assistant|> Matey, I'm afraid I must inform ye that humans cannot eat helicopters. Helicopters are not food, they are flying machines. Food is meant to be eaten, like a hearty plate o' grog, a savory bowl o' stew, or a delicious loaf o' bread. But helicopters, they be for transportin' and movin' around, not for eatin'. So, I'd say none, me hearties. None at all. Arr, 'twas easy after all! ## Is there an automated pipeline for chat? Yes, there is: [`ConversationalPipeline`]. This pipeline is designed to make it easy to use chat models. 
Let's try the `Zephyr` example again, but this time using the pipeline: thon from transformers import pipeline pipe = pipeline(""conversational"", ""HuggingFaceH4/zephyr-7b-beta"") messages = [ { ""role"": ""system"", ""content"": ""You are a friendly chatbot who always responds in the style of a pirate"", }, {""role"": ""user"", ""content"": ""How many helicopters can a human eat in one sitting?""}, ] print(pipe(messages)) ```text Conversation id: 76d886a0-74bd-454e-9804-0467041a63dc system: You are a friendly chatbot who always responds in the style of a pirate user: How many helicopters can a human eat in one sitting? assistant: Matey, I'm afraid I must inform ye that humans cannot eat helicopters. Helicopters are not food, they are flying machines. Food is meant to be eaten, like a hearty plate o' grog, a savory bowl o' stew, or a delicious loaf o' bread. But helicopters, they be for transportin' and movin' around, not for eatin'. So, I'd say none, me hearties. None at all. [`ConversationalPipeline`] will take care of all the details of tokenization and calling `apply_chat_template` for you - once the model has a chat template, all you need to do is initialize the pipeline and pass it the list of messages! ## What are ""generation prompts""? You may have noticed that the `apply_chat_template` method has an `add_generation_prompt` argument. This argument tells the template to add tokens that indicate the start of a bot response. For example, consider the following chat: thon messages = [ {""role"": ""user"", ""content"": ""Hi there!""}, {""role"": ""assistant"", ""content"": ""Nice to meet you!""}, {""role"": ""user"", ""content"": ""Can I ask a question?""} ] Here's what this will look like without a generation prompt, using the ChatML template we saw in the Zephyr example: thon tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=False) """"""<|im_start|>user Hi there!<|im_end|> <|im_start|>assistant Nice to meet you!<|im_end|> <|im_start|>user Can I ask a question?<|im_end|> """""" And here's what it looks like **with** a generation prompt: thon tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) """"""<|im_start|>user Hi there!<|im_end|> <|im_start|>assistant Nice to meet you!<|im_end|> <|im_start|>user Can I ask a question?<|im_end|> <|im_start|>assistant """""" Note that this time, we've added the tokens that indicate the start of a bot response. This ensures that when the model generates text it will write a bot response instead of doing something unexpected, like continuing the user's message. Remember, chat models are still just language models - they're trained to continue text, and chat is just a special kind of text to them! You need to guide them with the appropriate control tokens so they know what they're supposed to be doing. Not all models require generation prompts. Some models, like BlenderBot and LLaMA, don't have any special tokens before bot responses. In these cases, the `add_generation_prompt` argument will have no effect. The exact effect that `add_generation_prompt` has will depend on the template being used. ## Can I use chat templates in training? Yes! We recommend that you apply the chat template as a preprocessing step for your dataset. After this, you can simply continue like any other language model training task. When training, you should usually set `add_generation_prompt=False`, because the added tokens to prompt an assistant response will not be helpful during training. 
Let's see an example: thon from transformers import AutoTokenizer from datasets import Dataset tokenizer = AutoTokenizer.from_pretrained(""HuggingFaceH4/zephyr-7b-beta"") chat1 = [ {""role"": ""user"", ""content"": ""Which is bigger, the moon or the sun?""}, {""role"": ""assistant"", ""content"": ""The sun.""} ] chat2 = [ {""role"": ""user"", ""content"": ""Which is bigger, a virus or a bacterium?""}, {""role"": ""assistant"", ""content"": ""A bacterium.""} ] dataset = Dataset.from_dict({""chat"": [chat1, chat2]}) dataset = dataset.map(lambda x: {""formatted_chat"": tokenizer.apply_chat_template(x[""chat""], tokenize=False, add_generation_prompt=False)}) print(dataset['formatted_chat'][0]) And we get: ```text <|user|> Which is bigger, the moon or the sun? <|assistant|> The sun. From here, just continue training like you would with a standard language modelling task, using the `formatted_chat` column. ## Advanced: How do chat templates work? The chat template for a model is stored on the `tokenizer.chat_template` attribute. If no chat template is set, the default template for that model class is used instead. Let's take a look at the template for `BlenderBot`: thon >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""facebook/blenderbot-400M-distill"") >>> tokenizer.default_chat_template ""{% for message in messages %}{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}{{ message['content'] }}{% if not loop.last %}{{ ' ' }}{% endif %}{% endfor %}{{ eos_token }}"" That's kind of intimidating. Let's add some newlines and indentation to make it more readable. Note that the first newline after each block as well as any preceding whitespace before a block are ignored by default, using the Jinja `trim_blocks` and `lstrip_blocks` flags. However, be cautious - although leading whitespace on each line is stripped, spaces between blocks on the same line are not. We strongly recommend checking that your template isn't printing extra spaces where it shouldn't be! {% for message in messages %} {% if message['role'] == 'user' %} {{ ' ' }} {% endif %} {{ message['content'] }} {% if not loop.last %} {{ ' ' }} {% endif %} {% endfor %} {{ eos_token }} If you've never seen one of these before, this is a [Jinja template](https://jinja.palletsprojects.com/en/3.1.x/templates/). Jinja is a templating language that allows you to write simple code that generates text. In many ways, the code and syntax resembles Python. In pure Python, this template would look something like this: thon for idx, message in enumerate(messages): if message['role'] == 'user': print(' ') print(message['content']) if not idx == len(messages) - 1: # Check for the last message in the conversation print(' ') print(eos_token) Effectively, the template does three things: 1. For each message, if the message is a user message, add a blank space before it, otherwise print nothing. 2. Add the message content 3. If the message is not the last message, add two spaces after it. After the final message, print the EOS token. This is a pretty simple template - it doesn't add any control tokens, and it doesn't support ""system"" messages, which are a common way to give the model directives about how it should behave in the subsequent conversation. But Jinja gives you a lot of flexibility to do those things! 
Let's see a Jinja template that can format inputs similarly to the way LLaMA formats them (note that the real LLaMA template includes handling for default system messages and slightly different system message handling in general - don't use this one in your actual code!) {% for message in messages %} {% if message['role'] == 'user' %} {{ bos_token + '[INST] ' + message['content'] + ' [/INST]' }} {% elif message['role'] == 'system' %} {{ '<>\\n' + message['content'] + '\\n<>\\n\\n' }} {% elif message['role'] == 'assistant' %} {{ ' ' + message['content'] + ' ' + eos_token }} {% endif %} {% endfor %} Hopefully if you stare at this for a little bit you can see what this template is doing - it adds specific tokens based on the ""role"" of each message, which represents who sent it. User, assistant and system messages are clearly distinguishable to the model because of the tokens they're wrapped in. ## Advanced: Adding and editing chat templates ### How do I create a chat template? Simple, just write a jinja template and set `tokenizer.chat_template`. You may find it easier to start with an existing template from another model and simply edit it for your needs! For example, we could take the LLaMA template above and add ""[ASST]"" and ""[/ASST]"" to assistant messages: {% for message in messages %} {% if message['role'] == 'user' %} {{ bos_token + '[INST] ' + message['content'].strip() + ' [/INST]' }} {% elif message['role'] == 'system' %} {{ '<>\\n' + message['content'].strip() + '\\n<>\\n\\n' }} {% elif message['role'] == 'assistant' %} {{ '[ASST] ' + message['content'] + ' [/ASST]' + eos_token }} {% endif %} {% endfor %} Now, simply set the `tokenizer.chat_template` attribute. Next time you use [`~PreTrainedTokenizer.apply_chat_template`], it will use your new template! This attribute will be saved in the `tokenizer_config.json` file, so you can use [`~utils.PushToHubMixin.push_to_hub`] to upload your new template to the Hub and make sure everyone's using the right template for your model! thon template = tokenizer.chat_template template = template.replace(""SYS"", ""SYSTEM"") # Change the system token tokenizer.chat_template = template # Set the new template tokenizer.push_to_hub(""model_name"") # Upload your new template to the Hub! The method [`~PreTrainedTokenizer.apply_chat_template`] which uses your chat template is called by the [`ConversationalPipeline`] class, so once you set the correct chat template, your model will automatically become compatible with [`ConversationalPipeline`]. ### What are ""default"" templates? Before the introduction of chat templates, chat handling was hardcoded at the model class level. For backwards compatibility, we have retained this class-specific handling as default templates, also set at the class level. If a model does not have a chat template set, but there is a default template for its model class, the `ConversationalPipeline` class and methods like `apply_chat_template` will use the class template instead. You can find out what the default template for your tokenizer is by checking the `tokenizer.default_chat_template` attribute. This is something we do purely for backward compatibility reasons, to avoid breaking any existing workflows. 
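For example, a minimal sketch of inspecting both attributes on a tokenizer (here BlenderBot, which, as shown earlier, relies on its class default):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
print(tokenizer.chat_template)          # None if no explicit template has been set on this checkpoint
print(tokenizer.default_chat_template)  # the class-level fallback template used in that case
```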
Even when the class template is appropriate for your model, we strongly recommend overriding the default template by setting the `chat_template` attribute explicitly to make it clear to users that your model has been correctly configured for chat, and to future-proof in case the default templates are ever altered or deprecated. ### What template should I use? When setting the template for a model that's already been trained for chat, you should ensure that the template exactly matches the message formatting that the model saw during training, or else you will probably experience performance degradation. This is true even if you're training the model further - you will probably get the best performance if you keep the chat tokens constant. This is very analogous to tokenization - you generally get the best performance for inference or fine-tuning when you precisely match the tokenization used during training. If you're training a model from scratch, or fine-tuning a base language model for chat, on the other hand, you have a lot of freedom to choose an appropriate template! LLMs are smart enough to learn to handle lots of different input formats. Our default template for models that don't have a class-specific template follows the [ChatML format](https://github.com/openai/openai-python/blob/main/chatml.md), and this is a good, flexible choice for many use-cases. It looks like this: {% for message in messages %} {{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}} {% endfor %} If you like this one, here it is in one-liner form, ready to copy into your code. The one-liner also includes handy support for ""generation prompts"" - see the next section for more! tokenizer.chat_template = ""{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"" This template wraps each message in `<|im_start|>` and `<|im_end|>` tokens, and simply writes the role as a string, which allows for flexibility in the roles you train with. The output looks like this: ```text <|im_start|>system You are a helpful chatbot that will do its best not to say anything so stupid that people tweet about it.<|im_end|> <|im_start|>user How are you?<|im_end|> <|im_start|>assistant I'm doing great!<|im_end|> The ""user"", ""system"" and ""assistant"" roles are the standard for chat, and we recommend using them when it makes sense, particularly if you want your model to operate well with [`ConversationalPipeline`]. However, you are not limited to these roles - templating is extremely flexible, and any string can be a role. ### I want to add some chat templates! How should I get started? If you have any chat models, you should set their `tokenizer.chat_template` attribute and test it using [`~PreTrainedTokenizer.apply_chat_template`], then push the updated tokenizer to the Hub. This applies even if you're not the model owner - if you're using a model with an empty chat template, or one that's still using the default class template, please open a [pull request](https://huggingface.co/docs/hub/repositories-pull-requests-discussions) to the model repository so that this attribute can be set properly! Once the attribute is set, that's it, you're done! 
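Concretely, the workflow might look like the following minimal sketch (the checkpoint name is a placeholder for your own model, and the template is the ChatML one-liner from the previous section):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("my-org/my-chat-model")  # placeholder checkpoint

# Set the template (here: the simple ChatML loop shown above)
tokenizer.chat_template = "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}"

# Test it on a small conversation before uploading
messages = [{"role": "user", "content": "Hello!"}]
print(tokenizer.apply_chat_template(messages, tokenize=False))

tokenizer.push_to_hub("my-org/my-chat-model")  # the template is saved in tokenizer_config.json
```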
`tokenizer.apply_chat_template` will now work correctly for that model, which means it is also automatically supported in places like `ConversationalPipeline`! By ensuring that models have this attribute, we can make sure that the whole community gets to use the full power of open-source models. Formatting mismatches have been haunting the field and silently harming performance for too long - it's time to put an end to them! ## Advanced: Template writing tips If you're unfamiliar with Jinja, we generally find that the easiest way to write a chat template is to first write a short Python script that formats messages the way you want, and then convert that script into a template. Remember that the template handler will receive the conversation history as a variable called `messages`. Each message is a dictionary with two keys, `role` and `content`. You will be able to access `messages` in your template just like you can in Python, which means you can loop over it with `{% for message in messages %}` or access individual messages with, for example, `{{ messages[0] }}`. You can also use the following tips to convert your code to Jinja: ### For loops For loops in Jinja look like this: {% for message in messages %} {{ message['content'] }} {% endfor %} Note that whatever's inside the {{ expression block }} will be printed to the output. You can use operators like `+` to combine strings inside expression blocks. ### If statements If statements in Jinja look like this: {% if message['role'] == 'user' %} {{ message['content'] }} {% endif %} Note how where Python uses whitespace to mark the beginnings and ends of `for` and `if` blocks, Jinja requires you to explicitly end them with `{% endfor %}` and `{% endif %}`. ### Special variables Inside your template, you will have access to the list of `messages`, but you can also access several other special variables. These include special tokens like `bos_token` and `eos_token`, as well as the `add_generation_prompt` variable that we discussed above. You can also use the `loop` variable to access information about the current loop iteration, for example using `{% if loop.last %}` to check if the current message is the last message in the conversation. Here's an example that puts these ideas together to add a generation prompt at the end of the conversation if add_generation_prompt is `True`: {% if loop.last and add_generation_prompt %} {{ bos_token + 'Assistant:\n' }} {% endif %} ### Notes on whitespace As much as possible, we've tried to get Jinja to ignore whitespace outside of {{ expressions }}. However, be aware that Jinja is a general-purpose templating engine, and it may treat whitespace between blocks on the same line as significant and print it to the output. We **strongly** recommend checking that your template isn't printing extra spaces where it shouldn't be before you upload it!" perf_torch_compile.md," # Optimize inference using torch.compile() This guide aims to provide a benchmark on the inference speed-ups introduced with [`torch.compile()`](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) for [computer vision models in 🤗 Transformers](https://huggingface.co/models?pipeline_tag=image-classification&library=transformers&sort=trending). ## Benefits of torch.compile Depending on the model and the GPU, `torch.compile()` yields up to 30% speed-up during inference. To use `torch.compile()`, simply install any version of `torch` above 2.0. 
Compiling a model takes time, so it is most useful when you compile the model only once and then reuse it for many inferences, rather than recompiling every time you run inference. To compile any computer vision model of your choice, call `torch.compile()` on the model as shown below:

```diff
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained(MODEL_ID).to("cuda")
+ model = torch.compile(model)
```

`compile()` comes with multiple compilation modes, which essentially trade compilation time against inference overhead: `max-autotune` takes longer to compile than `reduce-overhead` but results in faster inference, while the default mode is fastest to compile but not as efficient as `reduce-overhead` at inference time. In this guide, we used the default mode. You can learn more about the modes [here](https://pytorch.org/get-started/pytorch-2.0/#user-experience).

We benchmarked `torch.compile` with different computer vision models, tasks, types of hardware, and batch sizes on `torch` version 2.0.1.

## Benchmarking code

Below you can find the benchmarking code for each task. We warm up the GPU before inference and take the mean time of 300 inferences, using the same image each time.

### Image Classification with ViT

```python
import torch
from PIL import Image
import requests
from transformers import AutoImageProcessor, AutoModelForImageClassification

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224").to("cuda")
model = torch.compile(model)

processed_input = processor(image, return_tensors='pt').to(device="cuda")

with torch.no_grad():
    _ = model(**processed_input)
```

### Object Detection with DETR

```python
from transformers import AutoImageProcessor, AutoModelForObjectDetection

processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50").to("cuda")
model = torch.compile(model)

# DETR is a pure object detector, so the processor only takes the image (no text prompts)
inputs = processor(images=image, return_tensors="pt").to("cuda")

with torch.no_grad():
    _ = model(**inputs)
```

### Image Segmentation with Segformer

```python
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

processor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512").to("cuda")
model = torch.compile(model)

seg_inputs = processor(images=image, return_tensors="pt").to("cuda")

with torch.no_grad():
    _ = model(**seg_inputs)
```

Below you can find the list of the models we benchmarked.
**Image Classification**
- [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224)
- [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k)
- [facebook/convnext-large-224](https://huggingface.co/facebook/convnext-large-224)
- [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50)

**Image Segmentation**
- [nvidia/segformer-b0-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512)
- [facebook/mask2former-swin-tiny-coco-panoptic](https://huggingface.co/facebook/mask2former-swin-tiny-coco-panoptic)
- [facebook/maskformer-swin-base-ade](https://huggingface.co/facebook/maskformer-swin-base-ade)
- [google/deeplabv3_mobilenet_v2_1.0_513](https://huggingface.co/google/deeplabv3_mobilenet_v2_1.0_513)

**Object Detection**
- [google/owlvit-base-patch32](https://huggingface.co/google/owlvit-base-patch32)
- [facebook/detr-resnet-101](https://huggingface.co/facebook/detr-resnet-101)
- [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50)

Below you can find visualizations of inference durations with and without `torch.compile()`, along with the percentage improvement for each model, across different hardware and batch sizes.

![Duration Comparison on V100 with Batch Size of 1](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/v100_1_duration.png)

![Percentage Improvement on T4 with Batch Size of 4](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/T4_4_percentage.png)

Below you can find inference durations in milliseconds for each model with and without `compile()`. Note that OwlViT runs out of memory (OOM) at larger batch sizes.
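The durations were measured with a loop along the lines of the following sketch (an illustrative reconstruction, not the exact script used; the `benchmark` helper name and the warm-up count are our own choices): a few warm-up runs first, then the mean over 300 timed inferences with CUDA synchronization around the timed region.

```python
import time

import torch

def benchmark(model, inputs, n_warmup=5, n_runs=300):
    # Warm-up runs so compilation and kernel selection don't count toward the measurement
    with torch.no_grad():
        for _ in range(n_warmup):
            _ = model(**inputs)
    torch.cuda.synchronize()
    start = time.perf_counter()
    with torch.no_grad():
        for _ in range(n_runs):
            _ = model(**inputs)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_runs * 1000  # mean latency in milliseconds

# e.g. benchmark(model, processed_input) with the ViT setup from the benchmarking code above
```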
### A100 (batch size: 1) | **Task/Model** | **torch 2.0 - no compile** | **torch 2.0 - compile** | |:---:|:---:|:---:| | Image Classification/ViT | 9.325 | 7.584 | | Image Segmentation/Segformer | 11.759 | 10.500 | | Object Detection/OwlViT | 24.978 | 18.420 | | Image Classification/BeiT | 11.282 | 8.448 | | Object Detection/DETR | 34.619 | 19.040 | | Image Classification/ConvNeXT | 10.410 | 10.208 | | Image Classification/ResNet | 6.531 | 4.124 | | Image Segmentation/Mask2former | 60.188 | 49.117 | | Image Segmentation/Maskformer | 75.764 | 59.487 | | Image Segmentation/MobileNet | 8.583 | 3.974 | | Object Detection/Resnet-101 | 36.276 | 18.197 | | Object Detection/Conditional-DETR | 31.219 | 17.993 | ### A100 (batch size: 4) | **Task/Model** | **torch 2.0 - no compile** | **torch 2.0 - compile** | |:---:|:---:|:---:| | Image Classification/ViT | 14.832 | 14.499 | | Image Segmentation/Segformer | 18.838 | 16.476 | | Image Classification/BeiT | 13.205 | 13.048 | | Object Detection/DETR | 48.657 | 32.418| | Image Classification/ConvNeXT | 22.940 | 21.631 | | Image Classification/ResNet | 6.657 | 4.268 | | Image Segmentation/Mask2former | 74.277 | 61.781 | | Image Segmentation/Maskformer | 180.700 | 159.116 | | Image Segmentation/MobileNet | 14.174 | 8.515 | | Object Detection/Resnet-101 | 68.101 | 44.998 | | Object Detection/Conditional-DETR | 56.470 | 35.552 | ### A100 (batch size: 16) | **Task/Model** | **torch 2.0 - no compile** | **torch 2.0 - compile** | |:---:|:---:|:---:| | Image Classification/ViT | 40.944 | 40.010 | | Image Segmentation/Segformer | 37.005 | 31.144 | | Image Classification/BeiT | 41.854 | 41.048 | | Object Detection/DETR | 164.382 | 161.902 | | Image Classification/ConvNeXT | 82.258 | 75.561 | | Image Classification/ResNet | 7.018 | 5.024 | | Image Segmentation/Mask2former | 178.945 | 154.814 | | Image Segmentation/Maskformer | 638.570 | 579.826 | | Image Segmentation/MobileNet | 51.693 | 30.310 | | Object Detection/Resnet-101 | 232.887 | 155.021 | | Object Detection/Conditional-DETR | 180.491 | 124.032 | ### V100 (batch size: 1) | **Task/Model** | **torch 2.0 - no compile** | **torch 2.0 - compile** | |:---:|:---:|:---:| | Image Classification/ViT | 10.495 | 6.00 | | Image Segmentation/Segformer | 13.321 | 5.862 | | Object Detection/OwlViT | 25.769 | 22.395 | | Image Classification/BeiT | 11.347 | 7.234 | | Object Detection/DETR | 33.951 | 19.388 | | Image Classification/ConvNeXT | 11.623 | 10.412 | | Image Classification/ResNet | 6.484 | 3.820 | | Image Segmentation/Mask2former | 64.640 | 49.873 | | Image Segmentation/Maskformer | 95.532 | 72.207 | | Image Segmentation/MobileNet | 9.217 | 4.753 | | Object Detection/Resnet-101 | 52.818 | 28.367 | | Object Detection/Conditional-DETR | 39.512 | 20.816 | ### V100 (batch size: 4) | **Task/Model** | **torch 2.0 - no compile** | **torch 2.0 - compile** | |:---:|:---:|:---:| | Image Classification/ViT | 15.181 | 14.501 | | Image Segmentation/Segformer | 16.787 | 16.188 | | Image Classification/BeiT | 15.171 | 14.753 | | Object Detection/DETR | 88.529 | 64.195 | | Image Classification/ConvNeXT | 29.574 | 27.085 | | Image Classification/ResNet | 6.109 | 4.731 | | Image Segmentation/Mask2former | 90.402 | 76.926 | | Image Segmentation/Maskformer | 234.261 | 205.456 | | Image Segmentation/MobileNet | 24.623 | 14.816 | | Object Detection/Resnet-101 | 134.672 | 101.304 | | Object Detection/Conditional-DETR | 97.464 | 69.739 | ### V100 (batch size: 16) | **Task/Model** | **torch 2.0 - no compile** | **torch 2.0 - compile** | 
|:---:|:---:|:---:| | Image Classification/ViT | 52.209 | 51.633 | | Image Segmentation/Segformer | 61.013 | 55.499 | | Image Classification/BeiT | 53.938 | 53.581 | | Object Detection/DETR | OOM | OOM | | Image Classification/ConvNeXT | 109.682 | 100.771 | | Image Classification/ResNet | 14.857 | 12.089 | | Image Segmentation/Mask2former | 249.605 | 222.801 | | Image Segmentation/Maskformer | 831.142 | 743.645 | | Image Segmentation/MobileNet | 93.129 | 55.365 | | Object Detection/Resnet-101 | 482.425 | 361.843 | | Object Detection/Conditional-DETR | 344.661 | 255.298 | ### T4 (batch size: 1) | **Task/Model** | **torch 2.0 - no compile** | **torch 2.0 - compile** | |:---:|:---:|:---:| | Image Classification/ViT | 16.520 | 15.786 | | Image Segmentation/Segformer | 16.116 | 14.205 | | Object Detection/OwlViT | 53.634 | 51.105 | | Image Classification/BeiT | 16.464 | 15.710 | | Object Detection/DETR | 73.100 | 53.99 | | Image Classification/ConvNeXT | 32.932 | 30.845 | | Image Classification/ResNet | 6.031 | 4.321 | | Image Segmentation/Mask2former | 79.192 | 66.815 | | Image Segmentation/Maskformer | 200.026 | 188.268 | | Image Segmentation/MobileNet | 18.908 | 11.997 | | Object Detection/Resnet-101 | 106.622 | 82.566 | | Object Detection/Conditional-DETR | 77.594 | 56.984 | ### T4 (batch size: 4) | **Task/Model** | **torch 2.0 - no compile** | **torch 2.0 - compile** | |:---:|:---:|:---:| | Image Classification/ViT | 43.653 | 43.626 | | Image Segmentation/Segformer | 45.327 | 42.445 | | Image Classification/BeiT | 52.007 | 51.354 | | Object Detection/DETR | 277.850 | 268.003 | | Image Classification/ConvNeXT | 119.259 | 105.580 | | Image Classification/ResNet | 13.039 | 11.388 | | Image Segmentation/Mask2former | 201.540 | 184.670 | | Image Segmentation/Maskformer | 764.052 | 711.280 | | Image Segmentation/MobileNet | 74.289 | 48.677 | | Object Detection/Resnet-101 | 421.859 | 357.614 | | Object Detection/Conditional-DETR | 289.002 | 226.945 | ### T4 (batch size: 16) | **Task/Model** | **torch 2.0 - no compile** | **torch 2.0 - compile** | |:---:|:---:|:---:| | Image Classification/ViT | 163.914 | 160.907 | | Image Segmentation/Segformer | 192.412 | 163.620 | | Image Classification/BeiT | 188.978 | 187.976 | | Object Detection/DETR | OOM | OOM | | Image Classification/ConvNeXT | 422.886 | 388.078 | | Image Classification/ResNet | 44.114 | 37.604 | | Image Segmentation/Mask2former | 756.337 | 695.291 | | Image Segmentation/Maskformer | 2842.940 | 2656.88 | | Image Segmentation/MobileNet | 299.003 | 201.942 | | Object Detection/Resnet-101 | 1619.505 | 1262.758 | | Object Detection/Conditional-DETR | 1137.513 | 897.390| ## PyTorch Nightly We also benchmarked on PyTorch nightly (2.1.0dev, find the wheel [here](https://download.pytorch.org/whl/nightly/cu118)) and observed improvement in latency both for uncompiled and compiled models. 
### A100 | **Task/Model** | **Batch Size** | **torch 2.0 - no compile** | **torch 2.0 - compile** | |:---:|:---:|:---:|:---:| | Image Classification/BeiT | Unbatched | 12.462 | 6.954 | | Image Classification/BeiT | 4 | 14.109 | 12.851 | | Image Classification/BeiT | 16 | 42.179 | 42.147 | | Object Detection/DETR | Unbatched | 30.484 | 15.221 | | Object Detection/DETR | 4 | 46.816 | 30.942 | | Object Detection/DETR | 16 | 163.749 | 163.706 | ### T4 | **Task/Model** | **Batch Size** | **torch 2.0 - no compile** | **torch 2.0 - compile** | |:---:|:---:|:---:|:---:| | Image Classification/BeiT | Unbatched | 14.408 | 14.052 | | Image Classification/BeiT | 4 | 47.381 | 46.604 | | Image Classification/BeiT | 16 | 42.179 | 42.147 | | Object Detection/DETR | Unbatched | 68.382 | 53.481 | | Object Detection/DETR | 4 | 269.615 | 204.785 | | Object Detection/DETR | 16 | OOM | OOM | ### V100 | **Task/Model** | **Batch Size** | **torch 2.0 - no compile** | **torch 2.0 - compile** | |:---:|:---:|:---:|:---:| | Image Classification/BeiT | Unbatched | 13.477 | 7.926 | | Image Classification/BeiT | 4 | 15.103 | 14.378 | | Image Classification/BeiT | 16 | 52.517 | 51.691 | | Object Detection/DETR | Unbatched | 28.706 | 19.077 | | Object Detection/DETR | 4 | 88.402 | 62.949| | Object Detection/DETR | 16 | OOM | OOM | ## Reduce Overhead We benchmarked `reduce-overhead` compilation mode for A100 and T4 in Nightly. ### A100 | **Task/Model** | **Batch Size** | **torch 2.0 - no compile** | **torch 2.0 - compile** | |:---:|:---:|:---:|:---:| | Image Classification/ConvNeXT | Unbatched | 11.758 | 7.335 | | Image Classification/ConvNeXT | 4 | 23.171 | 21.490 | | Image Classification/ResNet | Unbatched | 7.435 | 3.801 | | Image Classification/ResNet | 4 | 7.261 | 2.187 | | Object Detection/Conditional-DETR | Unbatched | 32.823 | 11.627 | | Object Detection/Conditional-DETR | 4 | 50.622 | 33.831 | | Image Segmentation/MobileNet | Unbatched | 9.869 | 4.244 | | Image Segmentation/MobileNet | 4 | 14.385 | 7.946 | ### T4 | **Task/Model** | **Batch Size** | **torch 2.0 - no compile** | **torch 2.0 - compile** | |:---:|:---:|:---:|:---:| | Image Classification/ConvNeXT | Unbatched | 32.137 | 31.84 | | Image Classification/ConvNeXT | 4 | 120.944 | 110.209 | | Image Classification/ResNet | Unbatched | 9.761 | 7.698 | | Image Classification/ResNet | 4 | 15.215 | 13.871 | | Object Detection/Conditional-DETR | Unbatched | 72.150 | 57.660 | | Object Detection/Conditional-DETR | 4 | 301.494 | 247.543 | | Image Segmentation/MobileNet | Unbatched | 22.266 | 19.339 | | Image Segmentation/MobileNet | 4 | 78.311 | 50.983 | " hpo_train.md," # Hyperparameter Search using Trainer API 🤗 Transformers provides a [`Trainer`] class optimized for training 🤗 Transformers models, making it easier to start training without manually writing your own training loop. The [`Trainer`] provides API for hyperparameter search. This doc shows how to enable it in example. ## Hyperparameter Search backend [`Trainer`] supports four hyperparameter search backends currently: [optuna](https://optuna.org/), [sigopt](https://sigopt.com/), [raytune](https://docs.ray.io/en/latest/tune/index.html) and [wandb](https://wandb.ai/site/sweeps). you should install them before using them as the hyperparameter search backend ```bash pip install optuna/sigopt/wandb/ray[tune] ## How to enable Hyperparameter search in example Define the hyperparameter search space, different backends need different format. 
For sigopt, see sigopt [object_parameter](https://docs.sigopt.com/ai-module-api-references/api_reference/objects/object_parameter), it's like following: >>> def sigopt_hp_space(trial): return [ {""bounds"": {""min"": 1e-6, ""max"": 1e-4}, ""name"": ""learning_rate"", ""type"": ""double""}, { ""categorical_values"": [""16"", ""32"", ""64"", ""128""], ""name"": ""per_device_train_batch_size"", ""type"": ""categorical"", }, ] For optuna, see optuna [object_parameter](https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/002_configurations.html#sphx-glr-tutorial-10-key-features-002-configurations-py), it's like following: >>> def optuna_hp_space(trial): return { ""learning_rate"": trial.suggest_float(""learning_rate"", 1e-6, 1e-4, log=True), ""per_device_train_batch_size"": trial.suggest_categorical(""per_device_train_batch_size"", [16, 32, 64, 128]), } Optuna provides multi-objective HPO. You can pass `direction` in `hyperparameter_search` and define your own compute_objective to return multiple objective values. The Pareto Front (`List[BestRun]`) will be returned in hyperparameter_search, you should refer to the test case `TrainerHyperParameterMultiObjectOptunaIntegrationTest` in [test_trainer](https://github.com/huggingface/transformers/blob/main/tests/trainer/test_trainer.py). It's like following >>> best_trials = trainer.hyperparameter_search( direction=[""minimize"", ""maximize""], backend=""optuna"", hp_space=optuna_hp_space, n_trials=20, compute_objective=compute_objective, ) For raytune, see raytune [object_parameter](https://docs.ray.io/en/latest/tune/api/search_space.html), it's like following: >>> def ray_hp_space(trial): return { ""learning_rate"": tune.loguniform(1e-6, 1e-4), ""per_device_train_batch_size"": tune.choice([16, 32, 64, 128]), } For wandb, see wandb [object_parameter](https://docs.wandb.ai/guides/sweeps/configuration), it's like following: >>> def wandb_hp_space(trial): return { ""method"": ""random"", ""metric"": {""name"": ""objective"", ""goal"": ""minimize""}, ""parameters"": { ""learning_rate"": {""distribution"": ""uniform"", ""min"": 1e-6, ""max"": 1e-4}, ""per_device_train_batch_size"": {""values"": [16, 32, 64, 128]}, }, } Define a `model_init` function and pass it to the [`Trainer`], as an example: >>> def model_init(trial): return AutoModelForSequenceClassification.from_pretrained( model_args.model_name_or_path, from_tf=bool("".ckpt"" in model_args.model_name_or_path), config=config, cache_dir=model_args.cache_dir, revision=model_args.model_revision, token=True if model_args.use_auth_token else None, ) Create a [`Trainer`] with your `model_init` function, training arguments, training and test datasets, and evaluation function: >>> trainer = Trainer( model=None, args=training_args, train_dataset=small_train_dataset, eval_dataset=small_eval_dataset, compute_metrics=compute_metrics, tokenizer=tokenizer, model_init=model_init, data_collator=data_collator, ) Call hyperparameter search, get the best trial parameters, backend could be `""optuna""`/`""sigopt""`/`""wandb""`/`""ray""`. direction can be`""minimize""` or `""maximize""`, which indicates whether to optimize greater or lower objective. You could define your own compute_objective function, if not defined, the default compute_objective will be called, and the sum of eval metric like f1 is returned as objective value. 
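For instance, a minimal custom `compute_objective` might look like the following sketch (it assumes your evaluation produces an `eval_f1` metric; adapt the key to your own setup):

```python
>>> def compute_objective(metrics):
...     # `metrics` is the evaluation metrics dictionary; return a single float to optimize
...     return metrics["eval_f1"]
```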
>>> best_trial = trainer.hyperparameter_search( direction=""maximize"", backend=""optuna"", hp_space=optuna_hp_space, n_trials=20, compute_objective=compute_objective, ) ## Hyperparameter search For DDP finetune Currently, Hyperparameter search for DDP is enabled for optuna and sigopt. Only the rank-zero process will generate the search trial and pass the argument to other ranks. " glossary.md," # Glossary This glossary defines general machine learning and 🤗 Transformers terms to help you better understand the documentation. ## A ### attention mask The attention mask is an optional argument used when batching sequences together. This argument indicates to the model which tokens should be attended to, and which should not. For example, consider these two sequences: thon >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained(""bert-base-cased"") >>> sequence_a = ""This is a short sequence."" >>> sequence_b = ""This is a rather long sequence. It is at least longer than the sequence A."" >>> encoded_sequence_a = tokenizer(sequence_a)[""input_ids""] >>> encoded_sequence_b = tokenizer(sequence_b)[""input_ids""] The encoded versions have different lengths: thon >>> len(encoded_sequence_a), len(encoded_sequence_b) (8, 19) Therefore, we can't put them together in the same tensor as-is. The first sequence needs to be padded up to the length of the second one, or the second one needs to be truncated down to the length of the first one. In the first case, the list of IDs will be extended by the padding indices. We can pass a list to the tokenizer and ask it to pad like this: thon >>> padded_sequences = tokenizer([sequence_a, sequence_b], padding=True) We can see that 0s have been added on the right of the first sentence to make it the same length as the second one: thon >>> padded_sequences[""input_ids""] [[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]] This can then be converted into a tensor in PyTorch or TensorFlow. The attention mask is a binary tensor indicating the position of the padded indices so that the model does not attend to them. For the [`BertTokenizer`], `1` indicates a value that should be attended to, while `0` indicates a padded value. This attention mask is in the dictionary returned by the tokenizer under the key ""attention_mask"": thon >>> padded_sequences[""attention_mask""] [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]] ### autoencoding models See [encoder models](#encoder-models) and [masked language modeling](#masked-language-modeling-mlm) ### autoregressive models See [causal language modeling](#causal-language-modeling) and [decoder models](#decoder-models) ## B ### backbone The backbone is the network (embeddings and layers) that outputs the raw hidden states or features. It is usually connected to a [head](#head) which accepts the features as its input to make a prediction. For example, [`ViTModel`] is a backbone without a specific head on top. Other models can also use [`VitModel`] as a backbone such as [DPT](model_doc/dpt). ## C ### causal language modeling A pretraining task where the model reads the texts in order and has to predict the next word. It's usually done by reading the whole sentence but using a mask inside the model to hide the future tokens at a certain timestep. 
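As an illustration (independent of any particular model), the causal mask for a 4-token sequence is a lower-triangular matrix: position `i` may only attend to positions up to and including `i`.

```python
import torch

seq_len = 4
causal_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
print(causal_mask)
# tensor([[ True, False, False, False],
#         [ True,  True, False, False],
#         [ True,  True,  True, False],
#         [ True,  True,  True,  True]])
```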
### channel Color images are made up of some combination of values in three channels - red, green, and blue (RGB) - and grayscale images only have one channel. In 🤗 Transformers, the channel can be the first or last dimension of an image's tensor: [`n_channels`, `height`, `width`] or [`height`, `width`, `n_channels`]. ### connectionist temporal classification (CTC) An algorithm which allows a model to learn without knowing exactly how the input and output are aligned; CTC calculates the distribution of all possible outputs for a given input and chooses the most likely output from it. CTC is commonly used in speech recognition tasks because speech doesn't always cleanly align with the transcript for a variety of reasons such as a speaker's different speech rates. ### convolution A type of layer in a neural network where the input matrix is multiplied element-wise by a smaller matrix (kernel or filter) and the values are summed up in a new matrix. This is known as a convolutional operation which is repeated over the entire input matrix. Each operation is applied to a different segment of the input matrix. Convolutional neural networks (CNNs) are commonly used in computer vision. ## D ### DataParallel (DP) Parallelism technique for training on multiple GPUs where the same setup is replicated multiple times, with each instance receiving a distinct data slice. The processing is done in parallel and all setups are synchronized at the end of each training step. Learn more about how DataParallel works [here](perf_train_gpu_many#dataparallel-vs-distributeddataparallel). ### decoder input IDs This input is specific to encoder-decoder models, and contains the input IDs that will be fed to the decoder. These inputs should be used for sequence to sequence tasks, such as translation or summarization, and are usually built in a way specific to each model. Most encoder-decoder models (BART, T5) create their `decoder_input_ids` on their own from the `labels`. In such models, passing the `labels` is the preferred way to handle training. Please check each model's docs to see how they handle these input IDs for sequence to sequence training. ### decoder models Also referred to as autoregressive models, decoder models involve a pretraining task (called causal language modeling) where the model reads the texts in order and has to predict the next word. It's usually done by reading the whole sentence with a mask to hide future tokens at a certain timestep. ### deep learning (DL) Machine learning algorithms which uses neural networks with several layers. ## E ### encoder models Also known as autoencoding models, encoder models take an input (such as text or images) and transform them into a condensed numerical representation called an embedding. Oftentimes, encoder models are pretrained using techniques like [masked language modeling](#masked-language-modeling-mlm), which masks parts of the input sequence and forces the model to create more meaningful representations. ## F ### feature extraction The process of selecting and transforming raw data into a set of features that are more informative and useful for machine learning algorithms. Some examples of feature extraction include transforming raw text into word embeddings and extracting important features such as edges or shapes from image/video data. ### feed forward chunking In each residual attention block in transformers the self-attention layer is usually followed by 2 feed forward layers. 
The intermediate embedding size of the feed forward layers is often bigger than the hidden size of the model (e.g., for `bert-base-uncased`). For an input of size `[batch_size, sequence_length]`, the memory required to store the intermediate feed forward embeddings `[batch_size, sequence_length, config.intermediate_size]` can account for a large fraction of the memory use. The authors of [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) noticed that since the computation is independent of the `sequence_length` dimension, it is mathematically equivalent to compute the output embeddings of both feed forward layers `[batch_size, config.hidden_size]_0, , [batch_size, config.hidden_size]_n` individually and concat them afterward to `[batch_size, sequence_length, config.hidden_size]` with `n = sequence_length`, which trades increased computation time against reduced memory use, but yields a mathematically **equivalent** result. For models employing the function [`apply_chunking_to_forward`], the `chunk_size` defines the number of output embeddings that are computed in parallel and thus defines the trade-off between memory and time complexity. If `chunk_size` is set to 0, no feed forward chunking is done. ### finetuned models Finetuning is a form of transfer learning which involves taking a pretrained model, freezing its weights, and replacing the output layer with a newly added [model head](#head). The model head is trained on your target dataset. See the [Fine-tune a pretrained model](https://huggingface.co/docs/transformers/training) tutorial for more details, and learn how to fine-tune models with 🤗 Transformers. ## H ### head The model head refers to the last layer of a neural network that accepts the raw hidden states and projects them onto a different dimension. There is a different model head for each task. For example: * [`GPT2ForSequenceClassification`] is a sequence classification head - a linear layer - on top of the base [`GPT2Model`]. * [`ViTForImageClassification`] is an image classification head - a linear layer on top of the final hidden state of the `CLS` token - on top of the base [`ViTModel`]. * [`Wav2Vec2ForCTC`] ia a language modeling head with [CTC](#connectionist-temporal-classification-(CTC)) on top of the base [`Wav2Vec2Model`]. ## I ### image patch Vision-based Transformers models split an image into smaller patches which are linearly embedded, and then passed as a sequence to the model. You can find the `patch_size` - or resolution - of the model in its configuration. ### inference Inference is the process of evaluating a model on new data after training is complete. See the [Pipeline for inference](https://huggingface.co/docs/transformers/pipeline_tutorial) tutorial to learn how to perform inference with 🤗 Transformers. ### input IDs The input ids are often the only required parameters to be passed to the model as input. They are token indices, numerical representations of tokens building the sequences that will be used as input by the model. Each tokenizer works differently but the underlying mechanism remains the same. Here's an example using the BERT tokenizer, which is a [WordPiece](https://arxiv.org/pdf/1609.08144.pdf) tokenizer: thon >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained(""bert-base-cased"") >>> sequence = ""A Titan RTX has 24GB of VRAM"" The tokenizer takes care of splitting the sequence into tokens available in the tokenizer vocabulary. 
thon >>> tokenized_sequence = tokenizer.tokenize(sequence) The tokens are either words or subwords. Here for instance, ""VRAM"" wasn't in the model vocabulary, so it's been split in ""V"", ""RA"" and ""M"". To indicate those tokens are not separate words but parts of the same word, a double-hash prefix is added for ""RA"" and ""M"": thon >>> print(tokenized_sequence) ['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M'] These tokens can then be converted into IDs which are understandable by the model. This can be done by directly feeding the sentence to the tokenizer, which leverages the Rust implementation of [🤗 Tokenizers](https://github.com/huggingface/tokenizers) for peak performance. thon >>> inputs = tokenizer(sequence) The tokenizer returns a dictionary with all the arguments necessary for its corresponding model to work properly. The token indices are under the key `input_ids`: thon >>> encoded_sequence = inputs[""input_ids""] >>> print(encoded_sequence) [101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102] Note that the tokenizer automatically adds ""special tokens"" (if the associated model relies on them) which are special IDs the model sometimes uses. If we decode the previous sequence of ids, thon >>> decoded_sequence = tokenizer.decode(encoded_sequence) we will see thon >>> print(decoded_sequence) [CLS] A Titan RTX has 24GB of VRAM [SEP] because this is the way a [`BertModel`] is going to expect its inputs. ## L ### labels The labels are an optional argument which can be passed in order for the model to compute the loss itself. These labels should be the expected prediction of the model: it will use the standard loss in order to compute the loss between its predictions and the expected value (the label). These labels are different according to the model head, for example: - For sequence classification models, ([`BertForSequenceClassification`]), the model expects a tensor of dimension `(batch_size)` with each value of the batch corresponding to the expected label of the entire sequence. - For token classification models, ([`BertForTokenClassification`]), the model expects a tensor of dimension `(batch_size, seq_length)` with each value corresponding to the expected label of each individual token. - For masked language modeling, ([`BertForMaskedLM`]), the model expects a tensor of dimension `(batch_size, seq_length)` with each value corresponding to the expected label of each individual token: the labels being the token ID for the masked token, and values to be ignored for the rest (usually -100). - For sequence to sequence tasks, ([`BartForConditionalGeneration`], [`MBartForConditionalGeneration`]), the model expects a tensor of dimension `(batch_size, tgt_seq_length)` with each value corresponding to the target sequences associated with each input sequence. During training, both BART and T5 will make the appropriate `decoder_input_ids` and decoder attention masks internally. They usually do not need to be supplied. This does not apply to models leveraging the Encoder-Decoder framework. - For image classification models, ([`ViTForImageClassification`]), the model expects a tensor of dimension `(batch_size)` with each value of the batch corresponding to the expected label of each individual image. 
- For semantic segmentation models, ([`SegformerForSemanticSegmentation`]), the model expects a tensor of dimension `(batch_size, height, width)` with each value of the batch corresponding to the expected label of each individual pixel. - For object detection models, ([`DetrForObjectDetection`]), the model expects a list of dictionaries with a `class_labels` and `boxes` key where each value of the batch corresponds to the expected label and number of bounding boxes of each individual image. - For automatic speech recognition models, ([`Wav2Vec2ForCTC`]), the model expects a tensor of dimension `(batch_size, target_length)` with each value corresponding to the expected label of each individual token. Each model's labels may be different, so be sure to always check the documentation of each model for more information about their specific labels! The base models ([`BertModel`]) do not accept labels, as these are the base transformer models, simply outputting features. ### large language models (LLM) A generic term that refers to transformer language models (GPT-3, BLOOM, OPT) that were trained on a large quantity of data. These models also tend to have a large number of learnable parameters (e.g. 175 billion for GPT-3). ## M ### masked language modeling (MLM) A pretraining task where the model sees a corrupted version of the texts, usually done by masking some tokens randomly, and has to predict the original text. ### multimodal A task that combines texts with another kind of inputs (for instance images). ## N ### Natural language generation (NLG) All tasks related to generating text (for instance, [Write With Transformers](https://transformer.huggingface.co/), translation). ### Natural language processing (NLP) A generic way to say ""deal with texts"". ### Natural language understanding (NLU) All tasks related to understanding what is in a text (for instance classifying the whole text, individual words). ## P ### pipeline A pipeline in 🤗 Transformers is an abstraction referring to a series of steps that are executed in a specific order to preprocess and transform data and return a prediction from a model. Some example stages found in a pipeline might be data preprocessing, feature extraction, and normalization. For more details, see [Pipelines for inference](https://huggingface.co/docs/transformers/pipeline_tutorial). ### PipelineParallel (PP) Parallelism technique in which the model is split up vertically (layer-level) across multiple GPUs, so that only one or several layers of the model are placed on a single GPU. Each GPU processes in parallel different stages of the pipeline and working on a small chunk of the batch. Learn more about how PipelineParallel works [here](perf_train_gpu_many#from-naive-model-parallelism-to-pipeline-parallelism). ### pixel values A tensor of the numerical representations of an image that is passed to a model. The pixel values have a shape of [`batch_size`, `num_channels`, `height`, `width`], and are generated from an image processor. ### pooling An operation that reduces a matrix into a smaller matrix, either by taking the maximum or average of the pooled dimension(s). Pooling layers are commonly found between convolutional layers to downsample the feature representation. ### position IDs Contrary to RNNs that have the position of each token embedded within them, transformers are unaware of the position of each token. Therefore, the position IDs (`position_ids`) are used by the model to identify each token's position in the list of tokens. 
They are an optional parameter. If no `position_ids` are passed to the model, the IDs are automatically created as absolute positional embeddings. Absolute positional embeddings are selected in the range `[0, config.max_position_embeddings - 1]`. Some models use other types of positional embeddings, such as sinusoidal position embeddings or relative position embeddings. ### preprocessing The task of preparing raw data into a format that can be easily consumed by machine learning models. For example, text is typically preprocessed by tokenization. To gain a better idea of what preprocessing looks like for other input types, check out the [Preprocess](https://huggingface.co/docs/transformers/preprocessing) tutorial. ### pretrained model A model that has been pretrained on some data (for instance all of Wikipedia). Pretraining methods involve a self-supervised objective, which can be reading the text and trying to predict the next word (see [causal language modeling](#causal-language-modeling)) or masking some words and trying to predict them (see [masked language modeling](#masked-language-modeling-mlm)). Speech and vision models have their own pretraining objectives. For example, Wav2Vec2 is a speech model pretrained on a contrastive task which requires the model to identify the ""true"" speech representation from a set of ""false"" speech representations. On the other hand, BEiT is a vision model pretrained on a masked image modeling task which masks some of the image patches and requires the model to predict the masked patches (similar to the masked language modeling objective). ## R ### recurrent neural network (RNN) A type of model that uses a loop over a layer to process texts. ### representation learning A subfield of machine learning which focuses on learning meaningful representations of raw data. Some examples of representation learning techniques include word embeddings, autoencoders, and Generative Adversarial Networks (GANs). ## S ### sampling rate A measurement in hertz of the number of samples (the audio signal) taken per second. The sampling rate is a result of discretizing a continuous signal such as speech. ### self-attention Each element of the input finds out which other elements of the input they should attend to. ### self-supervised learning A category of machine learning techniques in which a model creates its own learning objective from unlabeled data. It differs from [unsupervised learning](#unsupervised-learning) and [supervised learning](#supervised-learning) in that the learning process is supervised, but not explicitly from the user. One example of self-supervised learning is [masked language modeling](#masked-language-modeling-mlm), where a model is passed sentences with a proportion of its tokens removed and learns to predict the missing tokens. ### semi-supervised learning A broad category of machine learning training techniques that leverages a small amount of labeled data with a larger quantity of unlabeled data to improve the accuracy of a model, unlike [supervised learning](#supervised-learning) and [unsupervised learning](#unsupervised-learning). An example of a semi-supervised learning approach is ""self-training"", in which a model is trained on labeled data, and then used to make predictions on the unlabeled data. The portion of the unlabeled data that the model predicts with the most confidence gets added to the labeled dataset and used to retrain the model. 
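As a rough illustration of the self-training loop described above, here is a schematic sketch only: `model` is assumed to be a classification model whose output exposes `logits`, `labeled_examples` a plain Python list of examples, and `unlabeled_loader` a dataloader over the unlabeled data; the confidence threshold is arbitrary.

```python
import torch

def self_training_round(model, labeled_examples, unlabeled_loader, threshold=0.95):
    """Pseudo-label unlabeled data with the current model and keep only confident predictions."""
    model.eval()
    pseudo_labeled = []
    with torch.no_grad():
        for batch in unlabeled_loader:
            probs = torch.softmax(model(**batch).logits, dim=-1)
            confidence, predicted = probs.max(dim=-1)
            for i in range(predicted.size(0)):
                # Only the most confident predictions are promoted to training labels.
                if confidence[i] >= threshold:
                    pseudo_labeled.append({"input_ids": batch["input_ids"][i], "label": predicted[i].item()})
    # The confident pseudo-labels are merged with the labeled set before retraining.
    return labeled_examples + pseudo_labeled
```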
### sequence-to-sequence (seq2seq) Models that generate a new sequence from an input, like translation models, or summarization models (such as [Bart](model_doc/bart) or [T5](model_doc/t5)). ### Sharded DDP Another name for the foundational [ZeRO](#zero-redundancy-optimizer--zero-) concept as used by various other implementations of ZeRO. ### stride In [convolution](#convolution) or [pooling](#pooling), the stride refers to the distance the kernel is moved over a matrix. A stride of 1 means the kernel is moved one pixel over at a time, and a stride of 2 means the kernel is moved two pixels over at a time. ### supervised learning A form of model training that directly uses labeled data to correct and instruct model performance. Data is fed into the model being trained, and its predictions are compared to the known labels. The model updates its weights based on how incorrect its predictions were, and the process is repeated to optimize model performance. ## T ### Tensor Parallelism (TP) Parallelism technique for training on multiple GPUs in which each tensor is split up into multiple chunks, so instead of having the whole tensor reside on a single GPU, each shard of the tensor resides on its designated GPU. Shards gets processed separately and in parallel on different GPUs and the results are synced at the end of the processing step. This is what is sometimes called horizontal parallelism, as the splitting happens on horizontal level. Learn more about Tensor Parallelism [here](perf_train_gpu_many#tensor-parallelism). ### token A part of a sentence, usually a word, but can also be a subword (non-common words are often split in subwords) or a punctuation symbol. ### token Type IDs Some models' purpose is to do classification on pairs of sentences or question answering. These require two different sequences to be joined in a single ""input_ids"" entry, which usually is performed with the help of special tokens, such as the classifier (`[CLS]`) and separator (`[SEP]`) tokens. For example, the BERT model builds its two sequence input as such: thon >>> # [CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP] We can use our tokenizer to automatically generate such a sentence by passing the two sequences to `tokenizer` as two arguments (and not a list, like before) like this: thon >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained(""bert-base-cased"") >>> sequence_a = ""HuggingFace is based in NYC"" >>> sequence_b = ""Where is HuggingFace based?"" >>> encoded_dict = tokenizer(sequence_a, sequence_b) >>> decoded = tokenizer.decode(encoded_dict[""input_ids""]) which will return: thon >>> print(decoded) [CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? [SEP] This is enough for some models to understand where one sequence ends and where another begins. However, other models, such as BERT, also deploy token type IDs (also called segment IDs). They are represented as a binary mask identifying the two types of sequence in the model. The tokenizer returns this mask as the ""token_type_ids"" entry: thon >>> encoded_dict[""token_type_ids""] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1] The first sequence, the ""context"" used for the question, has all its tokens represented by a `0`, whereas the second sequence, corresponding to the ""question"", has all its tokens represented by a `1`. Some models, like [`XLNetModel`] use an additional token represented by a `2`. 
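In practice, the whole encoded dictionary is usually unpacked straight into the model, so the token type IDs are picked up automatically. A minimal sketch, reusing the sentence pair above with a plain `BertModel`:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertModel.from_pretrained("bert-base-cased")

# The returned dictionary already contains input_ids, token_type_ids and attention_mask.
encoded_dict = tokenizer(
    "HuggingFace is based in NYC",
    "Where is HuggingFace based?",
    return_tensors="pt",
)

# Unpacking the dictionary forwards the token type IDs to the model as well.
with torch.no_grad():
    outputs = model(**encoded_dict)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```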
### transfer learning A technique that involves taking a pretrained model and adapting it to a dataset specific to your task. Instead of training a model from scratch, you can leverage knowledge obtained from an existing model as a starting point. This speeds up the learning process and reduces the amount of training data needed. ### transformer Self-attention based deep learning model architecture. ## U ### unsupervised learning A form of model training in which data provided to the model is not labeled. Unsupervised learning techniques leverage statistical information of the data distribution to find patterns useful for the task at hand. ## Z ### Zero Redundancy Optimizer (ZeRO) Parallelism technique which performs sharding of the tensors somewhat similar to [TensorParallel](#tensorparallel--tp-), except the whole tensor gets reconstructed in time for a forward or backward computation, therefore the model doesn't need to be modified. This method also supports various offloading techniques to compensate for limited GPU memory. Learn more about ZeRO [here](perf_train_gpu_many#zero-data-parallelism)." troubleshooting.md," # Troubleshoot Sometimes errors occur, but we are here to help! This guide covers some of the most common issues we've seen and how you can resolve them. However, this guide isn't meant to be a comprehensive collection of every 🤗 Transformers issue. For more help with troubleshooting your issue, try: 1. Asking for help on the [forums](https://discuss.huggingface.co/). There are specific categories you can post your question to, like [Beginners](https://discuss.huggingface.co/c/beginners/5) or [🤗 Transformers](https://discuss.huggingface.co/c/transformers/9). Make sure you write a good descriptive forum post with some reproducible code to maximize the likelihood that your problem is solved! 2. Create an [Issue](https://github.com/huggingface/transformers/issues/new/choose) on the 🤗 Transformers repository if it is a bug related to the library. Try to include as much information describing the bug as possible to help us better figure out what's wrong and how we can fix it. 3. Check the [Migration](migration) guide if you use an older version of 🤗 Transformers since some important changes have been introduced between versions. For more details about troubleshooting and getting help, take a look at [Chapter 8](https://huggingface.co/course/chapter8/1?fw=pt) of the Hugging Face course. ## Firewalled environments Some GPU instances on cloud and intranet setups are firewalled to external connections, resulting in a connection error. When your script attempts to download model weights or datasets, the download will hang and then timeout with the following message: ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. In this case, you should try to run 🤗 Transformers on [offline mode](installation#offline-mode) to avoid the connection error. ## CUDA out of memory Training large models with millions of parameters can be challenging without the appropriate hardware. A common error you may encounter when the GPU runs out of memory is: CUDA out of memory. 
Tried to allocate 256.00 MiB (GPU 0; 11.17 GiB total capacity; 9.70 GiB already allocated; 179.81 MiB free; 9.85 GiB reserved in total by PyTorch) Here are some potential solutions you can try to lessen memory use: - Reduce the [`per_device_train_batch_size`](main_classes/trainer#transformers.TrainingArguments.per_device_train_batch_size) value in [`TrainingArguments`]. - Try using [`gradient_accumulation_steps`](main_classes/trainer#transformers.TrainingArguments.gradient_accumulation_steps) in [`TrainingArguments`] to effectively increase overall batch size. Refer to the Performance [guide](performance) for more details about memory-saving techniques. ## Unable to load a saved TensorFlow model TensorFlow's [model.save](https://www.tensorflow.org/tutorials/keras/save_and_load#save_the_entire_model) method will save the entire model - architecture, weights, training configuration - in a single file. However, when you load the model file again, you may run into an error because 🤗 Transformers may not load all the TensorFlow-related objects in the model file. To avoid issues with saving and loading TensorFlow models, we recommend you: - Save the model weights as a `h5` file extension with [`model.save_weights`](https://www.tensorflow.org/tutorials/keras/save_and_load#save_the_entire_model) and then reload the model with [`~TFPreTrainedModel.from_pretrained`]: >>> from transformers import TFPreTrainedModel >>> from tensorflow import keras >>> model.save_weights(""some_folder/tf_model.h5"") >>> model = TFPreTrainedModel.from_pretrained(""some_folder"") - Save the model with [`~TFPretrainedModel.save_pretrained`] and load it again with [`~TFPreTrainedModel.from_pretrained`]: >>> from transformers import TFPreTrainedModel >>> model.save_pretrained(""path_to/model"") >>> model = TFPreTrainedModel.from_pretrained(""path_to/model"") ## ImportError Another common error you may encounter, especially if it is a newly released model, is `ImportError`: ImportError: cannot import name 'ImageGPTImageProcessor' from 'transformers' (unknown location) For these error types, check to make sure you have the latest version of 🤗 Transformers installed to access the most recent models: ```bash pip install transformers --upgrade ## CUDA error: device-side assert triggered Sometimes you may run into a generic CUDA error about an error in the device code. RuntimeError: CUDA error: device-side assert triggered You should try to run the code on a CPU first to get a more descriptive error message. Add the following environment variable to the beginning of your code to switch to a CPU: >>> import os >>> os.environ[""CUDA_VISIBLE_DEVICES""] = """" Another option is to get a better traceback from the GPU. Add the following environment variable to the beginning of your code to get the traceback to point to the source of the error: >>> import os >>> os.environ[""CUDA_LAUNCH_BLOCKING""] = ""1"" ## Incorrect output when padding tokens aren't masked In some cases, the output `hidden_state` may be incorrect if the `input_ids` include padding tokens. To demonstrate, load a model and tokenizer. You can access a model's `pad_token_id` to see its value. The `pad_token_id` may be `None` for some models, but you can always manually set it. 
>>> from transformers import AutoModelForSequenceClassification >>> import torch >>> model = AutoModelForSequenceClassification.from_pretrained(""bert-base-uncased"") >>> model.config.pad_token_id 0 The following example shows the output without masking the padding tokens: >>> input_ids = torch.tensor([[7592, 2057, 2097, 2393, 9611, 2115], [7592, 0, 0, 0, 0, 0]]) >>> output = model(input_ids) >>> print(output.logits) tensor([[ 0.0082, -0.2307], [ 0.1317, -0.1683]], grad_fn=) Here is the actual output of the second sequence: >>> input_ids = torch.tensor([[7592]]) >>> output = model(input_ids) >>> print(output.logits) tensor([[-0.1008, -0.4061]], grad_fn=) Most of the time, you should provide an `attention_mask` to your model to ignore the padding tokens to avoid this silent error. Now the output of the second sequence matches its actual output: By default, the tokenizer creates an `attention_mask` for you based on your specific tokenizer's defaults. >>> attention_mask = torch.tensor([[1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0]]) >>> output = model(input_ids, attention_mask=attention_mask) >>> print(output.logits) tensor([[ 0.0082, -0.2307], [-0.1008, -0.4061]], grad_fn=) 🤗 Transformers doesn't automatically create an `attention_mask` to mask a padding token if it is provided because: - Some models don't have a padding token. - For some use-cases, users want a model to attend to a padding token. ## ValueError: Unrecognized configuration class XYZ for this kind of AutoModel Generally, we recommend using the [`AutoModel`] class to load pretrained instances of models. This class can automatically infer and load the correct architecture from a given checkpoint based on the configuration. If you see this `ValueError` when loading a model from a checkpoint, this means the Auto class couldn't find a mapping from the configuration in the given checkpoint to the kind of model you are trying to load. Most commonly, this happens when a checkpoint doesn't support a given task. For instance, you'll see this error in the following example because there is no GPT2 for question answering: >>> from transformers import AutoProcessor, AutoModelForQuestionAnswering >>> processor = AutoProcessor.from_pretrained(""gpt2-medium"") >>> model = AutoModelForQuestionAnswering.from_pretrained(""gpt2-medium"") ValueError: Unrecognized configuration class for this kind of AutoModel: AutoModelForQuestionAnswering. Model type should be one of AlbertConfig, BartConfig, BertConfig, BigBirdConfig, BigBirdPegasusConfig, BloomConfig, " accelerate.md," # Distributed training with 🤗 Accelerate As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and accelerating training speed by several orders of magnitude. At Hugging Face, we created the [🤗 Accelerate](https://huggingface.co/docs/accelerate) library to help users easily train a 🤗 Transformers model on any type of distributed setup, whether it is multiple GPU's on one machine or multiple GPU's across several machines. In this tutorial, learn how to customize your native PyTorch training loop to enable training in a distributed environment. ## Setup Get started by installing 🤗 Accelerate: ```bash pip install accelerate Then import and create an [`~accelerate.Accelerator`] object. The [`~accelerate.Accelerator`] will automatically detect your type of distributed setup and initialize all the necessary components for training. You don't need to explicitly place your model on a device. 
>>> from accelerate import Accelerator >>> accelerator = Accelerator() ## Prepare to accelerate The next step is to pass all the relevant training objects to the [`~accelerate.Accelerator.prepare`] method. This includes your training and evaluation DataLoaders, a model and an optimizer: >>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare( train_dataloader, eval_dataloader, model, optimizer ) ## Backward The last addition is to replace the typical `loss.backward()` in your training loop with 🤗 Accelerate's [`~accelerate.Accelerator.backward`]method: >>> for epoch in range(num_epochs): for batch in train_dataloader: outputs = model(**batch) loss = outputs.loss accelerator.backward(loss) optimizer.step() lr_scheduler.step() optimizer.zero_grad() progress_bar.update(1) As you can see in the following code, you only need to add four additional lines of code to your training loop to enable distributed training! + from accelerate import Accelerator from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler + accelerator = Accelerator() model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2) optimizer = AdamW(model.parameters(), lr=3e-5) - device = torch.device(""cuda"") if torch.cuda.is_available() else torch.device(""cpu"") - model.to(device) + train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare( + train_dataloader, eval_dataloader, model, optimizer + ) num_epochs = 3 num_training_steps = num_epochs * len(train_dataloader) lr_scheduler = get_scheduler( ""linear"", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps ) progress_bar = tqdm(range(num_training_steps)) model.train() for epoch in range(num_epochs): for batch in train_dataloader: outputs = model(**batch) loss = outputs.loss + accelerator.backward(loss) optimizer.step() lr_scheduler.step() optimizer.zero_grad() progress_bar.update(1) ## Train Once you've added the relevant lines of code, launch your training in a script or a notebook like Colaboratory. ### Train with a script If you are running your training from a script, run the following command to create and save a configuration file: ```bash accelerate config Then launch your training with: ```bash accelerate launch train.py ### Train with a notebook 🤗 Accelerate can also run in a notebook if you're planning on using Colaboratory's TPUs. Wrap all the code responsible for training in a function, and pass it to [`~accelerate.notebook_launcher`]: >>> from accelerate import notebook_launcher >>> notebook_launcher(training_function) For more information about 🤗 Accelerate and its rich features, refer to the [documentation](https://huggingface.co/docs/accelerate). " index.md," # 🤗 Transformers State-of-the-art Machine Learning for [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/), and [JAX](https://jax.readthedocs.io/en/latest/). 🤗 Transformers provides APIs and tools to easily download and train state-of-the-art pretrained models. Using pretrained models can reduce your compute costs, carbon footprint, and save you the time and resources required to train a model from scratch. These models support common tasks in different modalities, such as: 📝 **Natural Language Processing**: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation. 🖼️ **Computer Vision**: image classification, object detection, and segmentation. 
🗣️ **Audio**: automatic speech recognition and audio classification. 🐙 **Multimodal**: table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering. 🤗 Transformers support framework interoperability between PyTorch, TensorFlow, and JAX. This provides the flexibility to use a different framework at each stage of a model's life; train a model in three lines of code in one framework, and load it for inference in another. Models can also be exported to a format like ONNX and TorchScript for deployment in production environments. Join the growing community on the [Hub](https://huggingface.co/models), [forum](https://discuss.huggingface.co/), or [Discord](https://discord.com/invite/JfAtkvEtRb) today! ## If you are looking for custom support from the Hugging Face team ## Contents The documentation is organized into five sections: - **GET STARTED** provides a quick tour of the library and installation instructions to get up and running. - **TUTORIALS** are a great place to start if you're a beginner. This section will help you gain the basic skills you need to start using the library. - **HOW-TO GUIDES** show you how to achieve a specific goal, like finetuning a pretrained model for language modeling or how to write and share a custom model. - **CONCEPTUAL GUIDES** offers more discussion and explanation of the underlying concepts and ideas behind models, tasks, and the design philosophy of 🤗 Transformers. - **API** describes all classes and functions: - **MAIN CLASSES** details the most important classes like configuration, model, tokenizer, and pipeline. - **MODELS** details the classes and functions related to each model implemented in the library. - **INTERNAL HELPERS** details utility classes and functions used internally. ## Supported models and frameworks The table below represents the current support in the library for each of those models: whether they have a Python tokenizer (called ""slow""), a ""fast"" tokenizer backed by the 🤗 Tokenizers library, and whether they have support in Jax (via Flax), PyTorch, and/or TensorFlow.
| Model | PyTorch support | TensorFlow support | Flax Support | |:------------------------------------------------------------------------:|:---------------:|:------------------:|:------------:| | [ALBERT](model_doc/albert) | ✅ | ✅ | ✅ | | [ALIGN](model_doc/align) | ✅ | ❌ | ❌ | | [AltCLIP](model_doc/altclip) | ✅ | ❌ | ❌ | | [Audio Spectrogram Transformer](model_doc/audio-spectrogram-transformer) | ✅ | ❌ | ❌ | | [Autoformer](model_doc/autoformer) | ✅ | ❌ | ❌ | | [Bark](model_doc/bark) | ✅ | ❌ | ❌ | | [BART](model_doc/bart) | ✅ | ✅ | ✅ | | [BARThez](model_doc/barthez) | ✅ | ✅ | ✅ | | [BARTpho](model_doc/bartpho) | ✅ | ✅ | ✅ | | [BEiT](model_doc/beit) | ✅ | ❌ | ✅ | | [BERT](model_doc/bert) | ✅ | ✅ | ✅ | | [Bert Generation](model_doc/bert-generation) | ✅ | ❌ | ❌ | | [BertJapanese](model_doc/bert-japanese) | ✅ | ✅ | ✅ | | [BERTweet](model_doc/bertweet) | ✅ | ✅ | ✅ | | [BigBird](model_doc/big_bird) | ✅ | ❌ | ✅ | | [BigBird-Pegasus](model_doc/bigbird_pegasus) | ✅ | ❌ | ❌ | | [BioGpt](model_doc/biogpt) | ✅ | ❌ | ❌ | | [BiT](model_doc/bit) | ✅ | ❌ | ❌ | | [Blenderbot](model_doc/blenderbot) | ✅ | ✅ | ✅ | | [BlenderbotSmall](model_doc/blenderbot-small) | ✅ | ✅ | ✅ | | [BLIP](model_doc/blip) | ✅ | ✅ | ❌ | | [BLIP-2](model_doc/blip-2) | ✅ | ❌ | ❌ | | [BLOOM](model_doc/bloom) | ✅ | ❌ | ✅ | | [BORT](model_doc/bort) | ✅ | ✅ | ✅ | | [BridgeTower](model_doc/bridgetower) | ✅ | ❌ | ❌ | | [BROS](model_doc/bros) | ✅ | ❌ | ❌ | | [ByT5](model_doc/byt5) | ✅ | ✅ | ✅ | | [CamemBERT](model_doc/camembert) | ✅ | ✅ | ❌ | | [CANINE](model_doc/canine) | ✅ | ❌ | ❌ | | [Chinese-CLIP](model_doc/chinese_clip) | ✅ | ❌ | ❌ | | [CLAP](model_doc/clap) | ✅ | ❌ | ❌ | | [CLIP](model_doc/clip) | ✅ | ✅ | ✅ | | [CLIPSeg](model_doc/clipseg) | ✅ | ❌ | ❌ | | [CLVP](model_doc/clvp) | ✅ | ❌ | ❌ | | [CodeGen](model_doc/codegen) | ✅ | ❌ | ❌ | | [CodeLlama](model_doc/code_llama) | ✅ | ❌ | ❌ | | [Conditional DETR](model_doc/conditional_detr) | ✅ | ❌ | ❌ | | [ConvBERT](model_doc/convbert) | ✅ | ✅ | ❌ | | [ConvNeXT](model_doc/convnext) | ✅ | ✅ | ❌ | | [ConvNeXTV2](model_doc/convnextv2) | ✅ | ✅ | ❌ | | [CPM](model_doc/cpm) | ✅ | ✅ | ✅ | | [CPM-Ant](model_doc/cpmant) | ✅ | ❌ | ❌ | | [CTRL](model_doc/ctrl) | ✅ | ✅ | ❌ | | [CvT](model_doc/cvt) | ✅ | ✅ | ❌ | | [Data2VecAudio](model_doc/data2vec) | ✅ | ❌ | ❌ | | [Data2VecText](model_doc/data2vec) | ✅ | ❌ | ❌ | | [Data2VecVision](model_doc/data2vec) | ✅ | ✅ | ❌ | | [DeBERTa](model_doc/deberta) | ✅ | ✅ | ❌ | | [DeBERTa-v2](model_doc/deberta-v2) | ✅ | ✅ | ❌ | | [Decision Transformer](model_doc/decision_transformer) | ✅ | ❌ | ❌ | | [Deformable DETR](model_doc/deformable_detr) | ✅ | ❌ | ❌ | | [DeiT](model_doc/deit) | ✅ | ✅ | ❌ | | [DePlot](model_doc/deplot) | ✅ | ❌ | ❌ | | [DETA](model_doc/deta) | ✅ | ❌ | ❌ | | [DETR](model_doc/detr) | ✅ | ❌ | ❌ | | [DialoGPT](model_doc/dialogpt) | ✅ | ✅ | ✅ | | [DiNAT](model_doc/dinat) | ✅ | ❌ | ❌ | | [DINOv2](model_doc/dinov2) | ✅ | ❌ | ❌ | | [DistilBERT](model_doc/distilbert) | ✅ | ✅ | ✅ | | [DiT](model_doc/dit) | ✅ | ❌ | ✅ | | [DonutSwin](model_doc/donut) | ✅ | ❌ | ❌ | | [DPR](model_doc/dpr) | ✅ | ✅ | ❌ | | [DPT](model_doc/dpt) | ✅ | ❌ | ❌ | | [EfficientFormer](model_doc/efficientformer) | ✅ | ✅ | ❌ | | [EfficientNet](model_doc/efficientnet) | ✅ | ❌ | ❌ | | [ELECTRA](model_doc/electra) | ✅ | ✅ | ✅ | | [EnCodec](model_doc/encodec) | ✅ | ❌ | ❌ | | [Encoder decoder](model_doc/encoder-decoder) | ✅ | ✅ | ✅ | | [ERNIE](model_doc/ernie) | ✅ | ❌ | ❌ | | [ErnieM](model_doc/ernie_m) | ✅ | ❌ | ❌ | | [ESM](model_doc/esm) | ✅ | ✅ | ❌ | | [FairSeq Machine-Translation](model_doc/fsmt) 
| ✅ | ❌ | ❌ | | [Falcon](model_doc/falcon) | ✅ | ❌ | ❌ | | [FLAN-T5](model_doc/flan-t5) | ✅ | ✅ | ✅ | | [FLAN-UL2](model_doc/flan-ul2) | ✅ | ✅ | ✅ | | [FlauBERT](model_doc/flaubert) | ✅ | ✅ | ❌ | | [FLAVA](model_doc/flava) | ✅ | ❌ | ❌ | | [FNet](model_doc/fnet) | ✅ | ❌ | ❌ | | [FocalNet](model_doc/focalnet) | ✅ | ❌ | ❌ | | [Funnel Transformer](model_doc/funnel) | ✅ | ✅ | ❌ | | [Fuyu](model_doc/fuyu) | ✅ | ❌ | ❌ | | [GIT](model_doc/git) | ✅ | ❌ | ❌ | | [GLPN](model_doc/glpn) | ✅ | ❌ | ❌ | | [GPT Neo](model_doc/gpt_neo) | ✅ | ❌ | ✅ | | [GPT NeoX](model_doc/gpt_neox) | ✅ | ❌ | ❌ | | [GPT NeoX Japanese](model_doc/gpt_neox_japanese) | ✅ | ❌ | ❌ | | [GPT-J](model_doc/gptj) | ✅ | ✅ | ✅ | | [GPT-Sw3](model_doc/gpt-sw3) | ✅ | ✅ | ✅ | | [GPTBigCode](model_doc/gpt_bigcode) | ✅ | ❌ | ❌ | | [GPTSAN-japanese](model_doc/gptsan-japanese) | ✅ | ❌ | ❌ | | [Graphormer](model_doc/graphormer) | ✅ | ❌ | ❌ | | [GroupViT](model_doc/groupvit) | ✅ | ✅ | ❌ | | [HerBERT](model_doc/herbert) | ✅ | ✅ | ✅ | | [Hubert](model_doc/hubert) | ✅ | ✅ | ❌ | | [I-BERT](model_doc/ibert) | ✅ | ❌ | ❌ | | [IDEFICS](model_doc/idefics) | ✅ | ❌ | ❌ | | [ImageGPT](model_doc/imagegpt) | ✅ | ❌ | ❌ | | [Informer](model_doc/informer) | ✅ | ❌ | ❌ | | [InstructBLIP](model_doc/instructblip) | ✅ | ❌ | ❌ | | [Jukebox](model_doc/jukebox) | ✅ | ❌ | ❌ | | [KOSMOS-2](model_doc/kosmos-2) | ✅ | ❌ | ❌ | | [LayoutLM](model_doc/layoutlm) | ✅ | ✅ | ❌ | | [LayoutLMv2](model_doc/layoutlmv2) | ✅ | ❌ | ❌ | | [LayoutLMv3](model_doc/layoutlmv3) | ✅ | ✅ | ❌ | | [LayoutXLM](model_doc/layoutxlm) | ✅ | ❌ | ❌ | | [LED](model_doc/led) | ✅ | ✅ | ❌ | | [LeViT](model_doc/levit) | ✅ | ❌ | ❌ | | [LiLT](model_doc/lilt) | ✅ | ❌ | ❌ | | [LLaMA](model_doc/llama) | ✅ | ❌ | ❌ | | [Llama2](model_doc/llama2) | ✅ | ❌ | ❌ | | [Longformer](model_doc/longformer) | ✅ | ✅ | ❌ | | [LongT5](model_doc/longt5) | ✅ | ❌ | ✅ | | [LUKE](model_doc/luke) | ✅ | ❌ | ❌ | | [LXMERT](model_doc/lxmert) | ✅ | ✅ | ❌ | | [M-CTC-T](model_doc/mctct) | ✅ | ❌ | ❌ | | [M2M100](model_doc/m2m_100) | ✅ | ❌ | ❌ | | [Marian](model_doc/marian) | ✅ | ✅ | ✅ | | [MarkupLM](model_doc/markuplm) | ✅ | ❌ | ❌ | | [Mask2Former](model_doc/mask2former) | ✅ | ❌ | ❌ | | [MaskFormer](model_doc/maskformer) | ✅ | ❌ | ❌ | | [MatCha](model_doc/matcha) | ✅ | ❌ | ❌ | | [mBART](model_doc/mbart) | ✅ | ✅ | ✅ | | [mBART-50](model_doc/mbart50) | ✅ | ✅ | ✅ | | [MEGA](model_doc/mega) | ✅ | ❌ | ❌ | | [Megatron-BERT](model_doc/megatron-bert) | ✅ | ❌ | ❌ | | [Megatron-GPT2](model_doc/megatron_gpt2) | ✅ | ✅ | ✅ | | [MGP-STR](model_doc/mgp-str) | ✅ | ❌ | ❌ | | [Mistral](model_doc/mistral) | ✅ | ❌ | ❌ | | [mLUKE](model_doc/mluke) | ✅ | ❌ | ❌ | | [MMS](model_doc/mms) | ✅ | ✅ | ✅ | | [MobileBERT](model_doc/mobilebert) | ✅ | ✅ | ❌ | | [MobileNetV1](model_doc/mobilenet_v1) | ✅ | ❌ | ❌ | | [MobileNetV2](model_doc/mobilenet_v2) | ✅ | ❌ | ❌ | | [MobileViT](model_doc/mobilevit) | ✅ | ✅ | ❌ | | [MobileViTV2](model_doc/mobilevitv2) | ✅ | ❌ | ❌ | | [MPNet](model_doc/mpnet) | ✅ | ✅ | ❌ | | [MPT](model_doc/mpt) | ✅ | ❌ | ❌ | | [MRA](model_doc/mra) | ✅ | ❌ | ❌ | | [MT5](model_doc/mt5) | ✅ | ✅ | ✅ | | [MusicGen](model_doc/musicgen) | ✅ | ❌ | ❌ | | [MVP](model_doc/mvp) | ✅ | ❌ | ❌ | | [NAT](model_doc/nat) | ✅ | ❌ | ❌ | | [Nezha](model_doc/nezha) | ✅ | ❌ | ❌ | | [NLLB](model_doc/nllb) | ✅ | ❌ | ❌ | | [NLLB-MOE](model_doc/nllb-moe) | ✅ | ❌ | ❌ | | [Nougat](model_doc/nougat) | ✅ | ✅ | ✅ | | [Nyströmformer](model_doc/nystromformer) | ✅ | ❌ | ❌ | | [OneFormer](model_doc/oneformer) | ✅ | ❌ | ❌ | | [OpenAI GPT](model_doc/openai-gpt) | ✅ | ✅ | ❌ | | [OpenAI 
GPT-2](model_doc/gpt2) | ✅ | ✅ | ✅ | | [OpenLlama](model_doc/open-llama) | ✅ | ❌ | ❌ | | [OPT](model_doc/opt) | ✅ | ✅ | ✅ | | [OWL-ViT](model_doc/owlvit) | ✅ | ❌ | ❌ | | [OWLv2](model_doc/owlv2) | ✅ | ❌ | ❌ | | [Pegasus](model_doc/pegasus) | ✅ | ✅ | ✅ | | [PEGASUS-X](model_doc/pegasus_x) | ✅ | ❌ | ❌ | | [Perceiver](model_doc/perceiver) | ✅ | ❌ | ❌ | | [Persimmon](model_doc/persimmon) | ✅ | ❌ | ❌ | | [Phi](model_doc/phi) | ✅ | ❌ | ❌ | | [PhoBERT](model_doc/phobert) | ✅ | ✅ | ✅ | | [Pix2Struct](model_doc/pix2struct) | ✅ | ❌ | ❌ | | [PLBart](model_doc/plbart) | ✅ | ❌ | ❌ | | [PoolFormer](model_doc/poolformer) | ✅ | ❌ | ❌ | | [Pop2Piano](model_doc/pop2piano) | ✅ | ❌ | ❌ | | [ProphetNet](model_doc/prophetnet) | ✅ | ❌ | ❌ | | [PVT](model_doc/pvt) | ✅ | ❌ | ❌ | | [QDQBert](model_doc/qdqbert) | ✅ | ❌ | ❌ | | [RAG](model_doc/rag) | ✅ | ✅ | ❌ | | [REALM](model_doc/realm) | ✅ | ❌ | ❌ | | [Reformer](model_doc/reformer) | ✅ | ❌ | ❌ | | [RegNet](model_doc/regnet) | ✅ | ✅ | ✅ | | [RemBERT](model_doc/rembert) | ✅ | ✅ | ❌ | | [ResNet](model_doc/resnet) | ✅ | ✅ | ✅ | | [RetriBERT](model_doc/retribert) | ✅ | ❌ | ❌ | | [RoBERTa](model_doc/roberta) | ✅ | ✅ | ✅ | | [RoBERTa-PreLayerNorm](model_doc/roberta-prelayernorm) | ✅ | ✅ | ✅ | | [RoCBert](model_doc/roc_bert) | ✅ | ❌ | ❌ | | [RoFormer](model_doc/roformer) | ✅ | ✅ | ✅ | | [RWKV](model_doc/rwkv) | ✅ | ❌ | ❌ | | [SAM](model_doc/sam) | ✅ | ✅ | ❌ | | [SeamlessM4T](model_doc/seamless_m4t) | ✅ | ❌ | ❌ | | [SegFormer](model_doc/segformer) | ✅ | ✅ | ❌ | | [SEW](model_doc/sew) | ✅ | ❌ | ❌ | | [SEW-D](model_doc/sew-d) | ✅ | ❌ | ❌ | | [Speech Encoder decoder](model_doc/speech-encoder-decoder) | ✅ | ❌ | ✅ | | [Speech2Text](model_doc/speech_to_text) | ✅ | ✅ | ❌ | | [SpeechT5](model_doc/speecht5) | ✅ | ❌ | ❌ | | [Splinter](model_doc/splinter) | ✅ | ❌ | ❌ | | [SqueezeBERT](model_doc/squeezebert) | ✅ | ❌ | ❌ | | [SwiftFormer](model_doc/swiftformer) | ✅ | ❌ | ❌ | | [Swin Transformer](model_doc/swin) | ✅ | ✅ | ❌ | | [Swin Transformer V2](model_doc/swinv2) | ✅ | ❌ | ❌ | | [Swin2SR](model_doc/swin2sr) | ✅ | ❌ | ❌ | | [SwitchTransformers](model_doc/switch_transformers) | ✅ | ❌ | ❌ | | [T5](model_doc/t5) | ✅ | ✅ | ✅ | | [T5v1.1](model_doc/t5v1.1) | ✅ | ✅ | ✅ | | [Table Transformer](model_doc/table-transformer) | ✅ | ❌ | ❌ | | [TAPAS](model_doc/tapas) | ✅ | ✅ | ❌ | | [TAPEX](model_doc/tapex) | ✅ | ✅ | ✅ | | [Time Series Transformer](model_doc/time_series_transformer) | ✅ | ❌ | ❌ | | [TimeSformer](model_doc/timesformer) | ✅ | ❌ | ❌ | | [Trajectory Transformer](model_doc/trajectory_transformer) | ✅ | ❌ | ❌ | | [Transformer-XL](model_doc/transfo-xl) | ✅ | ✅ | ❌ | | [TrOCR](model_doc/trocr) | ✅ | ❌ | ❌ | | [TVLT](model_doc/tvlt) | ✅ | ❌ | ❌ | | [TVP](model_doc/tvp) | ✅ | ❌ | ❌ | | [UL2](model_doc/ul2) | ✅ | ✅ | ✅ | | [UMT5](model_doc/umt5) | ✅ | ❌ | ❌ | | [UniSpeech](model_doc/unispeech) | ✅ | ❌ | ❌ | | [UniSpeechSat](model_doc/unispeech-sat) | ✅ | ❌ | ❌ | | [UnivNet](model_doc/univnet) | ✅ | ❌ | ❌ | | [UPerNet](model_doc/upernet) | ✅ | ❌ | ❌ | | [VAN](model_doc/van) | ✅ | ❌ | ❌ | | [VideoMAE](model_doc/videomae) | ✅ | ❌ | ❌ | | [ViLT](model_doc/vilt) | ✅ | ❌ | ❌ | | [Vision Encoder decoder](model_doc/vision-encoder-decoder) | ✅ | ✅ | ✅ | | [VisionTextDualEncoder](model_doc/vision-text-dual-encoder) | ✅ | ✅ | ✅ | | [VisualBERT](model_doc/visual_bert) | ✅ | ❌ | ❌ | | [ViT](model_doc/vit) | ✅ | ✅ | ✅ | | [ViT Hybrid](model_doc/vit_hybrid) | ✅ | ❌ | ❌ | | [VitDet](model_doc/vitdet) | ✅ | ❌ | ❌ | | [ViTMAE](model_doc/vit_mae) | ✅ | ✅ | ❌ | | [ViTMatte](model_doc/vitmatte) | ✅ | ❌ | ❌ | | 
[ViTMSN](model_doc/vit_msn) | ✅ | ❌ | ❌ | | [VITS](model_doc/vits) | ✅ | ❌ | ❌ | | [ViViT](model_doc/vivit) | ✅ | ❌ | ❌ | | [Wav2Vec2](model_doc/wav2vec2) | ✅ | ✅ | ✅ | | [Wav2Vec2-Conformer](model_doc/wav2vec2-conformer) | ✅ | ❌ | ❌ | | [Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme) | ✅ | ✅ | ✅ | | [WavLM](model_doc/wavlm) | ✅ | ❌ | ❌ | | [Whisper](model_doc/whisper) | ✅ | ✅ | ✅ | | [X-CLIP](model_doc/xclip) | ✅ | ❌ | ❌ | | [X-MOD](model_doc/xmod) | ✅ | ❌ | ❌ | | [XGLM](model_doc/xglm) | ✅ | ✅ | ✅ | | [XLM](model_doc/xlm) | ✅ | ✅ | ❌ | | [XLM-ProphetNet](model_doc/xlm-prophetnet) | ✅ | ❌ | ❌ | | [XLM-RoBERTa](model_doc/xlm-roberta) | ✅ | ✅ | ✅ | | [XLM-RoBERTa-XL](model_doc/xlm-roberta-xl) | ✅ | ❌ | ❌ | | [XLM-V](model_doc/xlm-v) | ✅ | ✅ | ✅ | | [XLNet](model_doc/xlnet) | ✅ | ✅ | ❌ | | [XLS-R](model_doc/xls_r) | ✅ | ✅ | ✅ | | [XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2) | ✅ | ✅ | ✅ | | [YOLOS](model_doc/yolos) | ✅ | ❌ | ❌ | | [YOSO](model_doc/yoso) | ✅ | ❌ | ❌ | " add_new_model.md," # How to add a model to 🤗 Transformers? The 🤗 Transformers library is often able to offer new models thanks to community contributors. But this can be a challenging project and requires an in-depth knowledge of the 🤗 Transformers library and the model to implement. At Hugging Face, we're trying to empower more of the community to actively add models and we've put together this guide to walk you through the process of adding a PyTorch model (make sure you have [PyTorch installed](https://pytorch.org/get-started/locally/)). If you're interested in implementing a TensorFlow model, take a look at the [How to convert a 🤗 Transformers model to TensorFlow](add_tensorflow_model) guide! Along the way, you'll: - get insights into open-source best practices - understand the design principles behind one of the most popular deep learning libraries - learn how to efficiently test large models - learn how to integrate Python utilities like `black`, `ruff`, and `make fix-copies` to ensure clean and readable code A Hugging Face team member will be available to help you along the way so you'll never be alone. 🤗 ❤️ To get started, open a [New model addition](https://github.com/huggingface/transformers/issues/new?assignees=&labels=New+model&template=new-model-addition.yml) issue for the model you want to see in 🤗 Transformers. If you're not especially picky about contributing a specific model, you can filter by the [New model label](https://github.com/huggingface/transformers/labels/New%20model) to see if there are any unclaimed model requests and work on it. Once you've opened a new model request, the first step is to get familiar with 🤗 Transformers if you aren't already! ## General overview of 🤗 Transformers First, you should get a general overview of 🤗 Transformers. 🤗 Transformers is a very opinionated library, so there is a chance that you don't agree with some of the library's philosophies or design choices. From our experience, however, we found that the fundamental design choices and philosophies of the library are crucial to efficiently scale 🤗 Transformers while keeping maintenance costs at a reasonable level. A good first starting point to better understand the library is to read the [documentation of our philosophy](philosophy). 
As a result of our way of working, there are some choices that we try to apply to all models: - Composition is generally favored over-abstraction - Duplicating code is not always bad if it strongly improves the readability or accessibility of a model - Model files are as self-contained as possible so that when you read the code of a specific model, you ideally only have to look into the respective `modeling_.py` file. In our opinion, the library's code is not just a means to provide a product, *e.g.* the ability to use BERT for inference, but also as the very product that we want to improve. Hence, when adding a model, the user is not only the person who will use your model, but also everybody who will read, try to understand, and possibly tweak your code. With this in mind, let's go a bit deeper into the general library design. ### Overview of models To successfully add a model, it is important to understand the interaction between your model and its config, [`PreTrainedModel`], and [`PretrainedConfig`]. For exemplary purposes, we will call the model to be added to 🤗 Transformers `BrandNewBert`. Let's take a look: As you can see, we do make use of inheritance in 🤗 Transformers, but we keep the level of abstraction to an absolute minimum. There are never more than two levels of abstraction for any model in the library. `BrandNewBertModel` inherits from `BrandNewBertPreTrainedModel` which in turn inherits from [`PreTrainedModel`] and that's it. As a general rule, we want to make sure that a new model only depends on [`PreTrainedModel`]. The important functionalities that are automatically provided to every new model are [`~PreTrainedModel.from_pretrained`] and [`~PreTrainedModel.save_pretrained`], which are used for serialization and deserialization. All of the other important functionalities, such as `BrandNewBertModel.forward` should be completely defined in the new `modeling_brand_new_bert.py` script. Next, we want to make sure that a model with a specific head layer, such as `BrandNewBertForMaskedLM` does not inherit from `BrandNewBertModel`, but rather uses `BrandNewBertModel` as a component that can be called in its forward pass to keep the level of abstraction low. Every new model requires a configuration class, called `BrandNewBertConfig`. This configuration is always stored as an attribute in [`PreTrainedModel`], and thus can be accessed via the `config` attribute for all classes inheriting from `BrandNewBertPreTrainedModel`: thon model = BrandNewBertModel.from_pretrained(""brandy/brand_new_bert"") model.config # model has access to its config Similar to the model, the configuration inherits basic serialization and deserialization functionalities from [`PretrainedConfig`]. Note that the configuration and the model are always serialized into two different formats - the model to a *pytorch_model.bin* file and the configuration to a *config.json* file. Calling [`~PreTrainedModel.save_pretrained`] will automatically call [`~PretrainedConfig.save_pretrained`], so that both model and configuration are saved. ### Code style When coding your new model, keep in mind that Transformers is an opinionated library and we have a few quirks of our own regarding how code should be written :-) 1. The forward pass of your model should be fully written in the modeling file while being fully independent of other models in the library. 
If you want to reuse a block from another model, copy the code and paste it with a `# Copied from` comment on top (see [here](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/roberta/modeling_roberta.py#L160) for a good example and [there](pr_checks#check-copies) for more documentation on Copied from). 2. The code should be fully understandable, even by a non-native English speaker. This means you should pick descriptive variable names and avoid abbreviations. As an example, `activation` is preferred to `act`. One-letter variable names are strongly discouraged unless it's an index in a for loop. 3. More generally we prefer longer explicit code to short magical one. 4. Avoid subclassing `nn.Sequential` in PyTorch but subclass `nn.Module` and write the forward pass, so that anyone using your code can quickly debug it by adding print statements or breaking points. 5. Your function signature should be type-annotated. For the rest, good variable names are way more readable and understandable than type annotations. ### Overview of tokenizers Not quite ready yet :-( This section will be added soon! ## Step-by-step recipe to add a model to 🤗 Transformers Everyone has different preferences of how to port a model so it can be very helpful for you to take a look at summaries of how other contributors ported models to Hugging Face. Here is a list of community blog posts on how to port a model: 1. [Porting GPT2 Model](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28) by [Thomas](https://huggingface.co/thomwolf) 2. [Porting WMT19 MT Model](https://huggingface.co/blog/porting-fsmt) by [Stas](https://huggingface.co/stas) From experience, we can tell you that the most important things to keep in mind when adding a model are: - Don't reinvent the wheel! Most parts of the code you will add for the new 🤗 Transformers model already exist somewhere in 🤗 Transformers. Take some time to find similar, already existing models and tokenizers you can copy from. [grep](https://www.gnu.org/software/grep/) and [rg](https://github.com/BurntSushi/ripgrep) are your friends. Note that it might very well happen that your model's tokenizer is based on one model implementation, and your model's modeling code on another one. *E.g.* FSMT's modeling code is based on BART, while FSMT's tokenizer code is based on XLM. - It's more of an engineering challenge than a scientific challenge. You should spend more time creating an efficient debugging environment rather than trying to understand all theoretical aspects of the model in the paper. - Ask for help, when you're stuck! Models are the core component of 🤗 Transformers so we at Hugging Face are more than happy to help you at every step to add your model. Don't hesitate to ask if you notice you are not making progress. In the following, we try to give you a general recipe that we found most useful when porting a model to 🤗 Transformers. 
The following list is a summary of everything that has to be done to add a model and can be used by you as a To-Do List: ☐ (Optional) Understood the model's theoretical aspects ☐ Prepared 🤗 Transformers dev environment ☐ Set up debugging environment of the original repository ☐ Created script that successfully runs the `forward()` pass using the original repository and checkpoint ☐ Successfully added the model skeleton to 🤗 Transformers ☐ Successfully converted original checkpoint to 🤗 Transformers checkpoint ☐ Successfully ran `forward()` pass in 🤗 Transformers that gives identical output to original checkpoint ☐ Finished model tests in 🤗 Transformers ☐ Successfully added tokenizer in 🤗 Transformers ☐ Run end-to-end integration tests ☐ Finished docs ☐ Uploaded model weights to the Hub ☐ Submitted the pull request ☐ (Optional) Added a demo notebook To begin with, we usually recommend starting by getting a good theoretical understanding of `BrandNewBert`. However, if you prefer to understand the theoretical aspects of the model *on-the-job*, then it is totally fine to directly dive into the `BrandNewBert`'s code-base. This option might suit you better if your engineering skills are better than your theoretical skill, if you have trouble understanding `BrandNewBert`'s paper, or if you just enjoy programming much more than reading scientific papers. ### 1. (Optional) Theoretical aspects of BrandNewBert You should take some time to read *BrandNewBert's* paper, if such descriptive work exists. There might be large sections of the paper that are difficult to understand. If this is the case, this is fine - don't worry! The goal is not to get a deep theoretical understanding of the paper, but to extract the necessary information required to effectively re-implement the model in 🤗 Transformers. That being said, you don't have to spend too much time on the theoretical aspects, but rather focus on the practical ones, namely: - What type of model is *brand_new_bert*? BERT-like encoder-only model? GPT2-like decoder-only model? BART-like encoder-decoder model? Look at the [model_summary](model_summary) if you're not familiar with the differences between those. - What are the applications of *brand_new_bert*? Text classification? Text generation? Seq2Seq tasks, *e.g.,* summarization? - What is the novel feature of the model that makes it different from BERT/GPT-2/BART? - Which of the already existing [🤗 Transformers models](https://huggingface.co/transformers/#contents) is most similar to *brand_new_bert*? - What type of tokenizer is used? A sentencepiece tokenizer? Word piece tokenizer? Is it the same tokenizer as used for BERT or BART? After you feel like you have gotten a good overview of the architecture of the model, you might want to write to the Hugging Face team with any questions you might have. This might include questions regarding the model's architecture, its attention layer, etc. We will be more than happy to help you. ### 2. Next prepare your environment 1. Fork the [repository](https://github.com/huggingface/transformers) by clicking on the ‘Fork' button on the repository's page. This creates a copy of the code under your GitHub user account. 2. Clone your `transformers` fork to your local disk, and add the base repository as a remote: ```bash git clone https://github.com/[your Github handle]/transformers.git cd transformers git remote add upstream https://github.com/huggingface/transformers.git 3. 
Set up a development environment, for instance by running the following command: ```bash python -m venv .env source .env/bin/activate pip install -e "".[dev]"" Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a failure with this command. If that's the case make sure to install the Deep Learning framework you are working with (PyTorch, TensorFlow and/or Flax) then do: ```bash pip install -e "".[quality]"" which should be enough for most use cases. You can then return to the parent directory ```bash cd .. 4. We recommend adding the PyTorch version of *brand_new_bert* to Transformers. To install PyTorch, please follow the instructions on https://pytorch.org/get-started/locally/. **Note:** You don't need to have CUDA installed. Making the new model work on CPU is sufficient. 5. To port *brand_new_bert*, you will also need access to its original repository: ```bash git clone https://github.com/org_that_created_brand_new_bert_org/brand_new_bert.git cd brand_new_bert pip install -e . Now you have set up a development environment to port *brand_new_bert* to 🤗 Transformers. ### 3.-4. Run a pretrained checkpoint using the original repository At first, you will work on the original *brand_new_bert* repository. Often, the original implementation is very “researchy”. Meaning that documentation might be lacking and the code can be difficult to understand. But this should be exactly your motivation to reimplement *brand_new_bert*. At Hugging Face, one of our main goals is to *make people stand on the shoulders of giants* which translates here very well into taking a working model and rewriting it to make it as **accessible, user-friendly, and beautiful** as possible. This is the number-one motivation to re-implement models into 🤗 Transformers - trying to make complex new NLP technology accessible to **everybody**. You should start thereby by diving into the original repository. Successfully running the official pretrained model in the original repository is often **the most difficult** step. From our experience, it is very important to spend some time getting familiar with the original code-base. You need to figure out the following: - Where to find the pretrained weights? - How to load the pretrained weights into the corresponding model? - How to run the tokenizer independently from the model? - Trace one forward pass so that you know which classes and functions are required for a simple forward pass. Usually, you only have to reimplement those functions. - Be able to locate the important components of the model: Where is the model's class? Are there model sub-classes, *e.g.* EncoderModel, DecoderModel? Where is the self-attention layer? Are there multiple different attention layers, *e.g.* *self-attention*, *cross-attention*? - How can you debug the model in the original environment of the repo? Do you have to add *print* statements, can you work with an interactive debugger like *ipdb*, or should you use an efficient IDE to debug the model, like PyCharm? It is very important that before you start the porting process, you can **efficiently** debug code in the original repository! Also, remember that you are working with an open-source library, so do not hesitate to open an issue, or even a pull request in the original repository. The maintainers of this repository are most likely very happy about someone looking into their code! 
At this point, it is really up to you which debugging environment and strategy you prefer to use to debug the original model. We strongly advise against setting up a costly GPU environment, but simply work on a CPU both when starting to dive into the original repository and also when starting to write the 🤗 Transformers implementation of the model. Only at the very end, when the model has already been successfully ported to 🤗 Transformers, one should verify that the model also works as expected on GPU. In general, there are two possible debugging environments for running the original model - [Jupyter notebooks](https://jupyter.org/) / [google colab](https://colab.research.google.com/notebooks/intro.ipynb) - Local python scripts. Jupyter notebooks have the advantage that they allow for cell-by-cell execution which can be helpful to better split logical components from one another and to have faster debugging cycles as intermediate results can be stored. Also, notebooks are often easier to share with other contributors, which might be very helpful if you want to ask the Hugging Face team for help. If you are familiar with Jupyter notebooks, we strongly recommend you work with them. The obvious disadvantage of Jupyter notebooks is that if you are not used to working with them you will have to spend some time adjusting to the new programming environment and you might not be able to use your known debugging tools anymore, like `ipdb`. For each code-base, a good first step is always to load a **small** pretrained checkpoint and to be able to reproduce a single forward pass using a dummy integer vector of input IDs as an input. Such a script could look like this (in pseudocode): thon model = BrandNewBertModel.load_pretrained_checkpoint(""/path/to/checkpoint/"") input_ids = [0, 4, 5, 2, 3, 7, 9] # vector of input ids original_output = model.predict(input_ids) Next, regarding the debugging strategy, there are generally a few from which to choose from: - Decompose the original model into many small testable components and run a forward pass on each of those for verification - Decompose the original model only into the original *tokenizer* and the original *model*, run a forward pass on those, and use intermediate print statements or breakpoints for verification Again, it is up to you which strategy to choose. Often, one or the other is advantageous depending on the original code base. If the original code-base allows you to decompose the model into smaller sub-components, *e.g.* if the original code-base can easily be run in eager mode, it is usually worth the effort to do so. 
There are some important advantages to taking the more difficult road in the beginning: - at a later stage when comparing the original model to the Hugging Face implementation, you can verify automatically for each component individually that the corresponding component of the 🤗 Transformers implementation matches instead of relying on visual comparison via print statements - it can give you some rope to decompose the big problem of porting a model into smaller problems of just porting individual components and thus structure your work better - separating the model into logical meaningful components will help you to get a better overview of the model's design and thus to better understand the model - at a later stage those component-by-component tests help you to ensure that no regression occurs as you continue changing your code [Lysandre's](https://gist.github.com/LysandreJik/db4c948f6b4483960de5cbac598ad4ed) integration checks for ELECTRA gives a nice example of how this can be done. However, if the original code-base is very complex or only allows intermediate components to be run in a compiled mode, it might be too time-consuming or even impossible to separate the model into smaller testable sub-components. A good example is [T5's MeshTensorFlow](https://github.com/tensorflow/mesh/tree/master/mesh_tensorflow) library which is very complex and does not offer a simple way to decompose the model into its sub-components. For such libraries, one often relies on verifying print statements. No matter which strategy you choose, the recommended procedure is often the same that you should start to debug the starting layers first and the ending layers last. It is recommended that you retrieve the output, either by print statements or sub-component functions, of the following layers in the following order: 1. Retrieve the input IDs passed to the model 2. Retrieve the word embeddings 3. Retrieve the input of the first Transformer layer 4. Retrieve the output of the first Transformer layer 5. Retrieve the output of the following n - 1 Transformer layers 6. Retrieve the output of the whole BrandNewBert Model Input IDs should thereby consists of an array of integers, *e.g.* `input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]` The outputs of the following layers often consist of multi-dimensional float arrays and can look like this: [[ [-0.1465, -0.6501, 0.1993, , 0.1451, 0.3430, 0.6024], [-0.4417, -0.5920, 0.3450, , -0.3062, 0.6182, 0.7132], [-0.5009, -0.7122, 0.4548, , -0.3662, 0.6091, 0.7648], , [-0.5613, -0.6332, 0.4324, , -0.3792, 0.7372, 0.9288], [-0.5416, -0.6345, 0.4180, , -0.3564, 0.6992, 0.9191], [-0.5334, -0.6403, 0.4271, , -0.3339, 0.6533, 0.8694]]], We expect that every model added to 🤗 Transformers passes a couple of integration tests, meaning that the original model and the reimplemented version in 🤗 Transformers have to give the exact same output up to a precision of 0.001! Since it is normal that the exact same model written in different libraries can give a slightly different output depending on the library framework, we accept an error tolerance of 1e-3 (0.001). It is not enough if the model gives nearly the same output, they have to be almost identical. Therefore, you will certainly compare the intermediate outputs of the 🤗 Transformers version multiple times against the intermediate outputs of the original implementation of *brand_new_bert* in which case an **efficient** debugging environment of the original repository is absolutely important. 
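In practice, such a comparison usually boils down to a small helper around `torch.allclose`. The sketch below is only an illustration: the tensors passed in are assumed to be the intermediate values you retrieved from the original implementation and from your 🤗 Transformers port, and the variable names in the commented usage are placeholders.

```python
import torch

def compare_intermediate_output(original, ported, name, atol=1e-3):
    """Compare one intermediate tensor of the original model against the ported one."""
    # The original values may come in as NumPy arrays; cast both sides to float32 tensors.
    original = torch.as_tensor(original, dtype=torch.float32)
    ported = torch.as_tensor(ported, dtype=torch.float32)
    assert original.shape == ported.shape, f"{name}: shape mismatch {tuple(original.shape)} vs {tuple(ported.shape)}"
    max_diff = (original - ported).abs().max().item()
    if not torch.allclose(original, ported, atol=atol):
        raise ValueError(f"{name}: max absolute difference {max_diff:.2e} exceeds {atol}")
    print(f"{name} looks good! Max absolute difference: {max_diff:.2e}")

# Hypothetical usage, layer by layer, in the order suggested above:
# compare_intermediate_output(original_word_embeddings, hf_word_embeddings, "word embeddings")
# compare_intermediate_output(original_layer_1_output, hf_layer_1_output, "first Transformer layer")
```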
Here is some advice to make your debugging environment as efficient as possible. - Find the best way of debugging intermediate results. Is the original repository written in PyTorch? Then you should probably take the time to write a longer script that decomposes the original model into smaller sub-components to retrieve intermediate values. Is the original repository written in Tensorflow 1? Then you might have to rely on TensorFlow print operations like [tf.print](https://www.tensorflow.org/api_docs/python/tf/print) to output intermediate values. Is the original repository written in Jax? Then make sure that the model is **not jitted** when running the forward pass, *e.g.* check-out [this link](https://github.com/google/jax/issues/196). - Use the smallest pretrained checkpoint you can find. The smaller the checkpoint, the faster your debug cycle becomes. It is not efficient if your pretrained model is so big that your forward pass takes more than 10 seconds. In case only very large checkpoints are available, it might make more sense to create a dummy model in the new environment with randomly initialized weights and save those weights for comparison with the 🤗 Transformers version of your model - Make sure you are using the easiest way of calling a forward pass in the original repository. Ideally, you want to find the function in the original repository that **only** calls a single forward pass, *i.e.* that is often called `predict`, `evaluate`, `forward` or `__call__`. You don't want to debug a function that calls `forward` multiple times, *e.g.* to generate text, like `autoregressive_sample`, `generate`. - Try to separate the tokenization from the model's *forward* pass. If the original repository shows examples where you have to input a string, then try to find out where in the forward call the string input is changed to input ids and start from this point. This might mean that you have to possibly write a small script yourself or change the original code so that you can directly input the ids instead of an input string. - Make sure that the model in your debugging setup is **not** in training mode, which often causes the model to yield random outputs due to multiple dropout layers in the model. Make sure that the forward pass in your debugging environment is **deterministic** so that the dropout layers are not used. Or use *transformers.utils.set_seed* if the old and new implementations are in the same framework. The following section gives you more specific details/tips on how you can do this for *brand_new_bert*. ### 5.-14. Port BrandNewBert to 🤗 Transformers Next, you can finally start adding new code to 🤗 Transformers. Go into the clone of your 🤗 Transformers' fork: ```bash cd transformers In the special case that you are adding a model whose architecture exactly matches the model architecture of an existing model you only have to add a conversion script as described in [this section](#write-a-conversion-script). In this case, you can just re-use the whole model architecture of the already existing model. Otherwise, let's start generating a new model. You have two choices here: - `transformers-cli add-new-model-like` to add a new model like an existing one - `transformers-cli add-new-model` to add a new model from our template (will look like BERT or Bart depending on the type of model you select) In both cases, you will be prompted with a questionnaire to fill in the basic information of your model. 
The second command requires to install `cookiecutter`, you can find more information on it [here](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model). **Open a Pull Request on the main huggingface/transformers repo** Before starting to adapt the automatically generated code, now is the time to open a “Work in progress (WIP)” pull request, *e.g.* “[WIP] Add *brand_new_bert*”, in 🤗 Transformers so that you and the Hugging Face team can work side-by-side on integrating the model into 🤗 Transformers. You should do the following: 1. Create a branch with a descriptive name from your main branch ```bash git checkout -b add_brand_new_bert 2. Commit the automatically generated code: ```bash git add . git commit 3. Fetch and rebase to current main ```bash git fetch upstream git rebase upstream/main 4. Push the changes to your account using: ```bash git push -u origin a-descriptive-name-for-my-changes 5. Once you are satisfied, go to the webpage of your fork on GitHub. Click on “Pull request”. Make sure to add the GitHub handle of some members of the Hugging Face team as reviewers, so that the Hugging Face team gets notified for future changes. 6. Change the PR into a draft by clicking on “Convert to draft” on the right of the GitHub pull request web page. In the following, whenever you have made some progress, don't forget to commit your work and push it to your account so that it shows in the pull request. Additionally, you should make sure to update your work with the current main from time to time by doing: ```bash git fetch upstream git merge upstream/main In general, all questions you might have regarding the model or your implementation should be asked in your PR and discussed/solved in the PR. This way, the Hugging Face team will always be notified when you are committing new code or if you have a question. It is often very helpful to point the Hugging Face team to your added code so that the Hugging Face team can efficiently understand your problem or question. To do so, you can go to the “Files changed” tab where you see all of your changes, go to a line regarding which you want to ask a question, and click on the “+” symbol to add a comment. Whenever a question or problem has been solved, you can click on the “Resolve” button of the created comment. In the same way, the Hugging Face team will open comments when reviewing your code. We recommend asking most questions on GitHub on your PR. For some very general questions that are not very useful for the public, feel free to ping the Hugging Face team by Slack or email. **5. Adapt the generated models code for brand_new_bert** At first, we will focus only on the model itself and not care about the tokenizer. All the relevant code should be found in the generated files `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` and `src/transformers/models/brand_new_bert/configuration_brand_new_bert.py`. Now you can finally start coding :). The generated code in `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` will either have the same architecture as BERT if it's an encoder-only model or BART if it's an encoder-decoder model. At this point, you should remind yourself what you've learned in the beginning about the theoretical aspects of the model: *How is the model different from BERT or BART?*"". 
Implement those changes, which often means changing the *self-attention* layer, the order of the normalization layer, etc… Again, it is often useful to look at the similar architecture of already existing models in Transformers to get a better feeling of how your model should be implemented.

**Note** that at this point, you don't have to be very sure that your code is fully correct or clean. Rather, it is advised to add a first *unclean*, copy-pasted version of the original code to `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` until you feel like all the necessary code is added. From our experience, it is much more efficient to quickly add a first version of the required code and improve/correct the code iteratively with the conversion script as described in the next section. The only thing that has to work at this point is that you can instantiate the 🤗 Transformers implementation of *brand_new_bert*, *i.e.* the following command should work:

```python
from transformers import BrandNewBertModel, BrandNewBertConfig

model = BrandNewBertModel(BrandNewBertConfig())
```

The above command will create a model according to the default parameters as defined in `BrandNewBertConfig()` with random weights, thus making sure that the `init()` methods of all components work.

Note that all random initialization should happen in the `_init_weights` method of your `BrandNewBertPreTrainedModel` class. It should initialize all leaf modules depending on the variables of the config. Here is an example with the BERT `_init_weights` method:

```python
def _init_weights(self, module):
    """Initialize the weights"""
    if isinstance(module, nn.Linear):
        module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
        if module.bias is not None:
            module.bias.data.zero_()
    elif isinstance(module, nn.Embedding):
        module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
        if module.padding_idx is not None:
            module.weight.data[module.padding_idx].zero_()
    elif isinstance(module, nn.LayerNorm):
        module.bias.data.zero_()
        module.weight.data.fill_(1.0)
```

You can have some more custom schemes if you need a special initialization for some modules. For instance, in `Wav2Vec2ForPreTraining`, the last two linear layers need to have the initialization of the regular PyTorch `nn.Linear` but all the other ones should use an initialization as above. This is coded like this:

```python
def _init_weights(self, module):
    """Initialize the weights"""
    if isinstance(module, Wav2Vec2ForPreTraining):
        module.project_hid.reset_parameters()
        module.project_q.reset_parameters()
        module.project_hid._is_hf_initialized = True
        module.project_q._is_hf_initialized = True
    elif isinstance(module, nn.Linear):
        module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
        if module.bias is not None:
            module.bias.data.zero_()
```

The `_is_hf_initialized` flag is internally used to make sure we only initialize a submodule once. By setting it to `True` for `module.project_q` and `module.project_hid`, we make sure the custom initialization we did is not overridden later on, as the `_init_weights` function won't be applied to them.

**6. Write a conversion script**

Next, you should write a conversion script that lets you convert the checkpoint you used to debug *brand_new_bert* in the original repository to a checkpoint compatible with your just created 🤗 Transformers implementation of *brand_new_bert*.
It is not advised to write the conversion script from scratch, but rather to look through already existing conversion scripts in 🤗 Transformers for one that has been used to convert a similar model that was written in the same framework as *brand_new_bert*. Usually, it is enough to copy an already existing conversion script and slightly adapt it for your use case. Don't hesitate to ask the Hugging Face team to point you to a similar already existing conversion script for your model. - If you are porting a model from TensorFlow to PyTorch, a good starting point might be BERT's conversion script [here](https://github.com/huggingface/transformers/blob/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2/src/transformers/models/bert/modeling_bert.py#L91) - If you are porting a model from PyTorch to PyTorch, a good starting point might be BART's conversion script [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py) In the following, we'll quickly explain how PyTorch models store layer weights and define layer names. In PyTorch, the name of a layer is defined by the name of the class attribute you give the layer. Let's define a dummy model in PyTorch, called `SimpleModel` as follows: thon from torch import nn class SimpleModel(nn.Module): def __init__(self): super().__init__() self.dense = nn.Linear(10, 10) self.intermediate = nn.Linear(10, 10) self.layer_norm = nn.LayerNorm(10) Now we can create an instance of this model definition which will fill all weights: `dense`, `intermediate`, `layer_norm` with random weights. We can print the model to see its architecture thon model = SimpleModel() print(model) This will print out the following: SimpleModel( (dense): Linear(in_features=10, out_features=10, bias=True) (intermediate): Linear(in_features=10, out_features=10, bias=True) (layer_norm): LayerNorm((10,), eps=1e-05, elementwise_affine=True) ) We can see that the layer names are defined by the name of the class attribute in PyTorch. You can print out the weight values of a specific layer: thon print(model.dense.weight.data) to see that the weights were randomly initialized tensor([[-0.0818, 0.2207, -0.0749, -0.0030, 0.0045, -0.1569, -0.1598, 0.0212, -0.2077, 0.2157], [ 0.1044, 0.0201, 0.0990, 0.2482, 0.3116, 0.2509, 0.2866, -0.2190, 0.2166, -0.0212], [-0.2000, 0.1107, -0.1999, -0.3119, 0.1559, 0.0993, 0.1776, -0.1950, -0.1023, -0.0447], [-0.0888, -0.1092, 0.2281, 0.0336, 0.1817, -0.0115, 0.2096, 0.1415, -0.1876, -0.2467], [ 0.2208, -0.2352, -0.1426, -0.2636, -0.2889, -0.2061, -0.2849, -0.0465, 0.2577, 0.0402], [ 0.1502, 0.2465, 0.2566, 0.0693, 0.2352, -0.0530, 0.1859, -0.0604, 0.2132, 0.1680], [ 0.1733, -0.2407, -0.1721, 0.1484, 0.0358, -0.0633, -0.0721, -0.0090, 0.2707, -0.2509], [-0.1173, 0.1561, 0.2945, 0.0595, -0.1996, 0.2988, -0.0802, 0.0407, 0.1829, -0.1568], [-0.1164, -0.2228, -0.0403, 0.0428, 0.1339, 0.0047, 0.1967, 0.2923, 0.0333, -0.0536], [-0.1492, -0.1616, 0.1057, 0.1950, -0.2807, -0.2710, -0.1586, 0.0739, 0.2220, 0.2358]]). In the conversion script, you should fill those randomly initialized weights with the exact weights of the corresponding layer in the checkpoint. *E.g.* thon # retrieve matching layer weights, e.g. 
by # recursive algorithm layer_name = ""dense"" pretrained_weight = array_of_dense_layer model_pointer = getattr(model, ""dense"") model_pointer.weight.data = torch.from_numpy(pretrained_weight) While doing so, you must verify that each randomly initialized weight of your PyTorch model and its corresponding pretrained checkpoint weight exactly match in both **shape and name**. To do so, it is **necessary** to add assert statements for the shape and print out the names of the checkpoints weights. E.g. you should add statements like: thon assert ( model_pointer.weight.shape == pretrained_weight.shape ), f""Pointer shape of random weight {model_pointer.shape} and array shape of checkpoint weight {pretrained_weight.shape} mismatched"" Besides, you should also print out the names of both weights to make sure they match, *e.g.* thon logger.info(f""Initialize PyTorch weight {layer_name} from {pretrained_weight.name}"") If either the shape or the name doesn't match, you probably assigned the wrong checkpoint weight to a randomly initialized layer of the 🤗 Transformers implementation. An incorrect shape is most likely due to an incorrect setting of the config parameters in `BrandNewBertConfig()` that do not exactly match those that were used for the checkpoint you want to convert. However, it could also be that PyTorch's implementation of a layer requires the weight to be transposed beforehand. Finally, you should also check that **all** required weights are initialized and print out all checkpoint weights that were not used for initialization to make sure the model is correctly converted. It is completely normal, that the conversion trials fail with either a wrong shape statement or a wrong name assignment. This is most likely because either you used incorrect parameters in `BrandNewBertConfig()`, have a wrong architecture in the 🤗 Transformers implementation, you have a bug in the `init()` functions of one of the components of the 🤗 Transformers implementation or you need to transpose one of the checkpoint weights. This step should be iterated with the previous step until all weights of the checkpoint are correctly loaded in the Transformers model. Having correctly loaded the checkpoint into the 🤗 Transformers implementation, you can then save the model under a folder of your choice `/path/to/converted/checkpoint/folder` that should then contain both a `pytorch_model.bin` file and a `config.json` file: thon model.save_pretrained(""/path/to/converted/checkpoint/folder"") **7. Implement the forward pass** Having managed to correctly load the pretrained weights into the 🤗 Transformers implementation, you should now make sure that the forward pass is correctly implemented. In [Get familiar with the original repository](#34-run-a-pretrained-checkpoint-using-the-original-repository), you have already created a script that runs a forward pass of the model using the original repository. Now you should write an analogous script using the 🤗 Transformers implementation instead of the original one. It should look as follows: thon model = BrandNewBertModel.from_pretrained(""/path/to/converted/checkpoint/folder"") input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19] output = model(input_ids).last_hidden_states It is very likely that the 🤗 Transformers implementation and the original model implementation don't give the exact same output the very first time or that the forward pass throws an error. Don't be disappointed - it's expected! First, you should make sure that the forward pass doesn't throw any errors. 
It often happens that the wrong dimensions are used, leading to a *Dimensionality mismatch* error, or that the wrong data type is used, *e.g.* `torch.long` instead of `torch.float32`. Don't hesitate to ask the Hugging Face team for help if you don't manage to solve certain errors.

The final part to make sure the 🤗 Transformers implementation works correctly is to ensure that the outputs are equivalent to a precision of `1e-3`. First, you should ensure that the output shapes are identical, *i.e.* `outputs.shape` should yield the same value for the script of the 🤗 Transformers implementation and the original implementation. Next, you should make sure that the output values are identical as well. This is one of the most difficult parts of adding a new model. Common reasons why the outputs are not identical are:

- Some layers were not added, *i.e.* an *activation* layer was not added, or the residual connection was forgotten
- The word embedding matrix was not tied
- The wrong positional embeddings are used because the original implementation uses an offset
- Dropout is applied during the forward pass. To fix this make sure *model.training is False* and that no dropout layer is falsely activated during the forward pass, *i.e.* pass *self.training* to [PyTorch's functional dropout](https://pytorch.org/docs/stable/nn.functional.html?highlight=dropout#torch.nn.functional.dropout)

The best way to fix the problem is usually to look at the forward pass of the original implementation and the 🤗 Transformers implementation side-by-side and check if there are any differences. Ideally, you should debug/print out intermediate outputs of both implementations of the forward pass to find the exact position in the network where the 🤗 Transformers implementation shows a different output than the original implementation. First, make sure that the hard-coded `input_ids` in both scripts are identical. Next, verify that the outputs of the first transformation of the `input_ids` (usually the word embeddings) are identical. And then work your way up to the very last layer of the network. At some point, you will notice a difference between the two implementations, which should point you to the bug in the 🤗 Transformers implementation. From our experience, a simple and efficient way is to add many print statements in both the original implementation and the 🤗 Transformers implementation, at the same positions in the network respectively, and to successively remove print statements showing the same values for intermediate representations.

When you're confident that both implementations yield the same output, verify the outputs with `torch.allclose(original_output, output, atol=1e-3)`, and you're done with the most difficult part! Congratulations - the work left to be done should be a cakewalk 😊.

**8. Adding all necessary model tests**

At this point, you have successfully added a new model. However, it is very much possible that the model does not yet fully comply with the required design. To make sure the implementation is fully compatible with 🤗 Transformers, all common tests should pass. The Cookiecutter should have automatically added a test file for your model, probably under `tests/models/brand_new_bert/test_modeling_brand_new_bert.py`.
Run this test file to verify that all common tests pass: ```bash pytest tests/models/brand_new_bert/test_modeling_brand_new_bert.py Having fixed all common tests, it is now crucial to ensure that all the nice work you have done is well tested, so that - a) The community can easily understand your work by looking at specific tests of *brand_new_bert* - b) Future changes to your model will not break any important feature of the model. At first, integration tests should be added. Those integration tests essentially do the same as the debugging scripts you used earlier to implement the model to 🤗 Transformers. A template of those model tests has already added by the Cookiecutter, called `BrandNewBertModelIntegrationTests` and only has to be filled out by you. To ensure that those tests are passing, run ```bash RUN_SLOW=1 pytest -sv tests/models/brand_new_bert/test_modeling_brand_new_bert.py::BrandNewBertModelIntegrationTests In case you are using Windows, you should replace `RUN_SLOW=1` with `SET RUN_SLOW=1` Second, all features that are special to *brand_new_bert* should be tested additionally in a separate test under `BrandNewBertModelTester`/``BrandNewBertModelTest`. This part is often forgotten but is extremely useful in two ways: - It helps to transfer the knowledge you have acquired during the model addition to the community by showing how the special features of *brand_new_bert* should work. - Future contributors can quickly test changes to the model by running those special tests. **9. Implement the tokenizer** Next, we should add the tokenizer of *brand_new_bert*. Usually, the tokenizer is equivalent to or very similar to an already existing tokenizer of 🤗 Transformers. It is very important to find/extract the original tokenizer file and to manage to load this file into the 🤗 Transformers' implementation of the tokenizer. To ensure that the tokenizer works correctly, it is recommended to first create a script in the original repository that inputs a string and returns the `input_ids``. It could look similar to this (in pseudo-code): thon input_str = ""This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words."" model = BrandNewBertModel.load_pretrained_checkpoint(""/path/to/checkpoint/"") input_ids = model.tokenize(input_str) You might have to take a deeper look again into the original repository to find the correct tokenizer function or you might even have to do changes to your clone of the original repository to only output the `input_ids`. Having written a functional tokenization script that uses the original repository, an analogous script for 🤗 Transformers should be created. It should look similar to this: thon from transformers import BrandNewBertTokenizer input_str = ""This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words."" tokenizer = BrandNewBertTokenizer.from_pretrained(""/path/to/tokenizer/folder/"") input_ids = tokenizer(input_str).input_ids When both `input_ids` yield the same values, as a final step a tokenizer test file should also be added. Analogous to the modeling test files of *brand_new_bert*, the tokenization test files of *brand_new_bert* should contain a couple of hard-coded integration tests. **10. Run End-to-end integration tests** Having added the tokenizer, you should also add a couple of end-to-end integration tests using both the model and the tokenizer to `tests/models/brand_new_bert/test_modeling_brand_new_bert.py` in 🤗 Transformers. 
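A rough sketch of what such a test could look like is shown below. The checkpoint name, the model class, and the expected output string are hypothetical placeholders, assuming an encoder-decoder *brand_new_bert* checkpoint fine-tuned for translation:

```python
# Hypothetical end-to-end integration test; the checkpoint name, model class,
# and expected output are placeholders to be replaced with the real ones.
import torch
from transformers import BrandNewBertForConditionalGeneration, BrandNewBertTokenizer


def test_inference_end_to_end():
    model = BrandNewBertForConditionalGeneration.from_pretrained("author/brand_new_bert-finetuned-translation")
    tokenizer = BrandNewBertTokenizer.from_pretrained("author/brand_new_bert-finetuned-translation")
    inputs = tokenizer("Hello, how are you?", return_tensors="pt")
    with torch.no_grad():
        generated_ids = model.generate(**inputs, max_new_tokens=20)
    text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
    assert text == "Hallo, wie geht es dir?"  # hard-coded expected output
```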
Such a test should show on a meaningful text-to-text sample that the 🤗 Transformers implementation works as expected. A meaningful text-to-text sample can include *e.g.* a source-to-target-translation pair, an article-to-summary pair, a question-to-answer pair, etc… If none of the ported checkpoints has been fine-tuned on a downstream task it is enough to simply rely on the model tests. In a final step to ensure that the model is fully functional, it is advised that you also run all tests on GPU. It can happen that you forgot to add some `.to(self.device)` statements to internal tensors of the model, which in such a test would show in an error. In case you have no access to a GPU, the Hugging Face team can take care of running those tests for you. **11. Add Docstring** Now, all the necessary functionality for *brand_new_bert* is added - you're almost done! The only thing left to add is a nice docstring and a doc page. The Cookiecutter should have added a template file called `docs/source/model_doc/brand_new_bert.md` that you should fill out. Users of your model will usually first look at this page before using your model. Hence, the documentation must be understandable and concise. It is very useful for the community to add some *Tips* to show how the model should be used. Don't hesitate to ping the Hugging Face team regarding the docstrings. Next, make sure that the docstring added to `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` is correct and included all necessary inputs and outputs. We have a detailed guide about writing documentation and our docstring format [here](writing-documentation). It is always to good to remind oneself that documentation should be treated at least as carefully as the code in 🤗 Transformers since the documentation is usually the first contact point of the community with the model. **Code refactor** Great, now you have added all the necessary code for *brand_new_bert*. At this point, you should correct some potential incorrect code style by running: ```bash make style and verify that your coding style passes the quality check: ```bash make quality There are a couple of other very strict design tests in 🤗 Transformers that might still be failing, which shows up in the tests of your pull request. This is often because of some missing information in the docstring or some incorrect naming. The Hugging Face team will surely help you if you're stuck here. Lastly, it is always a good idea to refactor one's code after having ensured that the code works correctly. With all tests passing, now it's a good time to go over the added code again and do some refactoring. You have now finished the coding part, congratulation! 🎉 You are Awesome! 😎 **12. Upload the models to the model hub** In this final part, you should convert and upload all checkpoints to the model hub and add a model card for each uploaded model checkpoint. You can get familiar with the hub functionalities by reading our [Model sharing and uploading Page](model_sharing). You should work alongside the Hugging Face team here to decide on a fitting name for each checkpoint and to get the required access rights to be able to upload the model under the author's organization of *brand_new_bert*. The `push_to_hub` method, present in all models in `transformers`, is a quick and efficient way to push your checkpoint to the hub. A little snippet is pasted below: thon brand_new_bert.push_to_hub(""brand_new_bert"") # Uncomment the following line to push to an organization. 
# brand_new_bert.push_to_hub(""/brand_new_bert"") It is worth spending some time to create fitting model cards for each checkpoint. The model cards should highlight the specific characteristics of this particular checkpoint, *e.g.* On which dataset was the checkpoint pretrained/fine-tuned on? On what down-stream task should the model be used? And also include some code on how to correctly use the model. **13. (Optional) Add notebook** It is very helpful to add a notebook that showcases in-detail how *brand_new_bert* can be used for inference and/or fine-tuned on a downstream task. This is not mandatory to merge your PR, but very useful for the community. **14. Submit your finished PR** You're done programming now and can move to the last step, which is getting your PR merged into main. Usually, the Hugging Face team should have helped you already at this point, but it is worth taking some time to give your finished PR a nice description and eventually add comments to your code, if you want to point out certain design choices to your reviewer. ### Share your work!! Now, it's time to get some credit from the community for your work! Having completed a model addition is a major contribution to Transformers and the whole NLP community. Your code and the ported pre-trained models will certainly be used by hundreds and possibly even thousands of developers and researchers. You should be proud of your work and share your achievements with the community. **You have made another model that is super easy to access for everyone in the community! 🤯** " benchmarks.md," # Benchmarks Hugging Face's Benchmarking tools are deprecated and it is advised to use external Benchmarking libraries to measure the speed and memory complexity of Transformer models. [[open-in-colab]] Let's take a look at how 🤗 Transformers models can be benchmarked, best practices, and already available benchmarks. A notebook explaining in more detail how to benchmark 🤗 Transformers models can be found [here](https://github.com/huggingface/notebooks/tree/main/examples/benchmark.ipynb). ## How to benchmark 🤗 Transformers models The classes [`PyTorchBenchmark`] and [`TensorFlowBenchmark`] allow to flexibly benchmark 🤗 Transformers models. The benchmark classes allow us to measure the _peak memory usage_ and _required time_ for both _inference_ and _training_. Hereby, _inference_ is defined by a single forward pass, and _training_ is defined by a single forward pass and backward pass. The benchmark classes [`PyTorchBenchmark`] and [`TensorFlowBenchmark`] expect an object of type [`PyTorchBenchmarkArguments`] and [`TensorFlowBenchmarkArguments`], respectively, for instantiation. [`PyTorchBenchmarkArguments`] and [`TensorFlowBenchmarkArguments`] are data classes and contain all relevant configurations for their corresponding benchmark class. In the following example, it is shown how a BERT model of type _bert-base-cased_ can be benchmarked. >>> from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments >>> args = PyTorchBenchmarkArguments(models=[""bert-base-uncased""], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512]) >>> benchmark = PyTorchBenchmark(args) >>> from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments >>> args = TensorFlowBenchmarkArguments( models=[""bert-base-uncased""], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512] ) >>> benchmark = TensorFlowBenchmark(args) Here, three arguments are given to the benchmark argument data classes, namely `models`, `batch_sizes`, and `sequence_lengths`. 
The argument `models` is required and expects a `list` of model identifiers from the [model hub](https://huggingface.co/models) The `list` arguments `batch_sizes` and `sequence_lengths` define the size of the `input_ids` on which the model is benchmarked. There are many more parameters that can be configured via the benchmark argument data classes. For more detail on these one can either directly consult the files `src/transformers/benchmark/benchmark_args_utils.py`, `src/transformers/benchmark/benchmark_args.py` (for PyTorch) and `src/transformers/benchmark/benchmark_args_tf.py` (for Tensorflow). Alternatively, running the following shell commands from root will print out a descriptive list of all configurable parameters for PyTorch and Tensorflow respectively. ```bash python examples/pytorch/benchmarking/run_benchmark.py --help An instantiated benchmark object can then simply be run by calling `benchmark.run()`. >>> results = benchmark.run() >>> print(results) ==================== INFERENCE - SPEED - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- bert-base-uncased 8 8 0.006 bert-base-uncased 8 32 0.006 bert-base-uncased 8 128 0.018 bert-base-uncased 8 512 0.088 -------------------------------------------------------------------------------- ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- bert-base-uncased 8 8 1227 bert-base-uncased 8 32 1281 bert-base-uncased 8 128 1307 bert-base-uncased 8 512 1539 -------------------------------------------------------------------------------- ==================== ENVIRONMENT INFORMATION ==================== - transformers_version: 2.11.0 - framework: PyTorch - use_torchscript: False - framework_version: 1.4.0 - python_version: 3.6.10 - system: Linux - cpu: x86_64 - architecture: 64bit - date: 2020-06-29 - time: 08:58:43.371351 - fp16: False - use_multiprocessing: True - only_pretrain_model: False - cpu_ram_mb: 32088 - use_gpu: True - num_gpus: 1 - gpu: TITAN RTX - gpu_ram_mb: 24217 - gpu_power_watts: 280.0 - gpu_performance_state: 2 - use_tpu: False ```bash python examples/tensorflow/benchmarking/run_benchmark_tf.py --help An instantiated benchmark object can then simply be run by calling `benchmark.run()`. 
>>> results = benchmark.run() >>> print(results) >>> results = benchmark.run() >>> print(results) ==================== INFERENCE - SPEED - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- bert-base-uncased 8 8 0.005 bert-base-uncased 8 32 0.008 bert-base-uncased 8 128 0.022 bert-base-uncased 8 512 0.105 -------------------------------------------------------------------------------- ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- bert-base-uncased 8 8 1330 bert-base-uncased 8 32 1330 bert-base-uncased 8 128 1330 bert-base-uncased 8 512 1770 -------------------------------------------------------------------------------- ==================== ENVIRONMENT INFORMATION ==================== - transformers_version: 2.11.0 - framework: Tensorflow - use_xla: False - framework_version: 2.2.0 - python_version: 3.6.10 - system: Linux - cpu: x86_64 - architecture: 64bit - date: 2020-06-29 - time: 09:26:35.617317 - fp16: False - use_multiprocessing: True - only_pretrain_model: False - cpu_ram_mb: 32088 - use_gpu: True - num_gpus: 1 - gpu: TITAN RTX - gpu_ram_mb: 24217 - gpu_power_watts: 280.0 - gpu_performance_state: 2 - use_tpu: False By default, the _time_ and the _required memory_ for _inference_ are benchmarked. In the example output above the first two sections show the result corresponding to _inference time_ and _inference memory_. In addition, all relevant information about the computing environment, _e.g._ the GPU type, the system, the library versions, etc are printed out in the third section under _ENVIRONMENT INFORMATION_. This information can optionally be saved in a _.csv_ file when adding the argument `save_to_csv=True` to [`PyTorchBenchmarkArguments`] and [`TensorFlowBenchmarkArguments`] respectively. In this case, every section is saved in a separate _.csv_ file. The path to each _.csv_ file can optionally be defined via the argument data classes. Instead of benchmarking pre-trained models via their model identifier, _e.g._ `bert-base-uncased`, the user can alternatively benchmark an arbitrary configuration of any available model class. In this case, a `list` of configurations must be inserted with the benchmark args as follows. 
>>> from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments, BertConfig >>> args = PyTorchBenchmarkArguments( models=[""bert-base"", ""bert-384-hid"", ""bert-6-lay""], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512] ) >>> config_base = BertConfig() >>> config_384_hid = BertConfig(hidden_size=384) >>> config_6_lay = BertConfig(num_hidden_layers=6) >>> benchmark = PyTorchBenchmark(args, configs=[config_base, config_384_hid, config_6_lay]) >>> benchmark.run() ==================== INFERENCE - SPEED - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- bert-base 8 128 0.006 bert-base 8 512 0.006 bert-base 8 128 0.018 bert-base 8 512 0.088 bert-384-hid 8 8 0.006 bert-384-hid 8 32 0.006 bert-384-hid 8 128 0.011 bert-384-hid 8 512 0.054 bert-6-lay 8 8 0.003 bert-6-lay 8 32 0.004 bert-6-lay 8 128 0.009 bert-6-lay 8 512 0.044 -------------------------------------------------------------------------------- ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- bert-base 8 8 1277 bert-base 8 32 1281 bert-base 8 128 1307 bert-base 8 512 1539 bert-384-hid 8 8 1005 bert-384-hid 8 32 1027 bert-384-hid 8 128 1035 bert-384-hid 8 512 1255 bert-6-lay 8 8 1097 bert-6-lay 8 32 1101 bert-6-lay 8 128 1127 bert-6-lay 8 512 1359 -------------------------------------------------------------------------------- ==================== ENVIRONMENT INFORMATION ==================== - transformers_version: 2.11.0 - framework: PyTorch - use_torchscript: False - framework_version: 1.4.0 - python_version: 3.6.10 - system: Linux - cpu: x86_64 - architecture: 64bit - date: 2020-06-29 - time: 09:35:25.143267 - fp16: False - use_multiprocessing: True - only_pretrain_model: False - cpu_ram_mb: 32088 - use_gpu: True - num_gpus: 1 - gpu: TITAN RTX - gpu_ram_mb: 24217 - gpu_power_watts: 280.0 - gpu_performance_state: 2 - use_tpu: False >>> from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments, BertConfig >>> args = TensorFlowBenchmarkArguments( models=[""bert-base"", ""bert-384-hid"", ""bert-6-lay""], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512] ) >>> config_base = BertConfig() >>> config_384_hid = BertConfig(hidden_size=384) >>> config_6_lay = BertConfig(num_hidden_layers=6) >>> benchmark = TensorFlowBenchmark(args, configs=[config_base, config_384_hid, config_6_lay]) >>> benchmark.run() ==================== INFERENCE - SPEED - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- bert-base 8 8 0.005 bert-base 8 32 0.008 bert-base 8 128 0.022 bert-base 8 512 0.106 bert-384-hid 8 8 0.005 bert-384-hid 8 32 0.007 bert-384-hid 8 128 0.018 bert-384-hid 8 512 0.064 bert-6-lay 8 8 0.002 bert-6-lay 8 32 0.003 bert-6-lay 8 128 0.0011 bert-6-lay 8 512 0.074 -------------------------------------------------------------------------------- ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in 
MB -------------------------------------------------------------------------------- bert-base 8 8 1330 bert-base 8 32 1330 bert-base 8 128 1330 bert-base 8 512 1770 bert-384-hid 8 8 1330 bert-384-hid 8 32 1330 bert-384-hid 8 128 1330 bert-384-hid 8 512 1540 bert-6-lay 8 8 1330 bert-6-lay 8 32 1330 bert-6-lay 8 128 1330 bert-6-lay 8 512 1540 -------------------------------------------------------------------------------- ==================== ENVIRONMENT INFORMATION ==================== - transformers_version: 2.11.0 - framework: Tensorflow - use_xla: False - framework_version: 2.2.0 - python_version: 3.6.10 - system: Linux - cpu: x86_64 - architecture: 64bit - date: 2020-06-29 - time: 09:38:15.487125 - fp16: False - use_multiprocessing: True - only_pretrain_model: False - cpu_ram_mb: 32088 - use_gpu: True - num_gpus: 1 - gpu: TITAN RTX - gpu_ram_mb: 24217 - gpu_power_watts: 280.0 - gpu_performance_state: 2 - use_tpu: False Again, _inference time_ and _required memory_ for _inference_ are measured, but this time for customized configurations of the `BertModel` class. This feature can especially be helpful when deciding for which configuration the model should be trained. ## Benchmark best practices This section lists a couple of best practices one should be aware of when benchmarking a model. - Currently, only single device benchmarking is supported. When benchmarking on GPU, it is recommended that the user specifies on which device the code should be run by setting the `CUDA_VISIBLE_DEVICES` environment variable in the shell, _e.g._ `export CUDA_VISIBLE_DEVICES=0` before running the code. - The option `no_multi_processing` should only be set to `True` for testing and debugging. To ensure accurate memory measurement it is recommended to run each memory benchmark in a separate process by making sure `no_multi_processing` is set to `True`. - One should always state the environment information when sharing the results of a model benchmark. Results can vary heavily between different GPU devices, library versions, etc., so that benchmark results on their own are not very useful for the community. ## Sharing your benchmark Previously all available core models (10 at the time) have been benchmarked for _inference time_, across many different settings: using PyTorch, with and without TorchScript, using TensorFlow, with and without XLA. All of those tests were done across CPUs (except for TensorFlow XLA) and GPUs. The approach is detailed in the [following blogpost](https://medium.com/huggingface/benchmarking-transformers-pytorch-and-tensorflow-e2917fb891c2) and the results are available [here](https://docs.google.com/spreadsheets/d/1sryqufw2D0XlUH4sq3e9Wnxu5EAQkaohzrJbd5HdQ_w/edit?usp=sharing). With the new _benchmark_ tools, it is easier than ever to share your benchmark results with the community - [PyTorch Benchmarking Results](https://github.com/huggingface/transformers/tree/main/examples/pytorch/benchmarking/README.md). - [TensorFlow Benchmarking Results](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/benchmarking/README.md). " debugging.md," # Debugging ## Multi-GPU Network Issues Debug When training or inferencing with `DistributedDataParallel` and multiple GPU, if you run into issue of inter-communication between processes and/or nodes, you can use the following script to diagnose network issues. 
```bash
wget https://raw.githubusercontent.com/huggingface/transformers/main/scripts/distributed/torch-distributed-gpu-test.py
```

For example, to test how 2 GPUs interact, run:

```bash
python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py
```

If both processes can talk to each other and allocate GPU memory, each will print an OK status.

For more GPUs or nodes, adjust the arguments in the script.

You will find a lot more details inside the diagnostics script and even a recipe for how to run it in a SLURM environment.

An additional level of debugging is to add the `NCCL_DEBUG=INFO` environment variable as follows:

```bash
NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py
```

This will dump a lot of NCCL-related debug information, which you can then search online if you find that some problems are reported. Or if you're not sure how to interpret the output, you can share the log file in an Issue.

## Underflow and Overflow Detection

This feature is currently available for PyTorch only. For multi-GPU training it requires DDP (`torch.distributed.launch`). This feature can be used with any `nn.Module`-based model.

If you start getting `loss=NaN` or the model exhibits some other abnormal behavior due to `inf` or `nan` in activations or weights, you need to discover where the first underflow or overflow happens and what led to it. Luckily you can accomplish that easily by activating a special module that will do the detection automatically.

If you're using [`Trainer`], you just need to add:

```bash
--debug underflow_overflow
```

to the normal command line arguments, or pass `debug="underflow_overflow"` when creating the [`TrainingArguments`] object.

If you're using your own training loop or another Trainer you can accomplish the same with:

```python
from transformers.debug_utils import DebugUnderflowOverflow

debug_overflow = DebugUnderflowOverflow(model)
```

[`~debug_utils.DebugUnderflowOverflow`] inserts hooks into the model that immediately after each forward call will test input and output variables and also the corresponding module's weights.
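To get an intuition for what these hooks do, here is a heavily simplified sketch of an `inf`/`nan`-checking forward hook. This is only an illustration, not the actual `DebugUnderflowOverflow` implementation:

```python
# Simplified illustration of an inf/nan-detecting forward hook; the real
# DebugUnderflowOverflow additionally records frames, weights and batch numbers.
import torch
from torch import nn


def check_finite(module, inputs, output):
    candidates = list(inputs) + [output]
    for tensor in candidates:
        if isinstance(tensor, torch.Tensor) and not torch.isfinite(tensor).all():
            raise RuntimeError(f"inf/nan detected after {module.__class__.__name__}")


def attach_detector(model: nn.Module):
    # Hook every submodule so the first offending layer is the one that raises.
    for submodule in model.modules():
        submodule.register_forward_hook(check_finite)
```

The real module also records the absolute min/max values of each frame, which is what produces the report shown next.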
As soon as `inf` or `nan` is detected in at least one element of the activations or weights, the program will assert and print a report like this (this was caught with `google/mt5-small` under fp16 mixed precision): Detected inf/nan during batch_number=0 Last 21 forward frames: abs min abs max metadata encoder.block.1.layer.1.DenseReluDense.dropout Dropout 0.00e+00 2.57e+02 input[0] 0.00e+00 2.85e+02 output [] encoder.block.2.layer.0 T5LayerSelfAttention 6.78e-04 3.15e+03 input[0] 2.65e-04 3.42e+03 output[0] None output[1] 2.25e-01 1.00e+04 output[2] encoder.block.2.layer.1.layer_norm T5LayerNorm 8.69e-02 4.18e-01 weight 2.65e-04 3.42e+03 input[0] 1.79e-06 4.65e+00 output encoder.block.2.layer.1.DenseReluDense.wi_0 Linear 2.17e-07 4.50e+00 weight 1.79e-06 4.65e+00 input[0] 2.68e-06 3.70e+01 output encoder.block.2.layer.1.DenseReluDense.wi_1 Linear 8.08e-07 2.66e+01 weight 1.79e-06 4.65e+00 input[0] 1.27e-04 2.37e+02 output encoder.block.2.layer.1.DenseReluDense.dropout Dropout 0.00e+00 8.76e+03 input[0] 0.00e+00 9.74e+03 output encoder.block.2.layer.1.DenseReluDense.wo Linear 1.01e-06 6.44e+00 weight 0.00e+00 9.74e+03 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense 1.79e-06 4.65e+00 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.dropout Dropout 3.18e-04 6.27e+04 input[0] 0.00e+00 inf output The example output has been trimmed in the middle for brevity. The second column shows the value of the absolute largest element, so if you have a closer look at the last few frames, the inputs and outputs were in the range of `1e4`. So when this training was done under fp16 mixed precision the very last step overflowed (since under `fp16` the largest number before `inf` is `64e3`). To avoid overflows under `fp16` the activations must remain way below `1e4`, because `1e4 * 1e4 = 1e8` so any matrix multiplication with large activations is going to lead to a numerical overflow condition. At the very start of the trace you can discover at which batch number the problem occurred (here `Detected inf/nan during batch_number=0` means the problem occurred on the first batch). Each reported frame starts by declaring the fully qualified entry for the corresponding module this frame is reporting for. If we look just at this frame: encoder.block.2.layer.1.layer_norm T5LayerNorm 8.69e-02 4.18e-01 weight 2.65e-04 3.42e+03 input[0] 1.79e-06 4.65e+00 output Here, `encoder.block.2.layer.1.layer_norm` indicates that it was a layer norm for the first layer, of the second block of the encoder. And the specific calls of the `forward` is `T5LayerNorm`. Let's look at the last few frames of that report: Detected inf/nan during batch_number=0 Last 21 forward frames: abs min abs max metadata [] encoder.block.2.layer.1.DenseReluDense.wi_0 Linear 2.17e-07 4.50e+00 weight 1.79e-06 4.65e+00 input[0] 2.68e-06 3.70e+01 output encoder.block.2.layer.1.DenseReluDense.wi_1 Linear 8.08e-07 2.66e+01 weight 1.79e-06 4.65e+00 input[0] 1.27e-04 2.37e+02 output encoder.block.2.layer.1.DenseReluDense.wo Linear 1.01e-06 6.44e+00 weight 0.00e+00 9.74e+03 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense 1.79e-06 4.65e+00 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.dropout Dropout 3.18e-04 6.27e+04 input[0] 0.00e+00 inf output The last frame reports for `Dropout.forward` function with the first entry for the only input and the second for the only output. 
You can see that it was called from an attribute `dropout` inside `DenseReluDense` class. We can see that it happened during the first layer, of the 2nd block, during the very first batch. Finally, the absolute largest input elements was `6.27e+04` and same for the output was `inf`. You can see here, that `T5DenseGatedGeluDense.forward` resulted in output activations, whose absolute max value was around 62.7K, which is very close to fp16's top limit of 64K. In the next frame we have `Dropout` which renormalizes the weights, after it zeroed some of the elements, which pushes the absolute max value to more than 64K, and we get an overflow (`inf`). As you can see it's the previous frames that we need to look into when the numbers start going into very large for fp16 numbers. Let's match the report to the code from `models/t5/modeling_t5.py`: thon class T5DenseGatedGeluDense(nn.Module): def __init__(self, config): super().__init__() self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wo = nn.Linear(config.d_ff, config.d_model, bias=False) self.dropout = nn.Dropout(config.dropout_rate) self.gelu_act = ACT2FN[""gelu_new""] def forward(self, hidden_states): hidden_gelu = self.gelu_act(self.wi_0(hidden_states)) hidden_linear = self.wi_1(hidden_states) hidden_states = hidden_gelu * hidden_linear hidden_states = self.dropout(hidden_states) hidden_states = self.wo(hidden_states) return hidden_states Now it's easy to see the `dropout` call, and all the previous calls as well. Since the detection is happening in a forward hook, these reports are printed immediately after each `forward` returns. Going back to the full report, to act on it and to fix the problem, we need to go a few frames up where the numbers started to go up and most likely switch to the `fp32` mode here, so that the numbers don't overflow when multiplied or summed up. Of course, there might be other solutions. For example, we could turn off `amp` temporarily if it's enabled, after moving the original `forward` into a helper wrapper, like so: thon def _forward(self, hidden_states): hidden_gelu = self.gelu_act(self.wi_0(hidden_states)) hidden_linear = self.wi_1(hidden_states) hidden_states = hidden_gelu * hidden_linear hidden_states = self.dropout(hidden_states) hidden_states = self.wo(hidden_states) return hidden_states import torch def forward(self, hidden_states): if torch.is_autocast_enabled(): with torch.cuda.amp.autocast(enabled=False): return self._forward(hidden_states) else: return self._forward(hidden_states) Since the automatic detector only reports on inputs and outputs of full frames, once you know where to look, you may want to analyse the intermediary stages of any specific `forward` function as well. In such a case you can use the `detect_overflow` helper function to inject the detector where you want it, for example: thon from debug_utils import detect_overflow class T5LayerFF(nn.Module): [] def forward(self, hidden_states): forwarded_states = self.layer_norm(hidden_states) detect_overflow(forwarded_states, ""after layer_norm"") forwarded_states = self.DenseReluDense(forwarded_states) detect_overflow(forwarded_states, ""after DenseReluDense"") return hidden_states + self.dropout(forwarded_states) You can see that we added 2 of these and now we track if `inf` or `nan` for `forwarded_states` was detected somewhere in between. 
Actually, the detector already reports these because each of the calls in the example above is a `nn.Module`, but let's say if you had some local direct calculations this is how you'd do that. Additionally, if you're instantiating the debugger in your own code, you can adjust the number of frames printed from its default, e.g.: thon from transformers.debug_utils import DebugUnderflowOverflow debug_overflow = DebugUnderflowOverflow(model, max_frames_to_save=100) ### Specific batch absolute min and max value tracing The same debugging class can be used for per-batch tracing with the underflow/overflow detection feature turned off. Let's say you want to watch the absolute min and max values for all the ingredients of each `forward` call of a given batch, and only do that for batches 1 and 3. Then you instantiate this class as: thon debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3]) And now full batches 1 and 3 will be traced using the same format as the underflow/overflow detector does. Batches are 0-indexed. This is helpful if you know that the program starts misbehaving after a certain batch number, so you can fast-forward right to that area. Here is a sample truncated output for such configuration: *** Starting batch number=1 *** abs min abs max metadata shared Embedding 1.01e-06 7.92e+02 weight 0.00e+00 2.47e+04 input[0] 5.36e-05 7.92e+02 output [] decoder.dropout Dropout 1.60e-07 2.27e+01 input[0] 0.00e+00 2.52e+01 output decoder T5Stack not a tensor output lm_head Linear 1.01e-06 7.92e+02 weight 0.00e+00 1.11e+00 input[0] 6.06e-02 8.39e+01 output T5ForConditionalGeneration not a tensor output *** Starting batch number=3 *** abs min abs max metadata shared Embedding 1.01e-06 7.92e+02 weight 0.00e+00 2.78e+04 input[0] 5.36e-05 7.92e+02 output [] Here you will get a huge number of frames dumped - as many as there were forward calls in your model, so it may or may not what you want, but sometimes it can be easier to use for debugging purposes than a normal debugger. For example, if a problem starts happening at batch number 150. So you can dump traces for batches 149 and 150 and compare where numbers started to diverge. You can also specify the batch number after which to stop the training, with: thon debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3], abort_after_batch_num=3) " model_memory_anatomy.md," # Model training anatomy To understand performance optimization techniques that one can apply to improve efficiency of model training speed and memory utilization, it's helpful to get familiar with how GPU is utilized during training, and how compute intensity varies depending on an operation performed. Let's start by exploring a motivating example of GPU utilization and the training run of a model. For the demonstration, we'll need to install a few libraries: ```bash pip install transformers datasets accelerate nvidia-ml-py3 The `nvidia-ml-py3` library allows us to monitor the memory usage of the models from within Python. You might be familiar with the `nvidia-smi` command in the terminal - this library allows to access the same information in Python directly. Then, we create some dummy data: random token IDs between 100 and 30000 and binary labels for a classifier. In total, we get 512 sequences each with length 512 and store them in a [`~datasets.Dataset`] with PyTorch format. 
>>> import numpy as np >>> from datasets import Dataset >>> seq_len, dataset_size = 512, 512 >>> dummy_data = { ""input_ids"": np.random.randint(100, 30000, (dataset_size, seq_len)), ""labels"": np.random.randint(0, 1, (dataset_size)), } >>> ds = Dataset.from_dict(dummy_data) >>> ds.set_format(""pt"") To print summary statistics for the GPU utilization and the training run with the [`Trainer`] we define two helper functions: >>> from pynvml import * >>> def print_gpu_utilization(): nvmlInit() handle = nvmlDeviceGetHandleByIndex(0) info = nvmlDeviceGetMemoryInfo(handle) print(f""GPU memory occupied: {info.used//1024**2} MB."") >>> def print_summary(result): print(f""Time: {result.metrics['train_runtime']:.2f}"") print(f""Samples/second: {result.metrics['train_samples_per_second']:.2f}"") print_gpu_utilization() Let's verify that we start with a free GPU memory: >>> print_gpu_utilization() GPU memory occupied: 0 MB. That looks good: the GPU memory is not occupied as we would expect before we load any models. If that's not the case on your machine make sure to stop all processes that are using GPU memory. However, not all free GPU memory can be used by the user. When a model is loaded to the GPU the kernels are also loaded, which can take up 1-2GB of memory. To see how much it is we load a tiny tensor into the GPU which triggers the kernels to be loaded as well. >>> import torch >>> torch.ones((1, 1)).to(""cuda"") >>> print_gpu_utilization() GPU memory occupied: 1343 MB. We see that the kernels alone take up 1.3GB of GPU memory. Now let's see how much space the model uses. ## Load Model First, we load the `bert-large-uncased` model. We load the model weights directly to the GPU so that we can check how much space just the weights use. >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(""bert-large-uncased"").to(""cuda"") >>> print_gpu_utilization() GPU memory occupied: 2631 MB. We can see that the model weights alone take up 1.3 GB of GPU memory. The exact number depends on the specific GPU you are using. Note that on newer GPUs a model can sometimes take up more space since the weights are loaded in an optimized fashion that speeds up the usage of the model. Now we can also quickly check if we get the same result as with `nvidia-smi` CLI: ```bash nvidia-smi ```bash Tue Jan 11 08:58:05 2022 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 460.91.03 Driver Version: 460.91.03 CUDA Version: 11.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla V100-SXM2 On | 00000000:00:04.0 Off | 0 | | N/A 37C P0 39W / 300W | 2631MiB / 16160MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N/A N/A 3721 C nvs/codeparrot/bin/python 2629MiB | +-----------------------------------------------------------------------------+ We get the same number as before and you can also see that we are using a V100 GPU with 16GB of memory. 
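As a quick sanity check, the measured footprint lines up with a back-of-the-envelope estimate of 4 bytes per fp32 parameter. The following sketch reuses the `model` loaded above:

```python
# Rough estimate: fp32 weights take ~4 bytes per parameter, which should be
# close to the ~1.3 GB jump in GPU memory observed above.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters -> ~{n_params * 4 / 1024**2:.0f} MB of weight memory")
```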
So now we can start training the model and see how the GPU memory consumption changes. First, we set up a few standard training arguments: default_args = { ""output_dir"": ""tmp"", ""evaluation_strategy"": ""steps"", ""num_train_epochs"": 1, ""log_level"": ""error"", ""report_to"": ""none"", } If you plan to run multiple experiments, in order to properly clear the memory between experiments, restart the Python kernel between experiments. ## Memory utilization at vanilla training Let's use the [`Trainer`] and train the model without using any GPU performance optimization techniques and a batch size of 4: >>> from transformers import TrainingArguments, Trainer, logging >>> logging.set_verbosity_error() >>> training_args = TrainingArguments(per_device_train_batch_size=4, **default_args) >>> trainer = Trainer(model=model, args=training_args, train_dataset=ds) >>> result = trainer.train() >>> print_summary(result) Time: 57.82 Samples/second: 8.86 GPU memory occupied: 14949 MB. We see that already a relatively small batch size almost fills up our GPU's entire memory. However, a larger batch size can often result in faster model convergence or better end performance. So ideally we want to tune the batch size to our model's needs and not to the GPU limitations. What's interesting is that we use much more memory than the size of the model. To understand a bit better why this is the case let's have a look at a model's operations and memory needs. ## Anatomy of Model's Operations Transformers architecture includes 3 main groups of operations grouped below by compute-intensity. 1. **Tensor Contractions** Linear layers and components of Multi-Head Attention all do batched **matrix-matrix multiplications**. These operations are the most compute-intensive part of training a transformer. 2. **Statistical Normalizations** Softmax and layer normalization are less compute-intensive than tensor contractions, and involve one or more **reduction operations**, the result of which is then applied via a map. 3. **Element-wise Operators** These are the remaining operators: **biases, dropout, activations, and residual connections**. These are the least compute-intensive operations. This knowledge can be helpful to know when analyzing performance bottlenecks. This summary is derived from [Data Movement Is All You Need: A Case Study on Optimizing Transformers 2020](https://arxiv.org/abs/2007.00072) ## Anatomy of Model's Memory We've seen that training the model uses much more memory than just putting the model on the GPU. This is because there are many components during training that use GPU memory. The components on GPU memory are the following: 1. model weights 2. optimizer states 3. gradients 4. forward activations saved for gradient computation 5. temporary buffers 6. functionality-specific memory A typical model trained in mixed precision with AdamW requires 18 bytes per model parameter plus activation memory. For inference there are no optimizer states and gradients, so we can subtract those. And thus we end up with 6 bytes per model parameter for mixed precision inference, plus activation memory. Let's look at the details. 
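To make these rules of thumb concrete before going through the breakdown below, here is a back-of-the-envelope sketch applied to the roughly 340M-parameter `bert-large-uncased` model used in this guide (the parameter count is approximate):

```python
# Back-of-the-envelope estimate using the rules of thumb from this section:
# ~18 bytes per parameter for mixed-precision AdamW training,
# ~6 bytes per parameter for mixed-precision inference (activations excluded).
n_params = 340_000_000  # approximate size of bert-large-uncased
print(f"training state: ~{18 * n_params / 1024**3:.1f} GB")    # ≈ 5.7 GB
print(f"inference weights: ~{6 * n_params / 1024**3:.1f} GB")  # ≈ 1.9 GB
```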
**Model Weights:** - 4 bytes * number of parameters for fp32 training - 6 bytes * number of parameters for mixed precision training (maintains a model in fp32 and one in fp16 in memory) **Optimizer States:** - 8 bytes * number of parameters for normal AdamW (maintains 2 states) - 2 bytes * number of parameters for 8-bit AdamW optimizers like [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) - 4 bytes * number of parameters for optimizers like SGD with momentum (maintains only 1 state) **Gradients** - 4 bytes * number of parameters for either fp32 or mixed precision training (gradients are always kept in fp32) **Forward Activations** - size depends on many factors, the key ones being sequence length, hidden size and batch size. There are the input and output that are being passed and returned by the forward and the backward functions and the forward activations saved for gradient computation. **Temporary Memory** Additionally, there are all kinds of temporary variables which get released once the calculation is done, but in the moment these could require additional memory and could push to OOM. Therefore, when coding it's crucial to think strategically about such temporary variables and sometimes to explicitly free those as soon as they are no longer needed. **Functionality-specific memory** Then, your software could have special memory needs. For example, when generating text using beam search, the software needs to maintain multiple copies of inputs and outputs. **`forward` vs `backward` Execution Speed** For convolutions and linear layers there are 2x flops in the backward compared to the forward, which generally translates into ~2x slower (sometimes more, because sizes in the backward tend to be more awkward). Activations are usually bandwidth-limited, and it’s typical for an activation to have to read more data in the backward than in the forward (e.g. activation forward reads once, writes once, activation backward reads twice, gradOutput and output of the forward, and writes once, gradInput). As you can see, there are potentially a few places where we could save GPU memory or speed up operations. Now that you understand what affects GPU utilization and computation speed, refer to the [Methods and tools for efficient training on a single GPU](perf_train_gpu_one) documentation page to learn about performance optimization techniques. " tflite.md," # Export to TFLite [TensorFlow Lite](https://www.tensorflow.org/lite/guide) is a lightweight framework for deploying machine learning models on resource-constrained devices, such as mobile phones, embedded systems, and Internet of Things (IoT) devices. TFLite is designed to optimize and run models efficiently on these devices with limited computational power, memory, and power consumption. A TensorFlow Lite model is represented in a special efficient portable format identified by the `.tflite` file extension. 🤗 Optimum offers functionality to export 🤗 Transformers models to TFLite through the `exporters.tflite` module. For the list of supported model architectures, please refer to [🤗 Optimum documentation](https://huggingface.co/docs/optimum/exporters/tflite/overview). 
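Before moving on, here is a small back-of-the-envelope sketch that turns the per-parameter costs above into gigabytes (assuming mixed-precision training with vanilla AdamW; activations and temporary buffers are excluded since they depend on batch size and sequence length):

```python
from transformers import AutoModelForSequenceClassification

# Per-parameter costs for mixed-precision training with AdamW, as listed above.
BYTES_PER_PARAM = {
    "weights (fp32 + fp16 copy)": 6,
    "AdamW optimizer states": 8,
    "gradients (fp32)": 4,
}

model = AutoModelForSequenceClassification.from_pretrained("bert-large-uncased")
num_params = sum(p.numel() for p in model.parameters())

print(f"Parameters: {num_params / 1e6:.0f}M")
for name, bytes_per_param in BYTES_PER_PARAM.items():
    print(f"{name}: {num_params * bytes_per_param / 1024**3:.2f} GB")
print(f"Total (excluding activations): {num_params * sum(BYTES_PER_PARAM.values()) / 1024**3:.2f} GB")
```

The total works out to the 18 bytes per parameter mentioned earlier, which is a useful lower bound when sizing a GPU for training.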
To export a model to TFLite, install the required dependencies: ```bash pip install optimum[exporters-tf] To check out all available arguments, refer to the [🤗 Optimum docs](https://huggingface.co/docs/optimum/main/en/exporters/tflite/usage_guides/export_a_model), or view help in command line: ```bash optimum-cli export tflite --help To export a model's checkpoint from the 🤗 Hub, for example, `bert-base-uncased`, run the following command: ```bash optimum-cli export tflite --model bert-base-uncased --sequence_length 128 bert_tflite/ You should see the logs indicating progress and showing where the resulting `model.tflite` is saved, like this: ```bash Validating TFLite model -[✓] TFLite model output names match reference model (logits) - Validating TFLite Model output ""logits"": -[✓] (1, 128, 30522) matches (1, 128, 30522) -[x] values not close enough, max diff: 5.817413330078125e-05 (atol: 1e-05) The TensorFlow Lite export succeeded with the warning: The maximum absolute difference between the output of the reference model and the TFLite exported model is not within the set tolerance 1e-05: - logits: max diff = 5.817413330078125e-05. The exported model was saved at: bert_tflite The example above illustrates exporting a checkpoint from 🤗 Hub. When exporting a local model, first make sure that you saved both the model's weights and tokenizer files in the same directory (`local_path`). When using CLI, pass the `local_path` to the `model` argument instead of the checkpoint name on 🤗 Hub. " performance.md," # Performance and Scalability Training large transformer models and deploying them to production present various challenges. During training, the model may require more GPU memory than available or exhibit slow training speed. In the deployment phase, the model can struggle to handle the required throughput in a production environment. This documentation aims to assist you in overcoming these challenges and finding the optimal setting for your use-case. The guides are divided into training and inference sections, as each comes with different challenges and solutions. Within each section you'll find separate guides for different hardware configurations, such as single GPU vs. multi-GPU for training or CPU vs. GPU for inference. Use this document as your starting point to navigate further to the methods that match your scenario. ## Training Training large transformer models efficiently requires an accelerator such as a GPU or TPU. The most common case is where you have a single GPU. The methods that you can apply to improve training efficiency on a single GPU extend to other setups such as multiple GPU. However, there are also techniques that are specific to multi-GPU or CPU training. We cover them in separate sections. * [Methods and tools for efficient training on a single GPU](perf_train_gpu_one): start here to learn common approaches that can help optimize GPU memory utilization, speed up the training, or both. * [Multi-GPU training section](perf_train_gpu_many): explore this section to learn about further optimization methods that apply to a multi-GPU settings, such as data, tensor, and pipeline parallelism. * [CPU training section](perf_train_cpu): learn about mixed precision training on CPU. * [Efficient Training on Multiple CPUs](perf_train_cpu_many): learn about distributed CPU training. * [Training on TPU with TensorFlow](perf_train_tpu_tf): if you are new to TPUs, refer to this section for an opinionated introduction to training on TPUs and using XLA. 
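As a sketch of the local-model case (assuming `local_path` is a directory name you choose; here we save a TensorFlow checkpoint of `bert-base-uncased` as an example), saving both the weights and the tokenizer into the same directory ensures the exporter finds everything it needs:

```python
from transformers import TFAutoModel, AutoTokenizer

# Save the model weights and the tokenizer files into the same local directory,
# then point the CLI at it instead of a Hub checkpoint name, e.g.:
#   optimum-cli export tflite --model local_path --sequence_length 128 bert_tflite/
local_path = "local_path"
model = TFAutoModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model.save_pretrained(local_path)
tokenizer.save_pretrained(local_path)
```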
* [Custom hardware for training](perf_hardware): find tips and tricks when building your own deep learning rig. * [Hyperparameter Search using Trainer API](hpo_train) ## Inference Efficient inference with large models in a production environment can be as challenging as training them. In the following sections we go through the steps to run inference on CPU and single/multi-GPU setups. * [Inference on a single CPU](perf_infer_cpu) * [Inference on a single GPU](perf_infer_gpu_one) * [Multi-GPU inference](perf_infer_gpu_one) * [XLA Integration for TensorFlow Models](tf_xla) ## Training and inference Here you'll find techniques, tips and tricks that apply whether you are training a model, or running inference with it. * [Instantiating a big model](big_models) * [Troubleshooting performance issues](debugging) ## Contribute This document is far from being complete and a lot more needs to be added, so if you have additions or corrections to make please don't hesitate to open a PR or if you aren't sure start an Issue and we can discuss the details there. When making contributions that A is better than B, please try to include a reproducible benchmark and/or a link to the source of that information (unless it comes directly from you). " tokenizer_summary.md," # Summary of the tokenizers [[open-in-colab]] On this page, we will have a closer look at tokenization. As we saw in [the preprocessing tutorial](preprocessing), tokenizing a text is splitting it into words or subwords, which then are converted to ids through a look-up table. Converting words or subwords to ids is straightforward, so in this summary, we will focus on splitting a text into words or subwords (i.e. tokenizing a text). More specifically, we will look at the three main types of tokenizers used in 🤗 Transformers: [Byte-Pair Encoding (BPE)](#byte-pair-encoding), [WordPiece](#wordpiece), and [SentencePiece](#sentencepiece), and show examples of which tokenizer type is used by which model. Note that on each model page, you can look at the documentation of the associated tokenizer to know which tokenizer type was used by the pretrained model. For instance, if we look at [`BertTokenizer`], we can see that the model uses [WordPiece](#wordpiece). ## Introduction Splitting a text into smaller chunks is a task that is harder than it looks, and there are multiple ways of doing so. For instance, let's look at the sentence `""Don't you love 🤗 Transformers? We sure do.""` A simple way of tokenizing this text is to split it by spaces, which would give: [""Don't"", ""you"", ""love"", ""🤗"", ""Transformers?"", ""We"", ""sure"", ""do.""] This is a sensible first step, but if we look at the tokens `""Transformers?""` and `""do.""`, we notice that the punctuation is attached to the words `""Transformer""` and `""do""`, which is suboptimal. We should take the punctuation into account so that a model does not have to learn a different representation of a word and every possible punctuation symbol that could follow it, which would explode the number of representations the model has to learn. Taking punctuation into account, tokenizing our exemplary text would give: [""Don"", ""'"", ""t"", ""you"", ""love"", ""🤗"", ""Transformers"", ""?"", ""We"", ""sure"", ""do"", "".""] Better. However, it is disadvantageous, how the tokenization dealt with the word `""Don't""`. `""Don't""` stands for `""do not""`, so it would be better tokenized as `[""Do"", ""n't""]`. 
This is where things start getting complicated, and part of the reason each model has its own tokenizer type. Depending on the rules we apply for tokenizing a text, a different tokenized output is generated for the same text. A pretrained model only performs properly if you feed it an input that was tokenized with the same rules that were used to tokenize its training data. [spaCy](https://spacy.io/) and [Moses](http://www.statmt.org/moses/?n=Development.GetStarted) are two popular rule-based tokenizers. Applying them on our example, *spaCy* and *Moses* would output something like: [""Do"", ""n't"", ""you"", ""love"", ""🤗"", ""Transformers"", ""?"", ""We"", ""sure"", ""do"", "".""] As can be seen space and punctuation tokenization, as well as rule-based tokenization, is used here. Space and punctuation tokenization and rule-based tokenization are both examples of word tokenization, which is loosely defined as splitting sentences into words. While it's the most intuitive way to split texts into smaller chunks, this tokenization method can lead to problems for massive text corpora. In this case, space and punctuation tokenization usually generates a very big vocabulary (the set of all unique words and tokens used). *E.g.*, [Transformer XL](model_doc/transformerxl) uses space and punctuation tokenization, resulting in a vocabulary size of 267,735! Such a big vocabulary size forces the model to have an enormous embedding matrix as the input and output layer, which causes both an increased memory and time complexity. In general, transformers models rarely have a vocabulary size greater than 50,000, especially if they are pretrained only on a single language. So if simple space and punctuation tokenization is unsatisfactory, why not simply tokenize on characters? While character tokenization is very simple and would greatly reduce memory and time complexity it makes it much harder for the model to learn meaningful input representations. *E.g.* learning a meaningful context-independent representation for the letter `""t""` is much harder than learning a context-independent representation for the word `""today""`. Therefore, character tokenization is often accompanied by a loss of performance. So to get the best of both worlds, transformers models use a hybrid between word-level and character-level tokenization called **subword** tokenization. ## Subword tokenization Subword tokenization algorithms rely on the principle that frequently used words should not be split into smaller subwords, but rare words should be decomposed into meaningful subwords. For instance `""annoyingly""` might be considered a rare word and could be decomposed into `""annoying""` and `""ly""`. Both `""annoying""` and `""ly""` as stand-alone subwords would appear more frequently while at the same time the meaning of `""annoyingly""` is kept by the composite meaning of `""annoying""` and `""ly""`. This is especially useful in agglutinative languages such as Turkish, where you can form (almost) arbitrarily long complex words by stringing together subwords. Subword tokenization allows the model to have a reasonable vocabulary size while being able to learn meaningful context-independent representations. In addition, subword tokenization enables the model to process words it has never seen before, by decomposing them into known subwords. 
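The vocabulary sizes quoted above are easy to verify yourself (a small check; the numbers below are the reported sizes for `bert-base-uncased` and `gpt2`):

```python
from transformers import AutoTokenizer

# Subword vocabularies stay far below the word-level extreme of Transformer XL (267,735 tokens).
for checkpoint in ["bert-base-uncased", "gpt2"]:
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    print(checkpoint, tokenizer.vocab_size)
# bert-base-uncased 30522
# gpt2 50257
```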
For instance, the [`~transformers.BertTokenizer`] tokenizes `""I have a new GPU!""` as follows: >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained(""bert-base-uncased"") >>> tokenizer.tokenize(""I have a new GPU!"") [""i"", ""have"", ""a"", ""new"", ""gp"", ""##u"", ""!""] Because we are considering the uncased model, the sentence was lowercased first. We can see that the words `[""i"", ""have"", ""a"", ""new""]` are present in the tokenizer's vocabulary, but the word `""gpu""` is not. Consequently, the tokenizer splits `""gpu""` into known subwords: `[""gp"" and ""##u""]`. `""##""` means that the rest of the token should be attached to the previous one, without space (for decoding or reversal of the tokenization). As another example, [`~transformers.XLNetTokenizer`] tokenizes our previously exemplary text as follows: >>> from transformers import XLNetTokenizer >>> tokenizer = XLNetTokenizer.from_pretrained(""xlnet-base-cased"") >>> tokenizer.tokenize(""Don't you love 🤗 Transformers? We sure do."") [""▁Don"", ""'"", ""t"", ""▁you"", ""▁love"", ""▁"", ""🤗"", ""▁"", ""Transform"", ""ers"", ""?"", ""▁We"", ""▁sure"", ""▁do"", "".""] We'll get back to the meaning of those `""▁""` when we look at [SentencePiece](#sentencepiece). As one can see, the rare word `""Transformers""` has been split into the more frequent subwords `""Transform""` and `""ers""`. Let's now look at how the different subword tokenization algorithms work. Note that all of those tokenization algorithms rely on some form of training which is usually done on the corpus the corresponding model will be trained on. ### Byte-Pair Encoding (BPE) Byte-Pair Encoding (BPE) was introduced in [Neural Machine Translation of Rare Words with Subword Units (Sennrich et al., 2015)](https://arxiv.org/abs/1508.07909). BPE relies on a pre-tokenizer that splits the training data into words. Pretokenization can be as simple as space tokenization, e.g. [GPT-2](model_doc/gpt2), [RoBERTa](model_doc/roberta). More advanced pre-tokenization include rule-based tokenization, e.g. [XLM](model_doc/xlm), [FlauBERT](model_doc/flaubert) which uses Moses for most languages, or [GPT](model_doc/gpt) which uses Spacy and ftfy, to count the frequency of each word in the training corpus. After pre-tokenization, a set of unique words has been created and the frequency with which each word occurred in the training data has been determined. Next, BPE creates a base vocabulary consisting of all symbols that occur in the set of unique words and learns merge rules to form a new symbol from two symbols of the base vocabulary. It does so until the vocabulary has attained the desired vocabulary size. Note that the desired vocabulary size is a hyperparameter to define before training the tokenizer. As an example, let's assume that after pre-tokenization, the following set of words including their frequency has been determined: (""hug"", 10), (""pug"", 5), (""pun"", 12), (""bun"", 4), (""hugs"", 5) Consequently, the base vocabulary is `[""b"", ""g"", ""h"", ""n"", ""p"", ""s"", ""u""]`. Splitting all words into symbols of the base vocabulary, we obtain: (""h"" ""u"" ""g"", 10), (""p"" ""u"" ""g"", 5), (""p"" ""u"" ""n"", 12), (""b"" ""u"" ""n"", 4), (""h"" ""u"" ""g"" ""s"", 5) BPE then counts the frequency of each possible symbol pair and picks the symbol pair that occurs most frequently. 
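This counting step can be written down in a few lines (a toy sketch over the frequencies above, not the actual 🤗 Tokenizers implementation):

```python
from collections import Counter

# Word frequencies after pre-tokenization, with every word split into base-vocabulary symbols.
corpus = {
    ("h", "u", "g"): 10,
    ("p", "u", "g"): 5,
    ("p", "u", "n"): 12,
    ("b", "u", "n"): 4,
    ("h", "u", "g", "s"): 5,
}

# Count every adjacent symbol pair, weighted by how often the word occurs.
pair_freqs = Counter()
for symbols, freq in corpus.items():
    for pair in zip(symbols, symbols[1:]):
        pair_freqs[pair] += freq

print(pair_freqs.most_common(1))
# [(('u', 'g'), 20)]
```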
In the example above `""h""` followed by `""u""` is present _10 + 5 = 15_ times (10 times in the 10 occurrences of `""hug""`, 5 times in the 5 occurrences of `""hugs""`). However, the most frequent symbol pair is `""u""` followed by `""g""`, occurring _10 + 5 + 5 = 20_ times in total. Thus, the first merge rule the tokenizer learns is to group all `""u""` symbols followed by a `""g""` symbol together. Next, `""ug""` is added to the vocabulary. The set of words then becomes (""h"" ""ug"", 10), (""p"" ""ug"", 5), (""p"" ""u"" ""n"", 12), (""b"" ""u"" ""n"", 4), (""h"" ""ug"" ""s"", 5) BPE then identifies the next most common symbol pair. It's `""u""` followed by `""n""`, which occurs 16 times. `""u""`, `""n""` is merged to `""un""` and added to the vocabulary. The next most frequent symbol pair is `""h""` followed by `""ug""`, occurring 15 times. Again the pair is merged and `""hug""` can be added to the vocabulary. At this stage, the vocabulary is `[""b"", ""g"", ""h"", ""n"", ""p"", ""s"", ""u"", ""ug"", ""un"", ""hug""]` and our set of unique words is represented as (""hug"", 10), (""p"" ""ug"", 5), (""p"" ""un"", 12), (""b"" ""un"", 4), (""hug"" ""s"", 5) Assuming, that the Byte-Pair Encoding training would stop at this point, the learned merge rules would then be applied to new words (as long as those new words do not include symbols that were not in the base vocabulary). For instance, the word `""bug""` would be tokenized to `[""b"", ""ug""]` but `""mug""` would be tokenized as `["""", ""ug""]` since the symbol `""m""` is not in the base vocabulary. In general, single letters such as `""m""` are not replaced by the `""""` symbol because the training data usually includes at least one occurrence of each letter, but it is likely to happen for very special characters like emojis. As mentioned earlier, the vocabulary size, *i.e.* the base vocabulary size + the number of merges, is a hyperparameter to choose. For instance [GPT](model_doc/gpt) has a vocabulary size of 40,478 since they have 478 base characters and chose to stop training after 40,000 merges. #### Byte-level BPE A base vocabulary that includes all possible base characters can be quite large if *e.g.* all unicode characters are considered as base characters. To have a better base vocabulary, [GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) uses bytes as the base vocabulary, which is a clever trick to force the base vocabulary to be of size 256 while ensuring that every base character is included in the vocabulary. With some additional rules to deal with punctuation, the GPT2's tokenizer can tokenize every text without the need for the symbol. [GPT-2](model_doc/gpt) has a vocabulary size of 50,257, which corresponds to the 256 bytes base tokens, a special end-of-text token and the symbols learned with 50,000 merges. ### WordPiece WordPiece is the subword tokenization algorithm used for [BERT](model_doc/bert), [DistilBERT](model_doc/distilbert), and [Electra](model_doc/electra). The algorithm was outlined in [Japanese and Korean Voice Search (Schuster et al., 2012)](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf) and is very similar to BPE. WordPiece first initializes the vocabulary to include every character present in the training data and progressively learns a given number of merge rules. 
In contrast to BPE, WordPiece does not choose the most frequent symbol pair, but the one that maximizes the likelihood of the training data once added to the vocabulary. So what does this mean exactly? Referring to the previous example, maximizing the likelihood of the training data is equivalent to finding the symbol pair, whose probability divided by the probabilities of its first symbol followed by its second symbol is the greatest among all symbol pairs. *E.g.* `""u""`, followed by `""g""` would have only been merged if the probability of `""ug""` divided by `""u""`, `""g""` would have been greater than for any other symbol pair. Intuitively, WordPiece is slightly different to BPE in that it evaluates what it _loses_ by merging two symbols to ensure it's _worth it_. ### Unigram Unigram is a subword tokenization algorithm introduced in [Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates (Kudo, 2018)](https://arxiv.org/pdf/1804.10959.pdf). In contrast to BPE or WordPiece, Unigram initializes its base vocabulary to a large number of symbols and progressively trims down each symbol to obtain a smaller vocabulary. The base vocabulary could for instance correspond to all pre-tokenized words and the most common substrings. Unigram is not used directly for any of the models in the transformers, but it's used in conjunction with [SentencePiece](#sentencepiece). At each training step, the Unigram algorithm defines a loss (often defined as the log-likelihood) over the training data given the current vocabulary and a unigram language model. Then, for each symbol in the vocabulary, the algorithm computes how much the overall loss would increase if the symbol was to be removed from the vocabulary. Unigram then removes p (with p usually being 10% or 20%) percent of the symbols whose loss increase is the lowest, *i.e.* those symbols that least affect the overall loss over the training data. This process is repeated until the vocabulary has reached the desired size. The Unigram algorithm always keeps the base characters so that any word can be tokenized. Because Unigram is not based on merge rules (in contrast to BPE and WordPiece), the algorithm has several ways of tokenizing new text after training. As an example, if a trained Unigram tokenizer exhibits the vocabulary: [""b"", ""g"", ""h"", ""n"", ""p"", ""s"", ""u"", ""ug"", ""un"", ""hug""], `""hugs""` could be tokenized both as `[""hug"", ""s""]`, `[""h"", ""ug"", ""s""]` or `[""h"", ""u"", ""g"", ""s""]`. So which one to choose? Unigram saves the probability of each token in the training corpus on top of saving the vocabulary so that the probability of each possible tokenization can be computed after training. The algorithm simply picks the most likely tokenization in practice, but also offers the possibility to sample a possible tokenization according to their probabilities. Those probabilities are defined by the loss the tokenizer is trained on. Assuming that the training data consists of the words \\(x_{1}, \dots, x_{N}\\) and that the set of all possible tokenizations for a word \\(x_{i}\\) is defined as \\(S(x_{i})\\), then the overall loss is defined as $$\mathcal{L} = -\sum_{i=1}^{N} \log \left ( \sum_{x \in S(x_{i})} p(x) \right )$$ ### SentencePiece All tokenization algorithms described so far have the same problem: It is assumed that the input text uses spaces to separate words. However, not all languages use spaces to separate words. 
One possible solution is to use language specific pre-tokenizers, *e.g.* [XLM](model_doc/xlm) uses a specific Chinese, Japanese, and Thai pre-tokenizer). To solve this problem more generally, [SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing (Kudo et al., 2018)](https://arxiv.org/pdf/1808.06226.pdf) treats the input as a raw input stream, thus including the space in the set of characters to use. It then uses the BPE or unigram algorithm to construct the appropriate vocabulary. The [`XLNetTokenizer`] uses SentencePiece for example, which is also why in the example earlier the `""▁""` character was included in the vocabulary. Decoding with SentencePiece is very easy since all tokens can just be concatenated and `""▁""` is replaced by a space. All transformers models in the library that use SentencePiece use it in combination with unigram. Examples of models using SentencePiece are [ALBERT](model_doc/albert), [XLNet](model_doc/xlnet), [Marian](model_doc/marian), and [T5](model_doc/t5). " perf_infer_cpu.md," # CPU inference With some optimizations, it is possible to efficiently run large model inference on a CPU. One of these optimization techniques involves compiling the PyTorch code into an intermediate format for high-performance environments like C++. The other technique fuses multiple operations into one kernel to reduce the overhead of running each operation separately. You'll learn how to use [BetterTransformer](https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/) for faster inference, and how to convert your PyTorch code to [TorchScript](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html). If you're using an Intel CPU, you can also use [graph optimizations](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features.html#graph-optimization) from [Intel Extension for PyTorch](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/index.html) to boost inference speed even more. Finally, learn how to use 🤗 Optimum to accelerate inference with ONNX Runtime or OpenVINO (if you're using an Intel CPU). ## BetterTransformer BetterTransformer accelerates inference with its fastpath (native PyTorch specialized implementation of Transformer functions) execution. The two optimizations in the fastpath execution are: 1. fusion, which combines multiple sequential operations into a single ""kernel"" to reduce the number of computation steps 2. skipping the inherent sparsity of padding tokens to avoid unnecessary computation with nested tensors BetterTransformer also converts all attention operations to use the more memory-efficient [scaled dot product attention](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention). BetterTransformer is not supported for all models. Check this [list](https://huggingface.co/docs/optimum/bettertransformer/overview#supported-models) to see if a model supports BetterTransformer. Before you start, make sure you have 🤗 Optimum [installed](https://huggingface.co/docs/optimum/installation). Enable BetterTransformer with the [`PreTrainedModel.to_bettertransformer`] method: from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained(""bigcode/starcoder"") model.to_bettertransformer() ## TorchScript TorchScript is an intermediate PyTorch model representation that can be run in production environments where performance is important. 
You can train a model in PyTorch and then export it to TorchScript to free the model from Python performance constraints. PyTorch [traces](https://pytorch.org/docs/stable/generated/torch.jit.trace.html) a model to return a [`ScriptFunction`] that is optimized with just-in-time compilation (JIT). Compared to the default eager mode, JIT mode in PyTorch typically yields better performance for inference using optimization techniques like operator fusion. For a gentle introduction to TorchScript, see the [Introduction to PyTorch TorchScript](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html) tutorial. With the [`Trainer`] class, you can enable JIT mode for CPU inference by setting the `--jit_mode_eval` flag: ```bash python run_qa.py \ --model_name_or_path csarron/bert-base-uncased-squad-v1 \ --dataset_name squad \ --do_eval \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/ \ --no_cuda \ --jit_mode_eval For PyTorch >= 1.14.0, JIT-mode could benefit any model for prediction and evaluaion since the dict input is supported in `jit.trace`. For PyTorch < 1.14.0, JIT-mode could benefit a model if its forward parameter order matches the tuple input order in `jit.trace`, such as a question-answering model. If the forward parameter order does not match the tuple input order in `jit.trace`, like a text classification model, `jit.trace` will fail and we are capturing this with the exception here to make it fallback. Logging is used to notify users. ## IPEX graph optimization Intel® Extension for PyTorch (IPEX) provides further optimizations in JIT mode for Intel CPUs, and we recommend combining it with TorchScript for even faster performance. The IPEX [graph optimization](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/graph_optimization.html) fuses operations like Multi-head attention, Concat Linear, Linear + Add, Linear + Gelu, Add + LayerNorm, and more. To take advantage of these graph optimizations, make sure you have IPEX [installed](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html): ```bash pip install intel_extension_for_pytorch Set the `--use_ipex` and `--jit_mode_eval` flags in the [`Trainer`] class to enable JIT mode with the graph optimizations: ```bash python run_qa.py \ --model_name_or_path csarron/bert-base-uncased-squad-v1 \ --dataset_name squad \ --do_eval \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/ \ --no_cuda \ --use_ipex \ --jit_mode_eval ## 🤗 Optimum Learn more details about using ORT with 🤗 Optimum in the [Optimum Inference with ONNX Runtime](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/models) guide. This section only provides a brief and simple example. ONNX Runtime (ORT) is a model accelerator that runs inference on CPUs by default. ORT is supported by 🤗 Optimum which can be used in 🤗 Transformers, without making too many changes to your code. You only need to replace the 🤗 Transformers `AutoClass` with its equivalent [`~optimum.onnxruntime.ORTModel`] for the task you're solving, and load a checkpoint in the ONNX format. 
For example, if you're running inference on a question answering task, load the [optimum/roberta-base-squad2](https://huggingface.co/optimum/roberta-base-squad2) checkpoint which contains a `model.onnx` file: from transformers import AutoTokenizer, pipeline from optimum.onnxruntime import ORTModelForQuestionAnswering model = ORTModelForQuestionAnswering.from_pretrained(""optimum/roberta-base-squad2"") tokenizer = AutoTokenizer.from_pretrained(""deepset/roberta-base-squad2"") onnx_qa = pipeline(""question-answering"", model=model, tokenizer=tokenizer) question = ""What's my name?"" context = ""My name is Philipp and I live in Nuremberg."" pred = onnx_qa(question, context) If you have an Intel CPU, take a look at 🤗 [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) which supports a variety of compression techniques (quantization, pruning, knowledge distillation) and tools for converting models to the [OpenVINO](https://huggingface.co/docs/optimum/intel/inference) format for higher performance inference. " create_a_model.md," # Create a custom architecture An [`AutoClass`](model_doc/auto) automatically infers the model architecture and downloads pretrained configuration and weights. Generally, we recommend using an `AutoClass` to produce checkpoint-agnostic code. But users who want more control over specific model parameters can create a custom 🤗 Transformers model from just a few base classes. This could be particularly useful for anyone who is interested in studying, training or experimenting with a 🤗 Transformers model. In this guide, dive deeper into creating a custom model without an `AutoClass`. Learn how to: - Load and customize a model configuration. - Create a model architecture. - Create a slow and fast tokenizer for text. - Create an image processor for vision tasks. - Create a feature extractor for audio tasks. - Create a processor for multimodal tasks. ## Configuration A [configuration](main_classes/configuration) refers to a model's specific attributes. Each model configuration has different attributes; for instance, all NLP models have the `hidden_size`, `num_attention_heads`, `num_hidden_layers` and `vocab_size` attributes in common. These attributes specify the number of attention heads or hidden layers to construct a model with. Get a closer look at [DistilBERT](model_doc/distilbert) by accessing [`DistilBertConfig`] to inspect it's attributes: >>> from transformers import DistilBertConfig >>> config = DistilBertConfig() >>> print(config) DistilBertConfig { ""activation"": ""gelu"", ""attention_dropout"": 0.1, ""dim"": 768, ""dropout"": 0.1, ""hidden_dim"": 3072, ""initializer_range"": 0.02, ""max_position_embeddings"": 512, ""model_type"": ""distilbert"", ""n_heads"": 12, ""n_layers"": 6, ""pad_token_id"": 0, ""qa_dropout"": 0.1, ""seq_classif_dropout"": 0.2, ""sinusoidal_pos_embds"": false, ""transformers_version"": ""4.16.2"", ""vocab_size"": 30522 } [`DistilBertConfig`] displays all the default attributes used to build a base [`DistilBertModel`]. All attributes are customizable, creating space for experimentation. For example, you can customize a default model to: - Try a different activation function with the `activation` parameter. - Use a higher dropout ratio for the attention probabilities with the `attention_dropout` parameter. 
>>> my_config = DistilBertConfig(activation=""relu"", attention_dropout=0.4) >>> print(my_config) DistilBertConfig { ""activation"": ""relu"", ""attention_dropout"": 0.4, ""dim"": 768, ""dropout"": 0.1, ""hidden_dim"": 3072, ""initializer_range"": 0.02, ""max_position_embeddings"": 512, ""model_type"": ""distilbert"", ""n_heads"": 12, ""n_layers"": 6, ""pad_token_id"": 0, ""qa_dropout"": 0.1, ""seq_classif_dropout"": 0.2, ""sinusoidal_pos_embds"": false, ""transformers_version"": ""4.16.2"", ""vocab_size"": 30522 } Pretrained model attributes can be modified in the [`~PretrainedConfig.from_pretrained`] function: >>> my_config = DistilBertConfig.from_pretrained(""distilbert-base-uncased"", activation=""relu"", attention_dropout=0.4) Once you are satisfied with your model configuration, you can save it with [`~PretrainedConfig.save_pretrained`]. Your configuration file is stored as a JSON file in the specified save directory: >>> my_config.save_pretrained(save_directory=""./your_model_save_path"") To reuse the configuration file, load it with [`~PretrainedConfig.from_pretrained`]: >>> my_config = DistilBertConfig.from_pretrained(""./your_model_save_path/config.json"") You can also save your configuration file as a dictionary or even just the difference between your custom configuration attributes and the default configuration attributes! See the [configuration](main_classes/configuration) documentation for more details. ## Model The next step is to create a [model](main_classes/models). The model - also loosely referred to as the architecture - defines what each layer is doing and what operations are happening. Attributes like `num_hidden_layers` from the configuration are used to define the architecture. Every model shares the base class [`PreTrainedModel`] and a few common methods like resizing input embeddings and pruning self-attention heads. In addition, all models are also either a [`torch.nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html), [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) or [`flax.linen.Module`](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html) subclass. This means models are compatible with each of their respective framework's usage. Load your custom configuration attributes into the model: >>> from transformers import DistilBertModel >>> my_config = DistilBertConfig.from_pretrained(""./your_model_save_path/config.json"") >>> model = DistilBertModel(my_config) This creates a model with random values instead of pretrained weights. You won't be able to use this model for anything useful yet until you train it. Training is a costly and time-consuming process. It is generally better to use a pretrained model to obtain better results faster, while using only a fraction of the resources required for training. Create a pretrained model with [`~PreTrainedModel.from_pretrained`]: >>> model = DistilBertModel.from_pretrained(""distilbert-base-uncased"") When you load pretrained weights, the default model configuration is automatically loaded if the model is provided by 🤗 Transformers. 
However, you can still replace - some or all of - the default model configuration attributes with your own if you'd like: >>> model = DistilBertModel.from_pretrained(""distilbert-base-uncased"", config=my_config) Load your custom configuration attributes into the model: >>> from transformers import TFDistilBertModel >>> my_config = DistilBertConfig.from_pretrained(""./your_model_save_path/my_config.json"") >>> tf_model = TFDistilBertModel(my_config) This creates a model with random values instead of pretrained weights. You won't be able to use this model for anything useful yet until you train it. Training is a costly and time-consuming process. It is generally better to use a pretrained model to obtain better results faster, while using only a fraction of the resources required for training. Create a pretrained model with [`~TFPreTrainedModel.from_pretrained`]: >>> tf_model = TFDistilBertModel.from_pretrained(""distilbert-base-uncased"") When you load pretrained weights, the default model configuration is automatically loaded if the model is provided by 🤗 Transformers. However, you can still replace - some or all of - the default model configuration attributes with your own if you'd like: >>> tf_model = TFDistilBertModel.from_pretrained(""distilbert-base-uncased"", config=my_config) ### Model heads At this point, you have a base DistilBERT model which outputs the *hidden states*. The hidden states are passed as inputs to a model head to produce the final output. 🤗 Transformers provides a different model head for each task as long as a model supports the task (i.e., you can't use DistilBERT for a sequence-to-sequence task like translation). For example, [`DistilBertForSequenceClassification`] is a base DistilBERT model with a sequence classification head. The sequence classification head is a linear layer on top of the pooled outputs. >>> from transformers import DistilBertForSequenceClassification >>> model = DistilBertForSequenceClassification.from_pretrained(""distilbert-base-uncased"") Easily reuse this checkpoint for another task by switching to a different model head. For a question answering task, you would use the [`DistilBertForQuestionAnswering`] model head. The question answering head is similar to the sequence classification head except it is a linear layer on top of the hidden states output. >>> from transformers import DistilBertForQuestionAnswering >>> model = DistilBertForQuestionAnswering.from_pretrained(""distilbert-base-uncased"") For example, [`TFDistilBertForSequenceClassification`] is a base DistilBERT model with a sequence classification head. The sequence classification head is a linear layer on top of the pooled outputs. >>> from transformers import TFDistilBertForSequenceClassification >>> tf_model = TFDistilBertForSequenceClassification.from_pretrained(""distilbert-base-uncased"") Easily reuse this checkpoint for another task by switching to a different model head. For a question answering task, you would use the [`TFDistilBertForQuestionAnswering`] model head. The question answering head is similar to the sequence classification head except it is a linear layer on top of the hidden states output. >>> from transformers import TFDistilBertForQuestionAnswering >>> tf_model = TFDistilBertForQuestionAnswering.from_pretrained(""distilbert-base-uncased"") ## Tokenizer The last base class you need before using a model for textual data is a [tokenizer](main_classes/tokenizer) to convert raw text to tensors. 
There are two types of tokenizers you can use with 🤗 Transformers: - [`PreTrainedTokenizer`]: a Python implementation of a tokenizer. - [`PreTrainedTokenizerFast`]: a tokenizer from our Rust-based [🤗 Tokenizer](https://huggingface.co/docs/tokenizers/python/latest/) library. This tokenizer type is significantly faster - especially during batch tokenization - due to its Rust implementation. The fast tokenizer also offers additional methods like *offset mapping* which maps tokens to their original words or characters. Both tokenizers support common methods such as encoding and decoding, adding new tokens, and managing special tokens. Not every model supports a fast tokenizer. Take a look at this [table](index#supported-frameworks) to check if a model has fast tokenizer support. If you trained your own tokenizer, you can create one from your *vocabulary* file: >>> from transformers import DistilBertTokenizer >>> my_tokenizer = DistilBertTokenizer(vocab_file=""my_vocab_file.txt"", do_lower_case=False, padding_side=""left"") It is important to remember the vocabulary from a custom tokenizer will be different from the vocabulary generated by a pretrained model's tokenizer. You need to use a pretrained model's vocabulary if you are using a pretrained model, otherwise the inputs won't make sense. Create a tokenizer with a pretrained model's vocabulary with the [`DistilBertTokenizer`] class: >>> from transformers import DistilBertTokenizer >>> slow_tokenizer = DistilBertTokenizer.from_pretrained(""distilbert-base-uncased"") Create a fast tokenizer with the [`DistilBertTokenizerFast`] class: >>> from transformers import DistilBertTokenizerFast >>> fast_tokenizer = DistilBertTokenizerFast.from_pretrained(""distilbert-base-uncased"") By default, [`AutoTokenizer`] will try to load a fast tokenizer. You can disable this behavior by setting `use_fast=False` in `from_pretrained`. ## Image Processor An image processor processes vision inputs. It inherits from the base [`~image_processing_utils.ImageProcessingMixin`] class. To use, create an image processor associated with the model you're using. For example, create a default [`ViTImageProcessor`] if you are using [ViT](model_doc/vit) for image classification: >>> from transformers import ViTImageProcessor >>> vit_extractor = ViTImageProcessor() >>> print(vit_extractor) ViTImageProcessor { ""do_normalize"": true, ""do_resize"": true, ""image_processor_type"": ""ViTImageProcessor"", ""image_mean"": [ 0.5, 0.5, 0.5 ], ""image_std"": [ 0.5, 0.5, 0.5 ], ""resample"": 2, ""size"": 224 } If you aren't looking for any customization, just use the `from_pretrained` method to load a model's default image processor parameters. Modify any of the [`ViTImageProcessor`] parameters to create your custom image processor: >>> from transformers import ViTImageProcessor >>> my_vit_extractor = ViTImageProcessor(resample=""PIL.Image.BOX"", do_normalize=False, image_mean=[0.3, 0.3, 0.3]) >>> print(my_vit_extractor) ViTImageProcessor { ""do_normalize"": false, ""do_resize"": true, ""image_processor_type"": ""ViTImageProcessor"", ""image_mean"": [ 0.3, 0.3, 0.3 ], ""image_std"": [ 0.5, 0.5, 0.5 ], ""resample"": ""PIL.Image.BOX"", ""size"": 224 } ## Feature Extractor A feature extractor processes audio inputs. It inherits from the base [`~feature_extraction_utils.FeatureExtractionMixin`] class, and may also inherit from the [`SequenceFeatureExtractor`] class for processing audio inputs. To use, create a feature extractor associated with the model you're using. 
For example, create a default [`Wav2Vec2FeatureExtractor`] if you are using [Wav2Vec2](model_doc/wav2vec2) for audio classification: >>> from transformers import Wav2Vec2FeatureExtractor >>> w2v2_extractor = Wav2Vec2FeatureExtractor() >>> print(w2v2_extractor) Wav2Vec2FeatureExtractor { ""do_normalize"": true, ""feature_extractor_type"": ""Wav2Vec2FeatureExtractor"", ""feature_size"": 1, ""padding_side"": ""right"", ""padding_value"": 0.0, ""return_attention_mask"": false, ""sampling_rate"": 16000 } If you aren't looking for any customization, just use the `from_pretrained` method to load a model's default feature extractor parameters. Modify any of the [`Wav2Vec2FeatureExtractor`] parameters to create your custom feature extractor: >>> from transformers import Wav2Vec2FeatureExtractor >>> w2v2_extractor = Wav2Vec2FeatureExtractor(sampling_rate=8000, do_normalize=False) >>> print(w2v2_extractor) Wav2Vec2FeatureExtractor { ""do_normalize"": false, ""feature_extractor_type"": ""Wav2Vec2FeatureExtractor"", ""feature_size"": 1, ""padding_side"": ""right"", ""padding_value"": 0.0, ""return_attention_mask"": false, ""sampling_rate"": 8000 } ## Processor For models that support multimodal tasks, 🤗 Transformers offers a processor class that conveniently wraps processing classes such as a feature extractor and a tokenizer into a single object. For example, let's use the [`Wav2Vec2Processor`] for an automatic speech recognition task (ASR). ASR transcribes audio to text, so you will need a feature extractor and a tokenizer. Create a feature extractor to handle the audio inputs: >>> from transformers import Wav2Vec2FeatureExtractor >>> feature_extractor = Wav2Vec2FeatureExtractor(padding_value=1.0, do_normalize=True) Create a tokenizer to handle the text inputs: >>> from transformers import Wav2Vec2CTCTokenizer >>> tokenizer = Wav2Vec2CTCTokenizer(vocab_file=""my_vocab_file.txt"") Combine the feature extractor and tokenizer in [`Wav2Vec2Processor`]: >>> from transformers import Wav2Vec2Processor >>> processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer) With two basic classes - configuration and model - and an additional preprocessing class (tokenizer, image processor, feature extractor, or processor), you can create any of the models supported by 🤗 Transformers. Each of these base classes are configurable, allowing you to use the specific attributes you want. You can easily setup a model for training or modify an existing pretrained model to fine-tune. " model_sharing.md," # Share a model The last two tutorials showed how you can fine-tune a model with PyTorch, Keras, and 🤗 Accelerate for distributed setups. The next step is to share your model with the community! At Hugging Face, we believe in openly sharing knowledge and resources to democratize artificial intelligence for everyone. We encourage you to consider sharing your model with the community to help others save time and resources. In this tutorial, you will learn two methods for sharing a trained or fine-tuned model on the [Model Hub](https://huggingface.co/models): - Programmatically push your files to the Hub. - Drag-and-drop your files to the Hub with the web interface. To share a model with the community, you need an account on [huggingface.co](https://huggingface.co/join). You can also join an existing organization or create a new one. ## Repository features Each repository on the Model Hub behaves like a typical GitHub repository. 
Our repositories offer versioning, commit history, and the ability to visualize differences. The Model Hub's built-in versioning is based on git and [git-lfs](https://git-lfs.github.com/). In other words, you can treat one model as one repository, enabling greater access control and scalability. Version control allows *revisions*, a method for pinning a specific version of a model with a commit hash, tag or branch. As a result, you can load a specific model version with the `revision` parameter: >>> model = AutoModel.from_pretrained( ""julien-c/EsperBERTo-small"", revision=""v2.0.1"" # tag name, or branch name, or commit hash ) Files are also easily edited in a repository, and you can view the commit history as well as the difference: ![vis_diff](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vis_diff.png) ## Setup Before sharing a model to the Hub, you will need your Hugging Face credentials. If you have access to a terminal, run the following command in the virtual environment where 🤗 Transformers is installed. This will store your access token in your Hugging Face cache folder (`~/.cache/` by default): ```bash huggingface-cli login If you are using a notebook like Jupyter or Colaboratory, make sure you have the [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) library installed. This library allows you to programmatically interact with the Hub. ```bash pip install huggingface_hub Then use `notebook_login` to sign-in to the Hub, and follow the link [here](https://huggingface.co/settings/token) to generate a token to login with: >>> from huggingface_hub import notebook_login >>> notebook_login() ## Convert a model for all frameworks To ensure your model can be used by someone working with a different framework, we recommend you convert and upload your model with both PyTorch and TensorFlow checkpoints. While users are still able to load your model from a different framework if you skip this step, it will be slower because 🤗 Transformers will need to convert the checkpoint on-the-fly. Converting a checkpoint for another framework is easy. Make sure you have PyTorch and TensorFlow installed (see [here](installation) for installation instructions), and then find the specific model for your task in the other framework. Specify `from_tf=True` to convert a checkpoint from TensorFlow to PyTorch: >>> pt_model = DistilBertForSequenceClassification.from_pretrained(""path/to/awesome-name-you-picked"", from_tf=True) >>> pt_model.save_pretrained(""path/to/awesome-name-you-picked"") Specify `from_pt=True` to convert a checkpoint from PyTorch to TensorFlow: >>> tf_model = TFDistilBertForSequenceClassification.from_pretrained(""path/to/awesome-name-you-picked"", from_pt=True) Then you can save your new TensorFlow model with its new checkpoint: >>> tf_model.save_pretrained(""path/to/awesome-name-you-picked"") If a model is available in Flax, you can also convert a checkpoint from PyTorch to Flax: >>> flax_model = FlaxDistilBertForSequenceClassification.from_pretrained( ""path/to/awesome-name-you-picked"", from_pt=True ) ## Push a model during training Sharing a model to the Hub is as simple as adding an extra parameter or callback. Remember from the [fine-tuning tutorial](training), the [`TrainingArguments`] class is where you specify hyperparameters and additional training options. One of these training options includes the ability to push a model directly to the Hub. 
Set `push_to_hub=True` in your [`TrainingArguments`]: >>> training_args = TrainingArguments(output_dir=""my-awesome-model"", push_to_hub=True) Pass your training arguments as usual to [`Trainer`]: >>> trainer = Trainer( model=model, args=training_args, train_dataset=small_train_dataset, eval_dataset=small_eval_dataset, compute_metrics=compute_metrics, ) After you fine-tune your model, call [`~transformers.Trainer.push_to_hub`] on [`Trainer`] to push the trained model to the Hub. 🤗 Transformers will even automatically add training hyperparameters, training results and framework versions to your model card! >>> trainer.push_to_hub() Share a model to the Hub with [`PushToHubCallback`]. In the [`PushToHubCallback`] function, add: - An output directory for your model. - A tokenizer. - The `hub_model_id`, which is your Hub username and model name. >>> from transformers import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( output_dir=""./your_model_save_path"", tokenizer=tokenizer, hub_model_id=""your-username/my-awesome-model"" ) Add the callback to [`fit`](https://keras.io/api/models/model_training_apis/), and 🤗 Transformers will push the trained model to the Hub: >>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3, callbacks=push_to_hub_callback) ## Use the `push_to_hub` function You can also call `push_to_hub` directly on your model to upload it to the Hub. Specify your model name in `push_to_hub`: >>> pt_model.push_to_hub(""my-awesome-model"") This creates a repository under your username with the model name `my-awesome-model`. Users can now load your model with the `from_pretrained` function: >>> from transformers import AutoModel >>> model = AutoModel.from_pretrained(""your_username/my-awesome-model"") If you belong to an organization and want to push your model under the organization name instead, just add it to the `repo_id`: >>> pt_model.push_to_hub(""my-awesome-org/my-awesome-model"") The `push_to_hub` function can also be used to add other files to a model repository. For example, add a tokenizer to a model repository: >>> tokenizer.push_to_hub(""my-awesome-model"") Or perhaps you'd like to add the TensorFlow version of your fine-tuned PyTorch model: >>> tf_model.push_to_hub(""my-awesome-model"") Now when you navigate to your Hugging Face profile, you should see your newly created model repository. Clicking on the **Files** tab will display all the files you've uploaded to the repository. For more details on how to create and upload files to a repository, refer to the Hub documentation [here](https://huggingface.co/docs/hub/how-to-upstream). ## Upload with the web interface Users who prefer a no-code approach are able to upload a model through the Hub's web interface. Visit [huggingface.co/new](https://huggingface.co/new) to create a new repository: ![new_model_repo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/new_model_repo.png) From here, add some information about your model: - Select the **owner** of the repository. This can be yourself or any of the organizations you belong to. - Pick a name for your model, which will also be the repository name. - Choose whether your model is public or private. - Specify the license usage for your model. Now click on the **Files** tab and click on the **Add file** button to upload a new file to your repository. Then drag-and-drop a file to upload and add a commit message. 
![upload_file](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/upload_file.png) ## Add a model card To make sure users understand your model's capabilities, limitations, potential biases and ethical considerations, please add a model card to your repository. The model card is defined in the `README.md` file. You can add a model card by: * Manually creating and uploading a `README.md` file. * Clicking on the **Edit model card** button in your model repository. Take a look at the DistilBert [model card](https://huggingface.co/distilbert-base-uncased) for a good example of the type of information a model card should include. For more details about other options you can control in the `README.md` file such as a model's carbon footprint or widget examples, refer to the documentation [here](https://huggingface.co/docs/hub/models-cards). " fast_tokenizers.md," # Use tokenizers from 🤗 Tokenizers The [`PreTrainedTokenizerFast`] depends on the [🤗 Tokenizers](https://huggingface.co/docs/tokenizers) library. The tokenizers obtained from the 🤗 Tokenizers library can be loaded very simply into 🤗 Transformers. Before getting in the specifics, let's first start by creating a dummy tokenizer in a few lines: thon >>> from tokenizers import Tokenizer >>> from tokenizers.models import BPE >>> from tokenizers.trainers import BpeTrainer >>> from tokenizers.pre_tokenizers import Whitespace >>> tokenizer = Tokenizer(BPE(unk_token=""[UNK]"")) >>> trainer = BpeTrainer(special_tokens=[""[UNK]"", ""[CLS]"", ""[SEP]"", ""[PAD]"", ""[MASK]""]) >>> tokenizer.pre_tokenizer = Whitespace() >>> files = [] >>> tokenizer.train(files, trainer) We now have a tokenizer trained on the files we defined. We can either continue using it in that runtime, or save it to a JSON file for future re-use. ## Loading directly from the tokenizer object Let's see how to leverage this tokenizer object in the 🤗 Transformers library. The [`PreTrainedTokenizerFast`] class allows for easy instantiation, by accepting the instantiated *tokenizer* object as an argument: thon >>> from transformers import PreTrainedTokenizerFast >>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer) This object can now be used with all the methods shared by the 🤗 Transformers tokenizers! Head to [the tokenizer page](main_classes/tokenizer) for more information. ## Loading from a JSON file In order to load a tokenizer from a JSON file, let's first start by saving our tokenizer: thon >>> tokenizer.save(""tokenizer.json"") The path to which we saved this file can be passed to the [`PreTrainedTokenizerFast`] initialization method using the `tokenizer_file` parameter: thon >>> from transformers import PreTrainedTokenizerFast >>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file=""tokenizer.json"") This object can now be used with all the methods shared by the 🤗 Transformers tokenizers! Head to [the tokenizer page](main_classes/tokenizer) for more information. " perf_hardware.md," # Custom hardware for training The hardware you use to run model training and inference can have a big effect on performance. For a deep dive into GPUs make sure to check out Tim Dettmer's excellent [blog post](https://timdettmers.com/2020/09/07/which-gpu-for-deep-learning/). Let's have a look at some practical advice for GPU setups. 
## GPU When you train bigger models you have essentially three options: - bigger GPUs - more GPUs - more CPU and NVMe (offloaded to by [DeepSpeed-Infinity](main_classes/deepspeed#nvme-support)) Let's start at the case where you have a single GPU. ### Power and Cooling If you bought an expensive high end GPU make sure you give it the correct power and sufficient cooling. **Power**: Some high end consumer GPU cards have 2 and sometimes 3 PCI-E 8-Pin power sockets. Make sure you have as many independent 12V PCI-E 8-Pin cables plugged into the card as there are sockets. Do not use the 2 splits at one end of the same cable (also known as pigtail cable). That is if you have 2 sockets on the GPU, you want 2 PCI-E 8-Pin cables going from your PSU to the card and not one that has 2 PCI-E 8-Pin connectors at the end! You won't get the full performance out of your card otherwise. Each PCI-E 8-Pin power cable needs to be plugged into a 12V rail on the PSU side and can supply up to 150W of power. Some other cards may use a PCI-E 12-Pin connectors, and these can deliver up to 500-600W of power. Low end cards may use 6-Pin connectors, which supply up to 75W of power. Additionally you want the high-end PSU that has stable voltage. Some lower quality ones may not give the card the stable voltage it needs to function at its peak. And of course the PSU needs to have enough unused Watts to power the card. **Cooling**: When a GPU gets overheated it will start throttling down and will not deliver full performance and it can even shutdown if it gets too hot. It's hard to tell the exact best temperature to strive for when a GPU is heavily loaded, but probably anything under +80C is good, but lower is better - perhaps 70-75C is an excellent range to be in. The throttling down is likely to start at around 84-90C. But other than throttling performance a prolonged very high temperature is likely to reduce the lifespan of a GPU. Next let's have a look at one of the most important aspects when having multiple GPUs: connectivity. ### Multi-GPU Connectivity If you use multiple GPUs the way cards are inter-connected can have a huge impact on the total training time. If the GPUs are on the same physical node, you can run: nvidia-smi topo -m and it will tell you how the GPUs are inter-connected. On a machine with dual-GPU and which are connected with NVLink, you will most likely see something like: GPU0 GPU1 CPU Affinity NUMA Affinity GPU0 X NV2 0-23 N/A GPU1 NV2 X 0-23 N/A on a different machine w/o NVLink we may see: GPU0 GPU1 CPU Affinity NUMA Affinity GPU0 X PHB 0-11 N/A GPU1 PHB X 0-11 N/A The report includes this legend: X = Self SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI) NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU) PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge) PIX = Connection traversing at most a single PCIe bridge NV# = Connection traversing a bonded set of # NVLinks So the first report `NV2` tells us the GPUs are interconnected with 2 NVLinks, and the second report `PHB` we have a typical consumer-level PCIe+Bridge setup. Check what type of connectivity you have on your setup. Some of these will make the communication between cards faster (e.g. NVLink), others slower (e.g. PHB). 
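If you'd rather check this programmatically, here is a minimal sketch (not part of the original guide) that shells out to `nvidia-smi topo -m` and prints the link type between each GPU pair. It assumes `nvidia-smi` is on your `PATH`, and since the matrix layout can vary between driver versions, treat the parsing as illustrative rather than robust:

```python
import subprocess

def gpu_link_types():
    """Report how each GPU pair is connected according to `nvidia-smi topo -m`."""
    out = subprocess.run(
        ["nvidia-smi", "topo", "-m"], capture_output=True, text=True, check=True
    ).stdout
    # Keep only the matrix rows, which start with "GPU<index>"
    rows = [line.split() for line in out.splitlines() if line.startswith("GPU")]
    gpus = [row[0] for row in rows]
    links = {}
    for i, row in enumerate(rows):
        for j, cell in enumerate(row[1 : len(gpus) + 1]):
            if j > i:  # upper triangle only; the diagonal is "X" (self)
                links[(gpus[i], gpus[j])] = cell  # e.g. "NV2", "PHB", "SYS"
    return links

if __name__ == "__main__":
    for (a, b), link in gpu_link_types().items():
        print(f"{a} <-> {b}: {link}")
```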
Depending on the type of scalability solution used, the connectivity speed could have a major or a minor impact. If the GPUs need to sync rarely, as in DDP, the impact of a slower connection will be less significant. If the GPUs need to send messages to each other often, as in ZeRO-DP, then faster connectivity becomes super important to achieve faster training. #### NVlink [NVLink](https://en.wikipedia.org/wiki/NVLink) is a wire-based serial multi-lane near-range communications link developed by Nvidia. Each new generation provides a faster bandwidth, e.g. here is a quote from [Nvidia Ampere GA102 GPU Architecture](https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/ampere/pdf/NVIDIA-ampere-GA102-GPU-Architecture-Whitepaper-V1.pdf): > Third-Generation NVLink® > GA102 GPUs utilize NVIDIA’s third-generation NVLink interface, which includes four x4 links, > with each link providing 14.0625 GB/sec bandwidth in each direction between two GPUs. Four > links provide 56.25 GB/sec bandwidth in each direction, and 112.5 GB/sec total bandwidth > between two GPUs. Two RTX 3090 GPUs can be connected together for SLI using NVLink. > (Note that 3-Way and 4-Way SLI configurations are not supported.) So the higher `X` you get in the report of `NVX` in the output of `nvidia-smi topo -m` the better. The generation will depend on your GPU architecture. Let's compare the execution of a gpt2 language model training over a small sample of wikitext. The results are: | NVlink | Time | | ----- | ---: | | Y | 101s | | N | 131s | You can see that NVLink completes the training ~23% faster. In the second benchmark we use `NCCL_P2P_DISABLE=1` to tell the GPUs not to use NVLink. Here is the full benchmark code and outputs: ```bash # DDP w/ NVLink rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch \ --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train \ --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69} # DDP w/o NVLink rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 NCCL_P2P_DISABLE=1 python -m torch.distributed.launch \ --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69} Hardware: 2x TITAN RTX 24GB each + NVlink with 2 NVLinks (`NV2` in `nvidia-smi topo -m`) Software: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0` " pr_checks.md," # Checks on a Pull Request When you open a pull request on 🤗 Transformers, a fair number of checks will be run to make sure the patch you are adding is not breaking anything existing. Those checks are of four types: - regular tests - documentation build - code and documentation style - general repository consistency In this document, we will take a stab at explaining what those various checks are and the reason behind them, as well as how to debug them locally if one of them fails on your PR. Note that, ideally, they require you to have a dev install: ```bash pip install transformers[dev] or for an editable install: ```bash pip install -e .[dev] inside the Transformers repo. 
Since the number of optional dependencies of Transformers has grown a lot, it's possible you don't manage to get all of them. If the dev install fails, make sure to install the Deep Learning framework you are working with (PyTorch, TensorFlow and/or Flax) then do ```bash pip install transformers[quality] or for an editable install: ```bash pip install -e .[quality] ## Tests All the jobs that begin with `ci/circleci: run_tests_` run parts of the Transformers testing suite. Each of those jobs focuses on a part of the library in a certain environment: for instance `ci/circleci: run_tests_pipelines_tf` runs the pipelines test in an environment where TensorFlow only is installed. Note that to avoid running tests when there is no real change in the modules they are testing, only part of the test suite is run each time: a utility is run to determine the differences in the library between before and after the PR (what GitHub shows you in the ""Files changes"" tab) and picks the tests impacted by that diff. That utility can be run locally with: ```bash python utils/tests_fetcher.py from the root of the Transformers repo. It will: 1. Check for each file in the diff if the changes are in the code or only in comments or docstrings. Only the files with real code changes are kept. 2. Build an internal map that gives for each file of the source code of the library all the files it recursively impacts. Module A is said to impact module B if module B imports module A. For the recursive impact, we need a chain of modules going from module A to module B in which each module imports the previous one. 3. Apply this map on the files gathered in step 1, which gives us the list of model files impacted by the PR. 4. Map each of those files to their corresponding test file(s) and get the list of tests to run. When executing the script locally, you should get the results of step 1, 3 and 4 printed and thus know which tests are run. The script will also create a file named `test_list.txt` which contains the list of tests to run, and you can run them locally with the following command: ```bash python -m pytest -n 8 --dist=loadfile -rA -s $(cat test_list.txt) Just in case anything slipped through the cracks, the full test suite is also run daily. ## Documentation build The `build_pr_documentation` job builds and generates a preview of the documentation to make sure everything looks okay once your PR is merged. A bot will add a link to preview the documentation in your PR. Any changes you make to the PR are automatically updated in the preview. If the documentation fails to build, click on **Details** next to the failed job to see where things went wrong. Often, the error is as simple as a missing file in the `toctree`. If you're interested in building or previewing the documentation locally, take a look at the [`README.md`](https://github.com/huggingface/transformers/tree/main/docs) in the docs folder. ## Code and documentation style Code formatting is applied to all the source files, the examples and the tests using `black` and `ruff`. We also have a custom tool taking care of the formatting of docstrings and `rst` files (`utils/style_doc.py`), as well as the order of the lazy imports performed in the Transformers `__init__.py` files (`utils/custom_init_isort.py`). All of this can be launched by executing ```bash make style The CI checks those have been applied inside the `ci/circleci: check_code_quality` check. 
It also runs `ruff`, that will have a basic look at your code and will complain if it finds an undefined variable, or one that is not used. To run that check locally, use ```bash make quality This can take a lot of time, so to run the same thing on only the files you modified in the current branch, run ```bash make fixup This last command will also run all the additional checks for the repository consistency. Let's have a look at them. ## Repository consistency This regroups all the tests to make sure your PR leaves the repository in a good state, and is performed by the `ci/circleci: check_repository_consistency` check. You can locally run that check by executing the following: ```bash make repo-consistency This checks that: - All objects added to the init are documented (performed by `utils/check_repo.py`) - All `__init__.py` files have the same content in their two sections (performed by `utils/check_inits.py`) - All code identified as a copy from another module is consistent with the original (performed by `utils/check_copies.py`) - All configuration classes have at least one valid checkpoint mentioned in their docstrings (performed by `utils/check_config_docstrings.py`) - All configuration classes only contain attributes that are used in corresponding modeling files (performed by `utils/check_config_attributes.py`) - The translations of the READMEs and the index of the doc have the same model list as the main README (performed by `utils/check_copies.py`) - The auto-generated tables in the documentation are up to date (performed by `utils/check_table.py`) - The library has all objects available even if not all optional dependencies are installed (performed by `utils/check_dummies.py`) - All docstrings properly document the arguments in the signature of the object (performed by `utils/check_docstrings.py`) Should this check fail, the first two items require manual fixing, the last four can be fixed automatically for you by running the command ```bash make fix-copies Additional checks concern PRs that add new models, mainly that: - All models added are in an Auto-mapping (performed by `utils/check_repo.py`) - All models are properly tested (performed by `utils/check_repo.py`) ### Check copies Since the Transformers library is very opinionated with respect to model code, and each model should fully be implemented in a single file without relying on other models, we have added a mechanism that checks whether a copy of the code of a layer of a given model stays consistent with the original. This way, when there is a bug fix, we can see all other impacted models and choose to trickle down the modification or break the copy. If a file is a full copy of another file, you should register it in the constant `FULL_COPIES` of `utils/check_copies.py`. This mechanism relies on comments of the form `# Copied from xxx`. The `xxx` should contain the whole path to the class of function which is being copied below. For instance, `RobertaSelfOutput` is a direct copy of the `BertSelfOutput` class, so you can see [here](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L289) it has a comment: # Copied from transformers.models.bert.modeling_bert.BertSelfOutput Note that instead of applying this to a whole class, you can apply it to the relevant methods that are copied from. 
For instance [here](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L598) you can see how `RobertaPreTrainedModel._init_weights` is copied from the same method in `BertPreTrainedModel` with the comment: # Copied from transformers.models.bert.modeling_bert.BertPreTrainedModel._init_weights Sometimes the copy is exactly the same except for names: for instance in `RobertaAttention`, we use `RobertaSelfAttention` instead of `BertSelfAttention` but other than that, the code is exactly the same. This is why `# Copied from` supports simple string replacements with the following syntax: `Copied from xxx with foo->bar`. This means the code is copied with all instances of `foo` being replaced by `bar`. You can see how it is used [here](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L304C1-L304C86) in `RobertaAttention` with the comment: # Copied from transformers.models.bert.modeling_bert.BertAttention with Bert->Roberta Note that there shouldn't be any spaces around the arrow (unless that space is part of the pattern to replace, of course). You can add several patterns separated by a comma. For instance, here `CamembertForMaskedLM` is a direct copy of `RobertaForMaskedLM` with two replacements: `Roberta` to `Camembert` and `ROBERTA` to `CAMEMBERT`. You can see [here](https://github.com/huggingface/transformers/blob/15082a9dc6950ecae63a0d3e5060b2fc7f15050a/src/transformers/models/camembert/modeling_camembert.py#L929) that this is done with the comment: # Copied from transformers.models.roberta.modeling_roberta.RobertaForMaskedLM with Roberta->Camembert, ROBERTA->CAMEMBERT If the order matters (because one of the replacements might conflict with a previous one), the replacements are executed from left to right. If the replacements change the formatting (if you replace a short name by a very long name for instance), the copy is checked after applying the auto-formatter. Another option, when the patterns are just different casings of the same replacement (with an uppercased and a lowercased variant), is to add the option `all-casing`. [Here](https://github.com/huggingface/transformers/blob/15082a9dc6950ecae63a0d3e5060b2fc7f15050a/src/transformers/models/mobilebert/modeling_mobilebert.py#L1237) is an example in `MobileBertForSequenceClassification` with the comment: # Copied from transformers.models.bert.modeling_bert.BertForSequenceClassification with Bert->MobileBert all-casing In this case, the code is copied from `BertForSequenceClassification` by replacing: - `Bert` by `MobileBert` (for instance when using `MobileBertModel` in the init) - `bert` by `mobilebert` (for instance when defining `self.mobilebert`) - `BERT` by `MOBILEBERT` (in the constant `MOBILEBERT_INPUTS_DOCSTRING`) " perf_train_tpu.md," # Training on TPUs Note: Most of the strategies introduced in the [single GPU section](perf_train_gpu_one) (such as mixed precision training or gradient accumulation) and [multi-GPU section](perf_train_gpu_many) are generic and apply to training models in general, so make sure to have a look at them before diving into this section. This document will be completed soon with information on how to train on TPUs.
" serialization.md," # Export to ONNX Deploying 🤗 Transformers models in production environments often requires, or can benefit from exporting the models into a serialized format that can be loaded and executed on specialized runtimes and hardware. 🤗 Optimum is an extension of Transformers that enables exporting models from PyTorch or TensorFlow to serialized formats such as ONNX and TFLite through its `exporters` module. 🤗 Optimum also provides a set of performance optimization tools to train and run models on targeted hardware with maximum efficiency. This guide demonstrates how you can export 🤗 Transformers models to ONNX with 🤗 Optimum, for the guide on exporting models to TFLite, please refer to the [Export to TFLite page](tflite). ## Export to ONNX [ONNX (Open Neural Network eXchange)](http://onnx.ai) is an open standard that defines a common set of operators and a common file format to represent deep learning models in a wide variety of frameworks, including PyTorch and TensorFlow. When a model is exported to the ONNX format, these operators are used to construct a computational graph (often called an _intermediate representation_) which represents the flow of data through the neural network. By exposing a graph with standardized operators and data types, ONNX makes it easy to switch between frameworks. For example, a model trained in PyTorch can be exported to ONNX format and then imported in TensorFlow (and vice versa). Once exported to ONNX format, a model can be: - optimized for inference via techniques such as [graph optimization](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/optimization) and [quantization](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/quantization). - run with ONNX Runtime via [`ORTModelForXXX` classes](https://huggingface.co/docs/optimum/onnxruntime/package_reference/modeling_ort), which follow the same `AutoModel` API as the one you are used to in 🤗 Transformers. - run with [optimized inference pipelines](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/pipelines), which has the same API as the [`pipeline`] function in 🤗 Transformers. 🤗 Optimum provides support for the ONNX export by leveraging configuration objects. These configuration objects come ready-made for a number of model architectures, and are designed to be easily extendable to other architectures. For the list of ready-made configurations, please refer to [🤗 Optimum documentation](https://huggingface.co/docs/optimum/exporters/onnx/overview). There are two ways to export a 🤗 Transformers model to ONNX, here we show both: - export with 🤗 Optimum via CLI. - export with 🤗 Optimum with `optimum.onnxruntime`. 
### Exporting a 🤗 Transformers model to ONNX with CLI To export a 🤗 Transformers model to ONNX, first install an extra dependency: ```bash pip install optimum[exporters] To check out all available arguments, refer to the [🤗 Optimum docs](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli), or view help in command line: ```bash optimum-cli export onnx --help To export a model's checkpoint from the 🤗 Hub, for example, `distilbert-base-uncased-distilled-squad`, run the following command: ```bash optimum-cli export onnx --model distilbert-base-uncased-distilled-squad distilbert_base_uncased_squad_onnx/ You should see the logs indicating progress and showing where the resulting `model.onnx` is saved, like this: ```bash Validating ONNX model distilbert_base_uncased_squad_onnx/model.onnx -[✓] ONNX model output names match reference model (start_logits, end_logits) - Validating ONNX Model output ""start_logits"": -[✓] (2, 16) matches (2, 16) -[✓] all values close (atol: 0.0001) - Validating ONNX Model output ""end_logits"": -[✓] (2, 16) matches (2, 16) -[✓] all values close (atol: 0.0001) The ONNX export succeeded and the exported model was saved at: distilbert_base_uncased_squad_onnx The example above illustrates exporting a checkpoint from 🤗 Hub. When exporting a local model, first make sure that you saved both the model's weights and tokenizer files in the same directory (`local_path`). When using CLI, pass the `local_path` to the `model` argument instead of the checkpoint name on 🤗 Hub and provide the `--task` argument. You can review the list of supported tasks in the [🤗 Optimum documentation](https://huggingface.co/docs/optimum/exporters/task_manager). If `task` argument is not provided, it will default to the model architecture without any task specific head. ```bash optimum-cli export onnx --model local_path --task question-answering distilbert_base_uncased_squad_onnx/ The resulting `model.onnx` file can then be run on one of the [many accelerators](https://onnx.ai/supported-tools.html#deployModel) that support the ONNX standard. For example, we can load and run the model with [ONNX Runtime](https://onnxruntime.ai/) as follows: thon >>> from transformers import AutoTokenizer >>> from optimum.onnxruntime import ORTModelForQuestionAnswering >>> tokenizer = AutoTokenizer.from_pretrained(""distilbert_base_uncased_squad_onnx"") >>> model = ORTModelForQuestionAnswering.from_pretrained(""distilbert_base_uncased_squad_onnx"") >>> inputs = tokenizer(""What am I using?"", ""Using DistilBERT with ONNX Runtime!"", return_tensors=""pt"") >>> outputs = model(**inputs) The process is identical for TensorFlow checkpoints on the Hub. 
For instance, here's how you would export a pure TensorFlow checkpoint from the [Keras organization](https://huggingface.co/keras-io): ```bash optimum-cli export onnx --model keras-io/transformers-qa distilbert_base_cased_squad_onnx/ ### Exporting a 🤗 Transformers model to ONNX with `optimum.onnxruntime` As an alternative to the CLI, you can export a 🤗 Transformers model to ONNX programmatically like so: ```python >>> from optimum.onnxruntime import ORTModelForSequenceClassification >>> from transformers import AutoTokenizer >>> model_checkpoint = ""distilbert_base_uncased_squad"" >>> save_directory = ""onnx/"" >>> # Load a model from transformers and export it to ONNX >>> ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, export=True) >>> tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) >>> # Save the ONNX model and tokenizer >>> ort_model.save_pretrained(save_directory) >>> tokenizer.save_pretrained(save_directory) ### Exporting a model for an unsupported architecture If you wish to contribute by adding support for a model that cannot be currently exported, you should first check if it is supported in [`optimum.exporters.onnx`](https://huggingface.co/docs/optimum/exporters/onnx/overview), and if it is not, [contribute to 🤗 Optimum](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/contribute) directly. ### Exporting a model with `transformers.onnx` `transformers.onnx` is no longer maintained, please export models with 🤗 Optimum as described above. This section will be removed in future versions. To export a 🤗 Transformers model to ONNX with `transformers.onnx`, install extra dependencies: ```bash pip install transformers[onnx] Use the `transformers.onnx` package as a Python module to export a checkpoint using a ready-made configuration: ```bash python -m transformers.onnx --model=distilbert-base-uncased onnx/ This exports an ONNX graph of the checkpoint defined by the `--model` argument. Pass any checkpoint on the 🤗 Hub or one that's stored locally. The resulting `model.onnx` file can then be run on one of the many accelerators that support the ONNX standard. For example, load and run the model with ONNX Runtime as follows: ```python >>> from transformers import AutoTokenizer >>> from onnxruntime import InferenceSession >>> tokenizer = AutoTokenizer.from_pretrained(""distilbert-base-uncased"") >>> session = InferenceSession(""onnx/model.onnx"") >>> # ONNX Runtime expects NumPy arrays as input >>> inputs = tokenizer(""Using DistilBERT with ONNX Runtime!"", return_tensors=""np"") >>> outputs = session.run(output_names=[""last_hidden_state""], input_feed=dict(inputs)) The required output names (like `[""last_hidden_state""]`) can be obtained by taking a look at the ONNX configuration of each model. For example, for DistilBERT we have: ```python >>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig >>> config = DistilBertConfig() >>> onnx_config = DistilBertOnnxConfig(config) >>> print(list(onnx_config.outputs.keys())) [""last_hidden_state""] The process is identical for TensorFlow checkpoints on the Hub. For example, export a pure TensorFlow checkpoint like so: ```bash python -m transformers.onnx --model=keras-io/transformers-qa onnx/ To export a model that's stored locally, save the model's weights and tokenizer files in the same directory (e.g.
`local-pt-checkpoint`), then export it to ONNX by pointing the `--model` argument of the `transformers.onnx` package to the desired directory: ```bash python -m transformers.onnx --model=local-pt-checkpoint onnx/ ```" installation.md," # Installation Install 🤗 Transformers for whichever deep learning library you're working with, setup your cache, and optionally configure 🤗 Transformers to run offline. 🤗 Transformers is tested on Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+, and Flax. Follow the installation instructions below for the deep learning library you are using: * [PyTorch](https://pytorch.org/get-started/locally/) installation instructions. * [TensorFlow 2.0](https://www.tensorflow.org/install/pip) installation instructions. * [Flax](https://flax.readthedocs.io/en/latest/) installation instructions. ## Install with pip You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, take a look at this [guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). A virtual environment makes it easier to manage different projects, and avoid compatibility issues between dependencies. Start by creating a virtual environment in your project directory: ```bash python -m venv .env Activate the virtual environment. On Linux and MacOs: ```bash source .env/bin/activate Activate Virtual environment on Windows ```bash .env/Scripts/activate Now you're ready to install 🤗 Transformers with the following command: ```bash pip install transformers For CPU-support only, you can conveniently install 🤗 Transformers and a deep learning library in one line. For example, install 🤗 Transformers and PyTorch with: ```bash pip install 'transformers[torch]' 🤗 Transformers and TensorFlow 2.0: ```bash pip install 'transformers[tf-cpu]' M1 / ARM Users You will need to install the following before installing TensorFLow 2.0 brew install cmake brew install pkg-config 🤗 Transformers and Flax: ```bash pip install 'transformers[flax]' Finally, check if 🤗 Transformers has been properly installed by running the following command. It will download a pretrained model: ```bash python -c ""from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"" Then print out the label and score: ```bash [{'label': 'POSITIVE', 'score': 0.9998704791069031}] ## Install from source Install 🤗 Transformers from source with the following command: ```bash pip install git+https://github.com/huggingface/transformers This command installs the bleeding edge `main` version rather than the latest `stable` version. The `main` version is useful for staying up-to-date with the latest developments. For instance, if a bug has been fixed since the last official release but a new release hasn't been rolled out yet. However, this means the `main` version may not always be stable. We strive to keep the `main` version operational, and most issues are usually resolved within a few hours or a day. If you run into a problem, please open an [Issue](https://github.com/huggingface/transformers/issues) so we can fix it even sooner! Check if 🤗 Transformers has been properly installed by running the following command: ```bash python -c ""from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))"" ## Editable install You will need an editable install if you'd like to: * Use the `main` version of the source code. * Contribute to 🤗 Transformers and need to test changes in the code. 
Clone the repository and install 🤗 Transformers with the following commands: ```bash git clone https://github.com/huggingface/transformers.git cd transformers pip install -e . These commands will link the folder you cloned the repository to and your Python library paths. Python will now look inside the folder you cloned to in addition to the normal library paths. For example, if your Python packages are typically installed in `~/anaconda3/envs/main/lib/python3.7/site-packages/`, Python will also search the folder you cloned to: `~/transformers/`. You must keep the `transformers` folder if you want to keep using the library. Now you can easily update your clone to the latest version of 🤗 Transformers with the following command: ```bash cd ~/transformers/ git pull Your Python environment will find the `main` version of 🤗 Transformers on the next run. ## Install with conda Install from the conda channel `huggingface`: ```bash conda install -c huggingface transformers ## Cache setup Pretrained models are downloaded and locally cached at: `~/.cache/huggingface/hub`. This is the default directory given by the shell environment variable `TRANSFORMERS_CACHE`. On Windows, the default directory is given by `C:\Users\username\.cache\huggingface\hub`. You can change the shell environment variables shown below - in order of priority - to specify a different cache directory: 1. Shell environment variable (default): `HUGGINGFACE_HUB_CACHE` or `TRANSFORMERS_CACHE`. 2. Shell environment variable: `HF_HOME`. 3. Shell environment variable: `XDG_CACHE_HOME` + `/huggingface`. 🤗 Transformers will use the shell environment variables `PYTORCH_TRANSFORMERS_CACHE` or `PYTORCH_PRETRAINED_BERT_CACHE` if you are coming from an earlier iteration of this library and have set those environment variables, unless you specify the shell environment variable `TRANSFORMERS_CACHE`. ## Offline mode Run 🤗 Transformers in a firewalled or offline environment with locally cached files by setting the environment variable `TRANSFORMERS_OFFLINE=1`. Add [🤗 Datasets](https://huggingface.co/docs/datasets/) to your offline training workflow with the environment variable `HF_DATASETS_OFFLINE=1`. ```bash HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \ python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en This script should run without hanging or waiting to timeout because it won't attempt to download the model from the Hub. You can also bypass loading a model from the Hub from each [`~PreTrainedModel.from_pretrained`] call with the [`local_files_only`] parameter. When set to `True`, only local files are loaded: from transformers import T5Model model = T5Model.from_pretrained(""./path/to/local/directory"", local_files_only=True) ### Fetch models and tokenizers to use offline Another option for using 🤗 Transformers offline is to download the files ahead of time, and then point to their local path when you need to use them offline. There are three ways to do this: * Download a file through the user interface on the [Model Hub](https://huggingface.co/models) by clicking on the ↓ icon. ![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/download-icon.png) * Use the [`PreTrainedModel.from_pretrained`] and [`PreTrainedModel.save_pretrained`] workflow: 1. 
Download your files ahead of time with [`PreTrainedModel.from_pretrained`]: >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> tokenizer = AutoTokenizer.from_pretrained(""bigscience/T0_3B"") >>> model = AutoModelForSeq2SeqLM.from_pretrained(""bigscience/T0_3B"") 2. Save your files to a specified directory with [`PreTrainedModel.save_pretrained`]: >>> tokenizer.save_pretrained(""./your/path/bigscience_t0"") >>> model.save_pretrained(""./your/path/bigscience_t0"") 3. Now when you're offline, reload your files with [`PreTrainedModel.from_pretrained`] from the specified directory: >>> tokenizer = AutoTokenizer.from_pretrained(""./your/path/bigscience_t0"") >>> model = AutoModel.from_pretrained(""./your/path/bigscience_t0"") * Programmatically download files with the [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub) library: 1. Install the `huggingface_hub` library in your virtual environment: ```bash python -m pip install huggingface_hub 2. Use the [`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub) function to download a file to a specific path. For example, the following command downloads the `config.json` file from the [T0](https://huggingface.co/bigscience/T0_3B) model to your desired path: >>> from huggingface_hub import hf_hub_download >>> hf_hub_download(repo_id=""bigscience/T0_3B"", filename=""config.json"", cache_dir=""./your/path/bigscience_t0"") Once your file is downloaded and locally cached, specify it's local path to load and use it: >>> from transformers import AutoConfig >>> config = AutoConfig.from_pretrained(""./your/path/bigscience_t0/config.json"") See the [How to download files from the Hub](https://huggingface.co/docs/hub/how-to-downstream) section for more details on downloading files stored on the Hub. " notebooks.md, perf_train_tpu_tf.md," # Training on TPU with TensorFlow If you don't need long explanations and just want TPU code samples to get started with, check out [our TPU example notebook!](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) ### What is a TPU? A TPU is a **Tensor Processing Unit.** They are hardware designed by Google, which are used to greatly speed up the tensor computations within neural networks, much like GPUs. They can be used for both network training and inference. They are generally accessed through Google’s cloud services, but small TPUs can also be accessed directly for free through Google Colab and Kaggle Kernels. Because [all TensorFlow models in 🤗 Transformers are Keras models](https://huggingface.co/blog/tensorflow-philosophy), most of the methods in this document are generally applicable to TPU training for any Keras model! However, there are a few points that are specific to the HuggingFace ecosystem (hug-o-system?) of Transformers and Datasets, and we’ll make sure to flag them up when we get to them. ### What kinds of TPU are available? New users are often very confused by the range of TPUs, and the different ways to access them. The first key distinction to understand is the difference between **TPU Nodes** and **TPU VMs.** When you use a **TPU Node**, you are effectively indirectly accessing a remote TPU. You will need a separate VM, which will initialize your network and data pipeline and then forward them to the remote node. 
When you use a TPU on Google Colab, you are accessing it in the **TPU Node** style. Using TPU Nodes can have some quite unexpected behaviour for people who aren’t used to them! In particular, because the TPU is located on a physically different system to the machine you’re running your Python code on, your data cannot be local to your machine - any data pipeline that loads from your machine’s internal storage will totally fail! Instead, data must be stored in Google Cloud Storage where your data pipeline can still access it, even when the pipeline is running on the remote TPU node. If you can fit all your data in memory as `np.ndarray` or `tf.Tensor`, then you can `fit()` on that data even when using Colab or a TPU Node, without needing to upload it to Google Cloud Storage. **🤗Specific Hugging Face Tip🤗:** The methods `Dataset.to_tf_dataset()` and its higher-level wrapper `model.prepare_tf_dataset()` , which you will see throughout our TF code examples, will both fail on a TPU Node. The reason for this is that even though they create a `tf.data.Dataset` it is not a “pure” `tf.data` pipeline and uses `tf.numpy_function` or `Dataset.from_generator()` to stream data from the underlying HuggingFace `Dataset`. This HuggingFace `Dataset` is backed by data that is on a local disc and which the remote TPU Node will not be able to read. The second way to access a TPU is via a **TPU VM.** When using a TPU VM, you connect directly to the machine that the TPU is attached to, much like training on a GPU VM. TPU VMs are generally easier to work with, particularly when it comes to your data pipeline. All of the above warnings do not apply to TPU VMs! This is an opinionated document, so here’s our opinion: **Avoid using TPU Node if possible.** It is more confusing and more difficult to debug than TPU VMs. It is also likely to be unsupported in future - Google’s latest TPU, TPUv4, can only be accessed as a TPU VM, which suggests that TPU Nodes are increasingly going to become a “legacy” access method. However, we understand that the only free TPU access is on Colab and Kaggle Kernels, which uses TPU Node - so we’ll try to explain how to handle it if you have to! Check the [TPU example notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) for code samples that explain this in more detail. ### What sizes of TPU are available? A single TPU (a v2-8/v3-8/v4-8) runs 8 replicas. TPUs exist in **pods** that can run hundreds or thousands of replicas simultaneously. When you use more than a single TPU but less than a whole pod (for example, a v3-32), your TPU fleet is referred to as a **pod slice.** When you access a free TPU via Colab, you generally get a single v2-8 TPU. ### I keep hearing about this XLA thing. What’s XLA, and how does it relate to TPUs? XLA is an optimizing compiler, used by both TensorFlow and JAX. In JAX it is the only compiler, whereas in TensorFlow it is optional (but mandatory on TPU!). The easiest way to enable it when training a Keras model is to pass the argument `jit_compile=True` to `model.compile()`. If you don’t get any errors and performance is good, that’s a great sign that you’re ready to move to TPU! Debugging on TPU is generally a bit harder than on CPU/GPU, so we recommend getting your code running on CPU/GPU with XLA first before trying it on TPU. You don’t have to train for long, of course - just for a few steps to make sure that your model and data pipeline are working like you expect them to. 
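As a quick illustration of the `jit_compile=True` smoke test described above, here is a minimal sketch (assuming a recent TensorFlow release) that trains a toy Keras model on random data with XLA enabled. The model and data are placeholders - swap in your own Transformers model and `tf.data` pipeline:

```python
import numpy as np
import tensorflow as tf

# Toy stand-in model and data - replace with your actual model and pipeline.
model = tf.keras.Sequential(
    [tf.keras.layers.Dense(64, activation="relu"), tf.keras.layers.Dense(1)]
)
x = np.random.rand(256, 16).astype("float32")
y = np.random.rand(256, 1).astype("float32")

# jit_compile=True asks Keras to compile the train step with XLA.
model.compile(optimizer="adam", loss="mse", jit_compile=True)

# A few steps are enough to confirm the code is XLA-compatible before moving to TPU.
model.fit(x, y, batch_size=32, epochs=1)
```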
XLA compiled code is usually faster - so even if you’re not planning to run on TPU, adding `jit_compile=True` can improve your performance. Be sure to note the caveats below about XLA compatibility, though! **Tip born of painful experience:** Although using `jit_compile=True` is a good way to get a speed boost and test if your CPU/GPU code is XLA-compatible, it can actually cause a lot of problems if you leave it in when actually training on TPU. XLA compilation will happen implicitly on TPU, so remember to remove that line before actually running your code on a TPU! ### How do I make my model XLA compatible? In many cases, your code is probably XLA-compatible already! However, there are a few things that work in normal TensorFlow that don’t work in XLA. We’ve distilled them into three core rules below: **🤗Specific HuggingFace Tip🤗:** We’ve put a lot of effort into rewriting our TensorFlow models and loss functions to be XLA-compatible. Our models and loss functions generally obey rule #1 and #2 by default, so you can skip over them if you’re using `transformers` models. Don’t forget about these rules when writing your own models and loss functions, though! #### XLA Rule #1: Your code cannot have “data-dependent conditionals” What that means is that any `if` statement cannot depend on values inside a `tf.Tensor`. For example, this code block cannot be compiled with XLA! thon if tf.reduce_sum(tensor) > 10: tensor = tensor / 2.0 This might seem very restrictive at first, but most neural net code doesn’t need to do this. You can often get around this restriction by using `tf.cond` (see the documentation [here](https://www.tensorflow.org/api_docs/python/tf/cond)) or by removing the conditional and finding a clever math trick with indicator variables instead, like so: thon sum_over_10 = tf.cast(tf.reduce_sum(tensor) > 10, tf.float32) tensor = tensor / (1.0 + sum_over_10) This code has exactly the same effect as the code above, but by avoiding a conditional, we ensure it will compile with XLA without problems! #### XLA Rule #2: Your code cannot have “data-dependent shapes” What this means is that the shape of all of the `tf.Tensor` objects in your code cannot depend on their values. For example, the function `tf.unique` cannot be compiled with XLA, because it returns a `tensor` containing one instance of each unique value in the input. The shape of this output will obviously be different depending on how repetitive the input `Tensor` was, and so XLA refuses to handle it! In general, most neural network code obeys rule #2 by default. However, there are a few common cases where it becomes a problem. One very common one is when you use **label masking**, setting your labels to a negative value to indicate that those positions should be ignored when computing the loss. If you look at NumPy or PyTorch loss functions that support label masking, you will often see code like this that uses [boolean indexing](https://numpy.org/doc/stable/user/basics.indexing.html#boolean-array-indexing): thon label_mask = labels >= 0 masked_outputs = outputs[label_mask] masked_labels = labels[label_mask] loss = compute_loss(masked_outputs, masked_labels) mean_loss = torch.mean(loss) This code is totally fine in NumPy or PyTorch, but it breaks in XLA! Why? 
Because the shape of `masked_outputs` and `masked_labels` depends on how many positions are masked - that makes it a **data-dependent shape.** However, just like for rule #1, we can often rewrite this code to yield exactly the same output without any data-dependent shapes. thon label_mask = tf.cast(labels >= 0, tf.float32) loss = compute_loss(outputs, labels) loss = loss * label_mask # Set negative label positions to 0 mean_loss = tf.reduce_sum(loss) / tf.reduce_sum(label_mask) Here, we avoid data-dependent shapes by computing the loss for every position, but zeroing out the masked positions in both the numerator and denominator when we calculate the mean, which yields exactly the same result as the first block while maintaining XLA compatibility. Note that we use the same trick as in rule #1 - converting a `tf.bool` to `tf.float32` and using it as an indicator variable. This is a really useful trick, so remember it if you need to convert your own code to XLA! #### XLA Rule #3: XLA will need to recompile your model for every different input shape it sees This is the big one. What this means is that if your input shapes are very variable, XLA will have to recompile your model over and over, which will create huge performance problems. This commonly arises in NLP models, where input texts have variable lengths after tokenization. In other modalities, static shapes are more common and this rule is much less of a problem. How can you get around rule #3? The key is **padding** - if you pad all your inputs to the same length, and then use an `attention_mask`, you can get the same results as you’d get from variable shapes, but without any XLA issues. However, excessive padding can cause severe slowdown too - if you pad all your samples to the maximum length in the whole dataset, you might end up with batches consisting endless padding tokens, which will waste a lot of compute and memory! There isn’t a perfect solution to this problem. However, you can try some tricks. One very useful trick is to **pad batches of samples up to a multiple of a number like 32 or 64 tokens.** This often only increases the number of tokens by a small amount, but it hugely reduces the number of unique input shapes, because every input shape now has to be a multiple of 32 or 64. Fewer unique input shapes means fewer XLA compilations! **🤗Specific HuggingFace Tip🤗:** Our tokenizers and data collators have methods that can help you here. You can use `padding=""max_length""` or `padding=""longest""` when calling tokenizers to get them to output padded data. Our tokenizers and data collators also have a `pad_to_multiple_of` argument that you can use to reduce the number of unique input shapes you see! ### How do I actually train my model on TPU? Once your training is XLA-compatible and (if you’re using TPU Node / Colab) your dataset has been prepared appropriately, running on TPU is surprisingly easy! All you really need to change in your code is to add a few lines to initialize your TPU, and to ensure that your model and dataset are created inside a `TPUStrategy` scope. Take a look at [our TPU example notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) to see this in action! 
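For reference, here is a minimal sketch of the initialization and `TPUStrategy` pattern described above. The `build_model()` call and the datasets are placeholders for your own code, and the exact resolver arguments depend on whether you are on Colab/Kaggle (TPU Node) or a TPU VM - the linked notebook remains the authoritative walkthrough:

```python
import tensorflow as tf

# Locate and initialize the TPU. On Colab/Kaggle the resolver picks up the remote
# TPU from the environment; on a TPU VM it can usually also be called without arguments.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Anything that creates variables (model, optimizer) must be created inside the scope.
with strategy.scope():
    model = build_model()  # placeholder: build and compile your Keras model here

# Create your (TPU-compatible) datasets after initialization, then training is just fit().
model.fit(train_dataset, validation_data=eval_dataset, epochs=3)
```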
### Summary There was a lot in here, so let’s summarize with a quick checklist you can follow when you want to get your model ready for TPU training: - Make sure your code follows the three rules of XLA - Compile your model with `jit_compile=True` on CPU/GPU and confirm that you can train it with XLA - Either load your dataset into memory or use a TPU-compatible dataset loading approach (see [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb)) - Migrate your code either to Colab (with accelerator set to “TPU”) or a TPU VM on Google Cloud - Add TPU initializer code (see [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb)) - Create your `TPUStrategy` and make sure dataset loading and model creation are inside the `strategy.scope()` (see [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb)) - Don’t forget to take `jit_compile=True` out again when you move to TPU! - 🙏🙏🙏🥺🥺🥺 - Call model.fit() - You did it!" llm_tutorial_optimization.md," # Optimizing LLMs for Speed and Memory [[open-in-colab]] Large Language Models (LLMs) such as GPT3/4, [Falcon](https://huggingface.co/tiiuae/falcon-40b), and [Llama](https://huggingface.co/meta-llama/Llama-2-70b-hf) are rapidly advancing in their ability to tackle human-centric tasks, establishing themselves as essential tools in modern knowledge-based industries. Deploying these models in real-world tasks remains challenging, however: - To exhibit near-human text understanding and generation capabilities, LLMs currently require to be composed of billions of parameters (see [Kaplan et al](https://arxiv.org/abs/2001.08361), [Wei et. al](https://arxiv.org/abs/2206.07682)). This consequently amplifies the memory demands for inference. - In many real-world tasks, LLMs need to be given extensive contextual information. This necessitates the model's capability to manage very long input sequences during inference. The crux of these challenges lies in augmenting the computational and memory capabilities of LLMs, especially when handling expansive input sequences. In this guide, we will go over the effective techniques for efficient LLM deployment: 1. **Lower Precision:** Research has shown that operating at reduced numerical precision, namely [8-bit and 4-bit](./main_classes/quantization.md) can achieve computational advantages without a considerable decline in model performance. 2. **Flash Attention:** Flash Attention is a variation of the attention algorithm that not only provides a more memory-efficient approach but also realizes increased efficiency due to optimized GPU memory utilization. 3. **Architectural Innovations:** Considering that LLMs are always deployed in the same way during inference, namely autoregressive text generation with a long input context, specialized model architectures have been proposed that allow for more efficient inference. The most important advancement in model architectures hereby are [Alibi](https://arxiv.org/abs/2108.12409), [Rotary embeddings](https://arxiv.org/abs/2104.09864), [Multi-Query Attention (MQA)](https://arxiv.org/abs/1911.02150) and [Grouped-Query-Attention (GQA)]((https://arxiv.org/abs/2305.13245)). Throughout this guide, we will offer an analysis of auto-regressive generation from a tensor's perspective. 
We delve into the pros and cons of adopting lower precision, provide a comprehensive exploration of the latest attention algorithms, and discuss improved LLM architectures. While doing so, we run practical examples showcasing each of the feature improvements. ## 1. Lower Precision Memory requirements of LLMs can be best understood by seeing the LLM as a set of weight matrices and vectors and the text inputs as a sequence of vectors. In the following, the definition *weights* will be used to signify all model weight matrices and vectors. At the time of writing this guide, LLMs consist of at least a couple billion parameters. Each parameter thereby is made of a decimal number, e.g. `4.5689` which is usually stored in either [float32](https://en.wikipedia.org/wiki/Single-precision_floating-point_format), [bfloat16](https://en.wikipedia.org/wiki/Bfloat16_floating-point_format), or [float16](https://en.wikipedia.org/wiki/Half-precision_floating-point_format) format. This allows us to easily compute the memory requirement to load the LLM into memory: > *Loading the weights of a model having X billion parameters requires roughly 4 * X GB of VRAM in float32 precision* Nowadays, models are however rarely trained in full float32 precision, but usually in bfloat16 precision or less frequently in float16 precision. Therefore the rule of thumb becomes: > *Loading the weights of a model having X billion parameters requires roughly 2 * X GB of VRAM in bfloat16/float16 precision* For shorter text inputs (less than 1024 tokens), the memory requirement for inference is very much dominated by the memory requirement to load the weights. Therefore, for now, let's assume that the memory requirement for inference is equal to the memory requirement to load the model into the GPU VRAM. To give some examples of how much VRAM it roughly takes to load a model in bfloat16: - **GPT3** requires 2 \* 175 GB = **350 GB** VRAM - [**Bloom**](https://huggingface.co/bigscience/bloom) requires 2 \* 176 GB = **352 GB** VRAM - [**Llama-2-70b**](https://huggingface.co/meta-llama/Llama-2-70b-hf) requires 2 \* 70 GB = **140 GB** VRAM - [**Falcon-40b**](https://huggingface.co/tiiuae/falcon-40b) requires 2 \* 40 GB = **80 GB** VRAM - [**MPT-30b**](https://huggingface.co/mosaicml/mpt-30b) requires 2 \* 30 GB = **60 GB** VRAM - [**bigcode/starcoder**](https://huggingface.co/bigcode/starcoder) requires 2 \* 15.5 = **31 GB** VRAM As of writing this document, the largest GPU chip on the market is the A100 & H100 offering 80GB of VRAM. Most of the models listed before require more than 80GB just to be loaded and therefore necessarily require [tensor parallelism](https://huggingface.co/docs/transformers/perf_train_gpu_many#tensor-parallelism) and/or [pipeline parallelism](https://huggingface.co/docs/transformers/perf_train_gpu_many#naive-model-parallelism-vertical-and-pipeline-parallelism). 🤗 Transformers does not support tensor parallelism out of the box as it requires the model architecture to be written in a specific way. If you're interested in writing models in a tensor-parallelism-friendly way, feel free to have a look at [the text-generation-inference library](https://github.com/huggingface/text-generation-inference/tree/main/server/text_generation_server/models/custom_modeling). Naive pipeline parallelism is supported out of the box. 
For this, simply load the model with `device_map=""auto""` which will automatically place the different layers on the available GPUs as explained [here](https://huggingface.co/docs/accelerate/v0.22.0/en/concept_guides/big_model_inference). Note, however, that while very effective, this naive pipeline parallelism does not tackle the issues of GPU idling. For this, more advanced pipeline parallelism is required as explained [here](https://huggingface.co/docs/transformers/en/perf_train_gpu_many#naive-model-parallelism-vertical-and-pipeline-parallelism). If you have access to an 8 x 80GB A100 node, you could load BLOOM as follows ```bash !pip install transformers accelerate bitsandbytes optimum ```python from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained(""bigscience/bloom"", device_map=""auto"", pad_token_id=0) By using `device_map=""auto""` the attention layers would be equally distributed over all available GPUs. In this guide, we will use [bigcode/octocoder](https://huggingface.co/bigcode/octocoder) as it can be run on a single 40 GB A100 GPU chip. Note that all memory and speed optimizations that we will apply going forward are equally applicable to models that require model or tensor parallelism. Since the model is loaded in bfloat16 precision, using our rule of thumb above, we would expect the memory requirement to run inference with `bigcode/octocoder` to be around 31 GB of VRAM. Let's give it a try. We first load the model and tokenizer and then pass both to Transformers' [pipeline](https://huggingface.co/docs/transformers/main_classes/pipelines) object. ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline import torch model = AutoModelForCausalLM.from_pretrained(""bigcode/octocoder"", torch_dtype=torch.bfloat16, device_map=""auto"", pad_token_id=0) tokenizer = AutoTokenizer.from_pretrained(""bigcode/octocoder"") pipe = pipeline(""text-generation"", model=model, tokenizer=tokenizer) ```python prompt = ""Question: Please write a function in Python that transforms bytes to Giga bytes.\n\nAnswer:"" result = pipe(prompt, max_new_tokens=60)[0][""generated_text""][len(prompt):] result **Output**: Here is a Python function that transforms bytes to Giga bytes:\n\n```python\ndef bytes_to_giga_bytes(bytes):\n return bytes / 1024 / 1024 / 1024\n```\n\nThis function takes a single Nice, we can now directly use the result to convert bytes into Gigabytes. ```python def bytes_to_giga_bytes(bytes): return bytes / 1024 / 1024 / 1024 Let's call [`torch.cuda.max_memory_allocated`](https://pytorch.org/docs/stable/generated/torch.cuda.max_memory_allocated.html) to measure the peak GPU memory allocation. ```python bytes_to_giga_bytes(torch.cuda.max_memory_allocated()) **Output**: ```bash 29.0260648727417 Close enough to our back-of-the-envelope computation! We can see the number is not exactly correct as going from bytes to kilobytes requires a multiplication of 1024 instead of 1000. Therefore the back-of-the-envelope formula can also be understood as an ""at most X GB"" computation. Note that if we had tried to run the model in full float32 precision, a whopping 64 GB of VRAM would have been required. > Almost all models are trained in bfloat16 nowadays, there is no reason to run the model in full float32 precision if [your GPU supports bfloat16](https://discuss.pytorch.org/t/bfloat16-native-support/117155/5).
Float32 won't give better inference results than the precision that was used to train the model. If you are unsure in which format the model weights are stored on the Hub, you can always look into the checkpoint's config under `"torch_dtype"`, *e.g.* [here](https://huggingface.co/meta-llama/Llama-2-7b-hf/blob/6fdf2e60f86ff2481f2241aaee459f85b5b0bbb9/config.json#L21). It is recommended to load the model in the same precision type as written in the config by passing `torch_dtype=...` to `from_pretrained`, except when the original type is float32, in which case one can use either `float16` or `bfloat16` for inference.

Let's define a `flush()` function to free all allocated memory so that we can accurately measure the peak allocated GPU memory.

```python
del pipe
del model

import gc
import torch

def flush():
    gc.collect()
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
```

Let's call it now for the next experiment.

```python
flush()
```

In recent versions of the Accelerate library, you can also use a utility method called `release_memory()`:

```python
from accelerate.utils import release_memory

# release_memory(model)
```

Now what if your GPU does not have 32 GB of VRAM? It has been found that model weights can be quantized to 8-bit or 4-bit without a significant loss in performance (see [Dettmers et al.](https://arxiv.org/abs/2208.07339)). Models can even be quantized to 3 or 2 bits with an acceptable loss in performance as shown in the recent [GPTQ paper](https://arxiv.org/abs/2210.17323) 🤯.

Without going into too many details, quantization schemes aim at reducing the precision of weights while trying to keep the model's inference results as accurate as possible (*i.e.* as close as possible to bfloat16). Note that quantization works especially well for text generation since all we care about is choosing the *set of most likely next tokens* and don't really care about the exact values of the next token *logit* distribution. All that matters is that the next token *logit* distribution stays roughly the same so that an `argmax` or `topk` operation gives the same results.

There are various quantization techniques, which we won't discuss in detail here, but in general, all quantization techniques work as follows:

1. Quantize all weights to the target precision
2. Load the quantized weights, and pass the input sequence of vectors in bfloat16 precision
3. Dynamically dequantize weights to bfloat16 to perform the computation with their input vectors in bfloat16 precision

In a nutshell, this means that *inputs-weight matrix* multiplications, with \\( X \\) being the *inputs*, \\( W \\) being a weight matrix and \\( Y \\) being the output:

$$ Y = X * W $$

are changed to

$$ Y = X * \text{dequantize}(W) $$

for every matrix multiplication. Dequantization and re-quantization are performed sequentially for all weight matrices as the inputs run through the network graph.

Therefore, inference time is often **not** reduced when using quantized weights, but rather increases. Enough theory, let's give it a try! To quantize the weights with Transformers, you need to make sure that the [`bitsandbytes`](https://github.com/TimDettmers/bitsandbytes) library is installed.

```bash
!pip install bitsandbytes
```

We can then load models in 8-bit quantization by simply adding a `load_in_8bit=True` flag to `from_pretrained`.

```python
model = AutoModelForCausalLM.from_pretrained("bigcode/octocoder", load_in_8bit=True, pad_token_id=0)
```

Now, let's run our example again and measure the memory usage.
```python
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

result = pipe(prompt, max_new_tokens=60)[0]["generated_text"][len(prompt):]
result
```

**Output**:
```
Here is a Python function that transforms bytes to Giga bytes:\n\n```python\ndef bytes_to_giga_bytes(bytes):\n    return bytes / 1024 / 1024 / 1024\n```\n\nThis function takes a single
```

Nice, we're getting the same result as before, so no loss in accuracy! Let's look at how much memory was used this time.

```python
bytes_to_giga_bytes(torch.cuda.max_memory_allocated())
```

**Output**:
```
15.219234466552734
```

Significantly less! We're down to just a bit over 15 GB and could therefore run this model on consumer GPUs like the 4090. We're seeing a very nice gain in memory efficiency and more or less no degradation to the model's output. However, we can also notice a slight slow-down during inference.

We delete the models and flush the memory again.

```python
del model
del pipe
```

```python
flush()
```

Let's see what peak GPU memory consumption 4-bit quantization gives. Quantizing the model to 4-bit can be done with the same API as before - this time by passing `load_in_4bit=True` instead of `load_in_8bit=True`.

```python
model = AutoModelForCausalLM.from_pretrained("bigcode/octocoder", load_in_4bit=True, low_cpu_mem_usage=True, pad_token_id=0)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

result = pipe(prompt, max_new_tokens=60)[0]["generated_text"][len(prompt):]
result
```

**Output**:
```
Here is a Python function that transforms bytes to Giga bytes:\n\n```\ndef bytes_to_gigabytes(bytes):\n    return bytes / 1024 / 1024 / 1024\n```\n\nThis function takes a single argument
```

We're almost seeing the same output text as before - just the `python` is missing just before the code snippet. Let's see how much memory was required.

```python
bytes_to_giga_bytes(torch.cuda.max_memory_allocated())
```

**Output**:
```
9.543574333190918
```

Just 9.5 GB! That's really not a lot for a >15 billion parameter model.

While we see very little degradation in accuracy for our model here, 4-bit quantization can in practice often lead to different results compared to 8-bit quantization or full `bfloat16` inference. It is up to the user to try it out.

Also note that inference here was again a bit slower compared to 8-bit quantization, which is due to the more aggressive quantization method used for 4-bit quantization, with \\( \text{quantize} \\) and \\( \text{dequantize} \\) taking longer during inference.

```python
del model
del pipe
```

```python
flush()
```

Overall, we saw that running OctoCoder in 8-bit precision reduced the required GPU VRAM from roughly 32 GB to only 15 GB, and running the model in 4-bit precision further reduces the required GPU VRAM to just a bit over 9 GB.

4-bit quantization allows the model to be run on GPUs such as RTX3090, V100, and T4 which are quite accessible for most people.

For more information on quantization and to see how one can quantize models to require even less GPU VRAM memory than 4-bit, we recommend looking into the [`AutoGPTQ`](https://huggingface.co/docs/transformers/main/en/main_classes/quantization#autogptq-integration%60) implementation.

> As a conclusion, it is important to remember that model quantization trades improved memory efficiency against accuracy and in some cases inference time. If GPU memory is not a constraint for your use case, there is often no need to look into quantization. However, many GPUs simply can't run LLMs without quantization methods and in this case, 4-bit and 8-bit quantization schemes are extremely useful tools.
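To build some intuition for what the quantization schemes above do under the hood, here is a deliberately simplified, hypothetical sketch of symmetric round-to-nearest 8-bit quantization of a single weight matrix. It is *not* what the `bitsandbytes` kernels actually implement (those additionally handle outliers and quantize per block), but it illustrates the quantize/dequantize pattern described earlier:

```python
import torch

def absmax_quantize(w: torch.Tensor):
    # One scale for the whole tensor; real schemes use per-row or per-block scales.
    scale = w.abs().max() / 127
    w_int8 = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return w_int8, scale

def dequantize(w_int8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Dynamically cast back to bfloat16 right before the matmul.
    return w_int8.to(torch.bfloat16) * scale

w = torch.randn(4096, 4096, dtype=torch.bfloat16)   # a toy weight matrix
x = torch.randn(1, 4096, dtype=torch.bfloat16)      # a toy input vector

w_int8, scale = absmax_quantize(w)
y_full = x @ w.T                                    # Y = X * W
y_quant = x @ dequantize(w_int8, scale).T           # Y = X * dequantize(W)

print((y_full - y_quant).abs().max())               # small quantization error
print(w_int8.element_size() / w.element_size())     # 0.5 -> half the weight memory vs. bfloat16
```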
For more in-detail usage information, we strongly recommend taking a look at the [Transformers Quantization Docs](https://huggingface.co/docs/transformers/main_classes/quantization#general-usage). Next, let's look into how we can improve computational and memory efficiency by using better algorithms and an improved model architecture. ## 2. Flash Attention Today's top-performing LLMs share more or less the same fundamental architecture that consists of feed-forward layers, activation layers, layer normalization layers, and most crucially, self-attention layers. Self-attention layers are central to Large Language Models (LLMs) in that they enable the model to understand the contextual relationships between input tokens. However, the peak GPU memory consumption for self-attention layers grows *quadratically* both in compute and memory complexity with number of input tokens (also called *sequence length*) that we denote in the following by \\( N \\) . While this is not really noticeable for shorter input sequences (of up to 1000 input tokens), it becomes a serious problem for longer input sequences (at around 16000 input tokens). Let's take a closer look. The formula to compute the output \\( \mathbf{O} \\) of a self-attention layer for an input \\( \mathbf{X} \\) of length \\( N \\) is: $$ \textbf{O} = \text{Attn}(\mathbf{X}) = \mathbf{V} \times \text{Softmax}(\mathbf{QK}^T) \text{ with } \mathbf{Q} = \mathbf{W}_q \mathbf{X}, \mathbf{V} = \mathbf{W}_v \mathbf{X}, \mathbf{K} = \mathbf{W}_k \mathbf{X} $$ \\( \mathbf{X} = (\mathbf{x}_1, \mathbf{x}_{N}) \\) is thereby the input sequence to the attention layer. The projections \\( \mathbf{Q} \\) and \\( \mathbf{K} \\) will each consist of \\( N \\) vectors resulting in the \\( \mathbf{QK}^T \\) being of size \\( N^2 \\) . LLMs usually have multiple attention heads, thus doing multiple self-attention computations in parallel. Assuming, the LLM has 40 attention heads and runs in bfloat16 precision, we can calculate the memory requirement to store the \\( \mathbf{QK^T} \\) matrices to be \\( 40 * 2 * N^2 \\) bytes. For \\( N=1000 \\) only around 50 MB of VRAM are needed, however, for \\( N=16000 \\) we would need 19 GB of VRAM, and for \\( N=100,000 \\) we would need almost 1TB just to store the \\( \mathbf{QK}^T \\) matrices. Long story short, the default self-attention algorithm quickly becomes prohibitively memory-expensive for large input contexts. As LLMs improve in text comprehension and generation, they are applied to increasingly complex tasks. While models once handled the translation or summarization of a few sentences, they now manage entire pages, demanding the capability to process extensive input lengths. How can we get rid of the exorbitant memory requirements for large input lengths? We need a new way to compute the self-attention mechanism that gets rid of the \\( QK^T \\) matrix. [Tri Dao et al.](https://arxiv.org/abs/2205.14135) developed exactly such a new algorithm and called it **Flash Attention**. 
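Before looking at how Flash Attention works, we can make the quadratic memory cost concrete with a short back-of-the-envelope calculation, using the hypothetical 40-head, bfloat16 setup from above:

```python
def qk_scores_bytes(seq_len: int, num_heads: int = 40, bytes_per_value: int = 2) -> int:
    # Memory needed to materialize the N x N attention score matrices for all heads.
    return num_heads * bytes_per_value * seq_len**2

for n in (1_000, 16_000, 100_000):
    print(f"N={n:>7}: {qk_scores_bytes(n) / 1024**3:8.2f} GiB")
```

This lands in the same ballpark as the numbers quoted above: roughly 19 GB at \\( N=16000 \\) and close to 1 TB at \\( N=100,000 \\).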
In a nutshell, Flash Attention breaks the \\(\mathbf{V} \times \text{Softmax}(\mathbf{QK}^T\\)) computation apart and instead computes smaller chunks of the output by iterating over multiple softmax computation steps: $$ \textbf{O}_i \leftarrow s^a_{ij} * \textbf{O}_i + s^b_{ij} * \mathbf{V}_{j} \times \text{Softmax}(\mathbf{QK}^T_{i,j}) \text{ for multiple } i, j \text{ iterations} $$ with \\( s^a_{ij} \\) and \\( s^b_{ij} \\) being some softmax normalization statistics that need to be recomputed for every \\( i \\) and \\( j \\) . Please note that the whole Flash Attention is a bit more complex and is greatly simplified here as going in too much depth is out of scope for this guide. The reader is invited to take a look at the well-written [Flash Attention paper](https://arxiv.org/abs/2205.14135) for more details. The main takeaway here is: > By keeping track of softmax normalization statistics and by using some smart mathematics, Flash Attention gives **numerical identical** outputs compared to the default self-attention layer at a memory cost that only increases linearly with \\( N \\) . Looking at the formula, one would intuitively say that Flash Attention must be much slower compared to the default self-attention formula as more computation needs to be done. Indeed Flash Attention requires more FLOPs compared to normal attention as the softmax normalization statistics have to constantly be recomputed (see [paper](https://arxiv.org/abs/2205.14135) for more details if interested) > However, Flash Attention is much faster in inference compared to default attention which comes from its ability to significantly reduce the demands on the slower, high-bandwidth memory of the GPU (VRAM), focusing instead on the faster on-chip memory (SRAM). Essentially, Flash Attention makes sure that all intermediate write and read operations can be done using the fast *on-chip* SRAM memory instead of having to access the slower VRAM memory to compute the output vector \\( \mathbf{O} \\) . In practice, there is currently absolutely no reason to **not** use Flash Attention if available. The algorithm gives mathematically the same outputs, and is both faster and more memory-efficient. Let's look at a practical example. Our OctoCoder model now gets a significantly longer input prompt which includes a so-called *system prompt*. System prompts are used to steer the LLM into a better assistant that is tailored to the users' task. In the following, we use a system prompt that will make OctoCoder a better coding assistant. thon system_prompt = """"""Below are a series of dialogues between various people and an AI technical assistant. The assistant tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble but knowledgeable. The assistant is happy to help with code questions and will do their best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical really does its best, and doesn't let caution get too much in the way of being useful. The Starcoder models are a series of 15.5B parameter models trained on 80+ programming languages from The Stack (v1.2) (excluding opt-out requests). The model uses Multi Query Attention, was trained using the Fill-in-the-Middle objective, and with 8,192 tokens context window for a trillion tokens of heavily deduplicated data. 
----- Question: Write a function that takes two lists and returns a list that has alternating elements from each input list. Answer: Sure. Here is a function that does that. def alternating(list1, list2): results = [] for i in range(len(list1)): results.append(list1[i]) results.append(list2[i]) return results Question: Can you write some test cases for this function? Answer: Sure, here are some tests. assert alternating([10, 20, 30], [1, 2, 3]) == [10, 1, 20, 2, 30, 3] assert alternating([True, False], [4, 5]) == [True, 4, False, 5] assert alternating([], []) == [] Question: Modify the function so that it returns all input elements when the lists have uneven length. The elements from the longer list should be at the end. Answer: Here is the modified function. def alternating(list1, list2): results = [] for i in range(min(len(list1), len(list2))): results.append(list1[i]) results.append(list2[i]) if len(list1) > len(list2): results.extend(list1[i+1:]) else: results.extend(list2[i+1:]) return results ----- """""" For demonstration purposes, we duplicate the system prompt by ten so that the input length is long enough to observe Flash Attention's memory savings. We append the original text prompt `""Question: Please write a function in Python that transforms bytes to Giga bytes.\n\nAnswer: Here""` thon long_prompt = 10 * system_prompt + prompt We instantiate our model again in bfloat16 precision. thon model = AutoModelForCausalLM.from_pretrained(""bigcode/octocoder"", torch_dtype=torch.bfloat16, device_map=""auto"") tokenizer = AutoTokenizer.from_pretrained(""bigcode/octocoder"") pipe = pipeline(""text-generation"", model=model, tokenizer=tokenizer) Let's now run the model just like before *without Flash Attention* and measure the peak GPU memory requirement and inference time. thon import time start_time = time.time() result = pipe(long_prompt, max_new_tokens=60)[0][""generated_text""][len(long_prompt):] print(f""Generated in {time.time() - start_time} seconds."") result **Output**: Generated in 10.96854019165039 seconds. Sure. Here is a function that does that.\n\ndef bytes_to_giga(bytes):\n return bytes / 1024 / 1024 / 1024\n\nAnswer: Sure. Here is a function that does that.\n\ndef ` We're getting the same output as before, however this time, the model repeats the answer multiple times until it's 60 tokens cut-off. This is not surprising as we've repeated the system prompt ten times for demonstration purposes and thus cued the model to repeat itself. **Note** that the system prompt should not be repeated ten times in real-world applications - one time is enough! Let's measure the peak GPU memory requirement. thon bytes_to_giga_bytes(torch.cuda.max_memory_allocated()) **Output**: ```bash 37.668193340301514 As we can see the peak GPU memory requirement is now significantly higher than in the beginning, which is largely due to the longer input sequence. Also the generation takes a little over a minute now. We call `flush()` to free GPU memory for our next experiment. thon flush() For comparison, let's run the same function, but enable Flash Attention instead. To do so, we convert the model to [BetterTransformers](https://huggingface.co/docs/optimum/bettertransformer/overview) and by doing so enabling PyTorch's [SDPA self-attention](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention) which in turn is based on Flash Attention. 
thon model.to_bettertransformer() Now we run the exact same code snippet as before and under the hood Transformers will make use of Flash Attention. start_time = time.time() with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False): result = pipe(long_prompt, max_new_tokens=60)[0][""generated_text""][len(long_prompt):] print(f""Generated in {time.time() - start_time} seconds."") result **Output**: Generated in 3.0211617946624756 seconds. Sure. Here is a function that does that.\n\ndef bytes_to_giga(bytes):\n return bytes / 1024 / 1024 / 1024\n\nAnswer: Sure. Here is a function that does that.\n\ndef We're getting the exact same result as before, but can observe a very significant speed-up thanks to Flash Attention. Let's measure the memory consumption one last time. thon bytes_to_giga_bytes(torch.cuda.max_memory_allocated()) **Output**: 32.617331981658936 And we're almost back to our original 29GB peak GPU memory from the beginning. We can observe that we only use roughly 100MB more GPU memory when passing a very long input sequence with Flash Attention compared to passing a short input sequence as done in the beginning. flush() For more information on how to use Flash Attention, please have a look at [this doc page](https://huggingface.co/docs/transformers/en/perf_infer_gpu_one#flashattention-2). ## 3. Architectural Innovations So far we have looked into improving computational and memory efficiency by: - Casting the weights to a lower precision format - Replacing the self-attention algorithm with a more memory- and compute efficient version Let's now look into how we can change the architecture of an LLM so that it is most effective and efficient for task that require long text inputs, *e.g.*: - Retrieval augmented Questions Answering, - Summarization, - Chat Note that *chat* not only requires the LLM to handle long text inputs, but it also necessitates that the LLM is able to efficiently handle the back-and-forth dialogue between user and assistant (such as ChatGPT). Once trained, the fundamental LLM architecture is difficult to change, so it is important to make considerations about the LLM's tasks beforehand and accordingly optimize the model's architecture. There are two important components of the model architecture that quickly become memory and/or performance bottlenecks for large input sequences. - The positional embeddings - The key-value cache Let's go over each component in more detail ### 3.1 Improving positional embeddings of LLMs Self-attention puts each token in relation to each other's tokens. As an example, the \\( \text{Softmax}(\mathbf{QK}^T) \\) matrix of the text input sequence *""Hello"", ""I"", ""love"", ""you""* could look as follows: ![](/blog/assets/163_optimize_llm/self_attn_tokens.png) Each word token is given a probability mass at which it attends all other word tokens and, therefore is put into relation with all other word tokens. E.g. the word *""love""* attends to the word *""Hello""* with 5%, to *""I""* with 30%, and to itself with 65%. A LLM based on self-attention, but without position embeddings would have great difficulties in understanding the positions of the text inputs to each other. This is because the probability score computed by \\( \mathbf{QK}^T \\) relates each word token to each other word token in \\( O(1) \\) computations regardless of their relative positional distance to each other. 
Therefore, for the LLM without position embeddings each token appears to have the same distance to all other tokens, *e.g.* differentiating between *""Hello I love you""* and *""You love I hello""* would be very challenging. For the LLM to understand sentence order, an additional *cue* is needed and is usually applied in the form of *positional encodings* (or also called *positional embeddings*). Positional encodings, encode the position of each token into a numerical presentation that the LLM can leverage to better understand sentence order. The authors of the [*Attention Is All You Need*](https://arxiv.org/abs/1706.03762) paper introduced sinusoidal positional embeddings \\( \mathbf{P} = \mathbf{p}_1, \ldots, \mathbf{p}_N \\) . where each vector \\( \mathbf{p}_i \\) is computed as a sinusoidal function of its position \\( i \\) . The positional encodings are then simply added to the input sequence vectors \\( \mathbf{\hat{X}} = \mathbf{\hat{x}}_1, \ldots, \mathbf{\hat{x}}_N \\) = \\( \mathbf{x}_1 + \mathbf{p}_1, \ldots, \mathbf{x}_N + \mathbf{p}_N \\) thereby cueing the model to better learn sentence order. Instead of using fixed position embeddings, others (such as [Devlin et al.](https://arxiv.org/abs/1810.04805)) used learned positional encodings for which the positional embeddings \\( \mathbf{P} \\) are learned during training. Sinusoidal and learned position embeddings used to be the predominant methods to encode sentence order into LLMs, but a couple of problems related to these positional encodings were found: 1. Sinusoidal and learned position embeddings are both absolute positional embeddings, *i.e.* encoding a unique embedding for each position id: \\( 0, \ldots, N \\) . As shown by [Huang et al.](https://arxiv.org/abs/2009.13658) and [Su et al.](https://arxiv.org/abs/2104.09864), absolute positional embeddings lead to poor LLM performance for long text inputs. For long text inputs, it is advantageous if the model learns the relative positional distance input tokens have to each other instead of their absolute position. 2. When using learned position embeddings, the LLM has to be trained on a fixed input length \\( N \\), which makes it difficult to extrapolate to an input length longer than what it was trained on. Recently, relative positional embeddings that can tackle the above mentioned problems have become more popular, most notably: - [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) - [ALiBi](https://arxiv.org/abs/2108.12409) Both *RoPE* and *ALiBi* argue that it's best to cue the LLM about sentence order directly in the self-attention algorithm as it's there that word tokens are put into relation with each other. More specifically, sentence order should be cued by modifying the \\( \mathbf{QK}^T \\) computation. Without going into too many details, *RoPE* notes that positional information can be encoded into query-key pairs, *e.g.* \\( \mathbf{q}_i \\) and \\( \mathbf{x}_j \\) by rotating each vector by an angle \\( \theta * i \\) and \\( \theta * j \\) respectively with \\( i, j \\) describing each vectors sentence position: $$ \mathbf{\hat{q}}_i^T \mathbf{\hat{x}}_j = \mathbf{{q}}_i^T \mathbf{R}_{\theta, i -j} \mathbf{{x}}_j. $$ \\( \mathbf{R}_{\theta, i - j} \\) thereby represents a rotational matrix. \\( \theta \\) is *not* learned during training, but instead set to a pre-defined value that depends on the maximum input sequence length during training. 
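To make the rotation idea more tangible, here is a tiny, hypothetical sketch using a single two-dimensional feature pair and one arbitrarily chosen base angle (real RoPE implementations rotate many feature pairs, each with its own frequency derived from \\( \theta \\)):

```python
import torch

def rotate(x: torch.Tensor, position: int, base_angle: float = 0.1) -> torch.Tensor:
    # Rotate a 2-dimensional feature pair by an angle proportional to its position.
    angle = torch.tensor(position * base_angle)
    cos, sin = torch.cos(angle), torch.sin(angle)
    return torch.stack([x[0] * cos - x[1] * sin, x[0] * sin + x[1] * cos])

q, k = torch.randn(2), torch.randn(2)

# The score between the rotated query and key at positions i=7 and j=3 ...
score = rotate(q, 7) @ rotate(k, 3)
# ... is unchanged when both positions are shifted by the same offset.
shifted_score = rotate(q, 7 + 5) @ rotate(k, 3 + 5)
print(torch.allclose(score, shifted_score))  # True
```

Shifting both positions by the same offset leaves the score untouched, which is exactly the relative-position property summarized next.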
> By doing so, the propability score between \\( \mathbf{q}_i \\) and \\( \mathbf{q}_j \\) is only affected if \\( i \ne j \\) and solely depends on the relative distance \\( i - j \\) regardless of each vector's specific positions \\( i \\) and \\( j \\) . *RoPE* is used in multiple of today's most important LLMs, such as: - [**Falcon**](https://huggingface.co/tiiuae/falcon-40b) - [**Llama**](https://arxiv.org/abs/2302.13971) - [**PaLM**](https://arxiv.org/abs/2204.02311) As an alternative, *ALiBi* proposes a much simpler relative position encoding scheme. The relative distance that input tokens have to each other is added as a negative integer scaled by a pre-defined value `m` to each query-key entry of the \\( \mathbf{QK}^T \\) matrix right before the softmax computation. ![](/blog/assets/163_optimize_llm/alibi.png) As shown in the [ALiBi](https://arxiv.org/abs/2108.12409) paper, this simple relative positional encoding allows the model to retain a high performance even at very long text input sequences. *ALiBi* is used in multiple of today's most important LLMs, such as: - [**MPT**](https://huggingface.co/mosaicml/mpt-30b) - [**BLOOM**](https://huggingface.co/bigscience/bloom) Both *RoPE* and *ALiBi* position encodings can extrapolate to input lengths not seen during training whereas it has been shown that extrapolation works much better out-of-the-box for *ALiBi* as compared to *RoPE*. For ALiBi, one simply increases the values of the lower triangular position matrix to match the length of the input sequence. For *RoPE*, keeping the same \\( \theta \\) that was used during training leads to poor results when passing text inputs much longer than those seen during training, *c.f* [Press et al.](https://arxiv.org/abs/2108.12409). However, the community has found a couple of effective tricks that adapt \\( \theta \\), thereby allowing *RoPE* position embeddings to work well for extrapolated text input sequences (see [here](https://github.com/huggingface/transformers/pull/24653)). > Both RoPE and ALiBi are relative positional embeddings that are *not* learned during training, but instead are based on the following intuitions: - Positional cues about the text inputs should be given directly to the \\( QK^T \\) matrix of the self-attention layer - The LLM should be incentivized to learn a constant *relative* distance positional encodings have to each other - The further text input tokens are from each other, the lower the probability of their query-value probability. Both RoPE and ALiBi lower the query-key probability of tokens far away from each other. RoPE by decreasing their vector product by increasing the angle between the query-key vectors. ALiBi by adding large negative numbers to the vector product In conclusion, LLMs that are intended to be deployed in tasks that require handling large text inputs are better trained with relative positional embeddings, such as RoPE and ALiBi. Also note that even if an LLM with RoPE and ALiBi has been trained only on a fixed length of say \\( N_1 = 2048 \\) it can still be used in practice with text inputs much larger than \\( N_1 \\), like \\( N_2 = 8192 > N_1 \\) by extrapolating the positional embeddings. ### 3.2 The key-value cache Auto-regressive text generation with LLMs works by iteratively putting in an input sequence, sampling the next token, appending the next token to the input sequence, and continuing to do so until the LLM produces a token that signifies that the generation has finished. 
Please have a look at [Transformer's Generate Text Tutorial](https://huggingface.co/docs/transformers/llm_tutorial#generate-text) to get a more visual explanation of how auto-regressive generation works. Let's run a quick code snippet to show how auto-regressive works in practice. We will simply take the most likely next token via `torch.argmax`. thon input_ids = tokenizer(prompt, return_tensors=""pt"")[""input_ids""].to(""cuda"") for _ in range(5): next_logits = model(input_ids)[""logits""][:, -1:] next_token_id = torch.argmax(next_logits,dim=-1) input_ids = torch.cat([input_ids, next_token_id], dim=-1) print(""shape of input_ids"", input_ids.shape) generated_text = tokenizer.batch_decode(input_ids[:, -5:]) generated_text **Output**: shape of input_ids torch.Size([1, 21]) shape of input_ids torch.Size([1, 22]) shape of input_ids torch.Size([1, 23]) shape of input_ids torch.Size([1, 24]) shape of input_ids torch.Size([1, 25]) [' Here is a Python function'] As we can see every time we increase the text input tokens by the just sampled token. With very few exceptions, LLMs are trained using the [causal language modeling objective](https://huggingface.co/docs/transformers/tasks/language_modeling#causal-language-modeling) and therefore mask the upper triangle matrix of the attention score - this is why in the two diagrams above the attention scores are left blank (*a.k.a* have 0 probability). For a quick recap on causal language modeling you can refer to the [*Illustrated Self Attention blog*](https://jalammar.github.io/illustrated-gpt2/#part-2-illustrated-self-attention). As a consequence, tokens *never* depend on previous tokens, more specifically the \\( \mathbf{q}_i \\) vector is never put in relation with any key, values vectors \\( \mathbf{k}_j, \mathbf{v}_j \\) if \\( j > i \\) . Instead \\( \mathbf{q}_i \\) only attends to previous key-value vectors \\( \mathbf{k}_{m < i}, \mathbf{v}_{m < i} \text{ , for } m \in \{0, \ldots i - 1\} \\). In order to reduce unnecessary computation, one can therefore cache each layer's key-value vectors for all previous timesteps. In the following, we will tell the LLM to make use of the key-value cache by retrieving and forwarding it for each forward pass. In Transformers, we can retrieve the key-value cache by passing the `use_cache` flag to the `forward` call and can then pass it with the current token. 
```python
past_key_values = None  # past_key_values is the key-value cache
generated_tokens = []
next_token_id = tokenizer(prompt, return_tensors="pt")["input_ids"].to("cuda")

for _ in range(5):
    next_logits, past_key_values = model(next_token_id, past_key_values=past_key_values, use_cache=True).to_tuple()
    next_logits = next_logits[:, -1:]
    next_token_id = torch.argmax(next_logits, dim=-1)

    print("shape of input_ids", next_token_id.shape)
    print("length of key-value cache", len(past_key_values[0][0]))  # past_key_values are of shape [num_layers, 0 for k, 1 for v, batch_size, length, hidden_dim]
    generated_tokens.append(next_token_id.item())

generated_text = tokenizer.batch_decode(generated_tokens)
generated_text
```

**Output**:
```
shape of input_ids torch.Size([1, 1])
length of key-value cache 20
shape of input_ids torch.Size([1, 1])
length of key-value cache 21
shape of input_ids torch.Size([1, 1])
length of key-value cache 22
shape of input_ids torch.Size([1, 1])
length of key-value cache 23
shape of input_ids torch.Size([1, 1])
length of key-value cache 24
[' Here', ' is', ' a', ' Python', ' function']
```

As one can see, when using the key-value cache the text input tokens are *not* increased in length, but remain a single input vector. The length of the key-value cache, on the other hand, is increased by one at every decoding step.

> Making use of the key-value cache means that the \\( \mathbf{QK}^T \\) matrix is essentially reduced to \\( \mathbf{q}_c\mathbf{K}^T \\) with \\( \mathbf{q}_c \\) being the query projection of the currently passed input token, which is *always* just a single vector.

Using the key-value cache has two advantages:
- Significant increase in computational efficiency as fewer computations are performed compared to computing the full \\( \mathbf{QK}^T \\) matrix. This leads to an increase in inference speed.
- The maximum required memory is not increased quadratically with the number of generated tokens, but only increases linearly.

> One should *always* make use of the key-value cache as it leads to identical results and a significant speed-up for longer input sequences. Transformers has the key-value cache enabled by default when making use of the text pipeline or the [`generate` method](https://huggingface.co/docs/transformers/main_classes/text_generation).

Note that, despite our advice to use key-value caches, your LLM output may be slightly different when you use them. This is a property of the matrix multiplication kernels themselves -- you can read more about it [here](https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535).

#### 3.2.1 Multi-round conversation

The key-value cache is especially useful for applications such as chat where multiple passes of auto-regressive decoding are required. Let's look at an example.

```
User: How many people live in France?
Assistant: Roughly 75 million people live in France
User: And how many are in Germany?
Assistant: Germany has ca. 81 million inhabitants
```

In this chat, the LLM runs auto-regressive decoding twice:
1. The first time, the key-value cache is empty and the input prompt is `"User: How many people live in France?"` and the model auto-regressively generates the text `"Roughly 75 million people live in France"` while increasing the key-value cache at every decoding step.
2. The second time the input prompt is `"User: How many people live in France? \n Assistant: Roughly 75 million people live in France \n User: And how many in Germany?"`.
Thanks to the cache, all key-value vectors for the first two sentences are already computed. Therefore the input prompt only consists of `""User: And how many in Germany?""`. While processing the shortened input prompt, it's computed key-value vectors are concatenated to the key-value cache of the first decoding. The second Assistant's answer `""Germany has ca. 81 million inhabitants""` is then auto-regressively generated with the key-value cache consisting of encoded key-value vectors of `""User: How many people live in France? \n Assistant: Roughly 75 million people live in France \n User: And how many are in Germany?""`. Two things should be noted here: 1. Keeping all the context is crucial for LLMs deployed in chat so that the LLM understands all the previous context of the conversation. E.g. for the example above the LLM needs to understand that the user refers to the population when asking `""And how many are in Germany""`. 2. The key-value cache is extremely useful for chat as it allows us to continuously grow the encoded chat history instead of having to re-encode the chat history again from scratch (as e.g. would be the case when using an encoder-decoder architecture). In `transformers`, a `generate` call will return `past_key_values` when `return_dict_in_generate=True` is passed, in addition to the default `use_cache=True`. Note that it is not yet available through the `pipeline` interface. thon # Generation as usual prompt = system_prompt + ""Question: Please write a function in Python that transforms bytes to Giga bytes.\n\nAnswer: Here"" model_inputs = tokenizer(prompt, return_tensors='pt') generation_output = model.generate(**model_inputs, max_new_tokens=60, return_dict_in_generate=True) decoded_output = tokenizer.batch_decode(generation_output.sequences)[0] # Piping the returned `past_key_values` to speed up the next conversation round prompt = decoded_output + ""\nQuestion: How can I modify the function above to return Mega bytes instead?\n\nAnswer: Here"" model_inputs = tokenizer(prompt, return_tensors='pt') generation_output = model.generate( **model_inputs, past_key_values=generation_output.past_key_values, max_new_tokens=60, return_dict_in_generate=True ) tokenizer.batch_decode(generation_output.sequences)[0][len(prompt):] **Output**: is a modified version of the function that returns Mega bytes instead. def bytes_to_megabytes(bytes): return bytes / 1024 / 1024 Answer: The function takes a number of bytes as input and returns the number of Great, no additional time is spent recomputing the same key and values for the attention layer! There is however one catch. While the required peak memory for the \\( \mathbf{QK}^T \\) matrix is significantly reduced, holding the key-value cache in memory can become very memory expensive for long input sequences or multi-turn chat. Remember that the key-value cache needs to store the key-value vectors for all previous input vectors \\( \mathbf{x}_i \text{, for } i \in \{1, \ldots, c - 1\} \\) for all self-attention layers and for all attention heads. Let's compute the number of float values that need to be stored in the key-value cache for the LLM `bigcode/octocoder` that we used before. The number of float values amounts to two times the sequence length times the number of attention heads times the attention head dimension and times the number of layers. 
Computing this for our LLM at a hypothetical input sequence length of 16000 gives:

```python
config = model.config
2 * 16_000 * config.n_layer * config.n_head * config.n_embd // config.n_head
```

**Output**:
```
7864320000
```

Roughly 8 billion float values! Storing 8 billion float values in `float16` precision requires around 15 GB of RAM, which is roughly half as much as the model weights themselves!

Researchers have proposed two methods that significantly reduce the memory cost of storing the key-value cache, which are explored in the next subsections.

#### 3.2.2 Multi-Query-Attention (MQA)

[Multi-Query-Attention](https://arxiv.org/abs/1911.02150) was proposed in Noam Shazeer's *Fast Transformer Decoding: One Write-Head is All You Need* paper. As the title says, Noam found out that instead of using `n_head` key-value projection weights, one can use a single key-value projection weight pair that is shared across all attention heads without the model's performance degrading significantly.

> By using a single key-value projection weight pair, the key-value vectors \\( \mathbf{k}_i, \mathbf{v}_i \\) have to be identical across all attention heads, which in turn means that we only need to store 1 key-value projection pair in the cache instead of `n_head` ones.

As most LLMs use between 20 and 100 attention heads, MQA significantly reduces the memory consumption of the key-value cache. For the LLM used in this notebook we could therefore reduce the required memory consumption from 15 GB to less than 400 MB at an input sequence length of 16000.

In addition to memory savings, MQA also leads to improved computational efficiency as explained in the following.
In auto-regressive decoding, large key-value vectors need to be reloaded, concatenated with the current key-value vector pair, and then fed into the \\( \mathbf{q}_c\mathbf{K}^T \\) computation at every step. For auto-regressive decoding, the required memory bandwidth for the constant reloading can become a serious time bottleneck. By reducing the size of the key-value vectors, less memory needs to be accessed, thus reducing the memory bandwidth bottleneck. For more detail, please have a look at [Noam's paper](https://arxiv.org/abs/1911.02150).

The important part to understand here is that reducing the number of key-value attention heads to 1 only makes sense if a key-value cache is used. The peak memory consumption of the model for a single forward pass without key-value cache stays unchanged as every attention head still has a unique query vector so that each attention head still has a different \\( \mathbf{QK}^T \\) matrix.

MQA has seen wide adoption by the community and is now used by many of the most popular LLMs:

- [**Falcon**](https://huggingface.co/tiiuae/falcon-40b)
- [**PaLM**](https://arxiv.org/abs/2204.02311)
- [**MPT**](https://huggingface.co/mosaicml/mpt-30b)
- [**BLOOM**](https://huggingface.co/bigscience/bloom)

Also, the checkpoint used in this notebook - `bigcode/octocoder` - makes use of MQA.

#### 3.2.3 Grouped-Query-Attention (GQA)

[Grouped-Query-Attention](https://arxiv.org/abs/2305.13245), as proposed by Ainslie et al. from Google, found that using MQA can often lead to quality degradation compared to using vanilla multi-key-value head projections. The paper argues that more model performance can be kept by less drastically reducing the number of key-value head projection weights.
Instead of using just a single key-value projection weight, `n < n_head` key-value projection weights should be used. By choosing `n` to a significantly smaller value than `n_head`, such as 2,4 or 8 almost all of the memory and speed gains from MQA can be kept while sacrificing less model capacity and thus arguably less performance. Moreover, the authors of GQA found out that existing model checkpoints can be *uptrained* to have a GQA architecture with as little as 5% of the original pre-training compute. While 5% of the original pre-training compute can still be a massive amount, GQA *uptraining* allows existing checkpoints to be useful for longer input sequences. GQA was only recently proposed which is why there is less adoption at the time of writing this notebook. The most notable application of GQA is [Llama-v2](https://huggingface.co/meta-llama/Llama-2-70b-hf). > As a conclusion, it is strongly recommended to make use of either GQA or MQA if the LLM is deployed with auto-regressive decoding and is required to handle large input sequences as is the case for example for chat. ## Conclusion The research community is constantly coming up with new, nifty ways to speed up inference time for ever-larger LLMs. As an example, one such promising research direction is [speculative decoding](https://arxiv.org/abs/2211.17192) where ""easy tokens"" are generated by smaller, faster language models and only ""hard tokens"" are generated by the LLM itself. Going into more detail is out of the scope of this notebook, but can be read upon in this [nice blog post](https://huggingface.co/blog/assisted-generation). The reason massive LLMs such as GPT3/4, Llama-2-70b, Claude, PaLM can run so quickly in chat-interfaces such as [Hugging Face Chat](https://huggingface.co/chat/) or ChatGPT is to a big part thanks to the above-mentioned improvements in precision, algorithms, and architecture. Going forward, accelerators such as GPUs, TPUs, etc will only get faster and allow for more memory, but one should nevertheless always make sure to use the best available algorithms and architectures to get the most bang for your buck 🤗 " philosophy.md," # Philosophy 🤗 Transformers is an opinionated library built for: - machine learning researchers and educators seeking to use, study or extend large-scale Transformers models. - hands-on practitioners who want to fine-tune those models or serve them in production, or both. - engineers who just want to download a pretrained model and use it to solve a given machine learning task. The library was designed with two strong goals in mind: 1. Be as easy and fast to use as possible: - We strongly limited the number of user-facing abstractions to learn, in fact, there are almost no abstractions, just three standard classes required to use each model: [configuration](main_classes/configuration), [models](main_classes/model), and a preprocessing class ([tokenizer](main_classes/tokenizer) for NLP, [image processor](main_classes/image_processor) for vision, [feature extractor](main_classes/feature_extractor) for audio, and [processor](main_classes/processors) for multimodal inputs). 
- All of these classes can be initialized in a simple and unified way from pretrained instances by using a common `from_pretrained()` method which downloads (if needed), caches and loads the related class instance and associated data (configurations' hyperparameters, tokenizers' vocabulary, and models' weights) from a pretrained checkpoint provided on [Hugging Face Hub](https://huggingface.co/models) or your own saved checkpoint. - On top of those three base classes, the library provides two APIs: [`pipeline`] for quickly using a model for inference on a given task and [`Trainer`] to quickly train or fine-tune a PyTorch model (all TensorFlow models are compatible with `Keras.fit`). - As a consequence, this library is NOT a modular toolbox of building blocks for neural nets. If you want to extend or build upon the library, just use regular Python, PyTorch, TensorFlow, Keras modules and inherit from the base classes of the library to reuse functionalities like model loading and saving. If you'd like to learn more about our coding philosophy for models, check out our [Repeat Yourself](https://huggingface.co/blog/transformers-design-philosophy) blog post. 2. Provide state-of-the-art models with performances as close as possible to the original models: - We provide at least one example for each architecture which reproduces a result provided by the official authors of said architecture. - The code is usually as close to the original code base as possible which means some PyTorch code may be not as *pytorchic* as it could be as a result of being converted TensorFlow code and vice versa. A few other goals: - Expose the models' internals as consistently as possible: - We give access, using a single API, to the full hidden-states and attention weights. - The preprocessing classes and base model APIs are standardized to easily switch between models. - Incorporate a subjective selection of promising tools for fine-tuning and investigating these models: - A simple and consistent way to add new tokens to the vocabulary and embeddings for fine-tuning. - Simple ways to mask and prune Transformer heads. - Easily switch between PyTorch, TensorFlow 2.0 and Flax, allowing training with one framework and inference with another. ## Main concepts The library is built around three types of classes for each model: - **Model classes** can be PyTorch models ([torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)), Keras models ([tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model)) or JAX/Flax models ([flax.linen.Module](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html)) that work with the pretrained weights provided in the library. - **Configuration classes** store the hyperparameters required to build a model (such as the number of layers and hidden size). You don't always need to instantiate these yourself. In particular, if you are using a pretrained model without any modification, creating the model will automatically take care of instantiating the configuration (which is part of the model). - **Preprocessing classes** convert the raw data into a format accepted by the model. A [tokenizer](main_classes/tokenizer) stores the vocabulary for each model and provide methods for encoding and decoding strings in a list of token embedding indices to be fed to a model. 
[Image processors](main_classes/image_processor) preprocess vision inputs, [feature extractors](main_classes/feature_extractor) preprocess audio inputs, and a [processor](main_classes/processors) handles multimodal inputs. All these classes can be instantiated from pretrained instances, saved locally, and shared on the Hub with three methods: - `from_pretrained()` lets you instantiate a model, configuration, and preprocessing class from a pretrained version either provided by the library itself (the supported models can be found on the [Model Hub](https://huggingface.co/models)) or stored locally (or on a server) by the user. - `save_pretrained()` lets you save a model, configuration, and preprocessing class locally so that it can be reloaded using `from_pretrained()`. - `push_to_hub()` lets you share a model, configuration, and a preprocessing class to the Hub, so it is easily accessible to everyone. " perf_train_gpu_one.md," # Methods and tools for efficient training on a single GPU This guide demonstrates practical techniques that you can use to increase the efficiency of your model's training by optimizing memory utilization, speeding up the training, or both. If you'd like to understand how GPU is utilized during training, please refer to the [Model training anatomy](model_memory_anatomy) conceptual guide first. This guide focuses on practical techniques. If you have access to a machine with multiple GPUs, these approaches are still valid, plus you can leverage additional methods outlined in the [multi-GPU section](perf_train_gpu_many). When training large models, there are two aspects that should be considered at the same time: * Data throughput/training time * Model performance Maximizing the throughput (samples/second) leads to lower training cost. This is generally achieved by utilizing the GPU as much as possible and thus filling GPU memory to its limit. If the desired batch size exceeds the limits of the GPU memory, the memory optimization techniques, such as gradient accumulation, can help. However, if the preferred batch size fits into memory, there's no reason to apply memory-optimizing techniques because they can slow down the training. Just because one can use a large batch size, does not necessarily mean they should. As part of hyperparameter tuning, you should determine which batch size yields the best results and then optimize resources accordingly. The methods and tools covered in this guide can be classified based on the effect they have on the training process: | Method/tool | Improves training speed | Optimizes memory utilization | |:-----------------------------------------------------------|:------------------------|:-----------------------------| | [Batch size choice](#batch-size-choice) | Yes | Yes | | [Gradient accumulation](#gradient-accumulation) | No | Yes | | [Gradient checkpointing](#gradient-checkpointing) | No | Yes | | [Mixed precision training](#mixed-precision-training) | Yes | (No) | | [Optimizer choice](#optimizer-choice) | Yes | Yes | | [Data preloading](#data-preloading) | Yes | No | | [DeepSpeed Zero](#deepspeed-zero) | No | Yes | | [torch.compile](#using-torchcompile) | Yes | No | Note: when using mixed precision with a small model and a large batch size, there will be some memory savings but with a large model and a small batch size, the memory use will be larger. You can combine the above methods to get a cumulative effect. 
These techniques are available to you whether you are training your model with [`Trainer`] or writing a pure PyTorch loop, in which case you can [configure these optimizations with 🤗 Accelerate](#using-accelerate). If these methods do not result in sufficient gains, you can explore the following options: * [Look into building your own custom Docker container with efficient softare prebuilds](#efficient-software-prebuilds) * [Consider a model that uses Mixture of Experts (MoE)](#mixture-of-experts) * [Convert your model to BetterTransformer to leverage PyTorch native attention](#using-pytorch-native-attention) Finally, if all of the above is still not enough, even after switching to a server-grade GPU like A100, consider moving to a multi-GPU setup. All these approaches are still valid in a multi-GPU setup, plus you can leverage additional parallelism techniques outlined in the [multi-GPU section](perf_train_gpu_many). ## Batch size choice To achieve optimal performance, start by identifying the appropriate batch size. It is recommended to use batch sizes and input/output neuron counts that are of size 2^N. Often it's a multiple of 8, but it can be higher depending on the hardware being used and the model's dtype. For reference, check out NVIDIA's recommendation for [input/output neuron counts]( https://docs.nvidia.com/deeplearning/performance/dl-performance-fully-connected/index.html#input-features) and [batch size](https://docs.nvidia.com/deeplearning/performance/dl-performance-fully-connected/index.html#batch-size) for fully connected layers (which are involved in GEMMs (General Matrix Multiplications)). [Tensor Core Requirements](https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc) define the multiplier based on the dtype and the hardware. For instance, for fp16 data type a multiple of 8 is recommended, unless it's an A100 GPU, in which case use multiples of 64. For parameters that are small, consider also [Dimension Quantization Effects](https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#dim-quantization). This is where tiling happens and the right multiplier can have a significant speedup. ## Gradient Accumulation The **gradient accumulation** method aims to calculate gradients in smaller increments instead of computing them for the entire batch at once. This approach involves iteratively calculating gradients in smaller batches by performing forward and backward passes through the model and accumulating the gradients during the process. Once a sufficient number of gradients have been accumulated, the model's optimization step is executed. By employing gradient accumulation, it becomes possible to increase the **effective batch size** beyond the limitations imposed by the GPU's memory capacity. However, it is important to note that the additional forward and backward passes introduced by gradient accumulation can slow down the training process. You can enable gradient accumulation by adding the `gradient_accumulation_steps` argument to [`TrainingArguments`]: training_args = TrainingArguments(per_device_train_batch_size=1, gradient_accumulation_steps=4, **default_args) In the above example, your effective batch size becomes 4. Alternatively, use 🤗 Accelerate to gain full control over the training loop. Find the 🤗 Accelerate example [further down in this guide](#using-accelerate). 
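In the meantime, here is a minimal, self-contained sketch of what gradient accumulation looks like in an Accelerate-powered loop (the tiny linear model and random data are placeholders for your real setup):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=4)

# Toy stand-ins so the sketch is self-contained; replace with your real model and data.
model = torch.nn.Linear(16, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataloader = DataLoader(TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,))), batch_size=4)

model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, labels in dataloader:
    # Gradients are accumulated and the optimizer only steps on every 4th batch.
    with accelerator.accumulate(model):
        loss = torch.nn.functional.cross_entropy(model(inputs), labels)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```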
While it is advised to max out GPU usage as much as possible, a high number of gradient accumulation steps can result in a more pronounced training slowdown. Consider the following example. Let's say, the `per_device_train_batch_size=4` without gradient accumulation hits the GPU's limit. If you would like to train with batches of size 64, do not set the `per_device_train_batch_size` to 1 and `gradient_accumulation_steps` to 64. Instead, keep `per_device_train_batch_size=4` and set `gradient_accumulation_steps=16`. This results in the same effective batch size while making better use of the available GPU resources. For additional information, please refer to batch size and gradient accumulation benchmarks for [RTX-3090](https://github.com/huggingface/transformers/issues/14608#issuecomment-1004392537) and [A100](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005033957). ## Gradient Checkpointing Some large models may still face memory issues even when the batch size is set to 1 and gradient accumulation is used. This is because there are other components that also require memory storage. Saving all activations from the forward pass in order to compute the gradients during the backward pass can result in significant memory overhead. The alternative approach of discarding the activations and recalculating them when needed during the backward pass, would introduce a considerable computational overhead and slow down the training process. **Gradient checkpointing** offers a compromise between these two approaches and saves strategically selected activations throughout the computational graph so only a fraction of the activations need to be re-computed for the gradients. For an in-depth explanation of gradient checkpointing, refer to [this great article](https://medium.com/tensorflow/fitting-larger-networks-into-memory-583e3c758ff9). To enable gradient checkpointing in the [`Trainer`], pass the corresponding a flag to [`TrainingArguments`]: training_args = TrainingArguments( per_device_train_batch_size=1, gradient_accumulation_steps=4, gradient_checkpointing=True, **default_args ) Alternatively, use 🤗 Accelerate - find the 🤗 Accelerate example [further in this guide](#using-accelerate). While gradient checkpointing may improve memory efficiency, it slows training by approximately 20%. ## Mixed precision training **Mixed precision training** is a technique that aims to optimize the computational efficiency of training models by utilizing lower-precision numerical formats for certain variables. Traditionally, most models use 32-bit floating point precision (fp32 or float32) to represent and process variables. However, not all variables require this high precision level to achieve accurate results. By reducing the precision of certain variables to lower numerical formats like 16-bit floating point (fp16 or float16), we can speed up the computations. Because in this approach some computations are performed in half-precision, while some are still in full precision, the approach is called mixed precision training. Most commonly mixed precision training is achieved by using fp16 (float16) data types, however, some GPU architectures (such as the Ampere architecture) offer bf16 and tf32 (CUDA internal data type) data types. Check out the [NVIDIA Blog](https://developer.nvidia.com/blog/accelerating-ai-training-with-tf32-tensor-cores/) to learn more about the differences between these data types. 
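For reference, outside of [`Trainer`] mixed precision is typically enabled in a plain PyTorch loop with `torch.autocast` and a gradient scaler. A minimal sketch, assuming a CUDA GPU and a toy model (the scaler is only needed for fp16, not for bf16):

```python
import torch

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()

for _ in range(10):
    inputs = torch.randn(8, 1024, device="cuda")
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = model(inputs).float().pow(2).mean()  # forward pass runs in half precision
    scaler.scale(loss).backward()   # scale the loss to avoid fp16 gradient underflow
    scaler.step(optimizer)
    scaler.update()
    optimizer.zero_grad()
```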
### fp16 The main advantage of mixed precision training comes from saving the activations in half precision (fp16). Although the gradients are also computed in half precision, they are converted back to full precision for the optimization step, so no memory is saved there. While mixed precision training results in faster computations, it can also lead to more GPU memory being utilized, especially for small batch sizes. This is because the model is now present on the GPU in both 16-bit and 32-bit precision (1.5x the original model on the GPU). To enable mixed precision training, set the `fp16` flag to `True`: training_args = TrainingArguments(per_device_train_batch_size=4, fp16=True, **default_args) If you prefer to use 🤗 Accelerate, find the 🤗 Accelerate example [further in this guide](#using-accelerate). ### BF16 If you have access to Ampere or newer hardware you can use bf16 for mixed precision training and evaluation. While bf16 has a worse precision than fp16, it has a much bigger dynamic range. In fp16 the biggest number you can have is `65504` and any number above that will result in an overflow. A bf16 number can be as large as `3.39e+38` (!) which is about the same as fp32 - because both have 8 bits used for the numerical range. You can enable BF16 in the 🤗 Trainer with: ```python training_args = TrainingArguments(bf16=True, **default_args) ### TF32 The Ampere hardware uses a magical data type called tf32. It has the same numerical range as fp32 (8 bits), but instead of 23 bits of precision it has only 10 bits (same as fp16) and uses only 19 bits in total. It's ""magical"" in the sense that you can use the normal fp32 training and/or inference code and by enabling tf32 support you can get up to 3x throughput improvement. All you need to do is to add the following to your code: import torch torch.backends.cuda.matmul.allow_tf32 = True torch.backends.cudnn.allow_tf32 = True CUDA will automatically switch to using tf32 instead of fp32 where possible, assuming that the used GPU is from the Ampere series. According to [NVIDIA research](https://developer.nvidia.com/blog/accelerating-ai-training-with-tf32-tensor-cores/), the majority of machine learning training workloads show the same perplexity and convergence with tf32 training as with fp32. If you're already using fp16 or bf16 mixed precision it may help with the throughput as well. You can enable this mode in the 🤗 Trainer: ```python TrainingArguments(tf32=True, **default_args) tf32 can't be accessed directly via `tensor.to(dtype=torch.tf32)` because it is an internal CUDA data type. You need `torch>=1.7` to use tf32 data types. For additional information on tf32 vs other precisions, please refer to the following benchmarks: [RTX-3090](https://github.com/huggingface/transformers/issues/14608#issuecomment-1004390803) and [A100](https://github.com/huggingface/transformers/issues/15026#issuecomment-1004543189). ## Flash Attention 2 You can speed up the training throughput by using the Flash Attention 2 integration in transformers. Check out the appropriate section in the [single GPU section](./perf_infer_gpu_one#Flash-Attention-2) to learn more about how to load a model with Flash Attention 2 modules. ## Optimizer choice The most common optimizer used to train transformer models is Adam or AdamW (Adam with weight decay). Adam achieves good convergence by storing the rolling average of the previous gradients; however, it adds an additional memory footprint of the order of the number of model parameters.
To remedy this, you can use an alternative optimizer. For example if you have [NVIDIA/apex](https://github.com/NVIDIA/apex) installed, `adamw_apex_fused` will give you the fastest training experience among all supported AdamW optimizers. [`Trainer`] integrates a variety of optimizers that can be used out of box: `adamw_hf`, `adamw_torch`, `adamw_torch_fused`, `adamw_apex_fused`, `adamw_anyprecision`, `adafactor`, or `adamw_bnb_8bit`. More optimizers can be plugged in via a third-party implementation. Let's take a closer look at two alternatives to AdamW optimizer: 1. `adafactor` which is available in [`Trainer`] 2. `adamw_bnb_8bit` is also available in Trainer, but a third-party integration is provided below for demonstration. For comparison, for a 3B-parameter model, like “t5-3b”: * A standard AdamW optimizer will need 24GB of GPU memory because it uses 8 bytes for each parameter (8*3 => 24GB) * Adafactor optimizer will need more than 12GB. It uses slightly more than 4 bytes for each parameter, so 4*3 and then some extra. * 8bit BNB quantized optimizer will use only (2*3) 6GB if all optimizer states are quantized. ### Adafactor Adafactor doesn't store rolling averages for each element in weight matrices. Instead, it keeps aggregated information (sums of rolling averages row- and column-wise), significantly reducing its footprint. However, compared to Adam, Adafactor may have slower convergence in certain cases. You can switch to Adafactor by setting `optim=""adafactor""` in [`TrainingArguments`]: training_args = TrainingArguments(per_device_train_batch_size=4, optim=""adafactor"", **default_args) Combined with other approaches (gradient accumulation, gradient checkpointing, and mixed precision training) you can notice up to 3x improvement while maintaining the throughput! However, as mentioned before, the convergence of Adafactor can be worse than Adam. ### 8-bit Adam Instead of aggregating optimizer states like Adafactor, 8-bit Adam keeps the full state and quantizes it. Quantization means that it stores the state with lower precision and dequantizes it only for the optimization. This is similar to the idea behind mixed precision training. To use `adamw_bnb_8bit`, you simply need to set `optim=""adamw_bnb_8bit""` in [`TrainingArguments`]: training_args = TrainingArguments(per_device_train_batch_size=4, optim=""adamw_bnb_8bit"", **default_args) However, we can also use a third-party implementation of the 8-bit optimizer for demonstration purposes to see how that can be integrated. First, follow the installation guide in the GitHub [repo](https://github.com/TimDettmers/bitsandbytes) to install the `bitsandbytes` library that implements the 8-bit Adam optimizer. Next you need to initialize the optimizer. This involves two steps: * First, group the model's parameters into two groups - one where weight decay should be applied, and the other one where it should not. Usually, biases and layer norm parameters are not weight decayed. * Then do some argument housekeeping to use the same parameters as the previously used AdamW optimizer. 
import bitsandbytes as bnb from torch import nn from transformers.trainer_pt_utils import get_parameter_names training_args = TrainingArguments(per_device_train_batch_size=4, **default_args) decay_parameters = get_parameter_names(model, [nn.LayerNorm]) decay_parameters = [name for name in decay_parameters if ""bias"" not in name] optimizer_grouped_parameters = [ { ""params"": [p for n, p in model.named_parameters() if n in decay_parameters], ""weight_decay"": training_args.weight_decay, }, { ""params"": [p for n, p in model.named_parameters() if n not in decay_parameters], ""weight_decay"": 0.0, }, ] optimizer_kwargs = { ""betas"": (training_args.adam_beta1, training_args.adam_beta2), ""eps"": training_args.adam_epsilon, } optimizer_kwargs[""lr""] = training_args.learning_rate adam_bnb_optim = bnb.optim.Adam8bit( optimizer_grouped_parameters, betas=(training_args.adam_beta1, training_args.adam_beta2), eps=training_args.adam_epsilon, lr=training_args.learning_rate, ) Finally, pass the custom optimizer as an argument to the `Trainer`: trainer = Trainer(model=model, args=training_args, train_dataset=ds, optimizers=(adam_bnb_optim, None)) Combined with other approaches (gradient accumulation, gradient checkpointing, and mixed precision training), you can expect to get about a 3x memory improvement and even slightly higher throughput as using Adafactor. ### multi_tensor pytorch-nightly introduced `torch.optim._multi_tensor` which should significantly speed up the optimizers for situations with lots of small feature tensors. It should eventually become the default, but if you want to experiment with it sooner, take a look at this GitHub [issue](https://github.com/huggingface/transformers/issues/9965). ## Data preloading One of the important requirements to reach great training speed is the ability to feed the GPU at the maximum speed it can handle. By default, everything happens in the main process, and it might not be able to read the data from disk fast enough, and thus create a bottleneck, leading to GPU under-utilization. Configure the following arguments to reduce the bottleneck: - `DataLoader(pin_memory=True, )` - ensures the data gets preloaded into the pinned memory on CPU and typically leads to much faster transfers from CPU to GPU memory. - `DataLoader(num_workers=4, )` - spawn several workers to preload data faster. During training, watch the GPU utilization stats; if it's far from 100%, experiment with increasing the number of workers. Of course, the problem could be elsewhere, so many workers won't necessarily lead to better performance. When using [`Trainer`], the corresponding [`TrainingArguments`] are: `dataloader_pin_memory` (`True` by default), and `dataloader_num_workers` (defaults to `0`). ## DeepSpeed ZeRO DeepSpeed is an open-source deep learning optimization library that is integrated with 🤗 Transformers and 🤗 Accelerate. It provides a wide range of features and optimizations designed to improve the efficiency and scalability of large-scale deep learning training. If your model fits onto a single GPU and you have enough space to fit a small batch size, you don't need to use DeepSpeed as it'll only slow things down. However, if the model doesn't fit onto a single GPU or you can't fit a small batch, you can leverage DeepSpeed ZeRO + CPU Offload, or NVMe Offload for much larger models. 
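To give a sense of what such a setup looks like, here is a minimal, illustrative sketch of a ZeRO stage 3 configuration with CPU offload, passed to the [`Trainer`] as a Python dict (the `deepspeed` argument of [`TrainingArguments`] also accepts a path to a JSON file). The exact options you need depend on your model and hardware, so treat this only as a starting point and follow the guides referenced below:

```python
from transformers import TrainingArguments

# Minimal sketch of a ZeRO stage 3 + CPU offload configuration (not an official recipe).
# "auto" values are filled in by the 🤗 Transformers DeepSpeed integration.
ds_config = {
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu"},
        "offload_param": {"device": "cpu"},
    },
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

training_args = TrainingArguments(per_device_train_batch_size=1, deepspeed=ds_config, **default_args)
```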
To use DeepSpeed, you first need to separately [install the library](main_classes/deepspeed#installation), then follow one of the guides to create a configuration file and launch DeepSpeed: * For an in-depth guide on the DeepSpeed integration with [`Trainer`], review [the corresponding documentation](main_classes/deepspeed), specifically the [section for a single GPU](main_classes/deepspeed#deployment-with-one-gpu). Some adjustments are required to use DeepSpeed in a notebook; please take a look at the [corresponding guide](main_classes/deepspeed#deployment-in-notebooks). * If you prefer to use 🤗 Accelerate, refer to the [🤗 Accelerate DeepSpeed guide](https://huggingface.co/docs/accelerate/en/usage_guides/deepspeed). ## Using torch.compile PyTorch 2.0 introduced a new compile function that doesn't require any modification to existing PyTorch code but can optimize your code by adding a single line of code: `model = torch.compile(model)`. If using [`Trainer`], you only need to pass the `torch_compile` option in [`TrainingArguments`]: ```python training_args = TrainingArguments(torch_compile=True, **default_args) `torch.compile` uses Python's frame evaluation API to automatically create a graph from existing PyTorch programs. After capturing the graph, different backends can be deployed to lower the graph to an optimized engine. You can find more details and benchmarks in the [PyTorch documentation](https://pytorch.org/get-started/pytorch-2.0/). `torch.compile` has a growing list of backends, which can be found by calling `torchdynamo.list_backends()`, each with its own optional dependencies. Choose which backend to use by specifying it via `torch_compile_backend` in the [`TrainingArguments`]. Some of the most commonly used backends are: **Debugging backends**: * `dynamo.optimize(""eager"")` - Uses PyTorch to run the extracted GraphModule. This is quite useful in debugging TorchDynamo issues. * `dynamo.optimize(""aot_eager"")` - Uses AotAutograd with no compiler, i.e., just using PyTorch eager for the AotAutograd's extracted forward and backward graphs. This is useful for debugging, and unlikely to give speedups. **Training & inference backends**: * `dynamo.optimize(""inductor"")` - Uses the TorchInductor backend with AotAutograd and cudagraphs by leveraging codegened Triton kernels. [Read more](https://dev-discuss.pytorch.org/t/torchinductor-a-pytorch-native-compiler-with-define-by-run-ir-and-symbolic-shapes/747) * `dynamo.optimize(""nvfuser"")` - nvFuser with TorchScript. [Read more](https://dev-discuss.pytorch.org/t/tracing-with-primitives-update-1-nvfuser-and-its-primitives/593) * `dynamo.optimize(""aot_nvfuser"")` - nvFuser with AotAutograd. [Read more](https://dev-discuss.pytorch.org/t/tracing-with-primitives-update-1-nvfuser-and-its-primitives/593) * `dynamo.optimize(""aot_cudagraphs"")` - cudagraphs with AotAutograd. [Read more](https://github.com/pytorch/torchdynamo/pull/757) **Inference-only backends**: * `dynamo.optimize(""ofi"")` - Uses TorchScript's optimize_for_inference. [Read more](https://pytorch.org/docs/stable/generated/torch.jit.optimize_for_inference.html) * `dynamo.optimize(""fx2trt"")` - Uses NVIDIA TensorRT for inference optimizations. [Read more](https://pytorch.org/TensorRT/tutorials/getting_started_with_fx_path.html) * `dynamo.optimize(""onnxrt"")` - Uses ONNX Runtime for inference on CPU/GPU. [Read more](https://onnxruntime.ai/) * `dynamo.optimize(""ipex"")` - Uses IPEX for inference on CPU.
[Read more](https://github.com/intel/intel-extension-for-pytorch) For an example of using `torch.compile` with 🤗 Transformers, check out this [blog post on fine-tuning a BERT model for Text Classification using the newest PyTorch 2.0 features](https://www.philschmid.de/getting-started-pytorch-2-0-transformers) ## Using 🤗 Accelerate With [🤗 Accelerate](https://huggingface.co/docs/accelerate/index) you can use the above methods while gaining full control over the training loop and can essentially write the loop in pure PyTorch with some minor modifications. Suppose you have combined the methods in the [`TrainingArguments`] like so: training_args = TrainingArguments( per_device_train_batch_size=1, gradient_accumulation_steps=4, gradient_checkpointing=True, fp16=True, **default_args, ) The full example training loop with 🤗 Accelerate is only a handful of lines of code long: from accelerate import Accelerator from torch.utils.data.dataloader import DataLoader dataloader = DataLoader(ds, batch_size=training_args.per_device_train_batch_size) if training_args.gradient_checkpointing: model.gradient_checkpointing_enable() accelerator = Accelerator(fp16=training_args.fp16) model, optimizer, dataloader = accelerator.prepare(model, adam_bnb_optim, dataloader) model.train() for step, batch in enumerate(dataloader, start=1): loss = model(**batch).loss loss = loss / training_args.gradient_accumulation_steps accelerator.backward(loss) if step % training_args.gradient_accumulation_steps == 0: optimizer.step() optimizer.zero_grad() First we wrap the dataset in a [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader). Then we can enable gradient checkpointing by calling the model's [`~PreTrainedModel.gradient_checkpointing_enable`] method. When we initialize the [`Accelerator`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator) we can specify if we want to use mixed precision training and it will take care of it for us in the [`prepare`] call. During the [`prepare`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.prepare) call the dataloader will also be distributed across workers should we use multiple GPUs. We use the same [8-bit optimizer](#8-bit-adam) from the earlier example. Finally, we can add the main training loop. Note that the `backward` call is handled by 🤗 Accelerate. We can also see how gradient accumulation works: we normalize the loss, so we get the average at the end of accumulation and once we have enough steps we run the optimization. Implementing these optimization techniques with 🤗 Accelerate only takes a handful of lines of code and comes with the benefit of more flexibility in the training loop. For a full documentation of all features have a look at the [Accelerate documentation](https://huggingface.co/docs/accelerate/index). ## Efficient Software Prebuilds PyTorch's [pip and conda builds](https://pytorch.org/get-started/locally/#start-locally) come prebuilt with the cuda toolkit which is enough to run PyTorch, but it is insufficient if you need to build cuda extensions. At times, additional efforts may be required to pre-build some components. For instance, if you're using libraries like `apex` that don't come pre-compiled. In other situations figuring out how to install the right cuda toolkit system-wide can be complicated. 
To address these scenarios, PyTorch and NVIDIA released a new version of the NGC docker container which already comes with everything prebuilt. You just need to install your programs on it, and it will run out of the box. This approach is also useful if you want to tweak the pytorch source and/or make a new customized build. To find the docker image version you want, start with [the PyTorch release notes](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/) and choose one of the latest monthly releases. Go into the release notes for the desired release, check that the environment's components match your needs (including the NVIDIA Driver requirements!), and then at the very top of that document go to the corresponding NGC page. If for some reason you get lost, here is [the index of all PyTorch NGC images](https://ngc.nvidia.com/catalog/containers/nvidia:pytorch). Next, follow the instructions to download and deploy the docker image. ## Mixture of Experts Some recent papers reported a 4-5x training speedup and faster inference by integrating Mixture of Experts (MoE) into Transformer models. Since it has been discovered that more parameters lead to better performance, this technique allows increasing the number of parameters by an order of magnitude without increasing training costs. In this approach, every other FFN layer is replaced with an MoE layer which consists of many experts, with a gated function that trains each expert in a balanced way depending on the input token's position in a sequence. ![MoE Transformer 2x block](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perf-moe-transformer.png) (source: [GLAM](https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html)) You can find exhaustive details and comparison tables in the papers listed at the end of this section. The main drawback of this approach is that it requires staggering amounts of GPU memory - almost an order of magnitude larger than its dense equivalent. Various distillation and other approaches have been proposed to overcome the much higher memory requirements. There is a direct trade-off though: instead of dozens or hundreds of experts, you can use just a few experts with a 2-3x smaller base model, leading to a 5x smaller model that increases the training speed moderately while increasing the memory requirements moderately as well.
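To make the routing idea concrete, here is a toy, top-1-routed MoE layer in PyTorch. It is purely illustrative and not how any particular library implements it; real implementations (such as DeepSpeed-MoE, referenced below) add expert capacity limits, load-balancing losses, and expert parallelism:

```python
import torch
from torch import nn


class ToyMoELayer(nn.Module):
    """Toy top-1 routed mixture-of-experts FFN (illustration only)."""

    def __init__(self, hidden_size: int, ffn_size: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(hidden_size, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden_size, ffn_size), nn.GELU(), nn.Linear(ffn_size, hidden_size))
            for _ in range(num_experts)
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # (batch, seq_len, hidden_size) -> flatten to individual tokens
        tokens = hidden_states.reshape(-1, hidden_states.size(-1))
        gate_probs = self.router(tokens).softmax(dim=-1)   # (num_tokens, num_experts)
        top_prob, top_expert = gate_probs.max(dim=-1)      # top-1 routing decision per token
        output = torch.zeros_like(tokens)
        for idx, expert in enumerate(self.experts):
            mask = top_expert == idx
            if mask.any():
                # each token only runs through its selected expert, scaled by the gate probability
                output[mask] = top_prob[mask].unsqueeze(-1) * expert(tokens[mask])
        return output.reshape_as(hidden_states)
```

Because each token activates only one expert, the compute per token stays roughly constant while the parameter count grows with the number of experts, which is the core trade-off discussed above.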
Most related papers and implementations are built around Tensorflow/TPUs: - [GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding](https://arxiv.org/abs/2006.16668) - [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) - [GLaM: Generalist Language Model (GLaM)](https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html) And for Pytorch DeepSpeed has built one as well: [DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale](https://arxiv.org/abs/2201.05596), [Mixture of Experts](https://www.deepspeed.ai/tutorials/mixture-of-experts/) - blog posts: [1](https://www.microsoft.com/en-us/research/blog/deepspeed-powers-8x-larger-moe-model-training-with-high-performance/), [2](https://www.microsoft.com/en-us/research/publication/scalable-and-efficient-moe-training-for-multitask-multilingual-models/) and specific deployment with large transformer-based natural language generation models: [blog post](https://www.deepspeed.ai/2021/12/09/deepspeed-moe-nlg.html), [Megatron-Deepspeed branch](https://github.com/microsoft/Megatron-DeepSpeed/tree/moe-training). ## Using PyTorch native attention and Flash Attention PyTorch 2.0 released a native [`torch.nn.functional.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html) (SDPA), that allows using fused GPU kernels such as [memory-efficient attention](https://arxiv.org/abs/2112.05682) and [flash attention](https://arxiv.org/abs/2205.14135). After installing the [`optimum`](https://github.com/huggingface/optimum) package, the relevant internal modules can be replaced to use PyTorch's native attention with: thon model = model.to_bettertransformer() Once converted, train the model as usual. The PyTorch-native `scaled_dot_product_attention` operator can only dispatch to Flash Attention if no `attention_mask` is provided. By default, in training mode, the BetterTransformer integration **drops the mask support and can only be used for training that does not require a padding mask for batched training**. This is the case, for example, during masked language modeling or causal language modeling. BetterTransformer is not suited for fine-tuning models on tasks that require a padding mask. Check out this [blogpost](https://pytorch.org/blog/out-of-the-box-acceleration/) to learn more about acceleration and memory-savings with SDPA." testing.md," # Testing Let's take a look at how 🤗 Transformers models are tested and how you can write new tests and improve the existing ones. There are 2 test suites in the repository: 1. `tests` -- tests for the general API 2. `examples` -- tests primarily for various applications that aren't part of the API ## How transformers are tested 1. Once a PR is submitted it gets tested with 9 CircleCi jobs. Every new commit to that PR gets retested. These jobs are defined in this [config file](https://github.com/huggingface/transformers/tree/main/.circleci/config.yml), so that if needed you can reproduce the same environment on your machine. These CI jobs don't run `@slow` tests. 2. There are 3 jobs run by [github actions](https://github.com/huggingface/transformers/actions): - [torch hub integration](https://github.com/huggingface/transformers/tree/main/.github/workflows/github-torch-hub.yml): checks whether torch hub integration works. 
- [self-hosted (push)](https://github.com/huggingface/transformers/tree/main/.github/workflows/self-push.yml): runs fast tests on GPU only on commits on `main`. It only runs if a commit on `main` has updated the code in one of the following folders: `src`, `tests`, `.github` (to prevent running on added model cards, notebooks, etc.) - [self-hosted runner](https://github.com/huggingface/transformers/tree/main/.github/workflows/self-scheduled.yml): runs normal and slow tests on GPU in `tests` and `examples`: ```bash RUN_SLOW=1 pytest tests/ RUN_SLOW=1 pytest examples/ The results can be observed [here](https://github.com/huggingface/transformers/actions). ## Running tests ### Choosing which tests to run This document goes into many details of how tests can be run. If after reading everything, you need even more details you will find them [here](https://docs.pytest.org/en/latest/usage.html). Here are some of the most useful ways of running tests. Run all: ```console pytest or: ```bash make test Note that the latter is defined as: ```bash python -m pytest -n auto --dist=loadfile -s -v ./tests/ which tells pytest to: - run as many test processes as there are CPU cores (which could be too many if you don't have a ton of RAM!) - ensure that all tests from the same file will be run by the same test process - do not capture output - run in verbose mode ### Getting the list of all tests All tests of the test suite: ```bash pytest --collect-only -q All tests of a given test file: ```bash pytest tests/test_optimization.py --collect-only -q ### Run a specific test module To run an individual test module: ```bash pytest tests/utils/test_logging.py ### Run specific tests Since unittest is used inside most of the tests, to run specific subtests you need to know the name of the unittest class containing those tests. For example, it could be: ```bash pytest tests/test_optimization.py::OptimizationTest::test_adam_w Here: - `tests/test_optimization.py` - the file with tests - `OptimizationTest` - the name of the class - `test_adam_w` - the name of the specific test function If the file contains multiple classes, you can choose to run only the tests of a given class. For example: ```bash pytest tests/test_optimization.py::OptimizationTest will run all the tests inside that class. As mentioned earlier you can see what tests are contained inside the `OptimizationTest` class by running: ```bash pytest tests/test_optimization.py::OptimizationTest --collect-only -q You can run tests by keyword expressions. To run only tests whose name contains `adam`: ```bash pytest -k adam tests/test_optimization.py Logical `and` and `or` can be used to indicate whether all keywords should match or either. `not` can be used to negate. To run all tests except those whose name contains `adam`: ```bash pytest -k ""not adam"" tests/test_optimization.py And you can combine the two patterns in one: ```bash pytest -k ""ada and not adam"" tests/test_optimization.py For example, to run both `test_adafactor` and `test_adam_w` you can use: ```bash pytest -k ""test_adafactor or test_adam_w"" tests/test_optimization.py Note that we use `or` here, since we want either of the keywords to match to include both. If you want to include only tests that include both patterns, `and` is to be used: ```bash pytest -k ""test and ada"" tests/test_optimization.py ### Run `accelerate` tests Sometimes you need to run `accelerate` tests on your models.
For that you can just add `-m accelerate_tests` to your command, if let's say you want to run these tests on `OPT` run: ```bash RUN_SLOW=1 pytest -m accelerate_tests tests/models/opt/test_modeling_opt.py ### Run documentation tests In order to test whether the documentation examples are correct, you should check that the `doctests` are passing. As an example, let's use [`WhisperModel.forward`'s docstring](https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/modeling_whisper.py#L1017-L1035): thon r"""""" Returns: Example: thon >>> import torch >>> from transformers import WhisperModel, WhisperFeatureExtractor >>> from datasets import load_dataset >>> model = WhisperModel.from_pretrained(""openai/whisper-base"") >>> feature_extractor = WhisperFeatureExtractor.from_pretrained(""openai/whisper-base"") >>> ds = load_dataset(""hf-internal-testing/librispeech_asr_dummy"", ""clean"", split=""validation"") >>> inputs = feature_extractor(ds[0][""audio""][""array""], return_tensors=""pt"") >>> input_features = inputs.input_features >>> decoder_input_ids = torch.tensor([[1, 1]]) * model.config.decoder_start_token_id >>> last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state >>> list(last_hidden_state.shape) [1, 2, 512] ```"""""" Just run the following line to automatically test every docstring example in the desired file: ```bash pytest --doctest-modules If the file has a markdown extention, you should add the `--doctest-glob=""*.md""` argument. ### Run only modified tests You can run the tests related to the unstaged files or the current branch (according to Git) by using [pytest-picked](https://github.com/anapaulagomes/pytest-picked). This is a great way of quickly testing your changes didn't break anything, since it won't run the tests related to files you didn't touch. ```bash pip install pytest-picked ```bash pytest --picked All tests will be run from files and folders which are modified, but not yet committed. ### Automatically rerun failed tests on source modification [pytest-xdist](https://github.com/pytest-dev/pytest-xdist) provides a very useful feature of detecting all failed tests, and then waiting for you to modify files and continuously re-rerun those failing tests until they pass while you fix them. So that you don't need to re start pytest after you made the fix. This is repeated until all tests pass after which again a full run is performed. ```bash pip install pytest-xdist To enter the mode: `pytest -f` or `pytest --looponfail` File changes are detected by looking at `looponfailroots` root directories and all of their contents (recursively). If the default for this value does not work for you, you can change it in your project by setting a configuration option in `setup.cfg`: ```ini [tool:pytest] looponfailroots = transformers tests or `pytest.ini`/``tox.ini`` files: ```ini [pytest] looponfailroots = transformers tests This would lead to only looking for file changes in the respective directories, specified relatively to the ini-file’s directory. [pytest-watch](https://github.com/joeyespo/pytest-watch) is an alternative implementation of this functionality. ### Skip a test module If you want to run all test modules, except a few you can exclude them by giving an explicit list of tests to run. 
For example, to run all except `test_modeling_*.py` tests: ```bash pytest *ls -1 tests/*py | grep -v test_modeling* ### Clearing state CI builds and when isolation is important (against speed), cache should be cleared: ```bash pytest --cache-clear tests ### Running tests in parallel As mentioned earlier `make test` runs tests in parallel via `pytest-xdist` plugin (`-n X` argument, e.g. `-n 2` to run 2 parallel jobs). `pytest-xdist`'s `--dist=` option allows one to control how the tests are grouped. `--dist=loadfile` puts the tests located in one file onto the same process. Since the order of executed tests is different and unpredictable, if running the test suite with `pytest-xdist` produces failures (meaning we have some undetected coupled tests), use [pytest-replay](https://github.com/ESSS/pytest-replay) to replay the tests in the same order, which should help with then somehow reducing that failing sequence to a minimum. ### Test order and repetition It's good to repeat the tests several times, in sequence, randomly, or in sets, to detect any potential inter-dependency and state-related bugs (tear down). And the straightforward multiple repetition is just good to detect some problems that get uncovered by randomness of DL. #### Repeat tests - [pytest-flakefinder](https://github.com/dropbox/pytest-flakefinder): ```bash pip install pytest-flakefinder And then run every test multiple times (50 by default): ```bash pytest --flake-finder --flake-runs=5 tests/test_failing_test.py This plugin doesn't work with `-n` flag from `pytest-xdist`. There is another plugin `pytest-repeat`, but it doesn't work with `unittest`. #### Run tests in a random order ```bash pip install pytest-random-order Important: the presence of `pytest-random-order` will automatically randomize tests, no configuration change or command line options is required. As explained earlier this allows detection of coupled tests - where one test's state affects the state of another. When `pytest-random-order` is installed it will print the random seed it used for that session, e.g: ```bash pytest tests [] Using --random-order-bucket=module Using --random-order-seed=573663 So that if the given particular sequence fails, you can reproduce it by adding that exact seed, e.g.: ```bash pytest --random-order-seed=573663 [] Using --random-order-bucket=module Using --random-order-seed=573663 It will only reproduce the exact order if you use the exact same list of tests (or no list at all). Once you start to manually narrowing down the list you can no longer rely on the seed, but have to list them manually in the exact order they failed and tell pytest to not randomize them instead using `--random-order-bucket=none`, e.g.: ```bash pytest --random-order-bucket=none tests/test_a.py tests/test_c.py tests/test_b.py To disable the shuffling for all tests: ```bash pytest --random-order-bucket=none By default `--random-order-bucket=module` is implied, which will shuffle the files on the module levels. It can also shuffle on `class`, `package`, `global` and `none` levels. For the complete details please see its [documentation](https://github.com/jbasko/pytest-random-order). Another randomization alternative is: [`pytest-randomly`](https://github.com/pytest-dev/pytest-randomly). This module has a very similar functionality/interface, but it doesn't have the bucket modes available in `pytest-random-order`. It has the same problem of imposing itself once installed. 
### Look and feel variations #### pytest-sugar [pytest-sugar](https://github.com/Frozenball/pytest-sugar) is a plugin that improves the look-n-feel, adds a progressbar, and show tests that fail and the assert instantly. It gets activated automatically upon installation. ```bash pip install pytest-sugar To run tests without it, run: ```bash pytest -p no:sugar or uninstall it. #### Report each sub-test name and its progress For a single or a group of tests via `pytest` (after `pip install pytest-pspec`): ```bash pytest --pspec tests/test_optimization.py #### Instantly shows failed tests [pytest-instafail](https://github.com/pytest-dev/pytest-instafail) shows failures and errors instantly instead of waiting until the end of test session. ```bash pip install pytest-instafail ```bash pytest --instafail ### To GPU or not to GPU On a GPU-enabled setup, to test in CPU-only mode add `CUDA_VISIBLE_DEVICES=""""`: ```bash CUDA_VISIBLE_DEVICES="""" pytest tests/utils/test_logging.py or if you have multiple gpus, you can specify which one is to be used by `pytest`. For example, to use only the second gpu if you have gpus `0` and `1`, you can run: ```bash CUDA_VISIBLE_DEVICES=""1"" pytest tests/utils/test_logging.py This is handy when you want to run different tasks on different GPUs. Some tests must be run on CPU-only, others on either CPU or GPU or TPU, yet others on multiple-GPUs. The following skip decorators are used to set the requirements of tests CPU/GPU/TPU-wise: - `require_torch` - this test will run only under torch - `require_torch_gpu` - as `require_torch` plus requires at least 1 GPU - `require_torch_multi_gpu` - as `require_torch` plus requires at least 2 GPUs - `require_torch_non_multi_gpu` - as `require_torch` plus requires 0 or 1 GPUs - `require_torch_up_to_2_gpus` - as `require_torch` plus requires 0 or 1 or 2 GPUs - `require_torch_tpu` - as `require_torch` plus requires at least 1 TPU Let's depict the GPU requirements in the following table: | n gpus | decorator | |--------+--------------------------------| | `>= 0` | `@require_torch` | | `>= 1` | `@require_torch_gpu` | | `>= 2` | `@require_torch_multi_gpu` | | `< 2` | `@require_torch_non_multi_gpu` | | `< 3` | `@require_torch_up_to_2_gpus` | For example, here is a test that must be run only when there are 2 or more GPUs available and pytorch is installed: thon no-style @require_torch_multi_gpu def test_example_with_multi_gpu(): If a test requires `tensorflow` use the `require_tf` decorator. For example: thon no-style @require_tf def test_tf_thing_with_tensorflow(): These decorators can be stacked. For example, if a test is slow and requires at least one GPU under pytorch, here is how to set it up: thon no-style @require_torch_gpu @slow def test_example_slow_on_gpu(): Some decorators like `@parametrized` rewrite test names, therefore `@require_*` skip decorators have to be listed last for them to work correctly. Here is an example of the correct usage: thon no-style @parameterized.expand() @require_torch_multi_gpu def test_integration_foo(): This order problem doesn't exist with `@pytest.mark.parametrize`, you can put it first or last and it will still work. But it only works with non-unittests. Inside tests: - How many GPUs are available: thon from transformers.testing_utils import get_gpu_count n_gpu = get_gpu_count() # works with torch and tf ### Testing with a specific PyTorch backend or device To run the test suite on a specific torch device add `TRANSFORMERS_TEST_DEVICE=""$device""` where `$device` is the target backend. 
For example, to test on CPU only: ```bash TRANSFORMERS_TEST_DEVICE=""cpu"" pytest tests/utils/test_logging.py This variable is useful for testing custom or less common PyTorch backends such as `mps`. It can also be used to achieve the same effect as `CUDA_VISIBLE_DEVICES` by targeting specific GPUs or testing in CPU-only mode. Certain devices will require an additional import after importing `torch` for the first time. This can be specified using the environment variable `TRANSFORMERS_TEST_BACKEND`: ```bash TRANSFORMERS_TEST_BACKEND=""torch_npu"" pytest tests/utils/test_logging.py Alternative backends may also require the replacement of device-specific functions. For example `torch.cuda.manual_seed` may need to be replaced with a device-specific seed setter like `torch.npu.manual_seed` to correctly set a random seed on the device. To specify a new backend with backend-specific device functions when running the test suite, create a Python device specification file in the format: import torch import torch_npu # !! Further additional imports can be added here !! # Specify the device name (eg. 'cuda', 'cpu', 'npu') DEVICE_NAME = 'npu' # Specify device-specific backends to dispatch to. # If not specified, will fallback to 'default' in 'testing_utils.py` MANUAL_SEED_FN = torch.npu.manual_seed EMPTY_CACHE_FN = torch.npu.empty_cache DEVICE_COUNT_FN = torch.npu.device_count This format also allows for specification of any additional imports required. To use this file to replace equivalent methods in the test suite, set the environment variable `TRANSFORMERS_TEST_DEVICE_SPEC` to the path of the spec file. Currently, only `MANUAL_SEED_FN`, `EMPTY_CACHE_FN` and `DEVICE_COUNT_FN` are supported for device-specific dispatch. ### Distributed training `pytest` can't deal with distributed training directly. If this is attempted - the sub-processes don't do the right thing and end up thinking they are `pytest` and start running the test suite in loops. It works, however, if one spawns a normal process that then spawns off multiple workers and manages the IO pipes. Here are some tests that use it: - [test_trainer_distributed.py](https://github.com/huggingface/transformers/tree/main/tests/trainer/test_trainer_distributed.py) - [test_deepspeed.py](https://github.com/huggingface/transformers/tree/main/tests/deepspeed/test_deepspeed.py) To jump right into the execution point, search for the `execute_subprocess_async` call in those tests. You will need at least 2 GPUs to see these tests in action: ```bash CUDA_VISIBLE_DEVICES=0,1 RUN_SLOW=1 pytest -sv tests/test_trainer_distributed.py ### Output capture During test execution any output sent to `stdout` and `stderr` is captured. If a test or a setup method fails, its according captured output will usually be shown along with the failure traceback. To disable output capturing and to get the `stdout` and `stderr` normally, use `-s` or `--capture=no`: ```bash pytest -s tests/utils/test_logging.py To send test results to JUnit format output: ```bash py.test tests --junitxml=result.xml ### Color control To have no color (e.g., yellow on white background is not readable): ```bash pytest --color=no tests/utils/test_logging.py ### Sending test report to online pastebin service Creating a URL for each test failure: ```bash pytest --pastebin=failed tests/utils/test_logging.py This will submit test run information to a remote Paste service and provide a URL for each failure. You may select tests as usual or add for example -x if you only want to send one particular failure. 
Creating a URL for a whole test session log: ```bash pytest --pastebin=all tests/utils/test_logging.py ## Writing tests 🤗 transformers tests are based on `unittest`, but run by `pytest`, so most of the time features from both systems can be used. You can read [here](https://docs.pytest.org/en/stable/unittest.html) which features are supported, but the important thing to remember is that most `pytest` fixtures don't work. Neither parametrization, but we use the module `parameterized` that works in a similar way. ### Parametrization Often, there is a need to run the same test multiple times, but with different arguments. It could be done from within the test, but then there is no way of running that test for just one set of arguments. thon # test_this1.py import unittest from parameterized import parameterized class TestMathUnitTest(unittest.TestCase): @parameterized.expand( [ (""negative"", -1.5, -2.0), (""integer"", 1, 1.0), (""large fraction"", 1.6, 1), ] ) def test_floor(self, name, input, expected): assert_equal(math.floor(input), expected) Now, by default this test will be run 3 times, each time with the last 3 arguments of `test_floor` being assigned the corresponding arguments in the parameter list. and you could run just the `negative` and `integer` sets of params with: ```bash pytest -k ""negative and integer"" tests/test_mytest.py or all but `negative` sub-tests, with: ```bash pytest -k ""not negative"" tests/test_mytest.py Besides using the `-k` filter that was just mentioned, you can find out the exact name of each sub-test and run any or all of them using their exact names. ```bash pytest test_this1.py --collect-only -q and it will list: ```bash test_this1.py::TestMathUnitTest::test_floor_0_negative test_this1.py::TestMathUnitTest::test_floor_1_integer test_this1.py::TestMathUnitTest::test_floor_2_large_fraction So now you can run just 2 specific sub-tests: ```bash pytest test_this1.py::TestMathUnitTest::test_floor_0_negative test_this1.py::TestMathUnitTest::test_floor_1_integer The module [parameterized](https://pypi.org/project/parameterized/) which is already in the developer dependencies of `transformers` works for both: `unittests` and `pytest` tests. If, however, the test is not a `unittest`, you may use `pytest.mark.parametrize` (or you may see it being used in some existing tests, mostly under `examples`). Here is the same example, this time using `pytest`'s `parametrize` marker: thon # test_this2.py import pytest @pytest.mark.parametrize( ""name, input, expected"", [ (""negative"", -1.5, -2.0), (""integer"", 1, 1.0), (""large fraction"", 1.6, 1), ], ) def test_floor(name, input, expected): assert_equal(math.floor(input), expected) Same as with `parameterized`, with `pytest.mark.parametrize` you can have a fine control over which sub-tests are run, if the `-k` filter doesn't do the job. Except, this parametrization function creates a slightly different set of names for the sub-tests. Here is what they look like: ```bash pytest test_this2.py --collect-only -q and it will list: ```bash test_this2.py::test_floor[integer-1-1.0] test_this2.py::test_floor[negative--1.5--2.0] test_this2.py::test_floor[large fraction-1.6-1] So now you can run just the specific test: ```bash pytest test_this2.py::test_floor[negative--1.5--2.0] test_this2.py::test_floor[integer-1-1.0] as in the previous example. 
### Files and directories In tests often we need to know where things are relative to the current test file, and it's not trivial since the test could be invoked from more than one directory or could reside in sub-directories with different depths. A helper class `transformers.test_utils.TestCasePlus` solves this problem by sorting out all the basic paths and provides easy accessors to them: - `pathlib` objects (all fully resolved): - `test_file_path` - the current test file path, i.e. `__file__` - `test_file_dir` - the directory containing the current test file - `tests_dir` - the directory of the `tests` test suite - `examples_dir` - the directory of the `examples` test suite - `repo_root_dir` - the directory of the repository - `src_dir` - the directory of `src` (i.e. where the `transformers` sub-dir resides) - stringified paths---same as above but these return paths as strings, rather than `pathlib` objects: - `test_file_path_str` - `test_file_dir_str` - `tests_dir_str` - `examples_dir_str` - `repo_root_dir_str` - `src_dir_str` To start using those all you need is to make sure that the test resides in a subclass of `transformers.test_utils.TestCasePlus`. For example: thon from transformers.testing_utils import TestCasePlus class PathExampleTest(TestCasePlus): def test_something_involving_local_locations(self): data_dir = self.tests_dir / ""fixtures/tests_samples/wmt_en_ro"" If you don't need to manipulate paths via `pathlib` or you just need a path as a string, you can always invoked `str()` on the `pathlib` object or use the accessors ending with `_str`. For example: thon from transformers.testing_utils import TestCasePlus class PathExampleTest(TestCasePlus): def test_something_involving_stringified_locations(self): examples_dir = self.examples_dir_str ### Temporary files and directories Using unique temporary files and directories are essential for parallel test running, so that the tests won't overwrite each other's data. Also we want to get the temporary files and directories removed at the end of each test that created them. Therefore, using packages like `tempfile`, which address these needs is essential. However, when debugging tests, you need to be able to see what goes into the temporary file or directory and you want to know it's exact path and not having it randomized on every test re-run. A helper class `transformers.test_utils.TestCasePlus` is best used for such purposes. It's a sub-class of `unittest.TestCase`, so we can easily inherit from it in the test modules. Here is an example of its usage: thon from transformers.testing_utils import TestCasePlus class ExamplesTests(TestCasePlus): def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir() This code creates a unique temporary directory, and sets `tmp_dir` to its location. - Create a unique temporary dir: thon def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir() `tmp_dir` will contain the path to the created temporary dir. It will be automatically removed at the end of the test. - Create a temporary dir of my choice, ensure it's empty before the test starts and don't empty it after the test. thon def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir(""./xxx"") This is useful for debug when you want to monitor a specific directory and want to make sure the previous tests didn't leave any data in there. 
- You can override the default behavior by directly overriding the `before` and `after` args, leading to one of the following behaviors: - `before=True`: the temporary dir will always be cleared at the beginning of the test. - `before=False`: if the temporary dir already existed, any existing files will remain there. - `after=True`: the temporary dir will always be deleted at the end of the test. - `after=False`: the temporary dir will always be left intact at the end of the test. In order to run the equivalent of `rm -r` safely, only subdirs of the project repository checkout are allowed if an explicit `tmp_dir` is used, so that by mistake no `/tmp` or similar important part of the filesystem will get nuked. i.e. please always pass paths that start with `./`. Each test can register multiple temporary directories and they all will get auto-removed, unless requested otherwise. ### Temporary sys.path override If you need to temporary override `sys.path` to import from another test for example, you can use the `ExtendSysPath` context manager. Example: thon import os from transformers.testing_utils import ExtendSysPath bindir = os.path.abspath(os.path.dirname(__file__)) with ExtendSysPath(f""{bindir}/..""): from test_trainer import TrainerIntegrationCommon # noqa ### Skipping tests This is useful when a bug is found and a new test is written, yet the bug is not fixed yet. In order to be able to commit it to the main repository we need make sure it's skipped during `make test`. Methods: - A **skip** means that you expect your test to pass only if some conditions are met, otherwise pytest should skip running the test altogether. Common examples are skipping windows-only tests on non-windows platforms, or skipping tests that depend on an external resource which is not available at the moment (for example a database). - A **xfail** means that you expect a test to fail for some reason. A common example is a test for a feature not yet implemented, or a bug not yet fixed. When a test passes despite being expected to fail (marked with pytest.mark.xfail), it’s an xpass and will be reported in the test summary. One of the important differences between the two is that `skip` doesn't run the test, and `xfail` does. So if the code that's buggy causes some bad state that will affect other tests, do not use `xfail`. 
#### Implementation - Here is how to skip whole test unconditionally: thon no-style @unittest.skip(""this bug needs to be fixed"") def test_feature_x(): or via pytest: thon no-style @pytest.mark.skip(reason=""this bug needs to be fixed"") or the `xfail` way: thon no-style @pytest.mark.xfail def test_feature_x(): Here's how to skip a test based on internal checks within the test: thon def test_feature_x(): if not has_something(): pytest.skip(""unsupported configuration"") or the whole module: thon import pytest if not pytest.config.getoption(""--custom-flag""): pytest.skip(""--custom-flag is missing, skipping tests"", allow_module_level=True) or the `xfail` way: thon def test_feature_x(): pytest.xfail(""expected to fail until bug XYZ is fixed"") - Here is how to skip all tests in a module if some import is missing: thon docutils = pytest.importorskip(""docutils"", minversion=""0.3"") - Skip a test based on a condition: thon no-style @pytest.mark.skipif(sys.version_info < (3,6), reason=""requires python3.6 or higher"") def test_feature_x(): or: thon no-style @unittest.skipIf(torch_device == ""cpu"", ""Can't do half precision"") def test_feature_x(): or skip the whole module: thon no-style @pytest.mark.skipif(sys.platform == 'win32', reason=""does not run on windows"") class TestClass(): def test_feature_x(self): More details, example and ways are [here](https://docs.pytest.org/en/latest/skipping.html). ### Slow tests The library of tests is ever-growing, and some of the tests take minutes to run, therefore we can't afford waiting for an hour for the test suite to complete on CI. Therefore, with some exceptions for essential tests, slow tests should be marked as in the example below: thon no-style from transformers.testing_utils import slow @slow def test_integration_foo(): Once a test is marked as `@slow`, to run such tests set `RUN_SLOW=1` env var, e.g.: ```bash RUN_SLOW=1 pytest tests Some decorators like `@parameterized` rewrite test names, therefore `@slow` and the rest of the skip decorators `@require_*` have to be listed last for them to work correctly. Here is an example of the correct usage: thon no-style @parameteriz ed.expand() @slow def test_integration_foo(): As explained at the beginning of this document, slow tests get to run on a scheduled basis, rather than in PRs CI checks. So it's possible that some problems will be missed during a PR submission and get merged. Such problems will get caught during the next scheduled CI job. But it also means that it's important to run the slow tests on your machine before submitting the PR. Here is a rough decision making mechanism for choosing which tests should be marked as slow: If the test is focused on one of the library's internal components (e.g., modeling files, tokenization files, pipelines), then we should run that test in the non-slow test suite. If it's focused on an other aspect of the library, such as the documentation or the examples, then we should run these tests in the slow test suite. And then, to refine this approach we should have exceptions: - All tests that need to download a heavy set of weights or a dataset that is larger than ~50MB (e.g., model or tokenizer integration tests, pipeline integration tests) should be set to slow. If you're adding a new model, you should create and upload to the hub a tiny version of it (with random weights) for integration tests. This is discussed in the following paragraphs. - All tests that need to do a training not specifically optimized to be fast should be set to slow. 
- We can introduce exceptions if some of these should-be-non-slow tests are excruciatingly slow, and set them to `@slow`. Auto-modeling tests, which save and load large files to disk, are a good example of tests that are marked as `@slow`. - If a test completes under 1 second on CI (including downloads if any) then it should be a normal test regardless. Collectively, all the non-slow tests need to cover entirely the different internals, while remaining fast. For example, a significant coverage can be achieved by testing with specially created tiny models with random weights. Such models have the very minimal number of layers (e.g., 2), vocab size (e.g., 1000), etc. Then the `@slow` tests can use large slow models to do qualitative testing. To see the use of these simply look for *tiny* models with: ```bash grep tiny tests examples Here is a an example of a [script](https://github.com/huggingface/transformers/tree/main/scripts/fsmt/fsmt-make-tiny-model.py) that created the tiny model [stas/tiny-wmt19-en-de](https://huggingface.co/stas/tiny-wmt19-en-de). You can easily adjust it to your specific model's architecture. It's easy to measure the run-time incorrectly if for example there is an overheard of downloading a huge model, but if you test it locally the downloaded files would be cached and thus the download time not measured. Hence check the execution speed report in CI logs instead (the output of `pytest --durations=0 tests`). That report is also useful to find slow outliers that aren't marked as such, or which need to be re-written to be fast. If you notice that the test suite starts getting slow on CI, the top listing of this report will show the slowest tests. ### Testing the stdout/stderr output In order to test functions that write to `stdout` and/or `stderr`, the test can access those streams using the `pytest`'s [capsys system](https://docs.pytest.org/en/latest/capture.html). Here is how this is accomplished: thon import sys def print_to_stdout(s): print(s) def print_to_stderr(s): sys.stderr.write(s) def test_result_and_stdout(capsys): msg = ""Hello"" print_to_stdout(msg) print_to_stderr(msg) out, err = capsys.readouterr() # consume the captured output streams # optional: if you want to replay the consumed streams: sys.stdout.write(out) sys.stderr.write(err) # test: assert msg in out assert msg in err And, of course, most of the time, `stderr` will come as a part of an exception, so try/except has to be used in such a case: thon def raise_exception(msg): raise ValueError(msg) def test_something_exception(): msg = ""Not a good value"" error = """" try: raise_exception(msg) except Exception as e: error = str(e) assert msg in error, f""{msg} is in the exception:\n{error}"" Another approach to capturing stdout is via `contextlib.redirect_stdout`: thon from io import StringIO from contextlib import redirect_stdout def print_to_stdout(s): print(s) def test_result_and_stdout(): msg = ""Hello"" buffer = StringIO() with redirect_stdout(buffer): print_to_stdout(msg) out = buffer.getvalue() # optional: if you want to replay the consumed streams: sys.stdout.write(out) # test: assert msg in out An important potential issue with capturing stdout is that it may contain `\r` characters that in normal `print` reset everything that has been printed so far. 
There is no problem with `pytest`, but with `pytest -s` these characters get included in the buffer, so to be able to have the test run with and without `-s`, you have to make an extra cleanup to the captured output, using `re.sub(r'~.*\r', '', buf, 0, re.M)`. But, then we have a helper context manager wrapper to automatically take care of it all, regardless of whether it has some `\r`'s in it or not, so it's a simple: thon from transformers.testing_utils import CaptureStdout with CaptureStdout() as cs: function_that_writes_to_stdout() print(cs.out) Here is a full test example: thon from transformers.testing_utils import CaptureStdout msg = ""Secret message\r"" final = ""Hello World"" with CaptureStdout() as cs: print(msg + final) assert cs.out == final + ""\n"", f""captured: {cs.out}, expecting {final}"" If you'd like to capture `stderr` use the `CaptureStderr` class instead: thon from transformers.testing_utils import CaptureStderr with CaptureStderr() as cs: function_that_writes_to_stderr() print(cs.err) If you need to capture both streams at once, use the parent `CaptureStd` class: thon from transformers.testing_utils import CaptureStd with CaptureStd() as cs: function_that_writes_to_stdout_and_stderr() print(cs.err, cs.out) Also, to aid debugging test issues, by default these context managers automatically replay the captured streams on exit from the context. ### Capturing logger stream If you need to validate the output of a logger, you can use `CaptureLogger`: thon from transformers import logging from transformers.testing_utils import CaptureLogger msg = ""Testing 1, 2, 3"" logging.set_verbosity_info() logger = logging.get_logger(""transformers.models.bart.tokenization_bart"") with CaptureLogger(logger) as cl: logger.info(msg) assert cl.out, msg + ""\n"" ### Testing with environment variables If you want to test the impact of environment variables for a specific test you can use a helper decorator `transformers.testing_utils.mockenv` thon from transformers.testing_utils import mockenv class HfArgumentParserTest(unittest.TestCase): @mockenv(TRANSFORMERS_VERBOSITY=""error"") def test_env_override(self): env_level_str = os.getenv(""TRANSFORMERS_VERBOSITY"", None) At times an external program needs to be called, which requires setting `PYTHONPATH` in `os.environ` to include multiple local paths. A helper class `transformers.test_utils.TestCasePlus` comes to help: thon from transformers.testing_utils import TestCasePlus class EnvExampleTest(TestCasePlus): def test_external_prog(self): env = self.get_env() # now call the external program, passing `env` to it Depending on whether the test file was under the `tests` test suite or `examples` it'll correctly set up `env[PYTHONPATH]` to include one of these two directories, and also the `src` directory to ensure the testing is done against the current repo, and finally with whatever `env[PYTHONPATH]` was already set to before the test was called if anything. This helper method creates a copy of the `os.environ` object, so the original remains intact. ### Getting reproducible results In some situations you may want to remove randomness for your tests. 
To get identical reproducible results set, you will need to fix the seed: thon seed = 42 # python RNG import random random.seed(seed) # pytorch RNGs import torch torch.manual_seed(seed) torch.backends.cudnn.deterministic = True if torch.cuda.is_available(): torch.cuda.manual_seed_all(seed) # numpy RNG import numpy as np np.random.seed(seed) # tf RNG tf.random.set_seed(seed) ### Debugging tests To start a debugger at the point of the warning, do this: ```bash pytest tests/utils/test_logging.py -W error::UserWarning --pdb ## Working with github actions workflows To trigger a self-push workflow CI job, you must: 1. Create a new branch on `transformers` origin (not a fork!). 2. The branch name has to start with either `ci_` or `ci-` (`main` triggers it too, but we can't do PRs on `main`). It also gets triggered only for specific paths - you can find the up-to-date definition in case it changed since this document has been written [here](https://github.com/huggingface/transformers/blob/main/.github/workflows/self-push.yml) under *push:* 3. Create a PR from this branch. 4. Then you can see the job appear [here](https://github.com/huggingface/transformers/actions/workflows/self-push.yml). It may not run right away if there is a backlog. ## Testing Experimental CI Features Testing CI features can be potentially problematic as it can interfere with the normal CI functioning. Therefore if a new CI feature is to be added, it should be done as following. 1. Create a new dedicated job that tests what needs to be tested 2. The new job must always succeed so that it gives us a green ✓ (details below). 3. Let it run for some days to see that a variety of different PR types get to run on it (user fork branches, non-forked branches, branches originating from github.com UI direct file edit, various forced pushes, etc. - there are so many) while monitoring the experimental job's logs (not the overall job green as it's purposefully always green) 4. When it's clear that everything is solid, then merge the new changes into existing jobs. That way experiments on CI functionality itself won't interfere with the normal workflow. Now how can we make the job always succeed while the new CI feature is being developed? Some CIs, like TravisCI support ignore-step-failure and will report the overall job as successful, but CircleCI and Github Actions as of this writing don't support that. So the following workaround can be used: 1. `set +euo pipefail` at the beginning of the run command to suppress most potential failures in the bash script. 2. the last command must be a success: `echo ""done""` or just `true` will do Here is an example: ```yaml - run: name: run CI experiment command: | set +euo pipefail echo ""setting run-all-despite-any-errors-mode"" this_command_will_fail echo ""but bash continues to run"" # emulate another failure false # but the last command must be a success echo ""during experiment do not remove: reporting success to CI, even if there were failures"" For simple commands you could also do: ```bash cmd_that_may_fail || true Of course, once satisfied with the results, integrate the experimental step or job with the rest of the normal jobs, while removing `set +euo pipefail` or any other things you may have added to ensure that the experimental job doesn't interfere with the normal CI functioning. This whole process would have been much easier if we only could set something like `allow-failure` for the experimental step, and let it fail without impacting the overall status of PRs. 
But as mentioned earlier CircleCI and Github Actions don't support it at the moment. You can vote for this feature and see where it is at these CI-specific threads: - [Github Actions:](https://github.com/actions/toolkit/issues/399) - [CircleCI:](https://ideas.circleci.com/ideas/CCI-I-344) " custom_models.md," # Sharing custom models The 🤗 Transformers library is designed to be easily extensible. Every model is fully coded in a given subfolder of the repository with no abstraction, so you can easily copy a modeling file and tweak it to your needs. If you are writing a brand new model, it might be easier to start from scratch. In this tutorial, we will show you how to write a custom model and its configuration so it can be used inside Transformers, and how you can share it with the community (with the code it relies on) so that anyone can use it, even if it's not present in the 🤗 Transformers library. We will illustrate all of this on a ResNet model, by wrapping the ResNet class of the [timm library](https://github.com/rwightman/pytorch-image-models) into a [`PreTrainedModel`]. ## Writing a custom configuration Before we dive into the model, let's first write its configuration. The configuration of a model is an object that will contain all the necessary information to build the model. As we will see in the next section, the model can only take a `config` to be initialized, so we really need that object to be as complete as possible. In our example, we will take a couple of arguments of the ResNet class that we might want to tweak. Different configurations will then give us the different types of ResNets that are possible. We then just store those arguments, after checking the validity of a few of them. thon from transformers import PretrainedConfig from typing import List class ResnetConfig(PretrainedConfig): model_type = ""resnet"" def __init__( self, block_type=""bottleneck"", layers: List[int] = [3, 4, 6, 3], num_classes: int = 1000, input_channels: int = 3, cardinality: int = 1, base_width: int = 64, stem_width: int = 64, stem_type: str = """", avg_down: bool = False, **kwargs, ): if block_type not in [""basic"", ""bottleneck""]: raise ValueError(f""`block_type` must be 'basic' or bottleneck', got {block_type}."") if stem_type not in ["""", ""deep"", ""deep-tiered""]: raise ValueError(f""`stem_type` must be '', 'deep' or 'deep-tiered', got {stem_type}."") self.block_type = block_type self.layers = layers self.num_classes = num_classes self.input_channels = input_channels self.cardinality = cardinality self.base_width = base_width self.stem_width = stem_width self.stem_type = stem_type self.avg_down = avg_down super().__init__(**kwargs) The three important things to remember when writing you own configuration are the following: - you have to inherit from `PretrainedConfig`, - the `__init__` of your `PretrainedConfig` must accept any kwargs, - those `kwargs` need to be passed to the superclass `__init__`. The inheritance is to make sure you get all the functionality from the 🤗 Transformers library, while the two other constraints come from the fact a `PretrainedConfig` has more fields than the ones you are setting. When reloading a config with the `from_pretrained` method, those fields need to be accepted by your config and then sent to the superclass. Defining a `model_type` for your configuration (here `model_type=""resnet""`) is not mandatory, unless you want to register your model with the auto classes (see last section). 
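Before moving on, here is a small sanity check you could run on the configuration above (a sketch, not part of the original tutorial; it only assumes the `ResnetConfig` class defined earlier is available in your session):

```python
# Instantiate the configuration and inspect a few helpers inherited from PretrainedConfig.
config = ResnetConfig(block_type='basic', num_classes=10)
print(config.model_type)        # 'resnet'
print(config.to_json_string())  # serialization comes for free from the superclass

# Values are validated in __init__, so typos surface immediately.
try:
    ResnetConfig(block_type='wide')
except ValueError as err:
    print(err)
```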
With this done, you can easily create and save your configuration like you would do with any other model config of the library. Here is how we can create a resnet50d config and save it: resnet50d_config = ResnetConfig(block_type=""bottleneck"", stem_width=32, stem_type=""deep"", avg_down=True) resnet50d_config.save_pretrained(""custom-resnet"") This will save a file named `config.json` inside the folder `custom-resnet`. You can then reload your config with the `from_pretrained` method: resnet50d_config = ResnetConfig.from_pretrained(""custom-resnet"") You can also use any other method of the [`PretrainedConfig`] class, like [`~PretrainedConfig.push_to_hub`] to directly upload your config to the Hub. ## Writing a custom model Now that we have our ResNet configuration, we can go on writing the model. We will actually write two: one that extracts the hidden features from a batch of images (like [`BertModel`]) and one that is suitable for image classification (like [`BertForSequenceClassification`]). As we mentioned before, we'll only write a loose wrapper of the model to keep it simple for this example. The only thing we need to do before writing this class is to define a map between the block types and the actual block classes. Then the model is defined from the configuration by passing everything to the `ResNet` class: from transformers import PreTrainedModel from timm.models.resnet import BasicBlock, Bottleneck, ResNet from .configuration_resnet import ResnetConfig BLOCK_MAPPING = {""basic"": BasicBlock, ""bottleneck"": Bottleneck} class ResnetModel(PreTrainedModel): config_class = ResnetConfig def __init__(self, config): super().__init__(config) block_layer = BLOCK_MAPPING[config.block_type] self.model = ResNet( block_layer, config.layers, num_classes=config.num_classes, in_chans=config.input_channels, cardinality=config.cardinality, base_width=config.base_width, stem_width=config.stem_width, stem_type=config.stem_type, avg_down=config.avg_down, ) def forward(self, tensor): return self.model.forward_features(tensor) For the model that will classify images, we just change the forward method: import torch class ResnetModelForImageClassification(PreTrainedModel): config_class = ResnetConfig def __init__(self, config): super().__init__(config) block_layer = BLOCK_MAPPING[config.block_type] self.model = ResNet( block_layer, config.layers, num_classes=config.num_classes, in_chans=config.input_channels, cardinality=config.cardinality, base_width=config.base_width, stem_width=config.stem_width, stem_type=config.stem_type, avg_down=config.avg_down, ) def forward(self, tensor, labels=None): logits = self.model(tensor) if labels is not None: loss = torch.nn.functional.cross_entropy(logits, labels) return {""loss"": loss, ""logits"": logits} return {""logits"": logits} In both cases, notice how we inherit from `PreTrainedModel` and call the superclass initialization with the `config` (a bit like when you write a regular `torch.nn.Module`). The line that sets the `config_class` is not mandatory, unless you want to register your model with the auto classes (see last section). If your model is very similar to a model inside the library, you can re-use the same configuration as this model. You can have your model return anything you want, but returning a dictionary like we did for `ResnetModelForImageClassification`, with the loss included when labels are passed, will make your model directly usable inside the [`Trainer`] class.
Using another output format is fine as long as you are planning on using your own training loop or another library for training. Now that we have our model class, let's create one: resnet50d = ResnetModelForImageClassification(resnet50d_config) Again, you can use any of the methods of [`PreTrainedModel`], like [`~PreTrainedModel.save_pretrained`] or [`~PreTrainedModel.push_to_hub`]. We will use the second in the next section, and see how to push the model weights with the code of our model. But first, let's load some pretrained weights inside our model. In your own use case, you will probably be training your custom model on your own data. To go fast for this tutorial, we will use the pretrained version of the resnet50d. Since our model is just a wrapper around it, it's going to be easy to transfer those weights: import timm pretrained_model = timm.create_model(""resnet50d"", pretrained=True) resnet50d.model.load_state_dict(pretrained_model.state_dict()) Now let's see how to make sure that when we do [`~PreTrainedModel.save_pretrained`] or [`~PreTrainedModel.push_to_hub`], the code of the model is saved. ## Sending the code to the Hub This API is experimental and may have some slight breaking changes in the next releases. First, make sure your model is fully defined in a `.py` file. It can rely on relative imports to some other files as long as all the files are in the same directory (we don't support submodules for this feature yet). For our example, we'll define a `modeling_resnet.py` file and a `configuration_resnet.py` file in a folder of the current working directory named `resnet_model`. The configuration file contains the code for `ResnetConfig` and the modeling file contains the code of `ResnetModel` and `ResnetModelForImageClassification`. . └── resnet_model ├── __init__.py ├── configuration_resnet.py └── modeling_resnet.py The `__init__.py` can be empty, it's just there so that Python detects `resnet_model` can be used as a module. If copying a modeling file from the library, you will need to replace all the relative imports at the top of the file to import from the `transformers` package. Note that you can re-use (or subclass) an existing configuration/model. To share your model with the community, follow these steps: first import the ResNet model and config from the newly created files: from resnet_model.configuration_resnet import ResnetConfig from resnet_model.modeling_resnet import ResnetModel, ResnetModelForImageClassification Then you have to tell the library you want to copy the code files of those objects when using the `save_pretrained` method and properly register them with a given Auto class (especially for models). To do so, just run: ResnetConfig.register_for_auto_class() ResnetModel.register_for_auto_class(""AutoModel"") ResnetModelForImageClassification.register_for_auto_class(""AutoModelForImageClassification"") Note that there is no need to specify an auto class for the configuration (there is only one auto class for them, [`AutoConfig`]) but it's different for models. Your custom model could be suitable for many different tasks, so you have to specify which one of the auto classes is the correct one for your model. Use `register_for_auto_class()` if you want the code files to be copied. If you instead prefer to use code on the Hub from another repo, you don't need to call it.
In cases where there's more than one auto class, you can modify the `config.json` directly using the following structure: ""auto_map"": { ""AutoConfig"": ""--"", ""AutoModel"": ""--"", ""AutoModelFor"": ""--"", }, Next, let's create the config and models as we did before: resnet50d_config = ResnetConfig(block_type=""bottleneck"", stem_width=32, stem_type=""deep"", avg_down=True) resnet50d = ResnetModelForImageClassification(resnet50d_config) pretrained_model = timm.create_model(""resnet50d"", pretrained=True) resnet50d.model.load_state_dict(pretrained_model.state_dict()) Now to send the model to the Hub, make sure you are logged in. Either run in your terminal: ```bash huggingface-cli login or from a notebook: from huggingface_hub import notebook_login notebook_login() You can then push to your own namespace (or an organization you are a member of) like this: resnet50d.push_to_hub(""custom-resnet50d"") On top of the modeling weights and the configuration in json format, this also copied the modeling and configuration `.py` files in the folder `custom-resnet50d` and uploaded the result to the Hub. You can check the result in this [model repo](https://huggingface.co/sgugger/custom-resnet50d). See the [sharing tutorial](model_sharing) for more information on the push to Hub method. ## Using a model with custom code You can use any configuration, model or tokenizer with custom code files in its repository with the auto-classes and the `from_pretrained` method. All files and code uploaded to the Hub are scanned for malware (refer to the [Hub security](https://huggingface.co/docs/hub/security#malware-scanning) documentation for more information), but you should still review the model code and author to avoid executing malicious code on your machine. Set `trust_remote_code=True` to use a model with custom code: from transformers import AutoModelForImageClassification model = AutoModelForImageClassification.from_pretrained(""sgugger/custom-resnet50d"", trust_remote_code=True) It is also strongly encouraged to pass a commit hash as a `revision` to make sure the author of the models did not update the code with some malicious new lines (unless you fully trust the authors of the models). commit_hash = ""ed94a7c6247d8aedce4647f00f20de6875b5b292"" model = AutoModelForImageClassification.from_pretrained( ""sgugger/custom-resnet50d"", trust_remote_code=True, revision=commit_hash ) Note that when browsing the commit history of the model repo on the Hub, there is a button to easily copy the commit hash of any commit. ## Registering a model with custom code to the auto classes If you are writing a library that extends 🤗 Transformers, you may want to extend the auto classes to include your own model. This is different from pushing the code to the Hub in the sense that users will need to import your library to get the custom models (contrarily to automatically downloading the model code from the Hub). 
As long as your config has a `model_type` attribute that is different from existing model types, and that your model classes have the right `config_class` attributes, you can just add them to the auto classes like this: from transformers import AutoConfig, AutoModel, AutoModelForImageClassification AutoConfig.register(""resnet"", ResnetConfig) AutoModel.register(ResnetConfig, ResnetModel) AutoModelForImageClassification.register(ResnetConfig, ResnetModelForImageClassification) Note that the first argument used when registering your custom config to [`AutoConfig`] needs to match the `model_type` of your custom config, and the first argument used when registering your custom models to any auto model class needs to match the `config_class` of those models. " big_models.md," # Instantiating a big model When you want to use a very big pretrained model, one challenge is to minimize the use of the RAM. The usual workflow from PyTorch is: 1. Create your model with random weights. 2. Load your pretrained weights. 3. Put those pretrained weights in your random model. Step 1 and 2 both require a full version of the model in memory, which is not a problem in most cases, but if your model starts weighing several GigaBytes, those two copies can make you get out of RAM. Even worse, if you are using `torch.distributed` to launch a distributed training, each process will load the pretrained model and store these two copies in RAM. Note that the randomly created model is initialized with ""empty"" tensors, which take the space in memory without filling it (thus the random values are whatever was in this chunk of memory at a given time). The random initialization following the appropriate distribution for the kind of model/parameters instantiated (like a normal distribution for instance) is only performed after step 3 on the non-initialized weights, to be as fast as possible! In this guide, we explore the solutions Transformers offer to deal with this issue. Note that this is an area of active development, so the APIs explained here may change slightly in the future. ## Sharded checkpoints Since version 4.18.0, model checkpoints that end up taking more than 10GB of space are automatically sharded in smaller pieces. In terms of having one single checkpoint when you do `model.save_pretrained(save_dir)`, you will end up with several partial checkpoints (each of which being of size < 10GB) and an index that maps parameter names to the files they are stored in. You can control the maximum size before sharding with the `max_shard_size` parameter, so for the sake of an example, we'll use a normal-size models with a small shard size: let's take a traditional BERT model. 
from transformers import AutoModel model = AutoModel.from_pretrained(""bert-base-cased"") If you save it using [`~PreTrainedModel.save_pretrained`], you will get a new folder with two files: the config of the model and its weights: >>> import os >>> import tempfile >>> with tempfile.TemporaryDirectory() as tmp_dir: model.save_pretrained(tmp_dir) print(sorted(os.listdir(tmp_dir))) ['config.json', 'pytorch_model.bin'] Now let's use a maximum shard size of 200MB: >>> with tempfile.TemporaryDirectory() as tmp_dir: model.save_pretrained(tmp_dir, max_shard_size=""200MB"") print(sorted(os.listdir(tmp_dir))) ['config.json', 'pytorch_model-00001-of-00003.bin', 'pytorch_model-00002-of-00003.bin', 'pytorch_model-00003-of-00003.bin', 'pytorch_model.bin.index.json'] On top of the configuration of the model, we see three different weights files, and an `index.json` file which is our index. A checkpoint like this can be fully reloaded using the [`~PreTrainedModel.from_pretrained`] method: >>> with tempfile.TemporaryDirectory() as tmp_dir: model.save_pretrained(tmp_dir, max_shard_size=""200MB"") new_model = AutoModel.from_pretrained(tmp_dir) The main advantage of doing this for big models is that during step 2 of the workflow shown above, each shard of the checkpoint is loaded after the previous one, capping the memory usage in RAM to the model size plus the size of the biggest shard. Behind the scenes, the index file is used to determine which keys are in the checkpoint, and where the corresponding weights are stored. We can load that index like any json and get a dictionary: >>> import json >>> with tempfile.TemporaryDirectory() as tmp_dir: model.save_pretrained(tmp_dir, max_shard_size=""200MB"") with open(os.path.join(tmp_dir, ""pytorch_model.bin.index.json""), ""r"") as f: index = json.load(f) >>> print(index.keys()) dict_keys(['metadata', 'weight_map']) The metadata just consists of the total size of the model for now. We plan to add other information in the future: >>> index[""metadata""] {'total_size': 433245184} The weights map is the main part of this index, which maps each parameter name (as usually found in a PyTorch model `state_dict`) to the file it's stored in: >>> index[""weight_map""] {'embeddings.LayerNorm.bias': 'pytorch_model-00001-of-00003.bin', 'embeddings.LayerNorm.weight': 'pytorch_model-00001-of-00003.bin', If you want to directly load such a sharded checkpoint inside a model without using [`~PreTrainedModel.from_pretrained`] (like you would do `model.load_state_dict()` for a full checkpoint) you should use [`~modeling_utils.load_sharded_checkpoint`]: >>> from transformers.modeling_utils import load_sharded_checkpoint >>> with tempfile.TemporaryDirectory() as tmp_dir: model.save_pretrained(tmp_dir, max_shard_size=""200MB"") load_sharded_checkpoint(model, tmp_dir) ## Low memory loading Sharded checkpoints reduce the memory usage during step 2 of the workflow mentioned above, but in order to use that model in a low memory setting, we recommend leveraging our tools based on the Accelerate library. Please read the following guide for more information: [Large model loading using Accelerate](./main_classes/model#large-model-loading) " attention.md," # Attention mechanisms Most transformer models use full attention in the sense that the attention matrix is square. It can be a big computational bottleneck when you have long texts. Longformer and reformer are models that try to be more efficient and use a sparse version of the attention matrix to speed up training. 
## LSH attention [Reformer](#reformer) uses LSH attention. In the softmax(QK^t), only the biggest elements (in the softmax dimension) of the matrix QK^t are going to give useful contributions. So for each query q in Q, we can consider only the keys k in K that are close to q. A hash function is used to determine if q and k are close. The attention mask is modified to mask the current token (except at the first position), because it will give a query and a key equal (so very similar to each other). Since the hash can be a bit random, several hash functions are used in practice (determined by a n_rounds parameter) and then are averaged together. ## Local attention [Longformer](#longformer) uses local attention: often, the local context (e.g., what are the two tokens to the left and right?) is enough to take action for a given token. Also, by stacking attention layers that have a small window, the last layer will have a receptive field of more than just the tokens in the window, allowing them to build a representation of the whole sentence. Some preselected input tokens are also given global attention: for those few tokens, the attention matrix can access all tokens and this process is symmetric: all other tokens have access to those specific tokens (on top of the ones in their local window). This is shown in Figure 2d of the paper, see below for a sample attention mask: Using those attention matrices with less parameters then allows the model to have inputs having a bigger sequence length. ## Other tricks ### Axial positional encodings [Reformer](#reformer) uses axial positional encodings: in traditional transformer models, the positional encoding E is a matrix of size \\(l\\) by \\(d\\), \\(l\\) being the sequence length and \\(d\\) the dimension of the hidden state. If you have very long texts, this matrix can be huge and take way too much space on the GPU. To alleviate that, axial positional encodings consist of factorizing that big matrix E in two smaller matrices E1 and E2, with dimensions \\(l_{1} \times d_{1}\\) and \\(l_{2} \times d_{2}\\), such that \\(l_{1} \times l_{2} = l\\) and \\(d_{1} + d_{2} = d\\) (with the product for the lengths, this ends up being way smaller). The embedding for time step \\(j\\) in E is obtained by concatenating the embeddings for timestep \\(j \% l1\\) in E1 and \\(j // l1\\) in E2. " pipeline_tutorial.md," # Pipelines for inference The [`pipeline`] makes it simple to use any model from the [Hub](https://huggingface.co/models) for inference on any language, computer vision, speech, and multimodal tasks. Even if you don't have experience with a specific modality or aren't familiar with the underlying code behind the models, you can still use them for inference with the [`pipeline`]! This tutorial will teach you to: * Use a [`pipeline`] for inference. * Use a specific tokenizer or model. * Use a [`pipeline`] for audio, vision, and multimodal tasks. Take a look at the [`pipeline`] documentation for a complete list of supported tasks and available parameters. ## Pipeline usage While each task has an associated [`pipeline`], it is simpler to use the general [`pipeline`] abstraction which contains all the task-specific pipelines. The [`pipeline`] automatically loads a default model and a preprocessing class capable of inference for your task. Let's take the example of using the [`pipeline`] for automatic speech recognition (ASR), or speech-to-text. 1. 
Start by creating a [`pipeline`] and specify the inference task: >>> from transformers import pipeline >>> transcriber = pipeline(task=""automatic-speech-recognition"") 2. Pass your input to the [`pipeline`]. In the case of speech recognition, this is an audio input file: >>> transcriber(""https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac"") {'text': 'I HAVE A DREAM BUT ONE DAY THIS NATION WILL RISE UP LIVE UP THE TRUE MEANING OF ITS TREES'} Not the result you had in mind? Check out some of the [most downloaded automatic speech recognition models](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&sort=trending) on the Hub to see if you can get a better transcription. Let's try the [Whisper large-v2](https://huggingface.co/openai/whisper-large) model from OpenAI. Whisper was released 2 years later than Wav2Vec2, and was trained on close to 10x more data. As such, it beats Wav2Vec2 on most downstream benchmarks. It also has the added benefit of predicting punctuation and casing, neither of which are possible with Wav2Vec2. Let's give it a try here to see how it performs: >>> transcriber = pipeline(model=""openai/whisper-large-v2"") >>> transcriber(""https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac"") {'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'} Now this result looks more accurate! For a deep-dive comparison on Wav2Vec2 vs Whisper, refer to the [Audio Transformers Course](https://huggingface.co/learn/audio-course/chapter5/asr_models). We really encourage you to check out the Hub for models in different languages, models specialized in your field, and more. You can check out and compare model results directly from your browser on the Hub to see if it fits or handles corner cases better than other ones. And if you don't find a model for your use case, you can always start [training](training) your own! If you have several inputs, you can pass your input as a list: transcriber( [ ""https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac"", ""https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac"", ] ) Pipelines are great for experimentation as switching from one model to another is trivial; however, there are some ways to optimize them for larger workloads than experimentation. See the following guides that dive into iterating over whole datasets or using pipelines in a webserver: of the docs: * [Using pipelines on a dataset](#using-pipelines-on-a-dataset) * [Using pipelines for a webserver](./pipeline_webserver) ## Parameters [`pipeline`] supports many parameters; some are task specific, and some are general to all pipelines. In general, you can specify parameters anywhere you want: transcriber = pipeline(model=""openai/whisper-large-v2"", my_parameter=1) out = transcriber() # This will use `my_parameter=1`. out = transcriber(, my_parameter=2) # This will override and use `my_parameter=2`. out = transcriber() # This will go back to using `my_parameter=1`. Let's check out 3 important ones: ### Device If you use `device=n`, the pipeline automatically puts the model on the specified device. This will work regardless of whether you are using PyTorch or Tensorflow. 
transcriber = pipeline(model=""openai/whisper-large-v2"", device=0) If the model is too large for a single GPU and you are using PyTorch, you can set `device_map=""auto""` to automatically determine how to load and store the model weights. Using the `device_map` argument requires the 🤗 [Accelerate](https://huggingface.co/docs/accelerate) package: ```bash pip install --upgrade accelerate The following code automatically loads and stores model weights across devices: transcriber = pipeline(model=""openai/whisper-large-v2"", device_map=""auto"") Note that if `device_map=""auto""` is passed, there is no need to add the argument `device=device` when instantiating your `pipeline` as you may encounter some unexpected behavior! ### Batch size By default, pipelines will not batch inference for reasons explained in detail [here](https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching). The reason is that batching is not necessarily faster, and can actually be quite slower in some cases. But if it works in your use case, you can use: transcriber = pipeline(model=""openai/whisper-large-v2"", device=0, batch_size=2) audio_filenames = [f""https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/{i}.flac"" for i in range(1, 5)] texts = transcriber(audio_filenames) This runs the pipeline on the 4 provided audio files, but it will pass them in batches of 2 to the model (which is on a GPU, where batching is more likely to help) without requiring any further code from you. The output should always match what you would have received without batching. It is only meant as a way to help you get more speed out of a pipeline. Pipelines can also alleviate some of the complexities of batching because, for some pipelines, a single item (like a long audio file) needs to be chunked into multiple parts to be processed by a model. The pipeline performs this [*chunk batching*](./main_classes/pipelines#pipeline-chunk-batching) for you. ### Task specific parameters All tasks provide task specific parameters which allow for additional flexibility and options to help you get your job done. For instance, the [`transformers.AutomaticSpeechRecognitionPipeline.__call__`] method has a `return_timestamps` parameter which sounds promising for subtitling videos: >>> transcriber = pipeline(model=""openai/whisper-large-v2"", return_timestamps=True) >>> transcriber(""https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac"") {'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.', 'chunks': [{'timestamp': (0.0, 11.88), 'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its'}, {'timestamp': (11.88, 12.38), 'text': ' creed.'}]} As you can see, the model inferred the text and also outputted **when** the various sentences were pronounced. There are many parameters available for each task, so check out each task's API reference to see what you can tinker with! 
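As another illustration (a sketch that is not part of the original guide), text generation pipelines forward generation arguments such as `max_new_tokens` and `do_sample` to the underlying `generate` call:

```python
from transformers import pipeline

# gpt2 is used here only because it is a small, widely available checkpoint.
generator = pipeline(model='gpt2')
out = generator('Once upon a time', max_new_tokens=20, do_sample=False)
print(out[0]['generated_text'])
```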
For instance, the [`~transformers.AutomaticSpeechRecognitionPipeline`] has a `chunk_length_s` parameter which is helpful for working on really long audio files (for example, subtitling entire movies or hour-long videos) that a model typically cannot handle on its own: thon >>> transcriber = pipeline(model=""openai/whisper-large-v2"", chunk_length_s=30, return_timestamps=True) >>> transcriber(""https://huggingface.co/datasets/sanchit-gandhi/librispeech_long/resolve/main/audio.wav"") {'text': "" Chapter 16. I might have told you of the beginning of this liaison in a few lines, but I wanted you to see every step by which we came. I, too, agree to whatever Marguerite wished, Marguerite to be unable to live apart from me. It was the day after the evening If you can't find a parameter that would really help you out, feel free to [request it](https://github.com/huggingface/transformers/issues/new?assignees=&labels=feature&template=feature-request.yml)! ## Using pipelines on a dataset The pipeline can also run inference on a large dataset. The easiest way we recommend doing this is by using an iterator: def data(): for i in range(1000): yield f""My example {i}"" pipe = pipeline(model=""gpt2"", device=0) generated_characters = 0 for out in pipe(data()): generated_characters += len(out[0][""generated_text""]) The iterator `data()` yields each result, and the pipeline automatically recognizes the input is iterable and will start fetching the data while it continues to process it on the GPU (this uses [DataLoader](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) under the hood). This is important because you don't have to allocate memory for the whole dataset and you can feed the GPU as fast as possible. Since batching could speed things up, it may be useful to try tuning the `batch_size` parameter here. The simplest way to iterate over a dataset is to just load one from 🤗 [Datasets](https://github.com/huggingface/datasets/): # KeyDataset is a util that will just output the item we're interested in. from transformers.pipelines.pt_utils import KeyDataset from datasets import load_dataset pipe = pipeline(model=""hf-internal-testing/tiny-random-wav2vec2"", device=0) dataset = load_dataset(""hf-internal-testing/librispeech_asr_dummy"", ""clean"", split=""validation[:10]"") for out in pipe(KeyDataset(dataset, ""audio"")): print(out) ## Using pipelines for a webserver Creating an inference engine is a complex topic which deserves it's own page. [Link](./pipeline_webserver) ## Vision pipeline Using a [`pipeline`] for vision tasks is practically identical. Specify your task and pass your image to the classifier. The image can be a link, a local path or a base64-encoded image. For example, what species of cat is shown below? 
![pipeline-cat-chonk](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg) >>> from transformers import pipeline >>> vision_classifier = pipeline(model=""google/vit-base-patch16-224"") >>> preds = vision_classifier( images=""https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"" ) >>> preds = [{""score"": round(pred[""score""], 4), ""label"": pred[""label""]} for pred in preds] >>> preds [{'score': 0.4335, 'label': 'lynx, catamount'}, {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}, {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}, {'score': 0.0239, 'label': 'Egyptian cat'}, {'score': 0.0229, 'label': 'tiger cat'}] ## Text pipeline Using a [`pipeline`] for NLP tasks is practically identical. >>> from transformers import pipeline >>> # This model is a `zero-shot-classification` model. >>> # It will classify text, except you are free to choose any label you might imagine >>> classifier = pipeline(model=""facebook/bart-large-mnli"") >>> classifier( ""I have a problem with my iphone that needs to be resolved asap!!"", candidate_labels=[""urgent"", ""not urgent"", ""phone"", ""tablet"", ""computer""], ) {'sequence': 'I have a problem with my iphone that needs to be resolved asap!!', 'labels': ['urgent', 'phone', 'computer', 'not urgent', 'tablet'], 'scores': [0.504, 0.479, 0.013, 0.003, 0.002]} ## Multimodal pipeline The [`pipeline`] supports more than one modality. For example, a visual question answering (VQA) task combines text and image. Feel free to use any image link you like and a question you want to ask about the image. The image can be a URL or a local path to the image. For example, if you use this [invoice image](https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png): >>> from transformers import pipeline >>> vqa = pipeline(model=""impira/layoutlm-document-qa"") >>> vqa( image=""https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png"", question=""What is the invoice number?"", ) [{'score': 0.42515, 'answer': 'us-001', 'start': 16, 'end': 16}] To run the example above you need to have [`pytesseract`](https://pypi.org/project/pytesseract/) installed in addition to 🤗 Transformers: ```bash sudo apt install -y tesseract-ocr pip install pytesseract ## Using `pipeline` on large models with 🤗 `accelerate`: You can easily run `pipeline` on large models using 🤗 `accelerate`! First make sure you have installed `accelerate` with `pip install accelerate`. First load your model using `device_map=""auto""`! We will use `facebook/opt-1.3b` for our example. 
# pip install accelerate import torch from transformers import pipeline pipe = pipeline(model=""facebook/opt-1.3b"", torch_dtype=torch.bfloat16, device_map=""auto"") output = pipe(""This is a cool example!"", do_sample=True, top_p=0.95) You can also pass 8-bit loaded models if you install `bitsandbytes` and add the argument `load_in_8bit=True` # pip install accelerate bitsandbytes import torch from transformers import pipeline pipe = pipeline(model=""facebook/opt-1.3b"", device_map=""auto"", model_kwargs={""load_in_8bit"": True}) output = pipe(""This is a cool example!"", do_sample=True, top_p=0.95) Note that you can replace the checkpoint with any of the Hugging Face model that supports large model loading such as BLOOM! " custom_tools.md," # Custom Tools and Prompts If you are not aware of what tools and agents are in the context of transformers, we recommend you read the [Transformers Agents](transformers_agents) page first. Transformers Agents is an experimental API that is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change. Creating and using custom tools and prompts is paramount to empowering the agent and having it perform new tasks. In this guide we'll take a look at: - How to customize the prompt - How to use custom tools - How to create custom tools ## Customizing the prompt As explained in [Transformers Agents](transformers_agents) agents can run in [`~Agent.run`] and [`~Agent.chat`] mode. Both the `run` and `chat` modes underlie the same logic. The language model powering the agent is conditioned on a long prompt and completes the prompt by generating the next tokens until the stop token is reached. The only difference between the two modes is that during the `chat` mode the prompt is extended with previous user inputs and model generations. This allows the agent to have access to past interactions, seemingly giving the agent some kind of memory. ### Structure of the prompt Let's take a closer look at how the prompt is structured to understand how it can be best customized. The prompt is structured broadly into four parts. - 1. Introduction: how the agent should behave, explanation of the concept of tools. - 2. Description of all the tools. This is defined by a `<>` token that is dynamically replaced at runtime with the tools defined/chosen by the user. - 3. A set of examples of tasks and their solution - 4. Current example, and request for solution. To better understand each part, let's look at a shortened version of how the `run` prompt can look like: ````text I will ask you to perform a task, your job is to come up with a series of simple commands in Python that will perform the task. [] You can print intermediate results if it makes sense to do so. Tools: - document_qa: This is a tool that answers a question about a document (pdf). It takes an input named `document` which should be the document containing the information, as well as a `question` that is the question about the document. It returns a text that contains the answer to the question. - image_captioner: This is a tool that generates a description of an image. It takes an input named `image` which should be the image to the caption and returns a text that contains the description in English. [] Task: ""Answer the question in the variable `question` about the image stored in the variable `image`. 
The question is in French."" I will use the following tools: `translator` to translate the question into English and then `image_qa` to answer the question on the input image. Answer: translated_question = translator(question=question, src_lang=""French"", tgt_lang=""English"") print(f""The translated question is {translated_question}."") answer = image_qa(image=image, question=translated_question) print(f""The answer is {answer}"") Task: ""Identify the oldest person in the `document` and create an image showcasing the result as a banner."" I will use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer. Answer: answer = document_qa(document, question=""What is the oldest person?"") print(f""The answer is {answer}."") image = image_generator(""A banner showing "" + answer) [] Task: ""Draw me a picture of rivers and lakes"" I will use the following ` The introduction (the text before *""Tools:""*) explains precisely how the model shall behave and what it should do. This part most likely does not need to be customized as the agent shall always behave the same way. The second part (the bullet points below *""Tools""*) is dynamically added upon calling `run` or `chat`. There are exactly as many bullet points as there are tools in `agent.toolbox` and each bullet point consists of the name and description of the tool: ```text - : Let's verify this quickly by loading the document_qa tool and printing out the name and description. from transformers import load_tool document_qa = load_tool(""document-question-answering"") print(f""- {document_qa.name}: {document_qa.description}"") which gives: ```text - document_qa: This is a tool that answers a question about a document (pdf). It takes an input named `document` which should be the document containing the information, as well as a `question` that is the question about the document. It returns a text that contains the answer to the question. We can see that the tool name is short and precise. The description includes two parts, the first explaining what the tool does and the second states what input arguments and return values are expected. A good tool name and tool description are very important for the agent to correctly use it. Note that the only information the agent has about the tool is its name and description, so one should make sure that both are precisely written and match the style of the existing tools in the toolbox. In particular make sure the description mentions all the arguments expected by name in code-style, along with the expected type and a description of what they are. Check the naming and description of the curated Transformers tools to better understand what name and description a tool is expected to have. You can see all tools with the [`Agent.toolbox`] property. The third part includes a set of curated examples that show the agent exactly what code it should produce for what kind of user request. The large language models empowering the agent are extremely good at recognizing patterns in a prompt and repeating the pattern with new data. Therefore, it is very important that the examples are written in a way that maximizes the likelihood of the agent to generating correct, executable code in practice. 
Let's have a look at one example: ````text Task: ""Identify the oldest person in the `document` and create an image showcasing the result as a banner."" I will use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer. Answer: answer = document_qa(document, question=""What is the oldest person?"") print(f""The answer is {answer}."") image = image_generator(""A banner showing "" + answer) ` The pattern the model is prompted to repeat has three parts: The task statement, the agent's explanation of what it intends to do, and finally the generated code. Every example that is part of the prompt has this exact pattern, thus making sure that the agent will reproduce exactly the same pattern when generating new tokens. The prompt examples are curated by the Transformers team and rigorously evaluated on a set of [problem statements](https://github.com/huggingface/transformers/blob/main/src/transformers/tools/evaluate_agent.py) to ensure that the agent's prompt is as good as possible to solve real use cases of the agent. The final part of the prompt corresponds to: ```text Task: ""Draw me a picture of rivers and lakes"" I will use the following is a final and unfinished example that the agent is tasked to complete. The unfinished example is dynamically created based on the actual user input. For the above example, the user ran: agent.run(""Draw me a picture of rivers and lakes"") The user input - *a.k.a* the task: *""Draw me a picture of rivers and lakes""* is cast into the prompt template: ""Task: \n\n I will use the following"". This sentence makes up the final lines of the prompt the agent is conditioned on, therefore strongly influencing the agent to finish the example exactly in the same way it was previously done in the examples. Without going into too much detail, the chat template has the same prompt structure with the examples having a slightly different style, *e.g.*: ````text [] ===== Human: Answer the question in the variable `question` about the image stored in the variable `image`. Assistant: I will use the tool `image_qa` to answer the question on the input image. answer = image_qa(text=question, image=image) print(f""The answer is {answer}"") Human: I tried this code, it worked but didn't give me a good result. The question is in French Assistant: In this case, the question needs to be translated first. I will use the tool `translator` to do this. translated_question = translator(question=question, src_lang=""French"", tgt_lang=""English"") print(f""The translated question is {translated_question}."") answer = image_qa(text=translated_question, image=image) print(f""The answer is {answer}"") ===== [] ` Contrary, to the examples of the `run` prompt, each `chat` prompt example has one or more exchanges between the *Human* and the *Assistant*. Every exchange is structured similarly to the example of the `run` prompt. The user's input is appended to behind *Human:* and the agent is prompted to first generate what needs to be done before generating code. An exchange can be based on previous exchanges, therefore allowing the user to refer to past exchanges as is done *e.g.* above by the user's input of ""I tried **this** code"" refers to the previously generated code of the agent. Upon running `.chat`, the user's input or *task* is cast into an unfinished example of the form: ```text Human: \n\nAssistant: which the agent completes. 
Contrary to the `run` command, the `chat` command then appends the completed example to the prompt, thus giving the agent more context for the next `chat` turn. Great now that we know how the prompt is structured, let's see how we can customize it! ### Writing good user inputs While large language models are getting better and better at understanding users' intentions, it helps enormously to be as precise as possible to help the agent pick the correct task. What does it mean to be as precise as possible? The agent sees a list of tool names and their description in its prompt. The more tools are added the more difficult it becomes for the agent to choose the correct tool and it's even more difficult to choose the correct sequences of tools to run. Let's look at a common failure case, here we will only return the code to analyze it. from transformers import HfAgent agent = HfAgent(""https://api-inference.huggingface.co/models/bigcode/starcoder"") agent.run(""Show me a tree"", return_code=True) gives: ```text ==Explanation from the agent== I will use the following tool: `image_segmenter` to create a segmentation mask for the image. ==Code generated by the agent== mask = image_segmenter(image, prompt=""tree"") which is probably not what we wanted. Instead, it is more likely that we want an image of a tree to be generated. To steer the agent more towards using a specific tool it can therefore be very helpful to use important keywords that are present in the tool's name and description. Let's have a look. agent.toolbox[""image_generator""].description ```text 'This is a tool that creates an image according to a prompt, which is a text description. It takes an input named `prompt` which contains the image description and outputs an image. The name and description make use of the keywords ""image"", ""prompt"", ""create"" and ""generate"". Using these words will most likely work better here. Let's refine our prompt a bit. agent.run(""Create an image of a tree"", return_code=True) gives: ```text ==Explanation from the agent== I will use the following tool `image_generator` to generate an image of a tree. ==Code generated by the agent== image = image_generator(prompt=""tree"") Much better! That looks more like what we want. In short, when you notice that the agent struggles to correctly map your task to the correct tools, try looking up the most pertinent keywords of the tool's name and description and try refining your task request with it. ### Customizing the tool descriptions As we've seen before the agent has access to each of the tools' names and descriptions. The base tools should have very precise names and descriptions, however, you might find that it could help to change the the description or name of a tool for your specific use case. This might become especially important when you've added multiple tools that are very similar or if you want to use your agent only for a certain domain, *e.g.* image generation and transformations. A common problem is that the agent confuses image generation with image transformation/modification when used a lot for image generation tasks, *e.g.* agent.run(""Make an image of a house and a car"", return_code=True) returns ```text ==Explanation from the agent== I will use the following tools `image_generator` to generate an image of a house and `image_transformer` to transform the image of a car into the image of a house. 
==Code generated by the agent== house_image = image_generator(prompt=""A house"") car_image = image_generator(prompt=""A car"") house_car_image = image_transformer(image=car_image, prompt=""A house"") which is probably not exactly what we want here. It seems like the agent has a difficult time to understand the difference between `image_generator` and `image_transformer` and often uses the two together. We can help the agent here by changing the tool name and description of `image_transformer`. Let's instead call it `modifier` to disassociate it a bit from ""image"" and ""prompt"": agent.toolbox[""modifier""] = agent.toolbox.pop(""image_transformer"") agent.toolbox[""modifier""].description = agent.toolbox[""modifier""].description.replace( ""transforms an image according to a prompt"", ""modifies an image"" ) Now ""modify"" is a strong cue to use the new image processor which should help with the above prompt. Let's run it again. agent.run(""Make an image of a house and a car"", return_code=True) Now we're getting: ```text ==Explanation from the agent== I will use the following tools: `image_generator` to generate an image of a house, then `image_generator` to generate an image of a car. ==Code generated by the agent== house_image = image_generator(prompt=""A house"") car_image = image_generator(prompt=""A car"") which is definitely closer to what we had in mind! However, we want to have both the house and car in the same image. Steering the task more toward single image generation should help: agent.run(""Create image: 'A house and car'"", return_code=True) ```text ==Explanation from the agent== I will use the following tool: `image_generator` to generate an image. ==Code generated by the agent== image = image_generator(prompt=""A house and car"") Agents are still brittle for many use cases, especially when it comes to slightly more complex use cases like generating an image of multiple objects. Both the agent itself and the underlying prompt will be further improved in the coming months making sure that agents become more robust to a variety of user inputs. ### Customizing the whole prompt To give the user maximum flexibility, the whole prompt template as explained in [above](#structure-of-the-prompt) can be overwritten by the user. In this case make sure that your custom prompt includes an introduction section, a tool section, an example section, and an unfinished example section. If you want to overwrite the `run` prompt template, you can do as follows: template = """""" [] """""" agent = HfAgent(your_endpoint, run_prompt_template=template) Please make sure to have the `<>` string and the `<>` defined somewhere in the `template` so that the agent can be aware of the tools, it has available to it as well as correctly insert the user's prompt. Similarly, one can overwrite the `chat` prompt template. Note that the `chat` mode always uses the following format for the exchanges: ```text Human: <> Assistant: Therefore it is important that the examples of the custom `chat` prompt template also make use of this format. You can overwrite the `chat` template at instantiation as follows. template = """""" [] """""" agent = HfAgent(url_endpoint=your_endpoint, chat_prompt_template=template) Please make sure to have the `<>` string defined somewhere in the `template` so that the agent can be aware of the tools, it has available to it. In both cases, you can pass a repo ID instead of the prompt template if you would like to use a template hosted by someone in the community. 
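For example, pointing the agent at a prompt template hosted on the Hub might look like the following sketch (it assumes the repo is a dataset repository containing the expected `run_prompt_template.txt` file, as described next):

```python
from transformers import HfAgent

# A dataset repo ID can be passed in place of a literal template string; the
# matching *_prompt_template.txt file is then fetched from that repo.
agent = HfAgent(
    'https://api-inference.huggingface.co/models/bigcode/starcoder',
    run_prompt_template='huggingface-tools/default-prompts',
)
```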
The default prompts live in [this repo](https://huggingface.co/datasets/huggingface-tools/default-prompts) as an example. To upload your custom prompt on a repo on the Hub and share it with the community just make sure: - to use a dataset repository - to put the prompt template for the `run` command in a file named `run_prompt_template.txt` - to put the prompt template for the `chat` command in a file named `chat_prompt_template.txt` ## Using custom tools In this section, we'll be leveraging two existing custom tools that are specific to image generation: - We replace [huggingface-tools/image-transformation](https://huggingface.co/spaces/huggingface-tools/image-transformation), with [diffusers/controlnet-canny-tool](https://huggingface.co/spaces/diffusers/controlnet-canny-tool) to allow for more image modifications. - We add a new tool for image upscaling to the default toolbox: [diffusers/latent-upscaler-tool](https://huggingface.co/spaces/diffusers/latent-upscaler-tool) replace the existing image-transformation tool. We'll start by loading the custom tools with the convenient [`load_tool`] function: from transformers import load_tool controlnet_transformer = load_tool(""diffusers/controlnet-canny-tool"") upscaler = load_tool(""diffusers/latent-upscaler-tool"") Upon adding custom tools to an agent, the tools' descriptions and names are automatically included in the agents' prompts. Thus, it is imperative that custom tools have a well-written description and name in order for the agent to understand how to use them. Let's take a look at the description and name of `controlnet_transformer`: print(f""Description: '{controlnet_transformer.description}'"") print(f""Name: '{controlnet_transformer.name}'"") gives ```text Description: 'This is a tool that transforms an image with ControlNet according to a prompt. It takes two inputs: `image`, which should be the image to transform, and `prompt`, which should be the prompt to use to change it. It returns the modified image.' Name: 'image_transformer' The name and description are accurate and fit the style of the [curated set of tools](./transformers_agents#a-curated-set-of-tools). Next, let's instantiate an agent with `controlnet_transformer` and `upscaler`: tools = [controlnet_transformer, upscaler] agent = HfAgent(""https://api-inference.huggingface.co/models/bigcode/starcoder"", additional_tools=tools) This command should give you the following info: ```text image_transformer has been replaced by as provided in `additional_tools` The set of curated tools already has an `image_transformer` tool which is hereby replaced with our custom tool. Overwriting existing tools can be beneficial if we want to use a custom tool exactly for the same task as an existing tool because the agent is well-versed in using the specific task. Beware that the custom tool should follow the exact same API as the overwritten tool in this case, or you should adapt the prompt template to make sure all examples using that tool are updated. The upscaler tool was given the name `image_upscaler` which is not yet present in the default toolbox and is therefore simply added to the list of tools. 
You can always have a look at the toolbox that is currently available to the agent via the `agent.toolbox` attribute: print(""\n"".join([f""- {a}"" for a in agent.toolbox.keys()])) ```text - document_qa - image_captioner - image_qa - image_segmenter - transcriber - summarizer - text_classifier - text_qa - text_reader - translator - image_transformer - text_downloader - image_generator - video_generator - image_upscaler Note how `image_upscaler` is now part of the agents' toolbox. Let's now try out the new tools! We will re-use the image we generated in [Transformers Agents Quickstart](./transformers_agents#single-execution-run). from diffusers.utils import load_image image = load_image( ""https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png"" ) Let's transform the image into a beautiful winter landscape: image = agent.run(""Transform the image: 'A frozen lake and snowy forest'"", image=image) ```text ==Explanation from the agent== I will use the following tool: `image_transformer` to transform the image. ==Code generated by the agent== image = image_transformer(image, prompt=""A frozen lake and snowy forest"") The new image processing tool is based on ControlNet which can make very strong modifications to the image. By default the image processing tool returns an image of size 512x512 pixels. Let's see if we can upscale it. image = agent.run(""Upscale the image"", image) ```text ==Explanation from the agent== I will use the following tool: `image_upscaler` to upscale the image. ==Code generated by the agent== upscaled_image = image_upscaler(image) The agent automatically mapped our prompt ""Upscale the image"" to the just added upscaler tool purely based on the description and name of the upscaler tool and was able to correctly run it. Next, let's have a look at how you can create a new custom tool. ### Adding new tools In this section, we show how to create a new tool that can be added to the agent. #### Creating a new tool We'll first start by creating a tool. We'll add the not-so-useful yet fun task of fetching the model on the Hugging Face Hub with the most downloads for a given task. We can do that with the following code: thon from huggingface_hub import list_models task = ""text-classification"" model = next(iter(list_models(filter=task, sort=""downloads"", direction=-1))) print(model.id) For the task `text-classification`, this returns `'facebook/bart-large-mnli'`, for `translation` it returns `'t5-base`. How do we convert this to a tool that the agent can leverage? All tools depend on the superclass `Tool` that holds the main attributes necessary. We'll create a class that inherits from it: thon from transformers import Tool class HFModelDownloadsTool(Tool): pass This class has a few needs: - An attribute `name`, which corresponds to the name of the tool itself. To be in tune with other tools which have a performative name, we'll name it `model_download_counter`. - An attribute `description`, which will be used to populate the prompt of the agent. - `inputs` and `outputs` attributes. Defining this will help the python interpreter make educated choices about types, and will allow for a gradio-demo to be spawned when we push our tool to the Hub. They're both a list of expected values, which can be `text`, `image`, or `audio`. - A `__call__` method which contains the inference code. This is the code we've played with above! 
Here's what our class looks like now: thon from transformers import Tool from huggingface_hub import list_models class HFModelDownloadsTool(Tool): name = ""model_download_counter"" description = ( ""This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. "" ""It takes the name of the category (such as text-classification, depth-estimation, etc), and "" ""returns the name of the checkpoint."" ) inputs = [""text""] outputs = [""text""] def __call__(self, task: str): model = next(iter(list_models(filter=task, sort=""downloads"", direction=-1))) return model.id We now have our tool handy. Save it in a file and import it from your main script. Let's name this file `model_downloads.py`, so the resulting import code looks like this: thon from model_downloads import HFModelDownloadsTool tool = HFModelDownloadsTool() In order to let others benefit from it and for simpler initialization, we recommend pushing it to the Hub under your namespace. To do so, just call `push_to_hub` on the `tool` variable: thon tool.push_to_hub(""hf-model-downloads"") You now have your code on the Hub! Let's take a look at the final step, which is to have the agent use it. #### Having the agent use the tool We now have our tool that lives on the Hub which can be instantiated as such (change the user name for your tool): thon from transformers import load_tool tool = load_tool(""lysandre/hf-model-downloads"") In order to use it in the agent, simply pass it in the `additional_tools` parameter of the agent initialization method: thon from transformers import HfAgent agent = HfAgent(""https://api-inference.huggingface.co/models/bigcode/starcoder"", additional_tools=[tool]) agent.run( ""Can you read out loud the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?"" ) which outputs the following: ```text ==Code generated by the agent== model = model_download_counter(task=""text-to-video"") print(f""The model with the most downloads is {model}."") audio_model = text_reader(model) ==Result== The model with the most downloads is damo-vilab/text-to-video-ms-1.7b. and generates the following audio. | **Audio** | |------------------------------------------------------------------------------------------------------------------------------------------------------| | | Depending on the LLM, some are quite brittle and require very exact prompts in order to work well. Having a well-defined name and description of the tool is paramount to having it be leveraged by the agent. ### Replacing existing tools Replacing existing tools can be done simply by assigning a new item to the agent's toolbox. Here's how one would do so: thon from transformers import HfAgent, load_tool agent = HfAgent(""https://api-inference.huggingface.co/models/bigcode/starcoder"") agent.toolbox[""image-transformation""] = load_tool(""diffusers/controlnet-canny-tool"") Beware when replacing tools with others! This will also adjust the agent's prompt. This can be good if you have a better prompt suited for the task, but it can also result in your tool being selected way more than others or for other tools to be selected instead of the one you have defined. ## Leveraging gradio-tools [gradio-tools](https://github.com/freddyaboulton/gradio-tools) is a powerful library that allows using Hugging Face Spaces as tools. It supports many existing Spaces as well as custom Spaces to be designed with it. We offer support for `gradio_tools` by using the `Tool.from_gradio` method. 
For example, we want to take advantage of the `StableDiffusionPromptGeneratorTool` tool offered in the `gradio-tools` toolkit to improve our prompts and generate better images. We first import the tool from `gradio_tools` and instantiate it: thon from gradio_tools import StableDiffusionPromptGeneratorTool gradio_tool = StableDiffusionPromptGeneratorTool() We pass that instance to the `Tool.from_gradio` method: thon from transformers import Tool tool = Tool.from_gradio(gradio_tool) Now we can manage it exactly as we would a usual custom tool. We leverage it to improve our prompt `a rabbit wearing a space suit`: thon from transformers import HfAgent agent = HfAgent(""https://api-inference.huggingface.co/models/bigcode/starcoder"", additional_tools=[tool]) agent.run(""Generate an image of the `prompt` after improving it."", prompt=""A rabbit wearing a space suit"") The model adequately leverages the tool: ```text ==Explanation from the agent== I will use the following tools: `StableDiffusionPromptGenerator` to improve the prompt, then `image_generator` to generate an image according to the improved prompt. ==Code generated by the agent== improved_prompt = StableDiffusionPromptGenerator(prompt) print(f""The improved prompt is {improved_prompt}."") image = image_generator(improved_prompt) The agent improves the prompt before finally generating the image. gradio-tools requires *textual* inputs and outputs, even when working with different modalities, while this implementation works with image and audio objects. The two are therefore currently incompatible, but they will become compatible as we work to improve the support. ## Future compatibility with Langchain We love Langchain and think it has a very compelling suite of tools. In order to handle these tools, Langchain requires *textual* inputs and outputs, even when working with different modalities. This is often the serialized version (i.e., saved to disk) of the objects. This difference means that multi-modality isn't handled between transformers-agents and langchain. We aim for this limitation to be resolved in future versions, and welcome any help from avid langchain users to help us achieve this compatibility. If you would like to help, please [open an issue](https://github.com/huggingface/transformers/issues/new) and share what you have in mind. " perplexity.md," # Perplexity of fixed-length models [[open-in-colab]] Perplexity (PPL) is one of the most common metrics for evaluating language models. Before diving in, we should note that the metric applies specifically to classical language models (sometimes called autoregressive or causal language models) and is not well defined for masked language models like BERT (see [summary of the models](model_summary)). Perplexity is defined as the exponentiated average negative log-likelihood of a sequence. If we have a tokenized sequence \\(X = (x_0, x_1, \dots, x_t)\\), then the perplexity of \\(X\\) is, $$\text{PPL}(X) = \exp \left\{ {-\frac{1}{t}\sum_i^t \log p_\theta (x_i|x_{<i}) } \right\}$$ where \\(\log p_\theta (x_i|x_{<i})\\) is the log-likelihood of the ith token conditioned on the preceding tokens \\(x_{<i}\\) according to our model. When working with approximate models, however, we typically have a constraint on the number of tokens the model can process.
The largest version of [GPT-2](model_doc/gpt2), for example, has a fixed length of 1024 tokens, so we cannot calculate \\(p_\theta(x_t|x_{<t})\\) directly when \\(t\\) is larger than 1024. One workaround is to break the sequence into subsequences no longer than the model's maximum input size and to compute the log-likelihood of each segment independently, conditioning only on the tokens within that segment. This is quick to compute since the perplexity of each segment can be computed in one forward pass, but serves as a poor approximation of the fully-factorized perplexity and will typically yield a higher (worse) PPL because the model will have less context at most of the prediction steps. Instead, the PPL of fixed-length models should be evaluated with a sliding-window strategy. This involves repeatedly sliding the context window so that the model has more context when making each prediction. This is a closer approximation to the true decomposition of the sequence probability and will typically yield a more favorable score. The downside is that it requires a separate forward pass for each token in the corpus. A good practical compromise is to employ a strided sliding window, moving the context by larger strides rather than sliding by 1 token at a time. This allows computation to proceed much faster while still giving the model a large context to make predictions at each step. ## Example: Calculating perplexity with GPT-2 in 🤗 Transformers Let's demonstrate this process with GPT-2. thon from transformers import GPT2LMHeadModel, GPT2TokenizerFast device = ""cuda"" model_id = ""gpt2-large"" model = GPT2LMHeadModel.from_pretrained(model_id).to(device) tokenizer = GPT2TokenizerFast.from_pretrained(model_id) We'll load in the WikiText-2 dataset and evaluate the perplexity using a few different sliding-window strategies. Since this dataset is small and we're just doing one forward pass over the set, we can just load and encode the entire dataset in memory. thon from datasets import load_dataset test = load_dataset(""wikitext"", ""wikitext-2-raw-v1"", split=""test"") encodings = tokenizer(""\n\n"".join(test[""text""]), return_tensors=""pt"") With 🤗 Transformers, we can simply pass the `input_ids` as the `labels` to our model, and the average negative log-likelihood for each token is returned as the loss. With our sliding window approach, however, there is overlap in the tokens we pass to the model at each iteration. We don't want the log-likelihood for the tokens we're just treating as context to be included in our loss, so we can set these targets to `-100` so that they are ignored. The following is an example of how we could do this with a stride of `512`. This means that the model will have at least 512 tokens for context when calculating the conditional likelihood of any one token (provided there are 512 preceding tokens available to condition on). thon import torch from tqdm import tqdm max_length = model.config.n_positions stride = 512 seq_len = encodings.input_ids.size(1) nlls = [] prev_end_loc = 0 for begin_loc in tqdm(range(0, seq_len, stride)): end_loc = min(begin_loc + max_length, seq_len) trg_len = end_loc - prev_end_loc # may be different from stride on last loop input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device) target_ids = input_ids.clone() target_ids[:, :-trg_len] = -100 with torch.no_grad(): outputs = model(input_ids, labels=target_ids) # loss is calculated using CrossEntropyLoss which averages over valid labels # N.B. the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels # to the left by 1.
neg_log_likelihood = outputs.loss nlls.append(neg_log_likelihood) prev_end_loc = end_loc if end_loc == seq_len: break ppl = torch.exp(torch.stack(nlls).mean()) Running this with the stride length equal to the max input length is equivalent to the suboptimal, non-sliding-window strategy we discussed above. The smaller the stride, the more context the model will have in making each prediction, and the better the reported perplexity will typically be. When we run the above with `stride = 1024`, i.e. no overlap, the resulting PPL is `19.44`, which is about the same as the `19.93` reported in the GPT-2 paper. By using `stride = 512` and thereby employing our striding window strategy, this jumps down to `16.45`. This is not only a more favorable score, but is calculated in a way that is closer to the true autoregressive decomposition of a sequence likelihood. " model_summary.md," # The Transformer model family Since its introduction in 2017, the [original Transformer](https://arxiv.org/abs/1706.03762) model has inspired many new and exciting models that extend beyond natural language processing (NLP) tasks. There are models for [predicting the folded structure of proteins](https://huggingface.co/blog/deep-learning-with-proteins), [training a cheetah to run](https://huggingface.co/blog/train-decision-transformers), and [time series forecasting](https://huggingface.co/blog/time-series-transformers). With so many Transformer variants available, it can be easy to miss the bigger picture. What all these models have in common is they're based on the original Transformer architecture. Some models only use the encoder or decoder, while others use both. This provides a useful taxonomy to categorize and examine the high-level differences within models in the Transformer family, and it'll help you understand Transformers you haven't encountered before. If you aren't familiar with the original Transformer model or need a refresher, check out the [How do Transformers work](https://huggingface.co/course/chapter1/4?fw=pt) chapter from the Hugging Face course. ## Computer vision ### Convolutional network For a long time, convolutional networks (CNNs) were the dominant paradigm for computer vision tasks until the [Vision Transformer](https://arxiv.org/abs/2010.11929) demonstrated its scalability and efficiency. Even then, some of a CNN's best qualities, like translation invariance, are so powerful (especially for certain tasks) that some Transformers incorporate convolutions in their architecture. [ConvNeXt](model_doc/convnext) flipped this exchange around and incorporated design choices from Transformers to modernize a CNN. For example, ConvNeXt uses non-overlapping sliding windows to patchify an image and a larger kernel to increase its global receptive field. ConvNeXt also makes several layer design choices to be more memory-efficient and improve performance, so it competes favorably with Transformers! ### Encoder[[cv-encoder]] The [Vision Transformer (ViT)](model_doc/vit) opened the door to computer vision tasks without convolutions. ViT uses a standard Transformer encoder, but its main breakthrough was how it treated an image. It splits an image into fixed-size patches and uses them to create an embedding, just like how a sentence is split into tokens. ViT capitalized on the Transformers' efficient architecture to demonstrate competitive results with the CNNs at the time while requiring fewer resources to train. 
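To make the patch-embedding idea concrete, here is a rough PyTorch sketch. It is not the actual ViT implementation; the 16x16 patch size and 768-dimensional embedding are illustrative values (they happen to match ViT-Base), and a real model would also add a class token and position embeddings.

```python
import torch

# Rough sketch of ViT-style patch embedding (illustrative, not the real implementation):
# a 224x224 RGB image is cut into 16x16 patches, and each flattened patch is projected
# to an embedding vector, giving one "token" per patch for the Transformer encoder.
image = torch.randn(1, 3, 224, 224)                                     # (batch, channels, height, width)
patch_size = 16
patches = image.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
patches = patches.contiguous().view(1, 3, -1, patch_size, patch_size)   # (1, 3, 196, 16, 16)
patches = patches.permute(0, 2, 1, 3, 4).flatten(2)                     # (1, 196, 3*16*16)
projection = torch.nn.Linear(3 * patch_size * patch_size, 768)
embeddings = projection(patches)                                         # (1, 196, 768)
```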
ViT was soon followed by other vision models that could also handle dense vision tasks like segmentation as well as detection. One of these models is the [Swin](model_doc/swin) Transformer. It builds hierarchical feature maps (like a CNN 👀 and unlike ViT) from smaller-sized patches and merges them with neighboring patches in deeper layers. Attention is only computed within a local window, and the window is shifted between attention layers to create connections to help the model learn better. Since the Swin Transformer can produce hierarchical feature maps, it is a good candidate for dense prediction tasks like segmentation and detection. The [SegFormer](model_doc/segformer) also uses a Transformer encoder to build hierarchical feature maps, but it adds a simple multilayer perceptron (MLP) decoder on top to combine all the feature maps and make a prediction. Other vision models, like BeIT and ViTMAE, drew inspiration from BERT's pretraining objective. [BeIT](model_doc/beit) is pretrained by *masked image modeling (MIM)*; the image patches are randomly masked, and the image is also tokenized into visual tokens. BeIT is trained to predict the visual tokens corresponding to the masked patches. [ViTMAE](model_doc/vitmae) has a similar pretraining objective, except it must predict the pixels instead of visual tokens. What's unusual is 75% of the image patches are masked! The decoder reconstructs the pixels from the masked tokens and encoded patches. After pretraining, the decoder is thrown away, and the encoder is ready to be used in downstream tasks. ### Decoder[[cv-decoder]] Decoder-only vision models are rare because most vision models rely on an encoder to learn an image representation. But for use cases like image generation, the decoder is a natural fit, as we've seen from text generation models like GPT-2. [ImageGPT](model_doc/imagegpt) uses the same architecture as GPT-2, but instead of predicting the next token in a sequence, it predicts the next pixel in an image. In addition to image generation, ImageGPT could also be finetuned for image classification. ### Encoder-decoder[[cv-encoder-decoder]] Vision models commonly use an encoder (also known as a backbone) to extract important image features before passing them to a Transformer decoder. [DETR](model_doc/detr) has a pretrained backbone, but it also uses the complete Transformer encoder-decoder architecture for object detection. The encoder learns image representations and combines them with object queries (each object query is a learned embedding that focuses on a region or object in an image) in the decoder. DETR predicts the bounding box coordinates and class label for each object query. ## Natural language processing ### Encoder[[nlp-encoder]] [BERT](model_doc/bert) is an encoder-only Transformer that randomly masks certain tokens in the input to avoid seeing other tokens, which would allow it to ""cheat"". The pretraining objective is to predict the masked token based on the context. This allows BERT to fully use the left and right contexts to help it learn a deeper and richer representation of the inputs. However, there was still room for improvement in BERT's pretraining strategy. [RoBERTa](model_doc/roberta) improved upon this by introducing a new pretraining recipe that includes training for longer and on larger batches, randomly masking tokens at each epoch instead of just once during preprocessing, and removing the next-sentence prediction objective. 
The dominant strategy to improve performance is to increase the model size. But training large models is computationally expensive. One way to reduce computational costs is using a smaller model like [DistilBERT](model_doc/distilbert). DistilBERT uses [knowledge distillation](https://arxiv.org/abs/1503.02531) - a compression technique - to create a smaller version of BERT while keeping nearly all of its language understanding capabilities. However, most Transformer models continued to trend towards more parameters, leading to new models focused on improving training efficiency. [ALBERT](model_doc/albert) reduces memory consumption by lowering the number of parameters in two ways: separating the larger vocabulary embedding into two smaller matrices and allowing layers to share parameters. [DeBERTa](model_doc/deberta) added a disentangled attention mechanism where the word and its position are separately encoded in two vectors. The attention is computed from these separate vectors instead of a single vector containing the word and position embeddings. [Longformer](model_doc/longformer) also focused on making attention more efficient, especially for processing documents with longer sequence lengths. It uses a combination of local windowed attention (attention only calculated from fixed window size around each token) and global attention (only for specific task tokens like `[CLS]` for classification) to create a sparse attention matrix instead of a full attention matrix. ### Decoder[[nlp-decoder]] [GPT-2](model_doc/gpt2) is a decoder-only Transformer that predicts the next word in the sequence. It masks tokens to the right so the model can't ""cheat"" by looking ahead. By pretraining on a massive body of text, GPT-2 became really good at generating text, even if the text is only sometimes accurate or true. But GPT-2 lacked the bidirectional context from BERT's pretraining, which made it unsuitable for certain tasks. [XLNET](model_doc/xlnet) combines the best of both BERT and GPT-2's pretraining objectives by using a permutation language modeling objective (PLM) that allows it to learn bidirectionally. After GPT-2, language models grew even bigger and are now known as *large language models (LLMs)*. LLMs demonstrate few- or even zero-shot learning if pretrained on a large enough dataset. [GPT-J](model_doc/gptj) is an LLM with 6B parameters and trained on 400B tokens. GPT-J was followed by [OPT](model_doc/opt), a family of decoder-only models, the largest of which is 175B and trained on 180B tokens. [BLOOM](model_doc/bloom) was released around the same time, and the largest model in the family has 176B parameters and is trained on 366B tokens in 46 languages and 13 programming languages. ### Encoder-decoder[[nlp-encoder-decoder]] [BART](model_doc/bart) keeps the original Transformer architecture, but it modifies the pretraining objective with *text infilling* corruption, where some text spans are replaced with a single `mask` token. The decoder predicts the uncorrupted tokens (future tokens are masked) and uses the encoder's hidden states to help it. [Pegasus](model_doc/pegasus) is similar to BART, but Pegasus masks entire sentences instead of text spans. In addition to masked language modeling, Pegasus is pretrained by gap sentence generation (GSG). The GSG objective masks whole sentences important to a document, replacing them with a `mask` token. The decoder must generate the output from the remaining sentences. 
[T5](model_doc/t5) is a more unique model that casts all NLP tasks into a text-to-text problem using specific prefixes. For example, the prefix `Summarize:` indicates a summarization task. T5 is pretrained by supervised (GLUE and SuperGLUE) training and self-supervised training (randomly sample and drop out 15% of tokens). ## Audio ### Encoder[[audio-encoder]] [Wav2Vec2](model_doc/wav2vec2) uses a Transformer encoder to learn speech representations directly from raw audio waveforms. It is pretrained with a contrastive task to determine the true speech representation from a set of false ones. [HuBERT](model_doc/hubert) is similar to Wav2Vec2 but has a different training process. Target labels are created by a clustering step in which segments of similar audio are assigned to a cluster which becomes a hidden unit. The hidden unit is mapped to an embedding to make a prediction. ### Encoder-decoder[[audio-encoder-decoder]] [Speech2Text](model_doc/speech_to_text) is a speech model designed for automatic speech recognition (ASR) and speech translation. The model accepts log mel-filter bank features extracted from the audio waveform and pretrained autoregressively to generate a transcript or translation. [Whisper](model_doc/whisper) is also an ASR model, but unlike many other speech models, it is pretrained on a massive amount of ✨ labeled ✨ audio transcription data for zero-shot performance. A large chunk of the dataset also contains non-English languages, meaning Whisper can also be used for low-resource languages. Structurally, Whisper is similar to Speech2Text. The audio signal is converted to a log-mel spectrogram encoded by the encoder. The decoder generates the transcript autoregressively from the encoder's hidden states and the previous tokens. ## Multimodal ### Encoder[[mm-encoder]] [VisualBERT](model_doc/visual_bert) is a multimodal model for vision-language tasks released shortly after BERT. It combines BERT and a pretrained object detection system to extract image features into visual embeddings, passed alongside text embeddings to BERT. VisualBERT predicts the masked text based on the unmasked text and the visual embeddings, and it also has to predict whether the text is aligned with the image. When ViT was released, [ViLT](model_doc/vilt) adopted ViT in its architecture because it was easier to get the image embeddings this way. The image embeddings are jointly processed with the text embeddings. From there, ViLT is pretrained by image text matching, masked language modeling, and whole word masking. [CLIP](model_doc/clip) takes a different approach and makes a pair prediction of (`image`, `text`) . An image encoder (ViT) and a text encoder (Transformer) are jointly trained on a 400 million (`image`, `text`) pair dataset to maximize the similarity between the image and text embeddings of the (`image`, `text`) pairs. After pretraining, you can use natural language to instruct CLIP to predict the text given an image or vice versa. [OWL-ViT](model_doc/owlvit) builds on top of CLIP by using it as its backbone for zero-shot object detection. After pretraining, an object detection head is added to make a set prediction over the (`class`, `bounding box`) pairs. ### Encoder-decoder[[mm-encoder-decoder]] Optical character recognition (OCR) is a long-standing text recognition task that typically involves several components to understand the image and generate the text. [TrOCR](model_doc/trocr) simplifies the process using an end-to-end Transformer. 
The encoder is a ViT-style model for image understanding and processes the image as fixed-size patches. The decoder accepts the encoder's hidden states and autoregressively generates text. [Donut](model_doc/donut) is a more general visual document understanding model that doesn't rely on OCR-based approaches. It uses a Swin Transformer as the encoder and multilingual BART as the decoder. Donut is pretrained to read text by predicting the next word based on the image and text annotations. The decoder generates a token sequence given a prompt. The prompt is represented by a special token for each downstream task. For example, document parsing has a special `parsing` token that is combined with the encoder hidden states to parse the document into a structured output format (JSON). ## Reinforcement learning ### Decoder[[rl-decoder]] The Decision and Trajectory Transformer casts the state, action, and reward as a sequence modeling problem. The [Decision Transformer](model_doc/decision_transformer) generates a series of actions that lead to a future desired return based on returns-to-go, past states, and actions. For the last *K* timesteps, each of the three modalities are converted into token embeddings and processed by a GPT-like model to predict a future action token. [Trajectory Transformer](model_doc/trajectory_transformer) also tokenizes the states, actions, and rewards and processes them with a GPT architecture. Unlike the Decision Transformer, which is focused on reward conditioning, the Trajectory Transformer generates future actions with beam search." sagemaker.md," # Run training on Amazon SageMaker The documentation has been moved to [hf.co/docs/sagemaker](https://huggingface.co/docs/sagemaker). This page will be removed in `transformers` 5.0. ### Table of Content - [Train Hugging Face models on Amazon SageMaker with the SageMaker Python SDK](https://huggingface.co/docs/sagemaker/train) - [Deploy Hugging Face models to Amazon SageMaker with the SageMaker Python SDK](https://huggingface.co/docs/sagemaker/inference) " contributing.md, perf_train_cpu.md," # Efficient Training on CPU This guide focuses on training large models efficiently on CPU. ## Mixed precision with IPEX IPEX is optimized for CPUs with AVX-512 or above, and functionally works for CPUs with only AVX2. So, it is expected to bring performance benefit for Intel CPU generations with AVX-512 or above while CPUs with only AVX2 (e.g., AMD CPUs or older Intel CPUs) might result in a better performance under IPEX, but not guaranteed. IPEX provides performance optimizations for CPU training with both Float32 and BFloat16. The usage of BFloat16 is the main focus of the following sections. Low precision data type BFloat16 has been natively supported on the 3rd Generation Xeon® Scalable Processors (aka Cooper Lake) with AVX512 instruction set and will be supported on the next generation of Intel® Xeon® Scalable Processors with Intel® Advanced Matrix Extensions (Intel® AMX) instruction set with further boosted performance. The Auto Mixed Precision for CPU backend has been enabled since PyTorch-1.10. At the same time, the support of Auto Mixed Precision with BFloat16 for CPU and BFloat16 optimization of operators has been massively enabled in Intel® Extension for PyTorch, and partially upstreamed to PyTorch master branch. Users can get better performance and user experience with IPEX Auto Mixed Precision. 
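As a minimal sketch of what BF16 auto mixed precision with IPEX looks like outside of the Trainer (assuming IPEX is installed; a toy linear model stands in for a real network):

```python
import torch
import intel_extension_for_pytorch as ipex

# Toy model and optimizer standing in for a real training setup
model = torch.nn.Linear(128, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
model.train()

# Prepare the model and optimizer for BF16 training on CPU
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

inputs = torch.randn(4, 128)
labels = torch.randint(0, 2, (4,))

# Run the forward pass under the CPU autocast context so eligible ops use BFloat16
with torch.cpu.amp.autocast(dtype=torch.bfloat16):
    loss = torch.nn.functional.cross_entropy(model(inputs), labels)
loss.backward()
optimizer.step()
```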
Check more detailed information for [Auto Mixed Precision](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/amp.html). ### IPEX installation: IPEX release is following PyTorch, to install via pip: | PyTorch Version | IPEX version | | :---------------: | :----------: | | 1.13 | 1.13.0+cpu | | 1.12 | 1.12.300+cpu | | 1.11 | 1.11.200+cpu | | 1.10 | 1.10.100+cpu | pip install intel_extension_for_pytorch== -f https://developer.intel.com/ipex-whl-stable-cpu Check more approaches for [IPEX installation](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html). ### Usage in Trainer To enable auto mixed precision with IPEX in Trainer, users should add `use_ipex`, `bf16` and `no_cuda` in training command arguments. Take an example of the use cases on [Transformers question-answering](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) - Training with IPEX using BF16 auto mixed precision on CPU: python run_qa.py \ --model_name_or_path bert-base-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ --use_ipex \ --bf16 --no_cuda ### Practice example Blog: [Accelerating PyTorch Transformers with Intel Sapphire Rapids](https://huggingface.co/blog/intel-sapphire-rapids) " transformers_agents.md," # Transformers Agents Transformers Agents is an experimental API which is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change. Transformers version v4.29.0, building on the concept of *tools* and *agents*. You can play with in [this colab](https://colab.research.google.com/drive/1c7MHD-T1forUPGcC_jlwsIptOzpG3hSj). In short, it provides a natural language API on top of transformers: we define a set of curated tools and design an agent to interpret natural language and to use these tools. It is extensible by design; we curated some relevant tools, but we'll show you how the system can be extended easily to use any tool developed by the community. Let's start with a few examples of what can be achieved with this new API. It is particularly powerful when it comes to multimodal tasks, so let's take it for a spin to generate images and read text out loud. agent.run(""Caption the following image"", image=image) | **Input** | **Output** | |-----------------------------------------------------------------------------------------------------------------------------|-----------------------------------| | | A beaver is swimming in the water | --- agent.run(""Read the following text out loud"", text=text) | **Input** | **Output** | |-------------------------------------------------------------------------------------------------------------------------|----------------------------------------------| | A beaver is swimming in the water | your browser does not support the audio element. --- agent.run( ""In the following `document`, where will the TRRF Scientific Advisory Council Meeting take place?"", document=document, ) | **Input** | **Output** | |-----------------------------------------------------------------------------------------------------------------------------|----------------| | | ballroom foyer | ## Quickstart Before being able to use `agent.run`, you will need to instantiate an agent, which is a large language model (LLM). 
We provide support for openAI models as well as opensource alternatives from BigCode and OpenAssistant. The openAI models perform better (but require you to have an openAI API key, so cannot be used for free); Hugging Face is providing free access to endpoints for BigCode and OpenAssistant models. To start with, please install the `agents` extras in order to install all default dependencies. ```bash pip install transformers[agents] To use openAI models, you instantiate an [`OpenAiAgent`] after installing the `openai` dependency: ```bash pip install openai from transformers import OpenAiAgent agent = OpenAiAgent(model=""text-davinci-003"", api_key="""") To use BigCode or OpenAssistant, start by logging in to have access to the Inference API: from huggingface_hub import login login("""") Then, instantiate the agent from transformers import HfAgent # Starcoder agent = HfAgent(""https://api-inference.huggingface.co/models/bigcode/starcoder"") # StarcoderBase # agent = HfAgent(""https://api-inference.huggingface.co/models/bigcode/starcoderbase"") # OpenAssistant # agent = HfAgent(url_endpoint=""https://api-inference.huggingface.co/models/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5"") This is using the inference API that Hugging Face provides for free at the moment. If you have your own inference endpoint for this model (or another one) you can replace the URL above with your URL endpoint. StarCoder and OpenAssistant are free to use and perform admirably well on simple tasks. However, the checkpoints don't hold up when handling more complex prompts. If you're facing such an issue, we recommend trying out the OpenAI model which, while sadly not open-source, performs better at this given time. You're now good to go! Let's dive into the two APIs that you now have at your disposal. ### Single execution (run) The single execution method is when using the [`~Agent.run`] method of the agent: agent.run(""Draw me a picture of rivers and lakes."") It automatically selects the tool (or tools) appropriate for the task you want to perform and runs them appropriately. It can perform one or several tasks in the same instruction (though the more complex your instruction, the more likely the agent is to fail). agent.run(""Draw me a picture of the sea then transform the picture to add an island"") Every [`~Agent.run`] operation is independent, so you can run it several times in a row with different tasks. Note that your `agent` is just a large-language model, so small variations in your prompt might yield completely different results. It's important to explain as clearly as possible the task you want to perform. We go more in-depth on how to write good prompts [here](custom_tools#writing-good-user-inputs). If you'd like to keep a state across executions or to pass non-text objects to the agent, you can do so by specifying variables that you would like the agent to use. For example, you could generate the first image of rivers and lakes, and ask the model to update that picture to add an island by doing the following: thon picture = agent.run(""Generate a picture of rivers and lakes."") updated_picture = agent.run(""Transform the image in `picture` to add an island to it."", picture=picture) This can be helpful when the model is unable to understand your request and mixes tools. 
An example would be: agent.run(""Draw me the picture of a capybara swimming in the sea"") Here, the model could interpret in two ways: - Have the `text-to-image` generate a capybara swimming in the sea - Or, have the `text-to-image` generate capybara, then use the `image-transformation` tool to have it swim in the sea In case you would like to force the first scenario, you could do so by passing it the prompt as an argument: agent.run(""Draw me a picture of the `prompt`"", prompt=""a capybara swimming in the sea"") ### Chat-based execution (chat) The agent also has a chat-based approach, using the [`~Agent.chat`] method: agent.chat(""Generate a picture of rivers and lakes"") agent.chat(""Transform the picture so that there is a rock in there"") This is an interesting approach when you want to keep the state across instructions. It's better for experimentation, but will tend to be much better at single instructions rather than complex instructions (which the [`~Agent.run`] method is better at handling). This method can also take arguments if you would like to pass non-text types or specific prompts. ### ⚠️ Remote execution For demonstration purposes and so that it could be used with all setups, we had created remote executors for several of the default tools the agent has access for the release. These are created using [inference endpoints](https://huggingface.co/inference-endpoints). We have turned these off for now, but in order to see how to set up remote executors tools yourself, we recommend reading the [custom tool guide](./custom_tools). ### What's happening here? What are tools, and what are agents? #### Agents The ""agent"" here is a large language model, and we're prompting it so that it has access to a specific set of tools. LLMs are pretty good at generating small samples of code, so this API takes advantage of that by prompting the LLM gives a small sample of code performing a task with a set of tools. This prompt is then completed by the task you give your agent and the description of the tools you give it. This way it gets access to the doc of the tools you are using, especially their expected inputs and outputs, and can generate the relevant code. #### Tools Tools are very simple: they're a single function, with a name, and a description. We then use these tools' descriptions to prompt the agent. Through the prompt, we show the agent how it would leverage tools to perform what was requested in the query. This is using brand-new tools and not pipelines, because the agent writes better code with very atomic tools. Pipelines are more refactored and often combine several tasks in one. Tools are meant to be focused on one very simple task only. #### Code-execution?! This code is then executed with our small Python interpreter on the set of inputs passed along with your tools. We hear you screaming ""Arbitrary code execution!"" in the back, but let us explain why that is not the case. The only functions that can be called are the tools you provided and the print function, so you're already limited in what can be executed. You should be safe if it's limited to Hugging Face tools. Then, we don't allow any attribute lookup or imports (which shouldn't be needed anyway for passing along inputs/outputs to a small set of functions) so all the most obvious attacks (and you'd need to prompt the LLM to output them anyway) shouldn't be an issue. 
If you want to be on the super safe side, you can execute the run() method with the additional argument return_code=True, in which case the agent will just return the code to execute and you can decide whether to do it or not. The execution will stop at any line trying to perform an illegal operation or if there is a regular Python error with the code generated by the agent. ### A curated set of tools We identify a set of tools that can empower such agents. Here is an updated list of the tools we have integrated in `transformers`: - **Document question answering**: given a document (such as a PDF) in image format, answer a question on this document ([Donut](./model_doc/donut)) - **Text question answering**: given a long text and a question, answer the question in the text ([Flan-T5](./model_doc/flan-t5)) - **Unconditional image captioning**: Caption the image! ([BLIP](./model_doc/blip)) - **Image question answering**: given an image, answer a question on this image ([VILT](./model_doc/vilt)) - **Image segmentation**: given an image and a prompt, output the segmentation mask of that prompt ([CLIPSeg](./model_doc/clipseg)) - **Speech to text**: given an audio recording of a person talking, transcribe the speech into text ([Whisper](./model_doc/whisper)) - **Text to speech**: convert text to speech ([SpeechT5](./model_doc/speecht5)) - **Zero-shot text classification**: given a text and a list of labels, identify to which label the text corresponds the most ([BART](./model_doc/bart)) - **Text summarization**: summarize a long text in one or a few sentences ([BART](./model_doc/bart)) - **Translation**: translate the text into a given language ([NLLB](./model_doc/nllb)) These tools have an integration in transformers, and can be used manually as well, for example: from transformers import load_tool tool = load_tool(""text-to-speech"") audio = tool(""This is a text to speech tool"") ### Custom tools While we identify a curated set of tools, we strongly believe that the main value provided by this implementation is the ability to quickly create and share custom tools. By pushing the code of a tool to a Hugging Face Space or a model repository, you're then able to leverage the tool directly with the agent. We've added a few **transformers-agnostic** tools to the [`huggingface-tools` organization](https://huggingface.co/huggingface-tools): - **Text downloader**: to download a text from a web URL - **Text to image**: generate an image according to a prompt, leveraging stable diffusion - **Image transformation**: modify an image given an initial image and a prompt, leveraging instruct pix2pix stable diffusion - **Text to video**: generate a small video according to a prompt, leveraging damo-vilab The text-to-image tool we have been using since the beginning is a remote tool that lives in [*huggingface-tools/text-to-image*](https://huggingface.co/spaces/huggingface-tools/text-to-image)! We will continue releasing such tools on this and other organizations, to further supercharge this implementation. The agents have by default access to tools that reside on [`huggingface-tools`](https://huggingface.co/huggingface-tools). We explain how to you can write and share your tools as well as leverage any custom tool that resides on the Hub in [following guide](custom_tools). ### Code generation So far we have shown how to use the agents to perform actions for you. 
However, the agent is only generating code that we then execute using a very restricted Python interpreter. In case you would like to use the code generated in a different setting, the agent can be prompted to return the code, along with tool definition and accurate imports. For example, the following instruction thon agent.run(""Draw me a picture of rivers and lakes"", return_code=True) returns the following code thon from transformers import load_tool image_generator = load_tool(""huggingface-tools/text-to-image"") image = image_generator(prompt=""rivers and lakes"") that you can then modify and execute yourself. " torchscript.md," # Export to TorchScript This is the very beginning of our experiments with TorchScript and we are still exploring its capabilities with variable-input-size models. It is a focus of interest to us and we will deepen our analysis in upcoming releases, with more code examples, a more flexible implementation, and benchmarks comparing Python-based codes with compiled TorchScript. According to the [TorchScript documentation](https://pytorch.org/docs/stable/jit.html): > TorchScript is a way to create serializable and optimizable models from PyTorch code. There are two PyTorch modules, [JIT and TRACE](https://pytorch.org/docs/stable/jit.html), that allow developers to export their models to be reused in other programs like efficiency-oriented C++ programs. We provide an interface that allows you to export 🤗 Transformers models to TorchScript so they can be reused in a different environment than PyTorch-based Python programs. Here, we explain how to export and use our models using TorchScript. Exporting a model requires two things: - model instantiation with the `torchscript` flag - a forward pass with dummy inputs These necessities imply several things developers should be careful about as detailed below. ## TorchScript flag and tied weights The `torchscript` flag is necessary because most of the 🤗 Transformers language models have tied weights between their `Embedding` layer and their `Decoding` layer. TorchScript does not allow you to export models that have tied weights, so it is necessary to untie and clone the weights beforehand. Models instantiated with the `torchscript` flag have their `Embedding` layer and `Decoding` layer separated, which means that they should not be trained down the line. Training would desynchronize the two layers, leading to unexpected results. This is not the case for models that do not have a language model head, as those do not have tied weights. These models can be safely exported without the `torchscript` flag. ## Dummy inputs and standard lengths The dummy inputs are used for a models forward pass. While the inputs' values are propagated through the layers, PyTorch keeps track of the different operations executed on each tensor. These recorded operations are then used to create the *trace* of the model. The trace is created relative to the inputs' dimensions. It is therefore constrained by the dimensions of the dummy input, and will not work for any other sequence length or batch size. When trying with a different size, the following error is raised: `The expanded size of the tensor (3) must match the existing size (7) at non-singleton dimension 2` We recommended you trace the model with a dummy input size at least as large as the largest input that will be fed to the model during inference. Padding can help fill the missing values. 
However, since the model is traced with a larger input size, the dimensions of the matrix will also be large, resulting in more calculations. Be careful of the total number of operations done on each input and follow the performance closely when exporting varying sequence-length models. ## Using TorchScript in Python This section demonstrates how to save and load models as well as how to use the trace for inference. ### Saving a model To export a `BertModel` with TorchScript, instantiate `BertModel` from the `BertConfig` class and then save it to disk under the filename `traced_bert.pt`: thon from transformers import BertModel, BertTokenizer, BertConfig import torch enc = BertTokenizer.from_pretrained(""bert-base-uncased"") # Tokenizing input text text = ""[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"" tokenized_text = enc.tokenize(text) # Masking one of the input tokens masked_index = 8 tokenized_text[masked_index] = ""[MASK]"" indexed_tokens = enc.convert_tokens_to_ids(tokenized_text) segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1] # Creating a dummy input tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) dummy_input = [tokens_tensor, segments_tensors] # Initializing the model with the torchscript flag # Flag set to True even though it is not necessary as this model does not have an LM Head. config = BertConfig( vocab_size_or_config_json_file=32000, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, torchscript=True, ) # Instantiating the model model = BertModel(config) # The model needs to be in evaluation mode model.eval() # If you are instantiating the model with *from_pretrained* you can also easily set the TorchScript flag model = BertModel.from_pretrained(""bert-base-uncased"", torchscript=True) # Creating the trace traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors]) torch.jit.save(traced_model, ""traced_bert.pt"") ### Loading a model Now you can load the previously saved `BertModel`, `traced_bert.pt`, from disk and use it on the previously initialised `dummy_input`: thon loaded_model = torch.jit.load(""traced_bert.pt"") loaded_model.eval() all_encoder_layers, pooled_output = loaded_model(*dummy_input) ### Using a traced model for inference Use the traced model for inference by using its `__call__` dunder method: thon traced_model(tokens_tensor, segments_tensors) ## Deploy Hugging Face TorchScript models to AWS with the Neuron SDK AWS introduced the [Amazon EC2 Inf1](https://aws.amazon.com/ec2/instance-types/inf1/) instance family for low cost, high performance machine learning inference in the cloud. The Inf1 instances are powered by the AWS Inferentia chip, a custom-built hardware accelerator, specializing in deep learning inferencing workloads. [AWS Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/#) is the SDK for Inferentia that supports tracing and optimizing transformers models for deployment on Inf1. The Neuron SDK provides: 1. Easy-to-use API with one line of code change to trace and optimize a TorchScript model for inference in the cloud. 2. Out of the box performance optimizations for [improved cost-performance](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/benchmark/>). 3. 
Support for Hugging Face transformers models built with either [PyTorch](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.html) or [TensorFlow](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/tensorflow/huggingface_bert/huggingface_bert.html). ### Implications Transformers models based on the [BERT (Bidirectional Encoder Representations from Transformers)](https://huggingface.co/docs/transformers/main/model_doc/bert) architecture, or its variants such as [distilBERT](https://huggingface.co/docs/transformers/main/model_doc/distilbert) and [roBERTa](https://huggingface.co/docs/transformers/main/model_doc/roberta) run best on Inf1 for non-generative tasks such as extractive question answering, sequence classification, and token classification. However, text generation tasks can still be adapted to run on Inf1 according to this [AWS Neuron MarianMT tutorial](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/transformers-marianmt.html). More information about models that can be converted out of the box on Inferentia can be found in the [Model Architecture Fit](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/models/models-inferentia.html#models-inferentia) section of the Neuron documentation. ### Dependencies Using AWS Neuron to convert models requires a [Neuron SDK environment](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-frameworks/pytorch-neuron/index.html#installation-guide) which comes preconfigured on [AWS Deep Learning AMI](https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-inferentia-launching.html). ### Converting a model for AWS Neuron Convert a model for AWS NEURON using the same code from [Using TorchScript in Python](torchscript#using-torchscript-in-python) to trace a `BertModel`. Import the `torch.neuron` framework extension to access the components of the Neuron SDK through a Python API: thon from transformers import BertModel, BertTokenizer, BertConfig import torch import torch.neuron You only need to modify the following line: - torch.jit.trace(model, [tokens_tensor, segments_tensors]) + torch.neuron.trace(model, [token_tensor, segments_tensors]) This enables the Neuron SDK to trace the model and optimize it for Inf1 instances. To learn more about AWS Neuron SDK features, tools, example tutorials and latest updates, please see the [AWS NeuronSDK documentation](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/index.html). " perf_train_special.md," # Training on Specialized Hardware Note: Most of the strategies introduced in the [single GPU section](perf_train_gpu_one) (such as mixed precision training or gradient accumulation) and [multi-GPU section](perf_train_gpu_many) are generic and apply to training models in general so make sure to have a look at it before diving into this section. This document will be completed soon with information on how to train on specialized hardware. " autoclass_tutorial.md," # Load pretrained instances with an AutoClass With so many different Transformer architectures, it can be challenging to create one for your checkpoint. As a part of 🤗 Transformers core philosophy to make the library easy, simple and flexible to use, an `AutoClass` automatically infers and loads the correct architecture from a given checkpoint. 
The `from_pretrained()` method lets you quickly load a pretrained model for any architecture so you don't have to devote time and resources to train a model from scratch. Producing this type of checkpoint-agnostic code means if your code works for one checkpoint, it will work with another checkpoint - as long as it was trained for a similar task - even if the architecture is different. Remember, architecture refers to the skeleton of the model and checkpoints are the weights for a given architecture. For example, [BERT](https://huggingface.co/bert-base-uncased) is an architecture, while `bert-base-uncased` is a checkpoint. Model is a general term that can mean either architecture or checkpoint. In this tutorial, learn to: * Load a pretrained tokenizer. * Load a pretrained image processor * Load a pretrained feature extractor. * Load a pretrained processor. * Load a pretrained model. ## AutoTokenizer Nearly every NLP task begins with a tokenizer. A tokenizer converts your input into a format that can be processed by the model. Load a tokenizer with [`AutoTokenizer.from_pretrained`]: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""bert-base-uncased"") Then tokenize your input as shown below: >>> sequence = ""In a hole in the ground there lived a hobbit."" >>> print(tokenizer(sequence)) {'input_ids': [101, 1999, 1037, 4920, 1999, 1996, 2598, 2045, 2973, 1037, 7570, 10322, 4183, 1012, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ## AutoImageProcessor For vision tasks, an image processor processes the image into the correct input format. >>> from transformers import AutoImageProcessor >>> image_processor = AutoImageProcessor.from_pretrained(""google/vit-base-patch16-224"") ## AutoFeatureExtractor For audio tasks, a feature extractor processes the audio signal the correct input format. Load a feature extractor with [`AutoFeatureExtractor.from_pretrained`]: >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained( ""ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"" ) ## AutoProcessor Multimodal tasks require a processor that combines two types of preprocessing tools. For example, the [LayoutLMV2](model_doc/layoutlmv2) model requires an image processor to handle images and a tokenizer to handle text; a processor combines both of them. Load a processor with [`AutoProcessor.from_pretrained`]: >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained(""microsoft/layoutlmv2-base-uncased"") ## AutoModel Finally, the `AutoModelFor` classes let you load a pretrained model for a given task (see [here](model_doc/auto) for a complete list of available tasks). For example, load a model for sequence classification with [`AutoModelForSequenceClassification.from_pretrained`]: >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(""distilbert-base-uncased"") Easily reuse the same checkpoint to load an architecture for a different task: >>> from transformers import AutoModelForTokenClassification >>> model = AutoModelForTokenClassification.from_pretrained(""distilbert-base-uncased"") For PyTorch models, the `from_pretrained()` method uses `torch.load()` which internally uses `pickle` and is known to be insecure. 
In general, never load a model that could have come from an untrusted source, or that could have been tampered with. This security risk is partially mitigated for public models hosted on the Hugging Face Hub, which are [scanned for malware](https://huggingface.co/docs/hub/security-malware) at each commit. See the [Hub documentation](https://huggingface.co/docs/hub/security) for best practices like [signed commit verification](https://huggingface.co/docs/hub/security-gpg#signing-commits-with-gpg) with GPG. TensorFlow and Flax checkpoints are not affected, and can be loaded within PyTorch architectures using the `from_tf` and `from_flax` kwargs for the `from_pretrained` method to circumvent this issue. Generally, we recommend using the `AutoTokenizer` class and the `AutoModelFor` class to load pretrained instances of models. This will ensure you load the correct architecture every time. In the next [tutorial](preprocessing), learn how to use your newly loaded tokenizer, image processor, feature extractor and processor to preprocess a dataset for fine-tuning. Finally, the `TFAutoModelFor` classes let you load a pretrained model for a given task (see [here](model_doc/auto) for a complete list of available tasks). For example, load a model for sequence classification with [`TFAutoModelForSequenceClassification.from_pretrained`]: >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained(""distilbert-base-uncased"") Easily reuse the same checkpoint to load an architecture for a different task: >>> from transformers import TFAutoModelForTokenClassification >>> model = TFAutoModelForTokenClassification.from_pretrained(""distilbert-base-uncased"") Generally, we recommend using the `AutoTokenizer` class and the `TFAutoModelFor` class to load pretrained instances of models. This will ensure you load the correct architecture every time. In the next [tutorial](preprocessing), learn how to use your newly loaded tokenizer, image processor, feature extractor and processor to preprocess a dataset for fine-tuning. " perf_train_gpu_many.md," # Efficient Training on Multiple GPUs If training a model on a single GPU is too slow or if the model's weights do not fit in a single GPU's memory, transitioning to a multi-GPU setup may be a viable option. Prior to making this transition, thoroughly explore all the strategies covered in the [Methods and tools for efficient training on a single GPU](perf_train_gpu_one) as they are universally applicable to model training on any number of GPUs. Once you have employed those strategies and found them insufficient for your case on a single GPU, consider moving to multiple GPUs. Transitioning from a single GPU to multiple GPUs requires the introduction of some form of parallelism, as the workload must be distributed across the resources. Multiple techniques can be employed to achieve parallelism, such as data parallelism, tensor parallelism, and pipeline parallelism. It's important to note that there isn't a one-size-fits-all solution, and the optimal settings depend on the specific hardware configuration you are using. This guide offers an in-depth overview of individual types of parallelism, as well as guidance on ways to combine techniques and choosing an appropriate approach. For step-by-step tutorials on distributed training, please refer to the [🤗 Accelerate documentation](https://huggingface.co/docs/accelerate/index). 
While the main concepts discussed in this guide are likely applicable across frameworks, here we focus on PyTorch-based implementations. Before diving deeper into the specifics of each technique, let's go over the rough decision process when training large models on a large infrastructure. ## Scalability strategy Begin by estimating how much vRAM is required to train your model. For models hosted on the 🤗 Hub, use our [Model Memory Calculator](https://huggingface.co/spaces/hf-accelerate/model-memory-usage), which gives you accurate calculations within a few percent margin. **Parallelization strategy for a single Node / multi-GPU setup** When training a model on a single node with multiple GPUs, your choice of parallelization strategy can significantly impact performance. Here's a breakdown of your options: **Case 1: Your model fits onto a single GPU** If your model can comfortably fit onto a single GPU, you have two primary options: 1. DDP - Distributed DataParallel 2. ZeRO - depending on the situation and configuration used, this method may or may not be faster, however, it's worth experimenting with it. **Case 2: Your model doesn't fit onto a single GPU:** If your model is too large for a single GPU, you have several alternatives to consider: 1. PipelineParallel (PP) 2. ZeRO 3. TensorParallel (TP) With very fast inter-node connectivity (e.g., NVLINK or NVSwitch) all three strategies (PP, ZeRO, TP) should result in similar performance. However, without these, PP will be faster than TP or ZeRO. The degree of TP may also make a difference. It's best to experiment with your specific setup to determine the most suitable strategy. TP is almost always used within a single node. That is TP size <= GPUs per node. **Case 3: Largest layer of your model does not fit onto a single GPU** 1. If you are not using ZeRO, you have to use TensorParallel (TP), because PipelineParallel (PP) alone won't be sufficient to accommodate the large layer. 2. If you are using ZeRO, additionally adopt techniques from the [Methods and tools for efficient training on a single GPU](perf_train_gpu_one). **Parallelization strategy for a multi-Node / multi-GPU setup** * When you have fast inter-node connectivity (e.g., NVLINK or NVSwitch) consider using one of these options: 1. ZeRO - as it requires close to no modifications to the model 2. A combination of PipelineParallel(PP) with TensorParallel(TP) and DataParallel(DP) - this approach will result in fewer communications, but requires significant changes to the model * When you have slow inter-node connectivity and still low on GPU memory: 1. Employ a combination of DataParallel(DP) with PipelineParallel(PP), TensorParallel(TP), and ZeRO. In the following sections of this guide we dig deeper into how these different parallelism methods work. ## Data Parallelism Even with only 2 GPUs, you can readily leverage the accelerated training capabilities offered by PyTorch's built-in features, such as `DataParallel` (DP) and `DistributedDataParallel` (DDP). Note that [PyTorch documentation](https://pytorch.org/docs/master/generated/torch.nn.DataParallel.html) recommends to prefer `DistributedDataParallel` (DDP) over `DataParallel` (DP) for multi-GPU training as it works for all models. Let's take a look at how these two methods work and what makes them different. 
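Before comparing the two methods in detail, here is a minimal sketch of what wrapping a model in DDP looks like when you write the training loop yourself (the toy linear model, random data, and the `nccl` backend choice are placeholders; when you train with the [`Trainer`] and launch with `torchrun`, this setup is handled for you):

```python
# Minimal DDP sketch - launch with: torchrun --nproc_per_node=2 ddp_sketch.py
# The linear layer and random inputs stand in for a real model and dataset.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")          # torchrun sets RANK/WORLD_SIZE/MASTER_ADDR
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(10, 10).to(local_rank)   # placeholder model
    ddp_model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=1e-3)

    for _ in range(3):
        batch = torch.randn(8, 10, device=local_rank)  # each rank consumes its own mini-batch
        loss = ddp_model(batch).sum()
        loss.backward()                                # local gradients are averaged across processes here
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```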
### DataParallel vs DistributedDataParallel To understand the key differences in inter-GPU communication overhead between the two methods, let's review the processes per batch: [DDP](https://pytorch.org/docs/master/notes/ddp.html): - At the start time the main process replicates the model once from GPU 0 to the rest of GPUs - Then for each batch: 1. Each GPU directly consumes its mini-batch of data. 2. During `backward`, once the local gradients are ready, they are averaged across all processes. [DP](https://pytorch.org/docs/master/generated/torch.nn.DataParallel.html): For each batch: 1. GPU 0 reads the batch of data and then sends a mini-batch to each GPU. 2. The up-to-date model is replicated from GPU 0 to each GPU. 3. `forward` is executed, and output from each GPU is sent to GPU 0 to compute the loss. 4. The loss is distributed from GPU 0 to all GPUs, and `backward` is run. 5. Gradients from each GPU are sent to GPU 0 and averaged. Key differences include: 1. DDP performs only a single communication per batch - sending gradients, while DP performs five different data exchanges per batch. DDP copies data using [torch.distributed](https://pytorch.org/docs/master/distributed.html), while DP copies data within the process via Python threads (which introduces limitations associated with GIL). As a result, **`DistributedDataParallel` (DDP) is generally faster than `DataParallel` (DP)** unless you have slow GPU card inter-connectivity. 2. Under DP, GPU 0 performs significantly more work than other GPUs, resulting in GPU under-utilization. 3. DDP supports distributed training across multiple machines, whereas DP does not. This is not an exhaustive list of differences between DP and DDP, however, other nuances are out of scope of this guide. You can get a deeper understanding of these methods by reading this [article](https://www.telesens.co/2019/04/04/distributed-data-parallel-training-using-pytorch-on-aws/). Let's illustrate the differences between DP and DDP with an experiment. We'll benchmark the differences between DP and DDP with an added context of NVLink presence: * Hardware: 2x TITAN RTX 24GB each + NVlink with 2 NVLinks (`NV2` in `nvidia-smi topo -m`). * Software: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0`. To disable the NVLink feature on one of the benchmarks, we use `NCCL_P2P_DISABLE=1`. 
Here is the benchmarking code and outputs: **DP** rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \ python examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 110.5948, 'train_samples_per_second': 1.808, 'epoch': 0.69} **DDP w/ NVlink** rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \ python -m torch.distributed.launch --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69} **DDP w/o NVlink** rm -r /tmp/test-clm; NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1 \ python -m torch.distributed.launch --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69} Here are the same benchmarking results gathered in a table for convenience: | Type | NVlink | Time | | :----- | ----- | ---: | | 2:DP | Y | 110s | | 2:DDP | Y | 101s | | 2:DDP | N | 131s | As you can see, in this case DP is ~10% slower than DDP with NVlink, but ~15% faster than DDP without NVlink. The real difference will depend on how much data each GPU needs to sync with the others - the more there is to sync, the more a slow link will impede the overall runtime. ## ZeRO Data Parallelism ZeRO-powered data parallelism (ZeRO-DP) is illustrated in the following diagram from this [blog post](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/). While it may appear complex, it is a very similar concept to `DataParallel` (DP). The difference is that instead of replicating the full model parameters, gradients and optimizer states, each GPU stores only a slice of it. Then, at run-time when the full layer parameters are needed just for the given layer, all GPUs synchronize to give each other parts that they miss. To illustrate this idea, consider a simple model with 3 layers (La, Lb, and Lc), where each layer has 3 parameters. Layer La, for example, has weights a0, a1 and a2: La | Lb | Lc ---|----|--- a0 | b0 | c0 a1 | b1 | c1 a2 | b2 | c2 If we have 3 GPUs, ZeRO-DP splits the model onto 3 GPUs like so: GPU0: La | Lb | Lc ---|----|--- a0 | b0 | c0 GPU1: La | Lb | Lc ---|----|--- a1 | b1 | c1 GPU2: La | Lb | Lc ---|----|--- a2 | b2 | c2 In a way, this is the same horizontal slicing as tensor parallelism, as opposed to Vertical slicing, where one puts whole layer-groups on different GPUs. Now let's see how this works: Each of these GPUs will get the usual mini-batch as it works in DP: x0 => GPU0 x1 => GPU1 x2 => GPU2 The inputs are passed without modifications as if they would be processed by the original model. First, the inputs get to the layer `La`. What happens at this point? On GPU0: the x0 mini-batch requires the a0, a1, a2 parameters to do its forward path through the layer, but the GPU0 has only a0. It will get a1 from GPU1 and a2 from GPU2, bringing all the pieces of the model together. In parallel, GPU1 gets another mini-batch - x1. 
GPU1 has the a1 parameter, but needs a0 and a2, so it gets those from GPU0 and GPU2. Same happens to GPU2 that gets the mini-batch x2. It gets a0 and a1 from GPU0 and GPU1. This way each of the 3 GPUs gets the full tensors reconstructed and makes a forward pass with its own mini-batch. As soon as the calculation is done, the data that is no longer needed gets dropped - it's only used during the calculation. The reconstruction is done efficiently via a pre-fetch. Then the whole process is repeated for layer Lb, then Lc forward-wise, and then backward Lc -> Lb -> La. This mechanism is similar to an efficient group backpacking strategy: person A carries the tent, person B carries the stove, and person C carries the axe. Each night they all share what they have with others and get from others what they don't have, and in the morning they pack up their allocated type of gear and continue on their way. This is what ZeRO DP/Sharded DDP is. Compare this strategy to the simple one where each person has to carry their own tent, stove and axe (similar to DataParallel (DP and DDP) in PyTorch), which would be far more inefficient. While reading the literature on this topic you may encounter the following synonyms: Sharded, Partitioned. If you pay close attention the way ZeRO partitions the model's weights - it looks very similar to tensor parallelism which will be discussed later. This is because it partitions/shards each layer's weights, unlike vertical model parallelism which is discussed next. Implementations: - [DeepSpeed](https://www.deepspeed.ai/tutorials/zero/) ZeRO-DP stages 1+2+3 - [`Accelerate` integration](https://huggingface.co/docs/accelerate/en/usage_guides/deepspeed) - [`transformers` integration](main_classes/trainer#trainer-integrations) ## From Naive Model Parallelism to Pipeline Parallelism To explain Pipeline parallelism, we'll first look into Naive Model Parallelism (MP), also known as Vertical MP. This approach involves distributing groups of model layers across multiple GPUs by assigning specific layers to specific GPUs with `.to()`. As data flows through these layers, it is moved to the same GPU as the layer, while the other layers remain untouched. We refer to this Model parallelism as ""Vertical"" because of how models are typically visualized. For example, the following diagram shows an 8-layer model split vertically into two slices, placing layers 0-3 onto GPU0 and 4-7 to GPU1: =================== =================== | 0 | 1 | 2 | 3 | | 4 | 5 | 6 | 7 | =================== =================== GPU0 GPU1 In this example, when data moves from layer 0 to 3, it's no different from regular forward pass. However, passing data from layer 3 to 4 requires moving it from GPU0 to GPU1, introducing a communication overhead. If the participating GPUs are on the same compute node (e.g. same physical machine) this copying is fast, but if the GPUs are distributed across different compute nodes (e.g. multiple machines), the communication overhead could be substantially greater. Following that, layers 4 to 7 work as they would in the original model. Upon completion of the 7th layer, there is often a need to send the data back to layer 0 where the labels are (or alternatively send the labels to the last layer). Now the loss can be computed and the optimizer can do its work. 
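To make the mechanics of naive vertical MP concrete, here is a toy sketch (an illustrative stack of linear layers, not how 🤗 Transformers implements it) that places the first half of the layers on `cuda:0` and the second half on `cuda:1`, explicitly moving the activations between devices:

```python
# Toy sketch of naive vertical model parallelism on 2 GPUs.
# Only one GPU is busy at any given moment, and the hidden states are copied across devices.
import torch
import torch.nn as nn

class NaiveVerticalMP(nn.Module):
    def __init__(self, hidden_size=1024, num_layers=8):
        super().__init__()
        half = num_layers // 2
        self.first_half = nn.Sequential(*[nn.Linear(hidden_size, hidden_size) for _ in range(half)]).to("cuda:0")
        self.second_half = nn.Sequential(*[nn.Linear(hidden_size, hidden_size) for _ in range(half)]).to("cuda:1")

    def forward(self, hidden_states):
        hidden_states = self.first_half(hidden_states.to("cuda:0"))
        hidden_states = hidden_states.to("cuda:1")     # the device-to-device copy discussed above
        return self.second_half(hidden_states)

model = NaiveVerticalMP()
output = model(torch.randn(4, 1024))  # ends up on cuda:1, where the labels and loss must also live
```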
Naive Model Parallelism comes with several shortcomings: - **All but one GPU are idle at any given moment**: if 4 GPUs are used, it's nearly identical to quadrupling the amount of memory of a single GPU, and ignoring the rest of the hardware. - **Overhead in data transfer between devices**: E.g. 4x 6GB cards will be able to accommodate the same size as 1x 24GB card using naive MP, but a single 24GB card will complete the training faster, because it doesn't have the data copying overhead. But, say, if you have 40GB cards and need to fit a 45GB model, you can do it with 4x 40GB cards (but barely, because of the gradient and optimizer states). - **Copying shared embeddings**: Shared embeddings may need to get copied back and forth between GPUs. Now that you are familiar with how the naive approach to model parallelism works and its shortcomings, let's look at Pipeline Parallelism (PP). PP is almost identical to naive MP, but it solves the GPU idling problem by chunking the incoming batch into micro-batches and artificially creating a pipeline, which allows different GPUs to concurrently participate in the computation process. The following illustration from the [GPipe paper](https://ai.googleblog.com/2019/03/introducing-gpipe-open-source-library.html) shows the naive MP on the top, and PP on the bottom: At the bottom of the diagram, you can observe that the Pipeline Parallelism (PP) approach minimizes the number of idle GPU zones, referred to as 'bubbles'. Both parts of the diagram show a parallelism level of degree 4, meaning that 4 GPUs are involved in the pipeline. You can see that there's a forward path of 4 pipe stages (F0, F1, F2 and F3) followed by a backward path in reverse order (B3, B2, B1, and B0). PP introduces a new hyperparameter to tune - `chunks`, which determines how many data chunks are sent in a sequence through the same pipe stage. For example, in the bottom diagram you can see `chunks=4`. GPU0 performs the same forward path on chunks 0, 1, 2 and 3 (F0,0, F0,1, F0,2, F0,3) and then it waits for the other GPUs to complete their work. Only when the other GPUs begin to complete their work does GPU0 start working again, doing the backward path for chunks 3, 2, 1 and 0 (B0,3, B0,2, B0,1, B0,0). Note that this is the same concept as gradient accumulation steps. PyTorch uses `chunks`, while DeepSpeed refers to the same hyperparameter as gradient accumulation steps. Because of the chunks, PP introduces the notion of micro-batches (MBS). DP splits the global data batch size into mini-batches, so if you have a DP degree of 4, a global batch size of 1024 gets split up into 4 mini-batches of 256 each (1024/4). And if the number of `chunks` (or GAS) is 32, we end up with a micro-batch size of 8 (256/32). Each Pipeline stage works with a single micro-batch at a time. To calculate the global batch size of the DP + PP setup, use the formula: `mbs * chunks * dp_degree` (`8 * 32 * 4 = 1024`). With `chunks=1` you end up with the naive MP, which is inefficient. With a large `chunks` value you end up with tiny micro-batch sizes, which is also inefficient. For this reason, we encourage you to experiment with the `chunks` value to find the one that leads to the most efficient GPU utilization. You may notice a bubble of ""dead"" time on the diagram that can't be parallelized because the last `forward` stage has to wait for `backward` to complete the pipeline. 
The purpose of finding the best value for `chunks` is to enable a high concurrent GPU utilization across all participating GPUs which translates to minimizing the size of the bubble. Pipeline API solutions have been implemented in: - PyTorch - DeepSpeed - Megatron-LM These come with some shortcomings: - They have to modify the model quite heavily, because Pipeline requires one to rewrite the normal flow of modules into a `nn.Sequential` sequence of the same, which may require changes to the design of the model. - Currently the Pipeline API is very restricted. If you had a bunch of Python variables being passed in the very first stage of the Pipeline, you will have to find a way around it. Currently, the pipeline interface requires either a single Tensor or a tuple of Tensors as the only input and output. These tensors must have a batch size as the very first dimension, since pipeline is going to chunk the mini batch into micro-batches. Possible improvements are being discussed here https://github.com/pytorch/pytorch/pull/50693 - Conditional control flow at the level of pipe stages is not possible - e.g., Encoder-Decoder models like T5 require special workarounds to handle a conditional encoder stage. - They have to arrange each layer so that the output of one layer becomes an input to the other layer. More recent solutions include: - Varuna - Sagemaker We have not experimented with Varuna and SageMaker but their papers report that they have overcome the list of problems mentioned above and that they require smaller changes to the user's model. Implementations: - [PyTorch](https://pytorch.org/docs/stable/pipeline.html) (initial support in pytorch-1.8, and progressively getting improved in 1.9 and more so in 1.10). Some [examples](https://github.com/pytorch/pytorch/blob/master/benchmarks/distributed/pipeline/pipe.py) - [DeepSpeed](https://www.deepspeed.ai/tutorials/pipeline/) - [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) has an internal implementation - no API. - [Varuna](https://github.com/microsoft/varuna) - [SageMaker](https://arxiv.org/abs/2111.05972) - this is a proprietary solution that can only be used on AWS. - [OSLO](https://github.com/tunib-ai/oslo) - this is implemented based on the Hugging Face Transformers. 🤗 Transformers status: as of this writing none of the models supports full-PP. GPT2 and T5 models have naive MP support. The main obstacle is being unable to convert the models to `nn.Sequential` and have all the inputs to be Tensors. This is because currently the models include many features that make the conversion very complicated, and will need to be removed to accomplish that. DeepSpeed and Megatron-LM integrations are available in [🤗 Accelerate](https://huggingface.co/docs/accelerate/main/en/usage_guides/deepspeed) Other approaches: DeepSpeed, Varuna and SageMaker use the concept of an [Interleaved Pipeline](https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-core-features.html) Here the bubble (idle time) is further minimized by prioritizing backward passes. Varuna further attempts to improve the schedule by using simulations to discover the most efficient scheduling. OSLO has pipeline parallelism implementation based on the Transformers without `nn.Sequential` conversion. ## Tensor Parallelism In Tensor Parallelism, each GPU processes a slice of a tensor and only aggregates the full tensor for operations requiring it. 
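Before getting into the details, here is a toy numerical sketch (plain single-device PyTorch, not a TP implementation) of the core idea: if a weight matrix is split column-wise, each shard can be multiplied and passed through the activation independently, and concatenating the partial outputs reproduces the full result:

```python
# Column-wise sharding of a weight matrix: each shard could live on its own GPU,
# and no synchronization is needed until the partial outputs are gathered.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
X = torch.randn(4, 16)            # a batch of 4 input vectors
A = torch.randn(16, 32)           # the full weight matrix
A1, A2 = A.chunk(2, dim=1)        # two column shards, as if each lived on its own GPU

Y_full = F.gelu(X @ A)
Y_sharded = torch.cat([F.gelu(X @ A1), F.gelu(X @ A2)], dim=1)

print(torch.allclose(Y_full, Y_sharded))  # True
```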
To describe this method, this section of the guide relies on the concepts and diagrams from the [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) paper: [Efficient Large-Scale Language Model Training on GPU Clusters](https://arxiv.org/abs/2104.04473). The main building block of any transformer is a fully connected `nn.Linear` followed by a nonlinear activation `GeLU`. The dot-product part of it, following the Megatron paper's notation, can be written as `Y = GeLU(XA)`, where `X` is an input vector, `Y` is the output vector, and `A` is the weight matrix. If we look at the computation in matrix form, you can see how the matrix multiplication can be split between multiple GPUs: If we split the weight matrix `A` column-wise across `N` GPUs and perform matrix multiplications `XA_1` through `XA_n` in parallel, then we will end up with `N` output vectors `Y_1, Y_2, ..., Y_n` which can be fed into `GeLU` independently: Using this principle, we can update a multi-layer perceptron of arbitrary depth, without the need for any synchronization between GPUs until the very end, where we need to reconstruct the output vector from shards. The Megatron-LM paper authors provide a helpful illustration for that: Parallelizing the multi-headed attention layers is even simpler, since they are already inherently parallel, due to having multiple independent heads! Special considerations: TP requires a very fast network, and therefore it's not advisable to do TP across more than one node. Practically, if a node has 4 GPUs, the highest TP degree is therefore 4. If you need a TP degree of 8, you need to use nodes that have at least 8 GPUs. This section is based on the original, much more [detailed TP overview](https://github.com/huggingface/transformers/issues/10321#issuecomment-783543530) by [@anton-l](https://github.com/anton-l). Alternative names: - DeepSpeed calls it [tensor slicing](https://www.deepspeed.ai/training/#model-parallelism) Implementations: - [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) has an internal implementation, as it's very model-specific - [parallelformers](https://github.com/tunib-ai/parallelformers) (only inference at the moment) - [SageMaker](https://arxiv.org/abs/2111.05972) - this is a proprietary solution that can only be used on AWS. - [OSLO](https://github.com/tunib-ai/oslo) has a tensor parallelism implementation based on Transformers. SageMaker combines TP with DP for more efficient processing. 🤗 Transformers status: - core: not yet implemented in the core - but if you want inference, [parallelformers](https://github.com/tunib-ai/parallelformers) provides this support for most of our models. So until this is implemented in the core you can use theirs. And hopefully training mode will be supported too. - DeepSpeed-Inference also supports our BERT, GPT-2, and GPT-Neo models in their super-fast CUDA-kernel-based inference mode, see more [here](https://www.deepspeed.ai/tutorials/inference-tutorial/) 🤗 Accelerate integrates with [TP from Megatron-LM](https://huggingface.co/docs/accelerate/v0.23.0/en/usage_guides/megatron_lm). ## Data Parallelism + Pipeline Parallelism The following diagram from the DeepSpeed [pipeline tutorial](https://www.deepspeed.ai/tutorials/pipeline/) demonstrates how one can combine DP with PP. Here it's important to see how DP rank 0 doesn't see GPU2 and DP rank 1 doesn't see GPU3. To DP there are just GPUs 0 and 1, where it feeds data as if there were just 2 GPUs. GPU0 ""secretly"" offloads some of its load to GPU2 using PP. 
And GPU1 does the same by enlisting GPU3 to its aid. Since each dimension requires at least 2 GPUs, here you'd need at least 4 GPUs. Implementations: - [DeepSpeed](https://github.com/microsoft/DeepSpeed) - [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) - [Varuna](https://github.com/microsoft/varuna) - [SageMaker](https://arxiv.org/abs/2111.05972) - [OSLO](https://github.com/tunib-ai/oslo) 🤗 Transformers status: not yet implemented ## Data Parallelism + Pipeline Parallelism + Tensor Parallelism To get an even more efficient training a 3D parallelism is used where PP is combined with TP and DP. This can be seen in the following diagram. This diagram is from a blog post [3D parallelism: Scaling to trillion-parameter models](https://www.microsoft.com/en-us/research/blog/deepspeed-extreme-scale-model-training-for-everyone/), which is a good read as well. Since each dimension requires at least 2 GPUs, here you'd need at least 8 GPUs. Implementations: - [DeepSpeed](https://github.com/microsoft/DeepSpeed) - DeepSpeed also includes an even more efficient DP, which they call ZeRO-DP. - [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) - [Varuna](https://github.com/microsoft/varuna) - [SageMaker](https://arxiv.org/abs/2111.05972) - [OSLO](https://github.com/tunib-ai/oslo) 🤗 Transformers status: not yet implemented, since we have no PP and TP. ## ZeRO Data Parallelism + Pipeline Parallelism + Tensor Parallelism One of the main features of DeepSpeed is ZeRO, which is a super-scalable extension of DP. It has already been discussed in [ZeRO Data Parallelism](#zero-data-parallelism). Normally it's a standalone feature that doesn't require PP or TP. But it can be combined with PP and TP. When ZeRO-DP is combined with PP (and optionally TP) it typically enables only ZeRO stage 1 (optimizer sharding). While it's theoretically possible to use ZeRO stage 2 (gradient sharding) with Pipeline Parallelism, it will have negative performance impacts. There would need to be an additional reduce-scatter collective for every micro-batch to aggregate the gradients before sharding, which adds a potentially significant communication overhead. By nature of Pipeline Parallelism, small micro-batches are used and instead the focus is on trying to balance arithmetic intensity (micro-batch size) with minimizing the Pipeline bubble (number of micro-batches). Therefore those communication costs are going to impact the performance. In addition, there are already fewer layers than normal due to PP and so the memory savings won't be huge. PP already reduces gradient size by ``1/PP``, and so gradient sharding savings on top of that are less significant than pure DP. ZeRO stage 3 is not a good choice either for the same reason - more inter-node communications required. And since we have ZeRO, the other benefit is ZeRO-Offload. Since this is stage 1 optimizer states can be offloaded to CPU. Implementations: - [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed) and [Megatron-Deepspeed from BigScience](https://github.com/bigscience-workshop/Megatron-DeepSpeed), which is the fork of the former repo. - [OSLO](https://github.com/tunib-ai/oslo) Important papers: - [Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model]( https://arxiv.org/abs/2201.11990) 🤗 Transformers status: not yet implemented, since we have no PP and TP. ## FlexFlow [FlexFlow](https://github.com/flexflow/FlexFlow) also solves the parallelization problem in a slightly different approach. 
Paper: [""Beyond Data and Model Parallelism for Deep Neural Networks"" by Zhihao Jia, Matei Zaharia, Alex Aiken](https://arxiv.org/abs/1807.05358). It performs a sort of 4D Parallelism over Sample-Operator-Attribute-Parameter. 1. Sample = Data Parallelism (sample-wise parallel) 2. Operator = Parallelize a single operation into several sub-operations 3. Attribute = Data Parallelism (length-wise parallel) 4. Parameter = Model Parallelism (regardless of dimension - horizontal or vertical) Examples: * Sample Let's take 10 batches of sequence length 512. If we parallelize them by sample dimension into 2 devices, we get 10 x 512 which becomes 5 x 2 x 512. * Operator If we perform layer normalization, we compute std first and mean second, and then we can normalize the data. Operator parallelism allows computing std and mean in parallel. So if we parallelize them by operator dimension into 2 devices (cuda:0, cuda:1), first we copy the input data into both devices, then cuda:0 computes std and cuda:1 computes mean at the same time. * Attribute We have 10 batches of length 512. If we parallelize them by attribute dimension into 2 devices, 10 x 512 becomes 10 x 2 x 256. * Parameter It is similar to tensor model parallelism or naive layer-wise model parallelism. The significance of this framework is that it takes resources like (1) GPU/TPU/CPU vs. (2) RAM/DRAM vs. (3) fast-intra-connect/slow-inter-connect, and it automatically optimizes all of these, algorithmically deciding which parallelisation to use where. One very important aspect is that FlexFlow is designed for optimizing DNN parallelizations for models with static and fixed workloads, since models with dynamic behavior may prefer different parallelization strategies across iterations. So the promise is very attractive - it runs a 30min simulation on the cluster of choice and it comes up with the best strategy to utilise this specific environment. If you add/remove/replace any parts, it'll run and re-optimize the plan for that. And then you can train. A different setup will have its own custom optimization. 🤗 Transformers status: Transformers models are FX-trace-able via [transformers.utils.fx](https://github.com/huggingface/transformers/blob/master/src/transformers/utils/fx.py), which is a prerequisite for FlexFlow; however, changes are required on the FlexFlow side to make it work with Transformers models. " quicktour.md," # Quick tour [[open-in-colab]] Get up and running with 🤗 Transformers! Whether you're a developer or an everyday user, this quick tour will help you get started and show you how to use the [`pipeline`] for inference, load a pretrained model and preprocessor with an [AutoClass](./model_doc/auto), and quickly train a model with PyTorch or TensorFlow. If you're a beginner, we recommend checking out our tutorials or [course](https://huggingface.co/course/chapter1/1) next for more in-depth explanations of the concepts introduced here. Before you begin, make sure you have all the necessary libraries installed: ```bash !pip install transformers datasets You'll also need to install your preferred machine learning framework: ```bash pip install torch ```bash pip install tensorflow ## Pipeline The [`pipeline`] is the easiest and fastest way to use a pretrained model for inference. You can use the [`pipeline`] out-of-the-box for many tasks across different modalities, some of which are shown in the table below: For a complete list of available tasks, check out the [pipeline API reference](./main_classes/pipelines). 
| **Task** | **Description** | **Modality** | **Pipeline identifier** | |------------------------------|--------------------------------------------------------------------------------------------------------------|-----------------|-----------------------------------------------| | Text classification | assign a label to a given sequence of text | NLP | pipeline(task=“sentiment-analysis”) | | Text generation | generate text given a prompt | NLP | pipeline(task=“text-generation”) | | Summarization | generate a summary of a sequence of text or document | NLP | pipeline(task=“summarization”) | | Image classification | assign a label to an image | Computer vision | pipeline(task=“image-classification”) | | Image segmentation | assign a label to each individual pixel of an image (supports semantic, panoptic, and instance segmentation) | Computer vision | pipeline(task=“image-segmentation”) | | Object detection | predict the bounding boxes and classes of objects in an image | Computer vision | pipeline(task=“object-detection”) | | Audio classification | assign a label to some audio data | Audio | pipeline(task=“audio-classification”) | | Automatic speech recognition | transcribe speech into text | Audio | pipeline(task=“automatic-speech-recognition”) | | Visual question answering | answer a question about the image, given an image and a question | Multimodal | pipeline(task=“vqa”) | | Document question answering | answer a question about the document, given a document and a question | Multimodal | pipeline(task=""document-question-answering"") | | Image captioning | generate a caption for a given image | Multimodal | pipeline(task=""image-to-text"") | Start by creating an instance of [`pipeline`] and specifying a task you want to use it for. In this guide, you'll use the [`pipeline`] for sentiment analysis as an example: >>> from transformers import pipeline >>> classifier = pipeline(""sentiment-analysis"") The [`pipeline`] downloads and caches a default [pretrained model](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) and tokenizer for sentiment analysis. Now you can use the `classifier` on your target text: >>> classifier(""We are very happy to show you the 🤗 Transformers library."") [{'label': 'POSITIVE', 'score': 0.9998}] If you have more than one input, pass your inputs as a list to the [`pipeline`] to return a list of dictionaries: >>> results = classifier([""We are very happy to show you the 🤗 Transformers library."", ""We hope you don't hate it.""]) >>> for result in results: print(f""label: {result['label']}, with score: {round(result['score'], 4)}"") label: POSITIVE, with score: 0.9998 label: NEGATIVE, with score: 0.5309 The [`pipeline`] can also iterate over an entire dataset for any task you like. For this example, let's choose automatic speech recognition as our task: >>> import torch >>> from transformers import pipeline >>> speech_recognizer = pipeline(""automatic-speech-recognition"", model=""facebook/wav2vec2-base-960h"") Load an audio dataset (see the 🤗 Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart#audio) for more details) you'd like to iterate over. 
For example, load the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset: >>> from datasets import load_dataset, Audio >>> dataset = load_dataset(""PolyAI/minds14"", name=""en-US"", split=""train"") # doctest: +IGNORE_RESULT You need to make sure the sampling rate of the dataset matches the sampling rate [`facebook/wav2vec2-base-960h`](https://huggingface.co/facebook/wav2vec2-base-960h) was trained on: >>> dataset = dataset.cast_column(""audio"", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate)) The audio files are automatically loaded and resampled when calling the `""audio""` column. Extract the raw waveform arrays from the first 4 samples and pass it as a list to the pipeline: >>> result = speech_recognizer(dataset[:4][""audio""]) >>> print([d[""text""] for d in result]) ['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', ""FONDERING HOW I'D SET UP A JOIN TO HELL T WITH MY WIFE AND WHERE THE AP MIGHT BE"", ""I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE APSO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AN I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS"", 'HOW DO I FURN A JOINA COUT'] For larger datasets where the inputs are big (like in speech or vision), you'll want to pass a generator instead of a list to load all the inputs in memory. Take a look at the [pipeline API reference](./main_classes/pipelines) for more information. ### Use another model and tokenizer in the pipeline The [`pipeline`] can accommodate any model from the [Hub](https://huggingface.co/models), making it easy to adapt the [`pipeline`] for other use-cases. For example, if you'd like a model capable of handling French text, use the tags on the Hub to filter for an appropriate model. The top filtered result returns a multilingual [BERT model](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) finetuned for sentiment analysis you can use for French text: >>> model_name = ""nlptown/bert-base-multilingual-uncased-sentiment"" Use [`AutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and it's associated tokenizer (more on an `AutoClass` in the next section): >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) Use [`TFAutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and it's associated tokenizer (more on an `TFAutoClass` in the next section): >>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) Specify the model and tokenizer in the [`pipeline`], and now you can apply the `classifier` on French text: >>> classifier = pipeline(""sentiment-analysis"", model=model, tokenizer=tokenizer) >>> classifier(""Nous sommes très heureux de vous présenter la bibliothèque 🤗 Transformers."") [{'label': '5 stars', 'score': 0.7273}] If you can't find a model for your use-case, you'll need to finetune a pretrained model on your data. Take a look at our [finetuning tutorial](./training) to learn how. 
Finally, after you've finetuned your pretrained model, please consider [sharing](./model_sharing) the model with the community on the Hub to democratize machine learning for everyone! 🤗 ## AutoClass Under the hood, the [`AutoModelForSequenceClassification`] and [`AutoTokenizer`] classes work together to power the [`pipeline`] you used above. An [AutoClass](./model_doc/auto) is a shortcut that automatically retrieves the architecture of a pretrained model from its name or path. You only need to select the appropriate `AutoClass` for your task and it's associated preprocessing class. Let's return to the example from the previous section and see how you can use the `AutoClass` to replicate the results of the [`pipeline`]. ### AutoTokenizer A tokenizer is responsible for preprocessing text into an array of numbers as inputs to a model. There are multiple rules that govern the tokenization process, including how to split a word and at what level words should be split (learn more about tokenization in the [tokenizer summary](./tokenizer_summary)). The most important thing to remember is you need to instantiate a tokenizer with the same model name to ensure you're using the same tokenization rules a model was pretrained with. Load a tokenizer with [`AutoTokenizer`]: >>> from transformers import AutoTokenizer >>> model_name = ""nlptown/bert-base-multilingual-uncased-sentiment"" >>> tokenizer = AutoTokenizer.from_pretrained(model_name) Pass your text to the tokenizer: >>> encoding = tokenizer(""We are very happy to show you the 🤗 Transformers library."") >>> print(encoding) {'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} The tokenizer returns a dictionary containing: * [input_ids](./glossary#input-ids): numerical representations of your tokens. * [attention_mask](.glossary#attention-mask): indicates which tokens should be attended to. A tokenizer can also accept a list of inputs, and pad and truncate the text to return a batch with uniform length: >>> pt_batch = tokenizer( [""We are very happy to show you the 🤗 Transformers library."", ""We hope you don't hate it.""], padding=True, truncation=True, max_length=512, return_tensors=""pt"", ) >>> tf_batch = tokenizer( [""We are very happy to show you the 🤗 Transformers library."", ""We hope you don't hate it.""], padding=True, truncation=True, max_length=512, return_tensors=""tf"", ) Check out the [preprocess](./preprocessing) tutorial for more details about tokenization, and how to use an [`AutoImageProcessor`], [`AutoFeatureExtractor`] and [`AutoProcessor`] to preprocess image, audio, and multimodal inputs. ### AutoModel 🤗 Transformers provides a simple and unified way to load pretrained instances. This means you can load an [`AutoModel`] like you would load an [`AutoTokenizer`]. The only difference is selecting the correct [`AutoModel`] for the task. For text (or sequence) classification, you should load [`AutoModelForSequenceClassification`]: >>> from transformers import AutoModelForSequenceClassification >>> model_name = ""nlptown/bert-base-multilingual-uncased-sentiment"" >>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name) See the [task summary](./task_summary) for tasks supported by an [`AutoModel`] class. Now pass your preprocessed batch of inputs directly to the model. 
You just have to unpack the dictionary by adding `**`: >>> pt_outputs = pt_model(**pt_batch) The model outputs the final activations in the `logits` attribute. Apply the softmax function to the `logits` to retrieve the probabilities: >>> from torch import nn >>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1) >>> print(pt_predictions) tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725], [0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=) 🤗 Transformers provides a simple and unified way to load pretrained instances. This means you can load an [`TFAutoModel`] like you would load an [`AutoTokenizer`]. The only difference is selecting the correct [`TFAutoModel`] for the task. For text (or sequence) classification, you should load [`TFAutoModelForSequenceClassification`]: >>> from transformers import TFAutoModelForSequenceClassification >>> model_name = ""nlptown/bert-base-multilingual-uncased-sentiment"" >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name) See the [task summary](./task_summary) for tasks supported by an [`AutoModel`] class. Now pass your preprocessed batch of inputs directly to the model. You can pass the tensors as-is: >>> tf_outputs = tf_model(tf_batch) The model outputs the final activations in the `logits` attribute. Apply the softmax function to the `logits` to retrieve the probabilities: >>> import tensorflow as tf >>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1) >>> tf_predictions # doctest: +IGNORE_RESULT All 🤗 Transformers models (PyTorch or TensorFlow) output the tensors *before* the final activation function (like softmax) because the final activation function is often fused with the loss. Model outputs are special dataclasses so their attributes are autocompleted in an IDE. The model outputs behave like a tuple or a dictionary (you can index with an integer, a slice or a string) in which case, attributes that are None are ignored. ### Save a model Once your model is fine-tuned, you can save it with its tokenizer using [`PreTrainedModel.save_pretrained`]: >>> pt_save_directory = ""./pt_save_pretrained"" >>> tokenizer.save_pretrained(pt_save_directory) # doctest: +IGNORE_RESULT >>> pt_model.save_pretrained(pt_save_directory) When you are ready to use the model again, reload it with [`PreTrainedModel.from_pretrained`]: >>> pt_model = AutoModelForSequenceClassification.from_pretrained(""./pt_save_pretrained"") Once your model is fine-tuned, you can save it with its tokenizer using [`TFPreTrainedModel.save_pretrained`]: >>> tf_save_directory = ""./tf_save_pretrained"" >>> tokenizer.save_pretrained(tf_save_directory) # doctest: +IGNORE_RESULT >>> tf_model.save_pretrained(tf_save_directory) When you are ready to use the model again, reload it with [`TFPreTrainedModel.from_pretrained`]: >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(""./tf_save_pretrained"") One particularly cool 🤗 Transformers feature is the ability to save a model and reload it as either a PyTorch or TensorFlow model. 
The `from_pt` or `from_tf` parameter can convert the model from one framework to the other: >>> from transformers import AutoModel >>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory) >>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True) >>> from transformers import TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory) >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True) ## Custom model builds You can modify the model's configuration class to change how a model is built. The configuration specifies a model's attributes, such as the number of hidden layers or attention heads. You start from scratch when you initialize a model from a custom configuration class. The model attributes are randomly initialized, and you'll need to train the model before you can use it to get meaningful results. Start by importing [`AutoConfig`], and then load the pretrained model you want to modify. Within [`AutoConfig.from_pretrained`], you can specify the attribute you want to change, such as the number of attention heads: >>> from transformers import AutoConfig >>> my_config = AutoConfig.from_pretrained(""distilbert-base-uncased"", n_heads=12) Create a model from your custom configuration with [`AutoModel.from_config`]: >>> from transformers import AutoModel >>> my_model = AutoModel.from_config(my_config) Create a model from your custom configuration with [`TFAutoModel.from_config`]: >>> from transformers import TFAutoModel >>> my_model = TFAutoModel.from_config(my_config) Take a look at the [Create a custom architecture](./create_a_model) guide for more information about building custom configurations. ## Trainer - a PyTorch optimized training loop All models are a standard [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) so you can use them in any typical training loop. While you can write your own training loop, 🤗 Transformers provides a [`Trainer`] class for PyTorch, which contains the basic training loop and adds additional functionality for features like distributed training, mixed precision, and more. Depending on your task, you'll typically pass the following parameters to [`Trainer`]: 1. You'll start with a [`PreTrainedModel`] or a [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module): >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(""distilbert-base-uncased"") 2. [`TrainingArguments`] contains the model hyperparameters you can change like learning rate, batch size, and the number of epochs to train for. The default values are used if you don't specify any training arguments: >>> from transformers import TrainingArguments >>> training_args = TrainingArguments( output_dir=""path/to/save/folder/"", learning_rate=2e-5, per_device_train_batch_size=8, per_device_eval_batch_size=8, num_train_epochs=2, ) 3. Load a preprocessing class like a tokenizer, image processor, feature extractor, or processor: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""distilbert-base-uncased"") 4. Load a dataset: >>> from datasets import load_dataset >>> dataset = load_dataset(""rotten_tomatoes"") # doctest: +IGNORE_RESULT 5. 
Create a function to tokenize the dataset: >>> def tokenize_dataset(dataset): return tokenizer(dataset[""text""]) Then apply it over the entire dataset with [`~datasets.Dataset.map`]: >>> dataset = dataset.map(tokenize_dataset, batched=True) 6. A [`DataCollatorWithPadding`] to create a batch of examples from your dataset: >>> from transformers import DataCollatorWithPadding >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer) Now gather all these classes in [`Trainer`]: >>> from transformers import Trainer >>> trainer = Trainer( model=model, args=training_args, train_dataset=dataset[""train""], eval_dataset=dataset[""test""], tokenizer=tokenizer, data_collator=data_collator, ) # doctest: +SKIP When you're ready, call [`~Trainer.train`] to start training: >>> trainer.train() # doctest: +SKIP For tasks - like translation or summarization - that use a sequence-to-sequence model, use the [`Seq2SeqTrainer`] and [`Seq2SeqTrainingArguments`] classes instead. You can customize the training loop behavior by subclassing the methods inside [`Trainer`]. This allows you to customize features such as the loss function, optimizer, and scheduler. Take a look at the [`Trainer`] reference for which methods can be subclassed. The other way to customize the training loop is by using [Callbacks](./main_classes/callbacks). You can use callbacks to integrate with other libraries and inspect the training loop to report on progress or stop the training early. Callbacks do not modify anything in the training loop itself. To customize something like the loss function, you need to subclass the [`Trainer`] instead. ## Train with TensorFlow All models are a standard [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) so they can be trained in TensorFlow with the [Keras](https://keras.io/) API. 🤗 Transformers provides the [`~TFPreTrainedModel.prepare_tf_dataset`] method to easily load your dataset as a `tf.data.Dataset` so you can start training right away with Keras' [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) and [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) methods. 1. You'll start with a [`TFPreTrainedModel`] or a [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model): >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained(""distilbert-base-uncased"") 2. Load a preprocessing class like a tokenizer, image processor, feature extractor, or processor: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""distilbert-base-uncased"") 3. Create a function to tokenize the dataset: >>> def tokenize_dataset(dataset): return tokenizer(dataset[""text""]) # doctest: +SKIP 4. Apply the tokenizer over the entire dataset with [`~datasets.Dataset.map`] and then pass the dataset and tokenizer to [`~TFPreTrainedModel.prepare_tf_dataset`]. You can also change the batch size and shuffle the dataset here if you'd like: >>> dataset = dataset.map(tokenize_dataset) # doctest: +SKIP >>> tf_dataset = model.prepare_tf_dataset( dataset[""train""], batch_size=16, shuffle=True, tokenizer=tokenizer ) # doctest: +SKIP 5. When you're ready, you can call `compile` and `fit` to start training. 
Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to: >>> from tensorflow.keras.optimizers import Adam >>> model.compile(optimizer=Adam(3e-5)) # No loss argument! >>> model.fit(tf_dataset) # doctest: +SKIP ## What's next? Now that you've completed the 🤗 Transformers quick tour, check out our guides and learn how to do more specific things like writing a custom model, fine-tuning a model for a task, and training a model with a script. If you're interested in learning more about 🤗 Transformers core concepts, grab a cup of coffee and take a look at our Conceptual Guides! " pad_truncation.md," # Padding and truncation Batched inputs are often of different lengths, so they can't be converted to fixed-size tensors. Padding and truncation are strategies for dealing with this problem, to create rectangular tensors from batches of varying lengths. Padding adds a special **padding token** to ensure shorter sequences will have the same length as either the longest sequence in a batch or the maximum length accepted by the model. Truncation works in the other direction by truncating long sequences. In most cases, padding your batch to the length of the longest sequence and truncating to the maximum length a model can accept works pretty well. However, the API supports more strategies if you need them. The three arguments you need to know are: `padding`, `truncation` and `max_length`. The `padding` argument controls padding. It can be a boolean or a string: - `True` or `'longest'`: pad to the longest sequence in the batch (no padding is applied if you only provide a single sequence). - `'max_length'`: pad to a length specified by the `max_length` argument or the maximum length accepted by the model if no `max_length` is provided (`max_length=None`). Padding will still be applied if you only provide a single sequence. - `False` or `'do_not_pad'`: no padding is applied. This is the default behavior. The `truncation` argument controls truncation. It can be a boolean or a string: - `True` or `'longest_first'`: truncate to a maximum length specified by the `max_length` argument or the maximum length accepted by the model if no `max_length` is provided (`max_length=None`). This will truncate token by token, removing a token from the longest sequence in the pair until the proper length is reached. - `'only_second'`: truncate to a maximum length specified by the `max_length` argument or the maximum length accepted by the model if no `max_length` is provided (`max_length=None`). This will only truncate the second sentence of a pair if a pair of sequences (or a batch of pairs of sequences) is provided. - `'only_first'`: truncate to a maximum length specified by the `max_length` argument or the maximum length accepted by the model if no `max_length` is provided (`max_length=None`). This will only truncate the first sentence of a pair if a pair of sequences (or a batch of pairs of sequences) is provided. - `False` or `'do_not_truncate'`: no truncation is applied. This is the default behavior. The `max_length` argument controls the length of the padding and truncation. It can be an integer or `None`, in which case it will default to the maximum length the model can accept. If the model has no specific maximum input length, truncation or padding to `max_length` is deactivated. The following table summarizes the recommended way to set up padding and truncation. 
If you use pairs of input sequences in any of the following examples, you can replace `truncation=True` by a `STRATEGY` selected in `['only_first', 'only_second', 'longest_first']`, i.e. `truncation='only_second'` or `truncation='longest_first'` to control how both sequences in the pair are truncated as detailed before. | Truncation | Padding | Instruction | |--------------------------------------|-----------------------------------|---------------------------------------------------------------------------------------------| | no truncation | no padding | `tokenizer(batch_sentences)` | | | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True)` or | | | | `tokenizer(batch_sentences, padding='longest')` | | | padding to max model input length | `tokenizer(batch_sentences, padding='max_length')` | | | padding to specific length | `tokenizer(batch_sentences, padding='max_length', max_length=42)` | | | padding to a multiple of a value | `tokenizer(batch_sentences, padding=True, pad_to_multiple_of=8) | | truncation to max model input length | no padding | `tokenizer(batch_sentences, truncation=True)` or | | | | `tokenizer(batch_sentences, truncation=STRATEGY)` | | | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True, truncation=True)` or | | | | `tokenizer(batch_sentences, padding=True, truncation=STRATEGY)` | | | padding to max model input length | `tokenizer(batch_sentences, padding='max_length', truncation=True)` or | | | | `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY)` | | | padding to specific length | Not possible | | truncation to specific length | no padding | `tokenizer(batch_sentences, truncation=True, max_length=42)` or | | | | `tokenizer(batch_sentences, truncation=STRATEGY, max_length=42)` | | | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True, truncation=True, max_length=42)` or | | | | `tokenizer(batch_sentences, padding=True, truncation=STRATEGY, max_length=42)` | | | padding to max model input length | Not possible | | | padding to specific length | `tokenizer(batch_sentences, padding='max_length', truncation=True, max_length=42)` or | | | | `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY, max_length=42)` | " preprocessing.md," # Preprocess [[open-in-colab]] Before you can train a model on a dataset, it needs to be preprocessed into the expected model input format. Whether your data is text, images, or audio, they need to be converted and assembled into batches of tensors. 🤗 Transformers provides a set of preprocessing classes to help prepare your data for the model. In this tutorial, you'll learn that for: * Text, use a [Tokenizer](./main_classes/tokenizer) to convert text into a sequence of tokens, create a numerical representation of the tokens, and assemble them into tensors. * Speech and audio, use a [Feature extractor](./main_classes/feature_extractor) to extract sequential features from audio waveforms and convert them into tensors. * Image inputs use a [ImageProcessor](./main_classes/image) to convert images into tensors. * Multimodal inputs, use a [Processor](./main_classes/processors) to combine a tokenizer and a feature extractor or image processor. `AutoProcessor` **always** works and automatically chooses the correct class for the model you're using, whether you're using a tokenizer, image processor, feature extractor or processor. 
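As a quick illustration of that convenience, the same `AutoProcessor.from_pretrained` call returns whichever preprocessing class a checkpoint needs. A minimal sketch (the checkpoint names are only examples, and the exact class returned depends on the checkpoint and library version):

```python
>>> from transformers import AutoProcessor

>>> # A speech checkpoint that pairs a feature extractor with a tokenizer resolves to a full processor
>>> processor = AutoProcessor.from_pretrained(""facebook/wav2vec2-base-960h"")

>>> # A text-only checkpoint resolves to its tokenizer
>>> tokenizer = AutoProcessor.from_pretrained(""bert-base-cased"")
```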
Before you begin, install 🤗 Datasets so you can load some datasets to experiment with: ```bash pip install datasets ## Natural Language Processing The main tool for preprocessing textual data is a [tokenizer](main_classes/tokenizer). A tokenizer splits text into *tokens* according to a set of rules. The tokens are converted into numbers and then tensors, which become the model inputs. Any additional inputs required by the model are added by the tokenizer. If you plan on using a pretrained model, it's important to use the associated pretrained tokenizer. This ensures the text is split the same way as the pretraining corpus, and uses the same corresponding tokens-to-index (usually referred to as the *vocab*) during pretraining. Get started by loading a pretrained tokenizer with the [`AutoTokenizer.from_pretrained`] method. This downloads the *vocab* a model was pretrained with: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""bert-base-cased"") Then pass your text to the tokenizer: >>> encoded_input = tokenizer(""Do not meddle in the affairs of wizards, for they are subtle and quick to anger."") >>> print(encoded_input) {'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} The tokenizer returns a dictionary with three important items: * [input_ids](glossary#input-ids) are the indices corresponding to each token in the sentence. * [attention_mask](glossary#attention-mask) indicates whether a token should be attended to or not. * [token_type_ids](glossary#token-type-ids) identifies which sequence a token belongs to when there is more than one sequence. Return your input by decoding the `input_ids`: >>> tokenizer.decode(encoded_input[""input_ids""]) '[CLS] Do not meddle in the affairs of wizards, for they are subtle and quick to anger. [SEP]' As you can see, the tokenizer added two special tokens - `CLS` and `SEP` (classifier and separator) - to the sentence. Not all models need special tokens, but if they do, the tokenizer automatically adds them for you. If there are several sentences you want to preprocess, pass them as a list to the tokenizer: >>> batch_sentences = [ ""But what about second breakfast?"", ""Don't think he knows about second breakfast, Pip."", ""What about elevensies?"", ] >>> encoded_inputs = tokenizer(batch_sentences) >>> print(encoded_inputs) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1]]} ### Pad Sentences aren't always the same length which can be an issue because tensors, the model inputs, need to have a uniform shape. Padding is a strategy for ensuring tensors are rectangular by adding a special *padding token* to shorter sentences. 
Set the `padding` parameter to `True` to pad the shorter sequences in the batch to match the longest sequence: >>> batch_sentences = [ ""But what about second breakfast?"", ""Don't think he knows about second breakfast, Pip."", ""What about elevensies?"", ] >>> encoded_input = tokenizer(batch_sentences, padding=True) >>> print(encoded_input) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]} The first and third sentences are now padded with `0`'s because they are shorter. ### Truncation On the other end of the spectrum, sometimes a sequence may be too long for a model to handle. In this case, you'll need to truncate the sequence to a shorter length. Set the `truncation` parameter to `True` to truncate a sequence to the maximum length accepted by the model: >>> batch_sentences = [ ""But what about second breakfast?"", ""Don't think he knows about second breakfast, Pip."", ""What about elevensies?"", ] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True) >>> print(encoded_input) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]} Check out the [Padding and truncation](./pad_truncation) concept guide to learn more different padding and truncation arguments. ### Build tensors Finally, you want the tokenizer to return the actual tensors that get fed to the model. 
Set the `return_tensors` parameter to either `pt` for PyTorch, or `tf` for TensorFlow: >>> batch_sentences = [ ""But what about second breakfast?"", ""Don't think he knows about second breakfast, Pip."", ""What about elevensies?"", ] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors=""pt"") >>> print(encoded_input) {'input_ids': tensor([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]])} >>> batch_sentences = [ ""But what about second breakfast?"", ""Don't think he knows about second breakfast, Pip."", ""What about elevensies?"", ] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors=""tf"") >>> print(encoded_input) {'input_ids': , 'token_type_ids': , 'attention_mask': } ## Audio For audio tasks, you'll need a [feature extractor](main_classes/feature_extractor) to prepare your dataset for the model. The feature extractor is designed to extract features from raw audio data, and convert them into tensors. Load the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset (see the 🤗 [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) for more details on how to load a dataset) to see how you can use a feature extractor with audio datasets: >>> from datasets import load_dataset, Audio >>> dataset = load_dataset(""PolyAI/minds14"", name=""en-US"", split=""train"") Access the first element of the `audio` column to take a look at the input. Calling the `audio` column automatically loads and resamples the audio file: >>> dataset[0][""audio""] {'array': array([ 0. , 0.00024414, -0.00024414, , -0.00024414, 0. , 0. ], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 8000} This returns three items: * `array` is the speech signal loaded - and potentially resampled - as a 1D array. * `path` points to the location of the audio file. * `sampling_rate` refers to how many data points in the speech signal are measured per second. For this tutorial, you'll use the [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) model. Take a look at the model card, and you'll learn Wav2Vec2 is pretrained on 16kHz sampled speech audio. It is important your audio data's sampling rate matches the sampling rate of the dataset used to pretrain the model. If your data's sampling rate isn't the same, then you need to resample your data. 1. Use 🤗 Datasets' [`~datasets.Dataset.cast_column`] method to upsample the sampling rate to 16kHz: >>> dataset = dataset.cast_column(""audio"", Audio(sampling_rate=16_000)) 2. 
Call the `audio` column again to resample the audio file: >>> dataset[0][""audio""] {'array': array([ 2.3443763e-05, 2.1729663e-04, 2.2145823e-04, , 3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 16000} Next, load a feature extractor to normalize and pad the input. When padding textual data, a `0` is added for shorter sequences. The same idea applies to audio data. The feature extractor adds a `0` - interpreted as silence - to `array`. Load the feature extractor with [`AutoFeatureExtractor.from_pretrained`]: >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained(""facebook/wav2vec2-base"") Pass the audio `array` to the feature extractor. We also recommend adding the `sampling_rate` argument in the feature extractor in order to better debug any silent errors that may occur. >>> audio_input = [dataset[0][""audio""][""array""]] >>> feature_extractor(audio_input, sampling_rate=16000) {'input_values': [array([ 3.8106556e-04, 2.7506407e-03, 2.8015103e-03, , 5.6335266e-04, 4.6588284e-06, -1.7142107e-04], dtype=float32)]} Just like the tokenizer, you can apply padding or truncation to handle variable sequences in a batch. Take a look at the sequence length of these two audio samples: >>> dataset[0][""audio""][""array""].shape (173398,) >>> dataset[1][""audio""][""array""].shape (106496,) Create a function to preprocess the dataset so the audio samples are the same lengths. Specify a maximum sample length, and the feature extractor will either pad or truncate the sequences to match it: >>> def preprocess_function(examples): audio_arrays = [x[""array""] for x in examples[""audio""]] inputs = feature_extractor( audio_arrays, sampling_rate=16000, padding=True, max_length=100000, truncation=True, ) return inputs Apply the `preprocess_function` to the first few examples in the dataset: >>> processed_dataset = preprocess_function(dataset[:5]) The sample lengths are now the same and match the specified maximum length. You can pass your processed dataset to the model now! >>> processed_dataset[""input_values""][0].shape (100000,) >>> processed_dataset[""input_values""][1].shape (100000,) ## Computer vision For computer vision tasks, you'll need an [image processor](main_classes/image_processor) to prepare your dataset for the model. Image preprocessing consists of several steps that convert images into the input expected by the model. These steps include but are not limited to resizing, normalizing, color channel correction, and converting images to tensors. Image preprocessing often follows some form of image augmentation. Both image preprocessing and image augmentation transform image data, but they serve different purposes: * Image augmentation alters images in a way that can help prevent overfitting and increase the robustness of the model. You can get creative in how you augment your data - adjust brightness and colors, crop, rotate, resize, zoom, etc. However, be mindful not to change the meaning of the images with your augmentations. * Image preprocessing guarantees that the images match the model’s expected input format. When fine-tuning a computer vision model, images must be preprocessed exactly as when the model was initially trained. You can use any library you like for image augmentation. 
For image preprocessing, use the `ImageProcessor` associated with the model. Load the [food101](https://huggingface.co/datasets/food101) dataset (see the 🤗 [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) for more details on how to load a dataset) to see how you can use an image processor with computer vision datasets: Use 🤗 Datasets `split` parameter to only load a small sample from the training split since the dataset is quite large! >>> from datasets import load_dataset >>> dataset = load_dataset(""food101"", split=""train[:100]"") Next, take a look at the image with 🤗 Datasets [`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=image#datasets.Image) feature: >>> dataset[0][""image""] Load the image processor with [`AutoImageProcessor.from_pretrained`]: >>> from transformers import AutoImageProcessor >>> image_processor = AutoImageProcessor.from_pretrained(""google/vit-base-patch16-224"") First, let's add some image augmentation. You can use any library you prefer, but in this tutorial, we'll use torchvision's [`transforms`](https://pytorch.org/vision/stable/transforms.html) module. If you're interested in using another data augmentation library, learn how in the [Albumentations](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb) or [Kornia notebooks](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb). 1. Here we use [`Compose`](https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html) to chain together a couple of transforms - [`RandomResizedCrop`](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html) and [`ColorJitter`](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html). Note that for resizing, we can get the image size requirements from the `image_processor`. For some models, an exact height and width are expected, for others only the `shortest_edge` is defined. >>> from torchvision.transforms import RandomResizedCrop, ColorJitter, Compose >>> size = ( image_processor.size[""shortest_edge""] if ""shortest_edge"" in image_processor.size else (image_processor.size[""height""], image_processor.size[""width""]) ) >>> _transforms = Compose([RandomResizedCrop(size), ColorJitter(brightness=0.5, hue=0.5)]) 2. The model accepts [`pixel_values`](model_doc/visionencoderdecoder#transformers.VisionEncoderDecoderModel.forward.pixel_values) as its input. `ImageProcessor` can take care of normalizing the images, and generating appropriate tensors. Create a function that combines image augmentation and image preprocessing for a batch of images and generates `pixel_values`: >>> def transforms(examples): images = [_transforms(img.convert(""RGB"")) for img in examples[""image""]] examples[""pixel_values""] = image_processor(images, do_resize=False, return_tensors=""pt"")[""pixel_values""] return examples In the example above we set `do_resize=False` because we have already resized the images in the image augmentation transformation, and leveraged the `size` attribute from the appropriate `image_processor`. If you do not resize images during image augmentation, leave this parameter out. By default, `ImageProcessor` will handle the resizing. 
If you wish to normalize images as a part of the augmentation transformation, use the `image_processor.image_mean`, and `image_processor.image_std` values. 3. Then use 🤗 Datasets[`~datasets.Dataset.set_transform`] to apply the transforms on the fly: >>> dataset.set_transform(transforms) 4. Now when you access the image, you'll notice the image processor has added `pixel_values`. You can pass your processed dataset to the model now! >>> dataset[0].keys() Here is what the image looks like after the transforms are applied. The image has been randomly cropped and it's color properties are different. >>> import numpy as np >>> import matplotlib.pyplot as plt >>> img = dataset[0][""pixel_values""] >>> plt.imshow(img.permute(1, 2, 0)) For tasks like object detection, semantic segmentation, instance segmentation, and panoptic segmentation, `ImageProcessor` offers post processing methods. These methods convert model's raw outputs into meaningful predictions such as bounding boxes, or segmentation maps. ### Pad In some cases, for instance, when fine-tuning [DETR](./model_doc/detr), the model applies scale augmentation at training time. This may cause images to be different sizes in a batch. You can use [`DetrImageProcessor.pad`] from [`DetrImageProcessor`] and define a custom `collate_fn` to batch images together. >>> def collate_fn(batch): pixel_values = [item[""pixel_values""] for item in batch] encoding = image_processor.pad(pixel_values, return_tensors=""pt"") labels = [item[""labels""] for item in batch] batch = {} batch[""pixel_values""] = encoding[""pixel_values""] batch[""pixel_mask""] = encoding[""pixel_mask""] batch[""labels""] = labels return batch ## Multimodal For tasks involving multimodal inputs, you'll need a [processor](main_classes/processors) to prepare your dataset for the model. A processor couples together two processing objects such as as tokenizer and feature extractor. Load the [LJ Speech](https://huggingface.co/datasets/lj_speech) dataset (see the 🤗 [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) for more details on how to load a dataset) to see how you can use a processor for automatic speech recognition (ASR): >>> from datasets import load_dataset >>> lj_speech = load_dataset(""lj_speech"", split=""train"") For ASR, you're mainly focused on `audio` and `text` so you can remove the other columns: >>> lj_speech = lj_speech.map(remove_columns=[""file"", ""id"", ""normalized_text""]) Now take a look at the `audio` and `text` columns: >>> lj_speech[0][""audio""] {'array': array([-7.3242188e-04, -7.6293945e-04, -6.4086914e-04, , 7.3242188e-04, 2.1362305e-04, 6.1035156e-05], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/917ece08c95cf0c4115e45294e3cd0dee724a1165b7fc11798369308a465bd26/LJSpeech-1.1/wavs/LJ001-0001.wav', 'sampling_rate': 22050} >>> lj_speech[0][""text""] 'Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition' Remember you should always [resample](preprocessing#audio) your audio dataset's sampling rate to match the sampling rate of the dataset used to pretrain a model! >>> lj_speech = lj_speech.cast_column(""audio"", Audio(sampling_rate=16_000)) Load a processor with [`AutoProcessor.from_pretrained`]: >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained(""facebook/wav2vec2-base-960h"") 1. 
Create a function to process the audio data contained in `array` to `input_values`, and tokenize `text` to `labels`. These are the inputs to the model: >>> def prepare_dataset(example): audio = example[""audio""] example.update(processor(audio=audio[""array""], text=example[""text""], sampling_rate=16000)) return example 2. Apply the `prepare_dataset` function to a sample: >>> prepare_dataset(lj_speech[0]) The processor has now added `input_values` and `labels`, and the sampling rate has also been correctly downsampled to 16kHz. You can pass your processed dataset to the model now! " llm_tutorial.md," # Generation with LLMs [[open-in-colab]] LLMs, or Large Language Models, are the key component behind text generation. In a nutshell, they consist of large pretrained transformer models trained to predict the next word (or, more precisely, token) given some input text. Since they predict one token at a time, you need to do something more elaborate to generate new sentences other than just calling the model -- you need to do autoregressive generation. Autoregressive generation is the inference-time procedure of iteratively calling a model with its own generated outputs, given a few initial inputs. In 🤗 Transformers, this is handled by the [`~generation.GenerationMixin.generate`] method, which is available to all models with generative capabilities. This tutorial will show you how to: * Generate text with an LLM * Avoid common pitfalls * Next steps to help you get the most out of your LLM Before you begin, make sure you have all the necessary libraries installed: ```bash pip install transformers bitsandbytes>=0.39.0 -q ## Generate text A language model trained for [causal language modeling](tasks/language_modeling) takes a sequence of text tokens as input and returns the probability distribution for the next token. ""Forward pass of an LLM"" A critical aspect of autoregressive generation with LLMs is how to select the next token from this probability distribution. Anything goes in this step as long as you end up with a token for the next iteration. This means it can be as simple as selecting the most likely token from the probability distribution or as complex as applying a dozen transformations before sampling from the resulting distribution. ""Autoregressive generation iteratively selects the next token from a probability distribution to generate text"" The process depicted above is repeated iteratively until some stopping condition is reached. Ideally, the stopping condition is dictated by the model, which should learn when to output an end-of-sequence (`EOS`) token. If this is not the case, generation stops when some predefined maximum length is reached. Properly setting up the token selection step and the stopping condition is essential to make your model behave as you'd expect on your task. That is why we have a [`~generation.GenerationConfig`] file associated with each model, which contains a good default generative parameterization and is loaded alongside your model. Let's talk code! If you're interested in basic LLM usage, our high-level [`Pipeline`](pipeline_tutorial) interface is a great starting point. However, LLMs often require advanced features like quantization and fine control of the token selection step, which is best done through [`~generation.GenerationMixin.generate`]. Autoregressive generation with LLMs is also resource-intensive and should be executed on a GPU for adequate throughput. First, you need to load the model. 
>>> from transformers import AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained( ""mistralai/Mistral-7B-v0.1"", device_map=""auto"", load_in_4bit=True ) You'll notice two flags in the `from_pretrained` call: - `device_map` ensures the model is moved to your GPU(s) - `load_in_4bit` applies [4-bit dynamic quantization](main_classes/quantization) to massively reduce the resource requirements There are other ways to initialize a model, but this is a good baseline to begin with an LLM. Next, you need to preprocess your text input with a [tokenizer](tokenizer_summary). >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""mistralai/Mistral-7B-v0.1"", padding_side=""left"") >>> model_inputs = tokenizer([""A list of colors: red, blue""], return_tensors=""pt"").to(""cuda"") The `model_inputs` variable holds the tokenized text input, as well as the attention mask. While [`~generation.GenerationMixin.generate`] does its best to infer the attention mask when it is not passed, we recommend passing it whenever possible for optimal results. After tokenizing the inputs, you can call the [`~generation.GenerationMixin.generate`] method to return the generated tokens. The generated tokens should then be converted to text before printing. >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] 'A list of colors: red, blue, green, yellow, orange, purple, pink,' Finally, you don't need to do it one sequence at a time! You can batch your inputs, which will greatly improve the throughput at a small latency and memory cost. All you need to do is make sure you pad your inputs properly (more on that below). >>> tokenizer.pad_token = tokenizer.eos_token # Most LLMs don't have a pad token by default >>> model_inputs = tokenizer( [""A list of colors: red, blue"", ""Portugal is""], return_tensors=""pt"", padding=True ).to(""cuda"") >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ['A list of colors: red, blue, green, yellow, orange, purple, pink,', 'Portugal is a country in southwestern Europe, on the Iber'] And that's it! In a few lines of code, you can harness the power of an LLM. ## Common pitfalls There are many [generation strategies](generation_strategies), and sometimes the default values may not be appropriate for your use case. If your outputs aren't aligned with what you're expecting, we've created a list of the most common pitfalls and how to avoid them. >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""mistralai/Mistral-7B-v0.1"") >>> tokenizer.pad_token = tokenizer.eos_token # Most LLMs don't have a pad token by default >>> model = AutoModelForCausalLM.from_pretrained( ""mistralai/Mistral-7B-v0.1"", device_map=""auto"", load_in_4bit=True ) ### Generated output is too short/long If not specified in the [`~generation.GenerationConfig`] file, `generate` returns up to 20 tokens by default. We highly recommend manually setting `max_new_tokens` in your `generate` call to control the maximum number of new tokens it can return. Keep in mind LLMs (more precisely, [decoder-only models](https://huggingface.co/learn/nlp-course/chapter1/6?fw=pt)) also return the input prompt as part of the output. 
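If you only want the newly generated text, you can slice the prompt tokens off the output before decoding; the chat example later in this guide uses the same trick. A minimal sketch, reusing the `model` and `tokenizer` loaded above (the prompt is only an illustration):

```python
>>> model_inputs = tokenizer([""A list of colors: red, blue""], return_tensors=""pt"").to(""cuda"")
>>> prompt_length = model_inputs.input_ids.shape[1]
>>> generated_ids = model.generate(**model_inputs, max_new_tokens=10)
>>> # Drop the echoed prompt and decode only the newly generated tokens
>>> tokenizer.batch_decode(generated_ids[:, prompt_length:], skip_special_tokens=True)[0]
```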
>>> model_inputs = tokenizer([""A sequence of numbers: 1, 2""], return_tensors=""pt"").to(""cuda"") >>> # By default, the output will contain up to 20 tokens >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] 'A sequence of numbers: 1, 2, 3, 4, 5' >>> # Setting `max_new_tokens` allows you to control the maximum length >>> generated_ids = model.generate(**model_inputs, max_new_tokens=50) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] 'A sequence of numbers: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,' ### Incorrect generation mode By default, and unless specified in the [`~generation.GenerationConfig`] file, `generate` selects the most likely token at each iteration (greedy decoding). Depending on your task, this may be undesirable; creative tasks like chatbots or writing an essay benefit from sampling. On the other hand, input-grounded tasks like audio transcription or translation benefit from greedy decoding. Enable sampling with `do_sample=True`, and you can learn more about this topic in this [blog post](https://huggingface.co/blog/how-to-generate). >>> # Set seed or reproducibility -- you don't need this unless you want full reproducibility >>> from transformers import set_seed >>> set_seed(42) >>> model_inputs = tokenizer([""I am a cat.""], return_tensors=""pt"").to(""cuda"") >>> # LLM + greedy decoding = repetitive, boring output >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] 'I am a cat. I am a cat. I am a cat. I am a cat' >>> # With sampling, the output becomes more creative! >>> generated_ids = model.generate(**model_inputs, do_sample=True) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] 'I am a cat. Specifically, I am an indoor-only cat. I' ### Wrong padding side LLMs are [decoder-only](https://huggingface.co/learn/nlp-course/chapter1/6?fw=pt) architectures, meaning they continue to iterate on your input prompt. If your inputs do not have the same length, they need to be padded. Since LLMs are not trained to continue from pad tokens, your input needs to be left-padded. Make sure you also don't forget to pass the attention mask to generate! >>> # The tokenizer initialized above has right-padding active by default: the 1st sequence, >>> # which is shorter, has padding on the right side. Generation fails to capture the logic. >>> model_inputs = tokenizer( [""1, 2, 3"", ""A, B, C, D, E""], padding=True, return_tensors=""pt"" ).to(""cuda"") >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] '1, 2, 33333333333' >>> # With left-padding, it works as expected! >>> tokenizer = AutoTokenizer.from_pretrained(""mistralai/Mistral-7B-v0.1"", padding_side=""left"") >>> tokenizer.pad_token = tokenizer.eos_token # Most LLMs don't have a pad token by default >>> model_inputs = tokenizer( [""1, 2, 3"", ""A, B, C, D, E""], padding=True, return_tensors=""pt"" ).to(""cuda"") >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] '1, 2, 3, 4, 5, 6,' ### Wrong prompt Some models and tasks expect a certain input prompt format to work properly. When this format is not applied, you will get a silent performance degradation: the model kinda works, but not as well as if you were following the expected prompt. 
More information about prompting, including which models and tasks require a specific prompt format, is available in this [guide](tasks/prompting). Let's see an example with a chat LLM, which makes use of [chat templating](chat_templating): >>> tokenizer = AutoTokenizer.from_pretrained(""HuggingFaceH4/zephyr-7b-alpha"") >>> model = AutoModelForCausalLM.from_pretrained( ""HuggingFaceH4/zephyr-7b-alpha"", device_map=""auto"", load_in_4bit=True ) >>> set_seed(0) >>> prompt = """"""How many helicopters can a human eat in one sitting? Reply as a thug."""""" >>> model_inputs = tokenizer([prompt], return_tensors=""pt"").to(""cuda"") >>> input_length = model_inputs.input_ids.shape[1] >>> generated_ids = model.generate(**model_inputs, max_new_tokens=20) >>> print(tokenizer.batch_decode(generated_ids[:, input_length:], skip_special_tokens=True)[0]) ""I'm not a thug, but i can tell you that a human cannot eat"" >>> # Oh no, it did not follow our instruction to reply as a thug! Let's see what happens when we write >>> # a better prompt and use the right template for this model (through `tokenizer.apply_chat_template`) >>> set_seed(0) >>> messages = [ { ""role"": ""system"", ""content"": ""You are a friendly chatbot who always responds in the style of a thug"", }, {""role"": ""user"", ""content"": ""How many helicopters can a human eat in one sitting?""}, ] >>> model_inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors=""pt"").to(""cuda"") >>> input_length = model_inputs.shape[1] >>> generated_ids = model.generate(model_inputs, do_sample=True, max_new_tokens=20) >>> print(tokenizer.batch_decode(generated_ids[:, input_length:], skip_special_tokens=True)[0]) 'None, you thug. How bout you try to focus on more useful questions?' >>> # As we can see, it followed a proper thug style 😎 ## Further resources While the autoregressive generation process is relatively straightforward, making the most out of your LLM can be a challenging endeavor because there are many moving parts. Here are some next steps to help you dive deeper into LLM usage and understanding: ### Advanced generate usage 1. [Guide](generation_strategies) on how to control different generation methods, how to set up the generation configuration file, and how to stream the output; 2. [Guide](chat_templating) on the prompt template for chat LLMs; 3. [Guide](tasks/prompting) on how to get the most out of prompt design; 4. API reference on [`~generation.GenerationConfig`], [`~generation.GenerationMixin.generate`], and [generate-related classes](internal/generation_utils). ### LLM leaderboards 1. [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), which focuses on the quality of the open-source models; 2. [Open LLM-Perf Leaderboard](https://huggingface.co/spaces/optimum/llm-perf-leaderboard), which focuses on LLM throughput. ### Latency, throughput and memory utilization 1. [Guide](llm_tutorial_optimization) on how to optimize LLMs for speed and memory; 2. [Guide](main_classes/quantization) on quantization such as bitsandbytes and autogptq, which shows you how to drastically reduce your memory requirements. ### Related libraries 1. [`text-generation-inference`](https://github.com/huggingface/text-generation-inference), a production-ready server for LLMs; 2. [`optimum`](https://github.com/huggingface/optimum), an extension of 🤗 Transformers that optimizes for specific hardware devices. 
" perf_infer_gpu_one.md," # GPU inference GPUs are the standard choice of hardware for machine learning, unlike CPUs, because they are optimized for memory bandwidth and parallelism. To keep up with the larger sizes of modern models or to run these large models on existing and older hardware, there are several optimizations you can use to speed up GPU inference. In this guide, you'll learn how to use FlashAttention-2 (a more memory-efficient attention mechanism), BetterTransformer (a PyTorch native fastpath execution), and bitsandbytes to quantize your model to a lower precision. Finally, learn how to use 🤗 Optimum to accelerate inference with ONNX Runtime on Nvidia GPUs. The majority of the optimizations described here also apply to multi-GPU setups! ## FlashAttention-2 FlashAttention-2 is experimental and may change considerably in future versions. [FlashAttention-2](https://huggingface.co/papers/2205.14135) is a faster and more efficient implementation of the standard attention mechanism that can significantly speedup inference by: 1. additionally parallelizing the attention computation over sequence length 2. partitioning the work between GPU threads to reduce communication and shared memory reads/writes between them FlashAttention-2 supports inference with Llama, Mistral, Falcon and Bark models. You can request to add FlashAttention-2 support for another model by opening a GitHub Issue or Pull Request. Before you begin, make sure you have FlashAttention-2 installed (see the [installation](https://github.com/Dao-AILab/flash-attention?tab=readme-ov-file#installation-and-features) guide for more details about prerequisites): ```bash pip install flash-attn --no-build-isolation To enable FlashAttention-2, add the `use_flash_attention_2` parameter to [`~AutoModelForCausalLM.from_pretrained`]: thon import torch from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM model_id = ""tiiuae/falcon-7b"" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, use_flash_attention_2=True, ) FlashAttention-2 can only be used when the model's dtype is `fp16` or `bf16`, and it only runs on Nvidia GPUs. Make sure to cast your model to the appropriate dtype and load them on a supported device before using FlashAttention-2. FlashAttention-2 can be combined with other optimization techniques like quantization to further speedup inference. For example, you can combine FlashAttention-2 with 8-bit or 4-bit quantization: import torch from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM model_id = ""tiiuae/falcon-7b"" tokenizer = AutoTokenizer.from_pretrained(model_id) # load in 8bit model = AutoModelForCausalLM.from_pretrained( model_id, load_in_8bit=True, use_flash_attention_2=True, ) # load in 4bit model = AutoModelForCausalLM.from_pretrained( model_id, load_in_4bit=True, use_flash_attention_2=True, ) ### Expected speedups You can benefit from considerable speedups for inference, especially for inputs with long sequences. However, since FlashAttention-2 does not support computing attention scores with padding tokens, you must manually pad/unpad the attention scores for batched inference when the sequence contains padding tokens. This leads to a significant slowdown for batched generations with padding tokens. 
To overcome this, you should use FlashAttention-2 without padding tokens in the sequence during training (by packing a dataset or [concatenating sequences](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py#L516) until reaching the maximum sequence length). For a single forward pass on [tiiuae/falcon-7b](https://hf.co/tiiuae/falcon-7b) with a sequence length of 4096 and various batch sizes without padding tokens, the expected speedup is: For a single forward pass on [meta-llama/Llama-7b-hf](https://hf.co/meta-llama/Llama-7b-hf) with a sequence length of 4096 and various batch sizes without padding tokens, the expected speedup is: For sequences with padding tokens (generating with padding tokens), you need to unpad/pad the input sequences to correctly compute the attention scores. With a relatively small sequence length, a single forward pass creates overhead leading to a small speedup (in the example below, 30% of the input is filled with padding tokens): But for larger sequence lengths, you can expect even more speedup benefits: FlashAttention is more memory efficient, meaning you can train on much larger sequence lengths without running into out-of-memory issues. You can potentially reduce memory usage up to 20x for larger sequence lengths. Take a look at the [flash-attention](https://github.com/Dao-AILab/flash-attention) repository for more details. ## BetterTransformer Check out our benchmarks with BetterTransformer and scaled dot product attention in the [Out of the box acceleration and memory savings of 🤗 decoder models with PyTorch 2.0](https://pytorch.org/blog/out-of-the-box-acceleration/) and learn more about the fastpath execution in the [BetterTransformer](https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2) blog post. BetterTransformer accelerates inference with its fastpath (native PyTorch specialized implementation of Transformer functions) execution. The two optimizations in the fastpath execution are: 1. fusion, which combines multiple sequential operations into a single ""kernel"" to reduce the number of computation steps 2. skipping the inherent sparsity of padding tokens to avoid unnecessary computation with nested tensors BetterTransformer also converts all attention operations to use the more memory-efficient [scaled dot product attention (SDPA)](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention), and it calls optimized kernels like [FlashAttention](https://huggingface.co/papers/2205.14135) under the hood. Before you start, make sure you have 🤗 Optimum [installed](https://huggingface.co/docs/optimum/installation). Then you can enable BetterTransformer with the [`PreTrainedModel.to_bettertransformer`] method: thon model = model.to_bettertransformer() You can return the original Transformers model with the [`~PreTrainedModel.reverse_bettertransformer`] method. You should use this before saving your model to use the canonical Transformers modeling: model = model.reverse_bettertransformer() model.save_pretrained(""saved_model"") ### FlashAttention SDPA can also call FlashAttention kernels under the hood. FlashAttention can only be used for models using the `fp16` or `bf16` dtype, so make sure to cast your model to the appropriate dtype before using it. 
To enable FlashAttention or to check whether it is available in a given setting (hardware, problem size), use [`torch.backends.cuda.sdp_kernel`](https://pytorch.org/docs/master/backends.html#torch.backends.cuda.sdp_kernel) as a context manager: import torch from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(""facebook/opt-350m"") model = AutoModelForCausalLM.from_pretrained(""facebook/opt-350m"", torch_dtype=torch.float16).to(""cuda"") # convert the model to BetterTransformer model.to_bettertransformer() input_text = ""Hello my dog is cute and"" inputs = tokenizer(input_text, return_tensors=""pt"").to(""cuda"") with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False): outputs = model.generate(**inputs) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) If you see a bug with the traceback below, try using the nightly version of PyTorch, which may have broader coverage for FlashAttention: ```bash RuntimeError: No available kernel. Aborting execution. # install PyTorch nightly pip3 install -U --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118 ## bitsandbytes bitsandbytes is a quantization library that includes support for 4-bit and 8-bit quantization. Quantization reduces your model size compared to its native full precision version, making it easier to fit large models onto GPUs with limited memory. Make sure you have bitsandbytes and 🤗 Accelerate installed: ```bash # these versions support 8-bit and 4-bit pip install bitsandbytes>=0.39.0 accelerate>=0.20.0 # install Transformers pip install transformers ### 4-bit To load a model in 4-bit for inference, use the `load_in_4bit` parameter. The `device_map` parameter is optional, but we recommend setting it to `""auto""` to allow 🤗 Accelerate to automatically and efficiently allocate the model given the available resources in the environment. from transformers import AutoModelForCausalLM model_name = ""bigscience/bloom-2b5"" model_4bit = AutoModelForCausalLM.from_pretrained(model_name, device_map=""auto"", load_in_4bit=True) To load a model in 4-bit for inference with multiple GPUs, you can control how much GPU RAM you want to allocate to each GPU. For example, to distribute 600MB of memory to the first GPU and 1GB of memory to the second GPU: max_memory_mapping = {0: ""600MB"", 1: ""1GB""} model_name = ""bigscience/bloom-3b"" model_4bit = AutoModelForCausalLM.from_pretrained( model_name, device_map=""auto"", load_in_4bit=True, max_memory=max_memory_mapping ) ### 8-bit If you're curious and interested in learning more about the concepts underlying 8-bit quantization, read the [Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes](https://huggingface.co/blog/hf-bitsandbytes-integration) blog post. To load a model in 8-bit for inference, use the `load_in_8bit` parameter. 
The `device_map` parameter is optional, but we recommend setting it to `""auto""` to allow 🤗 Accelerate to automatically and efficiently allocate the model given the available resources in the environment: from transformers import AutoModelForCausalLM model_name = ""bigscience/bloom-2b5"" model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map=""auto"", load_in_8bit=True) If you're loading a model in 8-bit for text generation, you should use the [`~transformers.GenerationMixin.generate`] method instead of the [`Pipeline`] function, which is not optimized for 8-bit models and will be slower. Some sampling strategies, like nucleus sampling, are also not supported by the [`Pipeline`] for 8-bit models. You should also place all inputs on the same device as the model: from transformers import AutoModelForCausalLM, AutoTokenizer model_name = ""bigscience/bloom-2b5"" tokenizer = AutoTokenizer.from_pretrained(model_name) model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map=""auto"", load_in_8bit=True) prompt = ""Hello, my llama is cute"" inputs = tokenizer(prompt, return_tensors=""pt"").to(""cuda"") generated_ids = model_8bit.generate(**inputs) outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) To load a model in 8-bit for inference with multiple GPUs, you can control how much GPU RAM you want to allocate to each GPU. For example, to distribute 1GB of memory to the first GPU and 2GB of memory to the second GPU: max_memory_mapping = {0: ""1GB"", 1: ""2GB""} model_name = ""bigscience/bloom-3b"" model_8bit = AutoModelForCausalLM.from_pretrained( model_name, device_map=""auto"", load_in_8bit=True, max_memory=max_memory_mapping ) Feel free to try running an 11 billion parameter [T5 model](https://colab.research.google.com/drive/1YORPWx4okIHXnjW7MSAidXN29mPVNT7F?usp=sharing) or the 3 billion parameter [BLOOM model](https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4?usp=sharing) for inference on Google Colab's free tier GPUs! ## 🤗 Optimum Learn more details about using ORT with 🤗 Optimum in the [Accelerated inference on NVIDIA GPUs](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/gpu#accelerated-inference-on-nvidia-gpus) guide. This section only provides a brief and simple example. ONNX Runtime (ORT) is a model accelerator that supports accelerated inference on Nvidia GPUs. ORT uses optimization techniques like fusing common operations into a single node and constant folding to reduce the number of computations performed and speed up inference. ORT also places the most computationally intensive operations on the GPU and the rest on the CPU to intelligently distribute the workload between the two devices. ORT is supported by 🤗 Optimum which can be used in 🤗 Transformers. You'll need to use an [`~optimum.onnxruntime.ORTModel`] for the task you're solving, and specify the `provider` parameter which can be set to either [`CUDAExecutionProvider`](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/gpu#cudaexecutionprovider) or [`TensorrtExecutionProvider`](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/gpu#tensorrtexecutionprovider). 
If you want to load a model that was not yet exported to ONNX, you can set `export=True` to convert your model on-the-fly to the ONNX format : from optimum.onnxruntime import ORTModelForSequenceClassification ort_model = ORTModelForSequenceClassification.from_pretrained( ""distilbert-base-uncased-finetuned-sst-2-english"", export=True, provider=""CUDAExecutionProvider"", ) Now you're free to use the model for inference: from optimum.pipelines import pipeline from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(""distilbert-base-uncased-finetuned-sst-2-english"") pipeline = pipeline(task=""text-classification"", model=ort_model, tokenizer=tokenizer, device=""cuda:0"") result = pipeline(""Both the music and visual were astounding, not to mention the actors performance."") ## Combine optimizations It is often possible to combine several of the optimization techniques described above to get the best inference performance possible for your model. For example, you can load a model in 4-bit, and then enable BetterTransformer with FlashAttention: import torch from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig # load model in 4-bit quantization_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16 ) tokenizer = AutoTokenizer.from_pretrained(""facebook/opt-350m"") model = AutoModelForCausalLM.from_pretrained(""facebook/opt-350m"", quantization_config=quantization_config) # enable BetterTransformer model = model.to_bettertransformer() input_text = ""Hello my dog is cute and"" inputs = tokenizer(input_text, return_tensors=""pt"").to(""cuda"") # enable FlashAttention with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False): outputs = model.generate(**inputs) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) " pipeline_webserver.md," # Using pipelines for a webserver Creating an inference engine is a complex topic, and the ""best"" solution will most likely depend on your problem space. Are you on CPU or GPU? Do you want the lowest latency, the highest throughput, support for many models, or just highly optimize 1 specific model? There are many ways to tackle this topic, so what we are going to present is a good default to get started which may not necessarily be the most optimal solution for you. The key thing to understand is that we can use an iterator, just like you would [on a dataset](pipeline_tutorial#using-pipelines-on-a-dataset), since a webserver is basically a system that waits for requests and treats them as they come in. Usually webservers are multiplexed (multithreaded, async, etc..) to handle various requests concurrently. Pipelines on the other hand (and mostly the underlying models) are not really great for parallelism; they take up a lot of RAM, so it's best to give them all the available resources when they are running or it's a compute-intensive job. We are going to solve that by having the webserver handle the light load of receiving and sending requests, and having a single thread handling the actual work. This example is going to use `starlette`. The actual framework is not really important, but you might have to tune or change the code if you are using another one to achieve the same effect. 
Create `server.py`: from starlette.applications import Starlette from starlette.responses import JSONResponse from starlette.routing import Route from transformers import pipeline import asyncio async def homepage(request): payload = await request.body() string = payload.decode(""utf-8"") response_q = asyncio.Queue() await request.app.model_queue.put((string, response_q)) output = await response_q.get() return JSONResponse(output) async def server_loop(q): pipe = pipeline(model=""bert-base-uncased"") while True: (string, response_q) = await q.get() out = pipe(string) await response_q.put(out) app = Starlette( routes=[ Route(""/"", homepage, methods=[""POST""]), ], ) @app.on_event(""startup"") async def startup_event(): q = asyncio.Queue() app.model_queue = q asyncio.create_task(server_loop(q)) Now you can start it with: ```bash uvicorn server:app And you can query it: ```bash curl -X POST -d ""test [MASK]"" http://localhost:8000/ #[{""score"":0.7742936015129089,""token"":1012,""token_str"":""."",""sequence"":""test.""},] And there you go, now you have a good idea of how to create a webserver! What is really important is that we load the model only **once**, so there are no copies of the model on the webserver. This way, no unnecessary RAM is being used. Then the queuing mechanism allows you to do fancy stuff like maybe accumulating a few items before inferring to use dynamic batching: The code sample below is intentionally written like pseudo-code for readability. Do not run this without checking if it makes sense for your system resources! (string, rq) = await q.get() strings = [] queues = [] while True: try: (string, rq) = await asyncio.wait_for(q.get(), timeout=0.001) # 1ms except asyncio.exceptions.TimeoutError: break strings.append(string) queues.append(rq) strings outs = pipe(strings, batch_size=len(strings)) for rq, out in zip(queues, outs): await rq.put(out) Again, the proposed code is optimized for readability, not for being the best code. First of all, there's no batch size limit which is usually not a great idea. Next, the timeout is reset on every queue fetch, meaning you could wait much more than 1ms before running the inference (delaying the first request by that much). It would be better to have a single 1ms deadline. This will always wait for 1ms even if the queue is empty, which might not be the best since you probably want to start doing inference if there's nothing in the queue. But maybe it does make sense if batching is really crucial for your use case. Again, there's really no one best solution. ## Few things you might want to consider ### Error checking There's a lot that can go wrong in production: out of memory, out of space, loading the model might fail, the query might be wrong, the query might be correct but still fail to run because of a model misconfiguration, and so on. Generally, it's good if the server outputs the errors to the user, so adding a lot of `try..except` statements to show those errors is a good idea. But keep in mind it may also be a security risk to reveal all those errors depending on your security context. ### Circuit breaking Webservers usually look better when they do circuit breaking. It means they return proper errors when they're overloaded instead of just waiting for the query indefinitely. Return a 503 error instead of waiting for a super long time or a 504 after a long time. This is relatively easy to implement in the proposed code since there is a single queue. 
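For example, a minimal sketch of what that could look like in the `homepage` handler from `server.py` above (it reuses the `JSONResponse` and `asyncio` imports already there; the queue-size threshold is arbitrary and should be tuned for your model and hardware):

```python
QUEUE_LIMIT = 32  # arbitrary threshold, tune it for your setup

async def homepage(request):
    if request.app.model_queue.qsize() > QUEUE_LIMIT:
        # Circuit breaking: fail fast instead of letting latency grow unbounded
        return JSONResponse({""error"": ""Server overloaded, please retry later""}, status_code=503)
    payload = await request.body()
    string = payload.decode(""utf-8"")
    response_q = asyncio.Queue()
    await request.app.model_queue.put((string, response_q))
    output = await response_q.get()
    return JSONResponse(output)
```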
Looking at the queue size is a basic way to start returning errors before your webserver fails under load. ### Blocking the main thread Currently PyTorch is not async aware, and computation will block the main thread while running. That means it would be better if PyTorch was forced to run on its own thread/process. This wasn't done here because the code is a lot more complex (mostly because threads and async and queues don't play nice together). But ultimately it does the same thing. This would be important if the inference of single items were long (> 1s) because in this case, it means every query during inference would have to wait for 1s before even receiving an error. ### Dynamic batching In general, batching is not necessarily an improvement over passing 1 item at a time (see [batching details](./main_classes/pipelines#pipeline-batching) for more information). But it can be very effective when used in the correct setting. In the API, there is no dynamic batching by default (too much opportunity for a slowdown). But for BLOOM inference - which is a very large model - dynamic batching is **essential** to provide a decent experience for everyone. " peft.md," # Load adapters with 🤗 PEFT [[open-in-colab]] [Parameter-Efficient Fine Tuning (PEFT)](https://huggingface.co/blog/peft) methods freeze the pretrained model parameters during fine-tuning and add a small number of trainable parameters (the adapters) on top of it. The adapters are trained to learn task-specific information. This approach has been shown to be very memory-efficient with lower compute usage while producing results comparable to a fully fine-tuned model. Adapters trained with PEFT are also usually an order of magnitude smaller than the full model, making it convenient to share, store, and load them. The adapter weights for a OPTForCausalLM model stored on the Hub are only ~6MB compared to the full size of the model weights, which can be ~700MB. If you're interested in learning more about the 🤗 PEFT library, check out the [documentation](https://huggingface.co/docs/peft/index). ## Setup Get started by installing 🤗 PEFT: ```bash pip install peft If you want to try out the brand new features, you might be interested in installing the library from source: ```bash pip install git+https://github.com/huggingface/peft.git ## Supported PEFT models 🤗 Transformers natively supports some PEFT methods, meaning you can load adapter weights stored locally or on the Hub and easily run or train them with a few lines of code. The following methods are supported: - [Low Rank Adapters](https://huggingface.co/docs/peft/conceptual_guides/lora) - [IA3](https://huggingface.co/docs/peft/conceptual_guides/ia3) - [AdaLoRA](https://arxiv.org/abs/2303.10512) If you want to use other PEFT methods, such as prompt learning or prompt tuning, or about the 🤗 PEFT library in general, please refer to the [documentation](https://huggingface.co/docs/peft/index). ## Load a PEFT adapter To load and use a PEFT adapter model from 🤗 Transformers, make sure the Hub repository or local directory contains an `adapter_config.json` file and the adapter weights, as shown in the example image above. Then you can load the PEFT adapter model using the `AutoModelFor` class. For example, to load a PEFT adapter model for causal language modeling: 1. specify the PEFT model id 2. 
pass it to the [`AutoModelForCausalLM`] class

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "ybelkada/opt-350m-lora"
model = AutoModelForCausalLM.from_pretrained(peft_model_id)
```

You can load a PEFT adapter with either an `AutoModelFor` class or the base model class like `OPTForCausalLM` or `LlamaForCausalLM`.

You can also load a PEFT adapter by calling the `load_adapter` method:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/opt-350m"
peft_model_id = "ybelkada/opt-350m-lora"

model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
```

## Load in 8bit or 4bit

The `bitsandbytes` integration supports 8bit and 4bit precision data types, which are useful for loading large models because it saves memory (see the `bitsandbytes` integration [guide](./quantization#bitsandbytes-integration) to learn more). Add the `load_in_8bit` or `load_in_4bit` parameters to [`~PreTrainedModel.from_pretrained`] and set `device_map="auto"` to effectively distribute the model to your hardware:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "ybelkada/opt-350m-lora"
model = AutoModelForCausalLM.from_pretrained(peft_model_id, device_map="auto", load_in_8bit=True)
```

## Add a new adapter

You can use [`~peft.PeftModel.add_adapter`] to add a new adapter to a model with an existing adapter as long as the new adapter is the same type as the current one. For example, if you have an existing LoRA adapter attached to a model:

```py
from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer
from peft import LoraConfig

model_id = "facebook/opt-350m"
model = AutoModelForCausalLM.from_pretrained(model_id)

lora_config = LoraConfig(
    target_modules=["q_proj", "k_proj"],
    init_lora_weights=False
)

model.add_adapter(lora_config, adapter_name="adapter_1")
```

To add a new adapter:

```py
# attach new adapter with same config
model.add_adapter(lora_config, adapter_name="adapter_2")
```

Now you can use [`~peft.PeftModel.set_adapter`] to set which adapter to use:

```py
# use adapter_1
model.set_adapter("adapter_1")
output = model.generate(**inputs)
print(tokenizer.decode(output[0], skip_special_tokens=True))

# use adapter_2
model.set_adapter("adapter_2")
output = model.generate(**inputs)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

## Enable and disable adapters

Once you've added an adapter to a model, you can enable or disable the adapter module. To enable the adapter module:

```py
from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer
from peft import PeftConfig

model_id = "facebook/opt-350m"
adapter_model_id = "ybelkada/opt-350m-lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
text = "Hello"
inputs = tokenizer(text, return_tensors="pt")

model = AutoModelForCausalLM.from_pretrained(model_id)
peft_config = PeftConfig.from_pretrained(adapter_model_id)

# to initialize with random weights
peft_config.init_lora_weights = False

model.add_adapter(peft_config)
model.enable_adapters()
output = model.generate(**inputs)
```

To disable the adapter module:

```py
model.disable_adapters()
output = model.generate(**inputs)
```

## Train a PEFT adapter

PEFT adapters are supported by the [`Trainer`] class so that you can train an adapter for your specific use case. It only requires adding a few more lines of code.
For example, to train a LoRA adapter: If you aren't familiar with fine-tuning a model with [`Trainer`], take a look at the [Fine-tune a pretrained model](training) tutorial. 1. Define your adapter configuration with the task type and hyperparameters (see [`~peft.LoraConfig`] for more details about what the hyperparameters do). from peft import LoraConfig peft_config = LoraConfig( lora_alpha=16, lora_dropout=0.1, r=64, bias=""none"", task_type=""CAUSAL_LM"", ) 2. Add adapter to the model. model.add_adapter(peft_config) 3. Now you can pass the model to [`Trainer`]! trainer = Trainer(model=model, ) trainer.train() To save your trained adapter and load it back: model.save_pretrained(save_dir) model = AutoModelForCausalLM.from_pretrained(save_dir) ## Add additional trainable layers to a PEFT adapter You can also fine-tune additional trainable adapters on top of a model that has adapters attached by passing `modules_to_save` in your PEFT config. For example, if you want to also fine-tune the lm_head on top of a model with a LoRA adapter: from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer from peft import LoraConfig model_id = ""facebook/opt-350m"" model = AutoModelForCausalLM.from_pretrained(model_id) lora_config = LoraConfig( target_modules=[""q_proj"", ""k_proj""], modules_to_save=[""lm_head""], ) model.add_adapter(lora_config) " add_new_pipeline.md," # How to create a custom pipeline? In this guide, we will see how to create a custom pipeline and share it on the [Hub](hf.co/models) or add it to the 🤗 Transformers library. First and foremost, you need to decide the raw entries the pipeline will be able to take. It can be strings, raw bytes, dictionaries or whatever seems to be the most likely desired input. Try to keep these inputs as pure Python as possible as it makes compatibility easier (even through other languages via JSON). Those will be the `inputs` of the pipeline (`preprocess`). Then define the `outputs`. Same policy as the `inputs`. The simpler, the better. Those will be the outputs of `postprocess` method. Start by inheriting the base class `Pipeline` with the 4 methods needed to implement `preprocess`, `_forward`, `postprocess`, and `_sanitize_parameters`. thon from transformers import Pipeline class MyPipeline(Pipeline): def _sanitize_parameters(self, **kwargs): preprocess_kwargs = {} if ""maybe_arg"" in kwargs: preprocess_kwargs[""maybe_arg""] = kwargs[""maybe_arg""] return preprocess_kwargs, {}, {} def preprocess(self, inputs, maybe_arg=2): model_input = Tensor(inputs[""input_ids""]) return {""model_input"": model_input} def _forward(self, model_inputs): # model_inputs == {""model_input"": model_input} outputs = self.model(**model_inputs) # Maybe {""logits"": Tensor()} return outputs def postprocess(self, model_outputs): best_class = model_outputs[""logits""].softmax(-1) return best_class The structure of this breakdown is to support relatively seamless support for CPU/GPU, while supporting doing pre/postprocessing on the CPU on different threads `preprocess` will take the originally defined inputs, and turn them into something feedable to the model. It might contain more information and is usually a `Dict`. `_forward` is the implementation detail and is not meant to be called directly. `forward` is the preferred called method as it contains safeguards to make sure everything is working on the expected device. If anything is linked to a real model it belongs in the `_forward` method, anything else is in the preprocess/postprocess. 
The `postprocess` method will take the output of `_forward` and turn it into the final output that was decided earlier.

`_sanitize_parameters` exists to allow users to pass any parameters whenever they wish, be it at initialization time `pipeline(..., maybe_arg=4)` or at call time `pipe = pipeline(...); output = pipe(..., maybe_arg=4)`.

The returns of `_sanitize_parameters` are the 3 dicts of kwargs that will be passed directly to `preprocess`, `_forward`, and `postprocess`. Don't fill anything if the caller didn't call with any extra parameter. That allows you to keep the default arguments in the function definition, which is always more "natural".

A classic example would be a `top_k` argument in the post processing of classification tasks.

```python
>>> pipe = pipeline("my-new-task")
>>> pipe("This is a test")
[{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}, {"label": "3-star", "score": 0.05},
{"label": "4-star", "score": 0.025}, {"label": "5-star", "score": 0.025}]
>>> pipe("This is a test", top_k=2)
[{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}]
```

In order to achieve that, we'll update our `postprocess` method with a default parameter of `5`, and edit `_sanitize_parameters` to allow this new parameter.

```python
def postprocess(self, model_outputs, top_k=5):
    best_class = model_outputs["logits"].softmax(-1)
    # Add logic to handle top_k
    return best_class


def _sanitize_parameters(self, **kwargs):
    preprocess_kwargs = {}
    if "maybe_arg" in kwargs:
        preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"]

    postprocess_kwargs = {}
    if "top_k" in kwargs:
        postprocess_kwargs["top_k"] = kwargs["top_k"]
    return preprocess_kwargs, {}, postprocess_kwargs
```

Try to keep the inputs/outputs very simple and ideally JSON-serializable, as it makes the pipeline usage very easy without requiring users to understand new kinds of objects. It's also relatively common to support many different types of arguments for ease of use (audio files, which can be filenames, URLs or pure bytes).

## Adding it to the list of supported tasks

To register your `new-task` to the list of supported tasks, you have to add it to the `PIPELINE_REGISTRY`:

```python
from transformers.pipelines import PIPELINE_REGISTRY

PIPELINE_REGISTRY.register_pipeline(
    "new-task",
    pipeline_class=MyPipeline,
    pt_model=AutoModelForSequenceClassification,
)
```

You can specify a default model if you want, in which case it should come with a specific revision (which can be the name of a branch or a commit hash, here we took `"abcdef"`) as well as the type:

```python
PIPELINE_REGISTRY.register_pipeline(
    "new-task",
    pipeline_class=MyPipeline,
    pt_model=AutoModelForSequenceClassification,
    default={"pt": ("user/awesome_model", "abcdef")},
    type="text",  # current supported types: text, audio, image, multimodal
)
```

## Share your pipeline on the Hub

To share your custom pipeline on the Hub, you just have to save the custom code of your `Pipeline` subclass in a python file.
For instance, let's say we want to use a custom pipeline for sentence pair classification like this: import numpy as np from transformers import Pipeline def softmax(outputs): maxes = np.max(outputs, axis=-1, keepdims=True) shifted_exp = np.exp(outputs - maxes) return shifted_exp / shifted_exp.sum(axis=-1, keepdims=True) class PairClassificationPipeline(Pipeline): def _sanitize_parameters(self, **kwargs): preprocess_kwargs = {} if ""second_text"" in kwargs: preprocess_kwargs[""second_text""] = kwargs[""second_text""] return preprocess_kwargs, {}, {} def preprocess(self, text, second_text=None): return self.tokenizer(text, text_pair=second_text, return_tensors=self.framework) def _forward(self, model_inputs): return self.model(**model_inputs) def postprocess(self, model_outputs): logits = model_outputs.logits[0].numpy() probabilities = softmax(logits) best_class = np.argmax(probabilities) label = self.model.config.id2label[best_class] score = probabilities[best_class].item() logits = logits.tolist() return {""label"": label, ""score"": score, ""logits"": logits} The implementation is framework agnostic, and will work for PyTorch and TensorFlow models. If we have saved this in a file named `pair_classification.py`, we can then import it and register it like this: from pair_classification import PairClassificationPipeline from transformers.pipelines import PIPELINE_REGISTRY from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification PIPELINE_REGISTRY.register_pipeline( ""pair-classification"", pipeline_class=PairClassificationPipeline, pt_model=AutoModelForSequenceClassification, tf_model=TFAutoModelForSequenceClassification, ) Once this is done, we can use it with a pretrained model. For instance `sgugger/finetuned-bert-mrpc` has been fine-tuned on the MRPC dataset, which classifies pairs of sentences as paraphrases or not. from transformers import pipeline classifier = pipeline(""pair-classification"", model=""sgugger/finetuned-bert-mrpc"") Then we can share it on the Hub by using the `save_pretrained` method in a `Repository`: from huggingface_hub import Repository repo = Repository(""test-dynamic-pipeline"", clone_from=""{your_username}/test-dynamic-pipeline"") classifier.save_pretrained(""test-dynamic-pipeline"") repo.push_to_hub() This will copy the file where you defined `PairClassificationPipeline` inside the folder `""test-dynamic-pipeline""`, along with saving the model and tokenizer of the pipeline, before pushing everything into the repository `{your_username}/test-dynamic-pipeline`. After that, anyone can use it as long as they provide the option `trust_remote_code=True`: from transformers import pipeline classifier = pipeline(model=""{your_username}/test-dynamic-pipeline"", trust_remote_code=True) ## Add the pipeline to 🤗 Transformers If you want to contribute your pipeline to 🤗 Transformers, you will need to add a new module in the `pipelines` submodule with the code of your pipeline, then add it to the list of tasks defined in `pipelines/__init__.py`. Then you will need to add tests. Create a new file `tests/test_pipelines_MY_PIPELINE.py` with examples of the other tests. The `run_pipeline_test` function will be very generic and run on small random models on every possible architecture as defined by `model_mapping` and `tf_model_mapping`. This is very important to test future compatibility, meaning if someone adds a new model for `XXXForQuestionAnswering` then the pipeline test will attempt to run on it. 
Because the models are random it's impossible to check for actual values, that's why there is a helper `ANY` that will simply attempt to match the output of the pipeline TYPE. You also *need* to implement 2 (ideally 4) tests. - `test_small_model_pt` : Define 1 small model for this pipeline (doesn't matter if the results don't make sense) and test the pipeline outputs. The results should be the same as `test_small_model_tf`. - `test_small_model_tf` : Define 1 small model for this pipeline (doesn't matter if the results don't make sense) and test the pipeline outputs. The results should be the same as `test_small_model_pt`. - `test_large_model_pt` (`optional`): Tests the pipeline on a real pipeline where the results are supposed to make sense. These tests are slow and should be marked as such. Here the goal is to showcase the pipeline and to make sure there is no drift in future releases. - `test_large_model_tf` (`optional`): Tests the pipeline on a real pipeline where the results are supposed to make sense. These tests are slow and should be marked as such. Here the goal is to showcase the pipeline and to make sure there is no drift in future releases. " tasks_explained.md," # How 🤗 Transformers solve tasks In [What 🤗 Transformers can do](task_summary), you learned about natural language processing (NLP), speech and audio, computer vision tasks, and some important applications of them. This page will look closely at how models solve these tasks and explain what's happening under the hood. There are many ways to solve a given task, some models may implement certain techniques or even approach the task from a new angle, but for Transformer models, the general idea is the same. Owing to its flexible architecture, most models are a variant of an encoder, decoder, or encoder-decoder structure. In addition to Transformer models, our library also has several convolutional neural networks (CNNs), which are still used today for computer vision tasks. We'll also explain how a modern CNN works. To explain how tasks are solved, we'll walk through what goes on inside the model to output useful predictions. - [Wav2Vec2](model_doc/wav2vec2) for audio classification and automatic speech recognition (ASR) - [Vision Transformer (ViT)](model_doc/vit) and [ConvNeXT](model_doc/convnext) for image classification - [DETR](model_doc/detr) for object detection - [Mask2Former](model_doc/mask2former) for image segmentation - [GLPN](model_doc/glpn) for depth estimation - [BERT](model_doc/bert) for NLP tasks like text classification, token classification and question answering that use an encoder - [GPT2](model_doc/gpt2) for NLP tasks like text generation that use a decoder - [BART](model_doc/bart) for NLP tasks like summarization and translation that use an encoder-decoder Before you go further, it is good to have some basic knowledge of the original Transformer architecture. Knowing how encoders, decoders, and attention work will aid you in understanding how different Transformer models work. If you're just getting started or need a refresher, check out our [course](https://huggingface.co/course/chapter1/4?fw=pt) for more information! ## Speech and audio [Wav2Vec2](model_doc/wav2vec2) is a self-supervised model pretrained on unlabeled speech data and finetuned on labeled data for audio classification and automatic speech recognition. This model has four main components: 1. 
A *feature encoder* takes the raw audio waveform, normalizes it to zero mean and unit variance, and converts it into a sequence of feature vectors that are each 20ms long. 2. Waveforms are continuous by nature, so they can't be divided into separate units like a sequence of text can be split into words. That's why the feature vectors are passed to a *quantization module*, which aims to learn discrete speech units. The speech unit is chosen from a collection of codewords, known as a *codebook* (you can think of this as the vocabulary). From the codebook, the vector or speech unit, that best represents the continuous audio input is chosen and forwarded through the model. 3. About half of the feature vectors are randomly masked, and the masked feature vector is fed to a *context network*, which is a Transformer encoder that also adds relative positional embeddings. 4. The pretraining objective of the context network is a *contrastive task*. The model has to predict the true quantized speech representation of the masked prediction from a set of false ones, encouraging the model to find the most similar context vector and quantized speech unit (the target label). Now that wav2vec2 is pretrained, you can finetune it on your data for audio classification or automatic speech recognition! ### Audio classification To use the pretrained model for audio classification, add a sequence classification head on top of the base Wav2Vec2 model. The classification head is a linear layer that accepts the encoder's hidden states. The hidden states represent the learned features from each audio frame which can have varying lengths. To create one vector of fixed-length, the hidden states are pooled first and then transformed into logits over the class labels. The cross-entropy loss is calculated between the logits and target to find the most likely class. Ready to try your hand at audio classification? Check out our complete [audio classification guide](tasks/audio_classification) to learn how to finetune Wav2Vec2 and use it for inference! ### Automatic speech recognition To use the pretrained model for automatic speech recognition, add a language modeling head on top of the base Wav2Vec2 model for [connectionist temporal classification (CTC)](glossary#connectionist-temporal-classification-ctc). The language modeling head is a linear layer that accepts the encoder's hidden states and transforms them into logits. Each logit represents a token class (the number of tokens comes from the task vocabulary). The CTC loss is calculated between the logits and targets to find the most likely sequence of tokens, which are then decoded into a transcription. Ready to try your hand at automatic speech recognition? Check out our complete [automatic speech recognition guide](tasks/asr) to learn how to finetune Wav2Vec2 and use it for inference! ## Computer vision There are two ways to approach computer vision tasks: 1. Split an image into a sequence of patches and process them in parallel with a Transformer. 2. Use a modern CNN, like [ConvNeXT](model_doc/convnext), which relies on convolutional layers but adopts modern network designs. A third approach mixes Transformers with convolutions (for example, [Convolutional Vision Transformer](model_doc/cvt) or [LeViT](model_doc/levit)). We won't discuss those because they just combine the two approaches we examine here. 
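If you just want to try these vision models before digging into the architecture details, the `pipeline` API is the quickest route. A minimal sketch, where the checkpoint and image URL are simply publicly available examples rather than requirements:

```py
from transformers import pipeline

# "google/vit-base-patch16-224" is one publicly available ViT image classifier (requires Pillow)
classifier = pipeline("image-classification", model="google/vit-base-patch16-224")
print(classifier("http://images.cocodataset.org/val2017/000000039769.jpg")[0])
```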
ViT and ConvNeXT are commonly used for image classification, but for other vision tasks like object detection, segmentation, and depth estimation, we'll look at DETR, Mask2Former and GLPN, respectively; these models are better suited for those tasks. ### Image classification ViT and ConvNeXT can both be used for image classification; the main difference is that ViT uses an attention mechanism while ConvNeXT uses convolutions. #### Transformer [ViT](model_doc/vit) replaces convolutions entirely with a pure Transformer architecture. If you're familiar with the original Transformer, then you're already most of the way toward understanding ViT. The main change ViT introduced was in how images are fed to a Transformer: 1. An image is split into square non-overlapping patches, each of which gets turned into a vector or *patch embedding*. The patch embeddings are generated from a convolutional 2D layer which creates the proper input dimensions (which for a base Transformer is 768 values for each patch embedding). If you had a 224x224 pixel image, you could split it into 196 16x16 image patches. Just like how text is tokenized into words, an image is ""tokenized"" into a sequence of patches. 2. A *learnable embedding* - a special `[CLS]` token - is added to the beginning of the patch embeddings just like BERT. The final hidden state of the `[CLS]` token is used as the input to the attached classification head; other outputs are ignored. This token helps the model learn how to encode a representation of the image. 3. The last thing to add to the patch and learnable embeddings are the *position embeddings* because the model doesn't know how the image patches are ordered. The position embeddings are also learnable and have the same size as the patch embeddings. Finally, all of the embeddings are passed to the Transformer encoder. 4. The output, specifically only the output with the `[CLS]` token, is passed to a multilayer perceptron head (MLP). ViT's pretraining objective is simply classification. Like other classification heads, the MLP head converts the output into logits over the class labels and calculates the cross-entropy loss to find the most likely class. Ready to try your hand at image classification? Check out our complete [image classification guide](tasks/image_classification) to learn how to finetune ViT and use it for inference! #### CNN This section briefly explains convolutions, but it'd be helpful to have a prior understanding of how they change an image's shape and size. If you're unfamiliar with convolutions, check out the [Convolution Neural Networks chapter](https://github.com/fastai/fastbook/blob/master/13_convolutions.ipynb) from the fastai book! [ConvNeXT](model_doc/convnext) is a CNN architecture that adopts new and modern network designs to improve performance. However, convolutions are still at the core of the model. From a high-level perspective, a [convolution](glossary#convolution) is an operation where a smaller matrix (*kernel*) is multiplied by a small window of the image pixels. It computes some features from it, such as a particular texture or curvature of a line. Then it slides over to the next window of pixels; the distance the convolution travels is known as the *stride*. A basic convolution without padding or stride, taken from A guide to convolution arithmetic for deep learning. You can feed this output to another convolutional layer, and with each successive layer, the network learns more complex and abstract things like hotdogs or rockets. 
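To make the shape arithmetic concrete, here is a small PyTorch sketch (a toy example, not part of ConvNeXT itself) showing how a strided convolution shrinks the spatial resolution of a feature map:

```py
import torch
from torch import nn

image = torch.randn(1, 3, 224, 224)  # a toy 3-channel 224x224 "image"

# a 7x7 kernel with stride 2 halves the spatial resolution: 224x224 -> 112x112
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=7, stride=2, padding=3)
print(conv(image).shape)  # torch.Size([1, 64, 112, 112])
```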
Between convolutional layers, it is common to add a pooling layer to reduce dimensionality and make the model more robust to variations of a feature's position. ConvNeXT modernizes a CNN in five ways: 1. Change the number of blocks in each stage and ""patchify"" an image with a larger stride and corresponding kernel size. The non-overlapping sliding window makes this patchifying strategy similar to how ViT splits an image into patches. 2. A *bottleneck* layer shrinks the number of channels and then restores it because it is faster to do a 1x1 convolution, and you can increase the depth. An inverted bottleneck does the opposite by expanding the number of channels and shrinking them, which is more memory efficient. 3. Replace the typical 3x3 convolutional layer in the bottleneck layer with *depthwise convolution*, which applies a convolution to each input channel separately and then stacks them back together at the end. This widens the network width for improved performance. 4. ViT has a global receptive field which means it can see more of an image at once thanks to its attention mechanism. ConvNeXT attempts to replicate this effect by increasing the kernel size to 7x7. 5. ConvNeXT also makes several layer design changes that imitate Transformer models. There are fewer activation and normalization layers, the activation function is switched to GELU instead of ReLU, and it uses LayerNorm instead of BatchNorm. The output from the convolution blocks is passed to a classification head which converts the outputs into logits and calculates the cross-entropy loss to find the most likely label. ### Object detection [DETR](model_doc/detr), *DEtection TRansformer*, is an end-to-end object detection model that combines a CNN with a Transformer encoder-decoder. 1. A pretrained CNN *backbone* takes an image, represented by its pixel values, and creates a low-resolution feature map of it. A 1x1 convolution is applied to the feature map to reduce dimensionality and it creates a new feature map with a high-level image representation. Since the Transformer is a sequential model, the feature map is flattened into a sequence of feature vectors that are combined with positional embeddings. 2. The feature vectors are passed to the encoder, which learns the image representations using its attention layers. Next, the encoder hidden states are combined with *object queries* in the decoder. Object queries are learned embeddings that focus on the different regions of an image, and they're updated as they progress through each attention layer. The decoder hidden states are passed to a feedforward network that predicts the bounding box coordinates and class label for each object query, or `no object` if there isn't one. DETR decodes each object query in parallel to output *N* final predictions, where *N* is the number of queries. Unlike a typical autoregressive model that predicts one element at a time, object detection is a set prediction task (`bounding box`, `class label`) that makes *N* predictions in a single pass. 3. DETR uses a *bipartite matching loss* during training to compare a fixed number of predictions with a fixed set of ground truth labels. If there are fewer ground truth labels in the set of *N* labels, then they're padded with a `no object` class. This loss function encourages DETR to find a one-to-one assignment between the predictions and ground truth labels. If either the bounding boxes or class labels aren't correct, a loss is incurred. 
Likewise, if DETR predicts an object that doesn't exist, it is penalized. This encourages DETR to find other objects in an image instead of focusing on one really prominent object.

An object detection head is added on top of DETR to find the class label and the coordinates of the bounding box. There are two components to the object detection head: a linear layer to transform the decoder hidden states into logits over the class labels, and an MLP to predict the bounding box.

Ready to try your hand at object detection? Check out our complete [object detection guide](tasks/object_detection) to learn how to finetune DETR and use it for inference!

### Image segmentation

[Mask2Former](model_doc/mask2former) is a universal architecture for solving all types of image segmentation tasks. Traditional segmentation models are typically tailored towards a particular subtask of image segmentation, like instance, semantic or panoptic segmentation. Mask2Former frames each of those tasks as a *mask classification* problem. Mask classification groups pixels into *N* segments, and predicts *N* masks and their corresponding class label for a given image. We'll explain how Mask2Former works in this section, and then you can try finetuning SegFormer at the end.

There are three main components to Mask2Former:

1. A [Swin](model_doc/swin) backbone accepts an image and creates a low-resolution image feature map from 3 consecutive 3x3 convolutions.

2. The feature map is passed to a *pixel decoder* which gradually upsamples the low-resolution features into high-resolution per-pixel embeddings. The pixel decoder actually generates multi-scale features (containing both low- and high-resolution features) with resolutions 1/32, 1/16, and 1/8th of the original image.

3. Each of these feature maps of differing scales is fed successively to one Transformer decoder layer at a time in order to capture small objects from the high-resolution features. The key to Mask2Former is the *masked attention* mechanism in the decoder. Unlike cross-attention which can attend to the entire image, masked attention only focuses on a certain area of the image. This is faster and leads to better performance because the local features of an image are enough for the model to learn from.

4. Like [DETR](tasks_explained#object-detection), Mask2Former also uses learned object queries and combines them with the image features from the pixel decoder to make a set prediction (`class label`, `mask prediction`). The decoder hidden states are passed into a linear layer and transformed into logits over the class labels. The cross-entropy loss is calculated between the logits and class label to find the most likely one.

The mask predictions are generated by combining the pixel-embeddings with the final decoder hidden states. The sigmoid cross-entropy and dice loss are calculated between the logits and the ground truth mask to find the most likely mask.

Ready to try your hand at image segmentation? Check out our complete [image segmentation guide](tasks/semantic_segmentation) to learn how to finetune SegFormer and use it for inference!

### Depth estimation

[GLPN](model_doc/glpn), *Global-Local Path Network*, is a Transformer for depth estimation that combines a [SegFormer](model_doc/segformer) encoder with a lightweight decoder.

1. Like ViT, an image is split into a sequence of patches, except these image patches are smaller. This is better for dense prediction tasks like segmentation or depth estimation.
The image patches are transformed into patch embeddings (see the [image classification](#image-classification) section for more details about how patch embeddings are created), which are fed to the encoder. 2. The encoder accepts the patch embeddings, and passes them through several encoder blocks. Each block consists of attention and Mix-FFN layers. The purpose of the latter is to provide positional information. At the end of each encoder block is a *patch merging* layer for creating hierarchical representations. The features of each group of neighboring patches are concatenated, and a linear layer is applied to the concatenated features to reduce the number of patches to a resolution of 1/4. This becomes the input to the next encoder block, where this whole process is repeated until you have image features with resolutions of 1/8, 1/16, and 1/32. 3. A lightweight decoder takes the last feature map (1/32 scale) from the encoder and upsamples it to 1/16 scale. From here, the feature is passed into a *Selective Feature Fusion (SFF)* module, which selects and combines local and global features from an attention map for each feature and then upsamples it to 1/8th. This process is repeated until the decoded features are the same size as the original image. The output is passed through two convolution layers and then a sigmoid activation is applied to predict the depth of each pixel. ## Natural language processing The Transformer was initially designed for machine translation, and since then, it has practically become the default architecture for solving all NLP tasks. Some tasks lend themselves to the Transformer's encoder structure, while others are better suited for the decoder. Still, other tasks make use of both the Transformer's encoder-decoder structure. ### Text classification [BERT](model_doc/bert) is an encoder-only model and is the first model to effectively implement deep bidirectionality to learn richer representations of the text by attending to words on both sides. 1. BERT uses [WordPiece](tokenizer_summary#wordpiece) tokenization to generate a token embedding of the text. To tell the difference between a single sentence and a pair of sentences, a special `[SEP]` token is added to differentiate them. A special `[CLS]` token is added to the beginning of every sequence of text. The final output with the `[CLS]` token is used as the input to the classification head for classification tasks. BERT also adds a segment embedding to denote whether a token belongs to the first or second sentence in a pair of sentences. 2. BERT is pretrained with two objectives: masked language modeling and next-sentence prediction. In masked language modeling, some percentage of the input tokens are randomly masked, and the model needs to predict these. This solves the issue of bidirectionality, where the model could cheat and see all the words and ""predict"" the next word. The final hidden states of the predicted mask tokens are passed to a feedforward network with a softmax over the vocabulary to predict the masked word. The second pretraining object is next-sentence prediction. The model must predict whether sentence B follows sentence A. Half of the time sentence B is the next sentence, and the other half of the time, sentence B is a random sentence. The prediction, whether it is the next sentence or not, is passed to a feedforward network with a softmax over the two classes (`IsNext` and `NotNext`). 3. 
The input embeddings are passed through multiple encoder layers to output some final hidden states. To use the pretrained model for text classification, add a sequence classification head on top of the base BERT model. The sequence classification head is a linear layer that accepts the final hidden states and performs a linear transformation to convert them into logits. The cross-entropy loss is calculated between the logits and target to find the most likely label. Ready to try your hand at text classification? Check out our complete [text classification guide](tasks/sequence_classification) to learn how to finetune DistilBERT and use it for inference! ### Token classification To use BERT for token classification tasks like named entity recognition (NER), add a token classification head on top of the base BERT model. The token classification head is a linear layer that accepts the final hidden states and performs a linear transformation to convert them into logits. The cross-entropy loss is calculated between the logits and each token to find the most likely label. Ready to try your hand at token classification? Check out our complete [token classification guide](tasks/token_classification) to learn how to finetune DistilBERT and use it for inference! ### Question answering To use BERT for question answering, add a span classification head on top of the base BERT model. This linear layer accepts the final hidden states and performs a linear transformation to compute the `span` start and end logits corresponding to the answer. The cross-entropy loss is calculated between the logits and the label position to find the most likely span of text corresponding to the answer. Ready to try your hand at question answering? Check out our complete [question answering guide](tasks/question_answering) to learn how to finetune DistilBERT and use it for inference! 💡 Notice how easy it is to use BERT for different tasks once it's been pretrained. You only need to add a specific head to the pretrained model to manipulate the hidden states into your desired output! ### Text generation [GPT-2](model_doc/gpt2) is a decoder-only model pretrained on a large amount of text. It can generate convincing (though not always true!) text given a prompt and complete other NLP tasks like question answering despite not being explicitly trained to. 1. GPT-2 uses [byte pair encoding (BPE)](tokenizer_summary#bytepair-encoding-bpe) to tokenize words and generate a token embedding. Positional encodings are added to the token embeddings to indicate the position of each token in the sequence. The input embeddings are passed through multiple decoder blocks to output some final hidden state. Within each decoder block, GPT-2 uses a *masked self-attention* layer which means GPT-2 can't attend to future tokens. It is only allowed to attend to tokens on the left. This is different from BERT's [`mask`] token because, in masked self-attention, an attention mask is used to set the score to `0` for future tokens. 2. The output from the decoder is passed to a language modeling head, which performs a linear transformation to convert the hidden states into logits. The label is the next token in the sequence, which are created by shifting the logits to the right by one. The cross-entropy loss is calculated between the shifted logits and the labels to output the next most likely token. GPT-2's pretraining objective is based entirely on [causal language modeling](glossary#causal-language-modeling), predicting the next word in a sequence. 
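As a quick illustration, here is a minimal sketch of causal generation with the publicly available `gpt2` checkpoint via the `pipeline` API (the prompt and generation length are arbitrary choices):

```py
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
print(generator("The Transformer architecture", max_new_tokens=20)[0]["generated_text"])
```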
This makes GPT-2 especially good at tasks that involve generating text. Ready to try your hand at text generation? Check out our complete [causal language modeling guide](tasks/language_modeling#causal-language-modeling) to learn how to finetune DistilGPT-2 and use it for inference! For more information about text generation, check out the [text generation strategies](generation_strategies) guide! ### Summarization Encoder-decoder models like [BART](model_doc/bart) and [T5](model_doc/t5) are designed for the sequence-to-sequence pattern of a summarization task. We'll explain how BART works in this section, and then you can try finetuning T5 at the end. 1. BART's encoder architecture is very similar to BERT and accepts a token and positional embedding of the text. BART is pretrained by corrupting the input and then reconstructing it with the decoder. Unlike other encoders with specific corruption strategies, BART can apply any type of corruption. The *text infilling* corruption strategy works the best though. In text infilling, a number of text spans are replaced with a **single** [`mask`] token. This is important because the model has to predict the masked tokens, and it teaches the model to predict the number of missing tokens. The input embeddings and masked spans are passed through the encoder to output some final hidden states, but unlike BERT, BART doesn't add a final feedforward network at the end to predict a word. 2. The encoder's output is passed to the decoder, which must predict the masked tokens and any uncorrupted tokens from the encoder's output. This gives additional context to help the decoder restore the original text. The output from the decoder is passed to a language modeling head, which performs a linear transformation to convert the hidden states into logits. The cross-entropy loss is calculated between the logits and the label, which is just the token shifted to the right. Ready to try your hand at summarization? Check out our complete [summarization guide](tasks/summarization) to learn how to finetune T5 and use it for inference! For more information about text generation, check out the [text generation strategies](generation_strategies) guide! ### Translation Translation is another example of a sequence-to-sequence task, which means you can use an encoder-decoder model like [BART](model_doc/bart) or [T5](model_doc/t5) to do it. We'll explain how BART works in this section, and then you can try finetuning T5 at the end. BART adapts to translation by adding a separate randomly initialized encoder to map a source language to an input that can be decoded into the target language. This new encoder's embeddings are passed to the pretrained encoder instead of the original word embeddings. The source encoder is trained by updating the source encoder, positional embeddings, and input embeddings with the cross-entropy loss from the model output. The model parameters are frozen in this first step, and all the model parameters are trained together in the second step. BART has since been followed up by a multilingual version, mBART, intended for translation and pretrained on many different languages. Ready to try your hand at translation? Check out our complete [translation guide](tasks/summarization) to learn how to finetune T5 and use it for inference! For more information about text generation, check out the [text generation strategies](generation_strategies) guide! " add_tensorflow_model.md," # How to convert a 🤗 Transformers model to TensorFlow? 
Having multiple frameworks available to use with 🤗 Transformers gives you flexibility to play their strengths when designing your application, but it implies that compatibility must be added on a per-model basis. The good news is that adding TensorFlow compatibility to an existing model is simpler than [adding a new model from scratch](add_new_model)! Whether you wish to have a deeper understanding of large TensorFlow models, make a major open-source contribution, or enable TensorFlow for your model of choice, this guide is for you. This guide empowers you, a member of our community, to contribute TensorFlow model weights and/or architectures to be used in 🤗 Transformers, with minimal supervision from the Hugging Face team. Writing a new model is no small feat, but hopefully this guide will make it less of a rollercoaster 🎢 and more of a walk in the park 🚶. Harnessing our collective experiences is absolutely critical to make this process increasingly easier, and thus we highly encourage that you suggest improvements to this guide! Before you dive deeper, it is recommended that you check the following resources if you're new to 🤗 Transformers: - [General overview of 🤗 Transformers](add_new_model#general-overview-of-transformers) - [Hugging Face's TensorFlow Philosophy](https://huggingface.co/blog/tensorflow-philosophy) In the remainder of this guide, you will learn what's needed to add a new TensorFlow model architecture, the procedure to convert PyTorch into TensorFlow model weights, and how to efficiently debug mismatches across ML frameworks. Let's get started! Are you unsure whether the model you wish to use already has a corresponding TensorFlow architecture?   Check the `model_type` field of the `config.json` of your model of choice ([example](https://huggingface.co/bert-base-uncased/blob/main/config.json#L14)). If the corresponding model folder in 🤗 Transformers has a file whose name starts with ""modeling_tf"", it means that it has a corresponding TensorFlow architecture ([example](https://github.com/huggingface/transformers/tree/main/src/transformers/models/bert)). ## Step-by-step guide to add TensorFlow model architecture code There are many ways to design a large model architecture, and multiple ways of implementing said design. However, you might recall from our [general overview of 🤗 Transformers](add_new_model#general-overview-of-transformers) that we are an opinionated bunch - the ease of use of 🤗 Transformers relies on consistent design choices. From experience, we can tell you a few important things about adding TensorFlow models: - Don't reinvent the wheel! More often than not, there are at least two reference implementations you should check: the PyTorch equivalent of the model you are implementing and other TensorFlow models for the same class of problems. - Great model implementations survive the test of time. This doesn't happen because the code is pretty, but rather because the code is clear, easy to debug and build upon. If you make the life of the maintainers easy with your TensorFlow implementation, by replicating the same patterns as in other TensorFlow models and minimizing the mismatch to the PyTorch implementation, you ensure your contribution will be long lived. - Ask for help when you're stuck! The 🤗 Transformers team is here to help, and we've probably found solutions to the same problems you're facing. Here's an overview of the steps needed to add a TensorFlow model architecture: 1. Select the model you wish to convert 2. 
Prepare transformers dev environment 3. (Optional) Understand theoretical aspects and the existing implementation 4. Implement the model architecture 5. Implement model tests 6. Submit the pull request 7. (Optional) Build demos and share with the world ### 1.-3. Prepare your model contribution **1. Select the model you wish to convert** Let's start off with the basics: the first thing you need to know is the architecture you want to convert. If you don't have your eyes set on a specific architecture, asking the 🤗 Transformers team for suggestions is a great way to maximize your impact - we will guide you towards the most prominent architectures that are missing on the TensorFlow side. If the specific model you want to use with TensorFlow already has a TensorFlow architecture implementation in 🤗 Transformers but is lacking weights, feel free to jump straight into the [weight conversion section](#adding-tensorflow-weights-to-hub) of this page. For simplicity, the remainder of this guide assumes you've decided to contribute with the TensorFlow version of *BrandNewBert* (the same example as in the [guide](add_new_model) to add a new model from scratch). Before starting the work on a TensorFlow model architecture, double-check that there is no ongoing effort to do so. You can search for `BrandNewBert` on the [pull request GitHub page](https://github.com/huggingface/transformers/pulls?q=is%3Apr) to confirm that there is no TensorFlow-related pull request. **2. Prepare transformers dev environment** Having selected the model architecture, open a draft PR to signal your intention to work on it. Follow the instructions below to set up your environment and open a draft PR. 1. Fork the [repository](https://github.com/huggingface/transformers) by clicking on the 'Fork' button on the repository's page. This creates a copy of the code under your GitHub user account. 2. Clone your `transformers` fork to your local disk, and add the base repository as a remote: ```bash git clone https://github.com/[your Github handle]/transformers.git cd transformers git remote add upstream https://github.com/huggingface/transformers.git 3. Set up a development environment, for instance by running the following command: ```bash python -m venv .env source .env/bin/activate pip install -e "".[dev]"" Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a failure with this command. If that's the case make sure to install TensorFlow then do: ```bash pip install -e "".[quality]"" **Note:** You don't need to have CUDA installed. Making the new model work on CPU is sufficient. 4. Create a branch with a descriptive name from your main branch ```bash git checkout -b add_tf_brand_new_bert 5. Fetch and rebase to current main ```bash git fetch upstream git rebase upstream/main 6. Add an empty `.py` file in `transformers/src/models/brandnewbert/` named `modeling_tf_brandnewbert.py`. This will be your TensorFlow model file. 7. Push the changes to your account using: ```bash git add . git commit -m ""initial commit"" git push -u origin add_tf_brand_new_bert 8. Once you are satisfied, go to the webpage of your fork on GitHub. Click on “Pull request”. Make sure to add the GitHub handle of some members of the Hugging Face team as reviewers, so that the Hugging Face team gets notified for future changes. 9. Change the PR into a draft by clicking on “Convert to draft” on the right of the GitHub pull request web page. 
Now you have set up a development environment to port *BrandNewBert* to TensorFlow in 🤗 Transformers. **3. (Optional) Understand theoretical aspects and the existing implementation** You should take some time to read *BrandNewBert's* paper, if such descriptive work exists. There might be large sections of the paper that are difficult to understand. If this is the case, this is fine - don't worry! The goal is not to get a deep theoretical understanding of the paper, but to extract the necessary information required to effectively re-implement the model in 🤗 Transformers using TensorFlow. That being said, you don't have to spend too much time on the theoretical aspects, but rather focus on the practical ones, namely the existing model documentation page (e.g. [model docs for BERT](model_doc/bert)). After you've grasped the basics of the models you are about to implement, it's important to understand the existing implementation. This is a great chance to confirm that a working implementation matches your expectations for the model, as well as to foresee technical challenges on the TensorFlow side. It's perfectly natural that you feel overwhelmed with the amount of information that you've just absorbed. It is definitely not a requirement that you understand all facets of the model at this stage. Nevertheless, we highly encourage you to clear any pressing questions in our [forum](https://discuss.huggingface.co/). ### 4. Model implementation Now it's time to finally start coding. Our suggested starting point is the PyTorch file itself: copy the contents of `modeling_brand_new_bert.py` inside `src/transformers/models/brand_new_bert/` into `modeling_tf_brand_new_bert.py`. The goal of this section is to modify the file and update the import structure of 🤗 Transformers such that you can import `TFBrandNewBert` and `TFBrandNewBert.from_pretrained(model_repo, from_pt=True)` successfully loads a working TensorFlow *BrandNewBert* model. Sadly, there is no prescription to convert a PyTorch model into TensorFlow. You can, however, follow our selection of tips to make the process as smooth as possible: - Prepend `TF` to the name of all classes (e.g. `BrandNewBert` becomes `TFBrandNewBert`). - Most PyTorch operations have a direct TensorFlow replacement. For example, `torch.nn.Linear` corresponds to `tf.keras.layers.Dense`, `torch.nn.Dropout` corresponds to `tf.keras.layers.Dropout`, etc. If you're not sure about a specific operation, you can use the [TensorFlow documentation](https://www.tensorflow.org/api_docs/python/tf) or the [PyTorch documentation](https://pytorch.org/docs/stable/). - Look for patterns in the 🤗 Transformers codebase. If you come across a certain operation that doesn't have a direct replacement, the odds are that someone else already had the same problem. - By default, keep the same variable names and structure as in PyTorch. This will make it easier to debug, track issues, and add fixes down the line. - Some layers have different default values in each framework. A notable example is the batch normalization layer's epsilon (`1e-5` in [PyTorch](https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html#torch.nn.BatchNorm2d) and `1e-3` in [TensorFlow](https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization)). Double-check the documentation! - PyTorch's `nn.Parameter` variables typically need to be initialized within TF Layer's `build()`. 
See the following example: [PyTorch](https://github.com/huggingface/transformers/blob/655f72a6896c0533b1bdee519ed65a059c2425ac/src/transformers/models/vit_mae/modeling_vit_mae.py#L212) / [TensorFlow](https://github.com/huggingface/transformers/blob/655f72a6896c0533b1bdee519ed65a059c2425ac/src/transformers/models/vit_mae/modeling_tf_vit_mae.py#L220) - If the PyTorch model has a `#copied from ` on top of a function, the odds are that your TensorFlow model can also borrow that function from the architecture it was copied from, assuming it has a TensorFlow architecture. - Assigning the `name` attribute correctly in TensorFlow functions is critical to do the `from_pt=True` weight cross-loading. `name` is almost always the name of the corresponding variable in the PyTorch code. If `name` is not properly set, you will see it in the error message when loading the model weights. - The logic of the base model class, `BrandNewBertModel`, will actually reside in `TFBrandNewBertMainLayer`, a Keras layer subclass ([example](https://github.com/huggingface/transformers/blob/4fd32a1f499e45f009c2c0dea4d81c321cba7e02/src/transformers/models/bert/modeling_tf_bert.py#L719)). `TFBrandNewBertModel` will simply be a wrapper around this layer. - Keras models need to be built in order to load pretrained weights. For that reason, `TFBrandNewBertPreTrainedModel` will need to hold an example of inputs to the model, the `dummy_inputs` ([example](https://github.com/huggingface/transformers/blob/4fd32a1f499e45f009c2c0dea4d81c321cba7e02/src/transformers/models/bert/modeling_tf_bert.py#L916)). - If you get stuck, ask for help - we're here to help you! 🤗 In addition to the model file itself, you will also need to add the pointers to the model classes and related documentation pages. You can complete this part entirely following the patterns in other PRs ([example](https://github.com/huggingface/transformers/pull/18020/files)). Here's a list of the needed manual changes: - Include all public classes of *BrandNewBert* in `src/transformers/__init__.py` - Add *BrandNewBert* classes to the corresponding Auto classes in `src/transformers/models/auto/modeling_tf_auto.py` - Add the lazy loading classes related to *BrandNewBert* in `src/transformers/utils/dummy_tf_objects.py` - Update the import structures for the public classes in `src/transformers/models/brand_new_bert/__init__.py` - Add the documentation pointers to the public methods of *BrandNewBert* in `docs/source/en/model_doc/brand_new_bert.md` - Add yourself to the list of contributors to *BrandNewBert* in `docs/source/en/model_doc/brand_new_bert.md` - Finally, add a green tick ✅ to the TensorFlow column of *BrandNewBert* in `docs/source/en/index.md` When you're happy with your implementation, run the following checklist to confirm that your model architecture is ready: 1. All layers that behave differently at train time (e.g. Dropout) are called with a `training` argument, which is propagated all the way from the top-level classes 2. You have used `#copied from ` whenever possible 3. `TFBrandNewBertMainLayer` and all classes that use it have their `call` function decorated with `@unpack_inputs` 4. `TFBrandNewBertMainLayer` is decorated with `@keras_serializable` 5. A TensorFlow model can be loaded from PyTorch weights using `TFBrandNewBert.from_pretrained(model_repo, from_pt=True)` 6. You can call the TensorFlow model using the expected input format ### 5. Add model tests Hurray, you've implemented a TensorFlow model! 
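Before writing formal tests, a quick sanity check many contributors find useful is to cross-load the PyTorch weights and compare outputs. A minimal sketch using BERT as a stand-in for your new architecture (the checkpoint name is just a public example):

```py
import numpy as np
import torch
from transformers import AutoTokenizer, BertModel, TFBertModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

pt_model = BertModel.from_pretrained("bert-base-uncased")
tf_model = TFBertModel.from_pretrained("bert-base-uncased", from_pt=True)

pt_inputs = tokenizer("Hello, TensorFlow!", return_tensors="pt")
tf_inputs = tokenizer("Hello, TensorFlow!", return_tensors="tf")

with torch.no_grad():
    pt_out = pt_model(**pt_inputs).last_hidden_state.numpy()
tf_out = tf_model(**tf_inputs).last_hidden_state.numpy()

# aim for a maximum absolute difference below 1e-5
print("max abs difference:", np.abs(pt_out - tf_out).max())
```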
Now it's time to add tests to make sure that your model behaves as expected. As in the previous section, we suggest you start by copying the `test_modeling_brand_new_bert.py` file in `tests/models/brand_new_bert/` into `test_modeling_tf_brand_new_bert.py`, and continue by making the necessary TensorFlow replacements. For now, in all `.from_pretrained()` calls, you should use the `from_pt=True` flag to load the existing PyTorch weights. After you're done, it's time for the moment of truth: run the tests! 😬 ```bash NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \ py.test -vv tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py The most likely outcome is that you'll see a bunch of errors. Don't worry, this is expected! Debugging ML models is notoriously hard, and the key ingredient to success is patience (and `breakpoint()`). In our experience, the hardest problems arise from subtle mismatches between ML frameworks, for which we have a few pointers at the end of this guide. In other cases, a general test might not be directly applicable to your model, in which case we suggest an override at the model test class level. Regardless of the issue, don't hesitate to ask for help in your draft pull request if you're stuck. When all tests pass, congratulations, your model is nearly ready to be added to the 🤗 Transformers library! 🎉 ### 6.-7. Ensure everyone can use your model **6. Submit the pull request** Once you're done with the implementation and the tests, it's time to submit a pull request. Before pushing your code, run our code formatting utility, `make fixup` 🪄. This will automatically fix any formatting issues, which would cause our automatic checks to fail. It's now time to convert your draft pull request into a real pull request. To do so, click on the ""Ready for review"" button and add Joao (`@gante`) and Matt (`@Rocketknight1`) as reviewers. A model pull request will need at least 3 reviewers, but they will take care of finding appropriate additional reviewers for your model. After all reviewers are happy with the state of your PR, the final action point is to remove the `from_pt=True` flag in `.from_pretrained()` calls. Since there are no TensorFlow weights, you will have to add them! Check the section below for instructions on how to do it. Finally, when the TensorFlow weights get merged, you have at least 3 reviewer approvals, and all CI checks are green, double-check the tests locally one last time ```bash NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \ py.test -vv tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py and we will merge your PR! Congratulations on the milestone 🎉 **7. (Optional) Build demos and share with the world** One of the hardest parts about open-source is discovery. How can the other users learn about the existence of your fabulous TensorFlow contribution? With proper communication, of course! 📣 There are two main ways to share your model with the community: - Build demos. These include Gradio demos, notebooks, and other fun ways to show off your model. We highly encourage you to add a notebook to our [community-driven demos](https://huggingface.co/docs/transformers/community). - Share stories on social media like Twitter and LinkedIn. You should be proud of your work and share your achievement with the community - your model can now be used by thousands of engineers and researchers around the world 🌍! We will be happy to retweet your posts and help you share your work with the community. 
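For the demo route, a Gradio app can be just a few lines. Here is a hedged sketch with a generic public checkpoint; in practice you would swap in your newly added TensorFlow model:

```py
import gradio as gr
from transformers import pipeline

# any public checkpoint works here; replace it with your new TensorFlow model
pipe = pipeline("text-classification", model="distilbert-base-uncased-finetuned-sst-2-english")

def classify(text):
    return pipe(text)[0]["label"]

gr.Interface(fn=classify, inputs="text", outputs="text").launch()
```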
## Adding TensorFlow weights to 🤗 Hub Assuming that the TensorFlow model architecture is available in 🤗 Transformers, converting PyTorch weights into TensorFlow weights is a breeze! Here's how to do it: 1. Make sure you are logged into your Hugging Face account in your terminal. You can log in using the command `huggingface-cli login` (you can find your access tokens [here](https://huggingface.co/settings/tokens)) 2. Run `transformers-cli pt-to-tf --model-name foo/bar`, where `foo/bar` is the name of the model repository containing the PyTorch weights you want to convert 3. Tag `@joaogante` and `@Rocketknight1` in the 🤗 Hub PR the command above has just created That's it! 🎉 ## Debugging mismatches across ML frameworks 🐛 At some point, when adding a new architecture or when creating TensorFlow weights for an existing architecture, you might come across errors complaining about mismatches between PyTorch and TensorFlow. You might even decide to open the model architecture code for the two frameworks, and find that they look identical. What's going on? 🤔 First of all, let's talk about why understanding these mismatches matters. Many community members will use 🤗 Transformers models out of the box, and trust that our models behave as expected. When there is a large mismatch between the two frameworks, it implies that the model is not following the reference implementation for at least one of the frameworks. This might lead to silent failures, in which the model runs but has poor performance. This is arguably worse than a model that fails to run at all! To that end, we aim at having a framework mismatch smaller than `1e-5` at all stages of the model. As in other numerical problems, the devil is in the details. And as in any detail-oriented craft, the secret ingredient here is patience. Here is our suggested workflow for when you come across this type of issues: 1. Locate the source of mismatches. The model you're converting probably has near identical inner variables up to a certain point. Place `breakpoint()` statements in the two frameworks' architectures, and compare the values of the numerical variables in a top-down fashion until you find the source of the problems. 2. Now that you've pinpointed the source of the issue, get in touch with the 🤗 Transformers team. It is possible that we've seen a similar problem before and can promptly provide a solution. As a fallback, scan popular pages like StackOverflow and GitHub issues. 3. If there is no solution in sight, it means you'll have to go deeper. The good news is that you've located the issue, so you can focus on the problematic instruction, abstracting away the rest of the model! The bad news is that you'll have to venture into the source implementation of said instruction. In some cases, you might find an issue with a reference implementation - don't abstain from opening an issue in the upstream repository. In some cases, in discussion with the 🤗 Transformers team, we might find that fixing the mismatch is infeasible. When the mismatch is very small in the output layers of the model (but potentially large in the hidden states), we might decide to ignore it in favor of distributing the model. The `pt-to-tf` CLI mentioned above has a `--max-error` flag to override the error message at weight conversion time. " tasks/audio_classification.md," # Audio classification [[open-in-colab]] Audio classification - just like with text - assigns a class label output from the input data. 
The only difference is instead of text inputs, you have raw audio waveforms. Some practical applications of audio classification include identifying speaker intent, language classification, and even animal species by their sounds. This guide will show you how to: 1. Finetune [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) on the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset to classify speaker intent. 2. Use your finetuned model for inference. The task illustrated in this tutorial is supported by the following model architectures: [Audio Spectrogram Transformer](../model_doc/audio-spectrogram-transformer), [Data2VecAudio](../model_doc/data2vec-audio), [Hubert](../model_doc/hubert), [SEW](../model_doc/sew), [SEW-D](../model_doc/sew-d), [UniSpeech](../model_doc/unispeech), [UniSpeechSat](../model_doc/unispeech-sat), [Wav2Vec2](../model_doc/wav2vec2), [Wav2Vec2-Conformer](../model_doc/wav2vec2-conformer), [WavLM](../model_doc/wavlm), [Whisper](../model_doc/whisper) Before you begin, make sure you have all the necessary libraries installed: ```bash pip install transformers datasets evaluate We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login: >>> from huggingface_hub import notebook_login >>> notebook_login() ## Load MInDS-14 dataset Start by loading the MInDS-14 dataset from the 🤗 Datasets library: >>> from datasets import load_dataset, Audio >>> minds = load_dataset(""PolyAI/minds14"", name=""en-US"", split=""train"") Split the dataset's `train` split into a smaller train and test set with the [`~datasets.Dataset.train_test_split`] method. This'll give you a chance to experiment and make sure everything works before spending more time on the full dataset. >>> minds = minds.train_test_split(test_size=0.2) Then take a look at the dataset: >>> minds DatasetDict({ train: Dataset({ features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'], num_rows: 450 }) test: Dataset({ features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'], num_rows: 113 }) }) While the dataset contains a lot of useful information, like `lang_id` and `english_transcription`, you'll focus on the `audio` and `intent_class` in this guide. Remove the other columns with the [`~datasets.Dataset.remove_columns`] method: >>> minds = minds.remove_columns([""path"", ""transcription"", ""english_transcription"", ""lang_id""]) Take a look at an example now: >>> minds[""train""][0] {'audio': {'array': array([ 0. , 0. , 0. , , -0.00048828, -0.00024414, -0.00024414], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602b9a5fbb1e6d0fbce91f52.wav', 'sampling_rate': 8000}, 'intent_class': 2} There are two fields: - `audio`: a 1-dimensional `array` of the speech signal that must be called to load and resample the audio file. - `intent_class`: represents the class id of the speaker's intent. 
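As a quick sanity check (a small sketch, not a required step of this guide), you can confirm the sampling rate and compute the clip duration directly from the raw array:

```python
# Inspect one example loaded above; MInDS-14 audio is sampled at 8kHz before resampling.
example = minds["train"][0]["audio"]

sampling_rate = example["sampling_rate"]
duration_seconds = len(example["array"]) / sampling_rate

print(f"{sampling_rate} Hz, {duration_seconds:.2f} seconds of audio")
```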
To make it easier for the model to get the label name from the label id, create a dictionary that maps the label name to an integer and vice versa: >>> labels = minds[""train""].features[""intent_class""].names >>> label2id, id2label = dict(), dict() >>> for i, label in enumerate(labels): label2id[label] = str(i) id2label[str(i)] = label Now you can convert the label id to a label name: >>> id2label[str(2)] 'app_error' ## Preprocess The next step is to load a Wav2Vec2 feature extractor to process the audio signal: >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained(""facebook/wav2vec2-base"") The MInDS-14 dataset has a sampling rate of 8kHz (you can find this information in its [dataset card](https://huggingface.co/datasets/PolyAI/minds14)), which means you'll need to resample the dataset to 16kHz to use the pretrained Wav2Vec2 model: >>> minds = minds.cast_column(""audio"", Audio(sampling_rate=16_000)) >>> minds[""train""][0] {'audio': {'array': array([ 2.2098757e-05, 4.6582241e-05, -2.2803260e-05, , -2.8419291e-04, -2.3305941e-04, -1.1425107e-04], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602b9a5fbb1e6d0fbce91f52.wav', 'sampling_rate': 16000}, 'intent_class': 2} Now create a preprocessing function that: 1. Calls the `audio` column to load, and if necessary, resample the audio file. 2. Checks if the sampling rate of the audio file matches the sampling rate of the audio data a model was pretrained with. You can find this information in the Wav2Vec2 [model card](https://huggingface.co/facebook/wav2vec2-base). 3. Sets a maximum input length so longer inputs are truncated to a consistent size for batching. >>> def preprocess_function(examples): audio_arrays = [x[""array""] for x in examples[""audio""]] inputs = feature_extractor( audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=16000, truncation=True ) return inputs To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] function. You can speed up `map` by setting `batched=True` to process multiple elements of the dataset at once. Remove the columns you don't need, and rename `intent_class` to `label` because that's the name the model expects: >>> encoded_minds = minds.map(preprocess_function, remove_columns=""audio"", batched=True) >>> encoded_minds = encoded_minds.rename_column(""intent_class"", ""label"") ## Evaluate Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library.
For this task, load the [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric): >>> import evaluate >>> accuracy = evaluate.load(""accuracy"") Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the accuracy: >>> import numpy as np >>> def compute_metrics(eval_pred): predictions = np.argmax(eval_pred.predictions, axis=1) return accuracy.compute(predictions=predictions, references=eval_pred.label_ids) Your `compute_metrics` function is ready to go now, and you'll return to it when you setup your training. ## Train If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)! You're ready to start training your model now! Load Wav2Vec2 with [`AutoModelForAudioClassification`] along with the number of expected labels, and the label mappings: >>> from transformers import AutoModelForAudioClassification, TrainingArguments, Trainer >>> num_labels = len(id2label) >>> model = AutoModelForAudioClassification.from_pretrained( ""facebook/wav2vec2-base"", num_labels=num_labels, label2id=label2id, id2label=id2label ) At this point, only three steps remain: 1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the accuracy and save the training checkpoint. 2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function. 3. Call [`~Trainer.train`] to finetune your model. >>> training_args = TrainingArguments( output_dir=""my_awesome_mind_model"", evaluation_strategy=""epoch"", save_strategy=""epoch"", learning_rate=3e-5, per_device_train_batch_size=32, gradient_accumulation_steps=4, per_device_eval_batch_size=32, num_train_epochs=10, warmup_ratio=0.1, logging_steps=10, load_best_model_at_end=True, metric_for_best_model=""accuracy"", push_to_hub=True, ) >>> trainer = Trainer( model=model, args=training_args, train_dataset=encoded_minds[""train""], eval_dataset=encoded_minds[""test""], tokenizer=feature_extractor, compute_metrics=compute_metrics, ) >>> trainer.train() Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model: >>> trainer.push_to_hub() For a more in-depth example of how to finetune a model for audio classification, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb). ## Inference Great, now that you've finetuned a model, you can use it for inference! Load an audio file you'd like to run inference on. Remember to resample the sampling rate of the audio file to match the sampling rate of the model if you need to! 
>>> from datasets import load_dataset, Audio >>> dataset = load_dataset(""PolyAI/minds14"", name=""en-US"", split=""train"") >>> dataset = dataset.cast_column(""audio"", Audio(sampling_rate=16000)) >>> sampling_rate = dataset.features[""audio""].sampling_rate >>> audio_file = dataset[0][""audio""][""path""] The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for audio classification with your model, and pass your audio file to it: >>> from transformers import pipeline >>> classifier = pipeline(""audio-classification"", model=""stevhliu/my_awesome_minds_model"") >>> classifier(audio_file) [ {'score': 0.09766869246959686, 'label': 'cash_deposit'}, {'score': 0.07998877018690109, 'label': 'app_error'}, {'score': 0.0781070664525032, 'label': 'joint_account'}, {'score': 0.07667109370231628, 'label': 'pay_bill'}, {'score': 0.0755252093076706, 'label': 'balance'} ] You can also manually replicate the results of the `pipeline` if you'd like: Load a feature extractor to preprocess the audio file and return the `input` as PyTorch tensors: >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained(""stevhliu/my_awesome_minds_model"") >>> inputs = feature_extractor(dataset[0][""audio""][""array""], sampling_rate=sampling_rate, return_tensors=""pt"") Pass your inputs to the model and return the logits: >>> from transformers import AutoModelForAudioClassification >>> model = AutoModelForAudioClassification.from_pretrained(""stevhliu/my_awesome_minds_model"") >>> with torch.no_grad(): logits = model(**inputs).logits Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a label: >>> import torch >>> predicted_class_ids = torch.argmax(logits).item() >>> predicted_label = model.config.id2label[predicted_class_ids] >>> predicted_label 'cash_deposit' " tasks/prompting.md," # LLM prompting guide [[open-in-colab]] Large Language Models such as Falcon, LLaMA, etc. are pretrained transformer models initially trained to predict the next token given some input text. They typically have billions of parameters and have been trained on trillions of tokens for an extended period of time. As a result, these models become quite powerful and versatile, and you can use them to solve multiple NLP tasks out of the box by instructing the models with natural language prompts. Designing such prompts to ensure the optimal output is often called ""prompt engineering"". Prompt engineering is an iterative process that requires a fair amount of experimentation. Natural languages are much more flexible and expressive than programming languages, however, they can also introduce some ambiguity. At the same time, prompts in natural language are quite sensitive to changes. Even minor modifications in prompts can lead to wildly different outputs. While there is no exact recipe for creating prompts to match all cases, researchers have worked out a number of best practices that help to achieve optimal results more consistently. This guide covers the prompt engineering best practices to help you craft better LLM prompts and solve various NLP tasks. 
You'll learn: - [Basics of prompting](#basic-prompts) - [Best practices of LLM prompting](#best-practices-of-llm-prompting) - [Advanced prompting techniques: few-shot prompting and chain-of-thought](#advanced-prompting-techniques) - [When to fine-tune instead of prompting](#prompting-vs-fine-tuning) Prompt engineering is only a part of the LLM output optimization process. Another essential component is choosing the optimal text generation strategy. You can customize how your LLM selects each of the subsequent tokens when generating the text without modifying any of the trainable parameters. By tweaking the text generation parameters, you can reduce repetition in the generated text and make it more coherent and human-sounding. Text generation strategies and parameters are out of scope for this guide, but you can learn more about these topics in the following guides: * [Generation with LLMs](../llm_tutorial) * [Text generation strategies](../generation_strategies) ## Basics of prompting ### Types of models The majority of modern LLMs are decoder-only transformers. Some examples include: [LLaMA](../model_doc/llama), [Llama2](../model_doc/llama2), [Falcon](../model_doc/falcon), [GPT2](../model_doc/gpt2). However, you may encounter encoder-decoder transformer LLMs as well, for instance, [Flan-T5](../model_doc/flan-t5) and [BART](../model_doc/bart). Encoder-decoder-style models are typically used in generative tasks where the output **heavily** relies on the input, for example, in translation and summarization. The decoder-only models are used for all other types of generative tasks. When using a pipeline to generate text with an LLM, it's important to know what type of LLM you are using, because they use different pipelines. Run inference with decoder-only models with the `text-generation` pipeline: thon >>> from transformers import pipeline >>> import torch >>> torch.manual_seed(0) # doctest: +IGNORE_RESULT >>> generator = pipeline('text-generation', model = 'gpt2') >>> prompt = ""Hello, I'm a language model"" >>> generator(prompt, max_length = 30) [{'generated_text': ""Hello, I'm a language model expert, so I'm a big believer in the concept that I know very well and then I try to look into""}] To run inference with an encoder-decoder, use the `text2text-generation` pipeline: thon >>> text2text_generator = pipeline(""text2text-generation"", model = 'google/flan-t5-base') >>> prompt = ""Translate from English to French: I'm very happy to see you"" >>> text2text_generator(prompt) [{'generated_text': 'Je suis très heureuse de vous rencontrer.'}] ### Base vs instruct/chat models Most of the recent LLM checkpoints available on 🤗 Hub come in two versions: base and instruct (or chat). For example, [`tiiuae/falcon-7b`](https://huggingface.co/tiiuae/falcon-7b) and [`tiiuae/falcon-7b-instruct`](https://huggingface.co/tiiuae/falcon-7b-instruct). Base models are excellent at completing the text when given an initial prompt, however, they are not ideal for NLP tasks where they need to follow instructions, or for conversational use. This is where the instruct (chat) versions come in. These checkpoints are the result of further fine-tuning of the pre-trained base versions on instructions and conversational data. This additional fine-tuning makes them a better choice for many NLP tasks. Let's illustrate some simple prompts that you can use with [`tiiuae/falcon-7b-instruct`](https://huggingface.co/tiiuae/falcon-7b-instruct) to solve some common NLP tasks. 
### NLP tasks First, let's set up the environment: ```bash pip install -q transformers accelerate Next, let's load the model with the appropriate pipeline (`""text-generation""`): thon >>> from transformers import pipeline, AutoTokenizer >>> import torch >>> torch.manual_seed(0) # doctest: +IGNORE_RESULT >>> model = ""tiiuae/falcon-7b-instruct"" >>> tokenizer = AutoTokenizer.from_pretrained(model) >>> pipe = pipeline( ""text-generation"", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, device_map=""auto"", ) Note that Falcon models were trained using the `bfloat16` datatype, so we recommend you use the same. This requires a recent version of CUDA and works best on modern cards. Now that we have the model loaded via the pipeline, let's explore how you can use prompts to solve NLP tasks. #### Text classification One of the most common forms of text classification is sentiment analysis, which assigns a label like ""positive"", ""negative"", or ""neutral"" to a sequence of text. Let's write a prompt that instructs the model to classify a given text (a movie review). We'll start by giving the instruction, and then specifying the text to classify. Note that instead of leaving it at that, we're also adding the beginning of the response - `""Sentiment: ""`: thon >>> torch.manual_seed(0) # doctest: +IGNORE_RESULT >>> prompt = """"""Classify the text into neutral, negative or positive. Text: This movie is definitely one of my favorite movies of its kind. The interaction between respectable and morally strong characters is an ode to chivalry and the honor code amongst thieves and policemen. Sentiment: """""" >>> sequences = pipe( prompt, max_new_tokens=10, ) >>> for seq in sequences: print(f""Result: {seq['generated_text']}"") Result: Classify the text into neutral, negative or positive. Text: This movie is definitely one of my favorite movies of its kind. The interaction between respectable and morally strong characters is an ode to chivalry and the honor code amongst thieves and policemen. Sentiment: Positive As a result, the output contains a classification label from the list we have provided in the instructions, and it is a correct one! You may notice that in addition to the prompt, we pass a `max_new_tokens` parameter. It controls the number of tokens the model shall generate, and it is one of the many text generation parameters that you can learn about in [Text generation strategies](../generation_strategies) guide. #### Named Entity Recognition Named Entity Recognition (NER) is a task of finding named entities in a piece of text, such as a person, location, or organization. Let's modify the instructions in the prompt to make the LLM perform this task. Here, let's also set `return_full_text = False` so that output doesn't contain the prompt: thon >>> torch.manual_seed(1) # doctest: +IGNORE_RESULT >>> prompt = """"""Return a list of named entities in the text. Text: The Golden State Warriors are an American professional basketball team based in San Francisco. Named entities: """""" >>> sequences = pipe( prompt, max_new_tokens=15, return_full_text = False, ) >>> for seq in sequences: print(f""{seq['generated_text']}"") - Golden State Warriors - San Francisco As you can see, the model correctly identified two named entities from the given text. #### Translation Another task LLMs can perform is translation. You can choose to use encoder-decoder models for this task, however, here, for the simplicity of the examples, we'll keep using Falcon-7b-instruct, which does a decent job. 
Once again, here's how you can write a basic prompt to instruct a model to translate a piece of text from English to Italian: thon >>> torch.manual_seed(2) # doctest: +IGNORE_RESULT >>> prompt = """"""Translate the English text to Italian. Text: Sometimes, I've believed as many as six impossible things before breakfast. Translation: """""" >>> sequences = pipe( prompt, max_new_tokens=20, do_sample=True, top_k=10, return_full_text = False, ) >>> for seq in sequences: print(f""{seq['generated_text']}"") A volte, ho creduto a sei impossibili cose prima di colazione. Here we've added a `do_sample=True` and `top_k=10` to allow the model to be a bit more flexible when generating output. #### Text summarization Similar to the translation, text summarization is another generative task where the output **heavily** relies on the input, and encoder-decoder models can be a better choice. However, decoder-style models can be used for this task as well. Previously, we have placed the instructions at the very beginning of the prompt. However, the very end of the prompt can also be a suitable location for instructions. Typically, it's better to place the instruction on one of the extreme ends. thon >>> torch.manual_seed(3) # doctest: +IGNORE_RESULT >>> prompt = """"""Permaculture is a design process mimicking the diversity, functionality and resilience of natural ecosystems. The principles and practices are drawn from traditional ecological knowledge of indigenous cultures combined with modern scientific understanding and technological innovations. Permaculture design provides a framework helping individuals and communities develop innovative, creative and effective strategies for meeting basic needs while preparing for and mitigating the projected impacts of climate change. Write a summary of the above text. Summary: """""" >>> sequences = pipe( prompt, max_new_tokens=30, do_sample=True, top_k=10, return_full_text = False, ) >>> for seq in sequences: print(f""{seq['generated_text']}"") Permaculture is an ecological design mimicking natural ecosystems to meet basic needs and prepare for climate change. It is based on traditional knowledge and scientific understanding. #### Question answering For question answering task we can structure the prompt into the following logical components: instructions, context, question, and the leading word or phrase (`""Answer:""`) to nudge the model to start generating the answer: thon >>> torch.manual_seed(4) # doctest: +IGNORE_RESULT >>> prompt = """"""Answer the question using the context below. Context: Gazpacho is a cold soup and drink made of raw, blended vegetables. Most gazpacho includes stale bread, tomato, cucumbers, onion, bell peppers, garlic, olive oil, wine vinegar, water, and salt. Northern recipes often include cumin and/or pimentón (smoked sweet paprika). Traditionally, gazpacho was made by pounding the vegetables in a mortar with a pestle; this more laborious method is still sometimes used as it helps keep the gazpacho cool and avoids the foam and silky consistency of smoothie versions made in blenders or food processors. Question: What modern tool is used to make gazpacho? 
Answer: """""" >>> sequences = pipe( prompt, max_new_tokens=10, do_sample=True, top_k=10, return_full_text = False, ) >>> for seq in sequences: print(f""Result: {seq['generated_text']}"") Result: Modern tools are used, such as immersion blenders #### Reasoning Reasoning is one of the most difficult tasks for LLMs, and achieving good results often requires applying advanced prompting techniques, like [Chain-of-though](#chain-of-thought). Let's try if we can make a model reason about a simple arithmetics task with a basic prompt: thon >>> torch.manual_seed(5) # doctest: +IGNORE_RESULT >>> prompt = """"""There are 5 groups of students in the class. Each group has 4 students. How many students are there in the class?"""""" >>> sequences = pipe( prompt, max_new_tokens=30, do_sample=True, top_k=10, return_full_text = False, ) >>> for seq in sequences: print(f""Result: {seq['generated_text']}"") Result: There are a total of 5 groups, so there are 5 x 4=20 students in the class. Correct! Let's increase the complexity a little and see if we can still get away with a basic prompt: thon >>> torch.manual_seed(6) # doctest: +IGNORE_RESULT >>> prompt = """"""I baked 15 muffins. I ate 2 muffins and gave 5 muffins to a neighbor. My partner then bought 6 more muffins and ate 2. How many muffins do we now have?"""""" >>> sequences = pipe( prompt, max_new_tokens=10, do_sample=True, top_k=10, return_full_text = False, ) >>> for seq in sequences: print(f""Result: {seq['generated_text']}"") Result: The total number of muffins now is 21 This is a wrong answer, it should be 12. In this case, this can be due to the prompt being too basic, or due to the choice of model, after all we've picked the smallest version of Falcon. Reasoning is difficult for models of all sizes, but larger models are likely to perform better. ## Best practices of LLM prompting In this section of the guide we have compiled a list of best practices that tend to improve the prompt results: * When choosing the model to work with, the latest and most capable models are likely to perform better. * Start with a simple and short prompt, and iterate from there. * Put the instructions at the beginning of the prompt, or at the very end. When working with large context, models apply various optimizations to prevent Attention complexity from scaling quadratically. This may make a model more attentive to the beginning or end of a prompt than the middle. * Clearly separate instructions from the text they apply to - more on this in the next section. * Be specific and descriptive about the task and the desired outcome - its format, length, style, language, etc. * Avoid ambiguous descriptions and instructions. * Favor instructions that say ""what to do"" instead of those that say ""what not to do"". * ""Lead"" the output in the right direction by writing the first word (or even begin the first sentence for the model). * Use advanced techniques like [Few-shot prompting](#few-shot-prompting) and [Chain-of-thought](#chain-of-thought) * Test your prompts with different models to assess their robustness. * Version and track the performance of your prompts. ## Advanced prompting techniques ### Few-shot prompting The basic prompts in the sections above are the examples of ""zero-shot"" prompts, meaning, the model has been given instructions and context, but no examples with solutions. LLMs that have been fine-tuned on instruction datasets, generally perform well on such ""zero-shot"" tasks. 
However, you may find that your task has more complexity or nuance, and, perhaps, you have some requirements for the output that the model doesn't catch on just from the instructions. In this case, you can try the technique called few-shot prompting. In few-shot prompting, we provide examples in the prompt giving the model more context to improve the performance. The examples condition the model to generate the output following the patterns in the examples. Here's an example: thon >>> torch.manual_seed(0) # doctest: +IGNORE_RESULT >>> prompt = """"""Text: The first human went into space and orbited the Earth on April 12, 1961. Date: 04/12/1961 Text: The first-ever televised presidential debate in the United States took place on September 28, 1960, between presidential candidates John F. Kennedy and Richard Nixon. Date:"""""" >>> sequences = pipe( prompt, max_new_tokens=8, do_sample=True, top_k=10, ) >>> for seq in sequences: print(f""Result: {seq['generated_text']}"") Result: Text: The first human went into space and orbited the Earth on April 12, 1961. Date: 04/12/1961 Text: The first-ever televised presidential debate in the United States took place on September 28, 1960, between presidential candidates John F. Kennedy and Richard Nixon. Date: 09/28/1960 In the above code snippet we used a single example to demonstrate the desired output to the model, so this can be called a ""one-shot"" prompting. However, depending on the task complexity you may need to use more than one example. Limitations of the few-shot prompting technique: - While LLMs can pick up on the patterns in the examples, these technique doesn't work well on complex reasoning tasks - Few-shot prompting requires creating lengthy prompts. Prompts with large number of tokens can increase computation and latency. There's also a limit to the length of the prompts. - Sometimes when given a number of examples, models can learn patterns that you didn't intend them to learn, e.g. that the third movie review is always negative. ### Chain-of-thought Chain-of-thought (CoT) prompting is a technique that nudges a model to produce intermediate reasoning steps thus improving the results on complex reasoning tasks. There are two ways of steering a model to producing the reasoning steps: - few-shot prompting by illustrating examples with detailed answers to questions, showing the model how to work through a problem. - by instructing the model to reason by adding phrases like ""Let's think step by step"" or ""Take a deep breath and work through the problem step by step."" If we apply the CoT technique to the muffins example from the [reasoning section](#reasoning) and use a larger model, such as (`tiiuae/falcon-180B-chat`) which you can play with in the [HuggingChat](https://huggingface.co/chat/), we'll get a significant improvement on the reasoning result: ```text Let's go through this step-by-step: 1. You start with 15 muffins. 2. You eat 2 muffins, leaving you with 13 muffins. 3. You give 5 muffins to your neighbor, leaving you with 8 muffins. 4. Your partner buys 6 more muffins, bringing the total number of muffins to 14. 5. Your partner eats 2 muffins, leaving you with 12 muffins. Therefore, you now have 12 muffins. ## Prompting vs fine-tuning You can achieve great results by optimizing your prompts, however, you may still ponder whether fine-tuning a model would work better for your case. 
Here are some scenarios when fine-tuning a smaller model may be a preferred option: - Your domain is wildly different from what LLMs were pre-trained on and extensive prompt optimization did not yield sufficient results. - You need your model to work well in a low-resource language. - You need the model to be trained on sensitive data that is under strict regulations. - You have to use a small model due to cost, privacy, infrastructure or other limitations. In all of the above examples, you will need to make sure that you either already have or can easily obtain a large enough domain-specific dataset at a reasonable cost to fine-tune a model. You will also need to have enough time and resources to fine-tune a model. If the above examples are not the case for you, optimizing prompts can prove to be more beneficial. " tasks/asr.md," # Automatic speech recognition [[open-in-colab]] Automatic speech recognition (ASR) converts a speech signal to text, mapping a sequence of audio inputs to text outputs. Virtual assistants like Siri and Alexa use ASR models to help users everyday, and there are many other useful user-facing applications like live captioning and note-taking during meetings. This guide will show you how to: 1. Finetune [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) on the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset to transcribe audio to text. 2. Use your finetuned model for inference. The task illustrated in this tutorial is supported by the following model architectures: [Data2VecAudio](../model_doc/data2vec-audio), [Hubert](../model_doc/hubert), [M-CTC-T](../model_doc/mctct), [SEW](../model_doc/sew), [SEW-D](../model_doc/sew-d), [UniSpeech](../model_doc/unispeech), [UniSpeechSat](../model_doc/unispeech-sat), [Wav2Vec2](../model_doc/wav2vec2), [Wav2Vec2-Conformer](../model_doc/wav2vec2-conformer), [WavLM](../model_doc/wavlm) Before you begin, make sure you have all the necessary libraries installed: ```bash pip install transformers datasets evaluate jiwer We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login: >>> from huggingface_hub import notebook_login >>> notebook_login() ## Load MInDS-14 dataset Start by loading a smaller subset of the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset. >>> from datasets import load_dataset, Audio >>> minds = load_dataset(""PolyAI/minds14"", name=""en-US"", split=""train[:100]"") Split the dataset's `train` split into a train and test set with the [`~Dataset.train_test_split`] method: >>> minds = minds.train_test_split(test_size=0.2) Then take a look at the dataset: >>> minds DatasetDict({ train: Dataset({ features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'], num_rows: 16 }) test: Dataset({ features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'], num_rows: 4 }) }) While the dataset contains a lot of useful information, like `lang_id` and `english_transcription`, you'll focus on the `audio` and `transcription` in this guide. 
Remove the other columns with the [`~datasets.Dataset.remove_columns`] method: >>> minds = minds.remove_columns([""english_transcription"", ""intent_class"", ""lang_id""]) Take a look at the example again: >>> minds[""train""][0] {'audio': {'array': array([-0.00024414, 0. , 0. , , 0.00024414, 0.00024414, 0.00024414], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav', 'sampling_rate': 8000}, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav', 'transcription': ""hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing""} There are two fields: - `audio`: a 1-dimensional `array` of the speech signal that must be called to load and resample the audio file. - `transcription`: the target text. ## Preprocess The next step is to load a Wav2Vec2 processor to process the audio signal: >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained(""facebook/wav2vec2-base"") The MInDS-14 dataset has a sampling rate of 8kHz (you can find this information in its [dataset card](https://huggingface.co/datasets/PolyAI/minds14)), which means you'll need to resample the dataset to 16kHz to use the pretrained Wav2Vec2 model: >>> minds = minds.cast_column(""audio"", Audio(sampling_rate=16_000)) >>> minds[""train""][0] {'audio': {'array': array([-2.38064706e-04, -1.58618059e-04, -5.43987835e-06, , 2.78103951e-04, 2.38446111e-04, 1.18740834e-04], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav', 'sampling_rate': 16000}, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav', 'transcription': ""hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing""} As you can see in the `transcription` above, the text contains a mix of upper and lowercase characters. The Wav2Vec2 tokenizer is only trained on uppercase characters, so you'll need to make sure the text matches the tokenizer's vocabulary: >>> def uppercase(example): return {""transcription"": example[""transcription""].upper()} >>> minds = minds.map(uppercase) Now create a preprocessing function that: 1. Calls the `audio` column to load and resample the audio file. 2. Extracts the `input_values` from the audio file and tokenizes the `transcription` column with the processor. >>> def prepare_dataset(batch): audio = batch[""audio""] batch = processor(audio[""array""], sampling_rate=audio[""sampling_rate""], text=batch[""transcription""]) batch[""input_length""] = len(batch[""input_values""][0]) return batch To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] function. You can speed up `map` by increasing the number of processes with the `num_proc` parameter.
Remove the columns you don't need with the [`~datasets.Dataset.remove_columns`] method: >>> encoded_minds = minds.map(prepare_dataset, remove_columns=minds.column_names[""train""], num_proc=4) 🤗 Transformers doesn't have a data collator for ASR, so you'll need to adapt the [`DataCollatorWithPadding`] to create a batch of examples. It'll also dynamically pad your text and labels to the length of the longest element in its batch (instead of the entire dataset) so they are a uniform length. While it is possible to pad your text in the `tokenizer` function by setting `padding=True`, dynamic padding is more efficient. Unlike other data collators, this specific data collator needs to apply a different padding method to `input_values` and `labels`: >>> import torch >>> from dataclasses import dataclass, field >>> from typing import Any, Dict, List, Optional, Union >>> @dataclass class DataCollatorCTCWithPadding: processor: AutoProcessor padding: Union[bool, str] = ""longest"" def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: # split inputs and labels since they have to be of different lengths and need # different padding methods input_features = [{""input_values"": feature[""input_values""][0]} for feature in features] label_features = [{""input_ids"": feature[""labels""]} for feature in features] batch = self.processor.pad(input_features, padding=self.padding, return_tensors=""pt"") labels_batch = self.processor.pad(labels=label_features, padding=self.padding, return_tensors=""pt"") # replace padding with -100 to ignore loss correctly labels = labels_batch[""input_ids""].masked_fill(labels_batch.attention_mask.ne(1), -100) batch[""labels""] = labels return batch Now instantiate your `DataCollatorCTCWithPadding`: >>> data_collator = DataCollatorCTCWithPadding(processor=processor, padding=""longest"") ## Evaluate Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [word error rate](https://huggingface.co/spaces/evaluate-metric/wer) (WER) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric): >>> import evaluate >>> wer = evaluate.load(""wer"") Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the WER (store the result in a new variable so the `wer` metric itself isn't overwritten inside the function): >>> import numpy as np >>> def compute_metrics(pred): pred_logits = pred.predictions pred_ids = np.argmax(pred_logits, axis=-1) pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id pred_str = processor.batch_decode(pred_ids) label_str = processor.batch_decode(pred.label_ids, group_tokens=False) wer_score = wer.compute(predictions=pred_str, references=label_str) return {""wer"": wer_score} Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training. ## Train If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)! You're ready to start training your model now! Load Wav2Vec2 with [`AutoModelForCTC`]. Specify the reduction to apply with the `ctc_loss_reduction` parameter.
It is often better to use the average instead of the default summation: >>> from transformers import AutoModelForCTC, TrainingArguments, Trainer >>> model = AutoModelForCTC.from_pretrained( ""facebook/wav2vec2-base"", ctc_loss_reduction=""mean"", pad_token_id=processor.tokenizer.pad_token_id, ) At this point, only three steps remain: 1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the WER and save the training checkpoint. 2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function. 3. Call [`~Trainer.train`] to finetune your model. >>> training_args = TrainingArguments( output_dir=""my_awesome_asr_mind_model"", per_device_train_batch_size=8, gradient_accumulation_steps=2, learning_rate=1e-5, warmup_steps=500, max_steps=2000, gradient_checkpointing=True, fp16=True, group_by_length=True, evaluation_strategy=""steps"", per_device_eval_batch_size=8, save_steps=1000, eval_steps=1000, logging_steps=25, load_best_model_at_end=True, metric_for_best_model=""wer"", greater_is_better=False, push_to_hub=True, ) >>> trainer = Trainer( model=model, args=training_args, train_dataset=encoded_minds[""train""], eval_dataset=encoded_minds[""test""], tokenizer=processor, data_collator=data_collator, compute_metrics=compute_metrics, ) >>> trainer.train() Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model: >>> trainer.push_to_hub() For a more in-depth example of how to finetune a model for automatic speech recognition, take a look at this blog [post](https://huggingface.co/blog/fine-tune-wav2vec2-english) for English ASR and this [post](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for multilingual ASR. ## Inference Great, now that you've finetuned a model, you can use it for inference! Load an audio file you'd like to run inference on. Remember to resample the sampling rate of the audio file to match the sampling rate of the model if you need to! >>> from datasets import load_dataset, Audio >>> dataset = load_dataset(""PolyAI/minds14"", ""en-US"", split=""train"") >>> dataset = dataset.cast_column(""audio"", Audio(sampling_rate=16000)) >>> sampling_rate = dataset.features[""audio""].sampling_rate >>> audio_file = dataset[0][""audio""][""path""] The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for automatic speech recognition with your model, and pass your audio file to it: >>> from transformers import pipeline >>> transcriber = pipeline(""automatic-speech-recognition"", model=""stevhliu/my_awesome_asr_minds_model"") >>> transcriber(audio_file) {'text': 'I WOUD LIKE O SET UP JOINT ACOUNT WTH Y PARTNER'} The transcription is decent, but it could be better! Try finetuning your model on more examples to get even better results! 
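If you'd rather have a rough number than eyeball the text, you can score the pipeline output against the reference transcription with the same WER metric used during training. This is only a single-example sketch; it reuses `transcriber`, `dataset`, and `audio_file` from the steps above, and uppercases the reference to match the model's vocabulary.

```python
# Rough single-example check; `transcriber`, `dataset`, and `audio_file` come from above.
import evaluate

wer_metric = evaluate.load("wer")

prediction = transcriber(audio_file)["text"]
reference = dataset[0]["transcription"].upper()  # the model was trained on uppercase text

print(wer_metric.compute(predictions=[prediction], references=[reference]))
```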
You can also manually replicate the results of the `pipeline` if you'd like: Load a processor to preprocess the audio file and transcription and return the `input` as PyTorch tensors: >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained(""stevhliu/my_awesome_asr_mind_model"") >>> inputs = processor(dataset[0][""audio""][""array""], sampling_rate=sampling_rate, return_tensors=""pt"") Pass your inputs to the model and return the logits: >>> from transformers import AutoModelForCTC >>> model = AutoModelForCTC.from_pretrained(""stevhliu/my_awesome_asr_mind_model"") >>> with torch.no_grad(): logits = model(**inputs).logits Get the predicted `input_ids` with the highest probability, and use the processor to decode the predicted `input_ids` back into text: >>> import torch >>> predicted_ids = torch.argmax(logits, dim=-1) >>> transcription = processor.batch_decode(predicted_ids) >>> transcription ['I WOUL LIKE O SET UP JOINT ACOUNT WTH Y PARTNER'] " tasks/token_classification.md," # Token classification [[open-in-colab]] Token classification assigns a label to individual tokens in a sentence. One of the most common token classification tasks is Named Entity Recognition (NER). NER attempts to find a label for each entity in a sentence, such as a person, location, or organization. This guide will show you how to: 1. Finetune [DistilBERT](https://huggingface.co/distilbert-base-uncased) on the [WNUT 17](https://huggingface.co/datasets/wnut_17) dataset to detect new entities. 2. Use your finetuned model for inference. The task illustrated in this tutorial is supported by the following model architectures: [ALBERT](../model_doc/albert), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [BioGpt](../model_doc/biogpt), [BLOOM](../model_doc/bloom), [BROS](../model_doc/bros), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [ESM](../model_doc/esm), [Falcon](../model_doc/falcon), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [GPT-Sw3](../model_doc/gpt-sw3), [OpenAI GPT-2](../model_doc/gpt2), [GPTBigCode](../model_doc/gpt_bigcode), [GPT Neo](../model_doc/gpt_neo), [GPT NeoX](../model_doc/gpt_neox), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3), [LiLT](../model_doc/lilt), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [MarkupLM](../model_doc/markuplm), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MPT](../model_doc/mpt), [MRA](../model_doc/mra), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [Phi](../model_doc/phi), [QDQBert](../model_doc/qdqbert), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), 
[X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso) Before you begin, make sure you have all the necessary libraries installed: ```bash pip install transformers datasets evaluate seqeval We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login: >>> from huggingface_hub import notebook_login >>> notebook_login() ## Load WNUT 17 dataset Start by loading the WNUT 17 dataset from the 🤗 Datasets library: >>> from datasets import load_dataset >>> wnut = load_dataset(""wnut_17"") Then take a look at an example: >>> wnut[""train""][0] {'id': '0', 'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0], 'tokens': ['@paulwalk', 'It', ""'s"", 'the', 'view', 'from', 'where', 'I', ""'m"", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.'] } Each number in `ner_tags` represents an entity. Convert the numbers to their label names to find out what the entities are: >>> label_list = wnut[""train""].features[f""ner_tags""].feature.names >>> label_list [ ""O"", ""B-corporation"", ""I-corporation"", ""B-creative-work"", ""I-creative-work"", ""B-group"", ""I-group"", ""B-location"", ""I-location"", ""B-person"", ""I-person"", ""B-product"", ""I-product"", ] The letter that prefixes each `ner_tag` indicates the token position of the entity: - `B-` indicates the beginning of an entity. - `I-` indicates a token is contained inside the same entity (for example, the `State` token is a part of an entity like `Empire State Building`). - `0` indicates the token doesn't correspond to any entity. ## Preprocess The next step is to load a DistilBERT tokenizer to preprocess the `tokens` field: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""distilbert-base-uncased"") As you saw in the example `tokens` field above, it looks like the input has already been tokenized. But the input actually hasn't been tokenized yet and you'll need to set `is_split_into_words=True` to tokenize the words into subwords. For example: >>> example = wnut[""train""][0] >>> tokenized_input = tokenizer(example[""tokens""], is_split_into_words=True) >>> tokens = tokenizer.convert_ids_to_tokens(tokenized_input[""input_ids""]) >>> tokens ['[CLS]', '@', 'paul', '##walk', 'it', ""'"", 's', 'the', 'view', 'from', 'where', 'i', ""'"", 'm', 'living', 'for', 'two', 'weeks', '.', 'empire', 'state', 'building', '=', 'es', '##b', '.', 'pretty', 'bad', 'storm', 'here', 'last', 'evening', '.', '[SEP]'] However, this adds some special tokens `[CLS]` and `[SEP]` and the subword tokenization creates a mismatch between the input and labels. A single word corresponding to a single label may now be split into two subwords. You'll need to realign the tokens and labels by: 1. Mapping all tokens to their corresponding word with the [`word_ids`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.BatchEncoding.word_ids) method. 2. Assigning the label `-100` to the special tokens `[CLS]` and `[SEP]` so they're ignored by the PyTorch loss function (see [CrossEntropyLoss](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html)). 3. Only labeling the first token of a given word. Assign `-100` to other subtokens from the same word. 
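Before writing the realignment function, it can help to look at what `word_ids()` returns for the tokenized example above. This is only an illustration; the exact indices depend on the tokenizer and input.

```python
# `word_ids()` maps each subword (and special token) back to the index of the word
# it came from; `tokenized_input` is the encoding created above.
word_ids = tokenized_input.word_ids()
print(word_ids[:8])
# Something like [None, 0, 0, 0, 1, 2, 2, 3]: None marks special tokens such as [CLS],
# and repeated indices mark subwords that belong to the same original word.
```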
Here is how you can create a function to realign the tokens and labels, and truncate sequences to be no longer than DistilBERT's maximum input length: >>> def tokenize_and_align_labels(examples): tokenized_inputs = tokenizer(examples[""tokens""], truncation=True, is_split_into_words=True) labels = [] for i, label in enumerate(examples[f""ner_tags""]): word_ids = tokenized_inputs.word_ids(batch_index=i) # Map tokens to their respective word. previous_word_idx = None label_ids = [] for word_idx in word_ids: # Set the special tokens to -100. if word_idx is None: label_ids.append(-100) elif word_idx != previous_word_idx: # Only label the first token of a given word. label_ids.append(label[word_idx]) else: label_ids.append(-100) previous_word_idx = word_idx labels.append(label_ids) tokenized_inputs[""labels""] = labels return tokenized_inputs To apply the preprocessing function over the entire dataset, use 🤗 Datasets [`~datasets.Dataset.map`] function. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once: >>> tokenized_wnut = wnut.map(tokenize_and_align_labels, batched=True) Now create a batch of examples using [`DataCollatorWithPadding`]. It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length. >>> from transformers import DataCollatorForTokenClassification >>> data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer) >>> from transformers import DataCollatorForTokenClassification >>> data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer, return_tensors=""tf"") ## Evaluate Including a metric during training is often helpful for evaluating your model's performance. You can quickly load a evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) framework (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric). Seqeval actually produces several scores: precision, recall, F1, and accuracy. >>> import evaluate >>> seqeval = evaluate.load(""seqeval"") Get the NER labels first, and then create a function that passes your true predictions and true labels to [`~evaluate.EvaluationModule.compute`] to calculate the scores: >>> import numpy as np >>> labels = [label_list[i] for i in example[f""ner_tags""]] >>> def compute_metrics(p): predictions, labels = p predictions = np.argmax(predictions, axis=2) true_predictions = [ [label_list[p] for (p, l) in zip(prediction, label) if l != -100] for prediction, label in zip(predictions, labels) ] true_labels = [ [label_list[l] for (p, l) in zip(prediction, label) if l != -100] for prediction, label in zip(predictions, labels) ] results = seqeval.compute(predictions=true_predictions, references=true_labels) return { ""precision"": results[""overall_precision""], ""recall"": results[""overall_recall""], ""f1"": results[""overall_f1""], ""accuracy"": results[""overall_accuracy""], } Your `compute_metrics` function is ready to go now, and you'll return to it when you setup your training. 
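If you're curious what `seqeval` returns before wiring it into training, you can call it on a toy prediction. This is only an illustrative sketch; the tags follow the same scheme as WNUT 17.

```python
# Toy call to the seqeval metric loaded above, just to show the keys used in compute_metrics.
toy_predictions = [["O", "B-location", "I-location", "O"]]
toy_references = [["O", "B-location", "I-location", "B-person"]]

results = seqeval.compute(predictions=toy_predictions, references=toy_references)
print({k: results[k] for k in ["overall_precision", "overall_recall", "overall_f1", "overall_accuracy"]})
```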
## Train Before you start training your model, create a map of the expected ids to their labels with `id2label` and `label2id`: >>> id2label = { 0: ""O"", 1: ""B-corporation"", 2: ""I-corporation"", 3: ""B-creative-work"", 4: ""I-creative-work"", 5: ""B-group"", 6: ""I-group"", 7: ""B-location"", 8: ""I-location"", 9: ""B-person"", 10: ""I-person"", 11: ""B-product"", 12: ""I-product"", } >>> label2id = { ""O"": 0, ""B-corporation"": 1, ""I-corporation"": 2, ""B-creative-work"": 3, ""I-creative-work"": 4, ""B-group"": 5, ""I-group"": 6, ""B-location"": 7, ""I-location"": 8, ""B-person"": 9, ""I-person"": 10, ""B-product"": 11, ""I-product"": 12, } If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)! You're ready to start training your model now! Load DistilBERT with [`AutoModelForTokenClassification`] along with the number of expected labels, and the label mappings: >>> from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer >>> model = AutoModelForTokenClassification.from_pretrained( ""distilbert-base-uncased"", num_labels=13, id2label=id2label, label2id=label2id ) At this point, only three steps remain: 1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the seqeval scores and save the training checkpoint. 2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function. 3. Call [`~Trainer.train`] to finetune your model. >>> training_args = TrainingArguments( output_dir=""my_awesome_wnut_model"", learning_rate=2e-5, per_device_train_batch_size=16, per_device_eval_batch_size=16, num_train_epochs=2, weight_decay=0.01, evaluation_strategy=""epoch"", save_strategy=""epoch"", load_best_model_at_end=True, push_to_hub=True, ) >>> trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_wnut[""train""], eval_dataset=tokenized_wnut[""test""], tokenizer=tokenizer, data_collator=data_collator, compute_metrics=compute_metrics, ) >>> trainer.train() Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model: >>> trainer.push_to_hub() If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)! 
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters: >>> from transformers import create_optimizer >>> batch_size = 16 >>> num_train_epochs = 3 >>> num_train_steps = (len(tokenized_wnut[""train""]) // batch_size) * num_train_epochs >>> optimizer, lr_schedule = create_optimizer( init_lr=2e-5, num_train_steps=num_train_steps, weight_decay_rate=0.01, num_warmup_steps=0, ) Then you can load DistilBERT with [`TFAutoModelForTokenClassification`] along with the number of expected labels, and the label mappings: >>> from transformers import TFAutoModelForTokenClassification >>> model = TFAutoModelForTokenClassification.from_pretrained( ""distilbert-base-uncased"", num_labels=13, id2label=id2label, label2id=label2id ) Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]: >>> tf_train_set = model.prepare_tf_dataset( tokenized_wnut[""train""], shuffle=True, batch_size=16, collate_fn=data_collator, ) >>> tf_validation_set = model.prepare_tf_dataset( tokenized_wnut[""validation""], shuffle=False, batch_size=16, collate_fn=data_collator, ) Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to: >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) # No loss argument! The last two things to setup before you start training is to compute the seqeval scores from the predictions, and provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks). Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]: >>> from transformers.keras_callbacks import KerasMetricCallback >>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set) Specify where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]: >>> from transformers.keras_callbacks import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( output_dir=""my_awesome_wnut_model"", tokenizer=tokenizer, ) Then bundle your callbacks together: >>> callbacks = [metric_callback, push_to_hub_callback] Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model: >>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=callbacks) Once training is completed, your model is automatically uploaded to the Hub so everyone can use it! For a more in-depth example of how to finetune a model for token classification, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb). ## Inference Great, now that you've finetuned a model, you can use it for inference! Grab some text you'd like to run inference on: >>> text = ""The Golden State Warriors are an American professional basketball team based in San Francisco."" The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. 
Instantiate a `pipeline` for NER with your model, and pass your text to it: >>> from transformers import pipeline >>> classifier = pipeline(""ner"", model=""stevhliu/my_awesome_wnut_model"") >>> classifier(text) [{'entity': 'B-location', 'score': 0.42658573, 'index': 2, 'word': 'golden', 'start': 4, 'end': 10}, {'entity': 'I-location', 'score': 0.35856336, 'index': 3, 'word': 'state', 'start': 11, 'end': 16}, {'entity': 'B-group', 'score': 0.3064001, 'index': 4, 'word': 'warriors', 'start': 17, 'end': 25}, {'entity': 'B-location', 'score': 0.65523505, 'index': 13, 'word': 'san', 'start': 80, 'end': 83}, {'entity': 'B-location', 'score': 0.4668663, 'index': 14, 'word': 'francisco', 'start': 84, 'end': 93}] You can also manually replicate the results of the `pipeline` if you'd like: Tokenize the text and return PyTorch tensors: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""stevhliu/my_awesome_wnut_model"") >>> inputs = tokenizer(text, return_tensors=""pt"") Pass your inputs to the model and return the `logits`: >>> from transformers import AutoModelForTokenClassification >>> model = AutoModelForTokenClassification.from_pretrained(""stevhliu/my_awesome_wnut_model"") >>> with torch.no_grad(): logits = model(**inputs).logits Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a text label: >>> predictions = torch.argmax(logits, dim=2) >>> predicted_token_class = [model.config.id2label[t.item()] for t in predictions[0]] >>> predicted_token_class ['O', 'O', 'B-location', 'I-location', 'B-group', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-location', 'B-location', 'O', 'O'] Tokenize the text and return TensorFlow tensors: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""stevhliu/my_awesome_wnut_model"") >>> inputs = tokenizer(text, return_tensors=""tf"") Pass your inputs to the model and return the `logits`: >>> from transformers import TFAutoModelForTokenClassification >>> model = TFAutoModelForTokenClassification.from_pretrained(""stevhliu/my_awesome_wnut_model"") >>> logits = model(**inputs).logits Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a text label: >>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1) >>> predicted_token_class = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()] >>> predicted_token_class ['O', 'O', 'B-location', 'I-location', 'B-group', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-location', 'B-location', 'O', 'O'] " tasks/image_to_image.md," # Image-to-Image Task Guide [[open-in-colab]] Image-to-Image task is the task where an application receives an image and outputs another image. This has various subtasks, including image enhancement (super resolution, low light enhancement, deraining and so on), image inpainting, and more. This guide will show you how to: - Use an image-to-image pipeline for super resolution task, - Run image-to-image models for same task without a pipeline. Note that as of the time this guide is released, `image-to-image` pipeline only supports super resolution task. Let's begin by installing the necessary libraries. ```bash pip install transformers We can now initialize the pipeline with a [Swin2SR model](https://huggingface.co/caidas/swin2SR-lightweight-x2-64). We can then infer with the pipeline by calling it with an image. 
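The device selection in the next snippet relies on PyTorch, which the pipeline import alone does not bring in, so make sure `torch` is available first. A minimal sketch of the prerequisite import and device pick:

```py
import torch

# Reused by the image-to-image pipeline initialization below; falls back to CPU when no GPU is present.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```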
As of now, only [Swin2SR models](https://huggingface.co/models?sort=trending&search=swin2sr) are supported in this pipeline. thon from transformers import pipeline device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') pipe = pipeline(task=""image-to-image"", model=""caidas/swin2SR-lightweight-x2-64"", device=device) Now, let's load an image. thon from PIL import Image import requests url = ""https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/cat.jpg"" image = Image.open(requests.get(url, stream=True).raw) print(image.size) ```bash # (532, 432) We can now do inference with the pipeline. We will get an upscaled version of the cat image. thon upscaled = pipe(image) print(upscaled.size) ```bash # (1072, 880) If you wish to do inference yourself with no pipeline, you can use the `Swin2SRForImageSuperResolution` and `Swin2SRImageProcessor` classes of transformers. We will use the same model checkpoint for this. Let's initialize the model and the processor. thon from transformers import Swin2SRForImageSuperResolution, Swin2SRImageProcessor model = Swin2SRForImageSuperResolution.from_pretrained(""caidas/swin2SR-lightweight-x2-64"").to(device) processor = Swin2SRImageProcessor(""caidas/swin2SR-lightweight-x2-64"") `pipeline` abstracts away the preprocessing and postprocessing steps that we have to do ourselves, so let's preprocess the image. We will pass the image to the processor and then move the pixel values to GPU. thon pixel_values = processor(image, return_tensors=""pt"").pixel_values print(pixel_values.shape) pixel_values = pixel_values.to(device) We can now infer the image by passing pixel values to the model. thon import torch with torch.no_grad(): outputs = model(pixel_values) Output is an object of type `ImageSuperResolutionOutput` that looks like below 👇 (loss=None, reconstruction=tensor([[[[0.8270, 0.8269, 0.8275, , 0.7463, 0.7446, 0.7453], [0.8287, 0.8278, 0.8283, , 0.7451, 0.7448, 0.7457], [0.8280, 0.8273, 0.8269, , 0.7447, 0.7446, 0.7452], , [0.5923, 0.5933, 0.5924, , 0.0697, 0.0695, 0.0706], [0.5926, 0.5932, 0.5926, , 0.0673, 0.0687, 0.0705], [0.5927, 0.5914, 0.5922, , 0.0664, 0.0694, 0.0718]]]], device='cuda:0'), hidden_states=None, attentions=None) We need to get the `reconstruction` and post-process it for visualization. Let's see how it looks like. thon outputs.reconstruction.data.shape # torch.Size([1, 3, 880, 1072]) We need to squeeze the output and get rid of axis 0, clip the values, then convert it to be numpy float. Then we will arrange axes to have the shape [1072, 880], and finally, bring the output back to range [0, 255]. thon import numpy as np # squeeze, take to CPU and clip the values output = outputs.reconstruction.data.squeeze().cpu().clamp_(0, 1).numpy() # rearrange the axes output = np.moveaxis(output, source=0, destination=-1) # bring values back to pixel values range output = (output * 255.0).round().astype(np.uint8) Image.fromarray(output) " tasks/text-to-speech.md," # Text to speech [[open-in-colab]] Text-to-speech (TTS) is the task of creating natural-sounding speech from text, where the speech can be generated in multiple languages and for multiple speakers. Several text-to-speech models are currently available in 🤗 Transformers, such as [Bark](../model_doc/bark), [MMS](../model_doc/mms), [VITS](../model_doc/vits) and [SpeechT5](../model_doc/speecht5). You can easily generate audio using the `""text-to-audio""` pipeline (or its alias - `""text-to-speech""`). 
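In its simplest form, a single pipeline call is all you need. A minimal sketch, assuming the `facebook/mms-tts-eng` checkpoint (any text-to-speech checkpoint on the Hub can be used the same way):

```py
from transformers import pipeline

# Assumed checkpoint: MMS English TTS; swap in any other text-to-speech model on the Hub.
synthesiser = pipeline("text-to-speech", model="facebook/mms-tts-eng")
speech = synthesiser("Hello, this is a test.")

# The pipeline returns a dict with the raw waveform and its sampling rate.
print(speech["sampling_rate"])  # the waveform itself is in speech["audio"]
```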
Some models, like Bark, can also be conditioned to generate non-verbal communications such as laughing, sighing and crying, or even add music. Here's an example of how you would use the `""text-to-speech""` pipeline with Bark: >>> from transformers import pipeline >>> pipe = pipeline(""text-to-speech"", model=""suno/bark-small"") >>> text = ""[clears throat] This is a test and I just took a long pause."" >>> output = pipe(text) Here's a code snippet you can use to listen to the resulting audio in a notebook: thon >>> from IPython.display import Audio >>> Audio(output[""audio""], rate=output[""sampling_rate""]) For more examples on what Bark and other pretrained TTS models can do, refer to our [Audio course](https://huggingface.co/learn/audio-course/chapter6/pre-trained_models). If you are looking to fine-tune a TTS model, you can currently fine-tune SpeechT5 only. SpeechT5 is pre-trained on a combination of speech-to-text and text-to-speech data, allowing it to learn a unified space of hidden representations shared by both text and speech. This means that the same pre-trained model can be fine-tuned for different tasks. Furthermore, SpeechT5 supports multiple speakers through x-vector speaker embeddings. The remainder of this guide illustrates how to: 1. Fine-tune [SpeechT5](../model_doc/speecht5) that was originally trained on English speech on the Dutch (`nl`) language subset of the [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) dataset. 2. Use your refined model for inference in one of two ways: using a pipeline or directly. Before you begin, make sure you have all the necessary libraries installed: ```bash pip install datasets soundfile speechbrain accelerate Install 🤗Transformers from source as not all the SpeechT5 features have been merged into an official release yet: ```bash pip install git+https://github.com/huggingface/transformers.git To follow this guide you will need a GPU. If you're working in a notebook, run the following line to check if a GPU is available: ```bash !nvidia-smi We encourage you to log in to your Hugging Face account to upload and share your model with the community. When prompted, enter your token to log in: >>> from huggingface_hub import notebook_login >>> notebook_login() ## Load the dataset [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) is a large-scale multilingual speech corpus consisting of data sourced from 2009-2020 European Parliament event recordings. It contains labelled audio-transcription data for 15 European languages. In this guide, we are using the Dutch language subset, feel free to pick another subset. Note that VoxPopuli or any other automated speech recognition (ASR) dataset may not be the most suitable option for training TTS models. The features that make it beneficial for ASR, such as excessive background noise, are typically undesirable in TTS. However, finding top-quality, multilingual, and multi-speaker TTS datasets can be quite challenging. Let's load the data: >>> from datasets import load_dataset, Audio >>> dataset = load_dataset(""facebook/voxpopuli"", ""nl"", split=""train"") >>> len(dataset) 20968 20968 examples should be sufficient for fine-tuning. 
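It can help to look at a single example before preprocessing. This is only a sketch; it prints the fields this guide relies on later (`audio`, `normalized_text` and `speaker_id`), and the exact set of columns may vary with the dataset version:

```py
# Inspect one example to see the fields used later in this guide.
example = dataset[0]
print(example["normalized_text"])         # transcription with numbers written out as words
print(example["speaker_id"])              # used later to balance the data and embed speakers
print(example["audio"]["sampling_rate"])  # the next step ensures this is 16 kHz
```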
SpeechT5 expects audio data to have a sampling rate of 16 kHz, so make sure the examples in the dataset meet this requirement: dataset = dataset.cast_column(""audio"", Audio(sampling_rate=16000)) ## Preprocess the data Let's begin by defining the model checkpoint to use and loading the appropriate processor: >>> from transformers import SpeechT5Processor >>> checkpoint = ""microsoft/speecht5_tts"" >>> processor = SpeechT5Processor.from_pretrained(checkpoint) ### Text cleanup for SpeechT5 tokenization Start by cleaning up the text data. You'll need the tokenizer part of the processor to process the text: >>> tokenizer = processor.tokenizer The dataset examples contain `raw_text` and `normalized_text` features. When deciding which feature to use as the text input, consider that the SpeechT5 tokenizer doesn't have any tokens for numbers. In `normalized_text` the numbers are written out as text. Thus, it is a better fit, and we recommend using `normalized_text` as input text. Because SpeechT5 was trained on the English language, it may not recognize certain characters in the Dutch dataset. If left as is, these characters will be converted to `` tokens. However, in Dutch, certain characters like `à` are used to stress syllables. In order to preserve the meaning of the text, we can replace this character with a regular `a`. To identify unsupported tokens, extract all unique characters in the dataset using the `SpeechT5Tokenizer` which works with characters as tokens. To do this, write the `extract_all_chars` mapping function that concatenates the transcriptions from all examples into one string and converts it to a set of characters. Make sure to set `batched=True` and `batch_size=-1` in `dataset.map()` so that all transcriptions are available at once for the mapping function. >>> def extract_all_chars(batch): all_text = "" "".join(batch[""normalized_text""]) vocab = list(set(all_text)) return {""vocab"": [vocab], ""all_text"": [all_text]} >>> vocabs = dataset.map( extract_all_chars, batched=True, batch_size=-1, keep_in_memory=True, remove_columns=dataset.column_names, ) >>> dataset_vocab = set(vocabs[""vocab""][0]) >>> tokenizer_vocab = {k for k, _ in tokenizer.get_vocab().items()} Now you have two sets of characters: one with the vocabulary from the dataset and one with the vocabulary from the tokenizer. To identify any unsupported characters in the dataset, you can take the difference between these two sets. The resulting set will contain the characters that are in the dataset but not in the tokenizer. >>> dataset_vocab - tokenizer_vocab {' ', 'à', 'ç', 'è', 'ë', 'í', 'ï', 'ö', 'ü'} To handle the unsupported characters identified in the previous step, define a function that maps these characters to valid tokens. Note that spaces are already replaced by `▁` in the tokenizer and don't need to be handled separately. >>> replacements = [ (""à"", ""a""), (""ç"", ""c""), (""è"", ""e""), (""ë"", ""e""), (""í"", ""i""), (""ï"", ""i""), (""ö"", ""o""), (""ü"", ""u""), ] >>> def cleanup_text(inputs): for src, dst in replacements: inputs[""normalized_text""] = inputs[""normalized_text""].replace(src, dst) return inputs >>> dataset = dataset.map(cleanup_text) Now that you have dealt with special characters in the text, it's time to shift focus to the audio data. ### Speakers The VoxPopuli dataset includes speech from multiple speakers, but how many speakers are represented in the dataset? 
To determine this, we can count the number of unique speakers and the number of examples each speaker contributes to the dataset. With a total of 20,968 examples in the dataset, this information will give us a better understanding of the distribution of speakers and examples in the data. >>> from collections import defaultdict >>> speaker_counts = defaultdict(int) >>> for speaker_id in dataset[""speaker_id""]: speaker_counts[speaker_id] += 1 By plotting a histogram you can get a sense of how much data there is for each speaker. >>> import matplotlib.pyplot as plt >>> plt.figure() >>> plt.hist(speaker_counts.values(), bins=20) >>> plt.ylabel(""Speakers"") >>> plt.xlabel(""Examples"") >>> plt.show() The histogram reveals that approximately one-third of the speakers in the dataset have fewer than 100 examples, while around ten speakers have more than 500 examples. To improve training efficiency and balance the dataset, we can limit the data to speakers with between 100 and 400 examples. >>> def select_speaker(speaker_id): return 100 <= speaker_counts[speaker_id] <= 400 >>> dataset = dataset.filter(select_speaker, input_columns=[""speaker_id""]) Let's check how many speakers remain: >>> len(set(dataset[""speaker_id""])) 42 Let's see how many examples are left: >>> len(dataset) 9973 You are left with just under 10,000 examples from approximately 40 unique speakers, which should be sufficient. Note that some speakers with few examples may actually have more audio available if the examples are long. However, determining the total amount of audio for each speaker requires scanning through the entire dataset, which is a time-consuming process that involves loading and decoding each audio file. As such, we have chosen to skip this step here. ### Speaker embeddings To enable the TTS model to differentiate between multiple speakers, you'll need to create a speaker embedding for each example. The speaker embedding is an additional input into the model that captures a particular speaker's voice characteristics. To generate these speaker embeddings, use the pre-trained [spkrec-xvect-voxceleb](https://huggingface.co/speechbrain/spkrec-xvect-voxceleb) model from SpeechBrain. Create a function `create_speaker_embedding()` that takes an input audio waveform and outputs a 512-element vector containing the corresponding speaker embedding. >>> import os >>> import torch >>> from speechbrain.pretrained import EncoderClassifier >>> spk_model_name = ""speechbrain/spkrec-xvect-voxceleb"" >>> device = ""cuda"" if torch.cuda.is_available() else ""cpu"" >>> speaker_model = EncoderClassifier.from_hparams( source=spk_model_name, run_opts={""device"": device}, savedir=os.path.join(""/tmp"", spk_model_name), ) >>> def create_speaker_embedding(waveform): with torch.no_grad(): speaker_embeddings = speaker_model.encode_batch(torch.tensor(waveform)) speaker_embeddings = torch.nn.functional.normalize(speaker_embeddings, dim=2) speaker_embeddings = speaker_embeddings.squeeze().cpu().numpy() return speaker_embeddings It's important to note that the `speechbrain/spkrec-xvect-voxceleb` model was trained on English speech from the VoxCeleb dataset, whereas the training examples in this guide are in Dutch. While we believe that this model will still generate reasonable speaker embeddings for our Dutch dataset, this assumption may not hold true in all cases. For optimal results, we recommend training an X-vector model on the target speech first. 
This will ensure that the model is better able to capture the unique voice characteristics present in the Dutch language. ### Processing the dataset Finally, let's process the data into the format the model expects. Create a `prepare_dataset` function that takes in a single example and uses the `SpeechT5Processor` object to tokenize the input text and load the target audio into a log-mel spectrogram. It should also add the speaker embeddings as an additional input. >>> def prepare_dataset(example): audio = example[""audio""] example = processor( text=example[""normalized_text""], audio_target=audio[""array""], sampling_rate=audio[""sampling_rate""], return_attention_mask=False, ) # strip off the batch dimension example[""labels""] = example[""labels""][0] # use SpeechBrain to obtain x-vector example[""speaker_embeddings""] = create_speaker_embedding(audio[""array""]) return example Verify the processing is correct by looking at a single example: >>> processed_example = prepare_dataset(dataset[0]) >>> list(processed_example.keys()) ['input_ids', 'labels', 'stop_labels', 'speaker_embeddings'] Speaker embeddings should be a 512-element vector: >>> processed_example[""speaker_embeddings""].shape (512,) The labels should be a log-mel spectrogram with 80 mel bins. >>> import matplotlib.pyplot as plt >>> plt.figure() >>> plt.imshow(processed_example[""labels""].T) >>> plt.show() Side note: If you find this spectrogram confusing, it may be due to your familiarity with the convention of placing low frequencies at the bottom and high frequencies at the top of a plot. However, when plotting spectrograms as an image using the matplotlib library, the y-axis is flipped and the spectrograms appear upside down. Now apply the processing function to the entire dataset. This will take between 5 and 10 minutes. >>> dataset = dataset.map(prepare_dataset, remove_columns=dataset.column_names) You'll see a warning saying that some examples in the dataset are longer than the maximum input length the model can handle (600 tokens). Remove those examples from the dataset. Here we go even further and to allow for larger batch sizes we remove anything over 200 tokens. >>> def is_not_too_long(input_ids): input_length = len(input_ids) return input_length < 200 >>> dataset = dataset.filter(is_not_too_long, input_columns=[""input_ids""]) >>> len(dataset) 8259 Next, create a basic train/test split: >>> dataset = dataset.train_test_split(test_size=0.1) ### Data collator In order to combine multiple examples into a batch, you need to define a custom data collator. This collator will pad shorter sequences with padding tokens, ensuring that all examples have the same length. For the spectrogram labels, the padded portions are replaced with the special value `-100`. This special value instructs the model to ignore that part of the spectrogram when calculating the spectrogram loss. 
>>> from dataclasses import dataclass >>> from typing import Any, Dict, List, Union >>> @dataclass class TTSDataCollatorWithPadding: processor: Any def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: input_ids = [{""input_ids"": feature[""input_ids""]} for feature in features] label_features = [{""input_values"": feature[""labels""]} for feature in features] speaker_features = [feature[""speaker_embeddings""] for feature in features] # collate the inputs and targets into a batch batch = processor.pad(input_ids=input_ids, labels=label_features, return_tensors=""pt"") # replace padding with -100 to ignore loss correctly batch[""labels""] = batch[""labels""].masked_fill(batch.decoder_attention_mask.unsqueeze(-1).ne(1), -100) # not used during fine-tuning del batch[""decoder_attention_mask""] # round down target lengths to multiple of reduction factor if model.config.reduction_factor > 1: target_lengths = torch.tensor([len(feature[""input_values""]) for feature in label_features]) target_lengths = target_lengths.new( [length - length % model.config.reduction_factor for length in target_lengths] ) max_length = max(target_lengths) batch[""labels""] = batch[""labels""][:, :max_length] # also add in the speaker embeddings batch[""speaker_embeddings""] = torch.tensor(speaker_features) return batch In SpeechT5, the input to the decoder part of the model is reduced by a factor 2. In other words, it throws away every other timestep from the target sequence. The decoder then predicts a sequence that is twice as long. Since the original target sequence length may be odd, the data collator makes sure to round the maximum length of the batch down to be a multiple of 2. >>> data_collator = TTSDataCollatorWithPadding(processor=processor) ## Train the model Load the pre-trained model from the same checkpoint as you used for loading the processor: >>> from transformers import SpeechT5ForTextToSpeech >>> model = SpeechT5ForTextToSpeech.from_pretrained(checkpoint) The `use_cache=True` option is incompatible with gradient checkpointing. Disable it for training. >>> model.config.use_cache = False Define the training arguments. Here we are not computing any evaluation metrics during the training process. Instead, we'll only look at the loss: thon >>> from transformers import Seq2SeqTrainingArguments >>> training_args = Seq2SeqTrainingArguments( output_dir=""speecht5_finetuned_voxpopuli_nl"", # change to a repo name of your choice per_device_train_batch_size=4, gradient_accumulation_steps=8, learning_rate=1e-5, warmup_steps=500, max_steps=4000, gradient_checkpointing=True, fp16=True, evaluation_strategy=""steps"", per_device_eval_batch_size=2, save_steps=1000, eval_steps=1000, logging_steps=25, report_to=[""tensorboard""], load_best_model_at_end=True, greater_is_better=False, label_names=[""labels""], push_to_hub=True, ) Instantiate the `Trainer` object and pass the model, dataset, and data collator to it. >>> from transformers import Seq2SeqTrainer >>> trainer = Seq2SeqTrainer( args=training_args, model=model, train_dataset=dataset[""train""], eval_dataset=dataset[""test""], data_collator=data_collator, tokenizer=processor, ) And with that, you're ready to start training! Training will take several hours. Depending on your GPU, it is possible that you will encounter a CUDA ""out-of-memory"" error when you start training. 
In this case, you can reduce the `per_device_train_batch_size` incrementally by factors of 2 and increase `gradient_accumulation_steps` by 2x to compensate. >>> trainer.train() To be able to use your checkpoint with a pipeline, make sure to save the processor with the checkpoint: >>> processor.save_pretrained(""YOUR_ACCOUNT_NAME/speecht5_finetuned_voxpopuli_nl"") Push the final model to the 🤗 Hub: >>> trainer.push_to_hub() ## Inference ### Inference with a pipeline Great, now that you've fine-tuned a model, you can use it for inference! First, let's see how you can use it with a corresponding pipeline. Let's create a `""text-to-speech""` pipeline with your checkpoint: >>> from transformers import pipeline >>> pipe = pipeline(""text-to-speech"", model=""YOUR_ACCOUNT_NAME/speecht5_finetuned_voxpopuli_nl"") Pick a piece of text in Dutch you'd like narrated, e.g.: >>> text = ""hallo allemaal, ik praat nederlands. groetjes aan iedereen!"" To use SpeechT5 with the pipeline, you'll need a speaker embedding. Let's get it from an example in the test dataset: >>> example = dataset[""test""][304] >>> speaker_embeddings = torch.tensor(example[""speaker_embeddings""]).unsqueeze(0) Now you can pass the text and speaker embeddings to the pipeline, and it will take care of the rest: >>> forward_params = {""speaker_embeddings"": speaker_embeddings} >>> output = pipe(text, forward_params=forward_params) >>> output {'audio': array([-6.82714235e-05, -4.26525949e-04, 1.06134125e-04, , -1.22392643e-03, -7.76011671e-04, 3.29112721e-04], dtype=float32), 'sampling_rate': 16000} You can then listen to the result: >>> from IPython.display import Audio >>> Audio(output['audio'], rate=output['sampling_rate']) ### Run inference manually You can achieve the same inference results without using the pipeline, however, more steps will be required. Load the model from the 🤗 Hub: >>> model = SpeechT5ForTextToSpeech.from_pretrained(""YOUR_ACCOUNT/speecht5_finetuned_voxpopuli_nl"") Pick an example from the test dataset obtain a speaker embedding. >>> example = dataset[""test""][304] >>> speaker_embeddings = torch.tensor(example[""speaker_embeddings""]).unsqueeze(0) Define the input text and tokenize it. >>> text = ""hallo allemaal, ik praat nederlands. groetjes aan iedereen!"" >>> inputs = processor(text=text, return_tensors=""pt"") Create a spectrogram with your model: >>> spectrogram = model.generate_speech(inputs[""input_ids""], speaker_embeddings) Visualize the spectrogram, if you'd like to: >>> plt.figure() >>> plt.imshow(spectrogram.T) >>> plt.show() Finally, use the vocoder to turn the spectrogram into sound. >>> with torch.no_grad(): speech = vocoder(spectrogram) >>> from IPython.display import Audio >>> Audio(speech.numpy(), rate=16000) In our experience, obtaining satisfactory results from this model can be challenging. The quality of the speaker embeddings appears to be a significant factor. Since SpeechT5 was pre-trained with English x-vectors, it performs best when using English speaker embeddings. If the synthesized speech sounds poor, try using a different speaker embedding. Increasing the training duration is also likely to enhance the quality of the results. Even so, the speech clearly is Dutch instead of English, and it does capture the voice characteristics of the speaker (compare to the original audio in the example). Another thing to experiment with is the model's configuration. For example, try using `config.reduction_factor = 1` to see if this improves the results. 
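Note that the manual inference path above relies on a `vocoder` object to turn the spectrogram into a waveform, which is not instantiated in the snippets shown here. A minimal sketch, assuming the HiFi-GAN vocoder released alongside SpeechT5 on the Hub:

```py
import torch
from transformers import SpeechT5HifiGan

# Assumed checkpoint: the HiFi-GAN vocoder trained to pair with SpeechT5.
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

with torch.no_grad():
    # spectrogram comes from model.generate_speech(...) above
    speech = vocoder(spectrogram)
```

Alternatively, `generate_speech` accepts the vocoder directly, in which case it returns a waveform instead of a spectrogram.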
Finally, it is essential to consider ethical considerations. Although TTS technology has numerous useful applications, it may also be used for malicious purposes, such as impersonating someone's voice without their knowledge or consent. Please use TTS judiciously and responsibly." tasks/question_answering.md," # Question answering [[open-in-colab]] Question answering tasks return an answer given a question. If you've ever asked a virtual assistant like Alexa, Siri or Google what the weather is, then you've used a question answering model before. There are two common types of question answering tasks: - Extractive: extract the answer from the given context. - Abstractive: generate an answer from the context that correctly answers the question. This guide will show you how to: 1. Finetune [DistilBERT](https://huggingface.co/distilbert-base-uncased) on the [SQuAD](https://huggingface.co/datasets/squad) dataset for extractive question answering. 2. Use your finetuned model for inference. The task illustrated in this tutorial is supported by the following model architectures: [ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [Falcon](../model_doc/falcon), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [OpenAI GPT-2](../model_doc/gpt2), [GPT Neo](../model_doc/gpt_neo), [GPT NeoX](../model_doc/gpt_neox), [GPT-J](../model_doc/gptj), [I-BERT](../model_doc/ibert), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3), [LED](../model_doc/led), [LiLT](../model_doc/lilt), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [LXMERT](../model_doc/lxmert), [MarkupLM](../model_doc/markuplm), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MPT](../model_doc/mpt), [MRA](../model_doc/mra), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [OPT](../model_doc/opt), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [Splinter](../model_doc/splinter), [SqueezeBERT](../model_doc/squeezebert), [T5](../model_doc/t5), [UMT5](../model_doc/umt5), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso) Before you begin, make sure you have all the necessary libraries installed: ```bash pip install transformers datasets evaluate We encourage you to login to your Hugging Face account so you can upload and share your model with the community. 
When prompted, enter your token to login: >>> from huggingface_hub import notebook_login >>> notebook_login() ## Load SQuAD dataset Start by loading a smaller subset of the SQuAD dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset. >>> from datasets import load_dataset >>> squad = load_dataset(""squad"", split=""train[:5000]"") Split the dataset's `train` split into a train and test set with the [`~datasets.Dataset.train_test_split`] method: >>> squad = squad.train_test_split(test_size=0.2) Then take a look at an example: >>> squad[""train""][0] {'answers': {'answer_start': [515], 'text': ['Saint Bernadette Soubirous']}, 'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend ""Venite Ad Me Omnes"". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.', 'id': '5733be284776f41900661182', 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?', 'title': 'University_of_Notre_Dame' } There are several important fields here: - `answers`: the starting location of the answer token and the answer text. - `context`: background information from which the model needs to extract the answer. - `question`: the question a model should answer. ## Preprocess The next step is to load a DistilBERT tokenizer to process the `question` and `context` fields: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""distilbert-base-uncased"") There are a few preprocessing steps particular to question answering tasks you should be aware of: 1. Some examples in a dataset may have a very long `context` that exceeds the maximum input length of the model. To deal with longer sequences, truncate only the `context` by setting `truncation=""only_second""`. 2. Next, map the start and end positions of the answer to the original `context` by setting `return_offset_mapping=True`. 3. With the mapping in hand, now you can find the start and end tokens of the answer. Use the [`~tokenizers.Encoding.sequence_ids`] method to find which part of the offset corresponds to the `question` and which corresponds to the `context`. 
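To make the last point concrete, here is a small sketch of what `sequence_ids` returns for a question/context pair (the question and context strings are made up for illustration, and the exact token count depends on the tokenizer):

```py
# sequence_ids marks each token as coming from the question (0), the context (1),
# or a special token (None).
encoding = tokenizer("Where is the Eiffel Tower?", "The Eiffel Tower is in Paris.")
print(encoding.sequence_ids(0))
# Something like: [None, 0, 0, 0, 0, 0, 0, None, 1, 1, 1, 1, 1, 1, 1, None]
```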
Here is how you can create a function to truncate and map the start and end tokens of the `answer` to the `context`: >>> def preprocess_function(examples): questions = [q.strip() for q in examples[""question""]] inputs = tokenizer( questions, examples[""context""], max_length=384, truncation=""only_second"", return_offsets_mapping=True, padding=""max_length"", ) offset_mapping = inputs.pop(""offset_mapping"") answers = examples[""answers""] start_positions = [] end_positions = [] for i, offset in enumerate(offset_mapping): answer = answers[i] start_char = answer[""answer_start""][0] end_char = answer[""answer_start""][0] + len(answer[""text""][0]) sequence_ids = inputs.sequence_ids(i) # Find the start and end of the context idx = 0 while sequence_ids[idx] != 1: idx += 1 context_start = idx while sequence_ids[idx] == 1: idx += 1 context_end = idx - 1 # If the answer is not fully inside the context, label it (0, 0) if offset[context_start][0] > end_char or offset[context_end][1] < start_char: start_positions.append(0) end_positions.append(0) else: # Otherwise it's the start and end token positions idx = context_start while idx <= context_end and offset[idx][0] <= start_char: idx += 1 start_positions.append(idx - 1) idx = context_end while idx >= context_start and offset[idx][1] >= end_char: idx -= 1 end_positions.append(idx + 1) inputs[""start_positions""] = start_positions inputs[""end_positions""] = end_positions return inputs To apply the preprocessing function over the entire dataset, use 🤗 Datasets [`~datasets.Dataset.map`] function. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once. Remove any columns you don't need: >>> tokenized_squad = squad.map(preprocess_function, batched=True, remove_columns=squad[""train""].column_names) Now create a batch of examples using [`DefaultDataCollator`]. Unlike other data collators in 🤗 Transformers, the [`DefaultDataCollator`] does not apply any additional preprocessing such as padding. >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator() >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator(return_tensors=""tf"") ## Train If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)! You're ready to start training your model now! Load DistilBERT with [`AutoModelForQuestionAnswering`]: >>> from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer >>> model = AutoModelForQuestionAnswering.from_pretrained(""distilbert-base-uncased"") At this point, only three steps remain: 1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). 2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, and data collator. 3. Call [`~Trainer.train`] to finetune your model. 
>>> training_args = TrainingArguments( output_dir=""my_awesome_qa_model"", evaluation_strategy=""epoch"", learning_rate=2e-5, per_device_train_batch_size=16, per_device_eval_batch_size=16, num_train_epochs=3, weight_decay=0.01, push_to_hub=True, ) >>> trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_squad[""train""], eval_dataset=tokenized_squad[""test""], tokenizer=tokenizer, data_collator=data_collator, ) >>> trainer.train() Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model: >>> trainer.push_to_hub() If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)! To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters: >>> from transformers import create_optimizer >>> batch_size = 16 >>> num_epochs = 2 >>> total_train_steps = (len(tokenized_squad[""train""]) // batch_size) * num_epochs >>> optimizer, schedule = create_optimizer( init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps, ) Then you can load DistilBERT with [`TFAutoModelForQuestionAnswering`]: >>> from transformers import TFAutoModelForQuestionAnswering >>> model = TFAutoModelForQuestionAnswering.from_pretrained(""distilbert-base-uncased"") Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]: >>> tf_train_set = model.prepare_tf_dataset( tokenized_squad[""train""], shuffle=True, batch_size=16, collate_fn=data_collator, ) >>> tf_validation_set = model.prepare_tf_dataset( tokenized_squad[""test""], shuffle=False, batch_size=16, collate_fn=data_collator, ) Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method): >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) The last thing to set up before you start training is to provide a way to push your model to the Hub. This can be done by specifying where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]: >>> from transformers.keras_callbacks import PushToHubCallback >>> callback = PushToHubCallback( output_dir=""my_awesome_qa_model"", tokenizer=tokenizer, ) Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callback to finetune the model: >>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=[callback]) Once training is completed, your model is automatically uploaded to the Hub so everyone can use it! For a more in-depth example of how to finetune a model for question answering, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb). ## Evaluate Evaluation for question answering requires a significant amount of postprocessing. To avoid taking up too much of your time, this guide skips the evaluation step. The [`Trainer`] still calculates the evaluation loss during training so you're not completely in the dark about your model's performance. 
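If you only want a rough idea of what full evaluation involves, the [SQuAD metric](https://huggingface.co/spaces/evaluate-metric/squad) in 🤗 Evaluate shows the shape the post-processed predictions need to take. A minimal sketch, reusing the example you saw when loading the dataset:

```py
import evaluate

squad_metric = evaluate.load("squad")

# The metric works on text answers, which is why the model's start/end logits
# first need to be post-processed back into answer strings.
predictions = [{"id": "5733be284776f41900661182", "prediction_text": "Saint Bernadette Soubirous"}]
references = [
    {
        "id": "5733be284776f41900661182",
        "answers": {"text": ["Saint Bernadette Soubirous"], "answer_start": [515]},
    }
]
print(squad_metric.compute(predictions=predictions, references=references))
# A perfect match should report an exact_match and f1 of 100.0.
```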
If you have more time and you're interested in how to evaluate your model for question answering, take a look at the [Question answering](https://huggingface.co/course/chapter7/7?fw=pt#postprocessing) chapter from the 🤗 Hugging Face Course! ## Inference Great, now that you've finetuned a model, you can use it for inference! Come up with a question and some context you'd like the model to predict an answer from: >>> question = ""How many programming languages does BLOOM support?"" >>> context = ""BLOOM has 176 billion parameters and can generate text in 46 languages natural languages and 13 programming languages."" The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for question answering with your model, and pass your text to it: >>> from transformers import pipeline >>> question_answerer = pipeline(""question-answering"", model=""my_awesome_qa_model"") >>> question_answerer(question=question, context=context) {'score': 0.2058267742395401, 'start': 10, 'end': 95, 'answer': '176 billion parameters and can generate text in 46 languages natural languages and 13'} You can also manually replicate the results of the `pipeline` if you'd like: Tokenize the text and return PyTorch tensors: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""my_awesome_qa_model"") >>> inputs = tokenizer(question, context, return_tensors=""pt"") Pass your inputs to the model and return the `logits`: >>> import torch >>> from transformers import AutoModelForQuestionAnswering >>> model = AutoModelForQuestionAnswering.from_pretrained(""my_awesome_qa_model"") >>> with torch.no_grad(): outputs = model(**inputs) Get the highest probability from the model output for the start and end positions: >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() Decode the predicted tokens to get the answer: >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> tokenizer.decode(predict_answer_tokens) '176 billion parameters and can generate text in 46 languages natural languages and 13' Tokenize the text and return TensorFlow tensors: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""my_awesome_qa_model"") >>> inputs = tokenizer(question, context, return_tensors=""tf"") Pass your inputs to the model and return the `logits`: >>> from transformers import TFAutoModelForQuestionAnswering >>> model = TFAutoModelForQuestionAnswering.from_pretrained(""my_awesome_qa_model"") >>> outputs = model(**inputs) Get the highest probability from the model output for the start and end positions: >>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0]) >>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0]) Decode the predicted tokens to get the answer: >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> tokenizer.decode(predict_answer_tokens) '176 billion parameters and can generate text in 46 languages natural languages and 13' " tasks/multiple_choice.md," # Multiple choice [[open-in-colab]] A multiple choice task is similar to question answering, except several candidate answers are provided along with a context and the model is trained to select the correct answer. This guide will show you how to: 1. 
Finetune [BERT](https://huggingface.co/bert-base-uncased) on the `regular` configuration of the [SWAG](https://huggingface.co/datasets/swag) dataset to select the best answer given multiple options and some context. 2. Use your finetuned model for inference. The task illustrated in this tutorial is supported by the following model architectures: [ALBERT](../model_doc/albert), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [I-BERT](../model_doc/ibert), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MRA](../model_doc/mra), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [QDQBert](../model_doc/qdqbert), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso) Before you begin, make sure you have all the necessary libraries installed: ```bash pip install transformers datasets evaluate We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login: >>> from huggingface_hub import notebook_login >>> notebook_login() ## Load SWAG dataset Start by loading the `regular` configuration of the SWAG dataset from the 🤗 Datasets library: >>> from datasets import load_dataset >>> swag = load_dataset(""swag"", ""regular"") Then take a look at an example: >>> swag[""train""][0] {'ending0': 'passes by walking down the street playing their instruments.', 'ending1': 'has heard approaching them.', 'ending2': ""arrives and they're outside dancing and asleep."", 'ending3': 'turns the lead singer watches the performance.', 'fold-ind': '3416', 'gold-source': 'gold', 'label': 0, 'sent1': 'Members of the procession walk down the street holding small horn brass instruments.', 'sent2': 'A drum line', 'startphrase': 'Members of the procession walk down the street holding small horn brass instruments. A drum line', 'video-id': 'anetv_jkn6uvmqwh4'} While it looks like there are a lot of fields here, it is actually pretty straightforward: - `sent1` and `sent2`: these fields show how a sentence starts, and if you put the two together, you get the `startphrase` field. - `ending`: suggests a possible ending for how a sentence can end, but only one of them is correct. - `label`: identifies the correct sentence ending. ## Preprocess The next step is to load a BERT tokenizer to process the sentence starts and the four possible endings: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""bert-base-uncased"") The preprocessing function you want to create needs to: 1. 
Make four copies of the `sent1` field and combine each of them with `sent2` to recreate how a sentence starts. 2. Combine `sent2` with each of the four possible sentence endings. 3. Flatten these two lists so you can tokenize them, and then unflatten them afterward so each example has a corresponding `input_ids`, `attention_mask`, and `labels` field. >>> ending_names = [""ending0"", ""ending1"", ""ending2"", ""ending3""] >>> def preprocess_function(examples): first_sentences = [[context] * 4 for context in examples[""sent1""]] question_headers = examples[""sent2""] second_sentences = [ [f""{header} {examples[end][i]}"" for end in ending_names] for i, header in enumerate(question_headers) ] first_sentences = sum(first_sentences, []) second_sentences = sum(second_sentences, []) tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True) return {k: [v[i : i + 4] for i in range(0, len(v), 4)] for k, v in tokenized_examples.items()} To apply the preprocessing function over the entire dataset, use 🤗 Datasets [`~datasets.Dataset.map`] method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once: tokenized_swag = swag.map(preprocess_function, batched=True) 🤗 Transformers doesn't have a data collator for multiple choice, so you'll need to adapt the [`DataCollatorWithPadding`] to create a batch of examples. It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length. `DataCollatorForMultipleChoice` flattens all the model inputs, applies padding, and then unflattens the results: >>> from dataclasses import dataclass >>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy >>> from typing import Optional, Union >>> import torch >>> @dataclass class DataCollatorForMultipleChoice: """""" Data collator that will dynamically pad the inputs for multiple choice received. """""" tokenizer: PreTrainedTokenizerBase padding: Union[bool, str, PaddingStrategy] = True max_length: Optional[int] = None pad_to_multiple_of: Optional[int] = None def __call__(self, features): label_name = ""label"" if ""label"" in features[0].keys() else ""labels"" labels = [feature.pop(label_name) for feature in features] batch_size = len(features) num_choices = len(features[0][""input_ids""]) flattened_features = [ [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features ] flattened_features = sum(flattened_features, []) batch = self.tokenizer.pad( flattened_features, padding=self.padding, max_length=self.max_length, pad_to_multiple_of=self.pad_to_multiple_of, return_tensors=""pt"", ) batch = {k: v.view(batch_size, num_choices, -1) for k, v in batch.items()} batch[""labels""] = torch.tensor(labels, dtype=torch.int64) return batch >>> from dataclasses import dataclass >>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy >>> from typing import Optional, Union >>> import tensorflow as tf >>> @dataclass class DataCollatorForMultipleChoice: """""" Data collator that will dynamically pad the inputs for multiple choice received. 
"""""" tokenizer: PreTrainedTokenizerBase padding: Union[bool, str, PaddingStrategy] = True max_length: Optional[int] = None pad_to_multiple_of: Optional[int] = None def __call__(self, features): label_name = ""label"" if ""label"" in features[0].keys() else ""labels"" labels = [feature.pop(label_name) for feature in features] batch_size = len(features) num_choices = len(features[0][""input_ids""]) flattened_features = [ [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features ] flattened_features = sum(flattened_features, []) batch = self.tokenizer.pad( flattened_features, padding=self.padding, max_length=self.max_length, pad_to_multiple_of=self.pad_to_multiple_of, return_tensors=""tf"", ) batch = {k: tf.reshape(v, (batch_size, num_choices, -1)) for k, v in batch.items()} batch[""labels""] = tf.convert_to_tensor(labels, dtype=tf.int64) return batch ## Evaluate Including a metric during training is often helpful for evaluating your model's performance. You can quickly load a evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric): >>> import evaluate >>> accuracy = evaluate.load(""accuracy"") Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the accuracy: >>> import numpy as np >>> def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = np.argmax(predictions, axis=1) return accuracy.compute(predictions=predictions, references=labels) Your `compute_metrics` function is ready to go now, and you'll return to it when you setup your training. ## Train If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)! You're ready to start training your model now! Load BERT with [`AutoModelForMultipleChoice`]: >>> from transformers import AutoModelForMultipleChoice, TrainingArguments, Trainer >>> model = AutoModelForMultipleChoice.from_pretrained(""bert-base-uncased"") At this point, only three steps remain: 1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the accuracy and save the training checkpoint. 2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function. 3. Call [`~Trainer.train`] to finetune your model. 
>>> training_args = TrainingArguments( output_dir=""my_awesome_swag_model"", evaluation_strategy=""epoch"", save_strategy=""epoch"", load_best_model_at_end=True, learning_rate=5e-5, per_device_train_batch_size=16, per_device_eval_batch_size=16, num_train_epochs=3, weight_decay=0.01, push_to_hub=True, ) >>> trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_swag[""train""], eval_dataset=tokenized_swag[""validation""], tokenizer=tokenizer, data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer), compute_metrics=compute_metrics, ) >>> trainer.train() Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model: >>> trainer.push_to_hub() If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)! To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters: >>> from transformers import create_optimizer >>> batch_size = 16 >>> num_train_epochs = 2 >>> total_train_steps = (len(tokenized_swag[""train""]) // batch_size) * num_train_epochs >>> optimizer, schedule = create_optimizer(init_lr=5e-5, num_warmup_steps=0, num_train_steps=total_train_steps) Then you can load BERT with [`TFAutoModelForMultipleChoice`]: >>> from transformers import TFAutoModelForMultipleChoice >>> model = TFAutoModelForMultipleChoice.from_pretrained(""bert-base-uncased"") Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]: >>> data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer) >>> tf_train_set = model.prepare_tf_dataset( tokenized_swag[""train""], shuffle=True, batch_size=batch_size, collate_fn=data_collator, ) >>> tf_validation_set = model.prepare_tf_dataset( tokenized_swag[""validation""], shuffle=False, batch_size=batch_size, collate_fn=data_collator, ) Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to: >>> model.compile(optimizer=optimizer) # No loss argument! The last two things to setup before you start training is to compute the accuracy from the predictions, and provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks). Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]: >>> from transformers.keras_callbacks import KerasMetricCallback >>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set) Specify where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]: >>> from transformers.keras_callbacks import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( output_dir=""my_awesome_model"", tokenizer=tokenizer, ) Then bundle your callbacks together: >>> callbacks = [metric_callback, push_to_hub_callback] Finally, you're ready to start training your model! 
Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model: >>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=2, callbacks=callbacks) Once training is completed, your model is automatically uploaded to the Hub so everyone can use it! For a more in-depth example of how to finetune a model for multiple choice, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb). ## Inference Great, now that you've finetuned a model, you can use it for inference! Come up with some text and two candidate answers: >>> prompt = ""France has a bread law, Le Décret Pain, with strict rules on what is allowed in a traditional baguette."" >>> candidate1 = ""The law does not apply to croissants and brioche."" >>> candidate2 = ""The law applies to baguettes."" Tokenize each prompt and candidate answer pair and return PyTorch tensors. You should also create some `labels`: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""my_awesome_swag_model"") >>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors=""pt"", padding=True) >>> labels = torch.tensor(0).unsqueeze(0) Pass your inputs and labels to the model and return the `logits`: >>> from transformers import AutoModelForMultipleChoice >>> model = AutoModelForMultipleChoice.from_pretrained(""my_awesome_swag_model"") >>> outputs = model(**{k: v.unsqueeze(0) for k, v in inputs.items()}, labels=labels) >>> logits = outputs.logits Get the class with the highest probability: >>> predicted_class = logits.argmax().item() >>> predicted_class '0' Tokenize each prompt and candidate answer pair and return TensorFlow tensors: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""my_awesome_swag_model"") >>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors=""tf"", padding=True) Pass your inputs to the model and return the `logits`: >>> from transformers import TFAutoModelForMultipleChoice >>> model = TFAutoModelForMultipleChoice.from_pretrained(""my_awesome_swag_model"") >>> inputs = {k: tf.expand_dims(v, 0) for k, v in inputs.items()} >>> outputs = model(inputs) >>> logits = outputs.logits Get the class with the highest probability: >>> predicted_class = int(tf.math.argmax(logits, axis=-1)[0]) >>> predicted_class '0' " tasks/monocular_depth_estimation.md," # Monocular depth estimation Monocular depth estimation is a computer vision task that involves predicting the depth information of a scene from a single image. In other words, it is the process of estimating the distance of objects in a scene from a single camera viewpoint. Monocular depth estimation has various applications, including 3D reconstruction, augmented reality, autonomous driving, and robotics. It is a challenging task as it requires the model to understand the complex relationships between objects in the scene and the corresponding depth information, which can be affected by factors such as lighting conditions, occlusion, and texture. 
The task illustrated in this tutorial is supported by the following model architectures: [DPT](../model_doc/dpt), [GLPN](../model_doc/glpn) In this guide you'll learn how to: * create a depth estimation pipeline * run depth estimation inference by hand Before you begin, make sure you have all the necessary libraries installed: ```bash pip install -q transformers ## Depth estimation pipeline The simplest way to try out inference with a model supporting depth estimation is to use the corresponding [`pipeline`]. Instantiate a pipeline from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=depth-estimation&sort=downloads): >>> from transformers import pipeline >>> checkpoint = ""vinvino02/glpn-nyu"" >>> depth_estimator = pipeline(""depth-estimation"", model=checkpoint) Next, choose an image to analyze: >>> from PIL import Image >>> import requests >>> url = ""https://unsplash.com/photos/HwBAsSbPBDU/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MzR8fGNhciUyMGluJTIwdGhlJTIwc3RyZWV0fGVufDB8MHx8fDE2Nzg5MDEwODg&force=true&w=640"" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image Pass the image to the pipeline. >>> predictions = depth_estimator(image) The pipeline returns a dictionary with two entries. The first one, called `predicted_depth`, is a tensor with the values being the depth expressed in meters for each pixel. The second one, `depth`, is a PIL image that visualizes the depth estimation result. Let's take a look at the visualized result: >>> predictions[""depth""] ## Depth estimation inference by hand Now that you've seen how to use the depth estimation pipeline, let's see how we can replicate the same result by hand. Start by loading the model and associated processor from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=depth-estimation&sort=downloads). Here we'll use the same checkpoint as before: >>> from transformers import AutoImageProcessor, AutoModelForDepthEstimation >>> checkpoint = ""vinvino02/glpn-nyu"" >>> image_processor = AutoImageProcessor.from_pretrained(checkpoint) >>> model = AutoModelForDepthEstimation.from_pretrained(checkpoint) Prepare the image input for the model using the `image_processor` that will take care of the necessary image transformations such as resizing and normalization: >>> pixel_values = image_processor(image, return_tensors=""pt"").pixel_values Pass the prepared inputs through the model: >>> import torch >>> with torch.no_grad(): outputs = model(pixel_values) predicted_depth = outputs.predicted_depth Visualize the results: >>> import numpy as np >>> # interpolate to original size >>> prediction = torch.nn.functional.interpolate( predicted_depth.unsqueeze(1), size=image.size[::-1], mode=""bicubic"", align_corners=False, ).squeeze() >>> output = prediction.numpy() >>> formatted = (output * 255 / np.max(output)).astype(""uint8"") >>> depth = Image.fromarray(formatted) >>> depth " tasks/sequence_classification.md," # Text classification [[open-in-colab]] Text classification is a common NLP task that assigns a label or class to text. Some of the largest companies run text classification in production for a wide range of practical applications. One of the most popular forms of text classification is sentiment analysis, which assigns a label like 🙂 positive, 🙁 negative, or 😐 neutral to a sequence of text. This guide will show you how to: 1. 
Finetune [DistilBERT](https://huggingface.co/distilbert-base-uncased) on the [IMDb](https://huggingface.co/datasets/imdb) dataset to determine whether a movie review is positive or negative. 2. Use your finetuned model for inference. The task illustrated in this tutorial is supported by the following model architectures: [ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [BioGpt](../model_doc/biogpt), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [CodeLlama](../model_doc/code_llama), [ConvBERT](../model_doc/convbert), [CTRL](../model_doc/ctrl), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [ESM](../model_doc/esm), [Falcon](../model_doc/falcon), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [GPT-Sw3](../model_doc/gpt-sw3), [OpenAI GPT-2](../model_doc/gpt2), [GPTBigCode](../model_doc/gpt_bigcode), [GPT Neo](../model_doc/gpt_neo), [GPT NeoX](../model_doc/gpt_neox), [GPT-J](../model_doc/gptj), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3), [LED](../model_doc/led), [LiLT](../model_doc/lilt), [LLaMA](../model_doc/llama), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [MarkupLM](../model_doc/markuplm), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [Mistral](../model_doc/mistral), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MPT](../model_doc/mpt), [MRA](../model_doc/mra), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [OpenLlama](../model_doc/open-llama), [OpenAI GPT](../model_doc/openai-gpt), [OPT](../model_doc/opt), [Perceiver](../model_doc/perceiver), [Persimmon](../model_doc/persimmon), [Phi](../model_doc/phi), [PLBart](../model_doc/plbart), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [T5](../model_doc/t5), [TAPAS](../model_doc/tapas), [Transformer-XL](../model_doc/transfo-xl), [UMT5](../model_doc/umt5), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso) Before you begin, make sure you have all the necessary libraries installed: ```bash pip install transformers datasets evaluate accelerate We encourage you to login to your Hugging Face account so you can upload and share your model with the community. 
When prompted, enter your token to login: >>> from huggingface_hub import notebook_login >>> notebook_login() ## Load IMDb dataset Start by loading the IMDb dataset from the 🤗 Datasets library: >>> from datasets import load_dataset >>> imdb = load_dataset(""imdb"") Then take a look at an example: >>> imdb[""test""][0] { ""label"": 0, ""text"": ""I love sci-fi and am willing to put up with a lot. Sci-fi movies/TV are usually underfunded, under-appreciated and misunderstood. I tried to like this, I really did, but it is to good TV sci-fi as Babylon 5 is to Star Trek (the original). Silly prosthetics, cheap cardboard sets, stilted dialogues, CG that doesn't match the background, and painfully one-dimensional characters cannot be overcome with a 'sci-fi' setting. (I'm sure there are those of you out there who think Babylon 5 is good sci-fi TV. It's not. It's clichéd and uninspiring.) While US viewers might like emotion and character development, sci-fi is a genre that does not take itself seriously (cf. Star Trek). It may treat important issues, yet not as a serious philosophy. It's really difficult to care about the characters here as they are not simply foolish, just missing a spark of life. Their actions and reactions are wooden and predictable, often painful to watch. The makers of Earth KNOW it's rubbish as they have to always say \""Gene Roddenberry's Earth\"" otherwise people would not continue watching. Roddenberry's ashes must be turning in their orbit as this dull, cheap, poorly edited (watching it without advert breaks really brings this home) trudging Trabant of a show lumbers into space. Spoiler. So, kill off a main character. And then bring him back as another actor. Jeeez! Dallas all over again."", } There are two fields in this dataset: - `text`: the movie review text. - `label`: a value that is either `0` for a negative review or `1` for a positive review. ## Preprocess The next step is to load a DistilBERT tokenizer to preprocess the `text` field: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""distilbert-base-uncased"") Create a preprocessing function to tokenize `text` and truncate sequences to be no longer than DistilBERT's maximum input length: >>> def preprocess_function(examples): return tokenizer(examples[""text""], truncation=True) To apply the preprocessing function over the entire dataset, use 🤗 Datasets [`~datasets.Dataset.map`] function. You can speed up `map` by setting `batched=True` to process multiple elements of the dataset at once: tokenized_imdb = imdb.map(preprocess_function, batched=True) Now create a batch of examples using [`DataCollatorWithPadding`]. It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length. >>> from transformers import DataCollatorWithPadding >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer) >>> from transformers import DataCollatorWithPadding >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors=""tf"") ## Evaluate Including a metric during training is often helpful for evaluating your model's performance. You can quickly load a evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. 
For this task, load the [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric): >>> import evaluate >>> accuracy = evaluate.load(""accuracy"") Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the accuracy: >>> import numpy as np >>> def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = np.argmax(predictions, axis=1) return accuracy.compute(predictions=predictions, references=labels) Your `compute_metrics` function is ready to go now, and you'll return to it when you setup your training. ## Train Before you start training your model, create a map of the expected ids to their labels with `id2label` and `label2id`: >>> id2label = {0: ""NEGATIVE"", 1: ""POSITIVE""} >>> label2id = {""NEGATIVE"": 0, ""POSITIVE"": 1} If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)! You're ready to start training your model now! Load DistilBERT with [`AutoModelForSequenceClassification`] along with the number of expected labels, and the label mappings: >>> from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer >>> model = AutoModelForSequenceClassification.from_pretrained( ""distilbert-base-uncased"", num_labels=2, id2label=id2label, label2id=label2id ) At this point, only three steps remain: 1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the accuracy and save the training checkpoint. 2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function. 3. Call [`~Trainer.train`] to finetune your model. >>> training_args = TrainingArguments( output_dir=""my_awesome_model"", learning_rate=2e-5, per_device_train_batch_size=16, per_device_eval_batch_size=16, num_train_epochs=2, weight_decay=0.01, evaluation_strategy=""epoch"", save_strategy=""epoch"", load_best_model_at_end=True, push_to_hub=True, ) >>> trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_imdb[""train""], eval_dataset=tokenized_imdb[""test""], tokenizer=tokenizer, data_collator=data_collator, compute_metrics=compute_metrics, ) >>> trainer.train() [`Trainer`] applies dynamic padding by default when you pass `tokenizer` to it. In this case, you don't need to specify a data collator explicitly. Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model: >>> trainer.push_to_hub() If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)! 
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters: >>> from transformers import create_optimizer >>> import tensorflow as tf >>> batch_size = 16 >>> num_epochs = 5 >>> batches_per_epoch = len(tokenized_imdb[""train""]) // batch_size >>> total_train_steps = int(batches_per_epoch * num_epochs) >>> optimizer, schedule = create_optimizer(init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps) Then you can load DistilBERT with [`TFAutoModelForSequenceClassification`] along with the number of expected labels, and the label mappings: >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained( ""distilbert-base-uncased"", num_labels=2, id2label=id2label, label2id=label2id ) Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]: >>> tf_train_set = model.prepare_tf_dataset( tokenized_imdb[""train""], shuffle=True, batch_size=16, collate_fn=data_collator, ) >>> tf_validation_set = model.prepare_tf_dataset( tokenized_imdb[""test""], shuffle=False, batch_size=16, collate_fn=data_collator, ) Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to: >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) # No loss argument! The last two things to setup before you start training is to compute the accuracy from the predictions, and provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks). Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]: >>> from transformers.keras_callbacks import KerasMetricCallback >>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set) Specify where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]: >>> from transformers.keras_callbacks import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( output_dir=""my_awesome_model"", tokenizer=tokenizer, ) Then bundle your callbacks together: >>> callbacks = [metric_callback, push_to_hub_callback] Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model: >>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=callbacks) Once training is completed, your model is automatically uploaded to the Hub so everyone can use it! For a more in-depth example of how to finetune a model for text classification, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb). ## Inference Great, now that you've finetuned a model, you can use it for inference! Grab some text you'd like to run inference on: >>> text = ""This was a masterpiece. Not completely faithful to the books, but enthralling from beginning to end. 
Might be my favorite of the three."" The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for sentiment analysis with your model, and pass your text to it: >>> from transformers import pipeline >>> classifier = pipeline(""sentiment-analysis"", model=""stevhliu/my_awesome_model"") >>> classifier(text) [{'label': 'POSITIVE', 'score': 0.9994940757751465}] You can also manually replicate the results of the `pipeline` if you'd like: Tokenize the text and return PyTorch tensors: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""stevhliu/my_awesome_model"") >>> inputs = tokenizer(text, return_tensors=""pt"") Pass your inputs to the model and return the `logits`: >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(""stevhliu/my_awesome_model"") >>> with torch.no_grad(): logits = model(**inputs).logits Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a text label: >>> predicted_class_id = logits.argmax().item() >>> model.config.id2label[predicted_class_id] 'POSITIVE' Tokenize the text and return TensorFlow tensors: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""stevhliu/my_awesome_model"") >>> inputs = tokenizer(text, return_tensors=""tf"") Pass your inputs to the model and return the `logits`: >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained(""stevhliu/my_awesome_model"") >>> logits = model(**inputs).logits Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a text label: >>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0]) >>> model.config.id2label[predicted_class_id] 'POSITIVE' " tasks/semantic_segmentation.md," # Image Segmentation [[open-in-colab]] Image segmentation models separate areas corresponding to different areas of interest in an image. These models work by assigning a label to each pixel. There are several types of segmentation: semantic segmentation, instance segmentation, and panoptic segmentation. In this guide, we will: 1. [Take a look at different types of segmentation](#Types-of-Segmentation), 2. [Have an end-to-end fine-tuning example for semantic segmentation](#Fine-tuning-a-Model-for-Segmentation). Before you begin, make sure you have all the necessary libraries installed: ```bash pip install -q datasets transformers evaluate We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in: >>> from huggingface_hub import notebook_login >>> notebook_login() ## Types of Segmentation Semantic segmentation assigns a label or class to every single pixel in an image. Let's take a look at a semantic segmentation model output. It will assign the same class to every instance of an object it comes across in an image, for example, all cats will be labeled as ""cat"" instead of ""cat-1"", ""cat-2"". We can use transformers' image segmentation pipeline to quickly infer a semantic segmentation model. Let's take a look at the example image. 
```python from transformers import pipeline from PIL import Image import requests url = ""https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/segmentation_input.jpg"" image = Image.open(requests.get(url, stream=True).raw) image We will use [nvidia/segformer-b1-finetuned-cityscapes-1024-1024](https://huggingface.co/nvidia/segformer-b1-finetuned-cityscapes-1024-1024). ```python semantic_segmentation = pipeline(""image-segmentation"", ""nvidia/segformer-b1-finetuned-cityscapes-1024-1024"") results = semantic_segmentation(image) results The segmentation pipeline output includes a mask for every predicted class. ```bash [{'score': None, 'label': 'road', 'mask': <PIL.Image.Image>}, {'score': None, 'label': 'sidewalk', 'mask': <PIL.Image.Image>}, {'score': None, 'label': 'building', 'mask': <PIL.Image.Image>}, {'score': None, 'label': 'wall', 'mask': <PIL.Image.Image>}, {'score': None, 'label': 'pole', 'mask': <PIL.Image.Image>}, {'score': None, 'label': 'traffic sign', 'mask': <PIL.Image.Image>}, {'score': None, 'label': 'vegetation', 'mask': <PIL.Image.Image>}, {'score': None, 'label': 'terrain', 'mask': <PIL.Image.Image>}, {'score': None, 'label': 'sky', 'mask': <PIL.Image.Image>}, {'score': None, 'label': 'car', 'mask': <PIL.Image.Image>}] Taking a look at the mask for the car class, we can see every car is classified with the same mask. ```python results[-1][""mask""] In instance segmentation, the goal is not to classify every pixel, but to predict a mask for **every instance of an object** in a given image. It works very similarly to object detection, but instead of a bounding box for every instance there is a segmentation mask. We will use [facebook/mask2former-swin-large-cityscapes-instance](https://huggingface.co/facebook/mask2former-swin-large-cityscapes-instance) for this. ```python instance_segmentation = pipeline(""image-segmentation"", ""facebook/mask2former-swin-large-cityscapes-instance"") results = instance_segmentation(image) results As you can see below, there are multiple cars classified, and there is no classification for pixels other than those belonging to car and person instances. ```bash [{'score': 0.999944, 'label': 'car', 'mask': <PIL.Image.Image>}, {'score': 0.999945, 'label': 'car', 'mask': <PIL.Image.Image>}, {'score': 0.999652, 'label': 'car', 'mask': <PIL.Image.Image>}, {'score': 0.903529, 'label': 'person', 'mask': <PIL.Image.Image>}] Let's check out one of the car masks below. ```python results[2][""mask""] Panoptic segmentation combines semantic segmentation and instance segmentation, where every pixel is classified into a class and an instance of that class, and there are multiple masks for each instance of a class. We can use [facebook/mask2former-swin-large-cityscapes-panoptic](https://huggingface.co/facebook/mask2former-swin-large-cityscapes-panoptic) for this. ```python panoptic_segmentation = pipeline(""image-segmentation"", ""facebook/mask2former-swin-large-cityscapes-panoptic"") results = panoptic_segmentation(image) results As you can see below, we have more classes this time. Later we will see that every pixel is classified into one of the classes. ```bash [{'score': 0.999981, 'label': 'car', 'mask': <PIL.Image.Image>}, {'score': 0.999958, 'label': 'car', 'mask': <PIL.Image.Image>}, {'score': 0.99997, 'label': 'vegetation', 'mask': <PIL.Image.Image>}, {'score': 0.999575, 'label': 'pole', 'mask': <PIL.Image.Image>}, {'score': 0.999958, 'label': 'building', 'mask': <PIL.Image.Image>}, {'score': 0.999634, 'label': 'road', 'mask': <PIL.Image.Image>}, {'score': 0.996092, 'label': 'sidewalk', 'mask': <PIL.Image.Image>}, {'score': 0.999221, 'label': 'car', 'mask': <PIL.Image.Image>}, {'score': 0.99987, 'label': 'sky', 'mask': <PIL.Image.Image>}] Let's have a side-by-side comparison for all types of segmentation.
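Whichever type of segmentation you run, each returned `mask` is a `PIL.Image` with the same height and width as the input image. As a minimal sketch (assuming the `results` and `image` objects from the panoptic example above, with matplotlib installed), you can overlay one predicted mask on the image:

```py
>>> import numpy as np
>>> import matplotlib.pyplot as plt

>>> mask = np.array(results[0]["mask"])  # binary mask (0 or 255) for the first predicted segment
>>> overlay = np.array(image).copy()
>>> overlay[mask > 0] = [255, 0, 0]  # paint the segment red
>>> blended = (0.5 * np.array(image) + 0.5 * overlay).astype(np.uint8)
>>> plt.imshow(blended)
>>> plt.axis("off")
>>> plt.show()
```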
Seeing all types of segmentation, let's have a deep dive on fine-tuning a model for semantic segmentation. Common real-world applications of semantic segmentation include training self-driving cars to identify pedestrians and important traffic information, identifying cells and abnormalities in medical imagery, and monitoring environmental changes from satellite imagery. ## Fine-tuning a Model for Segmentation We will now: 1. Finetune [SegFormer](https://huggingface.co/docs/transformers/main/en/model_doc/segformer#segformer) on the [SceneParse150](https://huggingface.co/datasets/scene_parse_150) dataset. 2. Use your fine-tuned model for inference. The task illustrated in this tutorial is supported by the following model architectures: [BEiT](../model_doc/beit), [Data2VecVision](../model_doc/data2vec-vision), [DPT](../model_doc/dpt), [MobileNetV2](../model_doc/mobilenet_v2), [MobileViT](../model_doc/mobilevit), [MobileViTV2](../model_doc/mobilevitv2), [SegFormer](../model_doc/segformer), [UPerNet](../model_doc/upernet) ### Load SceneParse150 dataset Start by loading a smaller subset of the SceneParse150 dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset. >>> from datasets import load_dataset >>> ds = load_dataset(""scene_parse_150"", split=""train[:50]"") Split the dataset's `train` split into a train and test set with the [`~datasets.Dataset.train_test_split`] method: >>> ds = ds.train_test_split(test_size=0.2) >>> train_ds = ds[""train""] >>> test_ds = ds[""test""] Then take a look at an example: >>> train_ds[0] {'image': , 'annotation': , 'scene_category': 368} - `image`: a PIL image of the scene. - `annotation`: a PIL image of the segmentation map, which is also the model's target. - `scene_category`: a category id that describes the image scene like ""kitchen"" or ""office"". In this guide, you'll only need `image` and `annotation`, both of which are PIL images. You'll also want to create a dictionary that maps a label id to a label class which will be useful when you set up the model later. Download the mappings from the Hub and create the `id2label` and `label2id` dictionaries: >>> import json >>> from huggingface_hub import cached_download, hf_hub_url >>> repo_id = ""huggingface/label-files"" >>> filename = ""ade20k-id2label.json"" >>> id2label = json.load(open(cached_download(hf_hub_url(repo_id, filename, repo_type=""dataset"")), ""r"")) >>> id2label = {int(k): v for k, v in id2label.items()} >>> label2id = {v: k for k, v in id2label.items()} >>> num_labels = len(id2label) ### Preprocess The next step is to load a SegFormer image processor to prepare the images and annotations for the model. Some datasets, like this one, use the zero-index as the background class. However, the background class isn't actually included in the 150 classes, so you'll need to set `reduce_labels=True` to subtract one from all the labels. The zero-index is replaced by `255` so it's ignored by SegFormer's loss function: >>> from transformers import AutoImageProcessor >>> checkpoint = ""nvidia/mit-b0"" >>> image_processor = AutoImageProcessor.from_pretrained(checkpoint, reduce_labels=True) It is common to apply some data augmentations to an image dataset to make a model more robust against overfitting. 
In this guide, you'll use the [`ColorJitter`](https://pytorch.org/vision/stable/generated/torchvision.transforms.ColorJitter.html) function from [torchvision](https://pytorch.org/vision/stable/index.html) to randomly change the color properties of an image, but you can also use any image library you like. >>> from torchvision.transforms import ColorJitter >>> jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1) Now create two preprocessing functions to prepare the images and annotations for the model. These functions convert the images into `pixel_values` and annotations to `labels`. For the training set, `jitter` is applied before providing the images to the image processor. For the test set, the image processor crops and normalizes the `images`, and only crops the `labels` because no data augmentation is applied during testing. >>> def train_transforms(example_batch): images = [jitter(x) for x in example_batch[""image""]] labels = [x for x in example_batch[""annotation""]] inputs = image_processor(images, labels) return inputs >>> def val_transforms(example_batch): images = [x for x in example_batch[""image""]] labels = [x for x in example_batch[""annotation""]] inputs = image_processor(images, labels) return inputs To apply the `jitter` over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.set_transform`] function. The transform is applied on the fly which is faster and consumes less disk space: >>> train_ds.set_transform(train_transforms) >>> test_ds.set_transform(val_transforms) It is common to apply some data augmentations to an image dataset to make a model more robust against overfitting. In this guide, you'll use [`tf.image`](https://www.tensorflow.org/api_docs/python/tf/image) to randomly change the color properties of an image, but you can also use any image library you like. Define two separate transformation functions: - training data transformations that include image augmentation - validation data transformations that only transpose the images, since computer vision models in 🤗 Transformers expect channels-first layout >>> import tensorflow as tf >>> def aug_transforms(image): image = tf.keras.utils.img_to_array(image) image = tf.image.random_brightness(image, 0.25) image = tf.image.random_contrast(image, 0.5, 2.0) image = tf.image.random_saturation(image, 0.75, 1.25) image = tf.image.random_hue(image, 0.1) image = tf.transpose(image, (2, 0, 1)) return image >>> def transforms(image): image = tf.keras.utils.img_to_array(image) image = tf.transpose(image, (2, 0, 1)) return image Next, create two preprocessing functions to prepare batches of images and annotations for the model. These functions apply the image transformations and use the earlier loaded `image_processor` to convert the images into `pixel_values` and annotations to `labels`. `ImageProcessor` also takes care of resizing and normalizing the images. >>> def train_transforms(example_batch): images = [aug_transforms(x.convert(""RGB"")) for x in example_batch[""image""]] labels = [x for x in example_batch[""annotation""]] inputs = image_processor(images, labels) return inputs >>> def val_transforms(example_batch): images = [transforms(x.convert(""RGB"")) for x in example_batch[""image""]] labels = [x for x in example_batch[""annotation""]] inputs = image_processor(images, labels) return inputs To apply the preprocessing transformations over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.set_transform`] function. 
The transform is applied on the fly which is faster and consumes less disk space: >>> train_ds.set_transform(train_transforms) >>> test_ds.set_transform(val_transforms) ### Evaluate Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [mean Intersection over Union](https://huggingface.co/spaces/evaluate-metric/accuracy) (IoU) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric): >>> import evaluate >>> metric = evaluate.load(""mean_iou"") Then create a function to [`~evaluate.EvaluationModule.compute`] the metrics. Your predictions need to be converted to logits first, and then reshaped to match the size of the labels before you can call [`~evaluate.EvaluationModule.compute`]: >>> import numpy as np >>> import torch >>> from torch import nn >>> def compute_metrics(eval_pred): with torch.no_grad(): logits, labels = eval_pred logits_tensor = torch.from_numpy(logits) logits_tensor = nn.functional.interpolate( logits_tensor, size=labels.shape[-2:], mode=""bilinear"", align_corners=False, ).argmax(dim=1) pred_labels = logits_tensor.detach().cpu().numpy() metrics = metric.compute( predictions=pred_labels, references=labels, num_labels=num_labels, ignore_index=255, reduce_labels=False, ) for key, value in metrics.items(): if isinstance(value, np.ndarray): metrics[key] = value.tolist() return metrics >>> def compute_metrics(eval_pred): logits, labels = eval_pred logits = tf.transpose(logits, perm=[0, 2, 3, 1]) logits_resized = tf.image.resize( logits, size=tf.shape(labels)[1:], method=""bilinear"", ) pred_labels = tf.argmax(logits_resized, axis=-1) metrics = metric.compute( predictions=pred_labels, references=labels, num_labels=num_labels, ignore_index=-1, reduce_labels=image_processor.do_reduce_labels, ) per_category_accuracy = metrics.pop(""per_category_accuracy"").tolist() per_category_iou = metrics.pop(""per_category_iou"").tolist() metrics.update({f""accuracy_{id2label[i]}"": v for i, v in enumerate(per_category_accuracy)}) metrics.update({f""iou_{id2label[i]}"": v for i, v in enumerate(per_category_iou)}) return {""val_"" + k: v for k, v in metrics.items()} Your `compute_metrics` function is ready to go now, and you'll return to it when you setup your training. ### Train If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)! You're ready to start training your model now! Load SegFormer with [`AutoModelForSemanticSegmentation`], and pass the model the mapping between label ids and label classes: >>> from transformers import AutoModelForSemanticSegmentation, TrainingArguments, Trainer >>> model = AutoModelForSemanticSegmentation.from_pretrained(checkpoint, id2label=id2label, label2id=label2id) At this point, only three steps remain: 1. Define your training hyperparameters in [`TrainingArguments`]. It is important you don't remove unused columns because this'll drop the `image` column. Without the `image` column, you can't create `pixel_values`. Set `remove_unused_columns=False` to prevent this behavior! The only other required parameter is `output_dir` which specifies where to save your model. 
You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the IoU metric and save the training checkpoint. 2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function. 3. Call [`~Trainer.train`] to finetune your model. >>> training_args = TrainingArguments( output_dir=""segformer-b0-scene-parse-150"", learning_rate=6e-5, num_train_epochs=50, per_device_train_batch_size=2, per_device_eval_batch_size=2, save_total_limit=3, evaluation_strategy=""steps"", save_strategy=""steps"", save_steps=20, eval_steps=20, logging_steps=1, eval_accumulation_steps=5, remove_unused_columns=False, push_to_hub=True, ) >>> trainer = Trainer( model=model, args=training_args, train_dataset=train_ds, eval_dataset=test_ds, compute_metrics=compute_metrics, ) >>> trainer.train() Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model: >>> trainer.push_to_hub() If you are unfamiliar with fine-tuning a model with Keras, check out the [basic tutorial](./training#train-a-tensorflow-model-with-keras) first! To fine-tune a model in TensorFlow, follow these steps: 1. Define the training hyperparameters, and set up an optimizer and a learning rate schedule. 2. Instantiate a pretrained model. 3. Convert a 🤗 Dataset to a `tf.data.Dataset`. 4. Compile your model. 5. Add callbacks to calculate metrics and upload your model to 🤗 Hub 6. Use the `fit()` method to run the training. Start by defining the hyperparameters, optimizer and learning rate schedule: >>> from transformers import create_optimizer >>> batch_size = 2 >>> num_epochs = 50 >>> num_train_steps = len(train_ds) * num_epochs >>> learning_rate = 6e-5 >>> weight_decay_rate = 0.01 >>> optimizer, lr_schedule = create_optimizer( init_lr=learning_rate, num_train_steps=num_train_steps, weight_decay_rate=weight_decay_rate, num_warmup_steps=0, ) Then, load SegFormer with [`TFAutoModelForSemanticSegmentation`] along with the label mappings, and compile it with the optimizer. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to: >>> from transformers import TFAutoModelForSemanticSegmentation >>> model = TFAutoModelForSemanticSegmentation.from_pretrained( checkpoint, id2label=id2label, label2id=label2id, ) >>> model.compile(optimizer=optimizer) # No loss argument! Convert your datasets to the `tf.data.Dataset` format using the [`~datasets.Dataset.to_tf_dataset`] and the [`DefaultDataCollator`]: >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator(return_tensors=""tf"") >>> tf_train_dataset = train_ds.to_tf_dataset( columns=[""pixel_values"", ""label""], shuffle=True, batch_size=batch_size, collate_fn=data_collator, ) >>> tf_eval_dataset = test_ds.to_tf_dataset( columns=[""pixel_values"", ""label""], shuffle=True, batch_size=batch_size, collate_fn=data_collator, ) To compute the accuracy from the predictions and push your model to the 🤗 Hub, use [Keras callbacks](../main_classes/keras_callbacks). 
Pass your `compute_metrics` function to [`KerasMetricCallback`], and use the [`PushToHubCallback`] to upload the model: >>> from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback >>> metric_callback = KerasMetricCallback( metric_fn=compute_metrics, eval_dataset=tf_eval_dataset, batch_size=batch_size, label_cols=[""labels""] ) >>> push_to_hub_callback = PushToHubCallback(output_dir=""scene_segmentation"", tokenizer=image_processor) >>> callbacks = [metric_callback, push_to_hub_callback] Finally, you are ready to train your model! Call `fit()` with your training and validation datasets, the number of epochs, and your callbacks to fine-tune the model: >>> model.fit( tf_train_dataset, validation_data=tf_eval_dataset, callbacks=callbacks, epochs=num_epochs, ) Congratulations! You have fine-tuned your model and shared it on the 🤗 Hub. You can now use it for inference! ### Inference Great, now that you've finetuned a model, you can use it for inference! Load an image for inference: >>> image = ds[0][""image""] >>> image We will now see how to infer without a pipeline. Process the image with an image processor and place the `pixel_values` on a GPU: >>> device = torch.device(""cuda"" if torch.cuda.is_available() else ""cpu"") # use GPU if available, otherwise use a CPU >>> encoding = image_processor(image, return_tensors=""pt"") >>> pixel_values = encoding.pixel_values.to(device) Pass your input to the model and return the `logits`: >>> outputs = model(pixel_values=pixel_values) >>> logits = outputs.logits.cpu() Next, rescale the logits to the original image size: >>> upsampled_logits = nn.functional.interpolate( logits, size=image.size[::-1], mode=""bilinear"", align_corners=False, ) >>> pred_seg = upsampled_logits.argmax(dim=1)[0] Load an image processor to preprocess the image and return the input as TensorFlow tensors: >>> from transformers import AutoImageProcessor >>> image_processor = AutoImageProcessor.from_pretrained(""MariaK/scene_segmentation"") >>> inputs = image_processor(image, return_tensors=""tf"") Pass your input to the model and return the `logits`: >>> from transformers import TFAutoModelForSemanticSegmentation >>> model = TFAutoModelForSemanticSegmentation.from_pretrained(""MariaK/scene_segmentation"") >>> logits = model(**inputs).logits Next, rescale the logits to the original image size and apply argmax on the class dimension: >>> logits = tf.transpose(logits, [0, 2, 3, 1]) >>> upsampled_logits = tf.image.resize( logits, # We reverse the shape of `image` because `image.size` returns width and height. image.size[::-1], ) >>> pred_seg = tf.math.argmax(upsampled_logits, axis=-1)[0] To visualize the results, load the [dataset color palette](https://github.com/tensorflow/models/blob/3f1ca33afe3c1631b733ea7e40c294273b9e406d/research/deeplab/utils/get_dataset_colormap.py#L51) as `ade_palette()` that maps each class to their RGB values. 
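The guide assumes you define `ade_palette()` yourself. Here is a minimal sketch of such a helper; the handful of RGB triples below are illustrative placeholders, so copy the full 150-entry ADE20K color list from the file linked above for real use:

```py
>>> def ade_palette():
...     """Map each class id to an RGB color. Illustrative stub: replace with the full ADE20K palette."""
...     return [
...         [120, 120, 120],
...         [180, 120, 120],
...         [6, 230, 230],
...         [80, 50, 50],
...         [4, 200, 3],
...     ]
```

Any list of `[R, G, B]` values works, as long as its length covers the label ids predicted by the model.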
Then you can combine and plot your image and the predicted segmentation map: >>> import matplotlib.pyplot as plt >>> import numpy as np >>> color_seg = np.zeros((pred_seg.shape[0], pred_seg.shape[1], 3), dtype=np.uint8) >>> palette = np.array(ade_palette()) >>> for label, color in enumerate(palette): color_seg[pred_seg == label, :] = color >>> color_seg = color_seg[..., ::-1] # convert to BGR >>> img = np.array(image) * 0.5 + color_seg * 0.5 # plot the image with the segmentation map >>> img = img.astype(np.uint8) >>> plt.figure(figsize=(15, 10)) >>> plt.imshow(img) >>> plt.show() " tasks/object_detection.md," # Object detection [[open-in-colab]] Object detection is the computer vision task of detecting instances (such as humans, buildings, or cars) in an image. Object detection models receive an image as input and output coordinates of the bounding boxes and associated labels of the detected objects. An image can contain multiple objects, each with its own bounding box and a label (e.g. it can have a car and a building), and each object can be present in different parts of an image (e.g. the image can have several cars). This task is commonly used in autonomous driving for detecting things like pedestrians, road signs, and traffic lights. Other applications include counting objects in images, image search, and more. In this guide, you will learn how to: 1. Finetune [DETR](https://huggingface.co/docs/transformers/model_doc/detr), a model that combines a convolutional backbone with an encoder-decoder Transformer, on the [CPPE-5](https://huggingface.co/datasets/cppe-5) dataset. 2. Use your finetuned model for inference. The task illustrated in this tutorial is supported by the following model architectures: [Conditional DETR](../model_doc/conditional_detr), [Deformable DETR](../model_doc/deformable_detr), [DETA](../model_doc/deta), [DETR](../model_doc/detr), [Table Transformer](../model_doc/table-transformer), [YOLOS](../model_doc/yolos) Before you begin, make sure you have all the necessary libraries installed: ```bash pip install -q datasets transformers evaluate timm albumentations You'll use 🤗 Datasets to load a dataset from the Hugging Face Hub, 🤗 Transformers to train your model, and `albumentations` to augment the data. `timm` is currently required to load a convolutional backbone for the DETR model. We encourage you to share your model with the community. Log in to your Hugging Face account to upload it to the Hub. When prompted, enter your token to log in: >>> from huggingface_hub import notebook_login >>> notebook_login() ## Load the CPPE-5 dataset The [CPPE-5 dataset](https://huggingface.co/datasets/cppe-5) contains images with annotations identifying medical personal protective equipment (PPE) in the context of the COVID-19 pandemic. Start by loading the dataset: >>> from datasets import load_dataset >>> cppe5 = load_dataset(""cppe-5"") >>> cppe5 DatasetDict({ train: Dataset({ features: ['image_id', 'image', 'width', 'height', 'objects'], num_rows: 1000 }) test: Dataset({ features: ['image_id', 'image', 'width', 'height', 'objects'], num_rows: 29 }) }) You'll see that this dataset already comes with a training set containing 1000 images and a test set with 29 images. To get familiar with the data, explore what the examples look like.
>>> cppe5[""train""][0] {'image_id': 15, 'image': , 'width': 943, 'height': 663, 'objects': {'id': [114, 115, 116, 117], 'area': [3796, 1596, 152768, 81002], 'bbox': [[302.0, 109.0, 73.0, 52.0], [810.0, 100.0, 57.0, 28.0], [160.0, 31.0, 248.0, 616.0], [741.0, 68.0, 202.0, 401.0]], 'category': [4, 4, 0, 0]}} The examples in the dataset have the following fields: - `image_id`: the example image id - `image`: a `PIL.Image.Image` object containing the image - `width`: width of the image - `height`: height of the image - `objects`: a dictionary containing bounding box metadata for the objects in the image: - `id`: the annotation id - `area`: the area of the bounding box - `bbox`: the object's bounding box (in the [COCO format](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) ) - `category`: the object's category, with possible values including `Coverall (0)`, `Face_Shield (1)`, `Gloves (2)`, `Goggles (3)` and `Mask (4)` You may notice that the `bbox` field follows the COCO format, which is the format that the DETR model expects. However, the grouping of the fields inside `objects` differs from the annotation format DETR requires. You will need to apply some preprocessing transformations before using this data for training. To get an even better understanding of the data, visualize an example in the dataset. >>> import numpy as np >>> import os >>> from PIL import Image, ImageDraw >>> image = cppe5[""train""][0][""image""] >>> annotations = cppe5[""train""][0][""objects""] >>> draw = ImageDraw.Draw(image) >>> categories = cppe5[""train""].features[""objects""].feature[""category""].names >>> id2label = {index: x for index, x in enumerate(categories, start=0)} >>> label2id = {v: k for k, v in id2label.items()} >>> for i in range(len(annotations[""id""])): box = annotations[""bbox""][i] class_idx = annotations[""category""][i] x, y, w, h = tuple(box) draw.rectangle((x, y, x + w, y + h), outline=""red"", width=1) draw.text((x, y), id2label[class_idx], fill=""white"") >>> image To visualize the bounding boxes with associated labels, you can get the labels from the dataset's metadata, specifically the `category` field. You'll also want to create dictionaries that map a label id to a label class (`id2label`) and the other way around (`label2id`). You can use them later when setting up the model. Including these maps will make your model reusable by others if you share it on the Hugging Face Hub. As a final step of getting familiar with the data, explore it for potential issues. One common problem with datasets for object detection is bounding boxes that ""stretch"" beyond the edge of the image. Such ""runaway"" bounding boxes can raise errors during training and should be addressed at this stage. There are a few examples with this issue in this dataset. To keep things simple in this guide, we remove these images from the data. >>> remove_idx = [590, 821, 822, 875, 876, 878, 879] >>> keep = [i for i in range(len(cppe5[""train""])) if i not in remove_idx] >>> cppe5[""train""] = cppe5[""train""].select(keep) ## Preprocess the data To finetune a model, you must preprocess the data you plan to use to match precisely the approach used for the pre-trained model. [`AutoImageProcessor`] takes care of processing image data to create `pixel_values`, `pixel_mask`, and `labels` that a DETR model can train with. 
The image processor has some attributes that you won't have to worry about: - `image_mean = [0.485, 0.456, 0.406 ]` - `image_std = [0.229, 0.224, 0.225]` These are the mean and standard deviation used to normalize images during the model pre-training. These values are crucial to replicate when doing inference or finetuning a pre-trained image model. Instantiate the image processor from the same checkpoint as the model you want to finetune. >>> from transformers import AutoImageProcessor >>> checkpoint = ""facebook/detr-resnet-50"" >>> image_processor = AutoImageProcessor.from_pretrained(checkpoint) Before passing the images to the `image_processor`, apply two preprocessing transformations to the dataset: - Augmenting images - Reformatting annotations to meet DETR expectations First, to make sure the model does not overfit on the training data, you can apply image augmentation with any data augmentation library. Here we use [Albumentations](https://albumentations.ai/docs/) This library ensures that transformations affect the image and update the bounding boxes accordingly. The 🤗 Datasets library documentation has a detailed [guide on how to augment images for object detection](https://huggingface.co/docs/datasets/object_detection), and it uses the exact same dataset as an example. Apply the same approach here, resize each image to (480, 480), flip it horizontally, and brighten it: >>> import albumentations >>> import numpy as np >>> import torch >>> transform = albumentations.Compose( [ albumentations.Resize(480, 480), albumentations.HorizontalFlip(p=1.0), albumentations.RandomBrightnessContrast(p=1.0), ], bbox_params=albumentations.BboxParams(format=""coco"", label_fields=[""category""]), ) The `image_processor` expects the annotations to be in the following format: `{'image_id': int, 'annotations': List[Dict]}`, where each dictionary is a COCO object annotation. Let's add a function to reformat annotations for a single example: >>> def formatted_anns(image_id, category, area, bbox): annotations = [] for i in range(0, len(category)): new_ann = { ""image_id"": image_id, ""category_id"": category[i], ""isCrowd"": 0, ""area"": area[i], ""bbox"": list(bbox[i]), } annotations.append(new_ann) return annotations Now you can combine the image and annotation transformations to use on a batch of examples: >>> # transforming a batch >>> def transform_aug_ann(examples): image_ids = examples[""image_id""] images, bboxes, area, categories = [], [], [], [] for image, objects in zip(examples[""image""], examples[""objects""]): image = np.array(image.convert(""RGB""))[:, :, ::-1] out = transform(image=image, bboxes=objects[""bbox""], category=objects[""category""]) area.append(objects[""area""]) images.append(out[""image""]) bboxes.append(out[""bboxes""]) categories.append(out[""category""]) targets = [ {""image_id"": id_, ""annotations"": formatted_anns(id_, cat_, ar_, box_)} for id_, cat_, ar_, box_ in zip(image_ids, categories, area, bboxes) ] return image_processor(images=images, annotations=targets, return_tensors=""pt"") Apply this preprocessing function to the entire dataset using 🤗 Datasets [`~datasets.Dataset.with_transform`] method. This method applies transformations on the fly when you load an element of the dataset. At this point, you can check what an example from the dataset looks like after the transformations. You should see a tensor with `pixel_values`, a tensor with `pixel_mask`, and `labels`. 
>>> cppe5[""train""] = cppe5[""train""].with_transform(transform_aug_ann) >>> cppe5[""train""][15] {'pixel_values': tensor([[[ 0.9132, 0.9132, 0.9132, , -1.9809, -1.9809, -1.9809], [ 0.9132, 0.9132, 0.9132, , -1.9809, -1.9809, -1.9809], [ 0.9132, 0.9132, 0.9132, , -1.9638, -1.9638, -1.9638], , [-1.5699, -1.5699, -1.5699, , -1.9980, -1.9980, -1.9980], [-1.5528, -1.5528, -1.5528, , -1.9980, -1.9809, -1.9809], [-1.5528, -1.5528, -1.5528, , -1.9980, -1.9809, -1.9809]], [[ 1.3081, 1.3081, 1.3081, , -1.8431, -1.8431, -1.8431], [ 1.3081, 1.3081, 1.3081, , -1.8431, -1.8431, -1.8431], [ 1.3081, 1.3081, 1.3081, , -1.8256, -1.8256, -1.8256], , [-1.3179, -1.3179, -1.3179, , -1.8606, -1.8606, -1.8606], [-1.3004, -1.3004, -1.3004, , -1.8606, -1.8431, -1.8431], [-1.3004, -1.3004, -1.3004, , -1.8606, -1.8431, -1.8431]], [[ 1.4200, 1.4200, 1.4200, , -1.6476, -1.6476, -1.6476], [ 1.4200, 1.4200, 1.4200, , -1.6476, -1.6476, -1.6476], [ 1.4200, 1.4200, 1.4200, , -1.6302, -1.6302, -1.6302], , [-1.0201, -1.0201, -1.0201, , -1.5604, -1.5604, -1.5604], [-1.0027, -1.0027, -1.0027, , -1.5604, -1.5430, -1.5430], [-1.0027, -1.0027, -1.0027, , -1.5604, -1.5430, -1.5430]]]), 'pixel_mask': tensor([[1, 1, 1, , 1, 1, 1], [1, 1, 1, , 1, 1, 1], [1, 1, 1, , 1, 1, 1], , [1, 1, 1, , 1, 1, 1], [1, 1, 1, , 1, 1, 1], [1, 1, 1, , 1, 1, 1]]), 'labels': {'size': tensor([800, 800]), 'image_id': tensor([756]), 'class_labels': tensor([4]), 'boxes': tensor([[0.7340, 0.6986, 0.3414, 0.5944]]), 'area': tensor([519544.4375]), 'iscrowd': tensor([0]), 'orig_size': tensor([480, 480])}} You have successfully augmented the individual images and prepared their annotations. However, preprocessing isn't complete yet. In the final step, create a custom `collate_fn` to batch images together. Pad images (which are now `pixel_values`) to the largest image in a batch, and create a corresponding `pixel_mask` to indicate which pixels are real (1) and which are padding (0). >>> def collate_fn(batch): pixel_values = [item[""pixel_values""] for item in batch] encoding = image_processor.pad(pixel_values, return_tensors=""pt"") labels = [item[""labels""] for item in batch] batch = {} batch[""pixel_values""] = encoding[""pixel_values""] batch[""pixel_mask""] = encoding[""pixel_mask""] batch[""labels""] = labels return batch ## Training the DETR model You have done most of the heavy lifting in the previous sections, so now you are ready to train your model! The images in this dataset are still quite large, even after resizing. This means that finetuning this model will require at least one GPU. Training involves the following steps: 1. Load the model with [`AutoModelForObjectDetection`] using the same checkpoint as in the preprocessing. 2. Define your training hyperparameters in [`TrainingArguments`]. 3. Pass the training arguments to [`Trainer`] along with the model, dataset, image processor, and data collator. 4. Call [`~Trainer.train`] to finetune your model. When loading the model from the same checkpoint that you used for the preprocessing, remember to pass the `label2id` and `id2label` maps that you created earlier from the dataset's metadata. Additionally, we specify `ignore_mismatched_sizes=True` to replace the existing classification head with a new one. 
>>> from transformers import AutoModelForObjectDetection >>> model = AutoModelForObjectDetection.from_pretrained( checkpoint, id2label=id2label, label2id=label2id, ignore_mismatched_sizes=True, ) In the [`TrainingArguments`] use `output_dir` to specify where to save your model, then configure hyperparameters as you see fit. It is important you do not remove unused columns because this will drop the image column. Without the image column, you can't create `pixel_values`. For this reason, set `remove_unused_columns` to `False`. If you wish to share your model by pushing to the Hub, set `push_to_hub` to `True` (you must be signed in to Hugging Face to upload your model). >>> from transformers import TrainingArguments >>> training_args = TrainingArguments( output_dir=""detr-resnet-50_finetuned_cppe5"", per_device_train_batch_size=8, num_train_epochs=10, fp16=True, save_steps=200, logging_steps=50, learning_rate=1e-5, weight_decay=1e-4, save_total_limit=2, remove_unused_columns=False, push_to_hub=True, ) Finally, bring everything together, and call [`~transformers.Trainer.train`]: >>> from transformers import Trainer >>> trainer = Trainer( model=model, args=training_args, data_collator=collate_fn, train_dataset=cppe5[""train""], tokenizer=image_processor, ) >>> trainer.train() If you have set `push_to_hub` to `True` in the `training_args`, the training checkpoints are pushed to the Hugging Face Hub. Upon training completion, push the final model to the Hub as well by calling the [`~transformers.Trainer.push_to_hub`] method. >>> trainer.push_to_hub() ## Evaluate Object detection models are commonly evaluated with a set of COCO-style metrics. You can use one of the existing metrics implementations, but here you'll use the one from `torchvision` to evaluate the final model that you pushed to the Hub. To use the `torchvision` evaluator, you'll need to prepare a ground truth COCO dataset. The API to build a COCO dataset requires the data to be stored in a certain format, so you'll need to save images and annotations to disk first. Just like when you prepared your data for training, the annotations from the `cppe5[""test""]` need to be formatted. However, images should stay as they are. The evaluation step requires a bit of work, but it can be split in three major steps. First, prepare the `cppe5[""test""]` set: format the annotations and save the data to disk. 
>>> import json >>> # format annotations the same as for training, no need for data augmentation >>> def val_formatted_anns(image_id, objects): annotations = [] for i in range(0, len(objects[""id""])): new_ann = { ""id"": objects[""id""][i], ""category_id"": objects[""category""][i], ""iscrowd"": 0, ""image_id"": image_id, ""area"": objects[""area""][i], ""bbox"": objects[""bbox""][i], } annotations.append(new_ann) return annotations >>> # Save images and annotations into the files torchvision.datasets.CocoDetection expects >>> def save_cppe5_annotation_file_images(cppe5): output_json = {} path_output_cppe5 = f""{os.getcwd()}/cppe5/"" if not os.path.exists(path_output_cppe5): os.makedirs(path_output_cppe5) path_anno = os.path.join(path_output_cppe5, ""cppe5_ann.json"") categories_json = [{""supercategory"": ""none"", ""id"": id, ""name"": id2label[id]} for id in id2label] output_json[""images""] = [] output_json[""annotations""] = [] for example in cppe5: ann = val_formatted_anns(example[""image_id""], example[""objects""]) output_json[""images""].append( { ""id"": example[""image_id""], ""width"": example[""image""].width, ""height"": example[""image""].height, ""file_name"": f""{example['image_id']}.png"", } ) output_json[""annotations""].extend(ann) output_json[""categories""] = categories_json with open(path_anno, ""w"") as file: json.dump(output_json, file, ensure_ascii=False, indent=4) for im, img_id in zip(cppe5[""image""], cppe5[""image_id""]): path_img = os.path.join(path_output_cppe5, f""{img_id}.png"") im.save(path_img) return path_output_cppe5, path_anno Next, prepare an instance of a `CocoDetection` class that can be used with `cocoevaluator`. >>> import torchvision >>> class CocoDetection(torchvision.datasets.CocoDetection): def __init__(self, img_folder, image_processor, ann_file): super().__init__(img_folder, ann_file) self.image_processor = image_processor def __getitem__(self, idx): # read in PIL image and target in COCO format img, target = super(CocoDetection, self).__getitem__(idx) # preprocess image and target: converting target to DETR format, # resizing + normalization of both image and target) image_id = self.ids[idx] target = {""image_id"": image_id, ""annotations"": target} encoding = self.image_processor(images=img, annotations=target, return_tensors=""pt"") pixel_values = encoding[""pixel_values""].squeeze() # remove batch dimension target = encoding[""labels""][0] # remove batch dimension return {""pixel_values"": pixel_values, ""labels"": target} >>> im_processor = AutoImageProcessor.from_pretrained(""devonho/detr-resnet-50_finetuned_cppe5"") >>> path_output_cppe5, path_anno = save_cppe5_annotation_file_images(cppe5[""test""]) >>> test_ds_coco_format = CocoDetection(path_output_cppe5, im_processor, path_anno) Finally, load the metrics and run the evaluation. 
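Before doing so, it is worth a quick spot-check that the COCO-format dataset was written correctly; a small sketch reusing `test_ds_coco_format` from above (exact shapes and counts depend on your test split):

```py
>>> # one entry per test image; each item is already preprocessed for DETR
>>> len(test_ds_coco_format)
>>> sample = test_ds_coco_format[0]
>>> sample["pixel_values"].shape, sample["labels"].keys()
```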
>>> import evaluate >>> from tqdm import tqdm >>> model = AutoModelForObjectDetection.from_pretrained(""devonho/detr-resnet-50_finetuned_cppe5"") >>> module = evaluate.load(""ybelkada/cocoevaluate"", coco=test_ds_coco_format.coco) >>> val_dataloader = torch.utils.data.DataLoader( test_ds_coco_format, batch_size=8, shuffle=False, num_workers=4, collate_fn=collate_fn ) >>> with torch.no_grad(): for idx, batch in enumerate(tqdm(val_dataloader)): pixel_values = batch[""pixel_values""] pixel_mask = batch[""pixel_mask""] labels = [ {k: v for k, v in t.items()} for t in batch[""labels""] ] # these are in DETR format, resized + normalized # forward pass outputs = model(pixel_values=pixel_values, pixel_mask=pixel_mask) orig_target_sizes = torch.stack([target[""orig_size""] for target in labels], dim=0) results = im_processor.post_process(outputs, orig_target_sizes) # convert outputs of model to COCO api module.add(prediction=results, reference=labels) del batch >>> results = module.compute() >>> print(results) Accumulating evaluation results DONE (t=0.08s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.352 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.681 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.292 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.168 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.208 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.429 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.274 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.484 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.501 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.191 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.323 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.590 These results can be further improved by adjusting the hyperparameters in [`~transformers.TrainingArguments`]. Give it a go! ## Inference Now that you have finetuned a DETR model, evaluated it, and uploaded it to the Hugging Face Hub, you can use it for inference. The simplest way to try out your finetuned model for inference is to use it in a [`Pipeline`]. 
Instantiate a pipeline for object detection with your model, and pass an image to it: >>> from transformers import pipeline >>> import requests >>> url = ""https://i.imgur.com/2lnWoly.jpg"" >>> image = Image.open(requests.get(url, stream=True).raw) >>> obj_detector = pipeline(""object-detection"", model=""devonho/detr-resnet-50_finetuned_cppe5"") >>> obj_detector(image) You can also manually replicate the results of the pipeline if you'd like: >>> image_processor = AutoImageProcessor.from_pretrained(""devonho/detr-resnet-50_finetuned_cppe5"") >>> model = AutoModelForObjectDetection.from_pretrained(""devonho/detr-resnet-50_finetuned_cppe5"") >>> with torch.no_grad(): inputs = image_processor(images=image, return_tensors=""pt"") outputs = model(**inputs) target_sizes = torch.tensor([image.size[::-1]]) results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0] >>> for score, label, box in zip(results[""scores""], results[""labels""], results[""boxes""]): box = [round(i, 2) for i in box.tolist()] print( f""Detected {model.config.id2label[label.item()]} with confidence "" f""{round(score.item(), 3)} at location {box}"" ) Detected Coverall with confidence 0.566 at location [1215.32, 147.38, 4401.81, 3227.08] Detected Mask with confidence 0.584 at location [2449.06, 823.19, 3256.43, 1413.9] Let's plot the result: >>> draw = ImageDraw.Draw(image) >>> for score, label, box in zip(results[""scores""], results[""labels""], results[""boxes""]): box = [round(i, 2) for i in box.tolist()] x, y, x2, y2 = tuple(box) draw.rectangle((x, y, x2, y2), outline=""red"", width=1) draw.text((x, y), model.config.id2label[label.item()], fill=""white"") >>> image " tasks/video_classification.md," # Video classification [[open-in-colab]] Video classification is the task of assigning a label or class to an entire video. Videos are expected to have only one class for each video. Video classification models take a video as input and return a prediction about which class the video belongs to. These models can be used to categorize what a video is all about. A real-world application of video classification is action / activity recognition, which is useful for fitness applications. It is also helpful for vision-impaired individuals, especially when they are commuting. This guide will show you how to: 1. Fine-tune [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) on a subset of the [UCF101](https://www.crcv.ucf.edu/data/UCF101.php) dataset. 2. Use your fine-tuned model for inference. The task illustrated in this tutorial is supported by the following model architectures: [TimeSformer](../model_doc/timesformer), [VideoMAE](../model_doc/videomae), [ViViT](../model_doc/vivit) Before you begin, make sure you have all the necessary libraries installed: ```bash pip install -q pytorchvideo transformers evaluate You will use [PyTorchVideo](https://pytorchvideo.org/) (dubbed `pytorchvideo`) to process and prepare the videos. We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in: >>> from huggingface_hub import notebook_login >>> notebook_login() ## Load UCF101 dataset Start by loading a subset of the [UCF-101 dataset](https://www.crcv.ucf.edu/data/UCF101.php). This will give you a chance to experiment and make sure everything works before spending more time training on the full dataset. 
>>> from huggingface_hub import hf_hub_download >>> hf_dataset_identifier = ""sayakpaul/ucf101-subset"" >>> filename = ""UCF101_subset.tar.gz"" >>> file_path = hf_hub_download(repo_id=hf_dataset_identifier, filename=filename, repo_type=""dataset"") After the subset has been downloaded, you need to extract the compressed archive: >>> import tarfile >>> with tarfile.open(file_path) as t: t.extractall(""."") At a high level, the dataset is organized like so: ```bash UCF101_subset/ train/ BandMarching/ video_1.mp4 video_2.mp4 Archery video_1.mp4 video_2.mp4 val/ BandMarching/ video_1.mp4 video_2.mp4 Archery video_1.mp4 video_2.mp4 test/ BandMarching/ video_1.mp4 video_2.mp4 Archery video_1.mp4 video_2.mp4 The (`sorted`) video paths appear like so: ```bash 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c04.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c06.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g08_c01.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c02.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c06.avi' You will notice that there are video clips belonging to the same group / scene where group is denoted by `g` in the video file paths. `v_ApplyEyeMakeup_g07_c04.avi` and `v_ApplyEyeMakeup_g07_c06.avi`, for example. For the validation and evaluation splits, you wouldn't want to have video clips from the same group / scene to prevent [data leakage](https://www.kaggle.com/code/alexisbcook/data-leakage). The subset that you are using in this tutorial takes this information into account. Next up, you will derive the set of labels present in the dataset. Also, create two dictionaries that'll be helpful when initializing the model: * `label2id`: maps the class names to integers. * `id2label`: maps the integers to class names. >>> class_labels = sorted({str(path).split(""/"")[2] for path in all_video_file_paths}) >>> label2id = {label: i for i, label in enumerate(class_labels)} >>> id2label = {i: label for label, i in label2id.items()} >>> print(f""Unique classes: {list(label2id.keys())}."") # Unique classes: ['ApplyEyeMakeup', 'ApplyLipstick', 'Archery', 'BabyCrawling', 'BalanceBeam', 'BandMarching', 'BaseballPitch', 'Basketball', 'BasketballDunk', 'BenchPress']. There are 10 unique classes. For each class, there are 30 videos in the training set. ## Load a model to fine-tune Instantiate a video classification model from a pretrained checkpoint and its associated image processor. The model's encoder comes with pre-trained parameters, and the classification head is randomly initialized. The image processor will come in handy when writing the preprocessing pipeline for our dataset. 
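One note before loading the model: the label-mapping snippet above, as well as the dataset definitions later on, assume that `dataset_root_path` and `all_video_file_paths` were defined right after extracting the archive. A minimal sketch consistent with the layout shown above (treat the exact paths as an assumption if your setup differs):

```py
>>> import pathlib
>>> dataset_root_path = pathlib.Path("UCF101_subset")
>>> all_video_file_paths = (
...     list(dataset_root_path.glob("train/*/*.avi"))
...     + list(dataset_root_path.glob("val/*/*.avi"))
...     + list(dataset_root_path.glob("test/*/*.avi"))
... )
```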
>>> from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification >>> model_ckpt = ""MCG-NJU/videomae-base"" >>> image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt) >>> model = VideoMAEForVideoClassification.from_pretrained( model_ckpt, label2id=label2id, id2label=id2label, ignore_mismatched_sizes=True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint ) While the model is loading, you might notice the following warning: ```bash Some weights of the model checkpoint at MCG-NJU/videomae-base were not used when initializing VideoMAEForVideoClassification: [, 'decoder.decoder_layers.1.attention.output.dense.bias', 'decoder.decoder_layers.2.attention.attention.key.weight'] - This IS expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of VideoMAEForVideoClassification were not initialized from the model checkpoint at MCG-NJU/videomae-base and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. The warning is telling us we are throwing away some weights (e.g. the weights and bias of the `classifier` layer) and randomly initializing some others (the weights and bias of a new `classifier` layer). This is expected in this case, because we are adding a new head for which we don't have pretrained weights, so the library warns us we should fine-tune this model before using it for inference, which is exactly what we are going to do. **Note** that [this checkpoint](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) leads to better performance on this task as the checkpoint was obtained fine-tuning on a similar downstream task having considerable domain overlap. You can check out [this checkpoint](https://huggingface.co/sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset) which was obtained by fine-tuning `MCG-NJU/videomae-base-finetuned-kinetics`. ## Prepare the datasets for training For preprocessing the videos, you will leverage the [PyTorchVideo library](https://pytorchvideo.org/). Start by importing the dependencies we need. >>> import pytorchvideo.data >>> from pytorchvideo.transforms import ( ApplyTransformToKey, Normalize, RandomShortSideScale, RemoveKey, ShortSideScale, UniformTemporalSubsample, ) >>> from torchvision.transforms import ( Compose, Lambda, RandomCrop, RandomHorizontalFlip, Resize, ) For the training dataset transformations, use a combination of uniform temporal subsampling, pixel normalization, random cropping, and random horizontal flipping. For the validation and evaluation dataset transformations, keep the same transformation chain except for random cropping and horizontal flipping. To learn more about the details of these transformations check out the [official documentation of PyTorchVideo](https://pytorchvideo.org). 
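The key temporal transform here is `UniformTemporalSubsample`, which keeps `num_frames_to_sample` evenly spaced frames from each clip. A tiny standalone sketch on a dummy tensor makes its behavior concrete (shapes follow PyTorchVideo's expected `(C, T, H, W)` layout):

```py
>>> import torch
>>> from pytorchvideo.transforms import UniformTemporalSubsample
>>> dummy = torch.arange(64).float().view(1, 64, 1, 1)  # 1 channel, 64 "frames" of size 1x1
>>> subsampled = UniformTemporalSubsample(16)(dummy)
>>> subsampled.shape  # (1, 16, 1, 1): 16 evenly spaced frames survive
>>> subsampled.flatten()  # the kept values are evenly spaced frame indices between 0 and 63
```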
Use the `image_processor` associated with the pre-trained model to obtain the following information: * Image mean and standard deviation with which the video frame pixels will be normalized. * Spatial resolution to which the video frames will be resized. Start by defining some constants. >>> mean = image_processor.image_mean >>> std = image_processor.image_std >>> if ""shortest_edge"" in image_processor.size: height = width = image_processor.size[""shortest_edge""] >>> else: height = image_processor.size[""height""] width = image_processor.size[""width""] >>> resize_to = (height, width) >>> num_frames_to_sample = model.config.num_frames >>> sample_rate = 4 >>> fps = 30 >>> clip_duration = num_frames_to_sample * sample_rate / fps Now, define the dataset-specific transformations and the datasets respectively. Starting with the training set: >>> train_transform = Compose( [ ApplyTransformToKey( key=""video"", transform=Compose( [ UniformTemporalSubsample(num_frames_to_sample), Lambda(lambda x: x / 255.0), Normalize(mean, std), RandomShortSideScale(min_size=256, max_size=320), RandomCrop(resize_to), RandomHorizontalFlip(p=0.5), ] ), ), ] ) >>> train_dataset = pytorchvideo.data.Ucf101( data_path=os.path.join(dataset_root_path, ""train""), clip_sampler=pytorchvideo.data.make_clip_sampler(""random"", clip_duration), decode_audio=False, transform=train_transform, ) The same sequence of workflow can be applied to the validation and evaluation sets: >>> val_transform = Compose( [ ApplyTransformToKey( key=""video"", transform=Compose( [ UniformTemporalSubsample(num_frames_to_sample), Lambda(lambda x: x / 255.0), Normalize(mean, std), Resize(resize_to), ] ), ), ] ) >>> val_dataset = pytorchvideo.data.Ucf101( data_path=os.path.join(dataset_root_path, ""val""), clip_sampler=pytorchvideo.data.make_clip_sampler(""uniform"", clip_duration), decode_audio=False, transform=val_transform, ) >>> test_dataset = pytorchvideo.data.Ucf101( data_path=os.path.join(dataset_root_path, ""test""), clip_sampler=pytorchvideo.data.make_clip_sampler(""uniform"", clip_duration), decode_audio=False, transform=val_transform, ) **Note**: The above dataset pipelines are taken from the [official PyTorchVideo example](https://pytorchvideo.org/docs/tutorial_classification#dataset). We're using the [`pytorchvideo.data.Ucf101()`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.Ucf101) function because it's tailored for the UCF-101 dataset. Under the hood, it returns a [`pytorchvideo.data.labeled_video_dataset.LabeledVideoDataset`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.LabeledVideoDataset) object. `LabeledVideoDataset` class is the base class for all things video in the PyTorchVideo dataset. So, if you want to use a custom dataset not supported off-the-shelf by PyTorchVideo, you can extend the `LabeledVideoDataset` class accordingly. Refer to the `data` API [documentation to](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html) learn more. Also, if your dataset follows a similar structure (as shown above), then using the `pytorchvideo.data.Ucf101()` should work just fine. You can access the `num_videos` argument to know the number of videos in the dataset. 
>>> print(train_dataset.num_videos, val_dataset.num_videos, test_dataset.num_videos) # (300, 30, 75) ## Visualize the preprocessed video for better debugging >>> import imageio >>> import numpy as np >>> from IPython.display import Image >>> def unnormalize_img(img): """"""Un-normalizes the image pixels."""""" img = (img * std) + mean img = (img * 255).astype(""uint8"") return img.clip(0, 255) >>> def create_gif(video_tensor, filename=""sample.gif""): """"""Prepares a GIF from a video tensor. The video tensor is expected to have the following shape: (num_frames, num_channels, height, width). """""" frames = [] for video_frame in video_tensor: frame_unnormalized = unnormalize_img(video_frame.permute(1, 2, 0).numpy()) frames.append(frame_unnormalized) kargs = {""duration"": 0.25} imageio.mimsave(filename, frames, ""GIF"", **kargs) return filename >>> def display_gif(video_tensor, gif_name=""sample.gif""): """"""Prepares and displays a GIF from a video tensor."""""" video_tensor = video_tensor.permute(1, 0, 2, 3) gif_filename = create_gif(video_tensor, gif_name) return Image(filename=gif_filename) >>> sample_video = next(iter(train_dataset)) >>> video_tensor = sample_video[""video""] >>> display_gif(video_tensor) ## Train the model Leverage [`Trainer`](https://huggingface.co/docs/transformers/main_classes/trainer) from 🤗 Transformers for training the model. To instantiate a `Trainer`, you need to define the training configuration and an evaluation metric. The most important is the [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments), which is a class that contains all the attributes to configure the training. It requires an output folder name, which will be used to save the checkpoints of the model. It also helps sync all the information in the model repository on 🤗 Hub. Most of the training arguments are self-explanatory, but one that is quite important here is `remove_unused_columns=False`. This one will drop any features not used by the model's call function. By default it's `True` because usually it's ideal to drop unused feature columns, making it easier to unpack inputs into the model's call function. But, in this case, you need the unused features ('video' in particular) in order to create `pixel_values` (which is a mandatory key our model expects in its inputs). >>> from transformers import TrainingArguments, Trainer >>> model_name = model_ckpt.split(""/"")[-1] >>> new_model_name = f""{model_name}-finetuned-ucf101-subset"" >>> num_epochs = 4 >>> args = TrainingArguments( new_model_name, remove_unused_columns=False, evaluation_strategy=""epoch"", save_strategy=""epoch"", learning_rate=5e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, warmup_ratio=0.1, logging_steps=10, load_best_model_at_end=True, metric_for_best_model=""accuracy"", push_to_hub=True, max_steps=(train_dataset.num_videos // batch_size) * num_epochs, ) The dataset returned by `pytorchvideo.data.Ucf101()` doesn't implement the `__len__` method. As such, we must define `max_steps` when instantiating `TrainingArguments`. Next, you need to define a function to compute the metrics from the predictions, which will use the `metric` you'll load now. 
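(One detail before moving on to the metric: the `TrainingArguments` above reference a `batch_size` variable, so make sure it is defined beforehand. The value below is an assumption; pick whatever fits your GPU memory.)

```py
>>> batch_size = 8  # assumed value; lower it if you run out of GPU memory
```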
The only preprocessing you have to do is to take the argmax of our predicted logits: import evaluate metric = evaluate.load(""accuracy"") def compute_metrics(eval_pred): predictions = np.argmax(eval_pred.predictions, axis=1) return metric.compute(predictions=predictions, references=eval_pred.label_ids) **A note on evaluation**: In the [VideoMAE paper](https://arxiv.org/abs/2203.12602), the authors use the following evaluation strategy. They evaluate the model on several clips from test videos and apply different crops to those clips and report the aggregate score. However, in the interest of simplicity and brevity, we don't consider that in this tutorial. Also, define a `collate_fn`, which will be used to batch examples together. Each batch consists of 2 keys, namely `pixel_values` and `labels`. >>> def collate_fn(examples): # permute to (num_frames, num_channels, height, width) pixel_values = torch.stack( [example[""video""].permute(1, 0, 2, 3) for example in examples] ) labels = torch.tensor([example[""label""] for example in examples]) return {""pixel_values"": pixel_values, ""labels"": labels} Then you just pass all of this along with the datasets to `Trainer`: >>> trainer = Trainer( model, args, train_dataset=train_dataset, eval_dataset=val_dataset, tokenizer=image_processor, compute_metrics=compute_metrics, data_collator=collate_fn, ) You might wonder why you passed along the `image_processor` as a tokenizer when you preprocessed the data already. This is only to make sure the image processor configuration file (stored as JSON) will also be uploaded to the repo on the Hub. Now fine-tune our model by calling the `train` method: >>> train_results = trainer.train() Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model: >>> trainer.push_to_hub() ## Inference Great, now that you have fine-tuned a model, you can use it for inference! Load a video for inference: >>> sample_test_video = next(iter(test_dataset)) The simplest way to try out your fine-tuned model for inference is to use it in a [`pipeline`](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.VideoClassificationPipeline). Instantiate a `pipeline` for video classification with your model, and pass your video to it: >>> from transformers import pipeline >>> video_cls = pipeline(model=""my_awesome_video_cls_model"") >>> video_cls(""https://huggingface.co/datasets/sayakpaul/ucf101-subset/resolve/main/v_BasketballDunk_g14_c06.avi"") [{'score': 0.9272987842559814, 'label': 'BasketballDunk'}, {'score': 0.017777055501937866, 'label': 'BabyCrawling'}, {'score': 0.01663011871278286, 'label': 'BalanceBeam'}, {'score': 0.009560945443809032, 'label': 'BandMarching'}, {'score': 0.0068979403004050255, 'label': 'BaseballPitch'}] You can also manually replicate the results of the `pipeline` if you'd like. >>> def run_inference(model, video): # (num_frames, num_channels, height, width) perumuted_sample_test_video = video.permute(1, 0, 2, 3) inputs = { ""pixel_values"": perumuted_sample_test_video.unsqueeze(0), ""labels"": torch.tensor( [sample_test_video[""label""]] ), # this can be skipped if you don't have labels available. 
} device = torch.device(""cuda"" if torch.cuda.is_available() else ""cpu"") inputs = {k: v.to(device) for k, v in inputs.items()} model = model.to(device) # forward pass with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits return logits Now, pass your input to the model and return the `logits`: >>> logits = run_inference(trained_model, sample_test_video[""video""]) Decoding the `logits`, we get: >>> predicted_class_idx = logits.argmax(-1).item() >>> print(""Predicted class:"", model.config.id2label[predicted_class_idx]) # Predicted class: BasketballDunk ```" tasks/zero_shot_object_detection.md," # Zero-shot object detection [[open-in-colab]] Traditionally, models used for [object detection](object_detection) require labeled image datasets for training, and are limited to detecting the set of classes from the training data. Zero-shot object detection is supported by the [OWL-ViT](../model_doc/owlvit) model which uses a different approach. OWL-ViT is an open-vocabulary object detector. It means that it can detect objects in images based on free-text queries without the need to fine-tune the model on labeled datasets. OWL-ViT leverages multi-modal representations to perform open-vocabulary detection. It combines [CLIP](../model_doc/clip) with lightweight object classification and localization heads. Open-vocabulary detection is achieved by embedding free-text queries with the text encoder of CLIP and using them as input to the object classification and localization heads. associate images and their corresponding textual descriptions, and ViT processes image patches as inputs. The authors of OWL-ViT first trained CLIP from scratch and then fine-tuned OWL-ViT end to end on standard object detection datasets using a bipartite matching loss. With this approach, the model can detect objects based on textual descriptions without prior training on labeled datasets. In this guide, you will learn how to use OWL-ViT: - to detect objects based on text prompts - for batch object detection - for image-guided object detection Before you begin, make sure you have all the necessary libraries installed: ```bash pip install -q transformers ## Zero-shot object detection pipeline The simplest way to try out inference with OWL-ViT is to use it in a [`pipeline`]. Instantiate a pipeline for zero-shot object detection from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?other=owlvit): thon >>> from transformers import pipeline >>> checkpoint = ""google/owlvit-base-patch32"" >>> detector = pipeline(model=checkpoint, task=""zero-shot-object-detection"") Next, choose an image you'd like to detect objects in. Here we'll use the image of astronaut Eileen Collins that is a part of the [NASA](https://www.nasa.gov/multimedia/imagegallery/index.html) Great Images dataset. >>> import skimage >>> import numpy as np >>> from PIL import Image >>> image = skimage.data.astronaut() >>> image = Image.fromarray(np.uint8(image)).convert(""RGB"") >>> image Pass the image and the candidate object labels to look for to the pipeline. Here we pass the image directly; other suitable options include a local path to an image or an image url. We also pass text descriptions for all items we want to query the image for. 
>>> predictions = detector( image, candidate_labels=[""human face"", ""rocket"", ""nasa badge"", ""star-spangled banner""], ) >>> predictions [{'score': 0.3571370542049408, 'label': 'human face', 'box': {'xmin': 180, 'ymin': 71, 'xmax': 271, 'ymax': 178}}, {'score': 0.28099656105041504, 'label': 'nasa badge', 'box': {'xmin': 129, 'ymin': 348, 'xmax': 206, 'ymax': 427}}, {'score': 0.2110239565372467, 'label': 'rocket', 'box': {'xmin': 350, 'ymin': -1, 'xmax': 468, 'ymax': 288}}, {'score': 0.13790413737297058, 'label': 'star-spangled banner', 'box': {'xmin': 1, 'ymin': 1, 'xmax': 105, 'ymax': 509}}, {'score': 0.11950037628412247, 'label': 'nasa badge', 'box': {'xmin': 277, 'ymin': 338, 'xmax': 327, 'ymax': 380}}, {'score': 0.10649408400058746, 'label': 'rocket', 'box': {'xmin': 358, 'ymin': 64, 'xmax': 424, 'ymax': 280}}] Let's visualize the predictions: >>> from PIL import ImageDraw >>> draw = ImageDraw.Draw(image) >>> for prediction in predictions: box = prediction[""box""] label = prediction[""label""] score = prediction[""score""] xmin, ymin, xmax, ymax = box.values() draw.rectangle((xmin, ymin, xmax, ymax), outline=""red"", width=1) draw.text((xmin, ymin), f""{label}: {round(score,2)}"", fill=""white"") >>> image ## Text-prompted zero-shot object detection by hand Now that you've seen how to use the zero-shot object detection pipeline, let's replicate the same result manually. Start by loading the model and associated processor from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?other=owlvit). Here we'll use the same checkpoint as before: >>> from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection >>> model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint) >>> processor = AutoProcessor.from_pretrained(checkpoint) Let's take a different image to switch things up. >>> import requests >>> url = ""https://unsplash.com/photos/oj0zeY2Ltk4/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MTR8fHBpY25pY3xlbnwwfHx8fDE2Nzc0OTE1NDk&force=true&w=640"" >>> im = Image.open(requests.get(url, stream=True).raw) >>> im Use the processor to prepare the inputs for the model. The processor combines an image processor that prepares the image for the model by resizing and normalizing it, and a [`CLIPTokenizer`] that takes care of the text inputs. >>> text_queries = [""hat"", ""book"", ""sunglasses"", ""camera""] >>> inputs = processor(text=text_queries, images=im, return_tensors=""pt"") Pass the inputs through the model, post-process, and visualize the results. 
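Before the forward pass, it can help to look at what the processor actually produced; a quick sketch (the exact shapes depend on the checkpoint's preprocessing configuration and are given only as an illustration):

```py
>>> # one row per text query for the token tensors, plus the resized, normalized image
>>> {k: v.shape for k, v in inputs.items()}
>>> # e.g. input_ids and attention_mask of shape (4, 16), pixel_values of shape (1, 3, 768, 768)
```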
Since the image processor resized images before feeding them to the model, you need to use the [`~OwlViTImageProcessor.post_process_object_detection`] method to make sure the predicted bounding boxes have the correct coordinates relative to the original image: >>> import torch >>> with torch.no_grad(): outputs = model(**inputs) target_sizes = torch.tensor([im.size[::-1]]) results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)[0] >>> draw = ImageDraw.Draw(im) >>> scores = results[""scores""].tolist() >>> labels = results[""labels""].tolist() >>> boxes = results[""boxes""].tolist() >>> for box, score, label in zip(boxes, scores, labels): xmin, ymin, xmax, ymax = box draw.rectangle((xmin, ymin, xmax, ymax), outline=""red"", width=1) draw.text((xmin, ymin), f""{text_queries[label]}: {round(score,2)}"", fill=""white"") >>> im ## Batch processing You can pass multiple sets of images and text queries to search for different (or same) objects in several images. Let's use both an astronaut image and the beach image together. For batch processing, you should pass text queries as a nested list to the processor and images as lists of PIL images, PyTorch tensors, or NumPy arrays. >>> images = [image, im] >>> text_queries = [ [""human face"", ""rocket"", ""nasa badge"", ""star-spangled banner""], [""hat"", ""book"", ""sunglasses"", ""camera""], ] >>> inputs = processor(text=text_queries, images=images, return_tensors=""pt"") Previously for post-processing you passed the single image's size as a tensor, but you can also pass a tuple, or, in case of several images, a list of tuples. Let's create predictions for the two examples, and visualize the second one (`image_idx = 1`). >>> with torch.no_grad(): outputs = model(**inputs) target_sizes = [x.size[::-1] for x in images] results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes) >>> image_idx = 1 >>> draw = ImageDraw.Draw(images[image_idx]) >>> scores = results[image_idx][""scores""].tolist() >>> labels = results[image_idx][""labels""].tolist() >>> boxes = results[image_idx][""boxes""].tolist() >>> for box, score, label in zip(boxes, scores, labels): xmin, ymin, xmax, ymax = box draw.rectangle((xmin, ymin, xmax, ymax), outline=""red"", width=1) draw.text((xmin, ymin), f""{text_queries[image_idx][label]}: {round(score,2)}"", fill=""white"") >>> images[image_idx] ## Image-guided object detection In addition to zero-shot object detection with text queries, OWL-ViT offers image-guided object detection. This means you can use an image query to find similar objects in the target image. Unlike text queries, only a single example image is allowed. 
Let's take an image with two cats on a couch as a target image, and an image of a single cat as a query: >>> url = ""http://images.cocodataset.org/val2017/000000039769.jpg"" >>> image_target = Image.open(requests.get(url, stream=True).raw) >>> query_url = ""http://images.cocodataset.org/val2017/000000524280.jpg"" >>> query_image = Image.open(requests.get(query_url, stream=True).raw) Let's take a quick look at the images: >>> import matplotlib.pyplot as plt >>> fig, ax = plt.subplots(1, 2) >>> ax[0].imshow(image_target) >>> ax[1].imshow(query_image) In the preprocessing step, instead of text queries, you now need to use `query_images`: >>> inputs = processor(images=image_target, query_images=query_image, return_tensors=""pt"") For predictions, instead of passing the inputs to the model, pass them to [`~OwlViTForObjectDetection.image_guided_detection`]. Draw the predictions as before, except that image-guided detection returns no labels, so iterate over the boxes and scores only. >>> with torch.no_grad(): outputs = model.image_guided_detection(**inputs) target_sizes = torch.tensor([image_target.size[::-1]]) results = processor.post_process_image_guided_detection(outputs=outputs, target_sizes=target_sizes)[0] >>> draw = ImageDraw.Draw(image_target) >>> scores = results[""scores""].tolist() >>> boxes = results[""boxes""].tolist() >>> for box, score in zip(boxes, scores): xmin, ymin, xmax, ymax = box draw.rectangle((xmin, ymin, xmax, ymax), outline=""white"", width=4) >>> image_target If you'd like to interactively try out inference with OWL-ViT, check out this demo: " tasks/language_modeling.md," # Causal language modeling [[open-in-colab]] There are two types of language modeling, causal and masked. This guide illustrates causal language modeling. Causal language models are frequently used for text generation. You can use these models for creative applications like choosing your own text adventure or an intelligent coding assistant like Copilot or CodeParrot. Causal language modeling predicts the next token in a sequence of tokens, and the model can only attend to tokens on the left. This means the model cannot see future tokens. GPT-2 is an example of a causal language model. This guide will show you how to: 1. Finetune [DistilGPT2](https://huggingface.co/distilgpt2) on the [r/askscience](https://www.reddit.com/r/askscience/) subset of the [ELI5](https://huggingface.co/datasets/eli5) dataset. 2. Use your finetuned model for inference. You can finetune other architectures for causal language modeling following the same steps in this guide. 
Choose one of the following architectures: [BART](../model_doc/bart), [BERT](../model_doc/bert), [Bert Generation](../model_doc/bert-generation), [BigBird](../model_doc/big_bird), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [BioGpt](../model_doc/biogpt), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CodeLlama](../model_doc/code_llama), [CodeGen](../model_doc/codegen), [CPM-Ant](../model_doc/cpmant), [CTRL](../model_doc/ctrl), [Data2VecText](../model_doc/data2vec-text), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [Falcon](../model_doc/falcon), [Fuyu](../model_doc/fuyu), [GIT](../model_doc/git), [GPT-Sw3](../model_doc/gpt-sw3), [OpenAI GPT-2](../model_doc/gpt2), [GPTBigCode](../model_doc/gpt_bigcode), [GPT Neo](../model_doc/gpt_neo), [GPT NeoX](../model_doc/gpt_neox), [GPT NeoX Japanese](../model_doc/gpt_neox_japanese), [GPT-J](../model_doc/gptj), [LLaMA](../model_doc/llama), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [Mistral](../model_doc/mistral), [MPT](../model_doc/mpt), [MusicGen](../model_doc/musicgen), [MVP](../model_doc/mvp), [OpenLlama](../model_doc/open-llama), [OpenAI GPT](../model_doc/openai-gpt), [OPT](../model_doc/opt), [Pegasus](../model_doc/pegasus), [Persimmon](../model_doc/persimmon), [Phi](../model_doc/phi), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [RWKV](../model_doc/rwkv), [Speech2Text2](../model_doc/speech_to_text_2), [Transformer-XL](../model_doc/transfo-xl), [TrOCR](../model_doc/trocr), [Whisper](../model_doc/whisper), [XGLM](../model_doc/xglm), [XLM](../model_doc/xlm), [XLM-ProphetNet](../model_doc/xlm-prophetnet), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod) Before you begin, make sure you have all the necessary libraries installed: ```bash pip install transformers datasets evaluate We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in: >>> from huggingface_hub import notebook_login >>> notebook_login() ## Load ELI5 dataset Start by loading a smaller subset of the r/askscience subset of the ELI5 dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset. >>> from datasets import load_dataset >>> eli5 = load_dataset(""eli5"", split=""train_asks[:5000]"") Split the dataset's `train_asks` split into a train and test set with the [`~datasets.Dataset.train_test_split`] method: >>> eli5 = eli5.train_test_split(test_size=0.2) Then take a look at an example: >>> eli5[""train""][0] {'answers': {'a_id': ['c3d1aib', 'c3d4lya'], 'score': [6, 3], 'text': [""The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. 
That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up."", ""Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?""]}, 'answers_urls': {'url': []}, 'document': '', 'q_id': 'nyxfp', 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?', 'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']}, 'subreddit': 'askscience', 'title': 'Few questions about this space walk photograph.', 'title_urls': {'url': []}} While this may look like a lot, you're only really interested in the `text` field. What's cool about language modeling tasks is you don't need labels (also known as an unsupervised task) because the next word *is* the label. ## Preprocess The next step is to load a DistilGPT2 tokenizer to process the `text` subfield: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""distilgpt2"") You'll notice from the example above, the `text` field is actually nested inside `answers`. This means you'll need to extract the `text` subfield from its nested structure with the [`flatten`](https://huggingface.co/docs/datasets/process#flatten) method: >>> eli5 = eli5.flatten() >>> eli5[""train""][0] {'answers.a_id': ['c3d1aib', 'c3d4lya'], 'answers.score': [6, 3], 'answers.text': [""The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up."", ""Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?""], 'answers_urls.url': [], 'document': '', 'q_id': 'nyxfp', 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?', 'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'], 'subreddit': 'askscience', 'title': 'Few questions about this space walk photograph.', 'title_urls.url': []} Each subfield is now a separate column as indicated by the `answers` prefix, and the `text` field is a list now. Instead of tokenizing each sentence separately, convert the list to a string so you can jointly tokenize them. 
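For a single example, that boils down to joining the strings in `answers.text` into one piece of text (a quick illustration, reusing the flattened example above):

```py
>>> " ".join(eli5["train"][0]["answers.text"])[:100]  # first 100 characters of the joined answers
```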
Here is a first preprocessing function to join the list of strings for each example and tokenize the result: >>> def preprocess_function(examples): return tokenizer(["" "".join(x) for x in examples[""answers.text""]]) To apply this preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once, and increasing the number of processes with `num_proc`. Remove any columns you don't need: >>> tokenized_eli5 = eli5.map( preprocess_function, batched=True, num_proc=4, remove_columns=eli5[""train""].column_names, ) This dataset contains the token sequences, but some of these are longer than the maximum input length for the model. You can now use a second preprocessing function to - concatenate all the sequences - split the concatenated sequences into shorter chunks defined by `block_size`, which should be both shorter than the maximum input length and short enough for your GPU RAM. >>> block_size = 128 >>> def group_texts(examples): # Concatenate all texts. concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} total_length = len(concatenated_examples[list(examples.keys())[0]]) # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can # customize this part to your needs. if total_length >= block_size: total_length = (total_length // block_size) * block_size # Split by chunks of block_size. result = { k: [t[i : i + block_size] for i in range(0, total_length, block_size)] for k, t in concatenated_examples.items() } result[""labels""] = result[""input_ids""].copy() return result Apply the `group_texts` function over the entire dataset: >>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4) Now create a batch of examples using [`DataCollatorForLanguageModeling`]. It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length. Use the end-of-sequence token as the padding token and set `mlm=False`. This will use the inputs as labels shifted to the right by one element: >>> from transformers import DataCollatorForLanguageModeling >>> tokenizer.pad_token = tokenizer.eos_token >>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) Use the end-of-sequence token as the padding token and set `mlm=False`. This will use the inputs as labels shifted to the right by one element: >>> from transformers import DataCollatorForLanguageModeling >>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False, return_tensors=""tf"") ## Train If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the [basic tutorial](../training#train-with-pytorch-trainer)! You're ready to start training your model now! Load DistilGPT2 with [`AutoModelForCausalLM`]: >>> from transformers import AutoModelForCausalLM, TrainingArguments, Trainer >>> model = AutoModelForCausalLM.from_pretrained(""distilgpt2"") At this point, only three steps remain: 1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). 2. Pass the training arguments to [`Trainer`] along with the model, datasets, and data collator. 3. 
Call [`~Trainer.train`] to finetune your model. >>> training_args = TrainingArguments( output_dir=""my_awesome_eli5_clm-model"", evaluation_strategy=""epoch"", learning_rate=2e-5, weight_decay=0.01, push_to_hub=True, ) >>> trainer = Trainer( model=model, args=training_args, train_dataset=lm_dataset[""train""], eval_dataset=lm_dataset[""test""], data_collator=data_collator, ) >>> trainer.train() Once training is completed, use the [`~transformers.Trainer.evaluate`] method to evaluate your model and get its perplexity: >>> import math >>> eval_results = trainer.evaluate() >>> print(f""Perplexity: {math.exp(eval_results['eval_loss']):.2f}"") Perplexity: 49.61 Then share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model: >>> trainer.push_to_hub() If you aren't familiar with finetuning a model with Keras, take a look at the [basic tutorial](../training#train-a-tensorflow-model-with-keras)! To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters: >>> from transformers import create_optimizer, AdamWeightDecay >>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01) Then you can load DistilGPT2 with [`TFAutoModelForCausalLM`]: >>> from transformers import TFAutoModelForCausalLM >>> model = TFAutoModelForCausalLM.from_pretrained(""distilgpt2"") Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]: >>> tf_train_set = model.prepare_tf_dataset( lm_dataset[""train""], shuffle=True, batch_size=16, collate_fn=data_collator, ) >>> tf_test_set = model.prepare_tf_dataset( lm_dataset[""test""], shuffle=False, batch_size=16, collate_fn=data_collator, ) Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to: >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) # No loss argument! This can be done by specifying where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]: >>> from transformers.keras_callbacks import PushToHubCallback >>> callback = PushToHubCallback( output_dir=""my_awesome_eli5_clm-model"", tokenizer=tokenizer, ) Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callback to finetune the model: >>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback]) Once training is completed, your model is automatically uploaded to the Hub so everyone can use it! For a more in-depth example of how to finetune a model for causal language modeling, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). ## Inference Great, now that you've finetuned a model, you can use it for inference! Come up with a prompt you'd like to generate text from: >>> prompt = ""Somatic hypermutation allows the immune system to"" The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. 
Instantiate a `pipeline` for text generation with your model, and pass your text to it: >>> from transformers import pipeline >>> generator = pipeline(""text-generation"", model=""my_awesome_eli5_clm-model"") >>> generator(prompt) [{'generated_text': ""Somatic hypermutation allows the immune system to be able to effectively reverse the damage caused by an infection.\n\n\nThe damage caused by an infection is caused by the immune system's ability to perform its own self-correcting tasks.""}] Tokenize the text and return the `input_ids` as PyTorch tensors: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""my_awesome_eli5_clm-model"") >>> inputs = tokenizer(prompt, return_tensors=""pt"").input_ids Use the [`~transformers.generation_utils.GenerationMixin.generate`] method to generate text. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text generation strategies](../generation_strategies) page. >>> from transformers import AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained(""my_awesome_eli5_clm-model"") >>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95) Decode the generated token ids back into text: >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) [""Somatic hypermutation allows the immune system to react to drugs with the ability to adapt to a different environmental situation. In other words, a system of 'hypermutation' can help the immune system to adapt to a different environmental situation or in some cases even a single life. In contrast, researchers at the University of Massachusetts-Boston have found that 'hypermutation' is much stronger in mice than in humans but can be found in humans, and that it's not completely unknown to the immune system. A study on how the immune system""] Tokenize the text and return the `input_ids` as TensorFlow tensors: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""my_awesome_eli5_clm-model"") >>> inputs = tokenizer(prompt, return_tensors=""tf"").input_ids Use the [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] method to create the summarization. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text generation strategies](../generation_strategies) page. >>> from transformers import TFAutoModelForCausalLM >>> model = TFAutoModelForCausalLM.from_pretrained(""my_awesome_eli5_clm-model"") >>> outputs = model.generate(input_ids=inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95) Decode the generated token ids back into text: >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Somatic hypermutation allows the immune system to detect the presence of other viruses as they become more prevalent. Therefore, researchers have identified a high proportion of human viruses. The proportion of virus-associated viruses in our study increases with age. Therefore, we propose a simple algorithm to detect the presence of these new viruses in our samples as a sign of improved immunity. 
A first study based on this algorithm, which will be published in Science on Friday, aims to show that this finding could translate into the development of a better vaccine that is more effective for'] " tasks/masked_language_modeling.md," # Masked language modeling [[open-in-colab]] Masked language modeling predicts a masked token in a sequence, and the model can attend to tokens bidirectionally. This means the model has full access to the tokens on the left and right. Masked language modeling is great for tasks that require a good contextual understanding of an entire sequence. BERT is an example of a masked language model. This guide will show you how to: 1. Finetune [DistilRoBERTa](https://huggingface.co/distilroberta-base) on the [r/askscience](https://www.reddit.com/r/askscience/) subset of the [ELI5](https://huggingface.co/datasets/eli5) dataset. 2. Use your finetuned model for inference. You can finetune other architectures for masked language modeling following the same steps in this guide. Choose one of the following architectures: [ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [CamemBERT](../model_doc/camembert), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ESM](../model_doc/esm), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MRA](../model_doc/mra), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [Perceiver](../model_doc/perceiver), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [TAPAS](../model_doc/tapas), [Wav2Vec2](../model_doc/wav2vec2), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso) Before you begin, make sure you have all the necessary libraries installed: ```bash pip install transformers datasets evaluate We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in: >>> from huggingface_hub import notebook_login >>> notebook_login() ## Load ELI5 dataset Start by loading a smaller subset of the r/askscience subset of the ELI5 dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset. 
>>> from datasets import load_dataset >>> eli5 = load_dataset(""eli5"", split=""train_asks[:5000]"") Split the dataset's `train_asks` split into a train and test set with the [`~datasets.Dataset.train_test_split`] method: >>> eli5 = eli5.train_test_split(test_size=0.2) Then take a look at an example: >>> eli5[""train""][0] {'answers': {'a_id': ['c3d1aib', 'c3d4lya'], 'score': [6, 3], 'text': [""The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up."", ""Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?""]}, 'answers_urls': {'url': []}, 'document': '', 'q_id': 'nyxfp', 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?', 'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']}, 'subreddit': 'askscience', 'title': 'Few questions about this space walk photograph.', 'title_urls': {'url': []}} While this may look like a lot, you're only really interested in the `text` field. What's cool about language modeling tasks is you don't need labels (also known as an unsupervised task) because the next word *is* the label. ## Preprocess For masked language modeling, the next step is to load a DistilRoBERTa tokenizer to process the `text` subfield: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""distilroberta-base"") You'll notice from the example above, the `text` field is actually nested inside `answers`. This means you'll need to e xtract the `text` subfield from its nested structure with the [`flatten`](https://huggingface.co/docs/datasets/process#flatten) method: >>> eli5 = eli5.flatten() >>> eli5[""train""][0] {'answers.a_id': ['c3d1aib', 'c3d4lya'], 'answers.score': [6, 3], 'answers.text': [""The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up."", ""Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?""], 'answers_urls.url': [], 'document': '', 'q_id': 'nyxfp', 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? 
Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?', 'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'], 'subreddit': 'askscience', 'title': 'Few questions about this space walk photograph.', 'title_urls.url': []} Each subfield is now a separate column as indicated by the `answers` prefix, and the `text` field is a list now. Instead of tokenizing each sentence separately, convert the list to a string so you can jointly tokenize them. Here is a first preprocessing function to join the list of strings for each example and tokenize the result: >>> def preprocess_function(examples): return tokenizer(["" "".join(x) for x in examples[""answers.text""]]) To apply this preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once, and increasing the number of processes with `num_proc`. Remove any columns you don't need: >>> tokenized_eli5 = eli5.map( preprocess_function, batched=True, num_proc=4, remove_columns=eli5[""train""].column_names, ) This dataset contains the token sequences, but some of these are longer than the maximum input length for the model. You can now use a second preprocessing function to - concatenate all the sequences - split the concatenated sequences into shorter chunks defined by `block_size`, which should be both shorter than the maximum input length and short enough for your GPU RAM. >>> block_size = 128 >>> def group_texts(examples): # Concatenate all texts. concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} total_length = len(concatenated_examples[list(examples.keys())[0]]) # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can # customize this part to your needs. if total_length >= block_size: total_length = (total_length // block_size) * block_size # Split by chunks of block_size. result = { k: [t[i : i + block_size] for i in range(0, total_length, block_size)] for k, t in concatenated_examples.items() } return result Apply the `group_texts` function over the entire dataset: >>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4) Now create a batch of examples using [`DataCollatorForLanguageModeling`]. It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length. Use the end-of-sequence token as the padding token and specify `mlm_probability` to randomly mask tokens each time you iterate over the data: >>> from transformers import DataCollatorForLanguageModeling >>> tokenizer.pad_token = tokenizer.eos_token >>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15) Use the end-of-sequence token as the padding token and specify `mlm_probability` to randomly mask tokens each time you iterate over the data: >>> from transformers import DataCollatorForLanguageModeling >>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15, return_tensors=""tf"") ## Train If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)! 
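Before kicking off a training run, it can help to pull one batch through the data collator and confirm that tokens are actually being masked. This optional sanity check is a minimal sketch, assuming the PyTorch `data_collator` and the `lm_dataset` created above (the two-example slice is arbitrary):

```py
>>> samples = [lm_dataset["train"][i] for i in range(2)]
>>> batch = data_collator(samples)
>>> # Positions with a label other than -100 are the ones that were masked;
>>> # roughly 15% of tokens should be selected with mlm_probability=0.15
>>> int((batch["labels"] != -100).numpy().sum())
```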
You're ready to start training your model now! Load DistilRoBERTa with [`AutoModelForMaskedLM`]: >>> from transformers import AutoModelForMaskedLM >>> model = AutoModelForMaskedLM.from_pretrained(""distilroberta-base"") At this point, only three steps remain: 1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). 2. Pass the training arguments to [`Trainer`] along with the model, datasets, and data collator. 3. Call [`~Trainer.train`] to finetune your model. >>> training_args = TrainingArguments( output_dir=""my_awesome_eli5_mlm_model"", evaluation_strategy=""epoch"", learning_rate=2e-5, num_train_epochs=3, weight_decay=0.01, push_to_hub=True, ) >>> trainer = Trainer( model=model, args=training_args, train_dataset=lm_dataset[""train""], eval_dataset=lm_dataset[""test""], data_collator=data_collator, ) >>> trainer.train() Once training is completed, use the [`~transformers.Trainer.evaluate`] method to evaluate your model and get its perplexity: >>> import math >>> eval_results = trainer.evaluate() >>> print(f""Perplexity: {math.exp(eval_results['eval_loss']):.2f}"") Perplexity: 8.76 Then share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model: >>> trainer.push_to_hub() If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)! To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters: >>> from transformers import create_optimizer, AdamWeightDecay >>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01) Then you can load DistilRoBERTa with [`TFAutoModelForMaskedLM`]: >>> from transformers import TFAutoModelForMaskedLM >>> model = TFAutoModelForMaskedLM.from_pretrained(""distilroberta-base"") Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]: >>> tf_train_set = model.prepare_tf_dataset( lm_dataset[""train""], shuffle=True, batch_size=16, collate_fn=data_collator, ) >>> tf_test_set = model.prepare_tf_dataset( lm_dataset[""test""], shuffle=False, batch_size=16, collate_fn=data_collator, ) Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to: >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) # No loss argument! This can be done by specifying where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]: >>> from transformers.keras_callbacks import PushToHubCallback >>> callback = PushToHubCallback( output_dir=""my_awesome_eli5_mlm_model"", tokenizer=tokenizer, ) Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callback to finetune the model: >>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback]) Once training is completed, your model is automatically uploaded to the Hub so everyone can use it! 
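With Keras, you can also get a rough perplexity estimate for the finetuned model by exponentiating the evaluation loss, mirroring the PyTorch example above. This is a minimal sketch; depending on how the model was compiled, `evaluate` may return a single loss value or a list, so the snippet handles both:

```py
>>> import math

>>> eval_loss = model.evaluate(tf_test_set, verbose=0)
>>> eval_loss = eval_loss[0] if isinstance(eval_loss, list) else eval_loss
>>> print(f"Perplexity: {math.exp(eval_loss):.2f}")
```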
For a more in-depth example of how to finetune a model for masked language modeling, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).

## Inference

Great, now that you've finetuned a model, you can use it for inference! Come up with some text you'd like the model to fill in the blank with, and use the special `<mask>` token to indicate the blank:

```py
>>> text = "The Milky Way is a <mask> galaxy."
```

The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for fill-mask with your model, and pass your text to it. If you like, you can use the `top_k` parameter to specify how many predictions to return:

```py
>>> from transformers import pipeline

>>> mask_filler = pipeline("fill-mask", "stevhliu/my_awesome_eli5_mlm_model")
>>> mask_filler(text, top_k=3)
[{'score': 0.5150994658470154,
  'token': 21300,
  'token_str': ' spiral',
  'sequence': 'The Milky Way is a spiral galaxy.'},
 {'score': 0.07087188959121704,
  'token': 2232,
  'token_str': ' massive',
  'sequence': 'The Milky Way is a massive galaxy.'},
 {'score': 0.06434620916843414,
  'token': 650,
  'token_str': ' small',
  'sequence': 'The Milky Way is a small galaxy.'}]
```

Tokenize the text and return the `input_ids` as PyTorch tensors. You'll also need to specify the position of the `<mask>` token:

```py
>>> import torch
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> inputs = tokenizer(text, return_tensors="pt")
>>> mask_token_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
```

Pass your inputs to the model and return the `logits` of the masked token:

```py
>>> from transformers import AutoModelForMaskedLM

>>> model = AutoModelForMaskedLM.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> logits = model(**inputs).logits
>>> mask_token_logits = logits[0, mask_token_index, :]
```

Then return the three masked tokens with the highest probability and print them out:

```py
>>> top_3_tokens = torch.topk(mask_token_logits, 3, dim=1).indices[0].tolist()

>>> for token in top_3_tokens:
...     print(text.replace(tokenizer.mask_token, tokenizer.decode([token])))
The Milky Way is a spiral galaxy.
The Milky Way is a massive galaxy.
The Milky Way is a small galaxy.
```

Tokenize the text and return the `input_ids` as TensorFlow tensors. You'll also need to specify the position of the `<mask>` token:

```py
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> inputs = tokenizer(text, return_tensors="tf")
>>> mask_token_index = tf.where(inputs["input_ids"] == tokenizer.mask_token_id)[0, 1]
```

Pass your inputs to the model and return the `logits` of the masked token:

```py
>>> from transformers import TFAutoModelForMaskedLM

>>> model = TFAutoModelForMaskedLM.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> logits = model(**inputs).logits
>>> mask_token_logits = logits[0, mask_token_index, :]
```

Then return the three masked tokens with the highest probability and print them out:

```py
>>> top_3_tokens = tf.math.top_k(mask_token_logits, 3).indices.numpy()

>>> for token in top_3_tokens:
...     print(text.replace(tokenizer.mask_token, tokenizer.decode([token])))
The Milky Way is a spiral galaxy.
The Milky Way is a massive galaxy.
The Milky Way is a small galaxy.
```
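The raw logits are unnormalized scores. If you want probabilities comparable to the `pipeline` output above, apply a softmax over the vocabulary before reading off the top tokens. A minimal sketch continuing from the TensorFlow variables defined above:

```py
>>> probs = tf.nn.softmax(mask_token_logits, axis=-1)
>>> top_3 = tf.math.top_k(probs, 3)

>>> for score, token in zip(top_3.values.numpy(), top_3.indices.numpy()):
...     print(f"{tokenizer.decode([int(token)]).strip()}: {score:.4f}")
```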
" tasks/zero_shot_image_classification.md," # Zero-shot image classification [[open-in-colab]] Zero-shot image classification is a task that involves classifying images into different categories using a model that was not explicitly trained on data containing labeled examples from those specific categories. Traditionally, image classification requires training a model on a specific set of labeled images, and this model learns to ""map"" certain image features to labels. When there's a need to use such model for a classification task that introduces a new set of labels, fine-tuning is required to ""recalibrate"" the model. In contrast, zero-shot or open vocabulary image classification models are typically multi-modal models that have been trained on a large dataset of images and associated descriptions. These models learn aligned vision-language representations that can be used for many downstream tasks including zero-shot image classification. This is a more flexible approach to image classification that allows models to generalize to new and unseen categories without the need for additional training data and enables users to query images with free-form text descriptions of their target objects . In this guide you'll learn how to: * create a zero-shot image classification pipeline * run zero-shot image classification inference by hand Before you begin, make sure you have all the necessary libraries installed: ```bash pip install -q transformers ## Zero-shot image classification pipeline The simplest way to try out inference with a model supporting zero-shot image classification is to use the corresponding [`pipeline`]. Instantiate a pipeline from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads): thon >>> from transformers import pipeline >>> checkpoint = ""openai/clip-vit-large-patch14"" >>> detector = pipeline(model=checkpoint, task=""zero-shot-image-classification"") Next, choose an image you'd like to classify. >>> from PIL import Image >>> import requests >>> url = ""https://unsplash.com/photos/g8oS8-82DxI/download?ixid=MnwxMjA3fDB8MXx0b3BpY3x8SnBnNktpZGwtSGt8fHx8fDJ8fDE2NzgxMDYwODc&force=true&w=640"" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image Pass the image and the candidate object labels to the pipeline. Here we pass the image directly; other suitable options include a local path to an image or an image url. The candidate labels can be simple words like in this example, or more descriptive. >>> predictions = detector(image, candidate_labels=[""fox"", ""bear"", ""seagull"", ""owl""]) >>> predictions [{'score': 0.9996670484542847, 'label': 'owl'}, {'score': 0.000199399160919711, 'label': 'seagull'}, {'score': 7.392891711788252e-05, 'label': 'fox'}, {'score': 5.96074532950297e-05, 'label': 'bear'}] ## Zero-shot image classification by hand Now that you've seen how to use the zero-shot image classification pipeline, let's take a look how you can run zero-shot image classification manually. Start by loading the model and associated processor from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads). 
Here we'll use the same checkpoint as before: >>> from transformers import AutoProcessor, AutoModelForZeroShotImageClassification >>> model = AutoModelForZeroShotImageClassification.from_pretrained(checkpoint) >>> processor = AutoProcessor.from_pretrained(checkpoint) Let's take a different image to switch things up. >>> from PIL import Image >>> import requests >>> url = ""https://unsplash.com/photos/xBRQfR2bqNI/download?ixid=MnwxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNjc4Mzg4ODEx&force=true&w=640"" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image Use the processor to prepare the inputs for the model. The processor combines an image processor that prepares the image for the model by resizing and normalizing it, and a tokenizer that takes care of the text inputs. >>> candidate_labels = [""tree"", ""car"", ""bike"", ""cat""] >>> inputs = processor(images=image, text=candidate_labels, return_tensors=""pt"", padding=True) Pass the inputs through the model, and post-process the results: >>> import torch >>> with torch.no_grad(): outputs = model(**inputs) >>> logits = outputs.logits_per_image[0] >>> probs = logits.softmax(dim=-1).numpy() >>> scores = probs.tolist() >>> result = [ {""score"": score, ""label"": candidate_label} for score, candidate_label in sorted(zip(probs, candidate_labels), key=lambda x: -x[0]) ] >>> result [{'score': 0.998572, 'label': 'car'}, {'score': 0.0010570387, 'label': 'bike'}, {'score': 0.0003393686, 'label': 'tree'}, {'score': 3.1572064e-05, 'label': 'cat'}] ```" tasks/translation.md," # Translation [[open-in-colab]] Translation converts a sequence of text from one language to another. It is one of several tasks you can formulate as a sequence-to-sequence problem, a powerful framework for returning some output from an input, like translation or summarization. Translation systems are commonly used for translation between different language texts, but it can also be used for speech or some combination in between like text-to-speech or speech-to-text. This guide will show you how to: 1. Finetune [T5](https://huggingface.co/t5-small) on the English-French subset of the [OPUS Books](https://huggingface.co/datasets/opus_books) dataset to translate English text to French. 2. Use your finetuned model for inference. 
The task illustrated in this tutorial is supported by the following model architectures: [BART](../model_doc/bart), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [Encoder decoder](../model_doc/encoder-decoder), [FairSeq Machine-Translation](../model_doc/fsmt), [GPTSAN-japanese](../model_doc/gptsan-japanese), [LED](../model_doc/led), [LongT5](../model_doc/longt5), [M2M100](../model_doc/m2m_100), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [NLLB](../model_doc/nllb), [NLLB-MOE](../model_doc/nllb-moe), [Pegasus](../model_doc/pegasus), [PEGASUS-X](../model_doc/pegasus_x), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [SeamlessM4T](../model_doc/seamless_m4t), [SwitchTransformers](../model_doc/switch_transformers), [T5](../model_doc/t5), [UMT5](../model_doc/umt5), [XLM-ProphetNet](../model_doc/xlm-prophetnet) Before you begin, make sure you have all the necessary libraries installed: ```bash pip install transformers datasets evaluate sacrebleu We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login: >>> from huggingface_hub import notebook_login >>> notebook_login() ## Load OPUS Books dataset Start by loading the English-French subset of the [OPUS Books](https://huggingface.co/datasets/opus_books) dataset from the 🤗 Datasets library: >>> from datasets import load_dataset >>> books = load_dataset(""opus_books"", ""en-fr"") Split the dataset into a train and test set with the [`~datasets.Dataset.train_test_split`] method: >>> books = books[""train""].train_test_split(test_size=0.2) Then take a look at an example: >>> books[""train""][0] {'id': '90560', 'translation': {'en': 'But this lofty plateau measured only a few fathoms, and soon we reentered Our Element.', 'fr': 'Mais ce plateau élevé ne mesurait que quelques toises, et bientôt nous fûmes rentrés dans notre élément.'}} `translation`: an English and French translation of the text. ## Preprocess The next step is to load a T5 tokenizer to process the English-French language pairs: >>> from transformers import AutoTokenizer >>> checkpoint = ""t5-small"" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) The preprocessing function you want to create needs to: 1. Prefix the input with a prompt so T5 knows this is a translation task. Some models capable of multiple NLP tasks require prompting for specific tasks. 2. Tokenize the input (English) and target (French) separately because you can't tokenize French text with a tokenizer pretrained on an English vocabulary. 3. Truncate sequences to be no longer than the maximum length set by the `max_length` parameter. >>> source_lang = ""en"" >>> target_lang = ""fr"" >>> prefix = ""translate English to French: "" >>> def preprocess_function(examples): inputs = [prefix + example[source_lang] for example in examples[""translation""]] targets = [example[target_lang] for example in examples[""translation""]] model_inputs = tokenizer(inputs, text_target=targets, max_length=128, truncation=True) return model_inputs To apply the preprocessing function over the entire dataset, use 🤗 Datasets [`~datasets.Dataset.map`] method. 
You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once: >>> tokenized_books = books.map(preprocess_function, batched=True) Now create a batch of examples using [`DataCollatorForSeq2Seq`]. It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length. >>> from transformers import DataCollatorForSeq2Seq >>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint) >>> from transformers import DataCollatorForSeq2Seq >>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors=""tf"") ## Evaluate Including a metric during training is often helpful for evaluating your model's performance. You can quickly load a evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [SacreBLEU](https://huggingface.co/spaces/evaluate-metric/sacrebleu) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric): >>> import evaluate >>> metric = evaluate.load(""sacrebleu"") Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the SacreBLEU score: >>> import numpy as np >>> def postprocess_text(preds, labels): preds = [pred.strip() for pred in preds] labels = [[label.strip()] for label in labels] return preds, labels >>> def compute_metrics(eval_preds): preds, labels = eval_preds if isinstance(preds, tuple): preds = preds[0] decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) labels = np.where(labels != -100, labels, tokenizer.pad_token_id) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels) result = metric.compute(predictions=decoded_preds, references=decoded_labels) result = {""bleu"": result[""score""]} prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds] result[""gen_len""] = np.mean(prediction_lens) result = {k: round(v, 4) for k, v in result.items()} return result Your `compute_metrics` function is ready to go now, and you'll return to it when you setup your training. ## Train If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)! You're ready to start training your model now! Load T5 with [`AutoModelForSeq2SeqLM`]: >>> from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer >>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) At this point, only three steps remain: 1. Define your training hyperparameters in [`Seq2SeqTrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the SacreBLEU metric and save the training checkpoint. 2. Pass the training arguments to [`Seq2SeqTrainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function. 3. Call [`~Trainer.train`] to finetune your model. 
>>> training_args = Seq2SeqTrainingArguments( output_dir=""my_awesome_opus_books_model"", evaluation_strategy=""epoch"", learning_rate=2e-5, per_device_train_batch_size=16, per_device_eval_batch_size=16, weight_decay=0.01, save_total_limit=3, num_train_epochs=2, predict_with_generate=True, fp16=True, push_to_hub=True, ) >>> trainer = Seq2SeqTrainer( model=model, args=training_args, train_dataset=tokenized_books[""train""], eval_dataset=tokenized_books[""test""], tokenizer=tokenizer, data_collator=data_collator, compute_metrics=compute_metrics, ) >>> trainer.train() ` Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model: >>> trainer.push_to_hub() If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)! To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters: >>> from transformers import AdamWeightDecay >>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01) Then you can load T5 with [`TFAutoModelForSeq2SeqLM`]: >>> from transformers import TFAutoModelForSeq2SeqLM >>> model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint) Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]: >>> tf_train_set = model.prepare_tf_dataset( tokenized_books[""train""], shuffle=True, batch_size=16, collate_fn=data_collator, ) >>> tf_test_set = model.prepare_tf_dataset( tokenized_books[""test""], shuffle=False, batch_size=16, collate_fn=data_collator, ) Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to: >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) # No loss argument! The last two things to setup before you start training is to compute the SacreBLEU metric from the predictions, and provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks). Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]: >>> from transformers.keras_callbacks import KerasMetricCallback >>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set) Specify where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]: >>> from transformers.keras_callbacks import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( output_dir=""my_awesome_opus_books_model"", tokenizer=tokenizer, ) Then bundle your callbacks together: >>> callbacks = [metric_callback, push_to_hub_callback] Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model: >>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks) Once training is completed, your model is automatically uploaded to the Hub so everyone can use it! 
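If you'd like a quick qualitative check before running a full evaluation, you can translate a single held-out example with the TensorFlow model trained above and score it with the SacreBLEU metric loaded earlier. This is a minimal sketch, reusing `books`, `prefix`, `tokenizer`, `model`, and `metric` from the previous sections:

```py
>>> sample = books["test"][0]["translation"]
>>> input_ids = tokenizer(prefix + sample["en"], return_tensors="tf").input_ids
>>> prediction = tokenizer.decode(model.generate(input_ids, max_new_tokens=128)[0], skip_special_tokens=True)
>>> metric.compute(predictions=[prediction], references=[[sample["fr"]]])
```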
For a more in-depth example of how to finetune a model for translation, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb). ## Inference Great, now that you've finetuned a model, you can use it for inference! Come up with some text you'd like to translate to another language. For T5, you need to prefix your input depending on the task you're working on. For translation from English to French, you should prefix your input as shown below: >>> text = ""translate English to French: Legumes share resources with nitrogen-fixing bacteria."" The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for translation with your model, and pass your text to it: >>> from transformers import pipeline >>> translator = pipeline(""translation"", model=""my_awesome_opus_books_model"") >>> translator(text) [{'translation_text': 'Legumes partagent des ressources avec des bactéries azotantes.'}] You can also manually replicate the results of the `pipeline` if you'd like: Tokenize the text and return the `input_ids` as PyTorch tensors: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""my_awesome_opus_books_model"") >>> inputs = tokenizer(text, return_tensors=""pt"").input_ids Use the [`~transformers.generation_utils.GenerationMixin.generate`] method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API. >>> from transformers import AutoModelForSeq2SeqLM >>> model = AutoModelForSeq2SeqLM.from_pretrained(""my_awesome_opus_books_model"") >>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95) Decode the generated token ids back into text: >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'Les lignées partagent des ressources avec des bactéries enfixant l'azote.' Tokenize the text and return the `input_ids` as TensorFlow tensors: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""my_awesome_opus_books_model"") >>> inputs = tokenizer(text, return_tensors=""tf"").input_ids Use the [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API. >>> from transformers import TFAutoModelForSeq2SeqLM >>> model = TFAutoModelForSeq2SeqLM.from_pretrained(""my_awesome_opus_books_model"") >>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95) Decode the generated token ids back into text: >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'Les lugumes partagent les ressources avec des bactéries fixatrices d'azote.' " tasks/summarization.md," # Summarization [[open-in-colab]] Summarization creates a shorter version of a document or an article that captures all the important information. Along with translation, it is another example of a task that can be formulated as a sequence-to-sequence task. Summarization can be: - Extractive: extract the most relevant information from a document. 
- Abstractive: generate new text that captures the most relevant information. This guide will show you how to: 1. Finetune [T5](https://huggingface.co/t5-small) on the California state bill subset of the [BillSum](https://huggingface.co/datasets/billsum) dataset for abstractive summarization. 2. Use your finetuned model for inference. The task illustrated in this tutorial is supported by the following model architectures: [BART](../model_doc/bart), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [Encoder decoder](../model_doc/encoder-decoder), [FairSeq Machine-Translation](../model_doc/fsmt), [GPTSAN-japanese](../model_doc/gptsan-japanese), [LED](../model_doc/led), [LongT5](../model_doc/longt5), [M2M100](../model_doc/m2m_100), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [NLLB](../model_doc/nllb), [NLLB-MOE](../model_doc/nllb-moe), [Pegasus](../model_doc/pegasus), [PEGASUS-X](../model_doc/pegasus_x), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [SeamlessM4T](../model_doc/seamless_m4t), [SwitchTransformers](../model_doc/switch_transformers), [T5](../model_doc/t5), [UMT5](../model_doc/umt5), [XLM-ProphetNet](../model_doc/xlm-prophetnet) Before you begin, make sure you have all the necessary libraries installed: ```bash pip install transformers datasets evaluate rouge_score We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login: >>> from huggingface_hub import notebook_login >>> notebook_login() ## Load BillSum dataset Start by loading the smaller California state bill subset of the BillSum dataset from the 🤗 Datasets library: >>> from datasets import load_dataset >>> billsum = load_dataset(""billsum"", split=""ca_test"") Split the dataset into a train and test set with the [`~datasets.Dataset.train_test_split`] method: >>> billsum = billsum.train_test_split(test_size=0.2) Then take a look at an example: >>> billsum[""train""][0] {'summary': 'Existing law authorizes state agencies to enter into contracts for the acquisition of goods or services upon approval by the Department of General Services. Existing law sets forth various requirements and prohibitions for those contracts, including, but not limited to, a prohibition on entering into contracts for the acquisition of goods or services of $100,000 or more with a contractor that discriminates between spouses and domestic partners or same-sex and different-sex couples in the provision of benefits. Existing law provides that a contract entered into in violation of those requirements and prohibitions is void and authorizes the state or any person acting on behalf of the state to bring a civil action seeking a determination that a contract is in violation and therefore void. Under existing law, a willful violation of those requirements and prohibitions is a misdemeanor.\nThis bill would also prohibit a state agency from entering into contracts for the acquisition of goods or services of $100,000 or more with a contractor that discriminates between employees on the basis of gender identity in the provision of benefits, as specified. 
By expanding the scope of a crime, this bill would impose a state-mandated local program.\nThe California Constitution requires the state to reimburse local agencies and school districts for certain costs mandated by the state. Statutory provisions establish procedures for making that reimbursement.\nThis bill would provide that no reimbursement is required by this act for a specified reason.', 'text': 'The people of the State of California do enact as follows:\n\n\nSECTION 1.\nSection 10295.35 is added to the Public Contract Code, to read:\n10295.35.\n(a) (1) Notwithstanding any other law, a state agency shall not enter into any contract for the acquisition of goods or services in the amount of one hundred thousand dollars ($100,000) or more with a contractor that, in the provision of benefits, discriminates between employees on the basis of an employee’s or dependent’s actual or perceived gender identity, including, but not limited to, the employee’s or dependent’s identification as transgender.\n(2) For purposes of this section, “contract” includes contracts with a cumulative amount of one hundred thousand dollars ($100,000) or more per contractor in each fiscal year.\n(3) For purposes of this section, an employee health plan is discriminatory if the plan is not consistent with Section 1365.5 of the Health and Safety Code and Section 10140 of the Insurance Code.\n(4) The requirements of this section shall apply only to those portions of a contractor’s operations that occur under any of the following conditions:\n(A) Within the state.\n(B) On real property outside the state if the property is owned by the state or if the state has a right to occupy the property, and if the contractor’s presence at that location is connected to a contract with the state.\n(C) Elsewhere in the United States where work related to a state contract is being performed.\n(b) Contractors shall treat as confidential, to the maximum extent allowed by law or by the requirement of the contractor’s insurance provider, any request by an employee or applicant for employment benefits or any documentation of eligibility for benefits submitted by an employee or applicant for employment.\n(c) After taking all reasonable measures to find a contractor that complies with this section, as determined by the state agency, the requirements of this section may be waived under any of the following circumstances:\n(1) There is only one prospective contractor willing to enter into a specific contract with the state agency.\n(2) The contract is necessary to respond to an emergency, as determined by the state agency, that endangers the public health, welfare, or safety, or the contract is necessary for the provision of essential services, and no entity that complies with the requirements of this section capable of responding to the emergency is immediately available.\n(3) The requirements of this section violate, or are inconsistent with, the terms or conditions of a grant, subvention, or agreement, if the agency has made a good faith attempt to change the terms or conditions of any grant, subvention, or agreement to authorize application of this section.\n(4) The contractor is providing wholesale or bulk water, power, or natural gas, the conveyance or transmission of the same, or ancillary services, as required for ensuring reliable services in accordance with good utility practice, if the purchase of the same cannot practically be accomplished through the standard competitive bidding procedures and the contractor is not providing 
direct retail services to end users.\n(d) (1) A contractor shall not be deemed to discriminate in the provision of benefits if the contractor, in providing the benefits, pays the actual costs incurred in obtaining the benefit.\n(2) If a contractor is unable to provide a certain benefit, despite taking reasonable measures to do so, the contractor shall not be deemed to discriminate in the provision of benefits.\n(e) (1) Every contract subject to this chapter shall contain a statement by which the contractor certifies that the contractor is in compliance with this section.\n(2) The department or other contracting agency shall enforce this section pursuant to its existing enforcement powers.\n(3) (A) If a contractor falsely certifies that it is in compliance with this section, the contract with that contractor shall be subject to Article 9 (commencing with Section 10420), unless, within a time period specified by the department or other contracting agency, the contractor provides to the department or agency proof that it has complied, or is in the process of complying, with this section.\n(B) The application of the remedies or penalties contained in Article 9 (commencing with Section 10420) to a contract subject to this chapter shall not preclude the application of any existing remedies otherwise available to the department or other contracting agency under its existing enforcement powers.\n(f) Nothing in this section is intended to regulate the contracting practices of any local jurisdiction.\n(g) This section shall be construed so as not to conflict with applicable federal laws, rules, or regulations. In the event that a court or agency of competent jurisdiction holds that federal law, rule, or regulation invalidates any clause, sentence, paragraph, or section of this code or the application thereof to any person or circumstances, it is the intent of the state that the court or agency sever that clause, sentence, paragraph, or section so that the remainder of this section shall remain in effect.\nSEC. 2.\nSection 10295.35 of the Public Contract Code shall not be construed to create any new enforcement authority or responsibility in the Department of General Services or any other contracting agency.\nSEC. 3.\nNo reimbursement is required by this act pursuant to Section 6 of Article XIII\u2009B of the California Constitution because the only costs that may be incurred by a local agency or school district will be incurred because this act creates a new crime or infraction, eliminates a crime or infraction, or changes the penalty for a crime or infraction, within the meaning of Section 17556 of the Government Code, or changes the definition of a crime within the meaning of Section 6 of Article XIII\u2009B of the California Constitution.', 'title': 'An act to add Section 10295.35 to the Public Contract Code, relating to public contracts.'} There are two fields that you'll want to use: - `text`: the text of the bill which'll be the input to the model. - `summary`: a condensed version of `text` which'll be the model target. ## Preprocess The next step is to load a T5 tokenizer to process `text` and `summary`: >>> from transformers import AutoTokenizer >>> checkpoint = ""t5-small"" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) The preprocessing function you want to create needs to: 1. Prefix the input with a prompt so T5 knows this is a summarization task. Some models capable of multiple NLP tasks require prompting for specific tasks. 2. 
Use the keyword `text_target` argument when tokenizing labels. 3. Truncate sequences to be no longer than the maximum length set by the `max_length` parameter. >>> prefix = ""summarize: "" >>> def preprocess_function(examples): inputs = [prefix + doc for doc in examples[""text""]] model_inputs = tokenizer(inputs, max_length=1024, truncation=True) labels = tokenizer(text_target=examples[""summary""], max_length=128, truncation=True) model_inputs[""labels""] = labels[""input_ids""] return model_inputs To apply the preprocessing function over the entire dataset, use 🤗 Datasets [`~datasets.Dataset.map`] method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once: >>> tokenized_billsum = billsum.map(preprocess_function, batched=True) Now create a batch of examples using [`DataCollatorForSeq2Seq`]. It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length. >>> from transformers import DataCollatorForSeq2Seq >>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint) >>> from transformers import DataCollatorForSeq2Seq >>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors=""tf"") ## Evaluate Including a metric during training is often helpful for evaluating your model's performance. You can quickly load a evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric): >>> import evaluate >>> rouge = evaluate.load(""rouge"") Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the ROUGE metric: >>> import numpy as np >>> def compute_metrics(eval_pred): predictions, labels = eval_pred decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True) labels = np.where(labels != -100, labels, tokenizer.pad_token_id) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) result = rouge.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True) prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions] result[""gen_len""] = np.mean(prediction_lens) return {k: round(v, 4) for k, v in result.items()} Your `compute_metrics` function is ready to go now, and you'll return to it when you setup your training. ## Train If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)! You're ready to start training your model now! Load T5 with [`AutoModelForSeq2SeqLM`]: >>> from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer >>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) At this point, only three steps remain: 1. Define your training hyperparameters in [`Seq2SeqTrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). 
At the end of each epoch, the [`Trainer`] will evaluate the ROUGE metric and save the training checkpoint. 2. Pass the training arguments to [`Seq2SeqTrainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function. 3. Call [`~Trainer.train`] to finetune your model. >>> training_args = Seq2SeqTrainingArguments( output_dir=""my_awesome_billsum_model"", evaluation_strategy=""epoch"", learning_rate=2e-5, per_device_train_batch_size=16, per_device_eval_batch_size=16, weight_decay=0.01, save_total_limit=3, num_train_epochs=4, predict_with_generate=True, fp16=True, push_to_hub=True, ) >>> trainer = Seq2SeqTrainer( model=model, args=training_args, train_dataset=tokenized_billsum[""train""], eval_dataset=tokenized_billsum[""test""], tokenizer=tokenizer, data_collator=data_collator, compute_metrics=compute_metrics, ) >>> trainer.train() Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model: >>> trainer.push_to_hub() If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)! To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters: >>> from transformers import create_optimizer, AdamWeightDecay >>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01) Then you can load T5 with [`TFAutoModelForSeq2SeqLM`]: >>> from transformers import TFAutoModelForSeq2SeqLM >>> model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint) Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]: >>> tf_train_set = model.prepare_tf_dataset( tokenized_billsum[""train""], shuffle=True, batch_size=16, collate_fn=data_collator, ) >>> tf_test_set = model.prepare_tf_dataset( tokenized_billsum[""test""], shuffle=False, batch_size=16, collate_fn=data_collator, ) Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to: >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) # No loss argument! The last two things to setup before you start training is to compute the ROUGE score from the predictions, and provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks). Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]: >>> from transformers.keras_callbacks import KerasMetricCallback >>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set) Specify where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]: >>> from transformers.keras_callbacks import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( output_dir=""my_awesome_billsum_model"", tokenizer=tokenizer, ) Then bundle your callbacks together: >>> callbacks = [metric_callback, push_to_hub_callback] Finally, you're ready to start training your model! 
Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model: >>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks) Once training is completed, your model is automatically uploaded to the Hub so everyone can use it! For a more in-depth example of how to finetune a model for summarization, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb). ## Inference Great, now that you've finetuned a model, you can use it for inference! Come up with some text you'd like to summarize. For T5, you need to prefix your input depending on the task you're working on. For summarization you should prefix your input as shown below: >>> text = ""summarize: The Inflation Reduction Act lowers prescription drug costs, health care costs, and energy costs. It's the most aggressive action on tackling the climate crisis in American history, which will lift up American workers and create good-paying, union jobs across the country. It'll lower the deficit and ask the ultra-wealthy and corporations to pay their fair share. And no one making under $400,000 per year will pay a penny more in taxes."" The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for summarization with your model, and pass your text to it: >>> from transformers import pipeline >>> summarizer = pipeline(""summarization"", model=""stevhliu/my_awesome_billsum_model"") >>> summarizer(text) [{""summary_text"": ""The Inflation Reduction Act lowers prescription drug costs, health care costs, and energy costs. It's the most aggressive action on tackling the climate crisis in American history, which will lift up American workers and create good-paying, union jobs across the country.""}] You can also manually replicate the results of the `pipeline` if you'd like: Tokenize the text and return the `input_ids` as PyTorch tensors: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""stevhliu/my_awesome_billsum_model"") >>> inputs = tokenizer(text, return_tensors=""pt"").input_ids Use the [`~transformers.generation_utils.GenerationMixin.generate`] method to create the summarization. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API. >>> from transformers import AutoModelForSeq2SeqLM >>> model = AutoModelForSeq2SeqLM.from_pretrained(""stevhliu/my_awesome_billsum_model"") >>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=False) Decode the generated token ids back into text: >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'the inflation reduction act lowers prescription drug costs, health care costs, and energy costs. it's the most aggressive action on tackling the climate crisis in american history. it will ask the ultra-wealthy and corporations to pay their fair share.' 
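Greedy decoding (`do_sample=False`) is fast but can produce flat or repetitive summaries. If you want to experiment, beam search is a common alternative; the settings below are illustrative rather than tuned for this model:

```py
>>> outputs = model.generate(inputs, max_new_tokens=100, num_beams=4, length_penalty=2.0, early_stopping=True)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
```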
Tokenize the text and return the `input_ids` as TensorFlow tensors: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""stevhliu/my_awesome_billsum_model"") >>> inputs = tokenizer(text, return_tensors=""tf"").input_ids Use the [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] method to create the summarization. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API. >>> from transformers import TFAutoModelForSeq2SeqLM >>> model = TFAutoModelForSeq2SeqLM.from_pretrained(""stevhliu/my_awesome_billsum_model"") >>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=False) Decode the generated token ids back into text: >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'the inflation reduction act lowers prescription drug costs, health care costs, and energy costs. it's the most aggressive action on tackling the climate crisis in american history. it will ask the ultra-wealthy and corporations to pay their fair share.' " tasks/knowledge_distillation_for_image_classification.md," # Knowledge Distillation for Computer Vision [[open-in-colab]] Knowledge distillation is a technique used to transfer knowledge from a larger, more complex model (teacher) to a smaller, simpler model (student). To distill knowledge from one model to another, we take a pre-trained teacher model trained on a certain task (image classification for this case) and randomly initialize a student model to be trained on image classification. Next, we train the student model to minimize the difference between it's outputs and the teacher's outputs, thus making it mimic the behavior. It was first introduced in [Distilling the Knowledge in a Neural Network by Hinton et al](https://arxiv.org/abs/1503.02531). In this guide, we will do task-specific knowledge distillation. We will use the [beans dataset](https://huggingface.co/datasets/beans) for this. This guide demonstrates how you can distill a [fine-tuned ViT model](https://huggingface.co/merve/vit-mobilenet-beans-224) (teacher model) to a [MobileNet](https://huggingface.co/google/mobilenet_v2_1.4_224) (student model) using the [Trainer API](https://huggingface.co/docs/transformers/en/main_classes/trainer#trainer) of 🤗 Transformers. Let's install the libraries needed for distillation and evaluating the process. ```bash pip install transformers datasets accelerate tensorboard evaluate --upgrade In this example, we are using the `merve/beans-vit-224` model as teacher model. It's an image classification model, based on `google/vit-base-patch16-224-in21k` fine-tuned on beans dataset. We will distill this model to a randomly initialized MobileNetV2. We will now load the dataset. thon from datasets import load_dataset dataset = load_dataset(""beans"") We can use an image processor from either of the models, as in this case they return the same output with same resolution. We will use the `map()` method of `dataset` to apply the preprocessing to every split of the dataset. 
```python
from transformers import AutoImageProcessor

teacher_processor = AutoImageProcessor.from_pretrained("merve/beans-vit-224")

def process(examples):
    processed_inputs = teacher_processor(examples["image"])
    return processed_inputs

processed_datasets = dataset.map(process, batched=True)
```

Essentially, we want the student model (a randomly initialized MobileNet) to mimic the teacher model (the fine-tuned vision transformer). To achieve this, we first get the logits from the teacher and the student. Then, we divide each of them by the parameter `temperature`, which controls the importance of each soft target. A parameter called `lambda` weighs the importance of the distillation loss. In this example, we will use `temperature=5` and `lambda=0.5`. We will use the Kullback-Leibler divergence loss to compute the divergence between the student and the teacher. Given two distributions P and Q, KL divergence measures how much extra information is needed to represent P using Q. If the two are identical, their KL divergence is zero, as no extra information is needed to explain P from Q. Thus, in the context of knowledge distillation, KL divergence is useful.

```python
from transformers import TrainingArguments, Trainer
import torch
import torch.nn as nn
import torch.nn.functional as F


class ImageDistilTrainer(Trainer):
    def __init__(self, teacher_model=None, student_model=None, temperature=None, lambda_param=None, *args, **kwargs):
        super().__init__(model=student_model, *args, **kwargs)
        self.teacher = teacher_model
        self.student = student_model
        self.loss_function = nn.KLDivLoss(reduction="batchmean")
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.teacher.to(device)
        self.teacher.eval()
        self.temperature = temperature
        self.lambda_param = lambda_param

    def compute_loss(self, student, inputs, return_outputs=False):
        student_output = self.student(**inputs)

        with torch.no_grad():
            teacher_output = self.teacher(**inputs)

        # Compute soft targets for teacher and student
        soft_teacher = F.softmax(teacher_output.logits / self.temperature, dim=-1)
        soft_student = F.log_softmax(student_output.logits / self.temperature, dim=-1)

        # Compute the distillation loss
        distillation_loss = self.loss_function(soft_student, soft_teacher) * (self.temperature ** 2)

        # Compute the true label loss
        student_target_loss = student_output.loss

        # Calculate the final loss
        loss = (1. - self.lambda_param) * student_target_loss + self.lambda_param * distillation_loss
        return (loss, student_output) if return_outputs else loss
```

We will now log in to the Hugging Face Hub so we can push our model to the Hub through the `Trainer`.

```python
from huggingface_hub import notebook_login

notebook_login()
```

Let's set the `TrainingArguments`, the teacher model and the student model.

```python
from transformers import AutoModelForImageClassification, MobileNetV2Config, MobileNetV2ForImageClassification

repo_name = "my-awesome-model"  # name of the Hub repository to push the distilled model to

training_args = TrainingArguments(
    output_dir=repo_name,
    num_train_epochs=30,
    fp16=True,
    logging_dir=f"{repo_name}/logs",
    logging_strategy="epoch",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    report_to="tensorboard",
    push_to_hub=True,
    hub_strategy="every_save",
    hub_model_id=repo_name,
)

num_labels = len(processed_datasets["train"].features["labels"].names)

# initialize models
teacher_model = AutoModelForImageClassification.from_pretrained(
    "merve/beans-vit-224",
    num_labels=num_labels,
    ignore_mismatched_sizes=True
)

# training MobileNetV2 from scratch
student_config = MobileNetV2Config()
student_config.num_labels = num_labels
student_model = MobileNetV2ForImageClassification(student_config)
```

We can use the `compute_metrics` function to evaluate our model on the test set. This function will be used during the training process to compute the `accuracy` of our model.

```python
import evaluate
import numpy as np

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    acc = accuracy.compute(references=labels, predictions=np.argmax(predictions, axis=1))
    return {"accuracy": acc["accuracy"]}
```

Let's initialize the `Trainer` with the training arguments we defined. We will also initialize our data collator.

```python
from transformers import DefaultDataCollator

data_collator = DefaultDataCollator()
trainer = ImageDistilTrainer(
    student_model=student_model,
    teacher_model=teacher_model,
    args=training_args,
    train_dataset=processed_datasets["train"],
    eval_dataset=processed_datasets["validation"],
    data_collator=data_collator,
    tokenizer=teacher_processor,
    compute_metrics=compute_metrics,
    temperature=5,
    lambda_param=0.5
)
```

We can now train our model.

```python
trainer.train()
```

We can evaluate the model on the test set.

```python
trainer.evaluate(processed_datasets["test"])
```

On the test set, our model reaches 72 percent accuracy. As a sanity check on the efficiency of distillation, we also trained MobileNet on the beans dataset from scratch with the same hyperparameters and observed 63 percent accuracy on the test set. We invite the readers to try different pre-trained teacher models, student architectures, distillation parameters and report their findings. The training logs and checkpoints for the distilled model can be found in [this repository](https://huggingface.co/merve/vit-mobilenet-beans-224), and MobileNetV2 trained from scratch can be found in this [repository](https://huggingface.co/merve/resnet-mobilenet-beans-5). " tasks/idefics.md," # Image tasks with IDEFICS

[[open-in-colab]]

While individual tasks can be tackled by fine-tuning specialized models, an alternative approach that has recently emerged and gained popularity is to use large models for a diverse set of tasks without fine-tuning. For instance, large language models can handle such NLP tasks as summarization, translation, classification, and more. This approach is no longer limited to a single modality, such as text, and in this guide, we will illustrate how you can solve image-text tasks with a large multimodal model called IDEFICS.
[IDEFICS](../model_doc/idefics) is an open-access vision and language model based on [Flamingo](https://huggingface.co/papers/2204.14198), a state-of-the-art visual language model initially developed by DeepMind. The model accepts arbitrary sequences of image and text inputs and generates coherent text as output. It can answer questions about images, describe visual content, create stories grounded in multiple images, and so on.

IDEFICS comes in two variants - [80 billion parameters](https://huggingface.co/HuggingFaceM4/idefics-80b) and [9 billion parameters](https://huggingface.co/HuggingFaceM4/idefics-9b), both of which are available on the 🤗 Hub. For each variant, you can also find fine-tuned instructed versions of the model adapted for conversational use cases.

This model is exceptionally versatile and can be used for a wide range of image and multimodal tasks. However, being a large model means it requires significant computational resources and infrastructure. It is up to you to decide whether this approach suits your use case better than fine-tuning specialized models for each individual task.

In this guide, you'll learn how to:
- [Load IDEFICS](#loading-the-model) and [load the quantized version of the model](#loading-the-quantized-version-of-the-model)
- Use IDEFICS for:
  - [Image captioning](#image-captioning)
  - [Prompted image captioning](#prompted-image-captioning)
  - [Few-shot prompting](#few-shot-prompting)
  - [Visual question answering](#visual-question-answering)
  - [Image classification](#image-classification)
  - [Image-guided text generation](#image-guided-text-generation)
- [Run inference in batch mode](#running-inference-in-batch-mode)
- [Run IDEFICS instruct for conversational use](#idefics-instruct-for-conversational-use)

Before you begin, make sure you have all the necessary libraries installed.

```bash
pip install -q bitsandbytes sentencepiece accelerate transformers
```

To run the following examples with a non-quantized version of the model checkpoint you will need at least 20GB of GPU memory.

## Loading the model

Let's start by loading the model's 9 billion parameter checkpoint:

```python
>>> checkpoint = "HuggingFaceM4/idefics-9b"
```

Just like for other Transformers models, you need to load a processor and the model itself from the checkpoint. The IDEFICS processor wraps a [`LlamaTokenizer`] and the IDEFICS image processor into a single processor to take care of preparing text and image inputs for the model.

```python
>>> import torch

>>> from transformers import IdeficsForVisionText2Text, AutoProcessor

>>> processor = AutoProcessor.from_pretrained(checkpoint)

>>> model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, device_map="auto")
```

Setting `device_map` to `"auto"` will automatically determine how to load and store the model weights in the most optimized manner given the existing devices.

### Quantized model

If high-memory GPU availability is an issue, you can load the quantized version of the model. To load the model and the processor in 4bit precision, pass a `BitsAndBytesConfig` to the `from_pretrained` method and the model will be compressed on the fly while loading.
>>> import torch >>> from transformers import IdeficsForVisionText2Text, AutoProcessor, BitsAndBytesConfig >>> quantization_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16, ) >>> processor = AutoProcessor.from_pretrained(checkpoint) >>> model = IdeficsForVisionText2Text.from_pretrained( checkpoint, quantization_config=quantization_config, device_map=""auto"" ) Now that you have the model loaded in one of the suggested ways, let's move on to exploring tasks that you can use IDEFICS for. ## Image captioning Image captioning is the task of predicting a caption for a given image. A common application is to aid visually impaired people navigate through different situations, for instance, explore image content online. To illustrate the task, get an image to be captioned, e.g.: Photo by [Hendo Wang](https://unsplash.com/@hendoo). IDEFICS accepts text and image prompts. However, to caption an image, you do not have to provide a text prompt to the model, only the preprocessed input image. Without a text prompt, the model will start generating text from the BOS (beginning-of-sequence) token thus creating a caption. As image input to the model, you can use either an image object (`PIL.Image`) or a url from which the image can be retrieved. >>> prompt = [ ""https://images.unsplash.com/photo-1583160247711-2191776b4b91?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3542&q=80"", ] >>> inputs = processor(prompt, return_tensors=""pt"").to(""cuda"") >>> bad_words_ids = processor.tokenizer(["""", """"], add_special_tokens=False).input_ids >>> generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> print(generated_text[0]) A puppy in a flower bed It is a good idea to include the `bad_words_ids` in the call to `generate` to avoid errors arising when increasing the `max_new_tokens`: the model will want to generate a new `` or `` token when there is no image being generated by the model. You can set it on-the-fly as in this guide, or store in the `GenerationConfig` as described in the [Text generation strategies](../generation_strategies) guide. ## Prompted image captioning You can extend image captioning by providing a text prompt, which the model will continue given the image. Let's take another image to illustrate: Photo by [Denys Nevozhai](https://unsplash.com/@dnevozhai). Textual and image prompts can be passed to the model's processor as a single list to create appropriate inputs. >>> prompt = [ ""https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80"", ""This is an image of "", ] >>> inputs = processor(prompt, return_tensors=""pt"").to(""cuda"") >>> bad_words_ids = processor.tokenizer(["""", """"], add_special_tokens=False).input_ids >>> generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> print(generated_text[0]) This is an image of the Eiffel Tower in Paris, France. ## Few-shot prompting While IDEFICS demonstrates great zero-shot results, your task may require a certain format of the caption, or come with other restrictions or requirements that increase task's complexity. Few-shot prompting can be used to enable in-context learning. 
By providing examples in the prompt, you can steer the model to generate results that mimic the format of given examples. Let's use the previous image of the Eiffel Tower as an example for the model and build a prompt that demonstrates to the model that in addition to learning what the object in an image is, we would also like to get some interesting information about it. Then, let's see, if we can get the same response format for an image of the Statue of Liberty: Photo by [Juan Mayobre](https://unsplash.com/@jmayobres). >>> prompt = [""User:"", ""https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80"", ""Describe this image.\nAssistant: An image of the Eiffel Tower at night. Fun fact: the Eiffel Tower is the same height as an 81-storey building.\n"", ""User:"", ""https://images.unsplash.com/photo-1524099163253-32b7f0256868?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3387&q=80"", ""Describe this image.\nAssistant:"" ] >>> inputs = processor(prompt, return_tensors=""pt"").to(""cuda"") >>> bad_words_ids = processor.tokenizer(["""", """"], add_special_tokens=False).input_ids >>> generated_ids = model.generate(**inputs, max_new_tokens=30, bad_words_ids=bad_words_ids) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> print(generated_text[0]) User: Describe this image. Assistant: An image of the Eiffel Tower at night. Fun fact: the Eiffel Tower is the same height as an 81-storey building. User: Describe this image. Assistant: An image of the Statue of Liberty. Fun fact: the Statue of Liberty is 151 feet tall. Notice that just from a single example (i.e., 1-shot) the model has learned how to perform the task. For more complex tasks, feel free to experiment with a larger number of examples (e.g., 3-shot, 5-shot, etc.). ## Visual question answering Visual Question Answering (VQA) is the task of answering open-ended questions based on an image. Similar to image captioning it can be used in accessibility applications, but also in education (reasoning about visual materials), customer service (questions about products based on images), and image retrieval. Let's get a new image for this task: Photo by [Jarritos Mexican Soda](https://unsplash.com/@jarritos). You can steer the model from image captioning to visual question answering by prompting it with appropriate instructions: >>> prompt = [ ""Instruction: Provide an answer to the question. Use the image to answer.\n"", ""https://images.unsplash.com/photo-1623944889288-cd147dbb517c?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80"", ""Question: Where are these people and what's the weather like? Answer:"" ] >>> inputs = processor(prompt, return_tensors=""pt"").to(""cuda"") >>> bad_words_ids = processor.tokenizer(["""", """"], add_special_tokens=False).input_ids >>> generated_ids = model.generate(**inputs, max_new_tokens=20, bad_words_ids=bad_words_ids) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> print(generated_text[0]) Instruction: Provide an answer to the question. Use the image to answer. Question: Where are these people and what's the weather like? Answer: They're in a park in New York City, and it's a beautiful day. 
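Note that IDEFICS echoes the prompt back as part of its output. If you only need the model's answer, a small post-processing helper like the one below can strip everything up to the final `Answer:` marker. This helper is just an illustrative sketch, not part of the Transformers API, and it assumes your prompt ends with an `Answer:` cue as in the example above.

```python
>>> def extract_answer(generated_text, marker="Answer:"):
...     """Return only the text that follows the last `marker` in the generated output."""
...     return generated_text.rsplit(marker, 1)[-1].strip()

>>> print(extract_answer(generated_text[0]))
They're in a park in New York City, and it's a beautiful day.
```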
## Image classification IDEFICS is capable of classifying images into different categories without being explicitly trained on data containing labeled examples from those specific categories. Given a list of categories and using its image and text understanding capabilities, the model can infer which category the image likely belongs to. Say, we have this image of a vegetable stand: Photo by [Peter Wendt](https://unsplash.com/@peterwendt). We can instruct the model to classify the image into one of the categories that we have: >>> categories = ['animals','vegetables', 'city landscape', 'cars', 'office'] >>> prompt = [f""Instruction: Classify the following image into a single category from the following list: {categories}.\n"", ""https://images.unsplash.com/photo-1471193945509-9ad0617afabf?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80"", ""Category: "" ] >>> inputs = processor(prompt, return_tensors=""pt"").to(""cuda"") >>> bad_words_ids = processor.tokenizer(["""", """"], add_special_tokens=False).input_ids >>> generated_ids = model.generate(**inputs, max_new_tokens=6, bad_words_ids=bad_words_ids) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> print(generated_text[0]) Instruction: Classify the following image into a single category from the following list: ['animals', 'vegetables', 'city landscape', 'cars', 'office']. Category: Vegetables ``` In the example above we instruct the model to classify the image into a single category, however, you can also prompt the model to do rank classification. ## Image-guided text generation For more creative applications, you can use image-guided text generation to generate text based on an image. This can be useful to create descriptions of products, ads, descriptions of a scene, etc. Let's prompt IDEFICS to write a story based on a simple image of a red door: Photo by [Craig Tidball](https://unsplash.com/@devonshiremedia). >>> prompt = [""Instruction: Use the image to write a story. \n"", ""https://images.unsplash.com/photo-1517086822157-2b0358e7684a?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=2203&q=80"", ""Story: \n""] >>> inputs = processor(prompt, return_tensors=""pt"").to(""cuda"") >>> bad_words_ids = processor.tokenizer(["""", """"], add_special_tokens=False).input_ids >>> generated_ids = model.generate(**inputs, num_beams=2, max_new_tokens=200, bad_words_ids=bad_words_ids) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> print(generated_text[0]) Instruction: Use the image to write a story. Story: Once upon a time, there was a little girl who lived in a house with a red door. She loved her red door. It was the prettiest door in the whole world. One day, the little girl was playing in her yard when she noticed a man standing on her doorstep. He was wearing a long black coat and a top hat. The little girl ran inside and told her mother about the man. Her mother said, “Don’t worry, honey. He’s just a friendly ghost.” The little girl wasn’t sure if she believed her mother, but she went outside anyway. When she got to the door, the man was gone. The next day, the little girl was playing in her yard again when she noticed the man standing on her doorstep. He was wearing a long black coat and a top hat. The little girl ran Looks like IDEFICS noticed the pumpkin on the doorstep and went with a spooky Halloween story about a ghost. 
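You may also notice that the story starts to repeat itself toward the end. If that happens, you can experiment with different decoding settings in the `generate` call, reusing the same `inputs` and `bad_words_ids` as above. The values below are purely illustrative, not tuned recommendations:

```python
>>> generated_ids = model.generate(
...     **inputs,
...     do_sample=True,          # sample instead of beam search
...     temperature=0.7,         # soften the next-token distribution a bit
...     top_p=0.9,               # nucleus sampling
...     repetition_penalty=1.2,  # discourage repeating the same phrases
...     max_new_tokens=200,
...     bad_words_ids=bad_words_ids,
... )
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
```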
For longer outputs like this, you will greatly benefit from tweaking the text generation strategy. This can help you significantly improve the quality of the generated output. Check out [Text generation strategies](../generation_strategies) to learn more. ## Running inference in batch mode All of the earlier sections illustrated IDEFICS for a single example. In a very similar fashion, you can run inference for a batch of examples by passing a list of prompts: >>> prompts = [ [ ""https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80"", ""This is an image of "", ], [ ""https://images.unsplash.com/photo-1623944889288-cd147dbb517c?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80"", ""This is an image of "", ], [ ""https://images.unsplash.com/photo-1471193945509-9ad0617afabf?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80"", ""This is an image of "", ], ] >>> inputs = processor(prompts, return_tensors=""pt"").to(""cuda"") >>> bad_words_ids = processor.tokenizer(["""", """"], add_special_tokens=False).input_ids >>> generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> for i,t in enumerate(generated_text): print(f""{i}:\n{t}\n"") 0: This is an image of the Eiffel Tower in Paris, France. 1: This is an image of a couple on a picnic blanket. 2: This is an image of a vegetable stand. ## IDEFICS instruct for conversational use For conversational use cases, you can find fine-tuned instructed versions of the model on the 🤗 Hub: `HuggingFaceM4/idefics-80b-instruct` and `HuggingFaceM4/idefics-9b-instruct`. These checkpoints are the result of fine-tuning the respective base models on a mixture of supervised and instruction fine-tuning datasets, which boosts the downstream performance while making the models more usable in conversational settings. The use and prompting for the conversational use is very similar to using the base models: >>> import torch >>> from transformers import IdeficsForVisionText2Text, AutoProcessor >>> device = ""cuda"" if torch.cuda.is_available() else ""cpu"" >>> checkpoint = ""HuggingFaceM4/idefics-9b-instruct"" >>> model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16).to(device) >>> processor = AutoProcessor.from_pretrained(checkpoint) >>> prompts = [ [ ""User: What is in this image?"", ""https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG"", """", ""\nAssistant: This picture depicts Idefix, the dog of Obelix in Asterix and Obelix. 
Idefix is running on the ground."", ""\nUser:"", ""https://static.wikia.nocookie.net/asterix/images/2/25/R22b.gif/revision/latest?cb=20110815073052"", ""And who is that?"", ""\nAssistant:"", ], ] >>> # --batched mode >>> inputs = processor(prompts, add_end_of_utterance_token=False, return_tensors=""pt"").to(device) >>> # --single sample mode >>> # inputs = processor(prompts[0], return_tensors=""pt"").to(device) >>> # Generation args >>> exit_condition = processor.tokenizer("""", add_special_tokens=False).input_ids >>> bad_words_ids = processor.tokenizer(["""", """"], add_special_tokens=False).input_ids >>> generated_ids = model.generate(**inputs, eos_token_id=exit_condition, bad_words_ids=bad_words_ids, max_length=100) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> for i, t in enumerate(generated_text): print(f""{i}:\n{t}\n"") " tasks/image_classification.md," # Image classification [[open-in-colab]] Image classification assigns a label or class to an image. Unlike text or audio classification, the inputs are the pixel values that comprise an image. There are many applications for image classification, such as detecting damage after a natural disaster, monitoring crop health, or helping screen medical images for signs of disease. This guide illustrates how to: 1. Fine-tune [ViT](model_doc/vit) on the [Food-101](https://huggingface.co/datasets/food101) dataset to classify a food item in an image. 2. Use your fine-tuned model for inference. The task illustrated in this tutorial is supported by the following model architectures: [BEiT](../model_doc/beit), [BiT](../model_doc/bit), [ConvNeXT](../model_doc/convnext), [ConvNeXTV2](../model_doc/convnextv2), [CvT](../model_doc/cvt), [Data2VecVision](../model_doc/data2vec-vision), [DeiT](../model_doc/deit), [DiNAT](../model_doc/dinat), [DINOv2](../model_doc/dinov2), [EfficientFormer](../model_doc/efficientformer), [EfficientNet](../model_doc/efficientnet), [FocalNet](../model_doc/focalnet), [ImageGPT](../model_doc/imagegpt), [LeViT](../model_doc/levit), [MobileNetV1](../model_doc/mobilenet_v1), [MobileNetV2](../model_doc/mobilenet_v2), [MobileViT](../model_doc/mobilevit), [MobileViTV2](../model_doc/mobilevitv2), [NAT](../model_doc/nat), [Perceiver](../model_doc/perceiver), [PoolFormer](../model_doc/poolformer), [PVT](../model_doc/pvt), [RegNet](../model_doc/regnet), [ResNet](../model_doc/resnet), [SegFormer](../model_doc/segformer), [SwiftFormer](../model_doc/swiftformer), [Swin Transformer](../model_doc/swin), [Swin Transformer V2](../model_doc/swinv2), [VAN](../model_doc/van), [ViT](../model_doc/vit), [ViT Hybrid](../model_doc/vit_hybrid), [ViTMSN](../model_doc/vit_msn) Before you begin, make sure you have all the necessary libraries installed: ```bash pip install transformers datasets evaluate We encourage you to log in to your Hugging Face account to upload and share your model with the community. When prompted, enter your token to log in: >>> from huggingface_hub import notebook_login >>> notebook_login() ## Load Food-101 dataset Start by loading a smaller subset of the Food-101 dataset from the 🤗 Datasets library. This will give you a chance to experiment and make sure everything works before spending more time training on the full dataset. 
>>> from datasets import load_dataset >>> food = load_dataset(""food101"", split=""train[:5000]"") Split the dataset's `train` split into a train and test set with the [`~datasets.Dataset.train_test_split`] method: >>> food = food.train_test_split(test_size=0.2) Then take a look at an example: >>> food[""train""][0] {'image': , 'label': 79} Each example in the dataset has two fields: - `image`: a PIL image of the food item - `label`: the label class of the food item To make it easier for the model to get the label name from the label id, create a dictionary that maps the label name to an integer and vice versa: >>> labels = food[""train""].features[""label""].names >>> label2id, id2label = dict(), dict() >>> for i, label in enumerate(labels): label2id[label] = str(i) id2label[str(i)] = label Now you can convert the label id to a label name: >>> id2label[str(79)] 'prime_rib' ## Preprocess The next step is to load a ViT image processor to process the image into a tensor: >>> from transformers import AutoImageProcessor >>> checkpoint = ""google/vit-base-patch16-224-in21k"" >>> image_processor = AutoImageProcessor.from_pretrained(checkpoint) Apply some image transformations to the images to make the model more robust against overfitting. Here you'll use torchvision's [`transforms`](https://pytorch.org/vision/stable/transforms.html) module, but you can also use any image library you like. Crop a random part of the image, resize it, and normalize it with the image mean and standard deviation: >>> from torchvision.transforms import RandomResizedCrop, Compose, Normalize, ToTensor >>> normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std) >>> size = ( image_processor.size[""shortest_edge""] if ""shortest_edge"" in image_processor.size else (image_processor.size[""height""], image_processor.size[""width""]) ) >>> _transforms = Compose([RandomResizedCrop(size), ToTensor(), normalize]) Then create a preprocessing function to apply the transforms and return the `pixel_values` - the inputs to the model - of the image: >>> def transforms(examples): examples[""pixel_values""] = [_transforms(img.convert(""RGB"")) for img in examples[""image""]] del examples[""image""] return examples To apply the preprocessing function over the entire dataset, use 🤗 Datasets [`~datasets.Dataset.with_transform`] method. The transforms are applied on the fly when you load an element of the dataset: >>> food = food.with_transform(transforms) Now create a batch of examples using [`DefaultDataCollator`]. Unlike other data collators in 🤗 Transformers, the `DefaultDataCollator` does not apply additional preprocessing such as padding. >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator() To avoid overfitting and to make the model more robust, add some data augmentation to the training part of the dataset. Here we use Keras preprocessing layers to define the transformations for the training data (includes data augmentation), and transformations for the validation data (only center cropping, resizing and normalizing). You can use `tf.image`or any other library you prefer. 
>>> from tensorflow import keras >>> from tensorflow.keras import layers >>> size = (image_processor.size[""height""], image_processor.size[""width""]) >>> train_data_augmentation = keras.Sequential( [ layers.RandomCrop(size[0], size[1]), layers.Rescaling(scale=1.0 / 127.5, offset=-1), layers.RandomFlip(""horizontal""), layers.RandomRotation(factor=0.02), layers.RandomZoom(height_factor=0.2, width_factor=0.2), ], name=""train_data_augmentation"", ) >>> val_data_augmentation = keras.Sequential( [ layers.CenterCrop(size[0], size[1]), layers.Rescaling(scale=1.0 / 127.5, offset=-1), ], name=""val_data_augmentation"", ) Next, create functions to apply appropriate transformations to a batch of images, instead of one image at a time. >>> import numpy as np >>> import tensorflow as tf >>> from PIL import Image >>> def convert_to_tf_tensor(image: Image): np_image = np.array(image) tf_image = tf.convert_to_tensor(np_image) # `expand_dims()` is used to add a batch dimension since # the TF augmentation layers operates on batched inputs. return tf.expand_dims(tf_image, 0) >>> def preprocess_train(example_batch): """"""Apply train_transforms across a batch."""""" images = [ train_data_augmentation(convert_to_tf_tensor(image.convert(""RGB""))) for image in example_batch[""image""] ] example_batch[""pixel_values""] = [tf.transpose(tf.squeeze(image)) for image in images] return example_batch def preprocess_val(example_batch): """"""Apply val_transforms across a batch."""""" images = [ val_data_augmentation(convert_to_tf_tensor(image.convert(""RGB""))) for image in example_batch[""image""] ] example_batch[""pixel_values""] = [tf.transpose(tf.squeeze(image)) for image in images] return example_batch Use 🤗 Datasets [`~datasets.Dataset.set_transform`] to apply the transformations on the fly: food[""train""].set_transform(preprocess_train) food[""test""].set_transform(preprocess_val) As a final preprocessing step, create a batch of examples using `DefaultDataCollator`. Unlike other data collators in 🤗 Transformers, the `DefaultDataCollator` does not apply additional preprocessing, such as padding. >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator(return_tensors=""tf"") ## Evaluate Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric): >>> import evaluate >>> accuracy = evaluate.load(""accuracy"") Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the accuracy: >>> import numpy as np >>> def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = np.argmax(predictions, axis=1) return accuracy.compute(predictions=predictions, references=labels) Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training. ## Train If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)! You're ready to start training your model now! Load ViT with [`AutoModelForImageClassification`]. 
Specify the number of labels along with the number of expected labels, and the label mappings: >>> from transformers import AutoModelForImageClassification, TrainingArguments, Trainer >>> model = AutoModelForImageClassification.from_pretrained( checkpoint, num_labels=len(labels), id2label=id2label, label2id=label2id, ) At this point, only three steps remain: 1. Define your training hyperparameters in [`TrainingArguments`]. It is important you don't remove unused columns because that'll drop the `image` column. Without the `image` column, you can't create `pixel_values`. Set `remove_unused_columns=False` to prevent this behavior! The only other required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the accuracy and save the training checkpoint. 2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function. 3. Call [`~Trainer.train`] to finetune your model. >>> training_args = TrainingArguments( output_dir=""my_awesome_food_model"", remove_unused_columns=False, evaluation_strategy=""epoch"", save_strategy=""epoch"", learning_rate=5e-5, per_device_train_batch_size=16, gradient_accumulation_steps=4, per_device_eval_batch_size=16, num_train_epochs=3, warmup_ratio=0.1, logging_steps=10, load_best_model_at_end=True, metric_for_best_model=""accuracy"", push_to_hub=True, ) >>> trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=food[""train""], eval_dataset=food[""test""], tokenizer=image_processor, compute_metrics=compute_metrics, ) >>> trainer.train() Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model: >>> trainer.push_to_hub() If you are unfamiliar with fine-tuning a model with Keras, check out the [basic tutorial](./training#train-a-tensorflow-model-with-keras) first! To fine-tune a model in TensorFlow, follow these steps: 1. Define the training hyperparameters, and set up an optimizer and a learning rate schedule. 2. Instantiate a pre-trained model. 3. Convert a 🤗 Dataset to a `tf.data.Dataset`. 4. Compile your model. 5. Add callbacks and use the `fit()` method to run the training. 6. Upload your model to 🤗 Hub to share with the community. 
Start by defining the hyperparameters, optimizer and learning rate schedule: >>> from transformers import create_optimizer >>> batch_size = 16 >>> num_epochs = 5 >>> num_train_steps = len(food[""train""]) * num_epochs >>> learning_rate = 3e-5 >>> weight_decay_rate = 0.01 >>> optimizer, lr_schedule = create_optimizer( init_lr=learning_rate, num_train_steps=num_train_steps, weight_decay_rate=weight_decay_rate, num_warmup_steps=0, ) Then, load ViT with [`TFAutoModelForImageClassification`] along with the label mappings: >>> from transformers import TFAutoModelForImageClassification >>> model = TFAutoModelForImageClassification.from_pretrained( checkpoint, id2label=id2label, label2id=label2id, ) Convert your datasets to the `tf.data.Dataset` format using the [`~datasets.Dataset.to_tf_dataset`] and your `data_collator`: >>> # converting our train dataset to tf.data.Dataset >>> tf_train_dataset = food[""train""].to_tf_dataset( columns=""pixel_values"", label_cols=""label"", shuffle=True, batch_size=batch_size, collate_fn=data_collator ) >>> # converting our test dataset to tf.data.Dataset >>> tf_eval_dataset = food[""test""].to_tf_dataset( columns=""pixel_values"", label_cols=""label"", shuffle=True, batch_size=batch_size, collate_fn=data_collator ) Configure the model for training with `compile()`: >>> from tensorflow.keras.losses import SparseCategoricalCrossentropy >>> loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) >>> model.compile(optimizer=optimizer, loss=loss) To compute the accuracy from the predictions and push your model to the 🤗 Hub, use [Keras callbacks](../main_classes/keras_callbacks). Pass your `compute_metrics` function to [KerasMetricCallback](../main_classes/keras_callbacks#transformers.KerasMetricCallback), and use the [PushToHubCallback](../main_classes/keras_callbacks#transformers.PushToHubCallback) to upload the model: >>> from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback >>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_eval_dataset) >>> push_to_hub_callback = PushToHubCallback( output_dir=""food_classifier"", tokenizer=image_processor, save_strategy=""no"", ) >>> callbacks = [metric_callback, push_to_hub_callback] Finally, you are ready to train your model! Call `fit()` with your training and validation datasets, the number of epochs, and your callbacks to fine-tune the model: >>> model.fit(tf_train_dataset, validation_data=tf_eval_dataset, epochs=num_epochs, callbacks=callbacks) Epoch 1/5 250/250 [==============================] - 313s 1s/step - loss: 2.5623 - val_loss: 1.4161 - accuracy: 0.9290 Epoch 2/5 250/250 [==============================] - 265s 1s/step - loss: 0.9181 - val_loss: 0.6808 - accuracy: 0.9690 Epoch 3/5 250/250 [==============================] - 252s 1s/step - loss: 0.3910 - val_loss: 0.4303 - accuracy: 0.9820 Epoch 4/5 250/250 [==============================] - 251s 1s/step - loss: 0.2028 - val_loss: 0.3191 - accuracy: 0.9900 Epoch 5/5 250/250 [==============================] - 238s 949ms/step - loss: 0.1232 - val_loss: 0.3259 - accuracy: 0.9890 Congratulations! You have fine-tuned your model and shared it on the 🤗 Hub. You can now use it for inference! For a more in-depth example of how to finetune a model for image classification, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). 
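If you prefer to keep a local copy of the fine-tuned model in addition to (or instead of) pushing it to the Hub, you can save the model and image processor to disk with `save_pretrained`. The directory name below is just a placeholder:

```python
>>> save_directory = "./food_classifier_local"  # placeholder path, change as needed
>>> model.save_pretrained(save_directory)
>>> image_processor.save_pretrained(save_directory)
```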
## Inference Great, now that you've fine-tuned a model, you can use it for inference! Load an image you'd like to run inference on: >>> ds = load_dataset(""food101"", split=""validation[:10]"") >>> image = ds[""image""][0] The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for image classification with your model, and pass your image to it: >>> from transformers import pipeline >>> classifier = pipeline(""image-classification"", model=""my_awesome_food_model"") >>> classifier(image) [{'score': 0.31856709718704224, 'label': 'beignets'}, {'score': 0.015232225880026817, 'label': 'bruschetta'}, {'score': 0.01519392803311348, 'label': 'chicken_wings'}, {'score': 0.013022331520915031, 'label': 'pork_chop'}, {'score': 0.012728818692266941, 'label': 'prime_rib'}] You can also manually replicate the results of the `pipeline` if you'd like: Load an image processor to preprocess the image and return the `input` as PyTorch tensors: >>> from transformers import AutoImageProcessor >>> import torch >>> image_processor = AutoImageProcessor.from_pretrained(""my_awesome_food_model"") >>> inputs = image_processor(image, return_tensors=""pt"") Pass your inputs to the model and return the logits: >>> from transformers import AutoModelForImageClassification >>> model = AutoModelForImageClassification.from_pretrained(""my_awesome_food_model"") >>> with torch.no_grad(): logits = model(**inputs).logits Get the predicted label with the highest probability, and use the model's `id2label` mapping to convert it to a label: >>> predicted_label = logits.argmax(-1).item() >>> model.config.id2label[predicted_label] 'beignets' Load an image processor to preprocess the image and return the `input` as TensorFlow tensors: >>> from transformers import AutoImageProcessor >>> image_processor = AutoImageProcessor.from_pretrained(""MariaK/food_classifier"") >>> inputs = image_processor(image, return_tensors=""tf"") Pass your inputs to the model and return the logits: >>> from transformers import TFAutoModelForImageClassification >>> model = TFAutoModelForImageClassification.from_pretrained(""MariaK/food_classifier"") >>> logits = model(**inputs).logits Get the predicted label with the highest probability, and use the model's `id2label` mapping to convert it to a label: >>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0]) >>> model.config.id2label[predicted_class_id] 'beignets' " tasks/visual_question_answering.md," # Visual Question Answering [[open-in-colab]] Visual Question Answering (VQA) is the task of answering open-ended questions based on an image. The input to models supporting this task is typically a combination of an image and a question, and the output is an answer expressed in natural language. Some noteworthy use case examples for VQA include: * Accessibility applications for visually impaired individuals. * Education: posing questions about visual materials presented in lectures or textbooks. VQA can also be utilized in interactive museum exhibits or historical sites. * Customer service and e-commerce: VQA can enhance user experience by letting users ask questions about products. * Image retrieval: VQA models can be used to retrieve images with specific characteristics. For example, the user can ask ""Is there a dog?"" to find all images with dogs from a set of images. 
In this guide you'll learn how to: - Fine-tune a classification VQA model, specifically [ViLT](../model_doc/vilt), on the [`Graphcore/vqa` dataset](https://huggingface.co/datasets/Graphcore/vqa). - Use your fine-tuned ViLT for inference. - Run zero-shot VQA inference with a generative model, like BLIP-2. ## Fine-tuning ViLT ViLT model incorporates text embeddings into a Vision Transformer (ViT), allowing it to have a minimal design for Vision-and-Language Pre-training (VLP). This model can be used for several downstream tasks. For the VQA task, a classifier head is placed on top (a linear layer on top of the final hidden state of the `[CLS]` token) and randomly initialized. Visual Question Answering is thus treated as a **classification problem**. More recent models, such as BLIP, BLIP-2, and InstructBLIP, treat VQA as a generative task. Later in this guide we illustrate how to use them for zero-shot VQA inference. Before you begin, make sure you have all the necessary libraries installed. ```bash pip install -q transformers datasets We encourage you to share your model with the community. Log in to your Hugging Face account to upload it to the 🤗 Hub. When prompted, enter your token to log in: >>> from huggingface_hub import notebook_login >>> notebook_login() Let's define the model checkpoint as a global variable. >>> model_checkpoint = ""dandelin/vilt-b32-mlm"" ## Load the data For illustration purposes, in this guide we use a very small sample of the annotated visual question answering `Graphcore/vqa` dataset. You can find the full dataset on [🤗 Hub](https://huggingface.co/datasets/Graphcore/vqa). As an alternative to the [`Graphcore/vqa` dataset](https://huggingface.co/datasets/Graphcore/vqa), you can download the same data manually from the official [VQA dataset page](https://visualqa.org/download.html). If you prefer to follow the tutorial with your custom data, check out how to [Create an image dataset](https://huggingface.co/docs/datasets/image_dataset#loading-script) guide in the 🤗 Datasets documentation. Let's load the first 200 examples from the validation split and explore the dataset's features: thon >>> from datasets import load_dataset >>> dataset = load_dataset(""Graphcore/vqa"", split=""validation[:200]"") >>> dataset Dataset({ features: ['question', 'question_type', 'question_id', 'image_id', 'answer_type', 'label'], num_rows: 200 }) Let's take a look at an example to understand the dataset's features: >>> dataset[0] {'question': 'Where is he looking?', 'question_type': 'none of the above', 'question_id': 262148000, 'image_id': '/root/.cache/huggingface/datasets/downloads/extracted/ca733e0e000fb2d7a09fbcc94dbfe7b5a30750681d0e965f8e0a23b1c2f98c75/val2014/COCO_val2014_000000262148.jpg', 'answer_type': 'other', 'label': {'ids': ['at table', 'down', 'skateboard', 'table'], 'weights': [0.30000001192092896, 1.0, 0.30000001192092896, 0.30000001192092896]}} The features relevant to the task include: * `question`: the question to be answered from the image * `image_id`: the path to the image the question refers to * `label`: the annotations We can remove the rest of the features as they won't be necessary: >>> dataset = dataset.remove_columns(['question_type', 'question_id', 'answer_type']) As you can see, the `label` feature contains several answers to the same question (called `ids` here) collected by different human annotators. This is because the answer to a question can be subjective. 
In this case, the question is ""where is he looking?"". Some people annotated this with ""down"", others with ""at table"", another one with ""skateboard"", etc. Take a look at the image and consider which answer would you give: thon >>> from PIL import Image >>> image = Image.open(dataset[0]['image_id']) >>> image Due to the questions' and answers' ambiguity, datasets like this are treated as a multi-label classification problem (as multiple answers are possibly valid). Moreover, rather than just creating a one-hot encoded vector, one creates a soft encoding, based on the number of times a certain answer appeared in the annotations. For instance, in the example above, because the answer ""down"" is selected way more often than other answers, it has a score (called `weight` in the dataset) of 1.0, and the rest of the answers have scores < 1.0. To later instantiate the model with an appropriate classification head, let's create two dictionaries: one that maps the label name to an integer and vice versa: >>> import itertools >>> labels = [item['ids'] for item in dataset['label']] >>> flattened_labels = list(itertools.chain(*labels)) >>> unique_labels = list(set(flattened_labels)) >>> label2id = {label: idx for idx, label in enumerate(unique_labels)} >>> id2label = {idx: label for label, idx in label2id.items()} Now that we have the mappings, we can replace the string answers with their ids, and flatten the dataset for a more convenient further preprocessing. thon >>> def replace_ids(inputs): inputs[""label""][""ids""] = [label2id[x] for x in inputs[""label""][""ids""]] return inputs >>> dataset = dataset.map(replace_ids) >>> flat_dataset = dataset.flatten() >>> flat_dataset.features {'question': Value(dtype='string', id=None), 'image_id': Value(dtype='string', id=None), 'label.ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'label.weights': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None)} ## Preprocessing data The next step is to load a ViLT processor to prepare the image and text data for the model. [`ViltProcessor`] wraps a BERT tokenizer and ViLT image processor into a convenient single processor: >>> from transformers import ViltProcessor >>> processor = ViltProcessor.from_pretrained(model_checkpoint) To preprocess the data we need to encode the images and questions using the [`ViltProcessor`]. The processor will use the [`BertTokenizerFast`] to tokenize the text and create `input_ids`, `attention_mask` and `token_type_ids` for the text data. As for images, the processor will leverage [`ViltImageProcessor`] to resize and normalize the image, and create `pixel_values` and `pixel_mask`. All these preprocessing steps are done under the hood, we only need to call the `processor`. However, we still need to prepare the target labels. In this representation, each element corresponds to a possible answer (label). For correct answers, the element holds their respective score (weight), while the remaining elements are set to zero. 
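To make this representation concrete, here is a minimal illustration (not part of the preprocessing pipeline itself) of what the soft-encoded target for the first example would look like, using the `id2label` mapping and the `dataset` created earlier:

```python
>>> import torch

>>> example = dataset[0]
>>> target = torch.zeros(len(id2label))
>>> for answer_id, weight in zip(example["label"]["ids"], example["label"]["weights"]):
...     target[answer_id] = weight

>>> # "down" ends up with a weight of 1.0, "at table", "skateboard" and "table"
>>> # get ~0.3, and every other possible answer stays at 0.0
```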
The following function applies the `processor` to the images and questions and formats the labels as described above: >>> import torch >>> def preprocess_data(examples): image_paths = examples['image_id'] images = [Image.open(image_path) for image_path in image_paths] texts = examples['question'] encoding = processor(images, texts, padding=""max_length"", truncation=True, return_tensors=""pt"") for k, v in encoding.items(): encoding[k] = v.squeeze() targets = [] for labels, scores in zip(examples['label.ids'], examples['label.weights']): target = torch.zeros(len(id2label)) for label, score in zip(labels, scores): target[label] = score targets.append(target) encoding[""labels""] = targets return encoding To apply the preprocessing function over the entire dataset, use 🤗 Datasets [`~datasets.map`] function. You can speed up `map` by setting `batched=True` to process multiple elements of the dataset at once. At this point, feel free to remove the columns you don't need. >>> processed_dataset = flat_dataset.map(preprocess_data, batched=True, remove_columns=['question','question_type', 'question_id', 'image_id', 'answer_type', 'label.ids', 'label.weights']) >>> processed_dataset Dataset({ features: ['input_ids', 'token_type_ids', 'attention_mask', 'pixel_values', 'pixel_mask', 'labels'], num_rows: 200 }) As a final step, create a batch of examples using [`DefaultDataCollator`]: >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator() ## Train the model You’re ready to start training your model now! Load ViLT with [`ViltForQuestionAnswering`]. Specify the number of labels along with the label mappings: >>> from transformers import ViltForQuestionAnswering >>> model = ViltForQuestionAnswering.from_pretrained(model_checkpoint, num_labels=len(id2label), id2label=id2label, label2id=label2id) At this point, only three steps remain: 1. Define your training hyperparameters in [`TrainingArguments`]: >>> from transformers import TrainingArguments >>> repo_id = ""MariaK/vilt_finetuned_200"" >>> training_args = TrainingArguments( output_dir=repo_id, per_device_train_batch_size=4, num_train_epochs=20, save_steps=200, logging_steps=50, learning_rate=5e-5, save_total_limit=2, remove_unused_columns=False, push_to_hub=True, ) 2. Pass the training arguments to [`Trainer`] along with the model, dataset, processor, and data collator. >>> from transformers import Trainer >>> trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=processed_dataset, tokenizer=processor, ) 3. Call [`~Trainer.train`] to finetune your model. >>> trainer.train() Once training is completed, share your model to the Hub with the [`~Trainer.push_to_hub`] method to share your final model on the 🤗 Hub: >>> trainer.push_to_hub() ## Inference Now that you have fine-tuned a ViLT model, and uploaded it to the 🤗 Hub, you can use it for inference. The simplest way to try out your fine-tuned model for inference is to use it in a [`Pipeline`]. >>> from transformers import pipeline >>> pipe = pipeline(""visual-question-answering"", model=""MariaK/vilt_finetuned_200"") The model in this guide has only been trained on 200 examples, so don't expect a lot from it. 
Let's see if it at least learned something from the data and take the first example from the dataset to illustrate inference: >>> example = dataset[0] >>> image = Image.open(example['image_id']) >>> question = example['question'] >>> print(question) >>> pipe(image, question, top_k=1) ""Where is he looking?"" [{'score': 0.5498199462890625, 'answer': 'down'}] Even though not very confident, the model indeed has learned something. With more examples and longer training, you'll get far better results! You can also manually replicate the results of the pipeline if you'd like: 1. Take an image and a question, prepare them for the model using the processor from your model. 2. Forward the result or preprocessing through the model. 3. From the logits, get the most likely answer's id, and find the actual answer in the `id2label`. >>> processor = ViltProcessor.from_pretrained(""MariaK/vilt_finetuned_200"") >>> image = Image.open(example['image_id']) >>> question = example['question'] >>> # prepare inputs >>> inputs = processor(image, question, return_tensors=""pt"") >>> model = ViltForQuestionAnswering.from_pretrained(""MariaK/vilt_finetuned_200"") >>> # forward pass >>> with torch.no_grad(): outputs = model(**inputs) >>> logits = outputs.logits >>> idx = logits.argmax(-1).item() >>> print(""Predicted answer:"", model.config.id2label[idx]) Predicted answer: down ## Zero-shot VQA The previous model treated VQA as a classification task. Some recent models, such as BLIP, BLIP-2, and InstructBLIP approach VQA as a generative task. Let's take [BLIP-2](../model_doc/blip-2) as an example. It introduced a new visual-language pre-training paradigm in which any combination of pre-trained vision encoder and LLM can be used (learn more in the [BLIP-2 blog post](https://huggingface.co/blog/blip-2)). This enables achieving state-of-the-art results on multiple visual-language tasks including visual question answering. Let's illustrate how you can use this model for VQA. First, let's load the model. Here we'll explicitly send the model to a GPU, if available, which we didn't need to do earlier when training, as [`Trainer`] handles this automatically: >>> from transformers import AutoProcessor, Blip2ForConditionalGeneration >>> import torch >>> processor = AutoProcessor.from_pretrained(""Salesforce/blip2-opt-2.7b"") >>> model = Blip2ForConditionalGeneration.from_pretrained(""Salesforce/blip2-opt-2.7b"", torch_dtype=torch.float16) >>> device = ""cuda"" if torch.cuda.is_available() else ""cpu"" >>> model.to(device) The model takes image and text as input, so let's use the exact same image/question pair from the first example in the VQA dataset: >>> example = dataset[0] >>> image = Image.open(example['image_id']) >>> question = example['question'] To use BLIP-2 for visual question answering task, the textual prompt has to follow a specific format: `Question: {} Answer:`. 
>>> prompt = f""Question: {question} Answer:"" Now we need to preprocess the image/prompt with the model's processor, pass the processed input through the model, and decode the output: >>> inputs = processor(image, text=prompt, return_tensors=""pt"").to(device, torch.float16) >>> generated_ids = model.generate(**inputs, max_new_tokens=10) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip() >>> print(generated_text) ""He is looking at the crowd"" As you can see, the model recognized the crowd, and the direction of the face (looking down), however, it seems to miss the fact the crowd is behind the skater. Still, in cases where acquiring human-annotated datasets is not feasible, this approach can quickly produce useful results. " tasks/image_captioning.md," # Image captioning [[open-in-colab]] Image captioning is the task of predicting a caption for a given image. Common real world applications of it include aiding visually impaired people that can help them navigate through different situations. Therefore, image captioning helps to improve content accessibility for people by describing images to them. This guide will show you how to: * Fine-tune an image captioning model. * Use the fine-tuned model for inference. Before you begin, make sure you have all the necessary libraries installed: ```bash pip install transformers datasets evaluate -q pip install jiwer -q We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in: thon from huggingface_hub import notebook_login notebook_login() ## Load the Pokémon BLIP captions dataset Use the 🤗 Dataset library to load a dataset that consists of {image-caption} pairs. To create your own image captioning dataset in PyTorch, you can follow [this notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/GIT/Fine_tune_GIT_on_an_image_captioning_dataset.ipynb). thon from datasets import load_dataset ds = load_dataset(""lambdalabs/pokemon-blip-captions"") ds ```bash DatasetDict({ train: Dataset({ features: ['image', 'text'], num_rows: 833 }) }) The dataset has two features, `image` and `text`. Many image captioning datasets contain multiple captions per image. In those cases, a common strategy is to randomly sample a caption amongst the available ones during training. Split the dataset’s train split into a train and test set with the [~datasets.Dataset.train_test_split] method: thon ds = ds[""train""].train_test_split(test_size=0.1) train_ds = ds[""train""] test_ds = ds[""test""] Let's visualize a couple of samples from the training set. thon from textwrap import wrap import matplotlib.pyplot as plt import numpy as np def plot_images(images, captions): plt.figure(figsize=(20, 20)) for i in range(len(images)): ax = plt.subplot(1, len(images), i + 1) caption = captions[i] caption = ""\n"".join(wrap(caption, 12)) plt.title(caption) plt.imshow(images[i]) plt.axis(""off"") sample_images_to_visualize = [np.array(train_ds[i][""image""]) for i in range(5)] sample_captions = [train_ds[i][""text""] for i in range(5)] plot_images(sample_images_to_visualize, sample_captions) ## Preprocess the dataset Since the dataset has two modalities (image and text), the pre-processing pipeline will preprocess images and the captions. To do so, load the processor class associated with the model you are about to fine-tune. 
thon from transformers import AutoProcessor checkpoint = ""microsoft/git-base"" processor = AutoProcessor.from_pretrained(checkpoint) The processor will internally pre-process the image (which includes resizing, and pixel scaling) and tokenize the caption. thon def transforms(example_batch): images = [x for x in example_batch[""image""]] captions = [x for x in example_batch[""text""]] inputs = processor(images=images, text=captions, padding=""max_length"") inputs.update({""labels"": inputs[""input_ids""]}) return inputs train_ds.set_transform(transforms) test_ds.set_transform(transforms) With the dataset ready, you can now set up the model for fine-tuning. ## Load a base model Load the [""microsoft/git-base""](https://huggingface.co/microsoft/git-base) into a [`AutoModelForCausalLM`](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForCausalLM) object. thon from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained(checkpoint) ## Evaluate Image captioning models are typically evaluated with the [Rouge Score](https://huggingface.co/spaces/evaluate-metric/rouge) or [Word Error Rate](https://huggingface.co/spaces/evaluate-metric/wer). For this guide, you will use the Word Error Rate (WER). We use the 🤗 Evaluate library to do so. For potential limitations and other gotchas of the WER, refer to [this guide](https://huggingface.co/spaces/evaluate-metric/wer). thon from evaluate import load import torch wer = load(""wer"") def compute_metrics(eval_pred): logits, labels = eval_pred predicted = logits.argmax(-1) decoded_labels = processor.batch_decode(labels, skip_special_tokens=True) decoded_predictions = processor.batch_decode(predicted, skip_special_tokens=True) wer_score = wer.compute(predictions=decoded_predictions, references=decoded_labels) return {""wer_score"": wer_score} ## Train! Now, you are ready to start fine-tuning the model. You will use the 🤗 [`Trainer`] for this. First, define the training arguments using [`TrainingArguments`]. thon from transformers import TrainingArguments, Trainer model_name = checkpoint.split(""/"")[1] training_args = TrainingArguments( output_dir=f""{model_name}-pokemon"", learning_rate=5e-5, num_train_epochs=50, fp16=True, per_device_train_batch_size=32, per_device_eval_batch_size=32, gradient_accumulation_steps=2, save_total_limit=3, evaluation_strategy=""steps"", eval_steps=50, save_strategy=""steps"", save_steps=50, logging_steps=50, remove_unused_columns=False, push_to_hub=True, label_names=[""labels""], load_best_model_at_end=True, ) Then pass them along with the datasets and the model to 🤗 Trainer. thon trainer = Trainer( model=model, args=training_args, train_dataset=train_ds, eval_dataset=test_ds, compute_metrics=compute_metrics, ) To start training, simply call [`~Trainer.train`] on the [`Trainer`] object. thon trainer.train() You should see the training loss drop smoothly as training progresses. Once training is completed, share your model to the Hub with the [`~Trainer.push_to_hub`] method so everyone can use your model: thon trainer.push_to_hub() ## Inference Take a sample image from `test_ds` to test the model. thon from PIL import Image import requests url = ""https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/pokemon.png"" image = Image.open(requests.get(url, stream=True).raw) image Prepare image for the model. 
thon device = ""cuda"" if torch.cuda.is_available() else ""cpu"" inputs = processor(images=image, return_tensors=""pt"").to(device) pixel_values = inputs.pixel_values Call [`generate`] and decode the predictions. thon generated_ids = model.generate(pixel_values=pixel_values, max_length=50) generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] print(generated_caption) ```bash a drawing of a pink and blue pokemon Looks like the fine-tuned model generated a pretty good caption! " tasks/document_question_answering.md," # Document Question Answering [[open-in-colab]] Document Question Answering, also referred to as Document Visual Question Answering, is a task that involves providing answers to questions posed about document images. The input to models supporting this task is typically a combination of an image and a question, and the output is an answer expressed in natural language. These models utilize multiple modalities, including text, the positions of words (bounding boxes), and the image itself. This guide illustrates how to: - Fine-tune [LayoutLMv2](../model_doc/layoutlmv2) on the [DocVQA dataset](https://huggingface.co/datasets/nielsr/docvqa_1200_examples_donut). - Use your fine-tuned model for inference. The task illustrated in this tutorial is supported by the following model architectures: [LayoutLM](../model_doc/layoutlm), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3) LayoutLMv2 solves the document question-answering task by adding a question-answering head on top of the final hidden states of the tokens, to predict the positions of the start and end tokens of the answer. In other words, the problem is treated as extractive question answering: given the context, extract which piece of information answers the question. The context comes from the output of an OCR engine, here it is Google's Tesseract. Before you begin, make sure you have all the necessary libraries installed. LayoutLMv2 depends on detectron2, torchvision and tesseract. ```bash pip install -q transformers datasets ```bash pip install 'git+https://github.com/facebookresearch/detectron2.git' pip install torchvision ```bash sudo apt install tesseract-ocr pip install -q pytesseract Once you have installed all of the dependencies, restart your runtime. We encourage you to share your model with the community. Log in to your Hugging Face account to upload it to the 🤗 Hub. When prompted, enter your token to log in: >>> from huggingface_hub import notebook_login >>> notebook_login() Let's define some global variables. >>> model_checkpoint = ""microsoft/layoutlmv2-base-uncased"" >>> batch_size = 4 ## Load the data In this guide we use a small sample of preprocessed DocVQA that you can find on 🤗 Hub. If you'd like to use the full DocVQA dataset, you can register and download it on [DocVQA homepage](https://rrc.cvc.uab.es/?ch=17). If you do so, to proceed with this guide check out [how to load files into a 🤗 dataset](https://huggingface.co/docs/datasets/loading#local-and-remote-files). >>> from datasets import load_dataset >>> dataset = load_dataset(""nielsr/docvqa_1200_examples"") >>> dataset DatasetDict({ train: Dataset({ features: ['id', 'image', 'query', 'answers', 'words', 'bounding_boxes', 'answer'], num_rows: 1000 }) test: Dataset({ features: ['id', 'image', 'query', 'answers', 'words', 'bounding_boxes', 'answer'], num_rows: 200 }) }) As you can see, the dataset is split into train and test sets already. 
Take a look at a random example to familiarize yourself with the features. >>> dataset[""train""].features Here's what the individual fields represent: * `id`: the example's id * `image`: a PIL.Image.Image object containing the document image * `query`: the question string - natural language asked question, in several languages * `answers`: a list of correct answers provided by human annotators * `words` and `bounding_boxes`: the results of OCR, which we will not use here * `answer`: an answer matched by a different model which we will not use here Let's leave only English questions, and drop the `answer` feature which appears to contain predictions by another model. We'll also take the first of the answers from the set provided by the annotators. Alternatively, you can randomly sample it. >>> updated_dataset = dataset.map(lambda example: {""question"": example[""query""][""en""]}, remove_columns=[""query""]) >>> updated_dataset = updated_dataset.map( lambda example: {""answer"": example[""answers""][0]}, remove_columns=[""answer"", ""answers""] ) Note that the LayoutLMv2 checkpoint that we use in this guide has been trained with `max_position_embeddings = 512` (you can find this information in the [checkpoint's `config.json` file](https://huggingface.co/microsoft/layoutlmv2-base-uncased/blob/main/config.json#L18)). We can truncate the examples but to avoid the situation where the answer might be at the end of a large document and end up truncated, here we'll remove the few examples where the embedding is likely to end up longer than 512. If most of the documents in your dataset are long, you can implement a sliding window strategy - check out [this notebook](https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb) for details. >>> updated_dataset = updated_dataset.filter(lambda x: len(x[""words""]) + len(x[""question""].split()) < 512) At this point let's also remove the OCR features from this dataset. These are a result of OCR for fine-tuning a different model. They would still require some processing if we wanted to use them, as they do not match the input requirements of the model we use in this guide. Instead, we can use the [`LayoutLMv2Processor`] on the original data for both OCR and tokenization. This way we'll get the inputs that match model's expected input. If you want to process images manually, check out the [`LayoutLMv2` model documentation](../model_doc/layoutlmv2) to learn what input format the model expects. >>> updated_dataset = updated_dataset.remove_columns(""words"") >>> updated_dataset = updated_dataset.remove_columns(""bounding_boxes"") Finally, the data exploration won't be complete if we don't peek at an image example. >>> updated_dataset[""train""][11][""image""] ## Preprocess the data The Document Question Answering task is a multimodal task, and you need to make sure that the inputs from each modality are preprocessed according to the model's expectations. Let's start by loading the [`LayoutLMv2Processor`], which internally combines an image processor that can handle image data and a tokenizer that can encode text data. >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained(model_checkpoint) ### Preprocessing document images First, let's prepare the document images for the model with the help of the `image_processor` from the processor. 
By default, image processor resizes the images to 224x224, makes sure they have the correct order of color channels, applies OCR with tesseract to get words and normalized bounding boxes. In this tutorial, all of these defaults are exactly what we need. Write a function that applies the default image processing to a batch of images and returns the results of OCR. >>> image_processor = processor.image_processor >>> def get_ocr_words_and_boxes(examples): images = [image.convert(""RGB"") for image in examples[""image""]] encoded_inputs = image_processor(images) examples[""image""] = encoded_inputs.pixel_values examples[""words""] = encoded_inputs.words examples[""boxes""] = encoded_inputs.boxes return examples To apply this preprocessing to the entire dataset in a fast way, use [`~datasets.Dataset.map`]. >>> dataset_with_ocr = updated_dataset.map(get_ocr_words_and_boxes, batched=True, batch_size=2) ### Preprocessing text data Once we have applied OCR to the images, we need to encode the text part of the dataset to prepare it for the model. This involves converting the words and boxes that we got in the previous step to token-level `input_ids`, `attention_mask`, `token_type_ids` and `bbox`. For preprocessing text, we'll need the `tokenizer` from the processor. >>> tokenizer = processor.tokenizer On top of the preprocessing mentioned above, we also need to add the labels for the model. For `xxxForQuestionAnswering` models in 🤗 Transformers, the labels consist of the `start_positions` and `end_positions`, indicating which token is at the start and which token is at the end of the answer. Let's start with that. Define a helper function that can find a sublist (the answer split into words) in a larger list (the words list). This function will take two lists as input, `words_list` and `answer_list`. It will then iterate over the `words_list` and check if the current word in the `words_list` (words_list[i]) is equal to the first word of answer_list (answer_list[0]) and if the sublist of `words_list` starting from the current word and of the same length as `answer_list` is equal `to answer_list`. If this condition is true, it means that a match has been found, and the function will record the match, its starting index (idx), and its ending index (idx + len(answer_list) - 1). If more than one match was found, the function will return only the first one. If no match is found, the function returns (`None`, 0, and 0). >>> def subfinder(words_list, answer_list): matches = [] start_indices = [] end_indices = [] for idx, i in enumerate(range(len(words_list))): if words_list[i] == answer_list[0] and words_list[i : i + len(answer_list)] == answer_list: matches.append(answer_list) start_indices.append(idx) end_indices.append(idx + len(answer_list) - 1) if matches: return matches[0], start_indices[0], end_indices[0] else: return None, 0, 0 To illustrate how this function finds the position of the answer, let's use it on an example: >>> example = dataset_with_ocr[""train""][1] >>> words = [word.lower() for word in example[""words""]] >>> match, word_idx_start, word_idx_end = subfinder(words, example[""answer""].lower().split()) >>> print(""Question: "", example[""question""]) >>> print(""Words:"", words) >>> print(""Answer: "", example[""answer""]) >>> print(""start_index"", word_idx_start) >>> print(""end_index"", word_idx_end) Question: Who is in cc in this letter? 
Words: ['wie', 'baw', 'brown', '&', 'williamson', 'tobacco', 'corporation', 'research', '&', 'development', 'internal', 'correspondence', 'to:', 'r.', 'h.', 'honeycutt', 'ce:', 't.f.', 'riehl', 'from:', '.', 'c.j.', 'cook', 'date:', 'may', '8,', '1995', 'subject:', 'review', 'of', 'existing', 'brainstorming', 'ideas/483', 'the', 'major', 'function', 'of', 'the', 'product', 'innovation', 'graup', 'is', 'to', 'develop', 'marketable', 'nove!', 'products', 'that', 'would', 'be', 'profitable', 'to', 'manufacture', 'and', 'sell.', 'novel', 'is', 'defined', 'as:', 'of', 'a', 'new', 'kind,', 'or', 'different', 'from', 'anything', 'seen', 'or', 'known', 'before.', 'innovation', 'is', 'defined', 'as:', 'something', 'new', 'or', 'different', 'introduced;', 'act', 'of', 'innovating;', 'introduction', 'of', 'new', 'things', 'or', 'methods.', 'the', 'products', 'may', 'incorporate', 'the', 'latest', 'technologies,', 'materials', 'and', 'know-how', 'available', 'to', 'give', 'then', 'a', 'unique', 'taste', 'or', 'look.', 'the', 'first', 'task', 'of', 'the', 'product', 'innovation', 'group', 'was', 'to', 'assemble,', 'review', 'and', 'categorize', 'a', 'list', 'of', 'existing', 'brainstorming', 'ideas.', 'ideas', 'were', 'grouped', 'into', 'two', 'major', 'categories', 'labeled', 'appearance', 'and', 'taste/aroma.', 'these', 'categories', 'are', 'used', 'for', 'novel', 'products', 'that', 'may', 'differ', 'from', 'a', 'visual', 'and/or', 'taste/aroma', 'point', 'of', 'view', 'compared', 'to', 'canventional', 'cigarettes.', 'other', 'categories', 'include', 'a', 'combination', 'of', 'the', 'above,', 'filters,', 'packaging', 'and', 'brand', 'extensions.', 'appearance', 'this', 'category', 'is', 'used', 'for', 'novel', 'cigarette', 'constructions', 'that', 'yield', 'visually', 'different', 'products', 'with', 'minimal', 'changes', 'in', 'smoke', 'chemistry', 'two', 'cigarettes', 'in', 'cne.', 'emulti-plug', 'te', 'build', 'yaur', 'awn', 'cigarette.', 'eswitchable', 'menthol', 'or', 'non', 'menthol', 'cigarette.', '*cigarettes', 'with', 'interspaced', 'perforations', 'to', 'enable', 'smoker', 'to', 'separate', 'unburned', 'section', 'for', 'future', 'smoking.', '«short', 'cigarette,', 'tobacco', 'section', '30', 'mm.', '«extremely', 'fast', 'buming', 'cigarette.', '«novel', 'cigarette', 'constructions', 'that', 'permit', 'a', 'significant', 'reduction', 'iretobacco', 'weight', 'while', 'maintaining', 'smoking', 'mechanics', 'and', 'visual', 'characteristics.', 'higher', 'basis', 'weight', 'paper:', 'potential', 'reduction', 'in', 'tobacco', 'weight.', '«more', 'rigid', 'tobacco', 'column;', 'stiffing', 'agent', 'for', 'tobacco;', 'e.g.', 'starch', '*colored', 'tow', 'and', 'cigarette', 'papers;', 'seasonal', 'promotions,', 'e.g.', 'pastel', 'colored', 'cigarettes', 'for', 'easter', 'or', 'in', 'an', 'ebony', 'and', 'ivory', 'brand', 'containing', 'a', 'mixture', 'of', 'all', 'black', '(black', 'paper', 'and', 'tow)', 'and', 'ail', 'white', 'cigarettes.', '499150498'] Answer: T.F. Riehl start_index 17 end_index 18 Once examples are encoded, however, they will look like this: >>> encoding = tokenizer(example[""question""], example[""words""], example[""boxes""]) >>> tokenizer.decode(encoding[""input_ids""]) [CLS] who is in cc in this letter? [SEP] wie baw brown & williamson tobacco corporation research & development We'll need to find the position of the answer in the encoded input. * `token_type_ids` tells us which tokens are part of the question, and which ones are part of the document's words. 
* `tokenizer.cls_token_id` will help find the special token at the beginning of the input. * `word_ids` will help match the answer found in the original `words` to the same answer in the full encoded input and determine the start/end position of the answer in the encoded input. With that in mind, let's create a function to encode a batch of examples in the dataset: >>> def encode_dataset(examples, max_length=512): questions = examples[""question""] words = examples[""words""] boxes = examples[""boxes""] answers = examples[""answer""] # encode the batch of examples and initialize the start_positions and end_positions encoding = tokenizer(questions, words, boxes, max_length=max_length, padding=""max_length"", truncation=True) start_positions = [] end_positions = [] # loop through the examples in the batch for i in range(len(questions)): cls_index = encoding[""input_ids""][i].index(tokenizer.cls_token_id) # find the position of the answer in example's words words_example = [word.lower() for word in words[i]] answer = answers[i] match, word_idx_start, word_idx_end = subfinder(words_example, answer.lower().split()) if match: # if match is found, use `token_type_ids` to find where words start in the encoding token_type_ids = encoding[""token_type_ids""][i] token_start_index = 0 while token_type_ids[token_start_index] != 1: token_start_index += 1 token_end_index = len(encoding[""input_ids""][i]) - 1 while token_type_ids[token_end_index] != 1: token_end_index -= 1 word_ids = encoding.word_ids(i)[token_start_index : token_end_index + 1] start_position = cls_index end_position = cls_index # loop over word_ids and increase `token_start_index` until it matches the answer position in words # once it matches, save the `token_start_index` as the `start_position` of the answer in the encoding for id in word_ids: if id == word_idx_start: start_position = token_start_index else: token_start_index += 1 # similarly loop over `word_ids` starting from the end to find the `end_position` of the answer for id in word_ids[::-1]: if id == word_idx_end: end_position = token_end_index else: token_end_index -= 1 start_positions.append(start_position) end_positions.append(end_position) else: start_positions.append(cls_index) end_positions.append(cls_index) encoding[""image""] = examples[""image""] encoding[""start_positions""] = start_positions encoding[""end_positions""] = end_positions return encoding Now that we have this preprocessing function, we can encode the entire dataset: >>> encoded_train_dataset = dataset_with_ocr[""train""].map( encode_dataset, batched=True, batch_size=2, remove_columns=dataset_with_ocr[""train""].column_names ) >>> encoded_test_dataset = dataset_with_ocr[""test""].map( encode_dataset, batched=True, batch_size=2, remove_columns=dataset_with_ocr[""test""].column_names ) Let's check what the features of the encoded dataset look like: >>> encoded_train_dataset.features {'image': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='uint8', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None), 'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), 'token_type_ids': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'bbox': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'start_positions': Value(dtype='int64', id=None), 'end_positions': Value(dtype='int64', id=None)} ## Evaluation 
Evaluation for document question answering requires a significant amount of postprocessing. To avoid taking up too much of your time, this guide skips the evaluation step. The [`Trainer`] still calculates the evaluation loss during training so you're not completely in the dark about your model's performance. Extractive question answering is typically evaluated using F1/exact match. If you'd like to implement it yourself, check out the [Question Answering chapter](https://huggingface.co/course/chapter7/7?fw=pt#postprocessing) of the Hugging Face course for inspiration. ## Train Congratulations! You've successfully navigated the toughest part of this guide and now you are ready to train your own model. Training involves the following steps: * Load the model with [`AutoModelForDocumentQuestionAnswering`] using the same checkpoint as in the preprocessing. * Define your training hyperparameters in [`TrainingArguments`]. * Define a function to batch examples together, here the [`DefaultDataCollator`] will do just fine * Pass the training arguments to [`Trainer`] along with the model, dataset, and data collator. * Call [`~Trainer.train`] to finetune your model. >>> from transformers import AutoModelForDocumentQuestionAnswering >>> model = AutoModelForDocumentQuestionAnswering.from_pretrained(model_checkpoint) In the [`TrainingArguments`] use `output_dir` to specify where to save your model, and configure hyperparameters as you see fit. If you wish to share your model with the community, set `push_to_hub` to `True` (you must be signed in to Hugging Face to upload your model). In this case the `output_dir` will also be the name of the repo where your model checkpoint will be pushed. >>> from transformers import TrainingArguments >>> # REPLACE THIS WITH YOUR REPO ID >>> repo_id = ""MariaK/layoutlmv2-base-uncased_finetuned_docvqa"" >>> training_args = TrainingArguments( output_dir=repo_id, per_device_train_batch_size=4, num_train_epochs=20, save_steps=200, logging_steps=50, evaluation_strategy=""steps"", learning_rate=5e-5, save_total_limit=2, remove_unused_columns=False, push_to_hub=True, ) Define a simple data collator to batch examples together. >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator() Finally, bring everything together, and call [`~Trainer.train`]: >>> from transformers import Trainer >>> trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=encoded_train_dataset, eval_dataset=encoded_test_dataset, tokenizer=processor, ) >>> trainer.train() To add the final model to 🤗 Hub, create a model card and call `push_to_hub`: >>> trainer.create_model_card() >>> trainer.push_to_hub() ## Inference Now that you have finetuned a LayoutLMv2 model, and uploaded it to the 🤗 Hub, you can use it for inference. The simplest way to try out your finetuned model for inference is to use it in a [`Pipeline`]. Let's take an example: >>> example = dataset[""test""][2] >>> question = example[""query""][""en""] >>> image = example[""image""] >>> print(question) >>> print(example[""answers""]) 'Who is ‘presiding’ TRRF GENERAL SESSION (PART 1)?' ['TRRF Vice President', 'lee a. waller'] Next, instantiate a pipeline for document question answering with your model, and pass the image + question combination to it. 
>>> from transformers import pipeline >>> qa_pipeline = pipeline(""document-question-answering"", model=""MariaK/layoutlmv2-base-uncased_finetuned_docvqa"") >>> qa_pipeline(image, question) [{'score': 0.9949808120727539, 'answer': 'Lee A. Waller', 'start': 55, 'end': 57}] You can also manually replicate the results of the pipeline if you'd like: 1. Take an image and a question, prepare them for the model using the processor from your model. 2. Forward the result or preprocessing through the model. 3. The model returns `start_logits` and `end_logits`, which indicate which token is at the start of the answer and which token is at the end of the answer. Both have shape (batch_size, sequence_length). 4. Take an argmax on the last dimension of both the `start_logits` and `end_logits` to get the predicted `start_idx` and `end_idx`. 5. Decode the answer with the tokenizer. >>> import torch >>> from transformers import AutoProcessor >>> from transformers import AutoModelForDocumentQuestionAnswering >>> processor = AutoProcessor.from_pretrained(""MariaK/layoutlmv2-base-uncased_finetuned_docvqa"") >>> model = AutoModelForDocumentQuestionAnswering.from_pretrained(""MariaK/layoutlmv2-base-uncased_finetuned_docvqa"") >>> with torch.no_grad(): encoding = processor(image.convert(""RGB""), question, return_tensors=""pt"") outputs = model(**encoding) start_logits = outputs.start_logits end_logits = outputs.end_logits predicted_start_idx = start_logits.argmax(-1).item() predicted_end_idx = end_logits.argmax(-1).item() >>> processor.tokenizer.decode(encoding.input_ids.squeeze()[predicted_start_idx : predicted_end_idx + 1]) 'lee a. waller' ```" internal/pipelines_utils.md," # Utilities for pipelines This page lists all the utility functions the library provides for pipelines. Most of those are only useful if you are studying the code of the models in the library. ## Argument handling [[autodoc]] pipelines.ArgumentHandler [[autodoc]] pipelines.ZeroShotClassificationArgumentHandler [[autodoc]] pipelines.QuestionAnsweringArgumentHandler ## Data format [[autodoc]] pipelines.PipelineDataFormat [[autodoc]] pipelines.CsvPipelineDataFormat [[autodoc]] pipelines.JsonPipelineDataFormat [[autodoc]] pipelines.PipedPipelineDataFormat ## Utilities [[autodoc]] pipelines.PipelineException " internal/time_series_utils.md," # Time Series Utilities This page lists all the utility functions and classes that can be used for Time Series based models. Most of those are only useful if you are studying the code of the time series models or you wish to add to the collection of distributional output classes. ## Distributional Output [[autodoc]] time_series_utils.NormalOutput [[autodoc]] time_series_utils.StudentTOutput [[autodoc]] time_series_utils.NegativeBinomialOutput " internal/modeling_utils.md," # Custom Layers and Utilities This page lists all the custom layers used by the library, as well as the utility functions it provides for modeling. Most of those are only useful if you are studying the code of the models in the library. 
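As a quick, hedged illustration of one of the helpers listed below, the sketch uses `apply_chunking_to_forward` to run a feed-forward layer over the sequence dimension in fixed-size chunks, the trick several models use to trade a little compute for lower peak memory. The layer sizes and chunk size here are arbitrary example values.

```python
import torch
from transformers.pytorch_utils import apply_chunking_to_forward

# A toy feed-forward layer that acts independently on each sequence position.
ffn = torch.nn.Linear(16, 16)


def forward_fn(hidden_states):
    return ffn(hidden_states)


hidden_states = torch.randn(2, 128, 16)  # (batch, seq_len, hidden); 128 is divisible by the chunk size

# Split the sequence dimension (dim 1) into chunks of 32, apply forward_fn to each chunk,
# and concatenate the results back together.
chunked = apply_chunking_to_forward(forward_fn, 32, 1, hidden_states)
full = forward_fn(hidden_states)
print(torch.allclose(chunked, full, atol=1e-6))  # expected: True
```

Because the layer is position-wise, chunking changes the memory profile but not the result.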
## Pytorch custom modules [[autodoc]] pytorch_utils.Conv1D [[autodoc]] modeling_utils.PoolerStartLogits - forward [[autodoc]] modeling_utils.PoolerEndLogits - forward [[autodoc]] modeling_utils.PoolerAnswerClass - forward [[autodoc]] modeling_utils.SquadHeadOutput [[autodoc]] modeling_utils.SQuADHead - forward [[autodoc]] modeling_utils.SequenceSummary - forward ## PyTorch Helper Functions [[autodoc]] pytorch_utils.apply_chunking_to_forward [[autodoc]] pytorch_utils.find_pruneable_heads_and_indices [[autodoc]] pytorch_utils.prune_layer [[autodoc]] pytorch_utils.prune_conv1d_layer [[autodoc]] pytorch_utils.prune_linear_layer ## TensorFlow custom layers [[autodoc]] modeling_tf_utils.TFConv1D [[autodoc]] modeling_tf_utils.TFSequenceSummary ## TensorFlow loss functions [[autodoc]] modeling_tf_utils.TFCausalLanguageModelingLoss [[autodoc]] modeling_tf_utils.TFMaskedLanguageModelingLoss [[autodoc]] modeling_tf_utils.TFMultipleChoiceLoss [[autodoc]] modeling_tf_utils.TFQuestionAnsweringLoss [[autodoc]] modeling_tf_utils.TFSequenceClassificationLoss [[autodoc]] modeling_tf_utils.TFTokenClassificationLoss ## TensorFlow Helper Functions [[autodoc]] modeling_tf_utils.get_initializer [[autodoc]] modeling_tf_utils.keras_serializable [[autodoc]] modeling_tf_utils.shape_list " internal/file_utils.md," # General Utilities This page lists all of Transformers general utility functions that are found in the file `utils.py`. Most of those are only useful if you are studying the general code in the library. ## Enums and namedtuples [[autodoc]] utils.ExplicitEnum [[autodoc]] utils.PaddingStrategy [[autodoc]] utils.TensorType ## Special Decorators [[autodoc]] utils.add_start_docstrings [[autodoc]] utils.add_start_docstrings_to_model_forward [[autodoc]] utils.add_end_docstrings [[autodoc]] utils.add_code_sample_docstrings [[autodoc]] utils.replace_return_docstrings ## Special Properties [[autodoc]] utils.cached_property ## Other Utilities [[autodoc]] utils._LazyModule " internal/tokenization_utils.md," # Utilities for Tokenizers This page lists all the utility functions used by the tokenizers, mainly the class [`~tokenization_utils_base.PreTrainedTokenizerBase`] that implements the common methods between [`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`] and the mixin [`~tokenization_utils_base.SpecialTokensMixin`]. Most of those are only useful if you are studying the code of the tokenizers in the library. ## PreTrainedTokenizerBase [[autodoc]] tokenization_utils_base.PreTrainedTokenizerBase - __call__ - all ## SpecialTokensMixin [[autodoc]] tokenization_utils_base.SpecialTokensMixin ## Enums and namedtuples [[autodoc]] tokenization_utils_base.TruncationStrategy [[autodoc]] tokenization_utils_base.CharSpan [[autodoc]] tokenization_utils_base.TokenSpan " internal/audio_utils.md," # Utilities for `FeatureExtractors` This page lists all the utility functions that can be used by the audio [`FeatureExtractor`] in order to compute special features from a raw audio using common algorithms such as *Short Time Fourier Transform* or *log mel spectrogram*. Most of those are only useful if you are studying the code of the audio processors in the library. 
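As a rough, hedged sketch of how these helpers fit together, the snippet below computes a log-mel spectrogram from a fake waveform, loosely mirroring a Whisper-style 80-bin setup. The specific parameter values (400-sample frames, 160-sample hops, 8 kHz cutoff) are illustrative choices, and the exact argument names should be double-checked against the reference entries that follow.

```python
import numpy as np
from transformers.audio_utils import mel_filter_bank, spectrogram, window_function

sampling_rate = 16000
waveform = np.random.randn(sampling_rate).astype(np.float32)  # 1 second of fake audio

# 80 triangular mel filters over 201 frequency bins (n_fft=400 -> 400 // 2 + 1 bins).
mel_filters = mel_filter_bank(
    num_frequency_bins=201,
    num_mel_filters=80,
    min_frequency=0.0,
    max_frequency=8000.0,
    sampling_rate=sampling_rate,
    norm="slaney",
    mel_scale="slaney",
)

# Power spectrogram projected onto the mel filters, then log10-compressed.
log_mel = spectrogram(
    waveform,
    window_function(400, "hann"),
    frame_length=400,
    hop_length=160,
    power=2.0,
    mel_filters=mel_filters,
    log_mel="log10",
)
print(log_mel.shape)  # (80, num_frames)
```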
## Audio Transformations [[autodoc]] audio_utils.hertz_to_mel [[autodoc]] audio_utils.mel_to_hertz [[autodoc]] audio_utils.mel_filter_bank [[autodoc]] audio_utils.optimal_fft_length [[autodoc]] audio_utils.window_function [[autodoc]] audio_utils.spectrogram [[autodoc]] audio_utils.power_to_db [[autodoc]] audio_utils.amplitude_to_db " internal/generation_utils.md," # Utilities for Generation This page lists all the utility functions used by [`~generation.GenerationMixin.generate`], [`~generation.GenerationMixin.greedy_search`], [`~generation.GenerationMixin.contrastive_search`], [`~generation.GenerationMixin.sample`], [`~generation.GenerationMixin.beam_search`], [`~generation.GenerationMixin.beam_sample`], [`~generation.GenerationMixin.group_beam_search`], and [`~generation.GenerationMixin.constrained_beam_search`]. Most of those are only useful if you are studying the code of the generate methods in the library. ## Generate Outputs The output of [`~generation.GenerationMixin.generate`] is an instance of a subclass of [`~utils.ModelOutput`]. This output is a data structure containing all the information returned by [`~generation.GenerationMixin.generate`], but it can also be used as a tuple or a dictionary. Here's an example: ```python from transformers import GPT2Tokenizer, GPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained(""gpt2"") model = GPT2LMHeadModel.from_pretrained(""gpt2"") inputs = tokenizer(""Hello, my dog is cute and "", return_tensors=""pt"") generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True) The `generation_output` object is a [`~generation.GreedySearchDecoderOnlyOutput`]; as we can see in the documentation of that class below, it has the following attributes: - `sequences`: the generated sequences of tokens - `scores` (optional): the prediction scores of the language modelling head, for each generation step - `hidden_states` (optional): the hidden states of the model, for each generation step - `attentions` (optional): the attention weights of the model, for each generation step Here we have the `scores` since we passed along `output_scores=True`, but we don't have `hidden_states` and `attentions` because we didn't pass `output_hidden_states=True` or `output_attentions=True`. You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you will get `None`. Here for instance `generation_output.scores` are all the generated prediction scores of the language modeling head, and `generation_output.attentions` is `None`. When using our `generation_output` object as a tuple, it only keeps the attributes that don't have `None` values. Here, for instance, it has two elements, `sequences` then `scores`, so ```python generation_output[:2] will return the tuple `(generation_output.sequences, generation_output.scores)` for instance. When using our `generation_output` object as a dictionary, it only keeps the attributes that don't have `None` values. Here, for instance, it has two keys that are `sequences` and `scores`. We document here all output types.
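Before the reference entries, here is a small sketch (not part of the original page) showing how a couple of the utilities documented below plug into `generate`: a custom [`LogitsProcessorList`] that enforces a minimum length, plus a [`TextStreamer`] that prints tokens as they are produced. The `gpt2` checkpoint and the specific lengths are only convenient example choices.

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    LogitsProcessorList,
    MinLengthLogitsProcessor,
    TextStreamer,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hello, my dog is cute and", return_tensors="pt")

# Disallow the EOS token until at least 20 tokens have been generated.
logits_processor = LogitsProcessorList(
    [MinLengthLogitsProcessor(20, eos_token_id=model.config.eos_token_id)]
)

# Print decoded tokens to stdout as generation progresses.
streamer = TextStreamer(tokenizer)

_ = model.generate(
    **inputs,
    logits_processor=logits_processor,
    streamer=streamer,
    max_new_tokens=30,
    do_sample=False,
)
```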
### PyTorch [[autodoc]] generation.GreedySearchEncoderDecoderOutput [[autodoc]] generation.GreedySearchDecoderOnlyOutput [[autodoc]] generation.SampleEncoderDecoderOutput [[autodoc]] generation.SampleDecoderOnlyOutput [[autodoc]] generation.BeamSearchEncoderDecoderOutput [[autodoc]] generation.BeamSearchDecoderOnlyOutput [[autodoc]] generation.BeamSampleEncoderDecoderOutput [[autodoc]] generation.BeamSampleDecoderOnlyOutput [[autodoc]] generation.ContrastiveSearchEncoderDecoderOutput [[autodoc]] generation.ContrastiveSearchDecoderOnlyOutput ### TensorFlow [[autodoc]] generation.TFGreedySearchEncoderDecoderOutput [[autodoc]] generation.TFGreedySearchDecoderOnlyOutput [[autodoc]] generation.TFSampleEncoderDecoderOutput [[autodoc]] generation.TFSampleDecoderOnlyOutput [[autodoc]] generation.TFBeamSearchEncoderDecoderOutput [[autodoc]] generation.TFBeamSearchDecoderOnlyOutput [[autodoc]] generation.TFBeamSampleEncoderDecoderOutput [[autodoc]] generation.TFBeamSampleDecoderOnlyOutput [[autodoc]] generation.TFContrastiveSearchEncoderDecoderOutput [[autodoc]] generation.TFContrastiveSearchDecoderOnlyOutput ### FLAX [[autodoc]] generation.FlaxSampleOutput [[autodoc]] generation.FlaxGreedySearchOutput [[autodoc]] generation.FlaxBeamSearchOutput ## LogitsProcessor A [`LogitsProcessor`] can be used to modify the prediction scores of a language model head for generation. ### PyTorch [[autodoc]] AlternatingCodebooksLogitsProcessor - __call__ [[autodoc]] ClassifierFreeGuidanceLogitsProcessor - __call__ [[autodoc]] EncoderNoRepeatNGramLogitsProcessor - __call__ [[autodoc]] EncoderRepetitionPenaltyLogitsProcessor - __call__ [[autodoc]] EpsilonLogitsWarper - __call__ [[autodoc]] EtaLogitsWarper - __call__ [[autodoc]] ExponentialDecayLengthPenalty - __call__ [[autodoc]] ForcedBOSTokenLogitsProcessor - __call__ [[autodoc]] ForcedEOSTokenLogitsProcessor - __call__ [[autodoc]] ForceTokensLogitsProcessor - __call__ [[autodoc]] HammingDiversityLogitsProcessor - __call__ [[autodoc]] InfNanRemoveLogitsProcessor - __call__ [[autodoc]] LogitNormalization - __call__ [[autodoc]] LogitsProcessor - __call__ [[autodoc]] LogitsProcessorList - __call__ [[autodoc]] LogitsWarper - __call__ [[autodoc]] MinLengthLogitsProcessor - __call__ [[autodoc]] MinNewTokensLengthLogitsProcessor - __call__ [[autodoc]] NoBadWordsLogitsProcessor - __call__ [[autodoc]] NoRepeatNGramLogitsProcessor - __call__ [[autodoc]] PrefixConstrainedLogitsProcessor - __call__ [[autodoc]] RepetitionPenaltyLogitsProcessor - __call__ [[autodoc]] SequenceBiasLogitsProcessor - __call__ [[autodoc]] SuppressTokensAtBeginLogitsProcessor - __call__ [[autodoc]] SuppressTokensLogitsProcessor - __call__ [[autodoc]] TemperatureLogitsWarper - __call__ [[autodoc]] TopKLogitsWarper - __call__ [[autodoc]] TopPLogitsWarper - __call__ [[autodoc]] TypicalLogitsWarper - __call__ [[autodoc]] UnbatchedClassifierFreeGuidanceLogitsProcessor - __call__ [[autodoc]] WhisperTimeStampLogitsProcessor - __call__ ### TensorFlow [[autodoc]] TFForcedBOSTokenLogitsProcessor - __call__ [[autodoc]] TFForcedEOSTokenLogitsProcessor - __call__ [[autodoc]] TFForceTokensLogitsProcessor - __call__ [[autodoc]] TFLogitsProcessor - __call__ [[autodoc]] TFLogitsProcessorList - __call__ [[autodoc]] TFLogitsWarper - __call__ [[autodoc]] TFMinLengthLogitsProcessor - __call__ [[autodoc]] TFNoBadWordsLogitsProcessor - __call__ [[autodoc]] TFNoRepeatNGramLogitsProcessor - __call__ [[autodoc]] TFRepetitionPenaltyLogitsProcessor - __call__ [[autodoc]] TFSuppressTokensAtBeginLogitsProcessor - __call__ 
[[autodoc]] TFSuppressTokensLogitsProcessor - __call__ [[autodoc]] TFTemperatureLogitsWarper - __call__ [[autodoc]] TFTopKLogitsWarper - __call__ [[autodoc]] TFTopPLogitsWarper - __call__ ### FLAX [[autodoc]] FlaxForcedBOSTokenLogitsProcessor - __call__ [[autodoc]] FlaxForcedEOSTokenLogitsProcessor - __call__ [[autodoc]] FlaxForceTokensLogitsProcessor - __call__ [[autodoc]] FlaxLogitsProcessor - __call__ [[autodoc]] FlaxLogitsProcessorList - __call__ [[autodoc]] FlaxLogitsWarper - __call__ [[autodoc]] FlaxMinLengthLogitsProcessor - __call__ [[autodoc]] FlaxSuppressTokensAtBeginLogitsProcessor - __call__ [[autodoc]] FlaxSuppressTokensLogitsProcessor - __call__ [[autodoc]] FlaxTemperatureLogitsWarper - __call__ [[autodoc]] FlaxTopKLogitsWarper - __call__ [[autodoc]] FlaxTopPLogitsWarper - __call__ [[autodoc]] FlaxWhisperTimeStampLogitsProcessor - __call__ ## StoppingCriteria A [`StoppingCriteria`] can be used to change when to stop generation (other than the EOS token). Please note that this is exclusively available to our PyTorch implementations. [[autodoc]] StoppingCriteria - __call__ [[autodoc]] StoppingCriteriaList - __call__ [[autodoc]] MaxLengthCriteria - __call__ [[autodoc]] MaxTimeCriteria - __call__ ## Constraints A [`Constraint`] can be used to force the generation to include specific tokens or sequences in the output. Please note that this is exclusively available to our PyTorch implementations. [[autodoc]] Constraint [[autodoc]] PhrasalConstraint [[autodoc]] DisjunctiveConstraint [[autodoc]] ConstraintListState ## BeamSearch [[autodoc]] BeamScorer - process - finalize [[autodoc]] BeamSearchScorer - process - finalize [[autodoc]] ConstrainedBeamSearchScorer - process - finalize ## Utilities [[autodoc]] top_k_top_p_filtering [[autodoc]] tf_top_k_top_p_filtering ## Streamers [[autodoc]] TextStreamer [[autodoc]] TextIteratorStreamer " internal/trainer_utils.md," # Utilities for Trainer This page lists all the utility functions used by [`Trainer`]. Most of those are only useful if you are studying the code of the Trainer in the library. ## Utilities [[autodoc]] EvalPrediction [[autodoc]] IntervalStrategy [[autodoc]] enable_full_determinism [[autodoc]] set_seed [[autodoc]] torch_distributed_zero_first ## Callbacks internals [[autodoc]] trainer_callback.CallbackHandler ## Distributed Evaluation [[autodoc]] trainer_pt_utils.DistributedTensorGatherer ## Argument Parsing [[autodoc]] HfArgumentParser ## Debug Utilities [[autodoc]] debug_utils.DebugUnderflowOverflow " internal/image_processing_utils.md," # Utilities for Image Processors This page lists all the utility functions used by the image processors, mainly the functional transformations used to process the images. Most of those are only useful if you are studying the code of the image processors in the library.
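To make the list below more concrete, here is a small, hedged sketch chaining a few of these transforms on a NumPy image, similar to what an image processor does internally (resize, crop, rescale, normalize). The sizes, mean, and std are arbitrary example values, and the exact argument names and defaults should be verified against the reference entries.

```python
import numpy as np
from transformers.image_transforms import center_crop, normalize, rescale, resize

# A fake channels-last RGB image (H, W, C) with values in [0, 255].
image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)

image = resize(image, size=(256, 256))           # resize to 256x256
image = center_crop(image, size=(224, 224))      # crop the central 224x224 region
image = rescale(image, scale=1 / 255)            # map pixel values to [0, 1]
image = normalize(image, mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])

print(image.shape, image.dtype)  # (224, 224, 3), float
```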
## Image Transformations [[autodoc]] image_transforms.center_crop [[autodoc]] image_transforms.center_to_corners_format [[autodoc]] image_transforms.corners_to_center_format [[autodoc]] image_transforms.id_to_rgb [[autodoc]] image_transforms.normalize [[autodoc]] image_transforms.pad [[autodoc]] image_transforms.rgb_to_id [[autodoc]] image_transforms.rescale [[autodoc]] image_transforms.resize [[autodoc]] image_transforms.to_pil_image ## ImageProcessingMixin [[autodoc]] image_processing_utils.ImageProcessingMixin " model_doc/bert.md," # BERT ## Overview The BERT model was proposed in [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. It's a bidirectional transformer pretrained using a combination of masked language modeling objective and next sentence prediction on a large corpus comprising the Toronto Book Corpus and Wikipedia. The abstract from the paper is the following: *We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.* *BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).* This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/google-research/bert). ## Usage tips - BERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - BERT was trained with the masked language modeling (MLM) and next sentence prediction (NSP) objectives. It is efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. - Corrupts the inputs by using random masking, more precisely, during pretraining, a given percentage of tokens (usually 15%) is masked by: * a special mask token with probability 0.8 * a random token different from the one masked with probability 0.1 * the same token with probability 0.1 - The model must predict the original sentence, but has a second objective: inputs are two sentences A and B (with a separation token in between). With probability 50%, the sentences are consecutive in the corpus, in the remaining 50% they are not related. The model has to predict if the sentences are consecutive or not. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
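As a quick complement to the curated resources that follow, a minimal masked-language-modeling sketch with the `fill-mask` pipeline looks like this; `bert-base-uncased` is just a convenient example checkpoint.

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
predictions = unmasker("Paris is the [MASK] of France.")
print(predictions[0]["token_str"])  # most likely completion, e.g. "capital"
```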
- A blog post on [BERT Text Classification in a different language](https://www.philschmid.de/bert-text-classification-in-a-different-language). - A notebook for [Finetuning BERT (and friends) for multi-label text classification](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/BERT/Fine_tuning_BERT_(and_friends)_for_multi_label_text_classification.ipynb). - A notebook on how to [Finetune BERT for multi-label classification using PyTorch](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb). 🌎 - A notebook on how to [warm-start an EncoderDecoder model with BERT for summarization](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb). - [`BertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb). - [`TFBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb). - [`FlaxBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb). - [Text classification task guide](../tasks/sequence_classification) - A blog post on how to use [Hugging Face Transformers with Keras: Fine-tune a non-English BERT for Named Entity Recognition](https://www.philschmid.de/huggingface-transformers-keras-tf). - A notebook for [Finetuning BERT for named-entity recognition](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/Custom_Named_Entity_Recognition_with_BERT_only_first_wordpiece.ipynb) using only the first wordpiece of each word in the word label during tokenization. To propagate the label of the word to all wordpieces, see this [version](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/BERT/Custom_Named_Entity_Recognition_with_BERT.ipynb) of the notebook instead. - [`BertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb). - [`TFBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb). - [`FlaxBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification). - [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course. 
- [Token classification task guide](../tasks/token_classification) - [`BertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - [`FlaxBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb). - [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course. - [Masked language modeling task guide](../tasks/masked_language_modeling) - [`BertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb). - [`TFBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb). - [`FlaxBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering). - [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course. - [Question answering task guide](../tasks/question_answering) **Multiple choice** - [`BertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb). - [`TFBertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb). - [Multiple choice task guide](../tasks/multiple_choice) ⚡️ **Inference** - A blog post on how to [Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia](https://huggingface.co/blog/bert-inferentia-sagemaker). - A blog post on how to [Accelerate BERT inference with DeepSpeed-Inference on GPUs](https://www.philschmid.de/bert-deepspeed-inference). ⚙️ **Pretraining** - A blog post on [Pre-Training BERT with Hugging Face Transformers and Habana Gaudi](https://www.philschmid.de/pre-training-bert-habana). 🚀 **Deploy** - A blog post on how to [Convert Transformers to ONNX with Hugging Face Optimum](https://www.philschmid.de/convert-transformers-to-onnx). 
- A blog post on how to [Setup Deep Learning environment for Hugging Face Transformers with Habana Gaudi on AWS](https://www.philschmid.de/getting-started-habana-gaudi#conclusion). - A blog post on [Autoscaling BERT with Hugging Face Transformers, Amazon SageMaker and Terraform module](https://www.philschmid.de/terraform-huggingface-amazon-sagemaker-advanced). - A blog post on [Serverless BERT with HuggingFace, AWS Lambda, and Docker](https://www.philschmid.de/serverless-bert-with-huggingface-aws-lambda-docker). - A blog post on [Hugging Face Transformers BERT fine-tuning using Amazon SageMaker and Training Compiler](https://www.philschmid.de/huggingface-amazon-sagemaker-training-compiler). - A blog post on [Task-specific knowledge distillation for BERT using Transformers & Amazon SageMaker](https://www.philschmid.de/knowledge-distillation-bert-transformers). ## BertConfig [[autodoc]] BertConfig - all ## BertTokenizer [[autodoc]] BertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## BertTokenizerFast [[autodoc]] BertTokenizerFast ## TFBertTokenizer [[autodoc]] TFBertTokenizer ## Bert specific outputs [[autodoc]] models.bert.modeling_bert.BertForPreTrainingOutput [[autodoc]] models.bert.modeling_tf_bert.TFBertForPreTrainingOutput [[autodoc]] models.bert.modeling_flax_bert.FlaxBertForPreTrainingOutput ## BertModel [[autodoc]] BertModel - forward ## BertForPreTraining [[autodoc]] BertForPreTraining - forward ## BertLMHeadModel [[autodoc]] BertLMHeadModel - forward ## BertForMaskedLM [[autodoc]] BertForMaskedLM - forward ## BertForNextSentencePrediction [[autodoc]] BertForNextSentencePrediction - forward ## BertForSequenceClassification [[autodoc]] BertForSequenceClassification - forward ## BertForMultipleChoice [[autodoc]] BertForMultipleChoice - forward ## BertForTokenClassification [[autodoc]] BertForTokenClassification - forward ## BertForQuestionAnswering [[autodoc]] BertForQuestionAnswering - forward ## TFBertModel [[autodoc]] TFBertModel - call ## TFBertForPreTraining [[autodoc]] TFBertForPreTraining - call ## TFBertModelLMHeadModel [[autodoc]] TFBertLMHeadModel - call ## TFBertForMaskedLM [[autodoc]] TFBertForMaskedLM - call ## TFBertForNextSentencePrediction [[autodoc]] TFBertForNextSentencePrediction - call ## TFBertForSequenceClassification [[autodoc]] TFBertForSequenceClassification - call ## TFBertForMultipleChoice [[autodoc]] TFBertForMultipleChoice - call ## TFBertForTokenClassification [[autodoc]] TFBertForTokenClassification - call ## TFBertForQuestionAnswering [[autodoc]] TFBertForQuestionAnswering - call ## FlaxBertModel [[autodoc]] FlaxBertModel - __call__ ## FlaxBertForPreTraining [[autodoc]] FlaxBertForPreTraining - __call__ ## FlaxBertForCausalLM [[autodoc]] FlaxBertForCausalLM - __call__ ## FlaxBertForMaskedLM [[autodoc]] FlaxBertForMaskedLM - __call__ ## FlaxBertForNextSentencePrediction [[autodoc]] FlaxBertForNextSentencePrediction - __call__ ## FlaxBertForSequenceClassification [[autodoc]] FlaxBertForSequenceClassification - __call__ ## FlaxBertForMultipleChoice [[autodoc]] FlaxBertForMultipleChoice - __call__ ## FlaxBertForTokenClassification [[autodoc]] FlaxBertForTokenClassification - __call__ ## FlaxBertForQuestionAnswering [[autodoc]] FlaxBertForQuestionAnswering - __call__ " model_doc/squeezebert.md," # SqueezeBERT ## Overview The SqueezeBERT model was proposed in [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by 
Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, Kurt W. Keutzer. It's a bidirectional transformer similar to the BERT model. The key difference between the BERT architecture and the SqueezeBERT architecture is that SqueezeBERT uses [grouped convolutions](https://blog.yani.io/filter-group-tutorial) instead of fully-connected layers for the Q, K, V and FFN layers. The abstract from the paper is the following: *Humans read and write hundreds of billions of messages every day. Further, due to the availability of large datasets, large computing systems, and better neural network models, natural language processing (NLP) technology has made significant strides in understanding, proofreading, and organizing these messages. Thus, there is a significant opportunity to deploy NLP in myriad applications to help web users, social networks, and businesses. In particular, we consider smartphones and other mobile devices as crucial platforms for deploying NLP models at scale. However, today's highly-accurate NLP neural network models such as BERT and RoBERTa are extremely computationally expensive, with BERT-base taking 1.7 seconds to classify a text snippet on a Pixel 3 smartphone. In this work, we observe that methods such as grouped convolutions have yielded significant speedups for computer vision networks, but many of these techniques have not been adopted by NLP neural network designers. We demonstrate how to replace several operations in self-attention layers with grouped convolutions, and we use this technique in a novel network architecture called SqueezeBERT, which runs 4.3x faster than BERT-base on the Pixel 3 while achieving competitive accuracy on the GLUE test set. The SqueezeBERT code will be released.* This model was contributed by [forresti](https://huggingface.co/forresti). ## Usage tips - SqueezeBERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - SqueezeBERT is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained with a causal language modeling (CLM) objective are better in that regard. - For best results when finetuning on sequence classification tasks, it is recommended to start with the *squeezebert/squeezebert-mnli-headless* checkpoint. 
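Following that last tip, a minimal, hedged sketch of starting a sequence-classification fine-tune from the recommended checkpoint could look like the following; the two-label setup is an arbitrary example, not something prescribed by the paper.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "squeezebert/squeezebert-mnli-headless"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# num_labels=2 is an arbitrary example; since the checkpoint is "headless", a fresh
# classification head of that size is initialized on top of the pretrained body.
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

inputs = tokenizer("SqueezeBERT swaps dense layers for grouped convolutions.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2])
```

From here the model can be trained as usual, for example with [`Trainer`] and a labeled dataset.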
## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## SqueezeBertConfig [[autodoc]] SqueezeBertConfig ## SqueezeBertTokenizer [[autodoc]] SqueezeBertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## SqueezeBertTokenizerFast [[autodoc]] SqueezeBertTokenizerFast ## SqueezeBertModel [[autodoc]] SqueezeBertModel ## SqueezeBertForMaskedLM [[autodoc]] SqueezeBertForMaskedLM ## SqueezeBertForSequenceClassification [[autodoc]] SqueezeBertForSequenceClassification ## SqueezeBertForMultipleChoice [[autodoc]] SqueezeBertForMultipleChoice ## SqueezeBertForTokenClassification [[autodoc]] SqueezeBertForTokenClassification ## SqueezeBertForQuestionAnswering [[autodoc]] SqueezeBertForQuestionAnswering " model_doc/flaubert.md," # FlauBERT ## Overview The FlauBERT model was proposed in the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le et al. It's a transformer model pretrained using a masked language modeling (MLM) objective (like BERT). The abstract from the paper is the following: *Language models have become a key step to achieve state-of-the art results in many different Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task, along with their contextualization at the sentence level. This has been widely demonstrated for English using contextualized representations (Dai and Le, 2015; Peters et al., 2018; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2019; Yang et al., 2019b). In this paper, we introduce and share FlauBERT, a model learned on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. We apply our French language models to diverse NLP tasks (text classification, paraphrasing, natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pretraining approaches. Different versions of FlauBERT as well as a unified evaluation protocol for the downstream tasks, called FLUE (French Language Understanding Evaluation), are shared to the research community for further reproducible experiments in French NLP.* This model was contributed by [formiel](https://huggingface.co/formiel). The original code can be found [here](https://github.com/getalp/Flaubert). Tips: - Like RoBERTa, without the sentence ordering prediction (so just trained on the MLM objective). 
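Since the tips above are brief, here is a minimal, hedged sketch of extracting contextual embeddings with FlauBERT; `flaubert/flaubert_base_cased` is used purely as an example checkpoint.

```python
import torch
from transformers import FlaubertModel, FlaubertTokenizer

checkpoint = "flaubert/flaubert_base_cased"
tokenizer = FlaubertTokenizer.from_pretrained(checkpoint)
model = FlaubertModel.from_pretrained(checkpoint)

inputs = tokenizer("Le camembert est délicieux.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```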
## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## FlaubertConfig [[autodoc]] FlaubertConfig ## FlaubertTokenizer [[autodoc]] FlaubertTokenizer ## FlaubertModel [[autodoc]] FlaubertModel - forward ## FlaubertWithLMHeadModel [[autodoc]] FlaubertWithLMHeadModel - forward ## FlaubertForSequenceClassification [[autodoc]] FlaubertForSequenceClassification - forward ## FlaubertForMultipleChoice [[autodoc]] FlaubertForMultipleChoice - forward ## FlaubertForTokenClassification [[autodoc]] FlaubertForTokenClassification - forward ## FlaubertForQuestionAnsweringSimple [[autodoc]] FlaubertForQuestionAnsweringSimple - forward ## FlaubertForQuestionAnswering [[autodoc]] FlaubertForQuestionAnswering - forward ## TFFlaubertModel [[autodoc]] TFFlaubertModel - call ## TFFlaubertWithLMHeadModel [[autodoc]] TFFlaubertWithLMHeadModel - call ## TFFlaubertForSequenceClassification [[autodoc]] TFFlaubertForSequenceClassification - call ## TFFlaubertForMultipleChoice [[autodoc]] TFFlaubertForMultipleChoice - call ## TFFlaubertForTokenClassification [[autodoc]] TFFlaubertForTokenClassification - call ## TFFlaubertForQuestionAnsweringSimple [[autodoc]] TFFlaubertForQuestionAnsweringSimple - call " model_doc/vit_hybrid.md," # Hybrid Vision Transformer (ViT Hybrid) ## Overview The hybrid Vision Transformer (ViT) model was proposed in [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. It's the first paper that successfully trains a Transformer encoder on ImageNet, attaining very good results compared to familiar convolutional architectures. ViT hybrid is a slight variant of the [plain Vision Transformer](vit), by leveraging a convolutional backbone (specifically, [BiT](bit)) whose features are used as initial ""tokens"" for the Transformer. The abstract from the paper is the following: *While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.* This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code (written in JAX) can be found [here](https://github.com/google-research/vision_transformer). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT Hybrid. 
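As a quick complement to the resources below, a minimal, hedged image-classification sketch could look like this; `google/vit-hybrid-base-bit-384` and the COCO sample image are used only as examples.

```python
import requests
import torch
from PIL import Image
from transformers import ViTHybridForImageClassification, ViTHybridImageProcessor

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

checkpoint = "google/vit-hybrid-base-bit-384"
processor = ViTHybridImageProcessor.from_pretrained(checkpoint)
model = ViTHybridForImageClassification.from_pretrained(checkpoint)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # predicted ImageNet class
```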
- [`ViTHybridForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## ViTHybridConfig [[autodoc]] ViTHybridConfig ## ViTHybridImageProcessor [[autodoc]] ViTHybridImageProcessor - preprocess ## ViTHybridModel [[autodoc]] ViTHybridModel - forward ## ViTHybridForImageClassification [[autodoc]] ViTHybridForImageClassification - forward " model_doc/unispeech-sat.md," # UniSpeech-SAT ## Overview The UniSpeech-SAT model was proposed in [UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu . The abstract from the paper is the following: *Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisely and incorporate during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The Authors' code can be found [here](https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT). ## Usage tips - UniSpeechSat is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use [`Wav2Vec2Processor`] for the feature extraction. - UniSpeechSat model can be fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`]. - UniSpeechSat performs especially well on speaker verification, speaker identification, and speaker diarization tasks. 
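Building on the last tip, here is a minimal, hedged sketch of extracting speaker embeddings for verification; `microsoft/unispeech-sat-base-plus-sv` is assumed as the checkpoint, and random noise stands in for real 16 kHz speech just to show the shapes involved.

```python
import torch
from transformers import UniSpeechSatForXVector, Wav2Vec2FeatureExtractor

checkpoint = "microsoft/unispeech-sat-base-plus-sv"  # assumed speaker-verification checkpoint
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(checkpoint)
model = UniSpeechSatForXVector.from_pretrained(checkpoint)

# One second of fake 16 kHz audio.
waveform = torch.randn(16000).numpy()
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    embeddings = model(**inputs).embeddings

# L2-normalized x-vectors can be compared with cosine similarity for speaker verification.
embeddings = torch.nn.functional.normalize(embeddings, dim=-1)
print(embeddings.shape)  # (1, embedding_dim)
```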
## Resources - [Audio classification task guide](../tasks/audio_classification) - [Automatic speech recognition task guide](../tasks/asr) ## UniSpeechSatConfig [[autodoc]] UniSpeechSatConfig ## UniSpeechSat specific outputs [[autodoc]] models.unispeech_sat.modeling_unispeech_sat.UniSpeechSatForPreTrainingOutput ## UniSpeechSatModel [[autodoc]] UniSpeechSatModel - forward ## UniSpeechSatForCTC [[autodoc]] UniSpeechSatForCTC - forward ## UniSpeechSatForSequenceClassification [[autodoc]] UniSpeechSatForSequenceClassification - forward ## UniSpeechSatForAudioFrameClassification [[autodoc]] UniSpeechSatForAudioFrameClassification - forward ## UniSpeechSatForXVector [[autodoc]] UniSpeechSatForXVector - forward ## UniSpeechSatForPreTraining [[autodoc]] UniSpeechSatForPreTraining - forward " model_doc/xlm-roberta.md," # XLM-RoBERTa ## Overview The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. The abstract from the paper is the following: *This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +13.8% average accuracy on XNLI, +12.3% average F1 score on MLQA, and +2.1% average F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 11.8% in XNLI accuracy for Swahili and 9.2% for Urdu over the previous XLM model. We also present a detailed empirical evaluation of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-Ris very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make XLM-R code, data, and models publicly available.* This model was contributed by [stefan-it](https://huggingface.co/stefan-it). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/xlmr). ## Usage tips - XLM-RoBERTa is a multilingual model trained on 100 different languages. Unlike some XLM multilingual models, it does not require `lang` tensors to understand which language is used, and should be able to determine the correct language from the input ids. - Uses RoBERTa tricks on the XLM approach, but does not use the translation language modeling objective. It only uses masked language modeling on sentences coming from one language. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with XLM-RoBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! 
The resource should ideally demonstrate something new instead of duplicating an existing resource. - A blog post on how to [finetune XLM RoBERTa for multiclass classification with Habana Gaudi on AWS](https://www.philschmid.de/habana-distributed-training) - [`XLMRobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb). - [`TFXLMRobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb). - [`FlaxXLMRobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb). - [Text classification](https://huggingface.co/docs/transformers/tasks/sequence_classification) chapter of the 🤗 Hugging Face Task Guides. - [Text classification task guide](../tasks/sequence_classification) - [`XLMRobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb). - [`TFXLMRobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb). - [`FlaxXLMRobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification). - [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course. - [Token classification task guide](../tasks/token_classification) - [`XLMRobertaForCausalLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [Causal language modeling](https://huggingface.co/docs/transformers/tasks/language_modeling) chapter of the 🤗 Hugging Face Task Guides. - [Causal language modeling task guide](../tasks/language_modeling) - [`XLMRobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFXLMRobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). 
- [`FlaxXLMRobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb). - [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course. - [Masked language modeling](../tasks/masked_language_modeling) - [`XLMRobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb). - [`TFXLMRobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb). - [`FlaxXLMRobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering). - [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course. - [Question answering task guide](../tasks/question_answering) **Multiple choice** - [`XLMRobertaForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb). - [`TFXLMRobertaForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb). - [Multiple choice task guide](../tasks/multiple_choice) 🚀 Deploy - A blog post on how to [Deploy Serverless XLM RoBERTa on AWS Lambda](https://www.philschmid.de/multilingual-serverless-xlm-roberta-with-huggingface). This implementation is the same as RoBERTa. Refer to the [documentation of RoBERTa](roberta) for usage examples as well as the information relative to the inputs and outputs. 
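To make the usage tip above concrete (the same checkpoint handles all of its languages without `lang` tensors), here is a minimal masked language modeling sketch; the `xlm-roberta-base` checkpoint and the example sentence are illustrative choices, not part of the resources listed above.
```python
import torch
from transformers import AutoTokenizer, XLMRobertaForMaskedLM

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')
model = XLMRobertaForMaskedLM.from_pretrained('xlm-roberta-base')

# No language id is needed; the model infers the language from the input ids.
text = 'Bonjour, je suis un modèle <mask>.'
inputs = tokenizer(text, return_tensors='pt')

with torch.no_grad():
    logits = model(**inputs).logits

# Decode the highest-scoring token at the masked position.
mask_positions = (inputs['input_ids'] == tokenizer.mask_token_id).nonzero(as_tuple=True)
predicted_id = logits[mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```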
## XLMRobertaConfig [[autodoc]] XLMRobertaConfig ## XLMRobertaTokenizer [[autodoc]] XLMRobertaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## XLMRobertaTokenizerFast [[autodoc]] XLMRobertaTokenizerFast ## XLMRobertaModel [[autodoc]] XLMRobertaModel - forward ## XLMRobertaForCausalLM [[autodoc]] XLMRobertaForCausalLM - forward ## XLMRobertaForMaskedLM [[autodoc]] XLMRobertaForMaskedLM - forward ## XLMRobertaForSequenceClassification [[autodoc]] XLMRobertaForSequenceClassification - forward ## XLMRobertaForMultipleChoice [[autodoc]] XLMRobertaForMultipleChoice - forward ## XLMRobertaForTokenClassification [[autodoc]] XLMRobertaForTokenClassification - forward ## XLMRobertaForQuestionAnswering [[autodoc]] XLMRobertaForQuestionAnswering - forward ## TFXLMRobertaModel [[autodoc]] TFXLMRobertaModel - call ## TFXLMRobertaForCausalLM [[autodoc]] TFXLMRobertaForCausalLM - call ## TFXLMRobertaForMaskedLM [[autodoc]] TFXLMRobertaForMaskedLM - call ## TFXLMRobertaForSequenceClassification [[autodoc]] TFXLMRobertaForSequenceClassification - call ## TFXLMRobertaForMultipleChoice [[autodoc]] TFXLMRobertaForMultipleChoice - call ## TFXLMRobertaForTokenClassification [[autodoc]] TFXLMRobertaForTokenClassification - call ## TFXLMRobertaForQuestionAnswering [[autodoc]] TFXLMRobertaForQuestionAnswering - call ## FlaxXLMRobertaModel [[autodoc]] FlaxXLMRobertaModel - __call__ ## FlaxXLMRobertaForCausalLM [[autodoc]] FlaxXLMRobertaForCausalLM - __call__ ## FlaxXLMRobertaForMaskedLM [[autodoc]] FlaxXLMRobertaForMaskedLM - __call__ ## FlaxXLMRobertaForSequenceClassification [[autodoc]] FlaxXLMRobertaForSequenceClassification - __call__ ## FlaxXLMRobertaForMultipleChoice [[autodoc]] FlaxXLMRobertaForMultipleChoice - __call__ ## FlaxXLMRobertaForTokenClassification [[autodoc]] FlaxXLMRobertaForTokenClassification - __call__ ## FlaxXLMRobertaForQuestionAnswering [[autodoc]] FlaxXLMRobertaForQuestionAnswering - __call__ " model_doc/xlm-prophetnet.md," # XLM-ProphetNet **DISCLAIMER:** If you see something strange, file a [Github Issue](https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title) and assign @patrickvonplaten ## Overview The XLM-ProphetNet model was proposed in [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training,](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, Ming Zhou on 13 Jan, 2020. XLM-ProphetNet is an encoder-decoder model that can predict n future tokens for ""ngram"" language modeling instead of just the next token. Its architecture is identical to ProphetNet, but the model was trained on the multi-lingual ""wiki100"" Wikipedia dump. XLM-ProphetNet's model architecture and pretraining objective are the same as ProphetNet's, but XLM-ProphetNet was pre-trained on the cross-lingual dataset XGLUE. The abstract from the paper is the following: *In this paper, we present a new sequence-to-sequence pretraining model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time step.
The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale dataset (160GB) respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new state-of-the-art results on all these datasets compared to the models using the same scale pretraining corpus.* The Authors' code can be found [here](https://github.com/microsoft/ProphetNet). ## Resources - [Causal language modeling task guide](../tasks/language_modeling) - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## XLMProphetNetConfig [[autodoc]] XLMProphetNetConfig ## XLMProphetNetTokenizer [[autodoc]] XLMProphetNetTokenizer ## XLMProphetNetModel [[autodoc]] XLMProphetNetModel ## XLMProphetNetEncoder [[autodoc]] XLMProphetNetEncoder ## XLMProphetNetDecoder [[autodoc]] XLMProphetNetDecoder ## XLMProphetNetForConditionalGeneration [[autodoc]] XLMProphetNetForConditionalGeneration ## XLMProphetNetForCausalLM [[autodoc]] XLMProphetNetForCausalLM " model_doc/xglm.md," # XGLM ## Overview The XGLM model was proposed in [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. The abstract from the paper is the following: *Large-scale autoregressive language models such as GPT-3 are few-shot learners that can perform a wide range of language tasks without fine-tuning. While these models are known to be able to jointly represent many different languages, their training data is dominated by English, potentially limiting their cross-lingual generalization. In this work, we train multilingual autoregressive language models on a balanced corpus covering a diverse set of languages, and study their few- and zero-shot learning capabilities in a wide range of tasks. Our largest model with 7.5 billion parameters sets new state of the art in few-shot learning in more than 20 representative languages, outperforming GPT-3 of comparable size in multilingual commonsense reasoning (with +7.4% absolute accuracy improvement in 0-shot settings and +9.4% in 4-shot settings) and natural language inference (+5.4% in each of 0-shot and 4-shot settings). On the FLORES-101 machine translation benchmark, our model outperforms GPT-3 on 171 out of 182 translation directions with 32 training examples, while surpassing the official supervised baseline in 45 directions. We present a detailed analysis of where the model succeeds and fails, showing in particular that it enables cross-lingual in-context learning on some tasks, while there is still room for improvement on surface form robustness and adaptation to tasks that do not have a natural cloze form. Finally, we evaluate our models in social value tasks such as hate speech detection in five languages and find it has limitations similar to comparable sized GPT-3 models.* This model was contributed by [Suraj](https://huggingface.co/valhalla). 
The original code can be found [here](https://github.com/pytorch/fairseq/tree/main/examples/xglm). ## Resources - [Causal language modeling task guide](../tasks/language_modeling) ## XGLMConfig [[autodoc]] XGLMConfig ## XGLMTokenizer [[autodoc]] XGLMTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## XGLMTokenizerFast [[autodoc]] XGLMTokenizerFast ## XGLMModel [[autodoc]] XGLMModel - forward ## XGLMForCausalLM [[autodoc]] XGLMForCausalLM - forward ## TFXGLMModel [[autodoc]] TFXGLMModel - call ## TFXGLMForCausalLM [[autodoc]] TFXGLMForCausalLM - call ## FlaxXGLMModel [[autodoc]] FlaxXGLMModel - __call__ ## FlaxXGLMForCausalLM [[autodoc]] FlaxXGLMForCausalLM - __call__ " model_doc/megatron_gpt2.md," # MegatronGPT2 ## Overview The MegatronGPT2 model was proposed in [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. The abstract from the paper is the following: *Recent work in language modeling demonstrates that training large transformer models advances the state of the art in Natural Language Processing applications. However, very large models can be quite difficult to train due to memory constraints. In this work, we present our techniques for training very large transformer models and implement a simple, efficient intra-layer model parallel approach that enables training transformer models with billions of parameters. Our approach does not require a new compiler or library changes, is orthogonal and complimentary to pipeline model parallelism, and can be fully implemented with the insertion of a few communication operations in native PyTorch. We illustrate this approach by converging transformer based models up to 8.3 billion parameters using 512 GPUs. We sustain 15.1 PetaFLOPs across the entire application with 76% scaling efficiency when compared to a strong single GPU baseline that sustains 39 TeraFLOPs, which is 30% of peak FLOPs. To demonstrate that large language models can further advance the state of the art (SOTA), we train an 8.3 billion parameter transformer language model similar to GPT-2 and a 3.9 billion parameter model similar to BERT. We show that careful attention to the placement of layer normalization in BERT-like models is critical to achieving increased performance as the model size grows. Using the GPT-2 model we achieve SOTA results on the WikiText103 (10.8 compared to SOTA perplexity of 15.8) and LAMBADA (66.5% compared to SOTA accuracy of 63.2%) datasets. Our BERT model achieves SOTA results on the RACE dataset (90.9% compared to SOTA accuracy of 89.4%).* This model was contributed by [jdemouth](https://huggingface.co/jdemouth). The original code can be found [here](https://github.com/NVIDIA/Megatron-LM). That repository contains a multi-GPU and multi-node implementation of the Megatron Language models. In particular, it contains a hybrid model parallel approach using ""tensor parallel"" and ""pipeline parallel"" techniques. ## Usage tips We have provided pretrained [GPT2-345M](https://ngc.nvidia.com/catalog/models/nvidia:megatron_lm_345m) checkpoints for use to evaluate or finetuning downstream tasks. To access these checkpoints, first [sign up](https://ngc.nvidia.com/signup) for and setup the NVIDIA GPU Cloud (NGC) Registry CLI. 
Further documentation for downloading models can be found in the [NGC documentation](https://docs.nvidia.com/dgx/ngc-registry-cli-user-guide/index.html#topic_6_4_1). Alternatively, you can directly download the checkpoints using:
```bash
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O megatron_gpt2_345m_v0_0.zip
```
Once you have obtained the checkpoint from NVIDIA GPU Cloud (NGC), you have to convert it to a format that can easily be loaded by the Hugging Face Transformers GPT2 implementation. The following command allows you to do the conversion. We assume that the folder `models/megatron_gpt2` contains `megatron_gpt2_345m_v0_0.zip` and that the command is run from that folder:
```bash
python3 $PATH_TO_TRANSFORMERS/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py megatron_gpt2_345m_v0_0.zip
```
The MegatronGPT2 architecture is the same as OpenAI GPT-2. Refer to the [GPT-2 documentation](gpt2) for information on configuration classes and their parameters. " model_doc/donut.md," # Donut ## Overview The Donut model was proposed in [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park. Donut consists of an image Transformer encoder and an autoregressive text Transformer decoder to perform document understanding tasks such as document image classification, form understanding and visual question answering. The abstract from the paper is the following: *Understanding document images (e.g., invoices) is a core but challenging task since it requires complex functions such as reading text and a holistic understanding of the document. Current Visual Document Understanding (VDU) methods outsource the task of reading text to off-the-shelf Optical Character Recognition (OCR) engines and focus on the understanding task with the OCR outputs. Although such OCR-based approaches have shown promising performance, they suffer from 1) high computational costs for using OCR; 2) inflexibility of OCR models on languages or types of document; 3) OCR error propagation to the subsequent process. To address these issues, in this paper, we introduce a novel OCR-free VDU model named Donut, which stands for Document understanding transformer. As the first step in OCR-free VDU research, we propose a simple architecture (i.e., Transformer) with a pre-training objective (i.e., cross-entropy loss). Donut is conceptually simple yet effective. Through extensive experiments and analyses, we show a simple OCR-free VDU model, Donut, achieves state-of-the-art performances on various VDU tasks in terms of both speed and accuracy. In addition, we offer a synthetic data generator that helps the model pre-training to be flexible in various languages and domains.* Donut high-level overview. Taken from the original paper. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/clovaai/donut). ## Usage tips - The quickest way to get started with Donut is by checking the [tutorial notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Donut), which show how to use the model at inference time as well as fine-tuning on custom data. - Donut is always used within the [VisionEncoderDecoder](vision-encoder-decoder) framework.
## Inference examples Donut's [`VisionEncoderDecoder`] model accepts images as input and makes use of [`~generation.GenerationMixin.generate`] to autoregressively generate text given the input image. The [`DonutImageProcessor`] class is responsible for preprocessing the input image and [`XLMRobertaTokenizer`/`XLMRobertaTokenizerFast`] decodes the generated target tokens to the target string. The [`DonutProcessor`] wraps [`DonutImageProcessor`] and [`XLMRobertaTokenizer`/`XLMRobertaTokenizerFast`] into a single instance to both extract the input features and decode the predicted token ids. - Step-by-step Document Image Classification >>> import re >>> from transformers import DonutProcessor, VisionEncoderDecoderModel >>> from datasets import load_dataset >>> import torch >>> processor = DonutProcessor.from_pretrained(""naver-clova-ix/donut-base-finetuned-rvlcdip"") >>> model = VisionEncoderDecoderModel.from_pretrained(""naver-clova-ix/donut-base-finetuned-rvlcdip"") >>> device = ""cuda"" if torch.cuda.is_available() else ""cpu"" >>> model.to(device) # doctest: +IGNORE_RESULT >>> # load document image >>> dataset = load_dataset(""hf-internal-testing/example-documents"", split=""test"") >>> image = dataset[1][""image""] >>> # prepare decoder inputs >>> task_prompt = """" >>> decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors=""pt"").input_ids >>> pixel_values = processor(image, return_tensors=""pt"").pixel_values >>> outputs = model.generate( pixel_values.to(device), decoder_input_ids=decoder_input_ids.to(device), max_length=model.decoder.config.max_position_embeddings, pad_token_id=processor.tokenizer.pad_token_id, eos_token_id=processor.tokenizer.eos_token_id, use_cache=True, bad_words_ids=[[processor.tokenizer.unk_token_id]], return_dict_in_generate=True, ) >>> sequence = processor.batch_decode(outputs.sequences)[0] >>> sequence = sequence.replace(processor.tokenizer.eos_token, """").replace(processor.tokenizer.pad_token, """") >>> sequence = re.sub(r""<.*?>"", """", sequence, count=1).strip() # remove first task start token >>> print(processor.token2json(sequence)) {'class': 'advertisement'} - Step-by-step Document Parsing >>> import re >>> from transformers import DonutProcessor, VisionEncoderDecoderModel >>> from datasets import load_dataset >>> import torch >>> processor = DonutProcessor.from_pretrained(""naver-clova-ix/donut-base-finetuned-cord-v2"") >>> model = VisionEncoderDecoderModel.from_pretrained(""naver-clova-ix/donut-base-finetuned-cord-v2"") >>> device = ""cuda"" if torch.cuda.is_available() else ""cpu"" >>> model.to(device) # doctest: +IGNORE_RESULT >>> # load document image >>> dataset = load_dataset(""hf-internal-testing/example-documents"", split=""test"") >>> image = dataset[2][""image""] >>> # prepare decoder inputs >>> task_prompt = """" >>> decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors=""pt"").input_ids >>> pixel_values = processor(image, return_tensors=""pt"").pixel_values >>> outputs = model.generate( pixel_values.to(device), decoder_input_ids=decoder_input_ids.to(device), max_length=model.decoder.config.max_position_embeddings, pad_token_id=processor.tokenizer.pad_token_id, eos_token_id=processor.tokenizer.eos_token_id, use_cache=True, bad_words_ids=[[processor.tokenizer.unk_token_id]], return_dict_in_generate=True, ) >>> sequence = processor.batch_decode(outputs.sequences)[0] >>> sequence = sequence.replace(processor.tokenizer.eos_token, 
"""").replace(processor.tokenizer.pad_token, """") >>> sequence = re.sub(r""<.*?>"", """", sequence, count=1).strip() # remove first task start token >>> print(processor.token2json(sequence)) {'menu': {'nm': 'CINNAMON SUGAR', 'unitprice': '17,000', 'cnt': '1 x', 'price': '17,000'}, 'sub_total': {'subtotal_price': '17,000'}, 'total': {'total_price': '17,000', 'cashprice': '20,000', 'changeprice': '3,000'}} - Step-by-step Document Visual Question Answering (DocVQA) >>> import re >>> from transformers import DonutProcessor, VisionEncoderDecoderModel >>> from datasets import load_dataset >>> import torch >>> processor = DonutProcessor.from_pretrained(""naver-clova-ix/donut-base-finetuned-docvqa"") >>> model = VisionEncoderDecoderModel.from_pretrained(""naver-clova-ix/donut-base-finetuned-docvqa"") >>> device = ""cuda"" if torch.cuda.is_available() else ""cpu"" >>> model.to(device) # doctest: +IGNORE_RESULT >>> # load document image from the DocVQA dataset >>> dataset = load_dataset(""hf-internal-testing/example-documents"", split=""test"") >>> image = dataset[0][""image""] >>> # prepare decoder inputs >>> task_prompt = ""{user_input}"" >>> question = ""When is the coffee break?"" >>> prompt = task_prompt.replace(""{user_input}"", question) >>> decoder_input_ids = processor.tokenizer(prompt, add_special_tokens=False, return_tensors=""pt"").input_ids >>> pixel_values = processor(image, return_tensors=""pt"").pixel_values >>> outputs = model.generate( pixel_values.to(device), decoder_input_ids=decoder_input_ids.to(device), max_length=model.decoder.config.max_position_embeddings, pad_token_id=processor.tokenizer.pad_token_id, eos_token_id=processor.tokenizer.eos_token_id, use_cache=True, bad_words_ids=[[processor.tokenizer.unk_token_id]], return_dict_in_generate=True, ) >>> sequence = processor.batch_decode(outputs.sequences)[0] >>> sequence = sequence.replace(processor.tokenizer.eos_token, """").replace(processor.tokenizer.pad_token, """") >>> sequence = re.sub(r""<.*?>"", """", sequence, count=1).strip() # remove first task start token >>> print(processor.token2json(sequence)) {'question': 'When is the coffee break?', 'answer': '11-14 to 11:39 a.m.'} See the [model hub](https://huggingface.co/models?filter=donut) to look for Donut checkpoints. ## Training We refer to the [tutorial notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Donut). ## DonutSwinConfig [[autodoc]] DonutSwinConfig ## DonutImageProcessor [[autodoc]] DonutImageProcessor - preprocess ## DonutFeatureExtractor [[autodoc]] DonutFeatureExtractor - __call__ ## DonutProcessor [[autodoc]] DonutProcessor - __call__ - from_pretrained - save_pretrained - batch_decode - decode ## DonutSwinModel [[autodoc]] DonutSwinModel - forward " model_doc/nystromformer.md," # Nyströmformer ## Overview The Nyströmformer model was proposed in [*Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention*](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, and Vikas Singh. The abstract from the paper is the following: *Transformers have emerged as a powerful tool for a broad range of natural language processing tasks. A key component that drives the impressive performance of Transformers is the self-attention mechanism that encodes the influence or dependence of other tokens on each specific token. 
While beneficial, the quadratic complexity of self-attention on the input sequence length has limited its application to longer sequences -- a topic being actively studied in the community. To address this limitation, we propose Nyströmformer -- a model that exhibits favorable scalability as a function of sequence length. Our idea is based on adapting the Nyström method to approximate standard self-attention with O(n) complexity. The scalability of Nyströmformer enables application to longer sequences with thousands of tokens. We perform evaluations on multiple downstream tasks on the GLUE benchmark and IMDB reviews with standard sequence length, and find that our Nyströmformer performs comparably, or in a few cases, even slightly better, than standard self-attention. On longer sequence tasks in the Long Range Arena (LRA) benchmark, Nyströmformer performs favorably relative to other efficient self-attention methods. Our code is available at this https URL.* This model was contributed by [novice03](https://huggingface.co/novice03). The original code can be found [here](https://github.com/mlpen/Nystromformer). ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## NystromformerConfig [[autodoc]] NystromformerConfig ## NystromformerModel [[autodoc]] NystromformerModel - forward ## NystromformerForMaskedLM [[autodoc]] NystromformerForMaskedLM - forward ## NystromformerForSequenceClassification [[autodoc]] NystromformerForSequenceClassification - forward ## NystromformerForMultipleChoice [[autodoc]] NystromformerForMultipleChoice - forward ## NystromformerForTokenClassification [[autodoc]] NystromformerForTokenClassification - forward ## NystromformerForQuestionAnswering [[autodoc]] NystromformerForQuestionAnswering - forward " model_doc/sam.md," # SAM ## Overview SAM (Segment Anything Model) was proposed in [Segment Anything](https://arxiv.org/pdf/2304.02643v1.pdf) by Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick. The model can be used to predict segmentation masks of any object of interest given an input image. ![example image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-output.png) The abstract from the paper is the following: *We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. 
We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at [https://segment-anything.com](https://segment-anything.com) to foster research into foundation models for computer vision.* Tips: - The model predicts binary masks that indicate whether the object of interest is present in the given image. - The model predicts much better results if input 2D points and/or input bounding boxes are provided. - You can prompt multiple points for the same image, and predict a single mask. - Fine-tuning the model is not supported yet. - According to the paper, textual input should also be supported. However, at the time of writing this does not appear to be supported, according to [the official repository](https://github.com/facebookresearch/segment-anything/issues/4#issuecomment-1497626844). This model was contributed by [ybelkada](https://huggingface.co/ybelkada) and [ArthurZ](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/facebookresearch/segment-anything). Below is an example of how to run mask generation given an image and a 2D point:
```python
import torch
from PIL import Image
import requests
from transformers import SamModel, SamProcessor

device = ""cuda"" if torch.cuda.is_available() else ""cpu""
model = SamModel.from_pretrained(""facebook/sam-vit-huge"").to(device)
processor = SamProcessor.from_pretrained(""facebook/sam-vit-huge"")

img_url = ""https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png""
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert(""RGB"")
input_points = [[[450, 600]]]  # 2D location of a window in the image

inputs = processor(raw_image, input_points=input_points, return_tensors=""pt"").to(device)
with torch.no_grad():
    outputs = model(**inputs)

masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs[""original_sizes""].cpu(), inputs[""reshaped_input_sizes""].cpu()
)
scores = outputs.iou_scores
```
Resources: - [Demo notebook](https://github.com/huggingface/notebooks/blob/main/examples/segment_anything.ipynb) for using the model. - [Demo notebook](https://github.com/huggingface/notebooks/blob/main/examples/automatic_mask_generation.ipynb) for using the automatic mask generation pipeline. - [Demo notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SAM/Run_inference_with_MedSAM_using_HuggingFace_Transformers.ipynb) for inference with MedSAM, a fine-tuned version of SAM on the medical domain. - [Demo notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SAM/Fine_tune_SAM_(segment_anything)_on_a_custom_dataset.ipynb) for fine-tuning the model on custom data. ## SamConfig [[autodoc]] SamConfig ## SamVisionConfig [[autodoc]] SamVisionConfig ## SamMaskDecoderConfig [[autodoc]] SamMaskDecoderConfig ## SamPromptEncoderConfig [[autodoc]] SamPromptEncoderConfig ## SamProcessor [[autodoc]] SamProcessor ## SamImageProcessor [[autodoc]] SamImageProcessor ## SamModel [[autodoc]] SamModel - forward ## TFSamModel [[autodoc]] TFSamModel - call " model_doc/xlm-v.md," # XLM-V ## Overview XLM-V is a multilingual language model with a one million token vocabulary trained on 2.5TB of data from Common Crawl (same as XLM-R).
It was introduced in the [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472) paper by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer and Madian Khabsa. From the abstract of the XLM-V paper: *Large multilingual language models typically rely on a single vocabulary shared across 100+ languages. As these models have increased in parameter count and depth, vocabulary size has remained largely unchanged. This vocabulary bottleneck limits the representational capabilities of multilingual models like XLM-R. In this paper, we introduce a new approach for scaling to very large multilingual vocabularies by de-emphasizing token sharing between languages with little lexical overlap and assigning vocabulary capacity to achieve sufficient coverage for each individual language. Tokenizations using our vocabulary are typically more semantically meaningful and shorter compared to XLM-R. Leveraging this improved vocabulary, we train XLM-V, a multilingual language model with a one million token vocabulary. XLM-V outperforms XLM-R on every task we tested on ranging from natural language inference (XNLI), question answering (MLQA, XQuAD, TyDiQA), and named entity recognition (WikiAnn) to low-resource tasks (Americas NLI, MasakhaNER).* This model was contributed by [stefan-it](https://huggingface.co/stefan-it), including detailed experiments with XLM-V on downstream tasks. The experiments repository can be found [here](https://github.com/stefan-it/xlm-v-experiments). ## Usage tips - XLM-V is compatible with the XLM-RoBERTa model architecture, only model weights from [`fairseq`](https://github.com/facebookresearch/fairseq) library had to be converted. - The `XLMTokenizer` implementation is used to load the vocab and performs tokenization. A XLM-V (base size) model is available under the [`facebook/xlm-v-base`](https://huggingface.co/facebook/xlm-v-base) identifier. XLM-V architecture is the same as XLM-RoBERTa, refer to [XLM-RoBERTa documentation](xlm-roberta) for API reference, and examples. " model_doc/encodec.md," # EnCodec ## Overview The EnCodec neural codec model was proposed in [High Fidelity Neural Audio Compression](https://arxiv.org/abs/2210.13438) by Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi. The abstract from the paper is the following: *We introduce a state-of-the-art real-time, high-fidelity, audio codec leveraging neural networks. It consists in a streaming encoder-decoder architecture with quantized latent space trained in an end-to-end fashion. We simplify and speed-up the training by using a single multiscale spectrogram adversary that efficiently reduces artifacts and produce high-quality samples. We introduce a novel loss balancer mechanism to stabilize training: the weight of a loss now defines the fraction of the overall gradient it should represent, thus decoupling the choice of this hyper-parameter from the typical scale of the loss. Finally, we study how lightweight Transformer models can be used to further compress the obtained representation by up to 40%, while staying faster than real time. We provide a detailed description of the key design choices of the proposed model including: training objective, architectural changes and a study of various perceptual loss functions. 
We present an extensive subjective evaluation (MUSHRA tests) together with an ablation study for a range of bandwidths and audio domains, including speech, noisy-reverberant speech, and music. Our approach is superior to the baselines methods across all evaluated settings, considering both 24 kHz monophonic and 48 kHz stereophonic audio.* This model was contributed by [Matthijs](https://huggingface.co/Matthijs), [Patrick Von Platen](https://huggingface.co/patrickvonplaten) and [Arthur Zucker](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/facebookresearch/encodec). ## Usage example Here is a quick example of how to encode and decode an audio using this model: thon >>> from datasets import load_dataset, Audio >>> from transformers import EncodecModel, AutoProcessor >>> librispeech_dummy = load_dataset(""hf-internal-testing/librispeech_asr_dummy"", ""clean"", split=""validation"") >>> model = EncodecModel.from_pretrained(""facebook/encodec_24khz"") >>> processor = AutoProcessor.from_pretrained(""facebook/encodec_24khz"") >>> librispeech_dummy = librispeech_dummy.cast_column(""audio"", Audio(sampling_rate=processor.sampling_rate)) >>> audio_sample = librispeech_dummy[-1][""audio""][""array""] >>> inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors=""pt"") >>> encoder_outputs = model.encode(inputs[""input_values""], inputs[""padding_mask""]) >>> audio_values = model.decode(encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs[""padding_mask""])[0] >>> # or the equivalent with a forward pass >>> audio_values = model(inputs[""input_values""], inputs[""padding_mask""]).audio_values ## EncodecConfig [[autodoc]] EncodecConfig ## EncodecFeatureExtractor [[autodoc]] EncodecFeatureExtractor - __call__ ## EncodecModel [[autodoc]] EncodecModel - decode - encode - forward " model_doc/yoso.md," # YOSO ## Overview The YOSO model was proposed in [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. YOSO approximates standard softmax self-attention via a Bernoulli sampling scheme based on Locality Sensitive Hashing (LSH). In principle, all the Bernoulli random variables can be sampled with a single hash. The abstract from the paper is the following: *Transformer-based models are widely used in natural language processing (NLP). Central to the transformer model is the self-attention mechanism, which captures the interactions of token pairs in the input sequences and depends quadratically on the sequence length. Training such models on longer sequences is expensive. In this paper, we show that a Bernoulli sampling attention mechanism based on Locality Sensitive Hashing (LSH), decreases the quadratic complexity of such models to linear. We bypass the quadratic cost by considering self-attention as a sum of individual tokens associated with Bernoulli random variables that can, in principle, be sampled at once by a single hash (although in practice, this number may be a small constant). This leads to an efficient sampling scheme to estimate self-attention which relies on specific modifications of LSH (to enable deployment on GPU architectures). We evaluate our algorithm on the GLUE benchmark with standard 512 sequence length where we see favorable performance relative to a standard pretrained Transformer. 
On the Long Range Arena (LRA) benchmark, for evaluating performance on long sequences, our method achieves results consistent with softmax self-attention but with sizable speed-ups and memory savings and often outperforms other efficient self-attention methods. Our code is available at this https URL* This model was contributed by [novice03](https://huggingface.co/novice03). The original code can be found [here](https://github.com/mlpen/YOSO). ## Usage tips - The YOSO attention algorithm is implemented through custom CUDA kernels, functions written in CUDA C++ that can be executed multiple times in parallel on a GPU. - The kernels provide a `fast_hash` function, which approximates the random projections of the queries and keys using the Fast Hadamard Transform. Using these hash codes, the `lsh_cumulation` function approximates self-attention via LSH-based Bernoulli sampling. - To use the custom kernels, the user should set `config.use_expectation = False`. To ensure that the kernels are compiled successfully, the user must install the correct version of PyTorch and cudatoolkit. By default, `config.use_expectation = True`, which uses YOSO-E and does not require compiling CUDA kernels. YOSO Attention Algorithm. Taken from the original paper. ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## YosoConfig [[autodoc]] YosoConfig ## YosoModel [[autodoc]] YosoModel - forward ## YosoForMaskedLM [[autodoc]] YosoForMaskedLM - forward ## YosoForSequenceClassification [[autodoc]] YosoForSequenceClassification - forward ## YosoForMultipleChoice [[autodoc]] YosoForMultipleChoice - forward ## YosoForTokenClassification [[autodoc]] YosoForTokenClassification - forward ## YosoForQuestionAnswering [[autodoc]] YosoForQuestionAnswering - forward" model_doc/mgp-str.md," # MGP-STR ## Overview The MGP-STR model was proposed in [Multi-Granularity Prediction for Scene Text Recognition](https://arxiv.org/abs/2209.03592) by Peng Wang, Cheng Da, and Cong Yao. MGP-STR is a conceptually **simple** yet **powerful** vision Scene Text Recognition (STR) model, which is built upon the [Vision Transformer (ViT)](vit). To integrate linguistic knowledge, Multi-Granularity Prediction (MGP) strategy is proposed to inject information from the language modality into the model in an implicit way. The abstract from the paper is the following: *Scene text recognition (STR) has been an active research topic in computer vision for years. To tackle this challenging problem, numerous innovative methods have been successively proposed and incorporating linguistic knowledge into STR models has recently become a prominent trend. In this work, we first draw inspiration from the recent progress in Vision Transformer (ViT) to construct a conceptually simple yet powerful vision STR model, which is built upon ViT and outperforms previous state-of-the-art models for scene text recognition, including both pure vision models and language-augmented methods. To integrate linguistic knowledge, we further propose a Multi-Granularity Prediction strategy to inject information from the language modality into the model in an implicit way, i.e. 
, subword representations (BPE and WordPiece) widely-used in NLP are introduced into the output space, in addition to the conventional character level representation, while no independent language model (LM) is adopted. The resultant algorithm (termed MGP-STR) is able to push the performance envelop of STR to an even higher level. Specifically, it achieves an average recognition accuracy of 93.35% on standard benchmarks.* MGP-STR architecture. Taken from the original paper. MGP-STR is trained on two synthetic datasets [MJSynth]((http://www.robots.ox.ac.uk/~vgg/data/text/)) (MJ) and SynthText(http://www.robots.ox.ac.uk/~vgg/data/scenetext/) (ST) without fine-tuning on other datasets. It achieves state-of-the-art results on six standard Latin scene text benchmarks, including 3 regular text datasets (IC13, SVT, IIIT) and 3 irregular ones (IC15, SVTP, CUTE). This model was contributed by [yuekun](https://huggingface.co/yuekun). The original code can be found [here](https://github.com/AlibabaResearch/AdvancedLiterateMachinery/tree/main/OCR/MGP-STR). ## Inference example [`MgpstrModel`] accepts images as input and generates three types of predictions, which represent textual information at different granularities. The three types of predictions are fused to give the final prediction result. The [`ViTImageProcessor`] class is responsible for preprocessing the input image and [`MgpstrTokenizer`] decodes the generated character tokens to the target string. The [`MgpstrProcessor`] wraps [`ViTImageProcessor`] and [`MgpstrTokenizer`] into a single instance to both extract the input features and decode the predicted token ids. - Step-by-step Optical Character Recognition (OCR) >>> from transformers import MgpstrProcessor, MgpstrForSceneTextRecognition >>> import requests >>> from PIL import Image >>> processor = MgpstrProcessor.from_pretrained('alibaba-damo/mgp-str-base') >>> model = MgpstrForSceneTextRecognition.from_pretrained('alibaba-damo/mgp-str-base') >>> # load image from the IIIT-5k dataset >>> url = ""https://i.postimg.cc/ZKwLg2Gw/367-14.png"" >>> image = Image.open(requests.get(url, stream=True).raw).convert(""RGB"") >>> pixel_values = processor(images=image, return_tensors=""pt"").pixel_values >>> outputs = model(pixel_values) >>> generated_text = processor.batch_decode(outputs.logits)['generated_text'] ## MgpstrConfig [[autodoc]] MgpstrConfig ## MgpstrTokenizer [[autodoc]] MgpstrTokenizer - save_vocabulary ## MgpstrProcessor [[autodoc]] MgpstrProcessor - __call__ - batch_decode ## MgpstrModel [[autodoc]] MgpstrModel - forward ## MgpstrForSceneTextRecognition [[autodoc]] MgpstrForSceneTextRecognition - forward " model_doc/poolformer.md," # PoolFormer ## Overview The PoolFormer model was proposed in [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Sea AI Labs. Instead of designing complicated token mixer to achieve SOTA performance, the target of this work is to demonstrate the competence of transformer models largely stem from the general architecture MetaFormer. The abstract from the paper is the following: *Transformers have shown great potential in computer vision tasks. A common belief is their attention-based token mixer module contributes most to their competence. However, recent works show the attention-based module in transformers can be replaced by spatial MLPs and the resulted models still perform quite well. 
Based on this observation, we hypothesize that the general architecture of the transformers, instead of the specific token mixer module, is more essential to the model's performance. To verify this, we deliberately replace the attention module in transformers with an embarrassingly simple spatial pooling operator to conduct only the most basic token mixing. Surprisingly, we observe that the derived model, termed as PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing well-tuned vision transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with 35%/52% fewer parameters and 48%/60% fewer MACs. The effectiveness of PoolFormer verifies our hypothesis and urges us to initiate the concept of ""MetaFormer"", a general architecture abstracted from transformers without specifying the token mixer. Based on the extensive experiments, we argue that MetaFormer is the key player in achieving superior results for recent transformer and MLP-like models on vision tasks. This work calls for more future research dedicated to improving MetaFormer instead of focusing on the token mixer modules. Additionally, our proposed PoolFormer could serve as a starting baseline for future MetaFormer architecture design.* The figure below illustrates the architecture of PoolFormer. Taken from the [original paper](https://arxiv.org/abs/2111.11418). This model was contributed by [heytanay](https://huggingface.co/heytanay). The original code can be found [here](https://github.com/sail-sg/poolformer). ## Usage tips - PoolFormer has a hierarchical architecture, where instead of Attention, a simple Average Pooling layer is present. All checkpoints of the model can be found on the [hub](https://huggingface.co/models?other=poolformer). - One can use [`PoolFormerImageProcessor`] to prepare images for the model. - As most models, PoolFormer comes in different sizes, the details of which can be found in the table below. | **Model variant** | **Depths** | **Hidden sizes** | **Params (M)** | **ImageNet-1k Top 1** | | :---------------: | ------------- | ------------------- | :------------: | :-------------------: | | s12 | [2, 2, 6, 2] | [64, 128, 320, 512] | 12 | 77.2 | | s24 | [4, 4, 12, 4] | [64, 128, 320, 512] | 21 | 80.3 | | s36 | [6, 6, 18, 6] | [64, 128, 320, 512] | 31 | 81.4 | | m36 | [6, 6, 18, 6] | [96, 192, 384, 768] | 56 | 82.1 | | m48 | [8, 8, 24, 8] | [96, 192, 384, 768] | 73 | 82.5 | ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with PoolFormer. - [`PoolFormerForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
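If you just want to try one of the checkpoints from the table above, the following minimal sketch classifies an image with [`PoolFormerImageProcessor`] and [`PoolFormerForImageClassification`]; the `sail/poolformer_s12` identifier and the sample image URL are assumptions used for illustration.
```python
import torch
import requests
from PIL import Image
from transformers import PoolFormerImageProcessor, PoolFormerForImageClassification

# Assumed Hub identifier for the s12 variant listed in the table above.
checkpoint = 'sail/poolformer_s12'
processor = PoolFormerImageProcessor.from_pretrained(checkpoint)
model = PoolFormerForImageClassification.from_pretrained(checkpoint)

# Any RGB image works; this URL is only an example.
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits

# ImageNet-1k label with the highest score.
print(model.config.id2label[logits.argmax(-1).item()])
```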
## PoolFormerConfig [[autodoc]] PoolFormerConfig ## PoolFormerFeatureExtractor [[autodoc]] PoolFormerFeatureExtractor - __call__ ## PoolFormerImageProcessor [[autodoc]] PoolFormerImageProcessor - preprocess ## PoolFormerModel [[autodoc]] PoolFormerModel - forward ## PoolFormerForImageClassification [[autodoc]] PoolFormerForImageClassification - forward " model_doc/layoutxlm.md," # LayoutXLM ## Overview LayoutXLM was proposed in [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. It's a multilingual extension of the [LayoutLMv2 model](https://arxiv.org/abs/2012.14740) trained on 53 languages. The abstract from the paper is the following: *Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually-rich document understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. In this paper, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually-rich document understanding. To accurately evaluate LayoutXLM, we also introduce a multilingual form understanding benchmark dataset named XFUN, which includes form understanding samples in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese), and key-value pairs are manually labeled for each language. Experiment results show that the LayoutXLM model has significantly outperformed the existing SOTA cross-lingual pre-trained models on the XFUN dataset.* This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/microsoft/unilm). ## Usage tips and examples One can directly plug in the weights of LayoutXLM into a LayoutLMv2 model, like so: thon from transformers import LayoutLMv2Model model = LayoutLMv2Model.from_pretrained(""microsoft/layoutxlm-base"") Note that LayoutXLM has its own tokenizer, based on [`LayoutXLMTokenizer`]/[`LayoutXLMTokenizerFast`]. You can initialize it as follows: thon from transformers import LayoutXLMTokenizer tokenizer = LayoutXLMTokenizer.from_pretrained(""microsoft/layoutxlm-base"") Similar to LayoutLMv2, you can use [`LayoutXLMProcessor`] (which internally applies [`LayoutLMv2ImageProcessor`] and [`LayoutXLMTokenizer`]/[`LayoutXLMTokenizerFast`] in sequence) to prepare all data for the model. As LayoutXLM's architecture is equivalent to that of LayoutLMv2, one can refer to [LayoutLMv2's documentation page](layoutlmv2) for all tips, code examples and notebooks. ## LayoutXLMTokenizer [[autodoc]] LayoutXLMTokenizer - __call__ - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## LayoutXLMTokenizerFast [[autodoc]] LayoutXLMTokenizerFast - __call__ ## LayoutXLMProcessor [[autodoc]] LayoutXLMProcessor - __call__ " model_doc/encoder-decoder.md," # Encoder Decoder Models ## Overview The [`EncoderDecoderModel`] can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the encoder and any pretrained autoregressive model as the decoder. 
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. After such an [`EncoderDecoderModel`] has been trained/fine-tuned, it can be saved/loaded just like any other models (see the examples for more information). An application of this architecture could be to leverage two pretrained [`BertModel`] as the encoder and decoder for a summarization model as was shown in: [Text Summarization with Pretrained Encoders](https://arxiv.org/abs/1908.08345) by Yang Liu and Mirella Lapata. ## Randomly initializing `EncoderDecoderModel` from model configurations. [`EncoderDecoderModel`] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [`BertModel`] configuration for the encoder and the default [`BertForCausalLM`] configuration for the decoder. thon >>> from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel >>> config_encoder = BertConfig() >>> config_decoder = BertConfig() >>> config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder) >>> model = EncoderDecoderModel(config=config) ## Initialising `EncoderDecoderModel` from a pretrained encoder and a pretrained decoder. [`EncoderDecoderModel`] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained auto-encoding model, *e.g.* BERT, can serve as the encoder and both pretrained auto-encoding models, *e.g.* BERT, pretrained causal language models, *e.g.* GPT2, as well as the pretrained decoder part of sequence-to-sequence models, *e.g.* decoder of BART, can be used as the decoder. Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized. Initializing [`EncoderDecoderModel`] from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in [the *Warm-starting-encoder-decoder blog post*](https://huggingface.co/blog/warm-starting-encoder-decoder). To do so, the `EncoderDecoderModel` class provides a [`EncoderDecoderModel.from_encoder_decoder_pretrained`] method. thon >>> from transformers import EncoderDecoderModel, BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained(""bert-base-uncased"") >>> model = EncoderDecoderModel.from_encoder_decoder_pretrained(""bert-base-uncased"", ""bert-base-uncased"") ## Loading an existing `EncoderDecoderModel` checkpoint and perform inference. To load fine-tuned checkpoints of the `EncoderDecoderModel` class, [`EncoderDecoderModel`] provides the `from_pretrained()` method just like any other model architecture in Transformers. To perform inference, one uses the [`generate`] method, which allows to autoregressively generate text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling. 
thon >>> from transformers import AutoTokenizer, EncoderDecoderModel >>> # load a fine-tuned seq2seq model and corresponding tokenizer >>> model = EncoderDecoderModel.from_pretrained(""patrickvonplaten/bert2bert_cnn_daily_mail"") >>> tokenizer = AutoTokenizer.from_pretrained(""patrickvonplaten/bert2bert_cnn_daily_mail"") >>> # let's perform inference on a long piece of text >>> ARTICLE_TO_SUMMARIZE = ( ""PG&E stated it scheduled the blackouts in response to forecasts for high winds "" ""amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were "" ""scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."" ) >>> input_ids = tokenizer(ARTICLE_TO_SUMMARIZE, return_tensors=""pt"").input_ids >>> # autoregressively generate summary (uses greedy decoding by default) >>> generated_ids = model.generate(input_ids) >>> generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] >>> print(generated_text) nearly 800 thousand customers were affected by the shutoffs. the aim is to reduce the risk of wildfires. nearly 800, 000 customers were expected to be affected by high winds amid dry conditions. pg & e said it scheduled the blackouts to last through at least midday tomorrow. ## Loading a PyTorch checkpoint into `TFEncoderDecoderModel`. [`TFEncoderDecoderModel.from_pretrained`] currently doesn't support initializing the model from a pytorch checkpoint. Passing `from_pt=True` to this method will throw an exception. If there are only pytorch checkpoints for a particular encoder-decoder model, a workaround is: thon >>> # a workaround to load from pytorch checkpoint >>> from transformers import EncoderDecoderModel, TFEncoderDecoderModel >>> _model = EncoderDecoderModel.from_pretrained(""patrickvonplaten/bert2bert-cnn_dailymail-fp16"") >>> _model.encoder.save_pretrained(""./encoder"") >>> _model.decoder.save_pretrained(""./decoder"") >>> model = TFEncoderDecoderModel.from_encoder_decoder_pretrained( ""./encoder"", ""./decoder"", encoder_from_pt=True, decoder_from_pt=True ) >>> # This is only for copying some specific attributes of this particular model. >>> model.config = _model.config ## Training Once the model is created, it can be fine-tuned similar to BART, T5 or any other encoder-decoder model. As you can see, only 2 inputs are required for the model in order to compute a loss: `input_ids` (which are the `input_ids` of the encoded input sequence) and `labels` (which are the `input_ids` of the encoded target sequence). thon >>> from transformers import BertTokenizer, EncoderDecoderModel >>> tokenizer = BertTokenizer.from_pretrained(""bert-base-uncased"") >>> model = EncoderDecoderModel.from_encoder_decoder_pretrained(""bert-base-uncased"", ""bert-base-uncased"") >>> model.config.decoder_start_token_id = tokenizer.cls_token_id >>> model.config.pad_token_id = tokenizer.pad_token_id >>> input_ids = tokenizer( ""The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side.During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. 
Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft).Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."", return_tensors=""pt"", ).input_ids >>> labels = tokenizer( ""the eiffel tower surpassed the washington monument to become the tallest structure in the world. it was the first structure to reach a height of 300 metres in paris in 1930. it is now taller than the chrysler building by 5. 2 metres ( 17 ft ) and is the second tallest free - standing structure in paris."", return_tensors=""pt"", ).input_ids >>> # the forward function automatically creates the correct decoder_input_ids >>> loss = model(input_ids=input_ids, labels=labels).loss Detailed [colab](https://colab.research.google.com/drive/1WIk2bxglElfZewOHboPFNj8H44_VAyKE?usp=sharing#scrollTo=ZwQIEhKOrJpl) for training. This model was contributed by [thomwolf](https://github.com/thomwolf). This model's TensorFlow and Flax versions were contributed by [ydshieh](https://github.com/ydshieh). ## EncoderDecoderConfig [[autodoc]] EncoderDecoderConfig ## EncoderDecoderModel [[autodoc]] EncoderDecoderModel - forward - from_encoder_decoder_pretrained ## TFEncoderDecoderModel [[autodoc]] TFEncoderDecoderModel - call - from_encoder_decoder_pretrained ## FlaxEncoderDecoderModel [[autodoc]] FlaxEncoderDecoderModel - __call__ - from_encoder_decoder_pretrained " model_doc/xclip.md," # X-CLIP ## Overview The X-CLIP model was proposed in [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling. X-CLIP is a minimal extension of [CLIP](clip) for video. The model consists of a text encoder, a cross-frame vision encoder, a multi-frame integration Transformer, and a video-specific prompt generator. The abstract from the paper is the following: *Contrastive language-image pretraining has shown great success in learning visual-textual joint representation from web-scale data, demonstrating remarkable ""zero-shot"" generalization ability for various image tasks. However, how to effectively expand such new language-image pretraining methods to video domains is still an open problem. In this work, we present a simple yet effective approach that adapts the pretrained language-image models to video recognition directly, instead of pretraining a new model from scratch. More concretely, to capture the long-range dependencies of frames along the temporal dimension, we propose a cross-frame attention mechanism that explicitly exchanges information across frames. Such module is lightweight and can be plugged into pretrained language-image models seamlessly. Moreover, we propose a video-specific prompting scheme, which leverages video content information for generating discriminative textual prompts. Extensive experiments demonstrate that our approach is effective and can be generalized to different video recognition scenarios. In particular, under fully-supervised settings, our approach achieves a top-1 accuracy of 87.1% on Kinectics-400, while using 12 times fewer FLOPs compared with Swin-L and ViViT-H. In zero-shot experiments, our approach surpasses the current state-of-the-art methods by +7.6% and +14.9% in terms of top-1 accuracy under two popular protocols. 
In few-shot scenarios, our approach outperforms previous best methods by +32.1% and +23.1% when the labeled data is extremely limited.* Tips: - Usage of X-CLIP is identical to [CLIP](clip). X-CLIP architecture. Taken from the original paper. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/microsoft/VideoX/tree/master/X-CLIP). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with X-CLIP. - Demo notebooks for X-CLIP can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/X-CLIP). If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## XCLIPProcessor [[autodoc]] XCLIPProcessor ## XCLIPConfig [[autodoc]] XCLIPConfig - from_text_vision_configs ## XCLIPTextConfig [[autodoc]] XCLIPTextConfig ## XCLIPVisionConfig [[autodoc]] XCLIPVisionConfig ## XCLIPModel [[autodoc]] XCLIPModel - forward - get_text_features - get_video_features ## XCLIPTextModel [[autodoc]] XCLIPTextModel - forward ## XCLIPVisionModel [[autodoc]] XCLIPVisionModel - forward " model_doc/roberta.md," # RoBERTa ## Overview The RoBERTa model was proposed in [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, [Myle Ott](https://huggingface.co/myleott), Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. It is based on Google's BERT model released in 2018. It builds on BERT and modifies key hyperparameters, removing the next-sentence pretraining objective and training with much larger mini-batches and learning rates. The abstract from the paper is the following: *Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.* This model was contributed by [julien-c](https://huggingface.co/julien-c). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/roberta). ## Usage tips - This implementation is the same as [`BertModel`] with a tiny embeddings tweak as well as a setup for Roberta pretrained models. - RoBERTa has the same architecture as BERT, but uses a byte-level BPE as a tokenizer (same as GPT-2) and uses a different pretraining scheme. - RoBERTa doesn't have `token_type_ids`, you don't need to indicate which token belongs to which segment. 
Just separate your segments with the separation token `tokenizer.sep_token` (or ``) - Same as BERT with better pretraining tricks: * dynamic masking: tokens are masked differently at each epoch, whereas BERT does it once and for all * together to reach 512 tokens (so the sentences are in an order than may span several documents) * train with larger batches * use BPE with bytes as a subunit and not characters (because of unicode characters) - [CamemBERT](camembert) is a wrapper around RoBERTa. Refer to this page for usage examples. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with RoBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - A blog on [Getting Started with Sentiment Analysis on Twitter](https://huggingface.co/blog/sentiment-analysis-twitter) using RoBERTa and the [Inference API](https://huggingface.co/inference-api). - A blog on [Opinion Classification with Kili and Hugging Face AutoTrain](https://huggingface.co/blog/opinion-classification-with-kili) using RoBERTa. - A notebook on how to [finetune RoBERTa for sentiment analysis](https://colab.research.google.com/github/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb). 🌎 - [`RobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb). - [`TFRobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb). - [`FlaxRobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb). - [Text classification task guide](../tasks/sequence_classification) - [`RobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb). - [`TFRobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb). - [`FlaxRobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification). - [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course. 
- [Token classification task guide](../tasks/token_classification) - A blog on [How to train a new language model from scratch using Transformers and Tokenizers](https://huggingface.co/blog/how-to-train) with RoBERTa. - [`RobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFRobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - [`FlaxRobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb). - [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course. - [Masked language modeling task guide](../tasks/masked_language_modeling) - A blog on [Accelerated Inference with Optimum and Transformers Pipelines](https://huggingface.co/blog/optimum-inference) with RoBERTa for question answering. - [`RobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb). - [`TFRobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb). - [`FlaxRobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering). - [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course. - [Question answering task guide](../tasks/question_answering) **Multiple choice** - [`RobertaForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb). - [`TFRobertaForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb). 
- [Multiple choice task guide](../tasks/multiple_choice) ## RobertaConfig [[autodoc]] RobertaConfig ## RobertaTokenizer [[autodoc]] RobertaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## RobertaTokenizerFast [[autodoc]] RobertaTokenizerFast - build_inputs_with_special_tokens ## RobertaModel [[autodoc]] RobertaModel - forward ## RobertaForCausalLM [[autodoc]] RobertaForCausalLM - forward ## RobertaForMaskedLM [[autodoc]] RobertaForMaskedLM - forward ## RobertaForSequenceClassification [[autodoc]] RobertaForSequenceClassification - forward ## RobertaForMultipleChoice [[autodoc]] RobertaForMultipleChoice - forward ## RobertaForTokenClassification [[autodoc]] RobertaForTokenClassification - forward ## RobertaForQuestionAnswering [[autodoc]] RobertaForQuestionAnswering - forward ## TFRobertaModel [[autodoc]] TFRobertaModel - call ## TFRobertaForCausalLM [[autodoc]] TFRobertaForCausalLM - call ## TFRobertaForMaskedLM [[autodoc]] TFRobertaForMaskedLM - call ## TFRobertaForSequenceClassification [[autodoc]] TFRobertaForSequenceClassification - call ## TFRobertaForMultipleChoice [[autodoc]] TFRobertaForMultipleChoice - call ## TFRobertaForTokenClassification [[autodoc]] TFRobertaForTokenClassification - call ## TFRobertaForQuestionAnswering [[autodoc]] TFRobertaForQuestionAnswering - call ## FlaxRobertaModel [[autodoc]] FlaxRobertaModel - __call__ ## FlaxRobertaForCausalLM [[autodoc]] FlaxRobertaForCausalLM - __call__ ## FlaxRobertaForMaskedLM [[autodoc]] FlaxRobertaForMaskedLM - __call__ ## FlaxRobertaForSequenceClassification [[autodoc]] FlaxRobertaForSequenceClassification - __call__ ## FlaxRobertaForMultipleChoice [[autodoc]] FlaxRobertaForMultipleChoice - __call__ ## FlaxRobertaForTokenClassification [[autodoc]] FlaxRobertaForTokenClassification - __call__ ## FlaxRobertaForQuestionAnswering [[autodoc]] FlaxRobertaForQuestionAnswering - __call__ " model_doc/nougat.md," # Nougat ## Overview The Nougat model was proposed in [Nougat: Neural Optical Understanding for Academic Documents](https://arxiv.org/abs/2308.13418) by Lukas Blecher, Guillem Cucurull, Thomas Scialom, Robert Stojnic. Nougat uses the same architecture as [Donut](donut), meaning an image Transformer encoder and an autoregressive text Transformer decoder to translate scientific PDFs to markdown, enabling easier access to them. The abstract from the paper is the following: *Scientific knowledge is predominantly stored in books and scientific journals, often in the form of PDFs. However, the PDF format leads to a loss of semantic information, particularly for mathematical expressions. We propose Nougat (Neural Optical Understanding for Academic Documents), a Visual Transformer model that performs an Optical Character Recognition (OCR) task for processing scientific documents into a markup language, and demonstrate the effectiveness of our model on a new dataset of scientific documents. The proposed approach offers a promising solution to enhance the accessibility of scientific knowledge in the digital age, by bridging the gap between human-readable documents and machine-readable text. We release the models and code to accelerate future work on scientific text recognition.* Nougat high-level overview. Taken from the original paper. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/facebookresearch/nougat). 
## Usage tips - The quickest way to get started with Nougat is by checking the [tutorial notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Nougat), which show how to use the model at inference time as well as fine-tuning on custom data. - Nougat is always used within the [VisionEncoderDecoder](vision-encoder-decoder) framework. The model is identical to [Donut](donut) in terms of architecture. ## Inference Nougat's [`VisionEncoderDecoder`] model accepts images as input and makes use of [`~generation.GenerationMixin.generate`] to autoregressively generate text given the input image. The [`NougatImageProcessor`] class is responsible for preprocessing the input image and [`NougatTokenizerFast`] decodes the generated target tokens to the target string. The [`NougatProcessor`] wraps [`NougatImageProcessor`] and [`NougatTokenizerFast`] classes into a single instance to both extract the input features and decode the predicted token ids. - Step-by-step PDF transcription >>> from huggingface_hub import hf_hub_download >>> import re >>> from PIL import Image >>> from transformers import NougatProcessor, VisionEncoderDecoderModel >>> from datasets import load_dataset >>> import torch >>> processor = NougatProcessor.from_pretrained(""facebook/nougat-base"") >>> model = VisionEncoderDecoderModel.from_pretrained(""facebook/nougat-base"") >>> device = ""cuda"" if torch.cuda.is_available() else ""cpu"" >>> model.to(device) # doctest: +IGNORE_RESULT >>> # prepare PDF image for the model >>> filepath = hf_hub_download(repo_id=""hf-internal-testing/fixtures_docvqa"", filename=""nougat_paper.png"", repo_type=""dataset"") >>> image = Image.open(filepath) >>> pixel_values = processor(image, return_tensors=""pt"").pixel_values >>> # generate transcription (here we only generate 30 tokens) >>> outputs = model.generate( pixel_values.to(device), min_length=1, max_new_tokens=30, bad_words_ids=[[processor.tokenizer.unk_token_id]], ) >>> sequence = processor.batch_decode(outputs, skip_special_tokens=True)[0] >>> sequence = processor.post_process_generation(sequence, fix_markdown=False) >>> # note: we're using repr here such for the sake of printing the \n characters, feel free to just print the sequence >>> print(repr(sequence)) '\n\n# Nougat: Neural Optical Understanding for Academic Documents\n\n Lukas Blecher\n\nCorrespondence to: lblecher@' See the [model hub](https://huggingface.co/models?filter=nougat) to look for Nougat checkpoints. The model is identical to [Donut](donut) in terms of architecture. ## NougatImageProcessor [[autodoc]] NougatImageProcessor - preprocess ## NougatTokenizerFast [[autodoc]] NougatTokenizerFast ## NougatProcessor [[autodoc]] NougatProcessor - __call__ - from_pretrained - save_pretrained - batch_decode - decode - post_process_generation" model_doc/bart.md," # BART ## Overview The Bart model was proposed in [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019. According to the abstract, - Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT). - The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token. 
- BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE.

This model was contributed by [sshleifer](https://huggingface.co/sshleifer). The authors' code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/bart).

## Usage tips

- BART is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left.
- Sequence-to-sequence model with an encoder and a decoder. The encoder is fed a corrupted version of the tokens, the decoder is fed the original tokens (but has a mask to hide the future words like a regular transformers decoder). A composition of the following transformations is applied to the pretraining tasks for the encoder:
  * mask random tokens (like in BERT)
  * delete random tokens
  * mask a span of k tokens with a single mask token (a span of 0 tokens is an insertion of a mask token)
  * permute sentences
  * rotate the document to make it start at a specific token

## Implementation Notes

- Bart doesn't use `token_type_ids` for sequence classification. Use [`BartTokenizer`] or [`~BartTokenizer.encode`] to get the proper splitting.
- The forward pass of [`BartModel`] will create the `decoder_input_ids` if they are not passed. This is different from some other modeling APIs. A typical use case of this feature is mask filling.
- Model predictions are intended to be identical to the original implementation when `forced_bos_token_id=0`. This only works, however, if the string you pass to [`fairseq.encode`] starts with a space.
- [`~generation.GenerationMixin.generate`] should be used for conditional generation tasks like summarization, see the example in its docstring.
- Models that load the *facebook/bart-large-cnn* weights will not have a `mask_token_id`, or be able to perform mask-filling tasks.

## Mask Filling

The `facebook/bart-base` and `facebook/bart-large` checkpoints can be used to fill multi-token masks.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0)
tok = BartTokenizer.from_pretrained("facebook/bart-large")
example_english_phrase = "UN Chief Says There Is No <mask> in Syria"
batch = tok(example_english_phrase, return_tensors="pt")
generated_ids = model.generate(batch["input_ids"])
assert tok.batch_decode(generated_ids, skip_special_tokens=True) == [
    "UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria"
]
```

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BART. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

- A blog post on [Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker](https://huggingface.co/blog/sagemaker-distributed-training-seq2seq).
- A notebook on how to [finetune BART for summarization with fastai using blurr](https://colab.research.google.com/github/ohmeow/ohmeow_website/blob/master/posts/2021-05-25-mbart-sequence-classification-with-blurr.ipynb).
🌎 - A notebook on how to [finetune BART for summarization in two languages with Trainer class](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb). 🌎 - [`BartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization.ipynb). - [`TFBartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb). - [`FlaxBartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/summarization). - An example of how to train [`BartForConditionalGeneration`] with a Hugging Face `datasets` object can be found in this [forum discussion](https://discuss.huggingface.co/t/train-bart-for-conditional-generation-e-g-summarization/1904) - [Summarization](https://huggingface.co/course/chapter7/5?fw=pt#summarization) chapter of the 🤗 Hugging Face course. - [Summarization task guide](../tasks/summarization) - [`BartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFBartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - [`FlaxBartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb). - [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course. - [Masked language modeling task guide](../tasks/masked_language_modeling) - A notebook on how to [finetune mBART using Seq2SeqTrainer for Hindi to English translation](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb). 🌎 - [`BartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb). - [`TFBartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/translation) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb). 
- [Translation task guide](../tasks/translation) See also: - [Text classification task guide](../tasks/sequence_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Distilled checkpoints](https://huggingface.co/models?search=distilbart) are described in this [paper](https://arxiv.org/abs/2010.13002). ## BartConfig [[autodoc]] BartConfig - all ## BartTokenizer [[autodoc]] BartTokenizer - all ## BartTokenizerFast [[autodoc]] BartTokenizerFast - all ## BartModel [[autodoc]] BartModel - forward ## BartForConditionalGeneration [[autodoc]] BartForConditionalGeneration - forward ## BartForSequenceClassification [[autodoc]] BartForSequenceClassification - forward ## BartForQuestionAnswering [[autodoc]] BartForQuestionAnswering - forward ## BartForCausalLM [[autodoc]] BartForCausalLM - forward ## TFBartModel [[autodoc]] TFBartModel - call ## TFBartForConditionalGeneration [[autodoc]] TFBartForConditionalGeneration - call ## TFBartForSequenceClassification [[autodoc]] TFBartForSequenceClassification - call ## FlaxBartModel [[autodoc]] FlaxBartModel - __call__ - encode - decode ## FlaxBartForConditionalGeneration [[autodoc]] FlaxBartForConditionalGeneration - __call__ - encode - decode ## FlaxBartForSequenceClassification [[autodoc]] FlaxBartForSequenceClassification - __call__ - encode - decode ## FlaxBartForQuestionAnswering [[autodoc]] FlaxBartForQuestionAnswering - __call__ - encode - decode ## FlaxBartForCausalLM [[autodoc]] FlaxBartForCausalLM - __call__ " model_doc/gpt_bigcode.md," # GPTBigCode ## Overview The GPTBigCode model was proposed in [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) by BigCode. The listed authors are: Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra. The abstract from the paper is the following: *The BigCode project is an open-scientific collaboration working on the responsible development of large language models for code. This tech report describes the progress of the collaboration until December 2022, outlining the current state of the Personally Identifiable Information (PII) redaction pipeline, the experiments conducted to de-risk the model architecture, and the experiments investigating better preprocessing methods for the training data. We train 1.1B parameter models on the Java, JavaScript, and Python subsets of The Stack and evaluate them on the MultiPL-E text-to-code benchmark. We find that more aggressive filtering of near-duplicates can further boost performance and, surprisingly, that selecting files from repositories with 5+ GitHub stars deteriorates performance significantly. 
Our best model outperforms previous open-source multilingual code generation models (InCoder-6.7B and CodeGen-Multi-2.7B) in both left-to-right generation and infilling on the Java, JavaScript, and Python portions of MultiPL-E, despite being a substantially smaller model. All models are released under an OpenRAIL license at [this https URL.](https://huggingface.co/bigcode)*

The model is an optimized [GPT2 model](https://huggingface.co/docs/transformers/model_doc/gpt2) with support for Multi-Query Attention.

## Implementation details

The main differences compared to GPT2:

- Added support for Multi-Query Attention (a minimal sketch of the idea is shown below the Expected speedups section).
- Use `gelu_pytorch_tanh` instead of classic `gelu`.
- Avoid unnecessary synchronizations (this has since been added to GPT2 in #20061, but wasn't in the reference codebase).
- Use Linear layers instead of Conv1D (good speedup but makes the checkpoints incompatible).
- Merge `_attn` and `_upcast_and_reordered_attn`. Always merge the matmul with scaling. Rename `reorder_and_upcast_attn` -> `attention_softmax_in_fp32`.
- Cache the attention mask value to avoid recreating it every time.
- Use jit to fuse the attention fp32 casting, masking, softmax, and scaling.
- Combine the attention and causal masks into a single one, pre-computed for the whole model instead of every layer.
- Merge the key and value caches into one (this changes the format of layer_past/present, does it risk creating problems?)
- Use the memory layout `(self.num_heads, 3, self.head_dim)` instead of `(3, self.num_heads, self.head_dim)` for the QKV tensor with MHA (prevents an overhead with the merged key and values, but makes the checkpoints incompatible with the original gpt2 model).

You can read more about the optimizations in the [original pull request](https://github.com/huggingface/transformers/pull/22575).

## Combining Starcoder and Flash Attention 2

First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature.

```bash
pip install -U flash-attn --no-build-isolation
```

Also make sure that your hardware is compatible with Flash Attention 2. Read more about it in the official documentation of the flash-attn repository. Also make sure to load your model in half-precision (e.g. `torch.float16`).

To load and run a model using Flash Attention 2, refer to the snippet below:

```python
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> device = "cuda"  # the device to load the model onto

>>> model = AutoModelForCausalLM.from_pretrained("bigcode/gpt_bigcode-santacoder", torch_dtype=torch.float16, use_flash_attention_2=True)
>>> tokenizer = AutoTokenizer.from_pretrained("bigcode/gpt_bigcode-santacoder")

>>> prompt = "def hello_world():"

>>> model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
>>> model.to(device)

>>> generated_ids = model.generate(**model_inputs, max_new_tokens=30, do_sample=False)
>>> tokenizer.batch_decode(generated_ids)[0]
'def hello_world():\n print("hello world")\n\nif __name__ == "__main__":\n print("hello world")\n<|endoftext|>'
```

### Expected speedups

Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the `bigcode/starcoder` checkpoint and the Flash Attention 2 version of the model using two different sequence lengths.
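To make the Multi-Query Attention item in the implementation details more concrete, here is a small, self-contained sketch of the idea: all query heads share a single key/value head, which shrinks the key/value projections and the KV cache. This is an illustration only, not the actual GPTBigCode implementation (which additionally fuses the QKV projection and uses the memory layout described above); the class name and shapes below are chosen purely for clarity.

```python
import torch
import torch.nn.functional as F
from torch import nn


class MultiQueryAttention(nn.Module):
    """Illustration only: num_heads query heads, one shared key/value head."""

    def __init__(self, embed_dim: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        # queries get a full projection, keys/values a single head each
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.kv_proj = nn.Linear(embed_dim, 2 * self.head_dim)
        self.out_proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch, seq_len, _ = x.shape
        q = self.q_proj(x).view(batch, seq_len, self.num_heads, self.head_dim).transpose(1, 2)
        k, v = self.kv_proj(x).split(self.head_dim, dim=-1)
        # broadcast the single key/value head across all query heads
        k = k.unsqueeze(1)  # (batch, 1, seq_len, head_dim)
        v = v.unsqueeze(1)
        scores = q @ k.transpose(-1, -2) / self.head_dim**0.5
        causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device))
        scores = scores.masked_fill(~causal, float("-inf"))
        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(batch, seq_len, -1)
        return self.out_proj(out)


x = torch.randn(2, 8, 64)
print(MultiQueryAttention(64, 4)(x).shape)  # torch.Size([2, 8, 64])
```

With `num_heads` query heads but a single shared key/value head, the cached keys and values per token shrink from `2 * num_heads * head_dim` to `2 * head_dim`, which is where most of the inference-time memory savings come from.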
## GPTBigCodeConfig [[autodoc]] GPTBigCodeConfig ## GPTBigCodeModel [[autodoc]] GPTBigCodeModel - forward ## GPTBigCodeForCausalLM [[autodoc]] GPTBigCodeForCausalLM - forward ## GPTBigCodeForSequenceClassification [[autodoc]] GPTBigCodeForSequenceClassification - forward ## GPTBigCodeForTokenClassification [[autodoc]] GPTBigCodeForTokenClassification - forward " model_doc/vit_msn.md," # ViTMSN ## Overview The ViTMSN model was proposed in [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. The paper presents a joint-embedding architecture to match the prototypes of masked patches with that of the unmasked patches. With this setup, their method yields excellent performance in the low-shot and extreme low-shot regimes. The abstract from the paper is the following: *We propose Masked Siamese Networks (MSN), a self-supervised learning framework for learning image representations. Our approach matches the representation of an image view containing randomly masked patches to the representation of the original unmasked image. This self-supervised pre-training strategy is particularly scalable when applied to Vision Transformers since only the unmasked patches are processed by the network. As a result, MSNs improve the scalability of joint-embedding architectures, while producing representations of a high semantic level that perform competitively on low-shot image classification. For instance, on ImageNet-1K, with only 5,000 annotated images, our base MSN model achieves 72.4% top-1 accuracy, and with 1% of ImageNet-1K labels, we achieve 75.7% top-1 accuracy, setting a new state-of-the-art for self-supervised learning on this benchmark.* MSN architecture. Taken from the original paper. This model was contributed by [sayakpaul](https://huggingface.co/sayakpaul). The original code can be found [here](https://github.com/facebookresearch/msn). ## Usage tips - MSN (masked siamese networks) is a method for self-supervised pre-training of Vision Transformers (ViTs). The pre-training objective is to match the prototypes assigned to the unmasked views of the images to that of the masked views of the same images. - The authors have only released pre-trained weights of the backbone (ImageNet-1k pre-training). So, to use that on your own image classification dataset, use the [`ViTMSNForImageClassification`] class which is initialized from [`ViTMSNModel`]. Follow [this notebook](https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb) for a detailed tutorial on fine-tuning. - MSN is particularly useful in the low-shot and extreme low-shot regimes. Notably, it achieves 75.7% top-1 accuracy with only 1% of ImageNet-1K labels when fine-tuned. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT MSN. - [`ViTMSNForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! 
The resource should ideally demonstrate something new instead of duplicating an existing resource. ## ViTMSNConfig [[autodoc]] ViTMSNConfig ## ViTMSNModel [[autodoc]] ViTMSNModel - forward ## ViTMSNForImageClassification [[autodoc]] ViTMSNForImageClassification - forward " model_doc/reformer.md," # Reformer ## Overview The Reformer model was proposed in the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451.pdf) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. The abstract from the paper is the following: *Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O(L^2) to O(Llog(L)), where L is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The Authors' code can be found [here](https://github.com/google/trax/tree/master/trax/models/reformer). ## Usage tips - Reformer does **not** work with *torch.nn.DataParallel* due to a bug in PyTorch, see [issue #36035](https://github.com/pytorch/pytorch/issues/36035). - Use Axial position encoding (see below for more details). It’s a mechanism to avoid having a huge positional encoding matrix (when the sequence length is very big) by factorizing it into smaller matrices. - Replace traditional attention by LSH (local-sensitive hashing) attention (see below for more details). It’s a technique to avoid computing the full product query-key in the attention layers. - Avoid storing the intermediate results of each layer by using reversible transformer layers to obtain them during the backward pass (subtracting the residuals from the input of the next layer gives them back) or recomputing them for results inside a given layer (less efficient than storing them but saves memory). - Compute the feedforward operations by chunks and not on the whole batch. ### Axial Positional Encodings Axial Positional Encodings were first implemented in Google's [trax library](https://github.com/google/trax/blob/4d99ad4965bab1deba227539758d59f0df0fef48/trax/layers/research/position_encodings.py#L29) and developed by the authors of this model's paper. In models that are treating very long input sequences, the conventional position id encodings store an embedings vector of size \\(d\\) being the `config.hidden_size` for every position \\(i, \ldots, n_s\\), with \\(n_s\\) being `config.max_embedding_size`. This means that having a sequence length of \\(n_s = 2^{19} \approx 0.5M\\) and a `config.hidden_size` of \\(d = 2^{10} \approx 1000\\) would result in a position encoding matrix: $$X_{i,j}, \text{ with } i \in \left[1,\ldots, d\right] \text{ and } j \in \left[1,\ldots, n_s\right]$$ which alone has over 500M parameters to store. 
Axial positional encodings factorize \\(X_{i,j}\\) into two matrices: $$X^{1}_{i,j}, \text{ with } i \in \left[1,\ldots, d^1\right] \text{ and } j \in \left[1,\ldots, n_s^1\right]$$ and $$X^{2}_{i,j}, \text{ with } i \in \left[1,\ldots, d^2\right] \text{ and } j \in \left[1,\ldots, n_s^2\right]$$ with: $$d = d^1 + d^2 \text{ and } n_s = n_s^1 \times n_s^2 .$$ Therefore the following holds: $$X_{i,j} = \begin{cases} X^{1}_{i, k}, & \text{if }\ i < d^1 \text{ with } k = j \mod n_s^1 \\ X^{2}_{i - d^1, l}, & \text{if } i \ge d^1 \text{ with } l = \lfloor\frac{j}{n_s^1}\rfloor \end{cases}$$ Intuitively, this means that a position embedding vector \\(x_j \in \mathbb{R}^{d}\\) is now the composition of two factorized embedding vectors: \\(x^1_{k, l} + x^2_{l, k}\\), where as the `config.max_embedding_size` dimension \\(j\\) is factorized into \\(k \text{ and } l\\). This design ensures that each position embedding vector \\(x_j\\) is unique. Using the above example again, axial position encoding with \\(d^1 = 2^9, d^2 = 2^9, n_s^1 = 2^9, n_s^2 = 2^{10}\\) can drastically reduced the number of parameters from 500 000 000 to \\(2^{18} + 2^{19} \approx 780 000\\) parameters, this means 85% less memory usage. In practice, the parameter `config.axial_pos_embds_dim` is set to a tuple \\((d^1, d^2)\\) which sum has to be equal to `config.hidden_size` and `config.axial_pos_shape` is set to a tuple \\((n_s^1, n_s^2)\\) which product has to be equal to `config.max_embedding_size`, which during training has to be equal to the *sequence length* of the `input_ids`. ### LSH Self Attention In Locality sensitive hashing (LSH) self attention the key and query projection weights are tied. Therefore, the key query embedding vectors are also tied. LSH self attention uses the locality sensitive hashing mechanism proposed in [Practical and Optimal LSH for Angular Distance](https://arxiv.org/abs/1509.02897) to assign each of the tied key query embedding vectors to one of `config.num_buckets` possible buckets. The premise is that the more ""similar"" key query embedding vectors (in terms of *cosine similarity*) are to each other, the more likely they are assigned to the same bucket. The accuracy of the LSH mechanism can be improved by increasing `config.num_hashes` or directly the argument `num_hashes` of the forward function so that the output of the LSH self attention better approximates the output of the ""normal"" full self attention. The buckets are then sorted and chunked into query key embedding vector chunks each of length `config.lsh_chunk_length`. For each chunk, the query embedding vectors attend to its key vectors (which are tied to themselves) and to the key embedding vectors of `config.lsh_num_chunks_before` previous neighboring chunks and `config.lsh_num_chunks_after` following neighboring chunks. For more information, see the [original Paper](https://arxiv.org/abs/2001.04451) or this great [blog post](https://www.pragmatic.ml/reformer-deep-dive/). Note that `config.num_buckets` can also be factorized into a list \\((n_{\text{buckets}}^1, n_{\text{buckets}}^2)\\). This way instead of assigning the query key embedding vectors to one of \\((1,\ldots, n_{\text{buckets}})\\) they are assigned to one of \\((1-1,\ldots, n_{\text{buckets}}^1-1, \ldots, 1-n_{\text{buckets}}^2, \ldots, n_{\text{buckets}}^1-n_{\text{buckets}}^2)\\). This is crucial for very long sequences to save memory. 
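To tie the configuration constraints in this section together, here is a small sketch of a consistent Reformer configuration. The concrete numbers are made up for illustration, and the keyword names follow the current [`ReformerConfig`] (`lsh_attn_chunk_length`/`local_attn_chunk_length` correspond to the `lsh_chunk_length`/`local_chunk_length` shorthand used in the prose).

```python
from math import lcm

from transformers import ReformerConfig

# illustrative target training sequence length
sequence_length = 2**14  # 16384 tokens

config = ReformerConfig(
    hidden_size=256,
    axial_pos_embds=True,
    axial_pos_embds_dim=[64, 192],  # d^1 + d^2 must equal hidden_size (64 + 192 = 256)
    axial_pos_shape=[128, 128],     # n_s^1 * n_s^2 must equal the training sequence length
    attn_layers=["lsh", "local", "lsh", "local"],
    lsh_attn_chunk_length=64,
    local_attn_chunk_length=64,
    num_buckets=None,  # let a good value be computed on the fly (see below)
    num_hashes=1,
)

# sanity-check the constraints described in this section
assert sum(config.axial_pos_embds_dim) == config.hidden_size
assert config.axial_pos_shape[0] * config.axial_pos_shape[1] == sequence_length
assert sequence_length % lcm(config.lsh_attn_chunk_length, config.local_attn_chunk_length) == 0
```

With these (assumed) values, the sequences fed to the model during training would have to be 16384 tokens long, which equals the product of `axial_pos_shape` and is also a multiple of the least common multiple of the two chunk lengths, as required in the Training section below.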
When training a model from scratch, it is recommended to leave `config.num_buckets=None`, so that depending on the sequence length a good value for `num_buckets` is calculated on the fly. This value will then automatically be saved in the config and should be reused for inference. Using LSH self attention, the memory and time complexity of the query-key matmul operation can be reduced from \\(\mathcal{O}(n_s \times n_s)\\) to \\(\mathcal{O}(n_s \times \log(n_s))\\), which usually represents the memory and time bottleneck in a transformer model, with \\(n_s\\) being the sequence length. ### Local Self Attention Local self attention is essentially a ""normal"" self attention layer with key, query and value projections, but is chunked so that in each chunk of length `config.local_chunk_length` the query embedding vectors only attends to the key embedding vectors in its chunk and to the key embedding vectors of `config.local_num_chunks_before` previous neighboring chunks and `config.local_num_chunks_after` following neighboring chunks. Using Local self attention, the memory and time complexity of the query-key matmul operation can be reduced from \\(\mathcal{O}(n_s \times n_s)\\) to \\(\mathcal{O}(n_s \times \log(n_s))\\), which usually represents the memory and time bottleneck in a transformer model, with \\(n_s\\) being the sequence length. ### Training During training, we must ensure that the sequence length is set to a value that can be divided by the least common multiple of `config.lsh_chunk_length` and `config.local_chunk_length` and that the parameters of the Axial Positional Encodings are correctly set as described above. Reformer is very memory efficient so that the model can easily be trained on sequences as long as 64000 tokens. For training, the [`ReformerModelWithLMHead`] should be used as follows: thon input_ids = tokenizer.encode(""This is a sentence from the training data"", return_tensors=""pt"") loss = model(input_ids, labels=input_ids)[0] ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) ## ReformerConfig [[autodoc]] ReformerConfig ## ReformerTokenizer [[autodoc]] ReformerTokenizer - save_vocabulary ## ReformerTokenizerFast [[autodoc]] ReformerTokenizerFast ## ReformerModel [[autodoc]] ReformerModel - forward ## ReformerModelWithLMHead [[autodoc]] ReformerModelWithLMHead - forward ## ReformerForMaskedLM [[autodoc]] ReformerForMaskedLM - forward ## ReformerForSequenceClassification [[autodoc]] ReformerForSequenceClassification - forward ## ReformerForQuestionAnswering [[autodoc]] ReformerForQuestionAnswering - forward " model_doc/nllb-moe.md," # NLLB-MOE ## Overview The NLLB model was presented in [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by Marta R. 
Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. The abstract of the paper is the following: *Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system.* This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/facebookresearch/fairseq). ## Usage tips - M2M100ForConditionalGeneration is the base model for both NLLB and NLLB MoE - The NLLB-MoE is very similar to the NLLB model, but it's feed forward layer is based on the implementation of SwitchTransformers. - The tokenizer is the same as the NLLB models. ## Implementation differences with SwitchTransformers The biggest difference is the way the tokens are routed. NLLB-MoE uses a `top-2-gate` which means that for each input, only the top two experts are selected based on the highest predicted probabilities from the gating network, and the remaining experts are ignored. In `SwitchTransformers`, only the top-1 probabilities are computed, which means that tokens have less probability of being forwarded. Moreover, if a token is not routed to any expert, `SwitchTransformers` still adds its unmodified hidden states (kind of like a residual connection) while they are masked in `NLLB`'s top-2 routing mechanism. ## Generating with NLLB-MoE The available checkpoints require around 350GB of storage. Make sure to use `accelerate` if you do not have enough RAM on your machine. While generating the target text set the `forced_bos_token_id` to the target language id. 
The following example shows how to translate English to French using the *facebook/nllb-200-distilled-600M* model. Note that we're using the BCP-47 code for French `fra_Latn`. See [here](https://github.com/facebookresearch/flores/blob/main/flores200/README.md#languages-in-flores-200) for the list of all BCP-47 in the Flores 200 dataset. thon >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""facebook/nllb-moe-54b"") >>> model = AutoModelForSeq2SeqLM.from_pretrained(""facebook/nllb-moe-54b"") >>> article = ""Previously, Ring's CEO, Jamie Siminoff, remarked the company started when his doorbell wasn't audible from his shop in his garage."" >>> inputs = tokenizer(article, return_tensors=""pt"") >>> translated_tokens = model.generate( **inputs, forced_bos_token_id=tokenizer.lang_code_to_id[""fra_Latn""], max_length=50 ) >>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] ""Auparavant, le PDG de Ring, Jamie Siminoff, a fait remarquer que la société avait commencé lorsque sa sonnette n'était pas audible depuis son magasin dans son garage."" ### Generating from any other language than English English (`eng_Latn`) is set as the default language from which to translate. In order to specify that you'd like to translate from a different language, you should specify the BCP-47 code in the `src_lang` keyword argument of the tokenizer initialization. See example below for a translation from romanian to german: thon >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""facebook/nllb-moe-54b"", src_lang=""ron_Latn"") >>> model = AutoModelForSeq2SeqLM.from_pretrained(""facebook/nllb-moe-54b"") >>> article = ""Şeful ONU spune că nu există o soluţie militară în Siria"" >>> inputs = tokenizer(article, return_tensors=""pt"") >>> translated_tokens = model.generate( **inputs, forced_bos_token_id=tokenizer.lang_code_to_id[""deu_Latn""], max_length=30 ) >>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] ## Resources - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## NllbMoeConfig [[autodoc]] NllbMoeConfig ## NllbMoeTop2Router [[autodoc]] NllbMoeTop2Router - route_tokens - forward ## NllbMoeSparseMLP [[autodoc]] NllbMoeSparseMLP - forward ## NllbMoeModel [[autodoc]] NllbMoeModel - forward ## NllbMoeForConditionalGeneration [[autodoc]] NllbMoeForConditionalGeneration - forward " model_doc/mobilebert.md," # MobileBERT ## Overview The MobileBERT model was proposed in [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. It's a bidirectional transformer based on the BERT model, which is compressed and accelerated using several approaches. The abstract from the paper is the following: *Natural Language Processing (NLP) has recently achieved great success by using huge pre-trained models with hundreds of millions of parameters. However, these models suffer from heavy model sizes and high latency such that they cannot be deployed to resource-limited mobile devices. In this paper, we propose MobileBERT for compressing and accelerating the popular BERT model. Like the original BERT, MobileBERT is task-agnostic, that is, it can be generically applied to various downstream NLP tasks via simple fine-tuning. 
Basically, MobileBERT is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. To train MobileBERT, we first train a specially designed teacher model, an inverted-bottleneck incorporated BERT_LARGE model. Then, we conduct knowledge transfer from this teacher to MobileBERT. Empirical studies show that MobileBERT is 4.3x smaller and 5.5x faster than BERT_BASE while achieving competitive results on well-known benchmarks. On the natural language inference tasks of GLUE, MobileBERT achieves a GLUEscore o 77.7 (0.6 lower than BERT_BASE), and 62 ms latency on a Pixel 4 phone. On the SQuAD v1.1/v2.0 question answering task, MobileBERT achieves a dev F1 score of 90.0/79.2 (1.5/2.1 higher than BERT_BASE).* This model was contributed by [vshampor](https://huggingface.co/vshampor). The original code can be found [here](https://github.com/google-research/google-research/tree/master/mobilebert). ## Usage tips - MobileBERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - MobileBERT is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained with a causal language modeling (CLM) objective are better in that regard. ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## MobileBertConfig [[autodoc]] MobileBertConfig ## MobileBertTokenizer [[autodoc]] MobileBertTokenizer ## MobileBertTokenizerFast [[autodoc]] MobileBertTokenizerFast ## MobileBert specific outputs [[autodoc]] models.mobilebert.modeling_mobilebert.MobileBertForPreTrainingOutput [[autodoc]] models.mobilebert.modeling_tf_mobilebert.TFMobileBertForPreTrainingOutput ## MobileBertModel [[autodoc]] MobileBertModel - forward ## MobileBertForPreTraining [[autodoc]] MobileBertForPreTraining - forward ## MobileBertForMaskedLM [[autodoc]] MobileBertForMaskedLM - forward ## MobileBertForNextSentencePrediction [[autodoc]] MobileBertForNextSentencePrediction - forward ## MobileBertForSequenceClassification [[autodoc]] MobileBertForSequenceClassification - forward ## MobileBertForMultipleChoice [[autodoc]] MobileBertForMultipleChoice - forward ## MobileBertForTokenClassification [[autodoc]] MobileBertForTokenClassification - forward ## MobileBertForQuestionAnswering [[autodoc]] MobileBertForQuestionAnswering - forward ## TFMobileBertModel [[autodoc]] TFMobileBertModel - call ## TFMobileBertForPreTraining [[autodoc]] TFMobileBertForPreTraining - call ## TFMobileBertForMaskedLM [[autodoc]] TFMobileBertForMaskedLM - call ## TFMobileBertForNextSentencePrediction [[autodoc]] TFMobileBertForNextSentencePrediction - call ## TFMobileBertForSequenceClassification [[autodoc]] TFMobileBertForSequenceClassification - call ## TFMobileBertForMultipleChoice [[autodoc]] TFMobileBertForMultipleChoice - call ## TFMobileBertForTokenClassification [[autodoc]] TFMobileBertForTokenClassification - call ## TFMobileBertForQuestionAnswering [[autodoc]] TFMobileBertForQuestionAnswering - call " model_doc/maskformer.md," # MaskFormer This is a 
recently introduced model so the API hasn't been tested extensively. There may be some bugs or slight breaking changes to fix it in the future. If you see something strange, file a [Github Issue](https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title). ## Overview The MaskFormer model was proposed in [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. MaskFormer addresses semantic segmentation with a mask classification paradigm instead of performing classic pixel-level classification. The abstract from the paper is the following: *Modern approaches typically formulate semantic segmentation as a per-pixel classification task, while instance-level segmentation is handled with an alternative mask classification. Our key insight: mask classification is sufficiently general to solve both semantic- and instance-level segmentation tasks in a unified manner using the exact same model, loss, and training procedure. Following this observation, we propose MaskFormer, a simple mask classification model which predicts a set of binary masks, each associated with a single global class label prediction. Overall, the proposed mask classification-based method simplifies the landscape of effective approaches to semantic and panoptic segmentation tasks and shows excellent empirical results. In particular, we observe that MaskFormer outperforms per-pixel classification baselines when the number of classes is large. Our mask classification-based method outperforms both current state-of-the-art semantic (55.6 mIoU on ADE20K) and panoptic segmentation (52.7 PQ on COCO) models.* The figure below illustrates the architecture of MaskFormer. Taken from the [original paper](https://arxiv.org/abs/2107.06278). This model was contributed by [francesco](https://huggingface.co/francesco). The original code can be found [here](https://github.com/facebookresearch/MaskFormer). ## Usage tips - MaskFormer's Transformer decoder is identical to the decoder of [DETR](detr). During training, the authors of DETR did find it helpful to use auxiliary losses in the decoder, especially to help the model output the correct number of objects of each class. If you set the parameter `use_auxilary_loss` of [`MaskFormerConfig`] to `True`, then prediction feedforward neural networks and Hungarian losses are added after each decoder layer (with the FFNs sharing parameters). - If you want to train the model in a distributed environment across multiple nodes, then one should update the `get_num_masks` function inside in the `MaskFormerLoss` class of `modeling_maskformer.py`. When training on multiple nodes, this should be set to the average number of target masks across all nodes, as can be seen in the original implementation [here](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169). - One can use [`MaskFormerImageProcessor`] to prepare images for the model and optional targets for the model. - To get the final segmentation, depending on the task, you can call [`~MaskFormerImageProcessor.post_process_semantic_segmentation`] or [`~MaskFormerImageProcessor.post_process_panoptic_segmentation`]. Both tasks can be solved using [`MaskFormerForInstanceSegmentation`] output, panoptic segmentation accepts an optional `label_ids_to_fuse` argument to fuse instances of the target object/s (e.g. 
sky) together. ## Resources - All notebooks that illustrate inference as well as fine-tuning on custom data with MaskFormer can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/MaskFormer). ## MaskFormer specific outputs [[autodoc]] models.maskformer.modeling_maskformer.MaskFormerModelOutput [[autodoc]] models.maskformer.modeling_maskformer.MaskFormerForInstanceSegmentationOutput ## MaskFormerConfig [[autodoc]] MaskFormerConfig ## MaskFormerImageProcessor [[autodoc]] MaskFormerImageProcessor - preprocess - encode_inputs - post_process_semantic_segmentation - post_process_instance_segmentation - post_process_panoptic_segmentation ## MaskFormerFeatureExtractor [[autodoc]] MaskFormerFeatureExtractor - __call__ - encode_inputs - post_process_semantic_segmentation - post_process_instance_segmentation - post_process_panoptic_segmentation ## MaskFormerModel [[autodoc]] MaskFormerModel - forward ## MaskFormerForInstanceSegmentation [[autodoc]] MaskFormerForInstanceSegmentation - forward" model_doc/time_series_transformer.md," # Time Series Transformer ## Overview The Time Series Transformer model is a vanilla encoder-decoder Transformer for time series forecasting. This model was contributed by [kashif](https://huggingface.co/kashif). ## Usage tips - Similar to other models in the library, [`TimeSeriesTransformerModel`] is the raw Transformer without any head on top, and [`TimeSeriesTransformerForPrediction`] adds a distribution head on top of the former, which can be used for time-series forecasting. Note that this is a so-called probabilistic forecasting model, not a point forecasting model. This means that the model learns a distribution, from which one can sample. The model doesn't directly output values. - [`TimeSeriesTransformerForPrediction`] consists of 2 blocks: an encoder, which takes a `context_length` of time series values as input (called `past_values`), and a decoder, which predicts a `prediction_length` of time series values into the future (called `future_values`). During training, one needs to provide pairs of (`past_values` and `future_values`) to the model. - In addition to the raw (`past_values` and `future_values`), one typically provides additional features to the model. These can be the following: - `past_time_features`: temporal features which the model will add to `past_values`. These serve as ""positional encodings"" for the Transformer encoder. Examples are ""day of the month"", ""month of the year"", etc. as scalar values (and then stacked together as a vector). e.g. if a given time-series value was obtained on the 11th of August, then one could have [11, 8] as time feature vector (11 being ""day of the month"", 8 being ""month of the year""). - `future_time_features`: temporal features which the model will add to `future_values`. These serve as ""positional encodings"" for the Transformer decoder. Examples are ""day of the month"", ""month of the year"", etc. as scalar values (and then stacked together as a vector). e.g. if a given time-series value was obtained on the 11th of August, then one could have [11, 8] as time feature vector (11 being ""day of the month"", 8 being ""month of the year""). - `static_categorical_features`: categorical features which are static over time (i.e., have the same value for all `past_values` and `future_values`). An example here is the store ID or region ID that identifies a given time-series. Note that these features need to be known for ALL data points (also those in the future). 
- `static_real_features`: real-valued features which are static over time (i.e., have the same value for all `past_values` and `future_values`). An example here is the image representation of the product for which you have the time-series values (like the [ResNet](resnet) embedding of a ""shoe"" picture, if your time-series is about the sales of shoes). Note that these features need to be known for ALL data points (also those in the future). - The model is trained using ""teacher-forcing"", similar to how a Transformer is trained for machine translation. This means that, during training, one shifts the `future_values` one position to the right as input to the decoder, prepended by the last value of `past_values`. At each time step, the model needs to predict the next target. So the set-up of training is similar to a GPT model for language, except that there's no notion of `decoder_start_token_id` (we just use the last value of the context as initial input for the decoder). - At inference time, we give the final value of the `past_values` as input to the decoder. Next, we can sample from the model to make a prediction at the next time step, which is then fed to the decoder in order to make the next prediction (also called autoregressive generation). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - Check out the Time Series Transformer blog-post in HuggingFace blog: [Probabilistic Time Series Forecasting with 🤗 Transformers](https://huggingface.co/blog/time-series-transformers) ## TimeSeriesTransformerConfig [[autodoc]] TimeSeriesTransformerConfig ## TimeSeriesTransformerModel [[autodoc]] TimeSeriesTransformerModel - forward ## TimeSeriesTransformerForPrediction [[autodoc]] TimeSeriesTransformerForPrediction - forward " model_doc/wavlm.md," # WavLM ## Overview The WavLM model was proposed in [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. The abstract from the paper is the following: *Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. 
WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.* Relevant checkpoints can be found under https://huggingface.co/models?other=wavlm. This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The Authors' code can be found [here](https://github.com/microsoft/unilm/tree/master/wavlm). ## Usage tips - WavLM is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use [`Wav2Vec2Processor`] for the feature extraction. - WavLM model can be fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`]. - WavLM performs especially well on speaker verification, speaker identification, and speaker diarization tasks. ## Resources - [Audio classification task guide](../tasks/audio_classification) - [Automatic speech recognition task guide](../tasks/asr) ## WavLMConfig [[autodoc]] WavLMConfig ## WavLMModel [[autodoc]] WavLMModel - forward ## WavLMForCTC [[autodoc]] WavLMForCTC - forward ## WavLMForSequenceClassification [[autodoc]] WavLMForSequenceClassification - forward ## WavLMForAudioFrameClassification [[autodoc]] WavLMForAudioFrameClassification - forward ## WavLMForXVector [[autodoc]] WavLMForXVector - forward " model_doc/convbert.md," # ConvBERT ## Overview The ConvBERT model was proposed in [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. The abstract from the paper is the following: *Pre-trained language models like BERT and its variants have recently achieved impressive performance in various natural language understanding tasks. However, BERT heavily relies on the global self-attention block and thus suffers large memory footprint and computation cost. Although all its attention heads query on the whole input sequence for generating the attention map from a global perspective, we observe some heads only need to learn local dependencies, which means the existence of computation redundancy. We therefore propose a novel span-based dynamic convolution to replace these self-attention heads to directly model local dependencies. The novel convolution heads, together with the rest self-attention heads, form a new mixed attention block that is more efficient at both global and local context learning. We equip BERT with this mixed attention design and build a ConvBERT model. Experiments have shown that ConvBERT significantly outperforms BERT and its variants in various downstream tasks, with lower training cost and fewer model parameters. Remarkably, ConvBERTbase model achieves 86.4 GLUE score, 0.7 higher than ELECTRAbase, while using less than 1/4 training cost. Code and pre-trained models will be released.* This model was contributed by [abhishek](https://huggingface.co/abhishek). The original implementation can be found here: https://github.com/yitu-opensource/ConvBert ## Usage tips ConvBERT training tips are similar to those of BERT. For usage tips refer to [BERT documentation](bert). 
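Since ConvBERT is used exactly like BERT, a minimal masked language modeling sketch looks as follows. The `YituTech/conv-bert-base` checkpoint name is an assumption; substitute any ConvBERT checkpoint from the Hub.

```python
import torch
from transformers import AutoTokenizer, ConvBertForMaskedLM

# checkpoint name is an assumption; any ConvBERT checkpoint works the same way
tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = ConvBertForMaskedLM.from_pretrained("YituTech/conv-bert-base")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# take the highest-scoring token at the masked position
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```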
## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## ConvBertConfig [[autodoc]] ConvBertConfig ## ConvBertTokenizer [[autodoc]] ConvBertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## ConvBertTokenizerFast [[autodoc]] ConvBertTokenizerFast ## ConvBertModel [[autodoc]] ConvBertModel - forward ## ConvBertForMaskedLM [[autodoc]] ConvBertForMaskedLM - forward ## ConvBertForSequenceClassification [[autodoc]] ConvBertForSequenceClassification - forward ## ConvBertForMultipleChoice [[autodoc]] ConvBertForMultipleChoice - forward ## ConvBertForTokenClassification [[autodoc]] ConvBertForTokenClassification - forward ## ConvBertForQuestionAnswering [[autodoc]] ConvBertForQuestionAnswering - forward ## TFConvBertModel [[autodoc]] TFConvBertModel - call ## TFConvBertForMaskedLM [[autodoc]] TFConvBertForMaskedLM - call ## TFConvBertForSequenceClassification [[autodoc]] TFConvBertForSequenceClassification - call ## TFConvBertForMultipleChoice [[autodoc]] TFConvBertForMultipleChoice - call ## TFConvBertForTokenClassification [[autodoc]] TFConvBertForTokenClassification - call ## TFConvBertForQuestionAnswering [[autodoc]] TFConvBertForQuestionAnswering - call " model_doc/sew-d.md," # SEW-D ## Overview SEW-D (Squeezed and Efficient Wav2Vec with Disentangled attention) was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. The abstract from the paper is the following: *This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.* This model was contributed by [anton-l](https://huggingface.co/anton-l). ## Usage tips - SEW-D is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. - SEWDForCTC is fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`]. 
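To make the CTC decoding step above concrete, here is a minimal transcription sketch. The checkpoint name is an assumption; any SEW-D checkpoint fine-tuned with CTC works the same way.

```python
import torch
from datasets import load_dataset
from transformers import AutoProcessor, SEWDForCTC

# checkpoint name is an assumption; substitute any SEW-D CTC checkpoint
processor = AutoProcessor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# greedy CTC decoding: most likely token per frame, then collapse repeats and blanks
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```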
## Resources - [Audio classification task guide](../tasks/audio_classification) - [Automatic speech recognition task guide](../tasks/asr) ## SEWDConfig [[autodoc]] SEWDConfig ## SEWDModel [[autodoc]] SEWDModel - forward ## SEWDForCTC [[autodoc]] SEWDForCTC - forward ## SEWDForSequenceClassification [[autodoc]] SEWDForSequenceClassification - forward " model_doc/prophetnet.md," # ProphetNet ## Overview The ProphetNet model was proposed in [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training,](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, Ming Zhou on 13 Jan, 2020. ProphetNet is an encoder-decoder model and can predict n-future tokens for ""ngram"" language modeling instead of just the next token. The abstract from the paper is the following: *In this paper, we present a new sequence-to-sequence pretraining model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale dataset (160GB) respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new state-of-the-art results on all these datasets compared to the models using the same scale pretraining corpus.* The Authors' code can be found [here](https://github.com/microsoft/ProphetNet). ## Usage tips - ProphetNet is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - The model architecture is based on the original Transformer, but replaces the “standard” self-attention mechanism in the decoder by a a main self-attention mechanism and a self and n-stream (predict) self-attention mechanism. 
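As an illustration of how the encoder-decoder API is used in practice, here is a minimal abstractive summarization sketch. The `microsoft/prophetnet-large-uncased-cnndm` checkpoint name is an assumption; any ProphetNet sequence-to-sequence checkpoint can be substituted.

```python
from transformers import ProphetNetForConditionalGeneration, ProphetNetTokenizer

# checkpoint name is an assumption; substitute any ProphetNet seq2seq checkpoint
tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased-cnndm")
model = ProphetNetForConditionalGeneration.from_pretrained("microsoft/prophetnet-large-uncased-cnndm")

article = (
    "The US state department said on Monday it was ordering the departure of "
    "non-emergency staff from its embassy as a precautionary measure."
)
inputs = tokenizer(article, return_tensors="pt")

# standard beam-search generation; the future n-gram objective only shapes pre-training
summary_ids = model.generate(**inputs, num_beams=4, max_length=60)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```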
## Resources - [Causal language modeling task guide](../tasks/language_modeling) - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## ProphetNetConfig [[autodoc]] ProphetNetConfig ## ProphetNetTokenizer [[autodoc]] ProphetNetTokenizer ## ProphetNet specific outputs [[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqLMOutput [[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqModelOutput [[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetDecoderModelOutput [[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetDecoderLMOutput ## ProphetNetModel [[autodoc]] ProphetNetModel - forward ## ProphetNetEncoder [[autodoc]] ProphetNetEncoder - forward ## ProphetNetDecoder [[autodoc]] ProphetNetDecoder - forward ## ProphetNetForConditionalGeneration [[autodoc]] ProphetNetForConditionalGeneration - forward ## ProphetNetForCausalLM [[autodoc]] ProphetNetForCausalLM - forward " model_doc/levit.md," # LeViT ## Overview The LeViT model was proposed in [LeViT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze. LeViT improves the [Vision Transformer (ViT)](vit) in performance and efficiency by a few architectural differences such as activation maps with decreasing resolutions in Transformers and the introduction of an attention bias to integrate positional information. The abstract from the paper is the following: *We design a family of image classification architectures that optimize the trade-off between accuracy and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures, which are competitive on highly parallel processing hardware. We revisit principles from the extensive literature on convolutional neural networks to apply them to transformers, in particular activation maps with decreasing resolutions. We also introduce the attention bias, a new way to integrate positional information in vision transformers. As a result, we propose LeVIT: a hybrid neural network for fast inference image classification. We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of application scenarios. Our extensive experiments empirically validate our technical choices and show they are suitable to most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 5 times faster than EfficientNet on CPU. * LeViT Architecture. Taken from the original paper. This model was contributed by [anugunj](https://huggingface.co/anugunj). The original code can be found [here](https://github.com/facebookresearch/LeViT). ## Usage tips - Compared to ViT, LeViT models use an additional distillation head to effectively learn from a teacher (which, in the LeViT paper, is a ResNet like-model). The distillation head is learned through backpropagation under supervision of a ResNet like-model. They also draw inspiration from convolution neural networks to use activation maps with decreasing resolutions to increase the efficiency. 
- There are 2 ways to fine-tune distilled models, either (1) in a classic way, by only placing a prediction head on top of the final hidden state and not using the distillation head, or (2) by placing both a prediction head and distillation head on top of the final hidden state. In that case, the prediction head is trained using regular cross-entropy between the prediction of the head and the ground-truth label, while the distillation prediction head is trained using hard distillation (cross-entropy between the prediction of the distillation head and the label predicted by the teacher). At inference time, one takes the average prediction between both heads as final prediction. (2) is also called ""fine-tuning with distillation"", because one relies on a teacher that has already been fine-tuned on the downstream dataset. In terms of models, (1) corresponds to [`LevitForImageClassification`] and (2) corresponds to [`LevitForImageClassificationWithTeacher`]. - All released checkpoints were pre-trained and fine-tuned on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k) (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). only. No external data was used. This is in contrast with the original ViT model, which used external data like the JFT-300M dataset/Imagenet-21k for pre-training. - The authors of LeViT released 5 trained LeViT models, which you can directly plug into [`LevitModel`] or [`LevitForImageClassification`]. Techniques like data augmentation, optimization, and regularization were used in order to simulate training on a much larger dataset (while only using ImageNet-1k for pre-training). The 5 variants available are (all trained on images of size 224x224): *facebook/levit-128S*, *facebook/levit-128*, *facebook/levit-192*, *facebook/levit-256* and *facebook/levit-384*. Note that one should use [`LevitImageProcessor`] in order to prepare images for the model. - [`LevitForImageClassificationWithTeacher`] currently supports only inference and not training or fine-tuning. - You can check out demo notebooks regarding inference as well as fine-tuning on custom data [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/VisionTransformer) (you can just replace [`ViTFeatureExtractor`] by [`LevitImageProcessor`] and [`ViTForImageClassification`] by [`LevitForImageClassification`] or [`LevitForImageClassificationWithTeacher`]). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LeViT. - [`LevitForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
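Tying the usage tips above together, the snippet below sketches single-image inference with one of the released checkpoints (*facebook/levit-128S*). The example image URL is only illustrative.

```python
import requests
import torch
from PIL import Image
from transformers import LevitImageProcessor, LevitForImageClassificationWithTeacher

# one of the five released checkpoints listed in the usage tips
processor = LevitImageProcessor.from_pretrained("facebook/levit-128S")
model = LevitForImageClassificationWithTeacher.from_pretrained("facebook/levit-128S")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    # logits are the average of the classification head and the distillation head
    logits = model(**inputs).logits

print(model.config.id2label[logits.argmax(-1).item()])
```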
## LevitConfig [[autodoc]] LevitConfig ## LevitFeatureExtractor [[autodoc]] LevitFeatureExtractor - __call__ ## LevitImageProcessor [[autodoc]] LevitImageProcessor - preprocess ## LevitModel [[autodoc]] LevitModel - forward ## LevitForImageClassification [[autodoc]] LevitForImageClassification - forward ## LevitForImageClassificationWithTeacher [[autodoc]] LevitForImageClassificationWithTeacher - forward " model_doc/code_llama.md," # CodeLlama ## Overview The Code Llama model was proposed in [Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) by Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve. The abstract from the paper is the following: *We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B and 34B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B and 13B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 53% and 55% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.* Check out all Code Llama model checkpoints [here](https://huggingface.co/models?search=code_llama) and the officially released ones in the [codellama org](https://huggingface.co/codellama). This model was contributed by [ArthurZucker](https://huggingface.co/ArthurZ). The original code of the authors can be found [here](https://github.com/facebookresearch/llama). ## Usage tips and examples The `Llama2` family models, on which Code Llama is based, were trained using `bfloat16`, but the original inference uses `float16`. Let's look at the different precisions: * `float32`: PyTorch convention on model initialization is to load models in `float32`, no matter with which `dtype` the model weights were stored. `transformers` also follows this convention for consistency with PyTorch. This will be picked by default. If you want the `AutoModel` API to cast the load the checkpoints with the storage weights type, you must specify `torch_dtype=""auto""`, e.g. `model = AutoModelForCausalLM.from_pretrained(""path"", torch_dtype = ""auto"")`. * `bfloat16`: Code Llama was trained with this precision, so we recommend using it for further training or fine-tuning. 
* `float16`: We recommend running inference using this precision, as it's usually faster than `bfloat16`, and evaluation metrics show no discernible degradation with respect to `bfloat16`. You can also run inference using `bfloat16`, and we recommend you check inference results with both `float16` and `bfloat16` after fine-tuning. As mentioned above, the `dtype` of the storage weights is mostly irrelevant unless you are using `torch_dtype=""auto""` when initializing a model using. The reason is that the model will first be downloaded (using the `dtype` of the checkpoints online) and then will be casted to the default `dtype` of `torch` (becomes `torch.float32`). If there is a specified `torch_dtype`, it will be used instead. Tips: - The infilling task is supported out of the box. You should be using the `tokenizer.fill_token` where you want your input to be filled. - The model conversion script is the same as for the `Llama2` family: Here is a sample usage: ```bash python src/transformers/models/llama/convert_llama_weights_to_hf.py \ --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even if the biggest versions come in several checkpoints they each contain a part of each weight of the model, so we need to load them all in RAM). After conversion, the model and tokenizer can be loaded via: thon >>> from transformers import LlamaForCausalLM, CodeLlamaTokenizer >>> tokenizer = CodeLlamaTokenizer.from_pretrained(""codellama/CodeLlama-7b-hf"") >>> model = LlamaForCausalLM.from_pretrained(""codellama/CodeLlama-7b-hf"") >>> PROMPT = '''def remove_non_ascii(s: str) -> str: """""" return result ''' >>> input_ids = tokenizer(PROMPT, return_tensors=""pt"")[""input_ids""] >>> generated_ids = model.generate(input_ids, max_new_tokens=128) >>> filling = tokenizer.batch_decode(generated_ids[:, input_ids.shape[1]:], skip_special_tokens = True)[0] >>> print(PROMPT.replace("""", filling)) def remove_non_ascii(s: str) -> str: """""" Remove non-ASCII characters from a string. Args: s: The string to remove non-ASCII characters from. Returns: The string with non-ASCII characters removed. """""" result = """" for c in s: if ord(c) < 128: result += c return result If you only want the infilled part: thon >>> from transformers import pipeline >>> import torch >>> generator = pipeline(""text-generation"",model=""codellama/CodeLlama-7b-hf"",torch_dtype=torch.float16, device_map=""auto"") >>> generator('def remove_non_ascii(s: str) -> str:\n """""" \n return result', max_new_tokens = 128, return_type = 1) Under the hood, the tokenizer [automatically splits by ``](https://huggingface.co/docs/transformers/main/model_doc/code_llama#transformers.CodeLlamaTokenizer.fill_token) to create a formatted input string that follows [the original training pattern](https://github.com/facebookresearch/codellama/blob/cb51c14ec761370ba2e2bc351374a79265d0465e/llama/generation.py#L402). This is more robust than preparing the pattern yourself: it avoids pitfalls, such as token glueing, that are very hard to debug. To see how much CPU and GPU memory you need for this model or others, try [this calculator](https://huggingface.co/spaces/hf-accelerate/model-memory-usage) which can help determine that value. The LLaMA tokenizer is a BPE model based on [sentencepiece](https://github.com/google/sentencepiece). 
One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. ""Banana""), the tokenizer does not prepend the prefix space to the string. Code Llama has the same architecture as the `Llama2` models, refer to [Llama2's documentation page](llama2) for the API reference. Find Code Llama tokenizer reference below. ## CodeLlamaTokenizer [[autodoc]] CodeLlamaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## CodeLlamaTokenizerFast [[autodoc]] CodeLlamaTokenizerFast - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - update_post_processor - save_vocabulary " model_doc/lxmert.md," # LXMERT ## Overview The LXMERT model was proposed in [LXMERT: Learning Cross-Modality Encoder Representations from Transformers](https://arxiv.org/abs/1908.07490) by Hao Tan & Mohit Bansal. It is a series of bidirectional transformer encoders (one for the vision modality, one for the language modality, and then one to fuse both modalities) pretrained using a combination of masked language modeling, visual-language text alignment, ROI-feature regression, masked visual-attribute modeling, masked visual-object modeling, and visual-question answering objectives. The pretraining consists of multiple multi-modal datasets: MSCOCO, Visual-Genome + Visual-Genome Question Answering, VQA 2.0, and GQA. The abstract from the paper is the following: *Vision-and-language reasoning requires an understanding of visual concepts, language semantics, and, most importantly, the alignment and relationships between these two modalities. We thus propose the LXMERT (Learning Cross-Modality Encoder Representations from Transformers) framework to learn these vision-and-language connections. In LXMERT, we build a large-scale Transformer model that consists of three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. Next, to endow our model with the capability of connecting vision and language semantics, we pre-train the model with large amounts of image-and-sentence pairs, via five diverse representative pretraining tasks: masked language modeling, masked object prediction (feature regression and label classification), cross-modality matching, and image question answering. These tasks help in learning both intra-modality and cross-modality relationships. After fine-tuning from our pretrained parameters, our model achieves the state-of-the-art results on two visual question answering datasets (i.e., VQA and GQA). We also show the generalizability of our pretrained cross-modality model by adapting it to a challenging visual-reasoning task, NLVR, and improve the previous best result by 22% absolute (54% to 76%). Lastly, we demonstrate detailed ablation studies to prove that both our novel model components and pretraining strategies significantly contribute to our strong results; and also present several attention visualizations for the different encoders* This model was contributed by [eltoto1219](https://huggingface.co/eltoto1219). The original code can be found [here](https://github.com/airsplay/lxmert). ## Usage tips - Bounding boxes are not necessary to be used in the visual feature embeddings, any kind of visual-spacial features will work. 
- Both the language hidden states and the visual hidden states that LXMERT outputs are passed through the cross-modality layer, so they contain information from both modalities. To access a modality that only attends to itself, select the vision/language hidden states from the first input in the tuple. - The bidirectional cross-modality encoder attention only returns attention values when the language modality is used as the input and the vision modality is used as the context vector. Further, while the cross-modality encoder contains self-attention for each respective modality and cross-attention, only the cross attention is returned and both self attention outputs are disregarded. ## Resources - [Question answering task guide](../tasks/question_answering) ## LxmertConfig [[autodoc]] LxmertConfig ## LxmertTokenizer [[autodoc]] LxmertTokenizer ## LxmertTokenizerFast [[autodoc]] LxmertTokenizerFast ## Lxmert specific outputs [[autodoc]] models.lxmert.modeling_lxmert.LxmertModelOutput [[autodoc]] models.lxmert.modeling_lxmert.LxmertForPreTrainingOutput [[autodoc]] models.lxmert.modeling_lxmert.LxmertForQuestionAnsweringOutput [[autodoc]] models.lxmert.modeling_tf_lxmert.TFLxmertModelOutput [[autodoc]] models.lxmert.modeling_tf_lxmert.TFLxmertForPreTrainingOutput ## LxmertModel [[autodoc]] LxmertModel - forward ## LxmertForPreTraining [[autodoc]] LxmertForPreTraining - forward ## LxmertForQuestionAnswering [[autodoc]] LxmertForQuestionAnswering - forward ## TFLxmertModel [[autodoc]] TFLxmertModel - call ## TFLxmertForPreTraining [[autodoc]] TFLxmertForPreTraining - call " model_doc/convnext.md," # ConvNeXT ## Overview The ConvNeXT model was proposed in [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The abstract from the paper is the following: *The ""Roaring 20s"" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model. A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation. It is the hierarchical Transformers (e.g., Swin Transformers) that reintroduced several ConvNet priors, making Transformers practically viable as a generic vision backbone and demonstrating remarkable performance on a wide variety of vision tasks. However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of Transformers, rather than the inherent inductive biases of convolutions. In this work, we reexamine the design spaces and test the limits of what a pure ConvNet can achieve. We gradually ""modernize"" a standard ResNet toward the design of a vision Transformer, and discover several key components that contribute to the performance difference along the way. The outcome of this exploration is a family of pure ConvNet models dubbed ConvNeXt. Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably with Transformers in terms of accuracy and scalability, achieving 87.8% ImageNet top-1 accuracy and outperforming Swin Transformers on COCO detection and ADE20K segmentation, while maintaining the simplicity and efficiency of standard ConvNets.* ConvNeXT architecture. Taken from the original paper. 
This model was contributed by [nielsr](https://huggingface.co/nielsr). TensorFlow version of the model was contributed by [ariG23498](https://github.com/ariG23498), [gante](https://github.com/gante), and [sayakpaul](https://github.com/sayakpaul) (equal contribution). The original code can be found [here](https://github.com/facebookresearch/ConvNeXt). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ConvNeXT. - [`ConvNextForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## ConvNextConfig [[autodoc]] ConvNextConfig ## ConvNextFeatureExtractor [[autodoc]] ConvNextFeatureExtractor ## ConvNextImageProcessor [[autodoc]] ConvNextImageProcessor - preprocess ## ConvNextModel [[autodoc]] ConvNextModel - forward ## ConvNextForImageClassification [[autodoc]] ConvNextForImageClassification - forward ## TFConvNextModel [[autodoc]] TFConvNextModel - call ## TFConvNextForImageClassification [[autodoc]] TFConvNextForImageClassification - call " model_doc/whisper.md," # Whisper ## Overview The Whisper model was proposed in [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever. The abstract from the paper is the following: *We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zeroshot transfer setting without the need for any finetuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.* This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ). The Tensorflow version of this model was contributed by [amyeroberts](https://huggingface.co/amyeroberts). The original code can be found [here](https://github.com/openai/whisper). ## Usage tips - The model usually performs well without requiring any finetuning. - The architecture follows a classic encoder-decoder architecture, which means that it relies on the [`~generation.GenerationMixin.generate`] function for inference. - Inference is currently only implemented for short-form i.e. audio is pre-segmented into <=30s segments. Long-form (including timestamps) will be implemented in a future release. - One can use [`WhisperProcessor`] to prepare audio for the model, and decode the predicted ID's back into text. 
- To convert the model and the processor, we recommend using the following: ```bash python src/transformers/models/whisper/convert_openai_to_hf.py --checkpoint_path """" --pytorch_dump_folder_path ""Arthur/whisper-3"" --convert_preprocessor True The script will automatically determine all necessary parameters from the OpenAI checkpoint. A `tiktoken` library needs to be installed to perform the conversion of the OpenAI tokenizer to the `tokenizers` version. ## Inference Here is a step-by-step guide to transcribing an audio sample using a pre-trained Whisper model: thon >>> from datasets import load_dataset >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> # Select an audio file and read it: >>> ds = load_dataset(""hf-internal-testing/librispeech_asr_dummy"", ""clean"", split=""validation"") >>> audio_sample = ds[0][""audio""] >>> waveform = audio_sample[""array""] >>> sampling_rate = audio_sample[""sampling_rate""] >>> # Load the Whisper model in Hugging Face format: >>> processor = WhisperProcessor.from_pretrained(""openai/whisper-tiny.en"") >>> model = WhisperForConditionalGeneration.from_pretrained(""openai/whisper-tiny.en"") >>> # Use the model and processor to transcribe the audio: >>> input_features = processor( waveform, sampling_rate=sampling_rate, return_tensors=""pt"" ).input_features >>> # Generate token ids >>> predicted_ids = model.generate(input_features) >>> # Decode token ids to text >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) >>> transcription[0] ' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.' ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Whisper. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - A fork with a script to [convert a Whisper model in Hugging Face format to OpenAI format](https://github.com/zuazo-forks/transformers/blob/convert_hf_to_openai/src/transformers/models/whisper/convert_hf_to_openai.py). 
🌎 Usage example: ```bash pip install -U openai-whisper python convert_hf_to_openai.py \ --checkpoint openai/whisper-tiny \ --whisper_dump_path whisper-tiny-openai.pt ## WhisperConfig [[autodoc]] WhisperConfig ## WhisperTokenizer [[autodoc]] WhisperTokenizer - set_prefix_tokens - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary - batch_decode - decode ## WhisperTokenizerFast [[autodoc]] WhisperTokenizerFast - set_prefix_tokens - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary - batch_decode - decode ## WhisperFeatureExtractor [[autodoc]] WhisperFeatureExtractor - __call__ ## WhisperProcessor [[autodoc]] WhisperProcessor - __call__ - from_pretrained - save_pretrained - batch_decode - decode ## WhisperModel [[autodoc]] WhisperModel - forward - _mask_input_features ## WhisperForConditionalGeneration [[autodoc]] WhisperForConditionalGeneration - forward - generate ## WhisperForCausalLM [[autodoc]] WhisperForCausalLM - forward ## WhisperForAudioClassification [[autodoc]] WhisperForAudioClassification - forward ## TFWhisperModel [[autodoc]] TFWhisperModel - call ## TFWhisperForConditionalGeneration [[autodoc]] TFWhisperForConditionalGeneration - call ## FlaxWhisperModel [[autodoc]] FlaxWhisperModel - __call__ ## FlaxWhisperForConditionalGeneration [[autodoc]] FlaxWhisperForConditionalGeneration - __call__ ## FlaxWhisperForAudioClassification [[autodoc]] FlaxWhisperForAudioClassification - __call__ " model_doc/sew.md," # SEW ## Overview SEW (Squeezed and Efficient Wav2Vec) was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. The abstract from the paper is the following: *This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.* This model was contributed by [anton-l](https://huggingface.co/anton-l). ## Usage tips - SEW is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. - SEWForCTC is fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`]. 
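The feature extraction, the `SEWForCTC` forward pass and the CTC decoding described above can also be wrapped in a single `pipeline` call, as sketched below. The checkpoint name is an assumption; substitute any SEW checkpoint fine-tuned with CTC.

```python
from datasets import load_dataset
from transformers import pipeline

# checkpoint name is an assumption; any CTC fine-tuned SEW checkpoint works
asr = pipeline("automatic-speech-recognition", model="asapp/sew-tiny-100k-ft-ls100h")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
# the pipeline handles feature extraction, the forward pass and CTC decoding internally
print(asr(ds[0]["audio"]["array"])["text"])
```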
## Resources - [Audio classification task guide](../tasks/audio_classification) - [Automatic speech recognition task guide](../tasks/asr) ## SEWConfig [[autodoc]] SEWConfig ## SEWModel [[autodoc]] SEWModel - forward ## SEWForCTC [[autodoc]] SEWForCTC - forward ## SEWForSequenceClassification [[autodoc]] SEWForSequenceClassification - forward " model_doc/gpt2.md," # OpenAI GPT2 ## Overview OpenAI GPT-2 model was proposed in [Language Models are Unsupervised Multitask Learners](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever from [OpenAI](https://huggingface.co/openai). It's a causal (unidirectional) transformer pretrained using language modeling on a very large corpus of ~40 GB of text data. The abstract from the paper is the following: *GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset[1] of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data.* [Write With Transformer](https://transformer.huggingface.co/doc/gpt2-large) is a webapp created and hosted by Hugging Face showcasing the generative capabilities of several models. GPT-2 is one of them and is available in five different sizes: small, medium, large, xl and a distilled version of the small checkpoint: *distilgpt-2*. This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://openai.com/blog/better-language-models/). ## Usage tips - GPT-2 is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - GPT-2 was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows GPT-2 to generate syntactically coherent text as it can be observed in the *run_generation.py* example script. - The model can take the *past_key_values* (for PyTorch) or *past* (for TF) as input, which is the previously computed key/value attention pairs. Using this (*past_key_values* or *past*) value prevents the model from re-computing pre-computed values in the context of text generation. For PyTorch, see *past_key_values* argument of the [`GPT2Model.forward`] method, or for TF the *past* argument of the [`TFGPT2Model.call`] method for more information on its usage. - Enabling the *scale_attn_by_inverse_layer_idx* and *reorder_and_upcast_attn* flags will apply the training stability improvements from [Mistral](https://github.com/stanford-crfm/mistral/) (for PyTorch only). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GPT2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - A blog on how to [Finetune a non-English GPT-2 Model with Hugging Face](https://www.philschmid.de/fine-tune-a-non-english-gpt-2-model-with-huggingface). 
- A blog on [How to generate text: using different decoding methods for language generation with Transformers](https://huggingface.co/blog/how-to-generate) with GPT-2. - A blog on [Training CodeParrot 🦜 from Scratch](https://huggingface.co/blog/codeparrot), a large GPT-2 model. - A blog on [Faster Text Generation with TensorFlow and XLA](https://huggingface.co/blog/tf-xla-generate) with GPT-2. - A blog on [How to train a Language Model with Megatron-LM](https://huggingface.co/blog/megatron-training) with a GPT-2 model. - A notebook on how to [finetune GPT2 to generate lyrics in the style of your favorite artist](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb). 🌎 - A notebook on how to [finetune GPT2 to generate tweets in the style of your favorite Twitter user](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb). 🌎 - [Causal language modeling](https://huggingface.co/course/en/chapter7/6?fw=pt#training-a-causal-language-model-from-scratch) chapter of the 🤗 Hugging Face Course. - [`GPT2LMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling), [text generation example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation), and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFGPT2LMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_clmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - [`FlaxGPT2LMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#causal-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/causal_language_modeling_flax.ipynb). 
- [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Causal language modeling task guide](../tasks/language_modeling) ## GPT2Config [[autodoc]] GPT2Config ## GPT2Tokenizer [[autodoc]] GPT2Tokenizer - save_vocabulary ## GPT2TokenizerFast [[autodoc]] GPT2TokenizerFast ## GPT2 specific outputs [[autodoc]] models.gpt2.modeling_gpt2.GPT2DoubleHeadsModelOutput [[autodoc]] models.gpt2.modeling_tf_gpt2.TFGPT2DoubleHeadsModelOutput ## GPT2Model [[autodoc]] GPT2Model - forward ## GPT2LMHeadModel [[autodoc]] GPT2LMHeadModel - forward ## GPT2DoubleHeadsModel [[autodoc]] GPT2DoubleHeadsModel - forward ## GPT2ForQuestionAnswering [[autodoc]] GPT2ForQuestionAnswering - forward ## GPT2ForSequenceClassification [[autodoc]] GPT2ForSequenceClassification - forward ## GPT2ForTokenClassification [[autodoc]] GPT2ForTokenClassification - forward ## TFGPT2Model [[autodoc]] TFGPT2Model - call ## TFGPT2LMHeadModel [[autodoc]] TFGPT2LMHeadModel - call ## TFGPT2DoubleHeadsModel [[autodoc]] TFGPT2DoubleHeadsModel - call ## TFGPT2ForSequenceClassification [[autodoc]] TFGPT2ForSequenceClassification - call ## TFSequenceClassifierOutputWithPast [[autodoc]] modeling_tf_outputs.TFSequenceClassifierOutputWithPast ## TFGPT2Tokenizer [[autodoc]] TFGPT2Tokenizer ## FlaxGPT2Model [[autodoc]] FlaxGPT2Model - __call__ ## FlaxGPT2LMHeadModel [[autodoc]] FlaxGPT2LMHeadModel - __call__ " model_doc/speech-encoder-decoder.md," # Speech Encoder Decoder Models The [`SpeechEncoderDecoderModel`] can be used to initialize a speech-to-text model with any pretrained speech autoencoding model as the encoder (*e.g.* [Wav2Vec2](wav2vec2), [Hubert](hubert)) and any pretrained autoregressive model as the decoder. The effectiveness of initializing speech-sequence-to-text-sequence models with pretrained checkpoints for speech recognition and speech translation has *e.g.* been shown in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. An example of how to use a [`SpeechEncoderDecoderModel`] for inference can be seen in [Speech2Text2](speech_to_text_2). ## Randomly initializing `SpeechEncoderDecoderModel` from model configurations. [`SpeechEncoderDecoderModel`] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [`Wav2Vec2Model`] configuration for the encoder and the default [`BertForCausalLM`] configuration for the decoder. thon >>> from transformers import BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig, SpeechEncoderDecoderModel >>> config_encoder = Wav2Vec2Config() >>> config_decoder = BertConfig() >>> config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder) >>> model = SpeechEncoderDecoderModel(config=config) ## Initialising `SpeechEncoderDecoderModel` from a pretrained encoder and a pretrained decoder. [`SpeechEncoderDecoderModel`] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based speech model, *e.g.* [Wav2Vec2](wav2vec2), [Hubert](hubert) can serve as the encoder and both pretrained auto-encoding models, *e.g.* BERT, pretrained causal language models, *e.g.* GPT2, as well as the pretrained decoder part of sequence-to-sequence models, *e.g.* decoder of BART, can be used as the decoder. 
Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized. Initializing [`SpeechEncoderDecoderModel`] from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in [the *Warm-starting-encoder-decoder blog post*](https://huggingface.co/blog/warm-starting-encoder-decoder). To do so, the `SpeechEncoderDecoderModel` class provides a [`SpeechEncoderDecoderModel.from_encoder_decoder_pretrained`] method. thon >>> from transformers import SpeechEncoderDecoderModel >>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained( ""facebook/hubert-large-ll60k"", ""bert-base-uncased"" ) ## Loading an existing `SpeechEncoderDecoderModel` checkpoint and perform inference. To load fine-tuned checkpoints of the `SpeechEncoderDecoderModel` class, [`SpeechEncoderDecoderModel`] provides the `from_pretrained()` method just like any other model architecture in Transformers. To perform inference, one uses the [`generate`] method, which allows to autoregressively generate text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling. thon >>> from transformers import Wav2Vec2Processor, SpeechEncoderDecoderModel >>> from datasets import load_dataset >>> import torch >>> # load a fine-tuned speech translation model and corresponding processor >>> model = SpeechEncoderDecoderModel.from_pretrained(""facebook/wav2vec2-xls-r-300m-en-to-15"") >>> processor = Wav2Vec2Processor.from_pretrained(""facebook/wav2vec2-xls-r-300m-en-to-15"") >>> # let's perform inference on a piece of English speech (which we'll translate to German) >>> ds = load_dataset(""hf-internal-testing/librispeech_asr_dummy"", ""clean"", split=""validation"") >>> input_values = processor(ds[0][""audio""][""array""], return_tensors=""pt"").input_values >>> # autoregressively generate transcription (uses greedy decoding by default) >>> generated_ids = model.generate(input_values) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] >>> print(generated_text) Mr. Quilter ist der Apostel der Mittelschicht und wir freuen uns, sein Evangelium willkommen heißen zu können. ## Training Once the model is created, it can be fine-tuned similar to BART, T5 or any other encoder-decoder model on a dataset of (speech, text) pairs. As you can see, only 2 inputs are required for the model in order to compute a loss: `input_values` (which are the speech inputs) and `labels` (which are the `input_ids` of the encoded target sequence). 
thon >>> from transformers import AutoTokenizer, AutoFeatureExtractor, SpeechEncoderDecoderModel >>> from datasets import load_dataset >>> encoder_id = ""facebook/wav2vec2-base-960h"" # acoustic model encoder >>> decoder_id = ""bert-base-uncased"" # text decoder >>> feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id) >>> tokenizer = AutoTokenizer.from_pretrained(decoder_id) >>> # Combine pre-trained encoder and pre-trained decoder to form a Seq2Seq model >>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id) >>> model.config.decoder_start_token_id = tokenizer.cls_token_id >>> model.config.pad_token_id = tokenizer.pad_token_id >>> # load an audio input and pre-process (normalise mean/std to 0/1) >>> ds = load_dataset(""hf-internal-testing/librispeech_asr_dummy"", ""clean"", split=""validation"") >>> input_values = feature_extractor(ds[0][""audio""][""array""], return_tensors=""pt"").input_values >>> # load its corresponding transcription and tokenize to generate labels >>> labels = tokenizer(ds[0][""text""], return_tensors=""pt"").input_ids >>> # the forward function automatically creates the correct decoder_input_ids >>> loss = model(input_values=input_values, labels=labels).loss >>> loss.backward() ## SpeechEncoderDecoderConfig [[autodoc]] SpeechEncoderDecoderConfig ## SpeechEncoderDecoderModel [[autodoc]] SpeechEncoderDecoderModel - forward - from_encoder_decoder_pretrained ## FlaxSpeechEncoderDecoderModel [[autodoc]] FlaxSpeechEncoderDecoderModel - __call__ - from_encoder_decoder_pretrained " model_doc/llama2.md," # Llama2 ## Overview The Llama2 model was proposed in [LLaMA: Open Foundation and Fine-Tuned Chat Models](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/) by Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushka rMishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing EllenTan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom. It is a collection of foundation language models ranging from 7B to 70B parameters, with checkpoints finetuned for chat application! The abstract from the paper is the following: *In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. 
We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.* Check out all Llama2 model checkpoints [here](https://huggingface.co/models?search=llama2). This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ) with contributions from [Lysandre Debut](https://huggingface.co/lysandre). The Hugging Face implementation is based on GPT-NeoX, found [here](https://github.com/EleutherAI/gpt-neox). The original code of the authors can be found [here](https://github.com/facebookresearch/llama).

## Usage tips The `Llama2` models were trained using `bfloat16`, but the original inference uses `float16`. The checkpoints uploaded on the Hub use `torch_dtype = 'float16'`, which will be used by the `AutoModel` API to cast the checkpoints from `torch.float32` to `torch.float16`. The `dtype` of the online weights is mostly irrelevant unless you are using `torch_dtype="auto"` when initializing a model with `model = AutoModelForCausalLM.from_pretrained("path", torch_dtype="auto")`. The reason is that the model will first be downloaded (using the `dtype` of the checkpoints online), then cast to the default `dtype` of `torch` (`torch.float32`), and finally, if a `torch_dtype` is provided in the config, that dtype will be used. Training the model in `float16` is not recommended and is known to produce `nan`; as such, the model should be trained in `bfloat16`.

Tips: - Weights for the Llama2 models can be obtained by filling out [this form](https://ai.meta.com/resources/models-and-libraries/llama-downloads/). - The architecture is very similar to the first Llama, with the addition of Grouped Query Attention (GQA) following this [paper](https://arxiv.org/pdf/2305.13245.pdf). - Setting `config.pretraining_tp` to a value other than 1 will activate the more accurate but slower computation of the linear layers, which should better match the original logits. - The original model uses `pad_id = -1`, which means that there is no padding token. We can't use the same logic, so make sure to add a padding token using `tokenizer.add_special_tokens({"pad_token": "<pad>"})` and resize the token embeddings accordingly. You should also set `model.config.pad_token_id`. The `embed_tokens` layer of the model is initialized with `self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.config.padding_idx)`, which makes sure that encoding the padding token outputs zeros, so passing it when initializing is recommended. - After filling out the form and gaining access to the model checkpoints, you should be able to use the already converted checkpoints. Otherwise, if you are converting your own model, feel free to use the [conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py).
The script can be called with the following (example) command:

```bash
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
```

- After conversion, the model and tokenizer can be loaded via:

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("/output/path")
model = LlamaForCausalLM.from_pretrained("/output/path")
```

Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even if the biggest versions come in several checkpoints, each contains a part of each weight of the model, so we need to load them all in RAM). For the 70B model, this amounts to roughly 140GB of RAM. - The LLaMA tokenizer is a BPE model based on [sentencepiece](https://github.com/google/sentencepiece). One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. "Banana"), the tokenizer does not prepend the prefix space to the string.

## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LLaMA2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - [Llama 2 is here - get it on Hugging Face](https://huggingface.co/blog/llama2), a blog post about Llama 2 and how to use it with 🤗 Transformers and 🤗 PEFT. - [LLaMA 2 - Every Resource you need](https://www.philschmid.de/llama-2), a compilation of relevant resources to learn about LLaMA 2 and how to get started quickly. - A [notebook](https://colab.research.google.com/drive/1PEQyJO1-f6j0S_XJ8DV50NkpzasXkrzd?usp=sharing) on how to fine-tune Llama 2 in Google Colab using QLoRA and 4-bit precision. 🌎 - A [notebook](https://colab.research.google.com/drive/134o_cXcMe_lsvl15ZE_4Y75Kstepsntu?usp=sharing) on how to fine-tune the "Llama-v2-7b-guanaco" model with 4-bit QLoRA and generate Q&A datasets from PDFs. 🌎 - A [notebook](https://colab.research.google.com/drive/1ggaa2oRFphdBmqIjSEbnb_HGkcIRC2ZB?usp=sharing) on how to fine-tune the Llama 2 model with QLoRA, TRL, and a Korean text classification dataset. 🌎🇰🇷 ⚗️ Optimization - [Fine-tune Llama 2 with DPO](https://huggingface.co/blog/dpo-trl), a guide to using the TRL library's DPO method to fine-tune Llama 2 on a specific dataset. - [Extended Guide: Instruction-tune Llama 2](https://www.philschmid.de/instruction-tune-llama-2), a guide to training Llama 2 to generate instructions from inputs, transforming the model from instruction-following to instruction-giving. - A [notebook](https://colab.research.google.com/drive/1SYpgFpcmtIUzdE7pxqknrM4ArCASfkFQ?usp=sharing) on how to fine-tune the Llama 2 model on a personal computer using QLoRA and TRL. 🌎 ⚡️ Inference - A [notebook](https://colab.research.google.com/drive/1TC56ArKerXUpbgRy5vM3woRsbTEVNq7h?usp=sharing) on how to quantize the Llama 2 model using GPTQ from the AutoGPTQ library. 🌎 - A [notebook](https://colab.research.google.com/drive/1X1z9Q6domMKl2CnEM0QGHNwidLfR4dW2?usp=sharing) on how to run the Llama 2 Chat Model with 4-bit quantization on a local computer or Google Colab. 🌎 🚀 Deploy - [Fine-tune LLaMA 2 (7-70B) on Amazon SageMaker](https://www.philschmid.de/sagemaker-llama2-qlora), a complete guide from setup to QLoRA fine-tuning and deployment on Amazon SageMaker.
- [Deploy Llama 2 7B/13B/70B on Amazon SageMaker](https://www.philschmid.de/sagemaker-llama-llm), a guide on using Hugging Face's LLM DLC container for secure and scalable deployment. ## LlamaConfig [[autodoc]] LlamaConfig ## LlamaTokenizer [[autodoc]] LlamaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## LlamaTokenizerFast [[autodoc]] LlamaTokenizerFast - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - update_post_processor - save_vocabulary ## LlamaModel [[autodoc]] LlamaModel - forward ## LlamaForCausalLM [[autodoc]] LlamaForCausalLM - forward ## LlamaForSequenceClassification [[autodoc]] LlamaForSequenceClassification - forward " model_doc/barthez.md," # BARThez ## Overview The BARThez model was proposed in [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis on 23 Oct, 2020. The abstract of the paper: *Inductive transfer learning, enabled by self-supervised learning, have taken the entire Natural Language Processing (NLP) field by storm, with models such as BERT and BART setting new state of the art on countless natural language understanding tasks. While there are some notable exceptions, most of the available models and research have been conducted for the English language. In this work, we introduce BARThez, the first BART model for the French language (to the best of our knowledge). BARThez was pretrained on a very large monolingual French corpus from past research that we adapted to suit BART's perturbation schemes. Unlike already existing BERT-based French language models such as CamemBERT and FlauBERT, BARThez is particularly well-suited for generative tasks, since not only its encoder but also its decoder is pretrained. In addition to discriminative tasks from the FLUE benchmark, we evaluate BARThez on a novel summarization dataset, OrangeSum, that we release with this paper. We also continue the pretraining of an already pretrained multilingual BART on BARThez's corpus, and we show that the resulting model, which we call mBARTHez, provides a significant boost over vanilla BARThez, and is on par with or outperforms CamemBERT and FlauBERT.* This model was contributed by [moussakam](https://huggingface.co/moussakam). The Authors' code can be found [here](https://github.com/moussaKam/BARThez). BARThez implementation is the same as BART, except for tokenization. Refer to [BART documentation](bart) for information on configuration classes and their parameters. BARThez-specific tokenizers are documented below. ## Resources - BARThez can be fine-tuned on sequence-to-sequence tasks in a similar way as BART, check: [examples/pytorch/summarization/](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization/README.md). ## BarthezTokenizer [[autodoc]] BarthezTokenizer ## BarthezTokenizerFast [[autodoc]] BarthezTokenizerFast " model_doc/flava.md," # FLAVA ## Overview The FLAVA model was proposed in [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela and is accepted at CVPR 2022. The paper aims at creating a single unified foundation model which can work across vision, language as well as vision-and-language multimodal tasks. 
The abstract from the paper is the following: *State-of-the-art vision and vision-and-language models rely on large-scale visio-linguistic pretraining for obtaining good performance on a variety of downstream tasks. Generally, such models are often either cross-modal (contrastive) or multi-modal (with earlier fusion) but not both; and they often only target specific modalities or tasks. A promising direction would be to use a single holistic universal model, as a ""foundation"", that targets all modalities at once -- a true vision and language foundation model should be good at vision tasks, language tasks, and cross- and multi-modal vision and language tasks. We introduce FLAVA as such a model and demonstrate impressive performance on a wide range of 35 tasks spanning these target modalities.* This model was contributed by [aps](https://huggingface.co/aps). The original code can be found [here](https://github.com/facebookresearch/multimodal/tree/main/examples/flava). ## FlavaConfig [[autodoc]] FlavaConfig ## FlavaTextConfig [[autodoc]] FlavaTextConfig ## FlavaImageConfig [[autodoc]] FlavaImageConfig ## FlavaMultimodalConfig [[autodoc]] FlavaMultimodalConfig ## FlavaImageCodebookConfig [[autodoc]] FlavaImageCodebookConfig ## FlavaProcessor [[autodoc]] FlavaProcessor ## FlavaFeatureExtractor [[autodoc]] FlavaFeatureExtractor ## FlavaImageProcessor [[autodoc]] FlavaImageProcessor - preprocess ## FlavaForPreTraining [[autodoc]] FlavaForPreTraining - forward ## FlavaModel [[autodoc]] FlavaModel - forward - get_text_features - get_image_features ## FlavaImageCodebook [[autodoc]] FlavaImageCodebook - forward - get_codebook_indices - get_codebook_probs ## FlavaTextModel [[autodoc]] FlavaTextModel - forward ## FlavaImageModel [[autodoc]] FlavaImageModel - forward ## FlavaMultimodalModel [[autodoc]] FlavaMultimodalModel - forward " model_doc/pegasus_x.md," # PEGASUS-X ## Overview The PEGASUS-X model was proposed in [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao and Peter J. Liu. PEGASUS-X (PEGASUS eXtended) extends the PEGASUS models for long input summarization through additional long input pretraining and using staggered block-local attention with global tokens in the encoder. The abstract from the paper is the following: *While large pretrained Transformer models have proven highly capable at tackling natural language tasks, handling long sequence inputs continues to be a significant challenge. One such task is long input summarization, where inputs are longer than the maximum input context of most pretrained models. Through an extensive set of experiments, we investigate what model architectural changes and pretraining paradigms can most efficiently adapt a pretrained Transformer for long input summarization. We find that a staggered, block-local Transformer with global encoder tokens strikes a good balance of performance and efficiency, and that an additional pretraining phase on long sequences meaningfully improves downstream summarization performance. Based on our findings, we introduce PEGASUS-X, an extension of the PEGASUS model with additional long input pretraining to handle inputs of up to 16K tokens. 
PEGASUS-X achieves strong performance on long input summarization tasks comparable with much larger models while adding few additional parameters and not requiring model parallelism to train.* This model was contributed by [zphang]( PEGASUS-X uses the same tokenizer as [PEGASUS](pegasus). ## PegasusXConfig [[autodoc]] PegasusXConfig ## PegasusXModel [[autodoc]] PegasusXModel - forward ## PegasusXForConditionalGeneration [[autodoc]] PegasusXForConditionalGeneration - forward " model_doc/markuplm.md," # MarkupLM ## Overview The MarkupLM model was proposed in [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei. MarkupLM is BERT, but applied to HTML pages instead of raw text documents. The model incorporates additional embedding layers to improve performance, similar to [LayoutLM](layoutlm). The model can be used for tasks like question answering on web pages or information extraction from web pages. It obtains state-of-the-art results on 2 important benchmarks: - [WebSRC](https://x-lance.github.io/WebSRC/), a dataset for Web-Based Structural Reading Comprehension (a bit like SQuAD but for web pages) - [SWDE](https://www.researchgate.net/publication/221299838_From_one_tree_to_a_forest_a_unified_solution_for_structured_web_data_extraction), a dataset for information extraction from web pages (basically named-entity recogntion on web pages) The abstract from the paper is the following: *Multimodal pre-training with text, layout, and image has made significant progress for Visually-rich Document Understanding (VrDU), especially the fixed-layout documents such as scanned document images. While, there are still a large number of digital documents where the layout information is not fixed and needs to be interactively and dynamically rendered for visualization, making existing layout-based pre-training approaches not easy to apply. In this paper, we propose MarkupLM for document understanding tasks with markup languages as the backbone such as HTML/XML-based documents, where text and markup information is jointly pre-trained. Experiment results show that the pre-trained MarkupLM significantly outperforms the existing strong baseline models on several document understanding tasks. The pre-trained model and code will be publicly available.* This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/microsoft/unilm/tree/master/markuplm). ## Usage tips - In addition to `input_ids`, [`~MarkupLMModel.forward`] expects 2 additional inputs, namely `xpath_tags_seq` and `xpath_subs_seq`. These are the XPATH tags and subscripts respectively for each token in the input sequence. - One can use [`MarkupLMProcessor`] to prepare all data for the model. Refer to the [usage guide](#usage-markuplmprocessor) for more info. MarkupLM architecture. Taken from the original paper. ## Usage: MarkupLMProcessor The easiest way to prepare data for the model is to use [`MarkupLMProcessor`], which internally combines a feature extractor ([`MarkupLMFeatureExtractor`]) and a tokenizer ([`MarkupLMTokenizer`] or [`MarkupLMTokenizerFast`]). The feature extractor is used to extract all nodes and xpaths from the HTML strings, which are then provided to the tokenizer, which turns them into the token-level inputs of the model (`input_ids` etc.). 
Note that you can still use the feature extractor and tokenizer separately, if you only want to handle one of the two tasks. thon from transformers import MarkupLMFeatureExtractor, MarkupLMTokenizerFast, MarkupLMProcessor feature_extractor = MarkupLMFeatureExtractor() tokenizer = MarkupLMTokenizerFast.from_pretrained(""microsoft/markuplm-base"") processor = MarkupLMProcessor(feature_extractor, tokenizer) In short, one can provide HTML strings (and possibly additional data) to [`MarkupLMProcessor`], and it will create the inputs expected by the model. Internally, the processor first uses [`MarkupLMFeatureExtractor`] to get a list of nodes and corresponding xpaths. The nodes and xpaths are then provided to [`MarkupLMTokenizer`] or [`MarkupLMTokenizerFast`], which converts them to token-level `input_ids`, `attention_mask`, `token_type_ids`, `xpath_subs_seq`, `xpath_tags_seq`. Optionally, one can provide node labels to the processor, which are turned into token-level `labels`. [`MarkupLMFeatureExtractor`] uses [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/), a Python library for pulling data out of HTML and XML files, under the hood. Note that you can still use your own parsing solution of choice, and provide the nodes and xpaths yourself to [`MarkupLMTokenizer`] or [`MarkupLMTokenizerFast`]. In total, there are 5 use cases that are supported by the processor. Below, we list them all. Note that each of these use cases work for both batched and non-batched inputs (we illustrate them for non-batched inputs). **Use case 1: web page classification (training, inference) + token classification (inference), parse_html = True** This is the simplest case, in which the processor will use the feature extractor to get all nodes and xpaths from the HTML. thon >>> from transformers import MarkupLMProcessor >>> processor = MarkupLMProcessor.from_pretrained(""microsoft/markuplm-base"") >>> html_string = """""" html Hello world Welcome Here is my website. """""" >>> # note that you can also add provide all tokenizer parameters here such as padding, truncation >>> encoding = processor(html_string, return_tensors=""pt"") >>> print(encoding.keys()) dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq']) **Use case 2: web page classification (training, inference) + token classification (inference), parse_html=False** In case one already has obtained all nodes and xpaths, one doesn't need the feature extractor. In that case, one should provide the nodes and corresponding xpaths themselves to the processor, and make sure to set `parse_html` to `False`. thon >>> from transformers import MarkupLMProcessor >>> processor = MarkupLMProcessor.from_pretrained(""microsoft/markuplm-base"") >>> processor.parse_html = False >>> nodes = [""hello"", ""world"", ""how"", ""are""] >>> xpaths = [""/html/body/div/li[1]/div/span"", ""/html/body/div/li[1]/div/span"", ""html/body"", ""html/body/div""] >>> encoding = processor(nodes=nodes, xpaths=xpaths, return_tensors=""pt"") >>> print(encoding.keys()) dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq']) **Use case 3: token classification (training), parse_html=False** For token classification tasks (such as [SWDE](https://paperswithcode.com/dataset/swde)), one can also provide the corresponding node labels in order to train a model. The processor will then convert these into token-level `labels`. 
By default, it will only label the first wordpiece of a word, and label the remaining wordpieces with -100, which is the `ignore_index` of PyTorch's CrossEntropyLoss. In case you want all wordpieces of a word to be labeled, you can initialize the tokenizer with `only_label_first_subword` set to `False`. thon >>> from transformers import MarkupLMProcessor >>> processor = MarkupLMProcessor.from_pretrained(""microsoft/markuplm-base"") >>> processor.parse_html = False >>> nodes = [""hello"", ""world"", ""how"", ""are""] >>> xpaths = [""/html/body/div/li[1]/div/span"", ""/html/body/div/li[1]/div/span"", ""html/body"", ""html/body/div""] >>> node_labels = [1, 2, 2, 1] >>> encoding = processor(nodes=nodes, xpaths=xpaths, node_labels=node_labels, return_tensors=""pt"") >>> print(encoding.keys()) dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq', 'labels']) **Use case 4: web page question answering (inference), parse_html=True** For question answering tasks on web pages, you can provide a question to the processor. By default, the processor will use the feature extractor to get all nodes and xpaths, and create [CLS] question tokens [SEP] word tokens [SEP]. thon >>> from transformers import MarkupLMProcessor >>> processor = MarkupLMProcessor.from_pretrained(""microsoft/markuplm-base"") >>> html_string = """""" html Hello world Welcome My name is Niels. """""" >>> question = ""What's his name?"" >>> encoding = processor(html_string, questions=question, return_tensors=""pt"") >>> print(encoding.keys()) dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq']) **Use case 5: web page question answering (inference), parse_html=False** For question answering tasks (such as WebSRC), you can provide a question to the processor. If you have extracted all nodes and xpaths yourself, you can provide them directly to the processor. Make sure to set `parse_html` to `False`. 
thon >>> from transformers import MarkupLMProcessor >>> processor = MarkupLMProcessor.from_pretrained(""microsoft/markuplm-base"") >>> processor.parse_html = False >>> nodes = [""hello"", ""world"", ""how"", ""are""] >>> xpaths = [""/html/body/div/li[1]/div/span"", ""/html/body/div/li[1]/div/span"", ""html/body"", ""html/body/div""] >>> question = ""What's his name?"" >>> encoding = processor(nodes=nodes, xpaths=xpaths, questions=question, return_tensors=""pt"") >>> print(encoding.keys()) dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq']) ## Resources - [Demo notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/MarkupLM) - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) ## MarkupLMConfig [[autodoc]] MarkupLMConfig - all ## MarkupLMFeatureExtractor [[autodoc]] MarkupLMFeatureExtractor - __call__ ## MarkupLMTokenizer [[autodoc]] MarkupLMTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## MarkupLMTokenizerFast [[autodoc]] MarkupLMTokenizerFast - all ## MarkupLMProcessor [[autodoc]] MarkupLMProcessor - __call__ ## MarkupLMModel [[autodoc]] MarkupLMModel - forward ## MarkupLMForSequenceClassification [[autodoc]] MarkupLMForSequenceClassification - forward ## MarkupLMForTokenClassification [[autodoc]] MarkupLMForTokenClassification - forward ## MarkupLMForQuestionAnswering [[autodoc]] MarkupLMForQuestionAnswering - forward " model_doc/vivit.md," # Video Vision Transformer (ViViT) ## Overview The Vivit model was proposed in [ViViT: A Video Vision Transformer](https://arxiv.org/abs/2103.15691) by Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, Cordelia Schmid. The paper proposes one of the first successful pure-transformer based set of models for video understanding. The abstract from the paper is the following: *We present pure-transformer based models for video classification, drawing upon the recent success of such models in image classification. Our model extracts spatio-temporal tokens from the input video, which are then encoded by a series of transformer layers. In order to handle the long sequences of tokens encountered in video, we propose several, efficient variants of our model which factorise the spatial- and temporal-dimensions of the input. Although transformer-based models are known to only be effective when large training datasets are available, we show how we can effectively regularise the model during training and leverage pretrained image models to be able to train on comparatively small datasets. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple video classification benchmarks including Kinetics 400 and 600, Epic Kitchens, Something-Something v2 and Moments in Time, outperforming prior methods based on deep 3D convolutional networks.* This model was contributed by [jegormeister](https://huggingface.co/jegormeister). The original code (written in JAX) can be found [here](https://github.com/google-research/scenic/tree/main/scenic/projects/vivit). 
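As a minimal sketch of how the classes documented below fit together (the `google/vivit-b-16x2-kinetics400` checkpoint and the 32-frame clip length are assumptions for illustration; random frames stand in for a decoded video):

```python
import numpy as np
import torch
from transformers import VivitImageProcessor, VivitForVideoClassification

# NOTE: checkpoint name assumed for illustration (a Kinetics-400 fine-tuned ViViT).
checkpoint = "google/vivit-b-16x2-kinetics400"
processor = VivitImageProcessor.from_pretrained(checkpoint)
model = VivitForVideoClassification.from_pretrained(checkpoint)

# ViViT consumes a clip of frames; here 32 random RGB frames stand in for a real decoded video.
video = list(np.random.randint(0, 256, size=(32, 224, 224, 3), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")  # pixel_values of shape (1, 32, 3, 224, 224)

with torch.no_grad():
    logits = model(**inputs).logits

# The predicted label is meaningless for random input; with a real clip it maps to a Kinetics class.
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```

In practice the random frames would be replaced by frames sampled from a video with a decoder such as PyAV or decord.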
## VivitConfig [[autodoc]] VivitConfig ## VivitImageProcessor [[autodoc]] VivitImageProcessor - preprocess ## VivitModel [[autodoc]] VivitModel - forward ## VivitForVideoClassification [[autodoc]] transformers.VivitForVideoClassification - forward " model_doc/graphormer.md," # Graphormer ## Overview The Graphormer model was proposed in [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen and Tie-Yan Liu. It is a Graph Transformer model, modified to allow computations on graphs instead of text sequences by generating embeddings and features of interest during preprocessing and collation, then using a modified attention. The abstract from the paper is the following: *The Transformer architecture has become a dominant choice in many domains, such as natural language processing and computer vision. Yet, it has not achieved competitive performance on popular leaderboards of graph-level prediction compared to mainstream GNN variants. Therefore, it remains a mystery how Transformers could perform well for graph representation learning. In this paper, we solve this mystery by presenting Graphormer, which is built upon the standard Transformer architecture, and could attain excellent results on a broad range of graph representation learning tasks, especially on the recent OGB Large-Scale Challenge. Our key insight to utilizing Transformer in the graph is the necessity of effectively encoding the structural information of a graph into the model. To this end, we propose several simple yet effective structural encoding methods to help Graphormer better model graph-structured data. Besides, we mathematically characterize the expressive power of Graphormer and exhibit that with our ways of encoding the structural information of graphs, many popular GNN variants could be covered as the special cases of Graphormer.* This model was contributed by [clefourrier](https://huggingface.co/clefourrier). The original code can be found [here](https://github.com/microsoft/Graphormer). ## Usage tips This model will not work well on large graphs (more than 100 nodes/edges), as it will make the memory explode. You can reduce the batch size, increase your RAM, or decrease the `UNREACHABLE_NODE_DISTANCE` parameter in algos_graphormer.pyx, but it will be hard to go above 700 nodes/edges. This model does not use a tokenizer, but instead a special collator during training. ## GraphormerConfig [[autodoc]] GraphormerConfig ## GraphormerModel [[autodoc]] GraphormerModel - forward ## GraphormerForGraphClassification [[autodoc]] GraphormerForGraphClassification - forward " model_doc/bert-japanese.md," # BertJapanese ## Overview The BERT models trained on Japanese text. There are models with two different tokenization methods: - Tokenize with MeCab and WordPiece. This requires some extra dependencies, [fugashi](https://github.com/polm/fugashi) which is a wrapper around [MeCab](https://taku910.github.io/mecab/). - Tokenize into characters. To use *MecabTokenizer*, you should `pip install transformers[""ja""]` (or `pip install -e .[""ja""]` if you install from source) to install dependencies. See [details on cl-tohoku repository](https://github.com/cl-tohoku/bert-japanese). 
Example of using a model with MeCab and WordPiece tokenization: thon >>> import torch >>> from transformers import AutoModel, AutoTokenizer >>> bertjapanese = AutoModel.from_pretrained(""cl-tohoku/bert-base-japanese"") >>> tokenizer = AutoTokenizer.from_pretrained(""cl-tohoku/bert-base-japanese"") >>> ## Input Japanese Text >>> line = ""吾輩は猫である。"" >>> inputs = tokenizer(line, return_tensors=""pt"") >>> print(tokenizer.decode(inputs[""input_ids""][0])) [CLS] 吾輩 は 猫 で ある 。 [SEP] >>> outputs = bertjapanese(**inputs) Example of using a model with Character tokenization: thon >>> bertjapanese = AutoModel.from_pretrained(""cl-tohoku/bert-base-japanese-char"") >>> tokenizer = AutoTokenizer.from_pretrained(""cl-tohoku/bert-base-japanese-char"") >>> ## Input Japanese Text >>> line = ""吾輩は猫である。"" >>> inputs = tokenizer(line, return_tensors=""pt"") >>> print(tokenizer.decode(inputs[""input_ids""][0])) [CLS] 吾 輩 は 猫 で あ る 。 [SEP] >>> outputs = bertjapanese(**inputs) This model was contributed by [cl-tohoku](https://huggingface.co/cl-tohoku). This implementation is the same as BERT, except for tokenization method. Refer to [BERT documentation](bert) for API reference information. ## BertJapaneseTokenizer [[autodoc]] BertJapaneseTokenizer " model_doc/instructblip.md," # InstructBLIP ## Overview The InstructBLIP model was proposed in [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, Steven Hoi. InstructBLIP leverages the [BLIP-2](blip2) architecture for visual instruction tuning. The abstract from the paper is the following: *General-purpose language models that can solve various language-domain tasks have emerged driven by the pre-training and instruction-tuning pipeline. However, building general-purpose vision-language models is challenging due to the increased task discrepancy introduced by the additional visual input. Although vision-language pre-training has been widely studied, vision-language instruction tuning remains relatively less explored. In this paper, we conduct a systematic and comprehensive study on vision-language instruction tuning based on the pre-trained BLIP-2 models. We gather a wide variety of 26 publicly available datasets, transform them into instruction tuning format and categorize them into two clusters for held-in instruction tuning and held-out zero-shot evaluation. Additionally, we introduce instruction-aware visual feature extraction, a crucial method that enables the model to extract informative features tailored to the given instruction. The resulting InstructBLIP models achieve state-of-the-art zero-shot performance across all 13 held-out datasets, substantially outperforming BLIP-2 and the larger Flamingo. Our models also lead to state-of-the-art performance when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA IMG). Furthermore, we qualitatively demonstrate the advantages of InstructBLIP over concurrent multimodal models.* InstructBLIP architecture. Taken from the original paper. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/salesforce/LAVIS/tree/main/projects/instructblip). ## Usage tips InstructBLIP uses the same architecture as [BLIP-2](blip2) with a tiny but important difference: it also feeds the text prompt (instruction) to the Q-Former. 
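A minimal inference sketch is shown below; the `Salesforce/instructblip-vicuna-7b` checkpoint and the COCO test image URL are assumptions for illustration, and since the model is large, a GPU with ample memory (or a smaller InstructBLIP checkpoint) is advisable:

```python
import requests
import torch
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

# NOTE: checkpoint name assumed for illustration; any InstructBLIP checkpoint on the Hub works.
checkpoint = "Salesforce/instructblip-vicuna-7b"
processor = InstructBlipProcessor.from_pretrained(checkpoint)
model = InstructBlipForConditionalGeneration.from_pretrained(checkpoint)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Grab a test image and ask an instruction-style question about it.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
prompt = "Describe the image in one sentence."

# The processor prepares pixel values plus two token sequences: one for the Q-Former, one for the language model.
inputs = processor(images=image, text=prompt, return_tensors="pt").to(device)

generated_ids = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```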
## InstructBlipConfig [[autodoc]] InstructBlipConfig - from_vision_qformer_text_configs ## InstructBlipVisionConfig [[autodoc]] InstructBlipVisionConfig ## InstructBlipQFormerConfig [[autodoc]] InstructBlipQFormerConfig ## InstructBlipProcessor [[autodoc]] InstructBlipProcessor ## InstructBlipVisionModel [[autodoc]] InstructBlipVisionModel - forward ## InstructBlipQFormerModel [[autodoc]] InstructBlipQFormerModel - forward ## InstructBlipForConditionalGeneration [[autodoc]] InstructBlipForConditionalGeneration - forward - generate" model_doc/auto.md," # Auto Classes In many cases, the architecture you want to use can be guessed from the name or the path of the pretrained model you are supplying to the `from_pretrained()` method. AutoClasses are here to do this job for you so that you automatically retrieve the relevant model given the name/path to the pretrained weights/config/vocabulary. Instantiating one of [`AutoConfig`], [`AutoModel`], and [`AutoTokenizer`] will directly create a class of the relevant architecture. For instance thon model = AutoModel.from_pretrained(""bert-base-cased"") will create a model that is an instance of [`BertModel`]. There is one class of `AutoModel` for each task, and for each backend (PyTorch, TensorFlow, or Flax). ## Extending the Auto Classes Each of the auto classes has a method to be extended with your custom classes. For instance, if you have defined a custom class of model `NewModel`, make sure you have a `NewModelConfig` then you can add those to the auto classes like this: thon from transformers import AutoConfig, AutoModel AutoConfig.register(""new-model"", NewModelConfig) AutoModel.register(NewModelConfig, NewModel) You will then be able to use the auto classes like you would usually do! If your `NewModelConfig` is a subclass of [`~transformer.PretrainedConfig`], make sure its `model_type` attribute is set to the same key you use when registering the config (here `""new-model""`). Likewise, if your `NewModel` is a subclass of [`PreTrainedModel`], make sure its `config_class` attribute is set to the same class you use when registering the model (here `NewModelConfig`). ## AutoConfig [[autodoc]] AutoConfig ## AutoTokenizer [[autodoc]] AutoTokenizer ## AutoFeatureExtractor [[autodoc]] AutoFeatureExtractor ## AutoImageProcessor [[autodoc]] AutoImageProcessor ## AutoProcessor [[autodoc]] AutoProcessor ## Generic model classes The following auto classes are available for instantiating a base model class without a specific head. ### AutoModel [[autodoc]] AutoModel ### TFAutoModel [[autodoc]] TFAutoModel ### FlaxAutoModel [[autodoc]] FlaxAutoModel ## Generic pretraining classes The following auto classes are available for instantiating a model with a pretraining head. ### AutoModelForPreTraining [[autodoc]] AutoModelForPreTraining ### TFAutoModelForPreTraining [[autodoc]] TFAutoModelForPreTraining ### FlaxAutoModelForPreTraining [[autodoc]] FlaxAutoModelForPreTraining ## Natural Language Processing The following auto classes are available for the following natural language processing tasks. 
### AutoModelForCausalLM [[autodoc]] AutoModelForCausalLM ### TFAutoModelForCausalLM [[autodoc]] TFAutoModelForCausalLM ### FlaxAutoModelForCausalLM [[autodoc]] FlaxAutoModelForCausalLM ### AutoModelForMaskedLM [[autodoc]] AutoModelForMaskedLM ### TFAutoModelForMaskedLM [[autodoc]] TFAutoModelForMaskedLM ### FlaxAutoModelForMaskedLM [[autodoc]] FlaxAutoModelForMaskedLM ### AutoModelForMaskGeneration [[autodoc]] AutoModelForMaskGeneration ### TFAutoModelForMaskGeneration [[autodoc]] TFAutoModelForMaskGeneration ### AutoModelForSeq2SeqLM [[autodoc]] AutoModelForSeq2SeqLM ### TFAutoModelForSeq2SeqLM [[autodoc]] TFAutoModelForSeq2SeqLM ### FlaxAutoModelForSeq2SeqLM [[autodoc]] FlaxAutoModelForSeq2SeqLM ### AutoModelForSequenceClassification [[autodoc]] AutoModelForSequenceClassification ### TFAutoModelForSequenceClassification [[autodoc]] TFAutoModelForSequenceClassification ### FlaxAutoModelForSequenceClassification [[autodoc]] FlaxAutoModelForSequenceClassification ### AutoModelForMultipleChoice [[autodoc]] AutoModelForMultipleChoice ### TFAutoModelForMultipleChoice [[autodoc]] TFAutoModelForMultipleChoice ### FlaxAutoModelForMultipleChoice [[autodoc]] FlaxAutoModelForMultipleChoice ### AutoModelForNextSentencePrediction [[autodoc]] AutoModelForNextSentencePrediction ### TFAutoModelForNextSentencePrediction [[autodoc]] TFAutoModelForNextSentencePrediction ### FlaxAutoModelForNextSentencePrediction [[autodoc]] FlaxAutoModelForNextSentencePrediction ### AutoModelForTokenClassification [[autodoc]] AutoModelForTokenClassification ### TFAutoModelForTokenClassification [[autodoc]] TFAutoModelForTokenClassification ### FlaxAutoModelForTokenClassification [[autodoc]] FlaxAutoModelForTokenClassification ### AutoModelForQuestionAnswering [[autodoc]] AutoModelForQuestionAnswering ### TFAutoModelForQuestionAnswering [[autodoc]] TFAutoModelForQuestionAnswering ### FlaxAutoModelForQuestionAnswering [[autodoc]] FlaxAutoModelForQuestionAnswering ### AutoModelForTextEncoding [[autodoc]] AutoModelForTextEncoding ### TFAutoModelForTextEncoding [[autodoc]] TFAutoModelForTextEncoding ## Computer vision The following auto classes are available for the following computer vision tasks. 
### AutoModelForDepthEstimation [[autodoc]] AutoModelForDepthEstimation ### AutoModelForImageClassification [[autodoc]] AutoModelForImageClassification ### TFAutoModelForImageClassification [[autodoc]] TFAutoModelForImageClassification ### FlaxAutoModelForImageClassification [[autodoc]] FlaxAutoModelForImageClassification ### AutoModelForVideoClassification [[autodoc]] AutoModelForVideoClassification ### AutoModelForMaskedImageModeling [[autodoc]] AutoModelForMaskedImageModeling ### TFAutoModelForMaskedImageModeling [[autodoc]] TFAutoModelForMaskedImageModeling ### AutoModelForObjectDetection [[autodoc]] AutoModelForObjectDetection ### AutoModelForImageSegmentation [[autodoc]] AutoModelForImageSegmentation ### AutoModelForImageToImage [[autodoc]] AutoModelForImageToImage ### AutoModelForSemanticSegmentation [[autodoc]] AutoModelForSemanticSegmentation ### TFAutoModelForSemanticSegmentation [[autodoc]] TFAutoModelForSemanticSegmentation ### AutoModelForInstanceSegmentation [[autodoc]] AutoModelForInstanceSegmentation ### AutoModelForUniversalSegmentation [[autodoc]] AutoModelForUniversalSegmentation ### AutoModelForZeroShotImageClassification [[autodoc]] AutoModelForZeroShotImageClassification ### TFAutoModelForZeroShotImageClassification [[autodoc]] TFAutoModelForZeroShotImageClassification ### AutoModelForZeroShotObjectDetection [[autodoc]] AutoModelForZeroShotObjectDetection ## Audio The following auto classes are available for the following audio tasks. ### AutoModelForAudioClassification [[autodoc]] AutoModelForAudioClassification ### AutoModelForAudioFrameClassification [[autodoc]] TFAutoModelForAudioClassification ### TFAutoModelForAudioFrameClassification [[autodoc]] AutoModelForAudioFrameClassification ### AutoModelForCTC [[autodoc]] AutoModelForCTC ### AutoModelForSpeechSeq2Seq [[autodoc]] AutoModelForSpeechSeq2Seq ### TFAutoModelForSpeechSeq2Seq [[autodoc]] TFAutoModelForSpeechSeq2Seq ### FlaxAutoModelForSpeechSeq2Seq [[autodoc]] FlaxAutoModelForSpeechSeq2Seq ### AutoModelForAudioXVector [[autodoc]] AutoModelForAudioXVector ### AutoModelForTextToSpectrogram [[autodoc]] AutoModelForTextToSpectrogram ### AutoModelForTextToWaveform [[autodoc]] AutoModelForTextToWaveform ## Multimodal The following auto classes are available for the following multimodal tasks. ### AutoModelForTableQuestionAnswering [[autodoc]] AutoModelForTableQuestionAnswering ### TFAutoModelForTableQuestionAnswering [[autodoc]] TFAutoModelForTableQuestionAnswering ### AutoModelForDocumentQuestionAnswering [[autodoc]] AutoModelForDocumentQuestionAnswering ### TFAutoModelForDocumentQuestionAnswering [[autodoc]] TFAutoModelForDocumentQuestionAnswering ### AutoModelForVisualQuestionAnswering [[autodoc]] AutoModelForVisualQuestionAnswering ### AutoModelForVision2Seq [[autodoc]] AutoModelForVision2Seq ### TFAutoModelForVision2Seq [[autodoc]] TFAutoModelForVision2Seq ### FlaxAutoModelForVision2Seq [[autodoc]] FlaxAutoModelForVision2Seq " model_doc/tvp.md," # TVP ## Overview The text-visual prompting (TVP) framework was proposed in the paper [Text-Visual Prompting for Efficient 2D Temporal Video Grounding](https://arxiv.org/abs/2303.04995) by Yimeng Zhang, Xin Chen, Jinghan Jia, Sijia Liu, Ke Ding. The abstract from the paper is the following: *In this paper, we study the problem of temporal video grounding (TVG), which aims to predict the starting/ending time points of moments described by a text sentence within a long untrimmed video. 
Benefiting from fine-grained 3D visual features, the TVG techniques have achieved remarkable progress in recent years. However, the high complexity of 3D convolutional neural networks (CNNs) makes extracting dense 3D visual features time-consuming, which calls for intensive memory and computing resources. Towards efficient TVG, we propose a novel text-visual prompting (TVP) framework, which incorporates optimized perturbation patterns (that we call ‘prompts’) into both visual inputs and textual features of a TVG model. In sharp contrast to 3D CNNs, we show that TVP allows us to effectively co-train vision encoder and language encoder in a 2D TVG model and improves the performance of cross-modal feature fusion using only low-complexity sparse 2D visual features. Further, we propose a Temporal-Distance IoU (TDIoU) loss for efficient learning of TVG. Experiments on two benchmark datasets, Charades-STA and ActivityNet Captions datasets, empirically show that the proposed TVP significantly boosts the performance of 2D TVG (e.g., 9.79% improvement on Charades-STA and 30.77% improvement on ActivityNet Captions) and achieves 5× inference acceleration over TVG using 3D visual features.* This research addresses temporal video grounding (TVG), which is the process of pinpointing the start and end times of specific events in a long video, as described by a text sentence. Text-visual prompting (TVP), is proposed to enhance TVG. TVP involves integrating specially designed patterns, known as 'prompts', into both the visual (image-based) and textual (word-based) input components of a TVG model. These prompts provide additional spatial-temporal context, improving the model's ability to accurately determine event timings in the video. The approach employs 2D visual inputs in place of 3D ones. Although 3D inputs offer more spatial-temporal detail, they are also more time-consuming to process. The use of 2D inputs with the prompting method aims to provide similar levels of context and accuracy more efficiently. TVP architecture. Taken from the original paper. This model was contributed by [Jiqing Feng](https://huggingface.co/Jiqing). The original code can be found [here](https://github.com/intel/TVP). ## Usage tips and examples Prompts are optimized perturbation patterns, which would be added to input video frames or text features. Universal set refers to using the same exact set of prompts for any input, this means that these prompts are added consistently to all video frames and text features, regardless of the input's content. TVP consists of a visual encoder and cross-modal encoder. A universal set of visual prompts and text prompts to be integrated into sampled video frames and textual features, respectively. Specially, a set of different visual prompts are applied to uniformly-sampled frames of one untrimmed video in order. The goal of this model is to incorporate trainable prompts into both visual inputs and textual features to temporal video grounding(TVG) problems. In principle, one can apply any visual, cross-modal encoder in the proposed architecture. The [`TvpProcessor`] wraps [`BertTokenizer`] and [`TvpImageProcessor`] into a single instance to both encode the text and prepare the images respectively. The following example shows how to run temporal video grounding using [`TvpProcessor`] and [`TvpForVideoGrounding`]. 
thon import av import cv2 import numpy as np import torch from huggingface_hub import hf_hub_download from transformers import AutoProcessor, TvpForVideoGrounding def pyav_decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps): ''' Convert the video from its original fps to the target_fps and decode the video with PyAV decoder. Args: container (container): pyav container. sampling_rate (int): frame sampling rate (interval between two sampled frames). num_frames (int): number of frames to sample. clip_idx (int): if clip_idx is -1, perform random temporal sampling. If clip_idx is larger than -1, uniformly split the video to num_clips clips, and select the clip_idx-th video clip. num_clips (int): overall number of clips to uniformly sample from the given video. target_fps (int): the input video may have different fps, convert it to the target video fps before frame sampling. Returns: frames (tensor): decoded frames from the video. Return None if the no video stream was found. fps (float): the number of frames per second of the video. ''' video = container.streams.video[0] fps = float(video.average_rate) clip_size = sampling_rate * num_frames / target_fps * fps delta = max(num_frames - clip_size, 0) start_idx = delta * clip_idx / num_clips end_idx = start_idx + clip_size - 1 timebase = video.duration / num_frames video_start_pts = int(start_idx * timebase) video_end_pts = int(end_idx * timebase) seek_offset = max(video_start_pts - 1024, 0) container.seek(seek_offset, any_frame=False, backward=True, stream=video) frames = {} for frame in container.decode(video=0): if frame.pts < video_start_pts: continue frames[frame.pts] = frame if frame.pts > video_end_pts: break frames = [frames[pts] for pts in sorted(frames)] return frames, fps def decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps): ''' Decode the video and perform temporal sampling. Args: container (container): pyav container. sampling_rate (int): frame sampling rate (interval between two sampled frames). num_frames (int): number of frames to sample. clip_idx (int): if clip_idx is -1, perform random temporal sampling. If clip_idx is larger than -1, uniformly split the video to num_clips clips, and select the clip_idx-th video clip. num_clips (int): overall number of clips to uniformly sample from the given video. target_fps (int): the input video may have different fps, convert it to the target video fps before frame sampling. Returns: frames (tensor): decoded frames from the video. 
''' assert clip_idx >= -2, ""Not a valid clip_idx {}"".format(clip_idx) frames, fps = pyav_decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps) clip_size = sampling_rate * num_frames / target_fps * fps index = np.linspace(0, clip_size - 1, num_frames) index = np.clip(index, 0, len(frames) - 1).astype(np.int64) frames = np.array([frames[idx].to_rgb().to_ndarray() for idx in index]) frames = frames.transpose(0, 3, 1, 2) return frames file = hf_hub_download(repo_id=""Intel/tvp_demo"", filename=""AK2KG.mp4"", repo_type=""dataset"") model = TvpForVideoGrounding.from_pretrained(""Intel/tvp-base"") decoder_kwargs = dict( container=av.open(file, metadata_errors=""ignore""), sampling_rate=1, num_frames=model.config.num_frames, clip_idx=0, num_clips=1, target_fps=3, ) raw_sampled_frms = decode(**decoder_kwargs) text = ""a person is sitting on a bed."" processor = AutoProcessor.from_pretrained(""Intel/tvp-base"") model_inputs = processor( text=[text], videos=list(raw_sampled_frms), return_tensors=""pt"", max_text_length=100 ) model_inputs[""pixel_values""] = model_inputs[""pixel_values""].to(model.dtype) output = model(**model_inputs) def get_video_duration(filename): cap = cv2.VideoCapture(filename) if cap.isOpened(): rate = cap.get(cv2.CAP_PROP_FPS) frame_num = cap.get(cv2.CAP_PROP_FRAME_COUNT) duration = frame_num / rate return duration return -1 duration = get_video_duration(file) start, end = processor.post_process_video_grounding(output.logits, duration) print(f""The time slot of the video corresponding to the text \""{text}\"" is from {start}s to {end}s"") Tips: - This implementation of TVP uses [`BertTokenizer`] to generate text embeddings and a ResNet-50 model to compute visual embeddings. - A checkpoint for the pre-trained [tvp-base](https://huggingface.co/Intel/tvp-base) model is released. - Please refer to [Table 2](https://arxiv.org/pdf/2303.04995.pdf) for TVP's performance on the temporal video grounding task. ## TvpConfig [[autodoc]] TvpConfig ## TvpImageProcessor [[autodoc]] TvpImageProcessor - preprocess ## TvpProcessor [[autodoc]] TvpProcessor - __call__ ## TvpModel [[autodoc]] TvpModel - forward ## TvpForVideoGrounding [[autodoc]] TvpForVideoGrounding - forward " model_doc/esm.md," # ESM ## Overview This page provides code and pre-trained weights for Transformer protein language models from Meta AI's Fundamental AI Research Team, including the state-of-the-art ESMFold and ESM-2, and the previously released ESM-1b and ESM-1v. Transformer protein language models were introduced in the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. The first version of this paper was [preprinted in 2019](https://www.biorxiv.org/content/10.1101/622803v1?versioned=true). ESM-2 outperforms all tested single-sequence protein language models across a range of structure prediction tasks, and enables atomic resolution structure prediction. It was released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido and Alexander Rives. Also introduced in this paper was ESMFold.
It uses an ESM-2 stem with a head that can predict folded protein structures with state-of-the-art accuracy. Unlike [AlphaFold2](https://www.nature.com/articles/s41586-021-03819-2), it relies on the token embeddings from the large pre-trained protein language model stem and does not perform a multiple sequence alignment (MSA) step at inference time, which means that ESMFold checkpoints are fully ""standalone"" - they do not require a database of known protein sequences and structures with associated external query tools to make predictions, and are much faster as a result. The abstract from ""Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences"" is *In the field of artificial intelligence, a combination of scale in data and model capacity enabled by unsupervised learning has led to major advances in representation learning and statistical generation. In the life sciences, the anticipated growth of sequencing promises unprecedented data on natural sequence diversity. Protein language modeling at the scale of evolution is a logical step toward predictive and generative artificial intelligence for biology. To this end, we use unsupervised learning to train a deep contextual language model on 86 billion amino acids across 250 million protein sequences spanning evolutionary diversity. The resulting model contains information about biological properties in its representations. The representations are learned from sequence data alone. The learned representation space has a multiscale organization reflecting structure from the level of biochemical properties of amino acids to remote homology of proteins. Information about secondary and tertiary structure is encoded in the representations and can be identified by linear projections. Representation learning produces features that generalize across a range of applications, enabling state-of-the-art supervised prediction of mutational effect and secondary structure and improving state-of-the-art features for long-range contact prediction.* The abstract from ""Language models of protein sequences at the scale of evolution enable accurate structure prediction"" is *Large language models have recently been shown to develop emergent capabilities with scale, going beyond simple pattern matching to perform higher level reasoning and generate lifelike images and text. While language models trained on protein sequences have been studied at a smaller scale, little is known about what they learn about biology as they are scaled up. In this work we train models up to 15 billion parameters, the largest language models of proteins to be evaluated to date. We find that as models are scaled they learn information enabling the prediction of the three-dimensional structure of a protein at the resolution of individual atoms. We present ESMFold for high accuracy end-to-end atomic level structure prediction directly from the individual sequence of a protein. ESMFold has similar accuracy to AlphaFold2 and RoseTTAFold for sequences with low perplexity that are well understood by the language model. ESMFold inference is an order of magnitude faster than AlphaFold2, enabling exploration of the structural space of metagenomic proteins in practical timescales.* The original code can be found [here](https://github.com/facebookresearch/esm) and was was developed by the Fundamental AI Research team at Meta AI. 
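Since the ESM-2 checkpoints described above are masked protein language models, a quick way to try one out is to predict a masked residue in a protein sequence. The sketch below is illustrative only: the small `facebook/esm2_t6_8M_UR50D` checkpoint and the example sequence are assumptions and can be swapped for any other ESM-2 checkpoint and sequence.

```python
from transformers import pipeline

# Assumed checkpoint for illustration; any ESM-2 masked-LM checkpoint should work the same way.
unmasker = pipeline("fill-mask", model="facebook/esm2_t6_8M_UR50D")

# Mask one residue of an (arbitrary) protein sequence; ESM uses "<mask>" as its mask token.
sequence = "MKTAYIAKQRQISF<mask>KSHFSRQLEERLGLIEVQ"
for prediction in unmasker(sequence, top_k=3):
    print(prediction["token_str"], round(prediction["score"], 3))
```

For end-to-end structure prediction, see [`EsmForProteinFolding`] in the API reference below.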
ESM-1b, ESM-1v and ESM-2 were contributed to huggingface by [jasonliu](https://huggingface.co/jasonliu) and [Matt](https://huggingface.co/Rocketknight1). ESMFold was contributed to huggingface by [Matt](https://huggingface.co/Rocketknight1) and [Sylvain](https://huggingface.co/sgugger), with a big thank you to Nikita Smetanin, Roshan Rao and Tom Sercu for their help throughout the process! ## Usage tips - ESM models are trained with a masked language modeling (MLM) objective. - The HuggingFace port of ESMFold uses portions of the [openfold](https://github.com/aqlaboratory/openfold) library. The `openfold` library is licensed under the Apache License 2.0. ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Masked language modeling task guide](../tasks/masked_language_modeling) ## EsmConfig [[autodoc]] EsmConfig - all ## EsmTokenizer [[autodoc]] EsmTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## EsmModel [[autodoc]] EsmModel - forward ## EsmForMaskedLM [[autodoc]] EsmForMaskedLM - forward ## EsmForSequenceClassification [[autodoc]] EsmForSequenceClassification - forward ## EsmForTokenClassification [[autodoc]] EsmForTokenClassification - forward ## EsmForProteinFolding [[autodoc]] EsmForProteinFolding - forward ## TFEsmModel [[autodoc]] TFEsmModel - call ## TFEsmForMaskedLM [[autodoc]] TFEsmForMaskedLM - call ## TFEsmForSequenceClassification [[autodoc]] TFEsmForSequenceClassification - call ## TFEsmForTokenClassification [[autodoc]] TFEsmForTokenClassification - call " model_doc/hubert.md," # Hubert ## Overview Hubert was proposed in [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. The abstract from the paper is the following: *Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. 
Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). ## Usage tips - Hubert is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. - The Hubert model was fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`]. ## Resources - [Audio classification task guide](../tasks/audio_classification) - [Automatic speech recognition task guide](../tasks/asr) ## HubertConfig [[autodoc]] HubertConfig ## HubertModel [[autodoc]] HubertModel - forward ## HubertForCTC [[autodoc]] HubertForCTC - forward ## HubertForSequenceClassification [[autodoc]] HubertForSequenceClassification - forward ## TFHubertModel [[autodoc]] TFHubertModel - call ## TFHubertForCTC [[autodoc]] TFHubertForCTC - call " model_doc/distilbert.md," # DistilBERT ## Overview The DistilBERT model was proposed in the blog post [Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT](https://medium.com/huggingface/distilbert-8cf3380435b5), and the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108). DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than *bert-base-uncased* and runs 60% faster while preserving over 95% of BERT's performance as measured on the GLUE language understanding benchmark. The abstract from the paper is the following: *As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models in on-the-edge and/or under constrained computational training or inference budgets remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage knowledge distillation during the pretraining phase and show that it is possible to reduce the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive biases learned by larger models during pretraining, we introduce a triple loss combining language modeling, distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train and we demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study.* This model was contributed by [victorsanh](https://huggingface.co/victorsanh). The JAX version of this model was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation). ## Usage tips - DistilBERT doesn't have `token_type_ids`, so you don't need to indicate which token belongs to which segment. Just separate your segments with the separation token `tokenizer.sep_token` (or `[SEP]`). - DistilBERT doesn't have options to select the input positions (`position_ids` input).
This could be added if necessary though, just let us know if you need this option. - Same as BERT but smaller. Trained by distillation of the pretrained BERT model, meaning it’s been trained to predict the same probabilities as the larger model. The actual objective is a combination of: * finding the same probabilities as the teacher model * predicting the masked tokens correctly (but no next-sentence objective) * a cosine similarity between the hidden states of the student and the teacher model ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DistilBERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - A blog post on [Getting Started with Sentiment Analysis using Python](https://huggingface.co/blog/sentiment-analysis-python) with DistilBERT. - A blog post on how to [train DistilBERT with Blurr for sequence classification](https://huggingface.co/blog/fastai). - A blog post on how to use [Ray to tune DistilBERT hyperparameters](https://huggingface.co/blog/ray-tune). - A blog post on how to [train DistilBERT with Hugging Face and Amazon SageMaker](https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face). - A notebook on how to [finetune DistilBERT for multi-label classification](https://colab.research.google.com/github/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb). 🌎 - A notebook on how to [finetune DistilBERT for multiclass classification with PyTorch](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb). 🌎 - A notebook on how to [finetune DistilBERT for text classification in TensorFlow](https://colab.research.google.com/github/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb). 🌎 - [`DistilBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb). - [`TFDistilBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb). - [`FlaxDistilBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb). - [Text classification task guide](../tasks/sequence_classification) - [`DistilBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb). 
- [`TFDistilBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb). - [`FlaxDistilBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification). - [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course. - [Token classification task guide](../tasks/token_classification) - [`DistilBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFDistilBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - [`FlaxDistilBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb). - [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course. - [Masked language modeling task guide](../tasks/masked_language_modeling) - [`DistilBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb). - [`TFDistilBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb). - [`FlaxDistilBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering). - [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course. - [Question answering task guide](../tasks/question_answering) **Multiple choice** - [`DistilBertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb). - [`TFDistilBertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb). - [Multiple choice task guide](../tasks/multiple_choice) ⚗️ Optimization - A blog post on how to [quantize DistilBERT with 🤗 Optimum and Intel](https://huggingface.co/blog/intel). 
- A blog post on how [Optimizing Transformers for GPUs with 🤗 Optimum](https://www.philschmid.de/optimizing-transformers-with-optimum-gpu). - A blog post on [Optimizing Transformers with Hugging Face Optimum](https://www.philschmid.de/optimizing-transformers-with-optimum). ⚡️ Inference - A blog post on how to [Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia](https://huggingface.co/blog/bert-inferentia-sagemaker) with DistilBERT. - A blog post on [Serverless Inference with Hugging Face's Transformers, DistilBERT and Amazon SageMaker](https://www.philschmid.de/sagemaker-serverless-huggingface-distilbert). 🚀 Deploy - A blog post on how to [deploy DistilBERT on Google Cloud](https://huggingface.co/blog/how-to-deploy-a-pipeline-to-google-clouds). - A blog post on how to [deploy DistilBERT with Amazon SageMaker](https://huggingface.co/blog/deploy-hugging-face-models-easily-with-amazon-sagemaker). - A blog post on how to [Deploy BERT with Hugging Face Transformers, Amazon SageMaker and Terraform module](https://www.philschmid.de/terraform-huggingface-amazon-sagemaker). ## Combining DistilBERT and Flash Attention 2 First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature. ```bash pip install -U flash-attn --no-build-isolation Make also sure that you have a hardware that is compatible with Flash-Attention 2. Read more about it in the official documentation of flash-attn repository. Make also sure to load your model in half-precision (e.g. `torch.float16`) To load and run a model using Flash Attention 2, refer to the snippet below: thon >>> import torch >>> from transformers import AutoTokenizer, AutoModel >>> device = ""cuda"" # the device to load the model onto >>> tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased') >>> model = AutoModel.from_pretrained(""distilbert-base-uncased"", torch_dtype=torch.float16, use_flash_attention_2=True) >>> text = ""Replace me by any text you'd like."" >>> encoded_input = tokenizer(text, return_tensors='pt').to(device) >>> model.to(device) >>> output = model(**encoded_input) ## DistilBertConfig [[autodoc]] DistilBertConfig ## DistilBertTokenizer [[autodoc]] DistilBertTokenizer ## DistilBertTokenizerFast [[autodoc]] DistilBertTokenizerFast ## DistilBertModel [[autodoc]] DistilBertModel - forward ## DistilBertForMaskedLM [[autodoc]] DistilBertForMaskedLM - forward ## DistilBertForSequenceClassification [[autodoc]] DistilBertForSequenceClassification - forward ## DistilBertForMultipleChoice [[autodoc]] DistilBertForMultipleChoice - forward ## DistilBertForTokenClassification [[autodoc]] DistilBertForTokenClassification - forward ## DistilBertForQuestionAnswering [[autodoc]] DistilBertForQuestionAnswering - forward ## TFDistilBertModel [[autodoc]] TFDistilBertModel - call ## TFDistilBertForMaskedLM [[autodoc]] TFDistilBertForMaskedLM - call ## TFDistilBertForSequenceClassification [[autodoc]] TFDistilBertForSequenceClassification - call ## TFDistilBertForMultipleChoice [[autodoc]] TFDistilBertForMultipleChoice - call ## TFDistilBertForTokenClassification [[autodoc]] TFDistilBertForTokenClassification - call ## TFDistilBertForQuestionAnswering [[autodoc]] TFDistilBertForQuestionAnswering - call ## FlaxDistilBertModel [[autodoc]] FlaxDistilBertModel - __call__ ## FlaxDistilBertForMaskedLM [[autodoc]] FlaxDistilBertForMaskedLM - __call__ ## FlaxDistilBertForSequenceClassification [[autodoc]] 
FlaxDistilBertForSequenceClassification - __call__ ## FlaxDistilBertForMultipleChoice [[autodoc]] FlaxDistilBertForMultipleChoice - __call__ ## FlaxDistilBertForTokenClassification [[autodoc]] FlaxDistilBertForTokenClassification - __call__ ## FlaxDistilBertForQuestionAnswering [[autodoc]] FlaxDistilBertForQuestionAnswering - __call__ " model_doc/kosmos-2.md," # KOSMOS-2 ## Overview The KOSMOS-2 model was proposed in [Kosmos-2: Grounding Multimodal Large Language Models to the World](https://arxiv.org/abs/2306.14824) by Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei. KOSMOS-2 is a Transformer-based causal language model and is trained using the next-word prediction task on a web-scale dataset of grounded image-text pairs [GRIT](https://huggingface.co/datasets/zzliang/GRIT). The spatial coordinates of the bounding boxes in the dataset are converted to a sequence of location tokens, which are appended to their respective entity text spans (for example, `a snowman` followed by ``). The data format is similar to “hyperlinks” that connect the object regions in an image to their text span in the corresponding caption. The abstract from the paper is the following: *We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.* Overview of tasks that KOSMOS-2 can handle. Taken from the original paper. 
## Example thon >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, Kosmos2ForConditionalGeneration >>> model = Kosmos2ForConditionalGeneration.from_pretrained(""microsoft/kosmos-2-patch14-224"") >>> processor = AutoProcessor.from_pretrained(""microsoft/kosmos-2-patch14-224"") >>> url = ""https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.jpg"" >>> image = Image.open(requests.get(url, stream=True).raw) >>> prompt = "" An image of"" >>> inputs = processor(text=prompt, images=image, return_tensors=""pt"") >>> generated_ids = model.generate( pixel_values=inputs[""pixel_values""], input_ids=inputs[""input_ids""], attention_mask=inputs[""attention_mask""], image_embeds=None, image_embeds_position_mask=inputs[""image_embeds_position_mask""], use_cache=True, max_new_tokens=64, ) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] >>> processed_text = processor.post_process_generation(generated_text, cleanup_and_extract=False) >>> processed_text ' An image of a snowman warming himself by a fire.' >>> caption, entities = processor.post_process_generation(generated_text) >>> caption 'An image of a snowman warming himself by a fire.' >>> entities [('a snowman', (12, 21), [(0.390625, 0.046875, 0.984375, 0.828125)]), ('a fire', (41, 47), [(0.171875, 0.015625, 0.484375, 0.890625)])] This model was contributed by [Yih-Dar SHIEH](https://huggingface.co/ydshieh). The original code can be found [here](https://github.com/microsoft/unilm/tree/master/kosmos-2). ## Kosmos2Config [[autodoc]] Kosmos2Config ## Kosmos2ImageProcessor ## Kosmos2Processor [[autodoc]] Kosmos2Processor - __call__ ## Kosmos2Model [[autodoc]] Kosmos2Model - forward ## Kosmos2ForConditionalGeneration [[autodoc]] Kosmos2ForConditionalGeneration - forward " model_doc/bloom.md," # BLOOM ## Overview The BLOOM model has been proposed with its various versions through the [BigScience Workshop](https://bigscience.huggingface.co/). BigScience is inspired by other open science initiatives where researchers have pooled their time and resources to collectively achieve a higher impact. The architecture of BLOOM is essentially similar to GPT3 (auto-regressive model for next token prediction), but has been trained on 46 different languages and 13 programming languages. Several smaller versions of the models have been trained on the same dataset. BLOOM is available in the following versions: - [bloom-560m](https://huggingface.co/bigscience/bloom-560m) - [bloom-1b1](https://huggingface.co/bigscience/bloom-1b1) - [bloom-1b7](https://huggingface.co/bigscience/bloom-1b7) - [bloom-3b](https://huggingface.co/bigscience/bloom-3b) - [bloom-7b1](https://huggingface.co/bigscience/bloom-7b1) - [bloom](https://huggingface.co/bigscience/bloom) (176B parameters) ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BLOOM. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
- [`BloomForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). See also: - [Causal language modeling task guide](../tasks/language_modeling) - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) ⚡️ Inference - A blog on [Optimization story: Bloom inference](https://huggingface.co/blog/bloom-inference-optimization). - A blog on [Incredibly Fast BLOOM Inference with DeepSpeed and Accelerate](https://huggingface.co/blog/bloom-inference-pytorch-scripts). ⚙️ Training - A blog on [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed). ## BloomConfig [[autodoc]] BloomConfig - all ## BloomTokenizerFast [[autodoc]] BloomTokenizerFast - all ## BloomModel [[autodoc]] BloomModel - forward ## BloomForCausalLM [[autodoc]] BloomForCausalLM - forward ## BloomForSequenceClassification [[autodoc]] BloomForSequenceClassification - forward ## BloomForTokenClassification [[autodoc]] BloomForTokenClassification - forward ## BloomForQuestionAnswering [[autodoc]] BloomForQuestionAnswering - forward ## FlaxBloomModel [[autodoc]] FlaxBloomModel - __call__ ## FlaxBloomForCausalLM [[autodoc]] FlaxBloomForCausalLM - __call__ " model_doc/switch_transformers.md," # SwitchTransformers ## Overview The SwitchTransformers model was proposed in [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer. The Switch Transformer model uses a sparse T5 encoder-decoder architecture, where the MLP are replaced by a Mixture of Experts (MoE). A routing mechanism (top 1 in this case) associates each token to one of the expert, where each expert is a dense MLP. While switch transformers have a lot more weights than their equivalent dense models, the sparsity allows better scaling and better finetuning performance at scale. During a forward pass, only a fraction of the weights are used. The routing mechanism allows the model to select relevant weights on the fly which increases the model capacity without increasing the number of operations. The abstract from the paper is the following: *In deep learning, models typically reuse the same parameters for all inputs. Mixture of Experts (MoE) defies this and instead selects different parameters for each incoming example. The result is a sparsely-activated model -- with outrageous numbers of parameters -- but a constant computational cost. However, despite several notable successes of MoE, widespread adoption has been hindered by complexity, communication costs and training instability -- we address these with the Switch Transformer. We simplify the MoE routing algorithm and design intuitive improved models with reduced communication and computational costs. Our proposed training techniques help wrangle the instabilities and we show large sparse models may be trained, for the first time, with lower precision (bfloat16) formats. We design models based off T5-Base and T5-Large to obtain up to 7x increases in pre-training speed with the same computational resources. 
These improvements extend into multilingual settings where we measure gains over the mT5-Base version across all 101 languages. Finally, we advance the current scale of language models by pre-training up to trillion parameter models on the ""Colossal Clean Crawled Corpus"" and achieve a 4x speedup over the T5-XXL model.* This model was contributed by [Younes Belkada](https://huggingface.co/ybelkada) and [Arthur Zucker](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/google/flaxformer/tree/main/flaxformer/architectures/moe). ## Usage tips - SwitchTransformers uses the [`T5Tokenizer`], which can be loaded directly from each model's repository. - The released weights are pretrained on English [Masked Language Modeling](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19323/en/glossary#general-terms) task, and should be finetuned. ## Resources - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## SwitchTransformersConfig [[autodoc]] SwitchTransformersConfig ## SwitchTransformersTop1Router [[autodoc]] SwitchTransformersTop1Router - _compute_router_probabilities - forward ## SwitchTransformersSparseMLP [[autodoc]] SwitchTransformersSparseMLP - forward ## SwitchTransformersModel [[autodoc]] SwitchTransformersModel - forward ## SwitchTransformersForConditionalGeneration [[autodoc]] SwitchTransformersForConditionalGeneration - forward ## SwitchTransformersEncoderModel [[autodoc]] SwitchTransformersEncoderModel - forward " model_doc/segformer.md," # SegFormer ## Overview The SegFormer model was proposed in [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. The model consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on image segmentation benchmarks such as ADE20K and Cityscapes. The abstract from the paper is the following: *We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perception (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding, thereby avoiding the interpolation of positional codes which leads to decreased performance when the testing resolution differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from different layers, and thus combining both local attention and global attention to render powerful representations. We show that this simple and lightweight design is the key to efficient segmentation on Transformers. We scale our approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, reaching significantly better performance and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters, being 5x smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C.* The figure below illustrates the architecture of SegFormer. Taken from the [original paper](https://arxiv.org/abs/2105.15203). 
This model was contributed by [nielsr](https://huggingface.co/nielsr). The TensorFlow version of the model was contributed by [sayakpaul](https://huggingface.co/sayakpaul). The original code can be found [here](https://github.com/NVlabs/SegFormer). ## Usage tips - SegFormer consists of a hierarchical Transformer encoder, and a lightweight all-MLP decoder head. [`SegformerModel`] is the hierarchical Transformer encoder (which in the paper is also referred to as Mix Transformer or MiT). [`SegformerForSemanticSegmentation`] adds the all-MLP decoder head on top to perform semantic segmentation of images. In addition, there's [`SegformerForImageClassification`] which can be used to - you guessed it - classify images. The authors of SegFormer first pre-trained the Transformer encoder on ImageNet-1k to classify images. Next, they throw away the classification head, and replace it by the all-MLP decode head. Next, they fine-tune the model altogether on ADE20K, Cityscapes and COCO-stuff, which are important benchmarks for semantic segmentation. All checkpoints can be found on the [hub](https://huggingface.co/models?other=segformer). - The quickest way to get started with SegFormer is by checking the [example notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/SegFormer) (which showcase both inference and fine-tuning on custom data). One can also check out the [blog post](https://huggingface.co/blog/fine-tune-segformer) introducing SegFormer and illustrating how it can be fine-tuned on custom data. - TensorFlow users should refer to [this repository](https://github.com/deep-diver/segformer-tf-transformers) that shows off-the-shelf inference and fine-tuning. - One can also check out [this interactive demo on Hugging Face Spaces](https://huggingface.co/spaces/chansung/segformer-tf-transformers) to try out a SegFormer model on custom images. - SegFormer works on any input size, as it pads the input to be divisible by `config.patch_sizes`. - One can use [`SegformerImageProcessor`] to prepare images and corresponding segmentation maps for the model. Note that this image processor is fairly basic and does not include all data augmentations used in the original paper. The original preprocessing pipelines (for the ADE20k dataset for instance) can be found [here](https://github.com/NVlabs/SegFormer/blob/master/local_configs/_base_/datasets/ade20k_repeat.py). The most important preprocessing step is that images and segmentation maps are randomly cropped and padded to the same size, such as 512x512 or 640x640, after which they are normalized. - One additional thing to keep in mind is that one can initialize [`SegformerImageProcessor`] with `reduce_labels` set to `True` or `False`. In some datasets (like ADE20k), the 0 index is used in the annotated segmentation maps for background. However, ADE20k doesn't include the ""background"" class in its 150 labels. Therefore, `reduce_labels` is used to reduce all labels by 1, and to make sure no loss is computed for the background class (i.e. it replaces 0 in the annotated maps by 255, which is the *ignore_index* of the loss function used by [`SegformerForSemanticSegmentation`]). However, other datasets use the 0 index as background class and include this class as part of all labels. In that case, `reduce_labels` should be set to `False`, as loss should also be computed for the background class. 
- As most models, SegFormer comes in different sizes, the details of which can be found in the table below (taken from Table 7 of the [original paper](https://arxiv.org/abs/2105.15203)). | **Model variant** | **Depths** | **Hidden sizes** | **Decoder hidden size** | **Params (M)** | **ImageNet-1k Top 1** | | :---------------: | ------------- | ------------------- | :---------------------: | :------------: | :-------------------: | | MiT-b0 | [2, 2, 2, 2] | [32, 64, 160, 256] | 256 | 3.7 | 70.5 | | MiT-b1 | [2, 2, 2, 2] | [64, 128, 320, 512] | 256 | 14.0 | 78.7 | | MiT-b2 | [3, 4, 6, 3] | [64, 128, 320, 512] | 768 | 25.4 | 81.6 | | MiT-b3 | [3, 4, 18, 3] | [64, 128, 320, 512] | 768 | 45.2 | 83.1 | | MiT-b4 | [3, 8, 27, 3] | [64, 128, 320, 512] | 768 | 62.6 | 83.6 | | MiT-b5 | [3, 6, 40, 3] | [64, 128, 320, 512] | 768 | 82.0 | 83.8 | Note that MiT in the above table refers to the Mix Transformer encoder backbone introduced in SegFormer. For SegFormer's results on the segmentation datasets like ADE20k, refer to the [paper](https://arxiv.org/abs/2105.15203). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SegFormer. - [`SegformerForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - [Image classification task guide](../tasks/image_classification) Semantic segmentation: - [`SegformerForSemanticSegmentation`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/semantic-segmentation). - A blog on fine-tuning SegFormer on a custom dataset can be found [here](https://huggingface.co/blog/fine-tune-segformer). - More demo notebooks on SegFormer (both inference + fine-tuning on a custom dataset) can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/SegFormer). - [`TFSegformerForSemanticSegmentation`] is supported by this [example notebook](https://github.com/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb). - [Semantic segmentation task guide](../tasks/semantic_segmentation) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
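Building on the usage tips above, the following is a minimal inference sketch with [`SegformerImageProcessor`] and [`SegformerForSemanticSegmentation`]. The `nvidia/segformer-b0-finetuned-ade-512-512` checkpoint and the test image URL are only examples and can be replaced with any other SegFormer semantic segmentation checkpoint and image.

```python
import torch
import requests
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

# Assumed checkpoint (SegFormer-B0 fine-tuned on ADE20k) used purely for illustration.
checkpoint = "nvidia/segformer-b0-finetuned-ade-512-512"
processor = SegformerImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image, replace with your own
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The logits come out at a reduced resolution; the post-processing helper upsamples
# them and returns a per-pixel class map at the original image size.
segmentation_map = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(segmentation_map.shape)
```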
## SegformerConfig [[autodoc]] SegformerConfig ## SegformerFeatureExtractor [[autodoc]] SegformerFeatureExtractor - __call__ - post_process_semantic_segmentation ## SegformerImageProcessor [[autodoc]] SegformerImageProcessor - preprocess - post_process_semantic_segmentation ## SegformerModel [[autodoc]] SegformerModel - forward ## SegformerDecodeHead [[autodoc]] SegformerDecodeHead - forward ## SegformerForImageClassification [[autodoc]] SegformerForImageClassification - forward ## SegformerForSemanticSegmentation [[autodoc]] SegformerForSemanticSegmentation - forward ## TFSegformerDecodeHead [[autodoc]] TFSegformerDecodeHead - call ## TFSegformerModel [[autodoc]] TFSegformerModel - call ## TFSegformerForImageClassification [[autodoc]] TFSegformerForImageClassification - call ## TFSegformerForSemanticSegmentation [[autodoc]] TFSegformerForSemanticSegmentation - call " model_doc/gpt_neo.md," # GPT Neo ## Overview The GPTNeo model was released in the [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) repository by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. It is a GPT2 like causal language model trained on the [Pile](https://pile.eleuther.ai/) dataset. The architecture is similar to GPT2 except that GPT Neo uses local attention in every other layer with a window size of 256 tokens. This model was contributed by [valhalla](https://huggingface.co/valhalla). ## Usage example The `generate()` method can be used to generate text using GPT Neo model. thon >>> from transformers import GPTNeoForCausalLM, GPT2Tokenizer >>> model = GPTNeoForCausalLM.from_pretrained(""EleutherAI/gpt-neo-1.3B"") >>> tokenizer = GPT2Tokenizer.from_pretrained(""EleutherAI/gpt-neo-1.3B"") >>> prompt = ( ""In a shocking finding, scientists discovered a herd of unicorns living in a remote, "" ""previously unexplored valley, in the Andes Mountains. Even more surprising to the "" ""researchers was the fact that the unicorns spoke perfect English."" ) >>> input_ids = tokenizer(prompt, return_tensors=""pt"").input_ids >>> gen_tokens = model.generate( input_ids, do_sample=True, temperature=0.9, max_length=100, ) >>> gen_text = tokenizer.batch_decode(gen_tokens)[0] ## Combining GPT-Neo and Flash Attention 2 First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature. ```bash pip install -U flash-attn --no-build-isolation Make also sure that you have a hardware that is compatible with Flash-Attention 2. Read more about it in the official documentation of flash-attn repository. Make also sure to load your model in half-precision (e.g. 
`torch.float16``) To load and run a model using Flash Attention 2, refer to the snippet below: thon >>> import torch >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> device = ""cuda"" # the device to load the model onto >>> model = AutoModelForCausalLM.from_pretrained(""EleutherAI/gpt-neo-2.7B"", torch_dtype=torch.float16, use_flash_attention_2=True) >>> tokenizer = AutoTokenizer.from_pretrained(""EleutherAI/gpt-neo-2.7B"") >>> prompt = ""def hello_world():"" >>> model_inputs = tokenizer([prompt], return_tensors=""pt"").to(device) >>> model.to(device) >>> generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True) >>> tokenizer.batch_decode(generated_ids)[0] ""def hello_world():\n >>> run_script(""hello.py"")\n >>> exit(0)\n<|endoftext|>"" ### Expected speedups Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using `EleutherAI/gpt-neo-2.7B` checkpoint and the Flash Attention 2 version of the model. Note that for GPT-Neo it is not possible to train / run on very long context as the max [position embeddings](https://huggingface.co/EleutherAI/gpt-neo-2.7B/blob/main/config.json#L58 ) is limited to 2048 - but this is applicable to all gpt-neo models and not specific to FA-2 ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Causal language modeling task guide](../tasks/language_modeling) ## GPTNeoConfig [[autodoc]] GPTNeoConfig ## GPTNeoModel [[autodoc]] GPTNeoModel - forward ## GPTNeoForCausalLM [[autodoc]] GPTNeoForCausalLM - forward ## GPTNeoForQuestionAnswering [[autodoc]] GPTNeoForQuestionAnswering - forward ## GPTNeoForSequenceClassification [[autodoc]] GPTNeoForSequenceClassification - forward ## GPTNeoForTokenClassification [[autodoc]] GPTNeoForTokenClassification - forward ## FlaxGPTNeoModel [[autodoc]] FlaxGPTNeoModel - __call__ ## FlaxGPTNeoForCausalLM [[autodoc]] FlaxGPTNeoForCausalLM - __call__ " model_doc/realm.md," # REALM ## Overview The REALM model was proposed in [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. It's a retrieval-augmented language model that firstly retrieves documents from a textual knowledge corpus and then utilizes retrieved documents to process question answering tasks. The abstract from the paper is the following: *Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering. However, this knowledge is stored implicitly in the parameters of a neural network, requiring ever-larger networks to cover more facts. To capture knowledge in a more modular and interpretable way, we augment language model pre-training with a latent knowledge retriever, which allows the model to retrieve and attend over documents from a large corpus such as Wikipedia, used during pre-training, fine-tuning and inference. For the first time, we show how to pre-train such a knowledge retriever in an unsupervised manner, using masked language modeling as the learning signal and backpropagating through a retrieval step that considers millions of documents. We demonstrate the effectiveness of Retrieval-Augmented Language Model pre-training (REALM) by fine-tuning on the challenging task of Open-domain Question Answering (Open-QA). 
We compare against state-of-the-art models for both explicit and implicit knowledge storage on three popular Open-QA benchmarks, and find that we outperform all previous methods by a significant margin (4-16% absolute accuracy), while also providing qualitative benefits such as interpretability and modularity.* This model was contributed by [qqaatw](https://huggingface.co/qqaatw). The original code can be found [here](https://github.com/google-research/language/tree/master/language/realm). ## RealmConfig [[autodoc]] RealmConfig ## RealmTokenizer [[autodoc]] RealmTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary - batch_encode_candidates ## RealmTokenizerFast [[autodoc]] RealmTokenizerFast - batch_encode_candidates ## RealmRetriever [[autodoc]] RealmRetriever ## RealmEmbedder [[autodoc]] RealmEmbedder - forward ## RealmScorer [[autodoc]] RealmScorer - forward ## RealmKnowledgeAugEncoder [[autodoc]] RealmKnowledgeAugEncoder - forward ## RealmReader [[autodoc]] RealmReader - forward ## RealmForOpenQA [[autodoc]] RealmForOpenQA - block_embedding_to - forward" model_doc/decision_transformer.md," # Decision Transformer ## Overview The Decision Transformer model was proposed in [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. The abstract from the paper is the following: *We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks.* This version of the model is for tasks where the state is a vector. This model was contributed by [edbeeching](https://huggingface.co/edbeeching). The original code can be found [here](https://github.com/kzl/decision-transformer). ## DecisionTransformerConfig [[autodoc]] DecisionTransformerConfig ## DecisionTransformerGPT2Model [[autodoc]] DecisionTransformerGPT2Model - forward ## DecisionTransformerModel [[autodoc]] DecisionTransformerModel - forward " model_doc/roc_bert.md," # RoCBert ## Overview The RoCBert model was proposed in [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou. It's a pretrained Chinese language model that is robust under various forms of adversarial attacks. The abstract from the paper is the following: *Large-scale pretrained language models have achieved SOTA results on NLP tasks. 
However, they have been shown vulnerable to adversarial attacks especially for logographic languages like Chinese. In this work, we propose ROCBERT: a pretrained Chinese Bert that is robust to various forms of adversarial attacks like word perturbation, synonyms, typos, etc. It is pretrained with the contrastive learning objective which maximizes the label consistency under different synthesized adversarial examples. The model takes as input multimodal information including the semantic, phonetic and visual features. We show all these features are important to the model robustness since the attack can be performed in all the three forms. Across 5 Chinese NLU tasks, ROCBERT outperforms strong baselines under three blackbox adversarial algorithms without sacrificing the performance on clean testset. It also performs the best in the toxic content detection task under human-made attacks.* This model was contributed by [weiweishi](https://huggingface.co/weiweishi). ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## RoCBertConfig [[autodoc]] RoCBertConfig - all ## RoCBertTokenizer [[autodoc]] RoCBertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## RoCBertModel [[autodoc]] RoCBertModel - forward ## RoCBertForPreTraining [[autodoc]] RoCBertForPreTraining - forward ## RoCBertForCausalLM [[autodoc]] RoCBertForCausalLM - forward ## RoCBertForMaskedLM [[autodoc]] RoCBertForMaskedLM - forward ## RoCBertForSequenceClassification [[autodoc]] transformers.RoCBertForSequenceClassification - forward ## RoCBertForMultipleChoice [[autodoc]] transformers.RoCBertForMultipleChoice - forward ## RoCBertForTokenClassification [[autodoc]] transformers.RoCBertForTokenClassification - forward ## RoCBertForQuestionAnswering [[autodoc]] RoCBertForQuestionAnswering - forward " model_doc/deberta-v2.md," # DeBERTa-v2 ## Overview The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen It is based on Google's BERT model released in 2018 and Facebook's RoBERTa model released in 2019. It builds on RoBERTa with disentangled attention and enhanced mask decoder training with half of the data used in RoBERTa. The abstract from the paper is the following: *Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to predict the masked tokens for model pretraining. 
We show that these two techniques significantly improve the efficiency of model pretraining and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). The DeBERTa code and pre-trained models will be made publicly available at https://github.com/microsoft/DeBERTa.* The following information is visible directly on the [original implementation repository](https://github.com/microsoft/DeBERTa). DeBERTa v2 is the second version of the DeBERTa model. It includes the 1.5B model used for the SuperGLUE single-model submission and achieving 89.9, versus human baseline 89.8. You can find more details about this submission in the authors' [blog](https://www.microsoft.com/en-us/research/blog/microsoft-deberta-surpasses-human-performance-on-the-superglue-benchmark/) New in v2: - **Vocabulary** In v2 the tokenizer is changed to use a new vocabulary of size 128K built from the training data. Instead of a GPT2-based tokenizer, the tokenizer is now [sentencepiece-based](https://github.com/google/sentencepiece) tokenizer. - **nGiE(nGram Induced Input Encoding)** The DeBERTa-v2 model uses an additional convolution layer aside with the first transformer layer to better learn the local dependency of input tokens. - **Sharing position projection matrix with content projection matrix in attention layer** Based on previous experiments, this can save parameters without affecting the performance. - **Apply bucket to encode relative positions** The DeBERTa-v2 model uses log bucket to encode relative positions similar to T5. - **900M model & 1.5B model** Two additional model sizes are available: 900M and 1.5B, which significantly improves the performance of downstream tasks. This model was contributed by [DeBERTa](https://huggingface.co/DeBERTa). This model TF 2.0 implementation was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/microsoft/DeBERTa). 
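As a quick, illustrative check of the sentencepiece-based 128K vocabulary described above, the snippet below simply loads the v2 tokenizer and inspects it; the `microsoft/deberta-v2-xlarge` checkpoint is assumed here, and any DeBERTa-v2 checkpoint should behave the same way.

```python
from transformers import AutoTokenizer

# Assumed checkpoint name for illustration only.
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")

print(type(tokenizer).__name__)  # typically the sentencepiece-based DebertaV2 tokenizer (not GPT2/BPE)
print(tokenizer.vocab_size)      # on the order of 128K, as described above
print(tokenizer.tokenize("DeBERTa-v2 uses a sentencepiece-based tokenizer."))
```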
## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## DebertaV2Config [[autodoc]] DebertaV2Config ## DebertaV2Tokenizer [[autodoc]] DebertaV2Tokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## DebertaV2TokenizerFast [[autodoc]] DebertaV2TokenizerFast - build_inputs_with_special_tokens - create_token_type_ids_from_sequences ## DebertaV2Model [[autodoc]] DebertaV2Model - forward ## DebertaV2PreTrainedModel [[autodoc]] DebertaV2PreTrainedModel - forward ## DebertaV2ForMaskedLM [[autodoc]] DebertaV2ForMaskedLM - forward ## DebertaV2ForSequenceClassification [[autodoc]] DebertaV2ForSequenceClassification - forward ## DebertaV2ForTokenClassification [[autodoc]] DebertaV2ForTokenClassification - forward ## DebertaV2ForQuestionAnswering [[autodoc]] DebertaV2ForQuestionAnswering - forward ## DebertaV2ForMultipleChoice [[autodoc]] DebertaV2ForMultipleChoice - forward ## TFDebertaV2Model [[autodoc]] TFDebertaV2Model - call ## TFDebertaV2PreTrainedModel [[autodoc]] TFDebertaV2PreTrainedModel - call ## TFDebertaV2ForMaskedLM [[autodoc]] TFDebertaV2ForMaskedLM - call ## TFDebertaV2ForSequenceClassification [[autodoc]] TFDebertaV2ForSequenceClassification - call ## TFDebertaV2ForTokenClassification [[autodoc]] TFDebertaV2ForTokenClassification - call ## TFDebertaV2ForQuestionAnswering [[autodoc]] TFDebertaV2ForQuestionAnswering - call ## TFDebertaV2ForMultipleChoice [[autodoc]] TFDebertaV2ForMultipleChoice - call " model_doc/xmod.md," # X-MOD ## Overview The X-MOD model was proposed in [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255) by Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, and Mikel Artetxe. X-MOD extends multilingual masked language models like [XLM-R](xlm-roberta) to include language-specific modular components (_language adapters_) during pre-training. For fine-tuning, the language adapters in each transformer layer are frozen. The abstract from the paper is the following: *Multilingual pre-trained models are known to suffer from the curse of multilinguality, which causes per-language performance to drop as they cover more languages. We address this issue by introducing language-specific modules, which allows us to grow the total capacity of the model, while keeping the total number of trainable parameters per language constant. In contrast with prior work that learns language-specific components post-hoc, we pre-train the modules of our Cross-lingual Modular (X-MOD) models from the start. Our experiments on natural language inference, named entity recognition and question answering show that our approach not only mitigates the negative interference between languages, but also enables positive transfer, resulting in improved monolingual and cross-lingual performance. Furthermore, our approach enables adding languages post-hoc with no measurable drop in performance, no longer limiting the model usage to the set of pre-trained languages.* This model was contributed by [jvamvas](https://huggingface.co/jvamvas). 
The original code can be found [here](https://github.com/facebookresearch/fairseq/tree/58cc6cca18f15e6d56e3f60c959fe4f878960a60/fairseq/models/xmod) and the original documentation can be found [here](https://github.com/facebookresearch/fairseq/tree/58cc6cca18f15e6d56e3f60c959fe4f878960a60/examples/xmod).

## Usage tips

- X-MOD is similar to [XLM-R](xlm-roberta), but a difference is that the input language needs to be specified so that the correct language adapter can be activated.
- The main models – base and large – have adapters for 81 languages.

## Adapter Usage

### Input language

There are two ways to specify the input language:

1. By setting a default language before using the model:

```python
from transformers import XmodModel

model = XmodModel.from_pretrained("facebook/xmod-base")
model.set_default_language("en_XX")
```

2. By explicitly passing the index of the language adapter for each sample:

```python
import torch

input_ids = torch.tensor(
    [
        [0, 581, 10269, 83, 99942, 136, 60742, 23, 70, 80583, 18276, 2],
        [0, 1310, 49083, 443, 269, 71, 5486, 165, 60429, 660, 23, 2],
    ]
)
lang_ids = torch.LongTensor(
    [
        0,  # en_XX
        8,  # de_DE
    ]
)
output = model(input_ids, lang_ids=lang_ids)
```

### Fine-tuning

The paper recommends that the embedding layer and the language adapters are frozen during fine-tuning. A method for doing this is provided:

```python
model.freeze_embeddings_and_language_adapters()
# Fine-tune the model ...
```

### Cross-lingual transfer

After fine-tuning, zero-shot cross-lingual transfer can be tested by activating the language adapter of the target language:

```python
model.set_default_language("de_DE")
# Evaluate the model on German examples ...
```

## Resources

- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)

## XmodConfig

[[autodoc]] XmodConfig

## XmodModel

[[autodoc]] XmodModel
    - forward

## XmodForCausalLM

[[autodoc]] XmodForCausalLM
    - forward

## XmodForMaskedLM

[[autodoc]] XmodForMaskedLM
    - forward

## XmodForSequenceClassification

[[autodoc]] XmodForSequenceClassification
    - forward

## XmodForMultipleChoice

[[autodoc]] XmodForMultipleChoice
    - forward

## XmodForTokenClassification

[[autodoc]] XmodForTokenClassification
    - forward

## XmodForQuestionAnswering

[[autodoc]] XmodForQuestionAnswering
    - forward
" model_doc/albert.md," # ALBERT

## Overview

The ALBERT model was proposed in [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942) by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. It presents two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT:

- Splitting the embedding matrix into two smaller matrices.
- Using repeating layers split among groups.

The abstract from the paper is the following:

*Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations, longer training times, and unexpected model degradation. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT.
Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large.*

This model was contributed by [lysandre](https://huggingface.co/lysandre). The Jax version of this model was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/google-research/ALBERT).

## Usage tips

- ALBERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left.
- ALBERT uses repeating layers, which results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.
- Embedding size E is different from hidden size H, which is justified because the embeddings are context independent (one embedding vector represents one token), whereas hidden states are context dependent (one hidden state represents a sequence of tokens), so it's more logical to have H >> E. Also, the embedding matrix is large since it's V x E (V being the vocab size). If E < H, it has fewer parameters.
- Layers are split in groups that share parameters (to save memory). Next sentence prediction is replaced by a sentence ordering prediction: in the inputs, we have two sentences A and B (that are consecutive) and we either feed A followed by B or B followed by A. The model must predict if they have been swapped or not.

## Resources

The resources provided in the following sections consist of a list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ALBERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

- [`AlbertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification).
- [`TFAlbertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification).
- [`FlaxAlbertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb).
- Check the [Text classification task guide](../tasks/sequence_classification) on how to use the model.
- [`AlbertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification).
- [`TFAlbertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb). - [`FlaxAlbertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification). - [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course. - Check the [Token classification task guide](../tasks/token_classification) on how to use the model. - [`AlbertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFAlbertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - [`FlaxAlbertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb). - [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course. - Check the [Masked language modeling task guide](../tasks/masked_language_modeling) on how to use the model. - [`AlbertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb). - [`TFAlbertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb). - [`FlaxAlbertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering). - [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course. - Check the [Question answering task guide](../tasks/question_answering) on how to use the model. **Multiple choice** - [`AlbertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb). - [`TFAlbertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb). - Check the [Multiple choice task guide](../tasks/multiple_choice) on how to use the model. 
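To complement the guides above, here is a minimal, illustrative masked language modeling sketch. The `albert-base-v2` checkpoint name is our assumption for demonstration purposes.

```python
# Minimal sketch: fill in a masked token with AlbertForMaskedLM.
# Assumes the "albert-base-v2" checkpoint and the sentencepiece package.
import torch
from transformers import AlbertTokenizer, AlbertForMaskedLM

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForMaskedLM.from_pretrained("albert-base-v2")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and decode its most likely replacement
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```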
## AlbertConfig [[autodoc]] AlbertConfig ## AlbertTokenizer [[autodoc]] AlbertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## AlbertTokenizerFast [[autodoc]] AlbertTokenizerFast ## Albert specific outputs [[autodoc]] models.albert.modeling_albert.AlbertForPreTrainingOutput [[autodoc]] models.albert.modeling_tf_albert.TFAlbertForPreTrainingOutput ## AlbertModel [[autodoc]] AlbertModel - forward ## AlbertForPreTraining [[autodoc]] AlbertForPreTraining - forward ## AlbertForMaskedLM [[autodoc]] AlbertForMaskedLM - forward ## AlbertForSequenceClassification [[autodoc]] AlbertForSequenceClassification - forward ## AlbertForMultipleChoice [[autodoc]] AlbertForMultipleChoice ## AlbertForTokenClassification [[autodoc]] AlbertForTokenClassification - forward ## AlbertForQuestionAnswering [[autodoc]] AlbertForQuestionAnswering - forward ## TFAlbertModel [[autodoc]] TFAlbertModel - call ## TFAlbertForPreTraining [[autodoc]] TFAlbertForPreTraining - call ## TFAlbertForMaskedLM [[autodoc]] TFAlbertForMaskedLM - call ## TFAlbertForSequenceClassification [[autodoc]] TFAlbertForSequenceClassification - call ## TFAlbertForMultipleChoice [[autodoc]] TFAlbertForMultipleChoice - call ## TFAlbertForTokenClassification [[autodoc]] TFAlbertForTokenClassification - call ## TFAlbertForQuestionAnswering [[autodoc]] TFAlbertForQuestionAnswering - call ## FlaxAlbertModel [[autodoc]] FlaxAlbertModel - __call__ ## FlaxAlbertForPreTraining [[autodoc]] FlaxAlbertForPreTraining - __call__ ## FlaxAlbertForMaskedLM [[autodoc]] FlaxAlbertForMaskedLM - __call__ ## FlaxAlbertForSequenceClassification [[autodoc]] FlaxAlbertForSequenceClassification - __call__ ## FlaxAlbertForMultipleChoice [[autodoc]] FlaxAlbertForMultipleChoice - __call__ ## FlaxAlbertForTokenClassification [[autodoc]] FlaxAlbertForTokenClassification - __call__ ## FlaxAlbertForQuestionAnswering [[autodoc]] FlaxAlbertForQuestionAnswering - __call__ " model_doc/seamless_m4t.md," # SeamlessM4T ## Overview The SeamlessM4T model was proposed in [SeamlessM4T — Massively Multilingual & Multimodal Machine Translation](https://dl.fbaipublicfiles.com/seamless/seamless_m4t_paper.pdf) by the Seamless Communication team from Meta AI. SeamlessM4T is a collection of models designed to provide high quality translation, allowing people from different linguistic communities to communicate effortlessly through speech and text. SeamlessM4T enables multiple tasks without relying on separate models: - Speech-to-speech translation (S2ST) - Speech-to-text translation (S2TT) - Text-to-speech translation (T2ST) - Text-to-text translation (T2TT) - Automatic speech recognition (ASR) [`SeamlessM4TModel`] can perform all the above tasks, but each task also has its own dedicated sub-model. The abstract from the paper is the following: *What does it take to create the Babel Fish, a tool that can help individuals translate speech between any two languages? While recent breakthroughs in text-based models have pushed machine translation coverage beyond 200 languages, unified speech-to-speech translation models have yet to achieve similar strides. More specifically, conventional speech-to-speech translation systems rely on cascaded systems that perform translation progressively, putting high-performing unified systems out of reach. 
To address these gaps, we introduce SeamlessM4T, a single model that supports speech-to-speech translation, speech-to-text translation, text-to-speech translation, text-to-text translation, and automatic speech recognition for up to 100 languages. To build this, we used 1 million hours of open speech audio data to learn self-supervised speech representations with w2v-BERT 2.0. Subsequently, we created a multimodal corpus of automatically aligned speech translations. Filtered and combined with human-labeled and pseudo-labeled data, we developed the first multilingual system capable of translating from and into English for both speech and text. On FLEURS, SeamlessM4T sets a new standard for translations into multiple target languages, achieving an improvement of 20% BLEU over the previous SOTA in direct speech-to-text translation. Compared to strong cascaded models, SeamlessM4T improves the quality of into-English translation by 1.3 BLEU points in speech-to-text and by 2.6 ASR-BLEU points in speech-to-speech. Tested for robustness, our system performs better against background noises and speaker variations in speech-to-text tasks compared to the current SOTA model. Critically, we evaluated SeamlessM4T on gender bias and added toxicity to assess translation safety. Finally, all contributions in this work are open-sourced and accessible at https://github.com/facebookresearch/seamless_communication* ## Usage First, load the processor and a checkpoint of the model: thon >>> from transformers import AutoProcessor, SeamlessM4TModel >>> processor = AutoProcessor.from_pretrained(""facebook/hf-seamless-m4t-medium"") >>> model = SeamlessM4TModel.from_pretrained(""facebook/hf-seamless-m4t-medium"") You can seamlessly use this model on text or on audio, to generated either translated text or translated audio. Here is how to use the processor to process text and audio: thon >>> # let's load an audio sample from an Arabic speech corpus >>> from datasets import load_dataset >>> dataset = load_dataset(""arabic_speech_corpus"", split=""test"", streaming=True) >>> audio_sample = next(iter(dataset))[""audio""] >>> # now, process it >>> audio_inputs = processor(audios=audio_sample[""array""], return_tensors=""pt"") >>> # now, process some English test as well >>> text_inputs = processor(text = ""Hello, my dog is cute"", src_lang=""eng"", return_tensors=""pt"") ### Speech [`SeamlessM4TModel`] can *seamlessly* generate text or speech with few or no changes. Let's target Russian voice translation: thon >>> audio_array_from_text = model.generate(**text_inputs, tgt_lang=""rus"")[0].cpu().numpy().squeeze() >>> audio_array_from_audio = model.generate(**audio_inputs, tgt_lang=""rus"")[0].cpu().numpy().squeeze() With basically the same code, I've translated English text and Arabic speech to Russian speech samples. ### Text Similarly, you can generate translated text from audio files or from text with the same model. You only have to pass `generate_speech=False` to [`SeamlessM4TModel.generate`]. This time, let's translate to French. thon >>> # from audio >>> output_tokens = model.generate(**audio_inputs, tgt_lang=""fra"", generate_speech=False) >>> translated_text_from_audio = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True) >>> # from text >>> output_tokens = model.generate(**text_inputs, tgt_lang=""fra"", generate_speech=False) >>> translated_text_from_text = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True) ### Tips #### 1. 
Use dedicated models [`SeamlessM4TModel`] is transformers top level model to generate speech and text, but you can also use dedicated models that perform the task without additional components, thus reducing the memory footprint. For example, you can replace the audio-to-audio generation snippet with the model dedicated to the S2ST task, the rest is exactly the same code: thon >>> from transformers import SeamlessM4TForSpeechToSpeech >>> model = SeamlessM4TForSpeechToSpeech.from_pretrained(""facebook/hf-seamless-m4t-medium"") Or you can replace the text-to-text generation snippet with the model dedicated to the T2TT task, you only have to remove `generate_speech=False`. thon >>> from transformers import SeamlessM4TForTextToText >>> model = SeamlessM4TForTextToText.from_pretrained(""facebook/hf-seamless-m4t-medium"") Feel free to try out [`SeamlessM4TForSpeechToText`] and [`SeamlessM4TForTextToSpeech`] as well. #### 2. Change the speaker identity You have the possibility to change the speaker used for speech synthesis with the `spkr_id` argument. Some `spkr_id` works better than other for some languages! #### 3. Change the generation strategy You can use different [generation strategies](./generation_strategies) for speech and text generation, e.g `.generate(input_ids=input_ids, text_num_beams=4, speech_do_sample=True)` which will successively perform beam-search decoding on the text model, and multinomial sampling on the speech model. #### 4. Generate speech and text at the same time Use `return_intermediate_token_ids=True` with [`SeamlessM4TModel`] to return both speech and text ! ## Model architecture SeamlessM4T features a versatile architecture that smoothly handles the sequential generation of text and speech. This setup comprises two sequence-to-sequence (seq2seq) models. The first model translates the input modality into translated text, while the second model generates speech tokens, known as ""unit tokens,"" from the translated text. Each modality has its own dedicated encoder with a unique architecture. Additionally, for speech output, a vocoder inspired by the [HiFi-GAN](https://arxiv.org/abs/2010.05646) architecture is placed on top of the second seq2seq model. Here's how the generation process works: - Input text or speech is processed through its specific encoder. - A decoder creates text tokens in the desired language. - If speech generation is required, the second seq2seq model, following a standard encoder-decoder structure, generates unit tokens. - These unit tokens are then passed through the final vocoder to produce the actual speech. This model was contributed by [ylacombe](https://huggingface.co/ylacombe). The original code can be found [here](https://github.com/facebookresearch/seamless_communication). 
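Building on tip 4 above, the following is a hedged sketch of generating the translated text and the synthesized speech in a single call. The attribute names `sequences` and `waveform` on the generation output are assumptions on our part and are not confirmed by this page.

```python
# Hedged sketch: request both the translated text tokens and the waveform at once.
# Assumes the output of generate() exposes `sequences` (text tokens) and `waveform`.
import torch
from transformers import AutoProcessor, SeamlessM4TModel

processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-medium")
model = SeamlessM4TModel.from_pretrained("facebook/hf-seamless-m4t-medium")

text_inputs = processor(text="Hello, my dog is cute", src_lang="eng", return_tensors="pt")

with torch.no_grad():
    output = model.generate(
        **text_inputs,
        tgt_lang="rus",
        return_intermediate_token_ids=True,  # also return the translated text tokens
    )

translated_text = processor.decode(output.sequences.tolist()[0], skip_special_tokens=True)
audio_array = output.waveform.cpu().numpy().squeeze()
```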
## SeamlessM4TModel [[autodoc]] SeamlessM4TModel - generate ## SeamlessM4TForTextToSpeech [[autodoc]] SeamlessM4TForTextToSpeech - generate ## SeamlessM4TForSpeechToSpeech [[autodoc]] SeamlessM4TForSpeechToSpeech - generate ## SeamlessM4TForTextToText [[autodoc]] transformers.SeamlessM4TForTextToText - forward - generate ## SeamlessM4TForSpeechToText [[autodoc]] transformers.SeamlessM4TForSpeechToText - forward - generate ## SeamlessM4TConfig [[autodoc]] SeamlessM4TConfig ## SeamlessM4TTokenizer [[autodoc]] SeamlessM4TTokenizer - __call__ - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## SeamlessM4TTokenizerFast [[autodoc]] SeamlessM4TTokenizerFast - __call__ ## SeamlessM4TFeatureExtractor [[autodoc]] SeamlessM4TFeatureExtractor - __call__ ## SeamlessM4TProcessor [[autodoc]] SeamlessM4TProcessor - __call__ ## SeamlessM4TCodeHifiGan [[autodoc]] SeamlessM4TCodeHifiGan ## SeamlessM4THifiGan [[autodoc]] SeamlessM4THifiGan ## SeamlessM4TTextToUnitModel [[autodoc]] SeamlessM4TTextToUnitModel ## SeamlessM4TTextToUnitForConditionalGeneration [[autodoc]] SeamlessM4TTextToUnitForConditionalGeneration " model_doc/dialogpt.md," # DialoGPT ## Overview DialoGPT was proposed in [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. It's a GPT2 Model trained on 147M conversation-like exchanges extracted from Reddit. The abstract from the paper is the following: *We present a large, tunable neural conversational response generation model, DialoGPT (dialogue generative pre-trained transformer). Trained on 147M conversation-like exchanges extracted from Reddit comment chains over a period spanning from 2005 through 2017, DialoGPT extends the Hugging Face PyTorch transformer to attain a performance close to human both in terms of automatic and human evaluation in single-turn dialogue settings. We show that conversational systems that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response generation and the development of more intelligent open-domain dialogue systems.* The original code can be found [here](https://github.com/microsoft/DialoGPT). ## Usage tips - DialoGPT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - DialoGPT was trained with a causal language modeling (CLM) objective on conversational data and is therefore powerful at response generation in open-domain dialogue systems. - DialoGPT enables the user to create a chat bot in just 10 lines of code as shown on [DialoGPT's model card](https://huggingface.co/microsoft/DialoGPT-medium). Training: In order to train or fine-tune DialoGPT, one can use causal language modeling training. To cite the official paper: *We follow the OpenAI GPT-2 to model a multiturn dialogue session as a long text and frame the generation task as language modeling. We first concatenate all dialog turns within a dialogue session into a long text x_1,, x_N (N is the sequence length), ended by the end-of-text token.* For more information please confer to the original paper. 
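As a minimal, illustrative sketch of that formatting scheme, each dialogue turn is appended to the running history and terminated with the end-of-text token before generation. The `microsoft/DialoGPT-medium` checkpoint and the generation settings below are our choices for demonstration, not prescribed by the paper.

```python
# Minimal sketch: concatenate dialogue turns, each ended by the end-of-text token,
# then generate the next response. Assumes the "microsoft/DialoGPT-medium" checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

history = "Does money buy happiness?" + tokenizer.eos_token
input_ids = tokenizer.encode(history, return_tensors="pt")

# Generate a response conditioned on the concatenated history
output_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```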
DialoGPT's architecture is based on the GPT2 model, refer to [GPT2's documentation page](gpt2) for API reference and examples. " model_doc/dinat.md," # Dilated Neighborhood Attention Transformer ## Overview DiNAT was proposed in [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi. It extends [NAT](nat) by adding a Dilated Neighborhood Attention pattern to capture global context, and shows significant performance improvements over it. The abstract from the paper is the following: *Transformers are quickly becoming one of the most heavily applied deep learning architectures across modalities, domains, and tasks. In vision, on top of ongoing efforts into plain transformers, hierarchical transformers have also gained significant attention, thanks to their performance and easy integration into existing frameworks. These models typically employ localized attention mechanisms, such as the sliding-window Neighborhood Attention (NA) or Swin Transformer's Shifted Window Self Attention. While effective at reducing self attention's quadratic complexity, local attention weakens two of the most desirable properties of self attention: long range inter-dependency modeling, and global receptive field. In this paper, we introduce Dilated Neighborhood Attention (DiNA), a natural, flexible and efficient extension to NA that can capture more global context and expand receptive fields exponentially at no additional cost. NA's local attention and DiNA's sparse global attention complement each other, and therefore we introduce Dilated Neighborhood Attention Transformer (DiNAT), a new hierarchical vision transformer built upon both. DiNAT variants enjoy significant improvements over strong baselines such as NAT, Swin, and ConvNeXt. Our large model is faster and ahead of its Swin counterpart by 1.5% box AP in COCO object detection, 1.3% mask AP in COCO instance segmentation, and 1.1% mIoU in ADE20K semantic segmentation. Paired with new frameworks, our large variant is the new state of the art panoptic segmentation model on COCO (58.2 PQ) and ADE20K (48.5 PQ), and instance segmentation model on Cityscapes (44.5 AP) and ADE20K (35.4 AP) (no extra data). It also matches the state of the art specialized semantic segmentation models on ADE20K (58.2 mIoU), and ranks second on Cityscapes (84.5 mIoU) (no extra data). * Neighborhood Attention with different dilation values. Taken from the original paper. This model was contributed by [Ali Hassani](https://huggingface.co/alihassanijr). The original code can be found [here](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer). ## Usage tips DiNAT can be used as a *backbone*. When `output_hidden_states = True`, it will output both `hidden_states` and `reshaped_hidden_states`. The `reshaped_hidden_states` have a shape of `(batch, num_channels, height, width)` rather than `(batch_size, height, width, num_channels)`. Notes: - DiNAT depends on [NATTEN](https://github.com/SHI-Labs/NATTEN/)'s implementation of Neighborhood Attention and Dilated Neighborhood Attention. You can install it with pre-built wheels for Linux by referring to [shi-labs.com/natten](https://shi-labs.com/natten), or build on your system by running `pip install natten`. Note that the latter will likely take time to compile. NATTEN does not support Windows devices yet. - Patch size of 4 is only supported at the moment. 
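As an illustrative sketch of the backbone usage described above, the channels-first `reshaped_hidden_states` can be inspected as follows. The `shi-labs/dinat-mini-in1k-224` checkpoint name is our assumption, and NATTEN must be installed.

```python
# Minimal sketch: run DinatModel and inspect the channels-first feature maps.
# Assumes the "shi-labs/dinat-mini-in1k-224" checkpoint and an installed NATTEN package.
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, DinatModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("shi-labs/dinat-mini-in1k-224")
model = DinatModel.from_pretrained("shi-labs/dinat-mini-in1k-224")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Each element has shape (batch, num_channels, height, width), which is convenient
# for downstream dense-prediction heads.
for feature_map in outputs.reshaped_hidden_states:
    print(feature_map.shape)
```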
## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DiNAT. - [`DinatForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## DinatConfig [[autodoc]] DinatConfig ## DinatModel [[autodoc]] DinatModel - forward ## DinatForImageClassification [[autodoc]] DinatForImageClassification - forward " model_doc/altclip.md," # AltCLIP ## Overview The AltCLIP model was proposed in [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679v2) by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu. AltCLIP (Altering the Language Encoder in CLIP) is a neural network trained on a variety of image-text and text-text pairs. By switching CLIP's text encoder with a pretrained multilingual text encoder XLM-R, we could obtain very close performances with CLIP on almost all tasks, and extended original CLIP's capabilities such as multilingual understanding. The abstract from the paper is the following: *In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flicker30k- CN, and COCO-CN. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding.* This model was contributed by [jongjyh](https://huggingface.co/jongjyh). ## Usage tips and example The usage of AltCLIP is very similar to the CLIP. the difference between CLIP is the text encoder. Note that we use bidirectional attention instead of casual attention and we take the [CLS] token in XLM-R to represent text embedding. AltCLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image classification. AltCLIP uses a ViT like transformer to get visual features and a bidirectional language model to get the text features. Both the text and visual features are then projected to a latent space with identical dimension. The dot product between the projected image and text features is then used as a similar score. To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches, which are then linearly embedded. A [CLS] token is added to serve as representation of an entire image. The authors also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. 
The [`CLIPImageProcessor`] can be used to resize (or rescale) and normalize images for the model. The [`AltCLIPProcessor`] wraps a [`CLIPImageProcessor`] and a [`XLMRobertaTokenizer`] into a single instance to both encode the text and prepare the images. The following example shows how to get the image-text similarity scores using [`AltCLIPProcessor`] and [`AltCLIPModel`]. thon >>> from PIL import Image >>> import requests >>> from transformers import AltCLIPModel, AltCLIPProcessor >>> model = AltCLIPModel.from_pretrained(""BAAI/AltCLIP"") >>> processor = AltCLIPProcessor.from_pretrained(""BAAI/AltCLIP"") >>> url = ""http://images.cocodataset.org/val2017/000000039769.jpg"" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(text=[""a photo of a cat"", ""a photo of a dog""], images=image, return_tensors=""pt"", padding=True) >>> outputs = model(**inputs) >>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score >>> probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities This model is based on `CLIPModel`, use it like you would use the original [CLIP](clip). ## AltCLIPConfig [[autodoc]] AltCLIPConfig - from_text_vision_configs ## AltCLIPTextConfig [[autodoc]] AltCLIPTextConfig ## AltCLIPVisionConfig [[autodoc]] AltCLIPVisionConfig ## AltCLIPProcessor [[autodoc]] AltCLIPProcessor ## AltCLIPModel [[autodoc]] AltCLIPModel - forward - get_text_features - get_image_features ## AltCLIPTextModel [[autodoc]] AltCLIPTextModel - forward ## AltCLIPVisionModel [[autodoc]] AltCLIPVisionModel - forward" model_doc/regnet.md," # RegNet ## Overview The RegNet model was proposed in [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár. The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space. The abstract from the paper is the following: *In this work, we present a new network design paradigm. Our goal is to help advance the understanding of network design and discover design principles that generalize across settings. Instead of focusing on designing individual network instances, we design network design spaces that parametrize populations of networks. The overall process is analogous to classic manual design of networks, but elevated to the design space level. Using our methodology we explore the structure aspect of network design and arrive at a low-dimensional design space consisting of simple, regular networks that we call RegNet. The core insight of the RegNet parametrization is surprisingly simple: widths and depths of good networks can be explained by a quantized linear function. We analyze the RegNet design space and arrive at interesting findings that do not match the current practice of network design. The RegNet design space provides simple and fast networks that work well across a wide range of flop regimes. Under comparable training settings and flops, the RegNet models outperform the popular EfficientNet models while being up to 5x faster on GPUs.* This model was contributed by [Francesco](https://huggingface.co/Francesco). 
The TensorFlow version of the model was contributed by [sayakpaul](https://huggingface.co/sayakpaul) and [ariG23498](https://huggingface.co/ariG23498). The original code can be found [here](https://github.com/facebookresearch/pycls). The huge 10B model from [Self-supervised Pretraining of Visual Features in the Wild](https://arxiv.org/abs/2103.01988), trained on one billion Instagram images, is available on the [hub](https://huggingface.co/facebook/regnet-y-10b-seer) ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with RegNet. - [`RegNetForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## RegNetConfig [[autodoc]] RegNetConfig ## RegNetModel [[autodoc]] RegNetModel - forward ## RegNetForImageClassification [[autodoc]] RegNetForImageClassification - forward ## TFRegNetModel [[autodoc]] TFRegNetModel - call ## TFRegNetForImageClassification [[autodoc]] TFRegNetForImageClassification - call ## FlaxRegNetModel [[autodoc]] FlaxRegNetModel - __call__ ## FlaxRegNetForImageClassification [[autodoc]] FlaxRegNetForImageClassification - __call__ " model_doc/audio-spectrogram-transformer.md," # Audio Spectrogram Transformer ## Overview The Audio Spectrogram Transformer model was proposed in [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass. The Audio Spectrogram Transformer applies a [Vision Transformer](vit) to audio, by turning audio into an image (spectrogram). The model obtains state-of-the-art results for audio classification. The abstract from the paper is the following: *In the past decade, convolutional neural networks (CNNs) have been widely adopted as the main building block for end-to-end audio classification models, which aim to learn a direct mapping from audio spectrograms to corresponding labels. To better capture long-range global context, a recent trend is to add a self-attention mechanism on top of the CNN, forming a CNN-attention hybrid model. However, it is unclear whether the reliance on a CNN is necessary, and if neural networks purely based on attention are sufficient to obtain good performance in audio classification. In this paper, we answer the question by introducing the Audio Spectrogram Transformer (AST), the first convolution-free, purely attention-based model for audio classification. We evaluate AST on various audio classification benchmarks, where it achieves new state-of-the-art results of 0.485 mAP on AudioSet, 95.6% accuracy on ESC-50, and 98.1% accuracy on Speech Commands V2.* Audio Spectrogram Transformer architecture. Taken from the original paper. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/YuanGongND/ast). 
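As a rough, illustrative sketch of running audio classification with AST, the feature extractor converts a waveform into a normalized log-mel spectrogram and the classification head predicts an AudioSet label. The checkpoint name and the dummy dataset are assumptions for demonstration.

```python
# Minimal sketch: classify an audio clip with AST.
# Assumes the "MIT/ast-finetuned-audioset-10-10-0.4593" checkpoint and 16 kHz mono audio.
import torch
from datasets import load_dataset
from transformers import ASTFeatureExtractor, ASTForAudioClassification

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
waveform = dataset[0]["audio"]["array"]

feature_extractor = ASTFeatureExtractor.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593")
model = ASTForAudioClassification.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593")

# The feature extractor normalizes the log-mel spectrogram (mean 0, std 0.5 by default)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```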
## Usage tips - When fine-tuning the Audio Spectrogram Transformer (AST) on your own dataset, it's recommended to take care of the input normalization (to make sure the input has mean of 0 and std of 0.5). [`ASTFeatureExtractor`] takes care of this. Note that it uses the AudioSet mean and std by default. You can check [`ast/src/get_norm_stats.py`](https://github.com/YuanGongND/ast/blob/master/src/get_norm_stats.py) to see how the authors compute the stats for a downstream dataset. - Note that the AST needs a low learning rate (the authors use a 10 times smaller learning rate compared to their CNN model proposed in the [PSLA paper](https://arxiv.org/abs/2102.01243)) and converges quickly, so please search for a suitable learning rate and learning rate scheduler for your task. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with the Audio Spectrogram Transformer. - A notebook illustrating inference with AST for audio classification can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/AST). - [`ASTForAudioClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb). - See also: [Audio classification](../tasks/audio_classification). If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## ASTConfig [[autodoc]] ASTConfig ## ASTFeatureExtractor [[autodoc]] ASTFeatureExtractor - __call__ ## ASTModel [[autodoc]] ASTModel - forward ## ASTForAudioClassification [[autodoc]] ASTForAudioClassification - forward " model_doc/univnet.md," # UnivNet ## Overview The UnivNet model was proposed in [UnivNet: A Neural Vocoder with Multi-Resolution Spectrogram Discriminators for High-Fidelity Waveform Generation](https://arxiv.org/abs/2106.07889) by Won Jang, Dan Lim, Jaesam Yoon, Bongwan Kin, and Juntae Kim. The UnivNet model is a generative adversarial network (GAN) trained to synthesize high fidelity speech waveforms. The UnivNet model shared in `transformers` is the *generator*, which maps a conditioning log-mel spectrogram and optional noise sequence to a speech waveform (e.g. a vocoder). Only the generator is required for inference. The *discriminator* used to train the `generator` is not implemented. The abstract from the paper is the following: *Most neural vocoders employ band-limited mel-spectrograms to generate waveforms. If full-band spectral features are used as the input, the vocoder can be provided with as much acoustic information as possible. However, in some models employing full-band mel-spectrograms, an over-smoothing problem occurs as part of which non-sharp spectrograms are generated. To address this problem, we propose UnivNet, a neural vocoder that synthesizes high-fidelity waveforms in real time. Inspired by works in the field of voice activity detection, we added a multi-resolution spectrogram discriminator that employs multiple linear spectrogram magnitudes computed using various parameter sets. Using full-band mel-spectrograms as input, we expect to generate high-resolution signals by adding a discriminator that employs spectrograms of multiple resolutions as the input. 
In an evaluation on a dataset containing information on hundreds of speakers, UnivNet obtained the best objective and subjective results among competing models for both seen and unseen speakers. These results, including the best subjective score for text-to-speech, demonstrate the potential for fast adaptation to new speakers without a need for training from scratch.* Tips: - The `noise_sequence` argument for [`UnivNetModel.forward`] should be standard Gaussian noise (such as from `torch.randn`) of shape `([batch_size], noise_length, model.config.model_in_channels)`, where `noise_length` should match the length dimension (dimension 1) of the `input_features` argument. If not supplied, it will be randomly generated; a `torch.Generator` can be supplied to the `generator` argument so that the forward pass can be reproduced. (Note that [`UnivNetFeatureExtractor`] will return generated noise by default, so it shouldn't be necessary to generate `noise_sequence` manually.) - Padding added by [`UnivNetFeatureExtractor`] can be removed from the [`UnivNetModel`] output through the [`UnivNetFeatureExtractor.batch_decode`] method, as shown in the usage example below. - Padding the end of each waveform with silence can reduce artifacts at the end of the generated audio sample. This can be done by supplying `pad_end = True` to [`UnivNetFeatureExtractor.__call__`]. See [this issue](https://github.com/seungwonpark/melgan/issues/8) for more details. Usage Example: thon import torch from scipy.io.wavfile import write from datasets import Audio, load_dataset from transformers import UnivNetFeatureExtractor, UnivNetModel model_id_or_path = ""dg845/univnet-dev"" model = UnivNetModel.from_pretrained(model_id_or_path) feature_extractor = UnivNetFeatureExtractor.from_pretrained(model_id_or_path) ds = load_dataset(""hf-internal-testing/librispeech_asr_dummy"", ""clean"", split=""validation"") # Resample the audio to the model and feature extractor's sampling rate. ds = ds.cast_column(""audio"", Audio(sampling_rate=feature_extractor.sampling_rate)) # Pad the end of the converted waveforms to reduce artifacts at the end of the output audio samples. inputs = feature_extractor( ds[0][""audio""][""array""], sampling_rate=ds[0][""audio""][""sampling_rate""], pad_end=True, return_tensors=""pt"" ) with torch.no_grad(): audio = model(**inputs) # Remove the extra padding at the end of the output. audio = feature_extractor.batch_decode(**audio)[0] # Convert to wav file write(""sample_audio.wav"", feature_extractor.sampling_rate, audio) This model was contributed by [dg845](https://huggingface.co/dg845). To the best of my knowledge, there is no official code release, but an unofficial implementation can be found at [maum-ai/univnet](https://github.com/maum-ai/univnet) with pretrained checkpoints [here](https://github.com/maum-ai/univnet#pre-trained-model). ## UnivNetConfig [[autodoc]] UnivNetConfig ## UnivNetFeatureExtractor [[autodoc]] UnivNetFeatureExtractor - __call__ ## UnivNetModel [[autodoc]] UnivNetModel - forward" model_doc/llama.md," # LLaMA ## Overview The LLaMA model was proposed in [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971) by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample. It is a collection of foundation language models ranging from 7B to 65B parameters. 
The abstract from the paper is the following: *We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community. * This model was contributed by [zphang](https://huggingface.co/zphang) with contributions from [BlackSamorez](https://huggingface.co/BlackSamorez). The code of the implementation in Hugging Face is based on GPT-NeoX [here](https://github.com/EleutherAI/gpt-neox). The original code of the authors can be found [here](https://github.com/facebookresearch/llama). ## Usage tips - Weights for the LLaMA models can be obtained from by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) - After downloading the weights, they will need to be converted to the Hugging Face Transformers format using the [conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py). The script can be called with the following (example) command: ```bash python src/transformers/models/llama/convert_llama_weights_to_hf.py \ --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path - After conversion, the model and tokenizer can be loaded via: thon from transformers import LlamaForCausalLM, LlamaTokenizer tokenizer = LlamaTokenizer.from_pretrained(""/output/path"") model = LlamaForCausalLM.from_pretrained(""/output/path"") Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even if the biggest versions come in several checkpoints they each contain a part of each weight of the model, so we need to load them all in RAM). For the 65B model, it's thus 130GB of RAM needed. - The LLaMA tokenizer is a BPE model based on [sentencepiece](https://github.com/google/sentencepiece). One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. ""Banana""), the tokenizer does not prepend the prefix space to the string. Based on the original LLaMA model, Meta AI has released some follow-up works: - **Llama2**: Llama2 is an improved version of Llama with some architectural tweaks (Grouped Query Attention), and is pre-trained on 2Trillion tokens. Refer to the documentation of Llama2 which can be found [here](llama2). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LLaMA. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - A [notebook](https://colab.research.google.com/github/bigscience-workshop/petals/blob/main/examples/prompt-tuning-sst2.ipynb#scrollTo=f04ba4d2) on how to use prompt tuning to adapt the LLaMA model for text classification task. 
🌎 - [StackLLaMA: A hands-on guide to train LLaMA with RLHF](https://huggingface.co/blog/stackllama#stackllama-a-hands-on-guide-to-train-llama-with-rlhf), a blog post about how to train LLaMA to answer questions on [Stack Exchange](https://stackexchange.com/) with RLHF. ⚗️ Optimization - A [notebook](https://colab.research.google.com/drive/1SQUXq1AMZPSLD4mk3A3swUIc6Y2dclme?usp=sharing) on how to fine-tune LLaMA model using xturing library on GPU which has limited memory. 🌎 ⚡️ Inference - A [notebook](https://colab.research.google.com/github/DominguesM/alpaca-lora-ptbr-7b/blob/main/notebooks/02%20-%20Evaluate.ipynb) on how to run the LLaMA Model using PeftModel from the 🤗 PEFT library. 🌎 - A [notebook](https://colab.research.google.com/drive/1l2GiSSPbajVyp2Nk3CFT4t3uH6-5TiBe?usp=sharing) on how to load a PEFT adapter LLaMA model with LangChain. 🌎 🚀 Deploy - A [notebook](https://colab.research.google.com/github/lxe/simple-llama-finetuner/blob/master/Simple_LLaMA_FineTuner.ipynb#scrollTo=3PM_DilAZD8T) on how to fine-tune LLaMA model using LoRA method via the 🤗 PEFT library with intuitive UI. 🌎 - A [notebook](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart-foundation-models/text-generation-open-llama.ipynb) on how to deploy Open-LLaMA model for text generation on Amazon SageMaker. 🌎 ## LlamaConfig [[autodoc]] LlamaConfig ## LlamaTokenizer [[autodoc]] LlamaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## LlamaTokenizerFast [[autodoc]] LlamaTokenizerFast - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - update_post_processor - save_vocabulary ## LlamaModel [[autodoc]] LlamaModel - forward ## LlamaForCausalLM [[autodoc]] LlamaForCausalLM - forward ## LlamaForSequenceClassification [[autodoc]] LlamaForSequenceClassification - forward " model_doc/qdqbert.md," # QDQBERT ## Overview The QDQBERT model can be referenced in [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. The abstract from the paper is the following: *Quantization techniques can reduce the size of Deep Neural Networks and improve inference latency and throughput by taking advantage of high throughput integer instructions. In this paper we review the mathematical aspects of quantization parameters and evaluate their choices on a wide range of neural network models for different application domains, including vision, speech, and language. We focus on quantization techniques that are amenable to acceleration by processors with high-throughput integer math pipelines. We also present a workflow for 8-bit quantization that is able to maintain accuracy within 1% of the floating-point baseline on all networks studied, including models that are more difficult to quantize, such as MobileNets and BERT-large.* This model was contributed by [shangz](https://huggingface.co/shangz). ## Usage tips - QDQBERT model adds fake quantization operations (pair of QuantizeLinear/DequantizeLinear ops) to (i) linear layer inputs and weights, (ii) matmul inputs, (iii) residual add inputs, in BERT model. - QDQBERT requires the dependency of [Pytorch Quantization Toolkit](https://github.com/NVIDIA/TensorRT/tree/master/tools/pytorch-quantization). 
To install `pip install pytorch-quantization --extra-index-url https://pypi.ngc.nvidia.com` - QDQBERT model can be loaded from any checkpoint of HuggingFace BERT model (for example *bert-base-uncased*), and perform Quantization Aware Training/Post Training Quantization. - A complete example of using QDQBERT model to perform Quatization Aware Training and Post Training Quantization for SQUAD task can be found at [transformers/examples/research_projects/quantization-qdqbert/](examples/research_projects/quantization-qdqbert/). ### Set default quantizers QDQBERT model adds fake quantization operations (pair of QuantizeLinear/DequantizeLinear ops) to BERT by `TensorQuantizer` in [Pytorch Quantization Toolkit](https://github.com/NVIDIA/TensorRT/tree/master/tools/pytorch-quantization). `TensorQuantizer` is the module for quantizing tensors, with `QuantDescriptor` defining how the tensor should be quantized. Refer to [Pytorch Quantization Toolkit userguide](https://docs.nvidia.com/deeplearning/tensorrt/pytorch-quantization-toolkit/docs/userguide.html) for more details. Before creating QDQBERT model, one has to set the default `QuantDescriptor` defining default tensor quantizers. Example: thon >>> import pytorch_quantization.nn as quant_nn >>> from pytorch_quantization.tensor_quant import QuantDescriptor >>> # The default tensor quantizer is set to use Max calibration method >>> input_desc = QuantDescriptor(num_bits=8, calib_method=""max"") >>> # The default tensor quantizer is set to be per-channel quantization for weights >>> weight_desc = QuantDescriptor(num_bits=8, axis=((0,))) >>> quant_nn.QuantLinear.set_default_quant_desc_input(input_desc) >>> quant_nn.QuantLinear.set_default_quant_desc_weight(weight_desc) ### Calibration Calibration is the terminology of passing data samples to the quantizer and deciding the best scaling factors for tensors. After setting up the tensor quantizers, one can use the following example to calibrate the model: thon >>> # Find the TensorQuantizer and enable calibration >>> for name, module in model.named_modules(): if name.endswith(""_input_quantizer""): module.enable_calib() module.disable_quant() # Use full precision data to calibrate >>> # Feeding data samples >>> model(x) >>> # >>> # Finalize calibration >>> for name, module in model.named_modules(): if name.endswith(""_input_quantizer""): module.load_calib_amax() module.enable_quant() >>> # If running on GPU, it needs to call .cuda() again because new tensors will be created by calibration process >>> model.cuda() >>> # Keep running the quantized model >>> # ### Export to ONNX The goal of exporting to ONNX is to deploy inference by [TensorRT](https://developer.nvidia.com/tensorrt). Fake quantization will be broken into a pair of QuantizeLinear/DequantizeLinear ONNX ops. After setting static member of TensorQuantizer to use Pytorch’s own fake quantization functions, fake quantized model can be exported to ONNX, follow the instructions in [torch.onnx](https://pytorch.org/docs/stable/onnx.html). 
Example: thon >>> from pytorch_quantization.nn import TensorQuantizer >>> TensorQuantizer.use_fb_fake_quant = True >>> # Load the calibrated model >>> >>> # ONNX export >>> torch.onnx.export() ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## QDQBertConfig [[autodoc]] QDQBertConfig ## QDQBertModel [[autodoc]] QDQBertModel - forward ## QDQBertLMHeadModel [[autodoc]] QDQBertLMHeadModel - forward ## QDQBertForMaskedLM [[autodoc]] QDQBertForMaskedLM - forward ## QDQBertForSequenceClassification [[autodoc]] QDQBertForSequenceClassification - forward ## QDQBertForNextSentencePrediction [[autodoc]] QDQBertForNextSentencePrediction - forward ## QDQBertForMultipleChoice [[autodoc]] QDQBertForMultipleChoice - forward ## QDQBertForTokenClassification [[autodoc]] QDQBertForTokenClassification - forward ## QDQBertForQuestionAnswering [[autodoc]] QDQBertForQuestionAnswering - forward " model_doc/bigbird_pegasus.md," # BigBirdPegasus ## Overview The BigBird model was proposed in [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon, Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and others. BigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. In addition to sparse attention, BigBird also applies global attention as well as random attention to the input sequence. Theoretically, it has been shown that applying sparse, global, and random attention approximates full attention, while being computationally much more efficient for longer sequences. As a consequence of the capability to handle longer context, BigBird has shown improved performance on various long document NLP tasks, such as question answering and summarization, compared to BERT or RoBERTa. The abstract from the paper is the following: *Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.* The original code can be found [here](https://github.com/google-research/bigbird). 
## Usage tips - For an in-detail explanation on how BigBird's attention works, see [this blog post](https://huggingface.co/blog/big-bird). - BigBird comes with 2 implementations: **original_full** & **block_sparse**. For the sequence length < 1024, using **original_full** is advised as there is no benefit in using **block_sparse** attention. - The code currently uses window size of 3 blocks and 2 global blocks. - Sequence length must be divisible by block size. - Current implementation supports only **ITC**. - Current implementation doesn't support **num_random_blocks = 0**. - BigBirdPegasus uses the [PegasusTokenizer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/pegasus/tokenization_pegasus.py). - BigBird is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## BigBirdPegasusConfig [[autodoc]] BigBirdPegasusConfig - all ## BigBirdPegasusModel [[autodoc]] BigBirdPegasusModel - forward ## BigBirdPegasusForConditionalGeneration [[autodoc]] BigBirdPegasusForConditionalGeneration - forward ## BigBirdPegasusForSequenceClassification [[autodoc]] BigBirdPegasusForSequenceClassification - forward ## BigBirdPegasusForQuestionAnswering [[autodoc]] BigBirdPegasusForQuestionAnswering - forward ## BigBirdPegasusForCausalLM [[autodoc]] BigBirdPegasusForCausalLM - forward " model_doc/git.md," # GIT ## Overview The GIT model was proposed in [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang. GIT is a decoder-only Transformer that leverages [CLIP](clip)'s vision encoder to condition the model on vision inputs besides text. The model obtains state-of-the-art results on image captioning and visual question answering benchmarks. The abstract from the paper is the following: *In this paper, we design and train a Generative Image-to-text Transformer, GIT, to unify vision-language tasks such as image/video captioning and question answering. While generative models provide a consistent network architecture between pre-training and fine-tuning, existing work typically contains complex structures (uni/multi-modal encoder/decoder) and depends on external modules such as object detectors/taggers and optical character recognition (OCR). In GIT, we simplify the architecture as one image encoder and one text decoder under a single language modeling task. We also scale up the pre-training data and the model size to boost the model performance. Without bells and whistles, our GIT establishes new state of the arts on 12 challenging benchmarks with a large margin. For instance, our model surpasses the human performance for the first time on TextCaps (138.2 vs. 125.5 in CIDEr). Furthermore, we present a new scheme of generation-based image classification and scene text recognition, achieving decent performance on standard benchmarks.* GIT architecture. Taken from the original paper. This model was contributed by [nielsr](https://huggingface.co/nielsr). 
The original code can be found [here](https://github.com/microsoft/GenerativeImage2Text). ## Usage tips - GIT is implemented in a very similar way to GPT-2, the only difference being that the model is also conditioned on `pixel_values`. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GIT. - Demo notebooks regarding inference + fine-tuning GIT on custom data can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/GIT). - See also: [Causal language modeling task guide](../tasks/language_modeling) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource. ## GitVisionConfig [[autodoc]] GitVisionConfig ## GitVisionModel [[autodoc]] GitVisionModel - forward ## GitConfig [[autodoc]] GitConfig - all ## GitProcessor [[autodoc]] GitProcessor - __call__ ## GitModel [[autodoc]] GitModel - forward ## GitForCausalLM [[autodoc]] GitForCausalLM - forward" model_doc/plbart.md," # PLBart ## Overview The PLBART model was proposed in [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. This is a BART-like model which can be used to perform code-summarization, code-generation, and code-translation tasks. The pre-trained model `plbart-base` has been trained using multilingual denoising task on Java, Python and English. According to the abstract *Code summarization and generation empower conversion between programming language (PL) and natural language (NL), while code translation avails the migration of legacy code from one PL to another. This paper introduces PLBART, a sequence-to-sequence model capable of performing a broad spectrum of program and language understanding and generation tasks. PLBART is pre-trained on an extensive collection of Java and Python functions and associated NL text via denoising autoencoding. Experiments on code summarization in the English language, code generation, and code translation in seven programming languages show that PLBART outperforms or rivals state-of-the-art models. Moreover, experiments on discriminative tasks, e.g., program repair, clone detection, and vulnerable code detection, demonstrate PLBART's effectiveness in program understanding. Furthermore, analysis reveals that PLBART learns program syntax, style (e.g., identifier naming convention), logical flow (e.g., if block inside an else block is equivalent to else if block) that are crucial to program semantics and thus excels even with limited annotations.* This model was contributed by [gchhablani](https://huggingface.co/gchhablani). The Authors' code can be found [here](https://github.com/wasiahmad/PLBART). ## Usage examples PLBart is a multilingual encoder-decoder (sequence-to-sequence) model primarily intended for code-to-text, text-to-code, code-to-code tasks. As the model is multilingual it expects the sequences in a different format. A special language id token is added in both the source and target text. The source text format is `X [eos, src_lang_code]` where `X` is the source text. The target text format is `[tgt_lang_code] X [eos]`. `bos` is never used. However, for fine-tuning, in some cases no language token is provided in cases where a single language is used. 
Please refer to [the paper](https://arxiv.org/abs/2103.06333) to learn more about this. In cases where the language code is needed, the regular [`~PLBartTokenizer.__call__`] will encode source text format when you pass texts as the first argument or with the keyword argument `text`, and will encode target text format if it's passed with the `text_target` keyword argument. ### Supervised training thon >>> from transformers import PLBartForConditionalGeneration, PLBartTokenizer >>> tokenizer = PLBartTokenizer.from_pretrained(""uclanlp/plbart-base"", src_lang=""en_XX"", tgt_lang=""python"") >>> example_python_phrase = ""def maximum(a,b,c):NEW_LINE_INDENTreturn max([a,b,c])"" >>> expected_translation_english = ""Returns the maximum value of a b c."" >>> inputs = tokenizer(example_python_phrase, text_target=expected_translation_english, return_tensors=""pt"") >>> model(**inputs) ### Generation While generating the target text set the `decoder_start_token_id` to the target language id. The following example shows how to translate Python to English using the `uclanlp/plbart-python-en_XX` model. thon >>> from transformers import PLBartForConditionalGeneration, PLBartTokenizer >>> tokenizer = PLBartTokenizer.from_pretrained(""uclanlp/plbart-python-en_XX"", src_lang=""python"", tgt_lang=""en_XX"") >>> example_python_phrase = ""def maximum(a,b,c):NEW_LINE_INDENTreturn max([a,b,c])"" >>> inputs = tokenizer(example_python_phrase, return_tensors=""pt"") >>> model = PLBartForConditionalGeneration.from_pretrained(""uclanlp/plbart-python-en_XX"") >>> translated_tokens = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id[""en_XX""]) >>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] ""Returns the maximum value of a b c."" ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Causal language modeling task guide](../tasks/language_modeling) - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## PLBartConfig [[autodoc]] PLBartConfig ## PLBartTokenizer [[autodoc]] PLBartTokenizer - build_inputs_with_special_tokens ## PLBartModel [[autodoc]] PLBartModel - forward ## PLBartForConditionalGeneration [[autodoc]] PLBartForConditionalGeneration - forward ## PLBartForSequenceClassification [[autodoc]] PLBartForSequenceClassification - forward ## PLBartForCausalLM [[autodoc]] PLBartForCausalLM - forward" model_doc/splinter.md," # Splinter ## Overview The Splinter model was proposed in [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. Splinter is an encoder-only transformer (similar to BERT) pretrained using the recurring span selection task on a large corpus comprising Wikipedia and the Toronto Book Corpus. The abstract from the paper is the following: In several question answering benchmarks, pretrained models have reached human parity through fine-tuning on an order of 100,000 annotated questions and answers. We explore the more realistic few-shot setting, where only a few hundred training examples are available, and observe that standard models perform poorly, highlighting the discrepancy between current pretraining objectives and question answering. We propose a new pretraining scheme tailored for question answering: recurring span selection. 
Given a passage with multiple sets of recurring spans, we mask in each set all recurring spans but one, and ask the model to select the correct span in the passage for each masked span. Masked spans are replaced with a special token, viewed as a question representation, that is later used during fine-tuning to select the answer span. The resulting model obtains surprisingly good results on multiple benchmarks (e.g., 72.7 F1 on SQuAD with only 128 training examples), while maintaining competitive performance in the high-resource setting. This model was contributed by [yuvalkirstain](https://huggingface.co/yuvalkirstain) and [oriram](https://huggingface.co/oriram). The original code can be found [here](https://github.com/oriram/splinter). ## Usage tips - Splinter was trained to predict answers spans conditioned on a special [QUESTION] token. These tokens contextualize to question representations which are used to predict the answers. This layer is called QASS, and is the default behaviour in the [`SplinterForQuestionAnswering`] class. Therefore: - Use [`SplinterTokenizer`] (rather than [`BertTokenizer`]), as it already contains this special token. Also, its default behavior is to use this token when two sequences are given (for example, in the *run_qa.py* script). - If you plan on using Splinter outside *run_qa.py*, please keep in mind the question token - it might be important for the success of your model, especially in a few-shot setting. - Please note there are two different checkpoints for each size of Splinter. Both are basically the same, except that one also has the pretrained weights of the QASS layer (*tau/splinter-base-qass* and *tau/splinter-large-qass*) and one doesn't (*tau/splinter-base* and *tau/splinter-large*). This is done to support randomly initializing this layer at fine-tuning, as it is shown to yield better results for some cases in the paper. ## Resources - [Question answering task guide](../tasks/question-answering) ## SplinterConfig [[autodoc]] SplinterConfig ## SplinterTokenizer [[autodoc]] SplinterTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## SplinterTokenizerFast [[autodoc]] SplinterTokenizerFast ## SplinterModel [[autodoc]] SplinterModel - forward ## SplinterForQuestionAnswering [[autodoc]] SplinterForQuestionAnswering - forward ## SplinterForPreTraining [[autodoc]] SplinterForPreTraining - forward " model_doc/deit.md," # DeiT ## Overview The DeiT model was proposed in [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou. The [Vision Transformer (ViT)](vit) introduced in [Dosovitskiy et al., 2020](https://arxiv.org/abs/2010.11929) has shown that one can match or even outperform existing convolutional neural networks using a Transformer encoder (BERT-like). However, the ViT models introduced in that paper required training on expensive infrastructure for multiple weeks, using external data. DeiT (data-efficient image transformers) are more efficiently trained transformers for image classification, requiring far less data and far less computing resources compared to the original ViT models. The abstract from the paper is the following: *Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. 
However, these visual transformers are pre-trained with hundreds of millions of images using an expensive infrastructure, thereby limiting their adoption. In this work, we produce a competitive convolution-free transformer by training on Imagenet only. We train them on a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop evaluation) on ImageNet with no external data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention. We show the interest of this token-based distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets for both Imagenet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and models.* This model was contributed by [nielsr](https://huggingface.co/nielsr). The TensorFlow version of this model was added by [amyeroberts](https://huggingface.co/amyeroberts). ## Usage tips - Compared to ViT, DeiT models use a so-called distillation token to effectively learn from a teacher (which, in the DeiT paper, is a ResNet like-model). The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers. - There are 2 ways to fine-tune distilled models, either (1) in a classic way, by only placing a prediction head on top of the final hidden state of the class token and not using the distillation signal, or (2) by placing both a prediction head on top of the class token and on top of the distillation token. In that case, the [CLS] prediction head is trained using regular cross-entropy between the prediction of the head and the ground-truth label, while the distillation prediction head is trained using hard distillation (cross-entropy between the prediction of the distillation head and the label predicted by the teacher). At inference time, one takes the average prediction between both heads as final prediction. (2) is also called ""fine-tuning with distillation"", because one relies on a teacher that has already been fine-tuned on the downstream dataset. In terms of models, (1) corresponds to [`DeiTForImageClassification`] and (2) corresponds to [`DeiTForImageClassificationWithTeacher`]. - Note that the authors also did try soft distillation for (2) (in which case the distillation prediction head is trained using KL divergence to match the softmax output of the teacher), but hard distillation gave the best results. - All released checkpoints were pre-trained and fine-tuned on ImageNet-1k only. No external data was used. This is in contrast with the original ViT model, which used external data like the JFT-300M dataset/Imagenet-21k for pre-training. - The authors of DeiT also released more efficiently trained ViT models, which you can directly plug into [`ViTModel`] or [`ViTForImageClassification`]. Techniques like data augmentation, optimization, and regularization were used in order to simulate training on a much larger dataset (while only using ImageNet-1k for pre-training). There are 4 variants available (in 3 different sizes): *facebook/deit-tiny-patch16-224*, *facebook/deit-small-patch16-224*, *facebook/deit-base-patch16-224* and *facebook/deit-base-patch16-384*. Note that one should use [`DeiTImageProcessor`] in order to prepare images for the model. 
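To make the tips above concrete, here is a minimal inference sketch using [`DeiTImageProcessor`] together with [`DeiTForImageClassificationWithTeacher`]. The checkpoint name and example image URL are illustrative assumptions, not requirements:

```python
import requests
import torch
from PIL import Image
from transformers import DeiTImageProcessor, DeiTForImageClassificationWithTeacher

# Illustrative checkpoint: a distilled DeiT-base model fine-tuned on ImageNet-1k
checkpoint = "facebook/deit-base-distilled-patch16-224"
image_processor = DeiTImageProcessor.from_pretrained(checkpoint)
model = DeiTForImageClassificationWithTeacher.from_pretrained(checkpoint)

# Example image (any RGB image works)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# DeiTImageProcessor resizes/rescales and normalizes the image for the model
inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # average of the [CLS] and distillation head predictions

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```

For the non-distilled checkpoints, [`DeiTForImageClassification`] can be swapped in with the same pre-processing.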
## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DeiT. - [`DeiTForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) Besides that: - [`DeiTForMaskedImageModeling`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining). If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## DeiTConfig [[autodoc]] DeiTConfig ## DeiTFeatureExtractor [[autodoc]] DeiTFeatureExtractor - __call__ ## DeiTImageProcessor [[autodoc]] DeiTImageProcessor - preprocess ## DeiTModel [[autodoc]] DeiTModel - forward ## DeiTForMaskedImageModeling [[autodoc]] DeiTForMaskedImageModeling - forward ## DeiTForImageClassification [[autodoc]] DeiTForImageClassification - forward ## DeiTForImageClassificationWithTeacher [[autodoc]] DeiTForImageClassificationWithTeacher - forward ## TFDeiTModel [[autodoc]] TFDeiTModel - call ## TFDeiTForMaskedImageModeling [[autodoc]] TFDeiTForMaskedImageModeling - call ## TFDeiTForImageClassification [[autodoc]] TFDeiTForImageClassification - call ## TFDeiTForImageClassificationWithTeacher [[autodoc]] TFDeiTForImageClassificationWithTeacher - call " model_doc/deformable_detr.md," # Deformable DETR ## Overview The Deformable DETR model was proposed in [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai. Deformable DETR mitigates the slow convergence issues and limited feature spatial resolution of the original [DETR](detr) by leveraging a new deformable attention module which only attends to a small set of key sampling points around a reference. The abstract from the paper is the following: *DETR has been recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitation of Transformer attention modules in processing image feature maps. To mitigate these issues, we proposed Deformable DETR, whose attention modules only attend to a small set of key sampling points around a reference. Deformable DETR can achieve better performance than DETR (especially on small objects) with 10 times less training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach.* Deformable DETR architecture. Taken from the original paper. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/fundamentalvision/Deformable-DETR). ## Usage tips - Training Deformable DETR is equivalent to training the original [DETR](detr) model. See the [resources](#resources) section below for demo notebooks. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Deformable DETR. 
- Demo notebooks regarding inference + fine-tuning on a custom dataset for [`DeformableDetrForObjectDetection`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Deformable-DETR). - See also: [Object detection task guide](../tasks/object_detection). If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## DeformableDetrImageProcessor [[autodoc]] DeformableDetrImageProcessor - preprocess - post_process_object_detection ## DeformableDetrFeatureExtractor [[autodoc]] DeformableDetrFeatureExtractor - __call__ - post_process_object_detection ## DeformableDetrConfig [[autodoc]] DeformableDetrConfig ## DeformableDetrModel [[autodoc]] DeformableDetrModel - forward ## DeformableDetrForObjectDetection [[autodoc]] DeformableDetrForObjectDetection - forward " model_doc/vit.md," # Vision Transformer (ViT) ## Overview The Vision Transformer (ViT) model was proposed in [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. It's the first paper that successfully trains a Transformer encoder on ImageNet, attaining very good results compared to familiar convolutional architectures. The abstract from the paper is the following: *While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.* ViT architecture. Taken from the original paper. Following the original Vision Transformer, some follow-up works have been made: - [DeiT](deit) (Data-efficient Image Transformers) by Facebook AI. DeiT models are distilled vision transformers. The authors of DeiT also released more efficiently trained ViT models, which you can directly plug into [`ViTModel`] or [`ViTForImageClassification`]. There are 4 variants available (in 3 different sizes): *facebook/deit-tiny-patch16-224*, *facebook/deit-small-patch16-224*, *facebook/deit-base-patch16-224* and *facebook/deit-base-patch16-384*. Note that one should use [`DeiTImageProcessor`] in order to prepare images for the model. - [BEiT](beit) (BERT pre-training of Image Transformers) by Microsoft Research. BEiT models outperform supervised pre-trained vision transformers using a self-supervised method inspired by BERT (masked image modeling) and based on a VQ-VAE. - DINO (a method for self-supervised training of Vision Transformers) by Facebook AI. 
Vision Transformers trained using the DINO method show very interesting properties not seen with convolutional models. They are capable of segmenting objects, without having ever been trained to do so. DINO checkpoints can be found on the [hub](https://huggingface.co/models?other=dino). - [MAE](vit_mae) (Masked Autoencoders) by Facebook AI. By pre-training Vision Transformers to reconstruct pixel values for a high portion (75%) of masked patches (using an asymmetric encoder-decoder architecture), the authors show that this simple method outperforms supervised pre-training after fine-tuning. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code (written in JAX) can be found [here](https://github.com/google-research/vision_transformer). Note that we converted the weights from Ross Wightman's [timm library](https://github.com/rwightman/pytorch-image-models), who already converted the weights from JAX to PyTorch. Credits go to him! ## Usage tips - To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches, which are then linearly embedded. A [CLS] token is added to serve as representation of an entire image, which can be used for classification. The authors also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. - As the Vision Transformer expects each image to be of the same size (resolution), one can use [`ViTImageProcessor`] to resize (or rescale) and normalize images for the model. - Both the patch resolution and image resolution used during pre-training or fine-tuning are reflected in the name of each checkpoint. For example, `google/vit-base-patch16-224` refers to a base-sized architecture with patch resolution of 16x16 and fine-tuning resolution of 224x224. All checkpoints can be found on the [hub](https://huggingface.co/models?search=vit). - The available checkpoints are either (1) pre-trained on [ImageNet-21k](http://www.image-net.org/) (a collection of 14 million images and 21k classes) only, or (2) also fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/) (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). - The Vision Transformer was pre-trained using a resolution of 224x224. During fine-tuning, it is often beneficial to use a higher resolution than pre-training [(Touvron et al., 2019)](https://arxiv.org/abs/1906.06423), [(Kolesnikov et al., 2020)](https://arxiv.org/abs/1912.11370). In order to fine-tune at higher resolution, the authors perform 2D interpolation of the pre-trained position embeddings, according to their location in the original image. - The best results are obtained with supervised pre-training, which is not the case in NLP. The authors also performed an experiment with a self-supervised pre-training objective, namely masked patched prediction (inspired by masked language modeling). With this approach, the smaller ViT-B/16 model achieves 79.9% accuracy on ImageNet, a significant improvement of 2% to training from scratch, but still 4% behind supervised pre-training. ## Resources Demo notebooks regarding inference as well as fine-tuning ViT on custom data can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/VisionTransformer). A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT. 
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. `ViTForImageClassification` is supported by: - A blog post on how to [Fine-Tune ViT for Image Classification with Hugging Face Transformers](https://huggingface.co/blog/fine-tune-vit) - A blog post on [Image Classification with Hugging Face Transformers and `Keras`](https://www.philschmid.de/image-classification-huggingface-transformers-keras) - A notebook on [Fine-tuning for Image Classification with Hugging Face Transformers](https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb) - A notebook on how to [Fine-tune the Vision Transformer on CIFAR-10 with the Hugging Face Trainer](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) - A notebook on how to [Fine-tune the Vision Transformer on CIFAR-10 with PyTorch Lightning](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) ⚗️ Optimization - A blog post on how to [Accelerate Vision Transformer (ViT) with Quantization using Optimum](https://www.philschmid.de/optimizing-vision-transformer) ⚡️ Inference - A notebook on [Quick demo: Vision Transformer (ViT) by Google Brain](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Quick_demo_of_HuggingFace_version_of_Vision_Transformer_inference.ipynb) 🚀 Deploy - A blog post on [Deploying Tensorflow Vision Models in Hugging Face with TF Serving](https://huggingface.co/blog/tf-serving-vision) - A blog post on [Deploying Hugging Face ViT on Vertex AI](https://huggingface.co/blog/deploy-vertex-ai) - A blog post on [Deploying Hugging Face ViT on Kubernetes with TF Serving](https://huggingface.co/blog/deploy-tfserving-kubernetes) ## ViTConfig [[autodoc]] ViTConfig ## ViTFeatureExtractor [[autodoc]] ViTFeatureExtractor - __call__ ## ViTImageProcessor [[autodoc]] ViTImageProcessor - preprocess ## ViTModel [[autodoc]] ViTModel - forward ## ViTForMaskedImageModeling [[autodoc]] ViTForMaskedImageModeling - forward ## ViTForImageClassification [[autodoc]] ViTForImageClassification - forward ## TFViTModel [[autodoc]] TFViTModel - call ## TFViTForImageClassification [[autodoc]] TFViTForImageClassification - call ## FlaxVitModel [[autodoc]] FlaxViTModel - __call__ ## FlaxViTForImageClassification [[autodoc]] FlaxViTForImageClassification - __call__ " model_doc/musicgen.md," # MusicGen ## Overview The MusicGen model was proposed in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi and Alexandre Défossez. MusicGen is a single stage auto-regressive Transformer model capable of generating high-quality music samples conditioned on text descriptions or audio prompts. The text descriptions are passed through a frozen text encoder model to obtain a sequence of hidden-state representations. MusicGen is then trained to predict discrete audio tokens, or *audio codes*, conditioned on these hidden-states. These audio tokens are then decoded using an audio compression model, such as EnCodec, to recover the audio waveform. 
Through an efficient token interleaving pattern, MusicGen does not require a self-supervised semantic representation of the text/audio prompts, thus eliminating the need to cascade multiple models to predict a set of codebooks (e.g. hierarchically or upsampling). Instead, it is able to generate all the codebooks in a single forward pass. The abstract from the paper is the following: *We tackle the task of conditional music generation. We introduce MusicGen, a single Language Model (LM) that operates over several streams of compressed discrete music representation, i.e., tokens. Unlike prior work, MusicGen is comprised of a single-stage transformer LM together with efficient token interleaving patterns, which eliminates the need for cascading several models, e.g., hierarchically or upsampling. Following this approach, we demonstrate how MusicGen can generate high-quality samples, while being conditioned on textual description or melodic features, allowing better controls over the generated output. We conduct extensive empirical evaluation, considering both automatic and human studies, showing the proposed approach is superior to the evaluated baselines on a standard text-to-music benchmark. Through ablation studies, we shed light over the importance of each of the components comprising MusicGen.* This model was contributed by [sanchit-gandhi](https://huggingface.co/sanchit-gandhi). The original code can be found [here](https://github.com/facebookresearch/audiocraft). The pre-trained checkpoints can be found on the [Hugging Face Hub](https://huggingface.co/models?sort=downloads&search=facebook%2Fmusicgen-). ## Usage tips - After downloading the original checkpoints from [here](https://github.com/facebookresearch/audiocraft/blob/main/docs/MUSICGEN.md#importing--exporting-models) , you can convert them using the **conversion script** available at `src/transformers/models/musicgen/convert_musicgen_transformers.py` with the following command: ```bash python src/transformers/models/musicgen/convert_musicgen_transformers.py \ --checkpoint small --pytorch_dump_folder /output/path --safe_serialization ## Generation MusicGen is compatible with two generation modes: greedy and sampling. In practice, sampling leads to significantly better results than greedy, thus we encourage sampling mode to be used where possible. Sampling is enabled by default, and can be explicitly specified by setting `do_sample=True` in the call to [`MusicgenForConditionalGeneration.generate`], or by overriding the model's generation config (see below). Generation is limited by the sinusoidal positional embeddings to 30 second inputs. Meaning, MusicGen cannot generate more than 30 seconds of audio (1503 tokens), and input audio passed by Audio-Prompted Generation contributes to this limit so, given an input of 20 seconds of audio, MusicGen cannot generate more than 10 seconds of additional audio. Transformers supports both mono (1-channel) and stereo (2-channel) variants of MusicGen. The mono channel versions generate a single set of codebooks. The stereo versions generate 2 sets of codebooks, 1 for each channel (left/right), and each set of codebooks is decoded independently through the audio compression model. The audio streams for each channel are combined to give the final stereo output. 
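As a rough sketch of how the 30-second limit translates into generation arguments, the snippet below budgets `max_new_tokens` from a target duration using the ratio quoted above (1503 tokens for roughly 30 seconds, i.e. about 50 audio tokens per generated second). The ratio is an approximation derived from those numbers, not an official API guarantee:

```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# Approximate budget derived from the limit above: 1503 tokens ~ 30 s of audio
target_seconds = 15
max_new_tokens = int(target_seconds * 1503 / 30)  # ~50 tokens per generated second

inputs = processor(
    text=["upbeat electronic track with a driving bassline"],
    padding=True,
    return_tensors="pt",
)
# Sampling mode (the default) generally gives better results than greedy decoding
audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=max_new_tokens)
```

The same budgeting applies to the text-conditional and audio-prompted examples that follow.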
### Unconditional Generation

The inputs for unconditional (or 'null') generation can be obtained through the method [`MusicgenForConditionalGeneration.get_unconditional_inputs`]:

```python
>>> from transformers import MusicgenForConditionalGeneration

>>> model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
>>> unconditional_inputs = model.get_unconditional_inputs(num_samples=1)

>>> audio_values = model.generate(**unconditional_inputs, do_sample=True, max_new_tokens=256)
```

The audio outputs are a three-dimensional Torch tensor of shape `(batch_size, num_channels, sequence_length)`. To listen to the generated audio samples, you can either play them in an ipynb notebook:

```python
from IPython.display import Audio

sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
```

Or save them as a `.wav` file using a third-party library, e.g. `scipy`:

```python
>>> import scipy

>>> sampling_rate = model.config.audio_encoder.sampling_rate
>>> scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```

### Text-Conditional Generation

The model can generate an audio sample conditioned on a text prompt through use of the [`MusicgenProcessor`] to pre-process the inputs:

```python
>>> from transformers import AutoProcessor, MusicgenForConditionalGeneration

>>> processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
>>> model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

>>> inputs = processor(
...     text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
...     padding=True,
...     return_tensors="pt",
... )
>>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
```

The `guidance_scale` is used in classifier free guidance (CFG), setting the weighting between the conditional logits (which are predicted from the text prompts) and the unconditional logits (which are predicted from an unconditional or 'null' prompt). Higher guidance scale encourages the model to generate samples that are more closely linked to the input prompt, usually at the expense of poorer audio quality. CFG is enabled by setting `guidance_scale > 1`. For best results, use `guidance_scale=3` (default).

### Audio-Prompted Generation

The same [`MusicgenProcessor`] can be used to pre-process an audio prompt that is used for audio continuation.
In the following example, we load an audio file using the 🤗 Datasets library, which can be pip installed through the command below: pip install --upgrade pip pip install datasets[audio] thon >>> from transformers import AutoProcessor, MusicgenForConditionalGeneration >>> from datasets import load_dataset >>> processor = AutoProcessor.from_pretrained(""facebook/musicgen-small"") >>> model = MusicgenForConditionalGeneration.from_pretrained(""facebook/musicgen-small"") >>> dataset = load_dataset(""sanchit-gandhi/gtzan"", split=""train"", streaming=True) >>> sample = next(iter(dataset))[""audio""] >>> # take the first half of the audio sample >>> sample[""array""] = sample[""array""][: len(sample[""array""]) // 2] >>> inputs = processor( audio=sample[""array""], sampling_rate=sample[""sampling_rate""], text=[""80s blues track with groovy saxophone""], padding=True, return_tensors=""pt"", ) >>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256) For batched audio-prompted generation, the generated `audio_values` can be post-processed to remove padding by using the [`MusicgenProcessor`] class: thon >>> from transformers import AutoProcessor, MusicgenForConditionalGeneration >>> from datasets import load_dataset >>> processor = AutoProcessor.from_pretrained(""facebook/musicgen-small"") >>> model = MusicgenForConditionalGeneration.from_pretrained(""facebook/musicgen-small"") >>> dataset = load_dataset(""sanchit-gandhi/gtzan"", split=""train"", streaming=True) >>> sample = next(iter(dataset))[""audio""] >>> # take the first quarter of the audio sample >>> sample_1 = sample[""array""][: len(sample[""array""]) // 4] >>> # take the first half of the audio sample >>> sample_2 = sample[""array""][: len(sample[""array""]) // 2] >>> inputs = processor( audio=[sample_1, sample_2], sampling_rate=sample[""sampling_rate""], text=[""80s blues track with groovy saxophone"", ""90s rock song with loud guitars and heavy drums""], padding=True, return_tensors=""pt"", ) >>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256) >>> # post-process to remove padding from the batched audio >>> audio_values = processor.batch_decode(audio_values, padding_mask=inputs.padding_mask) ### Generation Configuration The default parameters that control the generation process, such as sampling, guidance scale and number of generated tokens, can be found in the model's generation config, and updated as desired: thon >>> from transformers import MusicgenForConditionalGeneration >>> model = MusicgenForConditionalGeneration.from_pretrained(""facebook/musicgen-small"") >>> # inspect the default generation config >>> model.generation_config >>> # increase the guidance scale to 4.0 >>> model.generation_config.guidance_scale = 4.0 >>> # decrease the max length to 256 tokens >>> model.generation_config.max_length = 256 Note that any arguments passed to the generate method will **supersede** those in the generation config, so setting `do_sample=False` in the call to generate will supersede the setting of `model.generation_config.do_sample` in the generation config. ## Model Structure The MusicGen model can be de-composed into three distinct stages: 1. Text encoder: maps the text inputs to a sequence of hidden-state representations. The pre-trained MusicGen models use a frozen text encoder from either T5 or Flan-T5 2. 
MusicGen decoder: a language model (LM) that auto-regressively generates audio tokens (or codes) conditional on the encoder hidden-state representations 3. Audio encoder/decoder: used to encode an audio prompt to use as prompt tokens, and recover the audio waveform from the audio tokens predicted by the decoder Thus, the MusicGen model can either be used as a standalone decoder model, corresponding to the class [`MusicgenForCausalLM`], or as a composite model that includes the text encoder and audio encoder/decoder, corresponding to the class [`MusicgenForConditionalGeneration`]. If only the decoder needs to be loaded from the pre-trained checkpoint, it can be loaded by first specifying the correct config, or be accessed through the `.decoder` attribute of the composite model: thon >>> from transformers import AutoConfig, MusicgenForCausalLM, MusicgenForConditionalGeneration >>> # Option 1: get decoder config and pass to `.from_pretrained` >>> decoder_config = AutoConfig.from_pretrained(""facebook/musicgen-small"").decoder >>> decoder = MusicgenForCausalLM.from_pretrained(""facebook/musicgen-small"", **decoder_config) >>> # Option 2: load the entire composite model, but only return the decoder >>> decoder = MusicgenForConditionalGeneration.from_pretrained(""facebook/musicgen-small"").decoder Since the text encoder and audio encoder/decoder models are frozen during training, the MusicGen decoder [`MusicgenForCausalLM`] can be trained standalone on a dataset of encoder hidden-states and audio codes. For inference, the trained decoder can be combined with the frozen text encoder and audio encoder/decoders to recover the composite [`MusicgenForConditionalGeneration`] model. Tips: * MusicGen is trained on the 32kHz checkpoint of Encodec. You should ensure you use a compatible version of the Encodec model. * Sampling mode tends to deliver better results than greedy - you can toggle sampling with the variable `do_sample` in the call to [`MusicgenForConditionalGeneration.generate`] ## MusicgenDecoderConfig [[autodoc]] MusicgenDecoderConfig ## MusicgenConfig [[autodoc]] MusicgenConfig ## MusicgenProcessor [[autodoc]] MusicgenProcessor ## MusicgenModel [[autodoc]] MusicgenModel - forward ## MusicgenForCausalLM [[autodoc]] MusicgenForCausalLM - forward ## MusicgenForConditionalGeneration [[autodoc]] MusicgenForConditionalGeneration - forward " model_doc/detr.md," # DETR ## Overview The DETR model was proposed in [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov and Sergey Zagoruyko. DETR consists of a convolutional backbone followed by an encoder-decoder Transformer which can be trained end-to-end for object detection. It greatly simplifies a lot of the complexity of models like Faster-R-CNN and Mask-R-CNN, which use things like region proposals, non-maximum suppression procedure and anchor generation. Moreover, DETR can also be naturally extended to perform panoptic segmentation, by simply adding a mask head on top of the decoder outputs. The abstract from the paper is the following: *We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components like a non-maximum suppression procedure or anchor generation that explicitly encode our prior knowledge about the task. 
The main ingredients of the new framework, called DEtection TRansformer or DETR, are a set-based global loss that forces unique predictions via bipartite matching, and a transformer encoder-decoder architecture. Given a fixed small set of learned object queries, DETR reasons about the relations of the objects and the global image context to directly output the final set of predictions in parallel. The new model is conceptually simple and does not require a specialized library, unlike many other modern detectors. DETR demonstrates accuracy and run-time performance on par with the well-established and highly-optimized Faster RCNN baseline on the challenging COCO object detection dataset. Moreover, DETR can be easily generalized to produce panoptic segmentation in a unified manner. We show that it significantly outperforms competitive baselines.* This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/facebookresearch/detr). ## How DETR works Here's a TLDR explaining how [`~transformers.DetrForObjectDetection`] works: First, an image is sent through a pre-trained convolutional backbone (in the paper, the authors use ResNet-50/ResNet-101). Let's assume we also add a batch dimension. This means that the input to the backbone is a tensor of shape `(batch_size, 3, height, width)`, assuming the image has 3 color channels (RGB). The CNN backbone outputs a new lower-resolution feature map, typically of shape `(batch_size, 2048, height/32, width/32)`. This is then projected to match the hidden dimension of the Transformer of DETR, which is `256` by default, using a `nn.Conv2D` layer. So now, we have a tensor of shape `(batch_size, 256, height/32, width/32).` Next, the feature map is flattened and transposed to obtain a tensor of shape `(batch_size, seq_len, d_model)` = `(batch_size, width/32*height/32, 256)`. So a difference with NLP models is that the sequence length is actually longer than usual, but with a smaller `d_model` (which in NLP is typically 768 or higher). Next, this is sent through the encoder, outputting `encoder_hidden_states` of the same shape (you can consider these as image features). Next, so-called **object queries** are sent through the decoder. This is a tensor of shape `(batch_size, num_queries, d_model)`, with `num_queries` typically set to 100 and initialized with zeros. These input embeddings are learnt positional encodings that the authors refer to as object queries, and similarly to the encoder, they are added to the input of each attention layer. Each object query will look for a particular object in the image. The decoder updates these embeddings through multiple self-attention and encoder-decoder attention layers to output `decoder_hidden_states` of the same shape: `(batch_size, num_queries, d_model)`. Next, two heads are added on top for object detection: a linear layer for classifying each object query into one of the objects or ""no object"", and a MLP to predict bounding boxes for each query. The model is trained using a **bipartite matching loss**: so what we actually do is compare the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a ""no object"" as class and ""no bounding box"" as bounding box). 
The [Hungarian matching algorithm](https://en.wikipedia.org/wiki/Hungarian_algorithm) is used to find an optimal one-to-one mapping of each of the N queries to each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and [generalized IoU loss](https://giou.stanford.edu/) (for the bounding boxes) are used to optimize the parameters of the model. DETR can be naturally extended to perform panoptic segmentation (which unifies semantic segmentation and instance segmentation). [`~transformers.DetrForSegmentation`] adds a segmentation mask head on top of [`~transformers.DetrForObjectDetection`]. The mask head can be trained either jointly, or in a two steps process, where one first trains a [`~transformers.DetrForObjectDetection`] model to detect bounding boxes around both ""things"" (instances) and ""stuff"" (background things like trees, roads, sky), then freeze all the weights and train only the mask head for 25 epochs. Experimentally, these two approaches give similar results. Note that predicting boxes is required for the training to be possible, since the Hungarian matching is computed using distances between boxes. ## Usage tips - DETR uses so-called **object queries** to detect objects in an image. The number of queries determines the maximum number of objects that can be detected in a single image, and is set to 100 by default (see parameter `num_queries` of [`~transformers.DetrConfig`]). Note that it's good to have some slack (in COCO, the authors used 100, while the maximum number of objects in a COCO image is ~70). - The decoder of DETR updates the query embeddings in parallel. This is different from language models like GPT-2, which use autoregressive decoding instead of parallel. Hence, no causal attention mask is used. - DETR adds position embeddings to the hidden states at each self-attention and cross-attention layer before projecting to queries and keys. For the position embeddings of the image, one can choose between fixed sinusoidal or learned absolute position embeddings. By default, the parameter `position_embedding_type` of [`~transformers.DetrConfig`] is set to `""sine""`. - During training, the authors of DETR did find it helpful to use auxiliary losses in the decoder, especially to help the model output the correct number of objects of each class. If you set the parameter `auxiliary_loss` of [`~transformers.DetrConfig`] to `True`, then prediction feedforward neural networks and Hungarian losses are added after each decoder layer (with the FFNs sharing parameters). - If you want to train the model in a distributed environment across multiple nodes, then one should update the _num_boxes_ variable in the _DetrLoss_ class of _modeling_detr.py_. When training on multiple nodes, this should be set to the average number of target boxes across all nodes, as can be seen in the original implementation [here](https://github.com/facebookresearch/detr/blob/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5/models/detr.py#L227-L232). - [`~transformers.DetrForObjectDetection`] and [`~transformers.DetrForSegmentation`] can be initialized with any convolutional backbone available in the [timm library](https://github.com/rwightman/pytorch-image-models). Initializing with a MobileNet backbone for example can be done by setting the `backbone` attribute of [`~transformers.DetrConfig`] to `""tf_mobilenetv3_small_075""`, and then initializing the model with that config. 
- DETR resizes the input images such that the shortest side is at least a certain amount of pixels while the longest is at most 1333 pixels. At training time, scale augmentation is used such that the shortest side is randomly set to at least 480 and at most 800 pixels. At inference time, the shortest side is set to 800. One can use [`~transformers.DetrImageProcessor`] to prepare images (and optional annotations in COCO format) for the model. Due to this resizing, images in a batch can have different sizes. DETR solves this by padding images up to the largest size in a batch, and by creating a pixel mask that indicates which pixels are real/which are padding. Alternatively, one can also define a custom `collate_fn` in order to batch images together, using [`~transformers.DetrImageProcessor.pad_and_create_pixel_mask`]. - The size of the images will determine the amount of memory being used, and will thus determine the `batch_size`. It is advised to use a batch size of 2 per GPU. See [this Github thread](https://github.com/facebookresearch/detr/issues/150) for more info. There are three ways to instantiate a DETR model (depending on what you prefer): Option 1: Instantiate DETR with pre-trained weights for entire model >>> from transformers import DetrForObjectDetection >>> model = DetrForObjectDetection.from_pretrained(""facebook/detr-resnet-50"") Option 2: Instantiate DETR with randomly initialized weights for Transformer, but pre-trained weights for backbone >>> from transformers import DetrConfig, DetrForObjectDetection >>> config = DetrConfig() >>> model = DetrForObjectDetection(config) Option 3: Instantiate DETR with randomly initialized weights for backbone + Transformer >>> config = DetrConfig(use_pretrained_backbone=False) >>> model = DetrForObjectDetection(config) As a summary, consider the following table: | Task | Object detection | Instance segmentation | Panoptic segmentation | |------|------------------|-----------------------|-----------------------| | **Description** | Predicting bounding boxes and class labels around objects in an image | Predicting masks around objects (i.e. instances) in an image | Predicting masks around both objects (i.e. instances) as well as ""stuff"" (i.e. background things like trees and roads) in an image | | **Model** | [`~transformers.DetrForObjectDetection`] | [`~transformers.DetrForSegmentation`] | [`~transformers.DetrForSegmentation`] | | **Example dataset** | COCO detection | COCO detection, COCO panoptic | COCO panoptic | | | **Format of annotations to provide to** [`~transformers.DetrImageProcessor`] | {'image_id': `int`, 'annotations': `List[Dict]`} each Dict being a COCO object annotation | {'image_id': `int`, 'annotations': `List[Dict]`} (in case of COCO detection) or {'file_name': `str`, 'image_id': `int`, 'segments_info': `List[Dict]`} (in case of COCO panoptic) | {'file_name': `str`, 'image_id': `int`, 'segments_info': `List[Dict]`} and masks_path (path to directory containing PNG files of the masks) | | **Postprocessing** (i.e. 
| **Postprocessing** (i.e. converting the output of the model to COCO API) | [`~transformers.DetrImageProcessor.post_process`] | [`~transformers.DetrImageProcessor.post_process_segmentation`] | [`~transformers.DetrImageProcessor.post_process_segmentation`], [`~transformers.DetrImageProcessor.post_process_panoptic`] |
| **Evaluators** | `CocoEvaluator` with `iou_types="bbox"` | `CocoEvaluator` with `iou_types="bbox"` or `"segm"` | `CocoEvaluator` with `iou_types="bbox"` or `"segm"`, `PanopticEvaluator` |

In short, one should prepare the data either in COCO detection or COCO panoptic format, then use [`~transformers.DetrImageProcessor`] to create `pixel_values`, `pixel_mask` and optional `labels`, which can then be used to train (or fine-tune) a model. For evaluation, one should first convert the outputs of the model using one of the postprocessing methods of [`~transformers.DetrImageProcessor`]. These can be provided to either `CocoEvaluator` or `PanopticEvaluator`, which allow you to calculate metrics like mean Average Precision (mAP) and Panoptic Quality (PQ). The latter objects are implemented in the [original repository](https://github.com/facebookresearch/detr). See the [example notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DETR) for more info regarding evaluation.

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DETR.

- All example notebooks illustrating fine-tuning [`DetrForObjectDetection`] and [`DetrForSegmentation`] on a custom dataset can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DETR).
- See also: [Object detection task guide](../tasks/object_detection)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

## DetrConfig

[[autodoc]] DetrConfig

## DetrImageProcessor

[[autodoc]] DetrImageProcessor
    - preprocess
    - post_process_object_detection
    - post_process_semantic_segmentation
    - post_process_instance_segmentation
    - post_process_panoptic_segmentation

## DetrFeatureExtractor

[[autodoc]] DetrFeatureExtractor
    - __call__
    - post_process_object_detection
    - post_process_semantic_segmentation
    - post_process_instance_segmentation
    - post_process_panoptic_segmentation

## DETR specific outputs

[[autodoc]] models.detr.modeling_detr.DetrModelOutput

[[autodoc]] models.detr.modeling_detr.DetrObjectDetectionOutput

[[autodoc]] models.detr.modeling_detr.DetrSegmentationOutput

## DetrModel

[[autodoc]] DetrModel
    - forward

## DetrForObjectDetection

[[autodoc]] DetrForObjectDetection
    - forward

## DetrForSegmentation

[[autodoc]] DetrForSegmentation
    - forward

" model_doc/owlv2.md," # OWLv2

## Overview

OWLv2 was proposed in [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) by Matthias Minderer, Alexey Gritsenko, Neil Houlsby. OWLv2 scales up [OWL-ViT](owlvit) using self-training, which uses an existing detector to generate pseudo-box annotations on image-text pairs. This results in large gains over the previous state-of-the-art for zero-shot object detection. The abstract from the paper is the following: *Open-vocabulary object detection has benefited greatly from pretrained vision-language models, but is still limited by the amount of available detection training data.
While detection training data can be expanded by using Web image-text pairs as weak supervision, this has not been done at scales comparable to image-level pretraining. Here, we scale up detection data with self-training, which uses an existing detector to generate pseudo-box annotations on image-text pairs. Major challenges in scaling self-training are the choice of label space, pseudo-annotation filtering, and training efficiency. We present the OWLv2 model and OWL-ST self-training recipe, which address these challenges. OWLv2 surpasses the performance of previous state-of-the-art open-vocabulary detectors already at comparable training scales (~10M examples). However, with OWL-ST, we can scale to over 1B examples, yielding further large improvement: With an L/14 architecture, OWL-ST improves AP on LVIS rare classes, for which the model has seen no human box annotations, from 31.2% to 44.6% (43% relative improvement). OWL-ST unlocks Web-scale training for open-world localization, similar to what has been seen for image classification and language modelling.* OWLv2 high-level overview. Taken from the original paper. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit). ## Usage example OWLv2 is, just like its predecessor [OWL-ViT](owlvit), a zero-shot text-conditioned object detection model. OWL-ViT uses [CLIP](clip) as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. To use CLIP for detection, OWL-ViT removes the final token pooling layer of the vision model and attaches a lightweight classification and box head to each transformer output token. Open-vocabulary classification is enabled by replacing the fixed classification layer weights with the class-name embeddings obtained from the text model. The authors first train CLIP from scratch and fine-tune it end-to-end with the classification and box heads on standard detection datasets using a bipartite matching loss. One or multiple text queries per image can be used to perform zero-shot text-conditioned object detection. [`Owlv2ImageProcessor`] can be used to resize (or rescale) and normalize images for the model and [`CLIPTokenizer`] is used to encode the text. [`Owlv2Processor`] wraps [`Owlv2ImageProcessor`] and [`CLIPTokenizer`] into a single instance to both encode the text and prepare the images. The following example shows how to perform object detection using [`Owlv2Processor`] and [`Owlv2ForObjectDetection`]. 
thon >>> import requests >>> from PIL import Image >>> import torch >>> from transformers import Owlv2Processor, Owlv2ForObjectDetection >>> processor = Owlv2Processor.from_pretrained(""google/owlv2-base-patch16-ensemble"") >>> model = Owlv2ForObjectDetection.from_pretrained(""google/owlv2-base-patch16-ensemble"") >>> url = ""http://images.cocodataset.org/val2017/000000039769.jpg"" >>> image = Image.open(requests.get(url, stream=True).raw) >>> texts = [[""a photo of a cat"", ""a photo of a dog""]] >>> inputs = processor(text=texts, images=image, return_tensors=""pt"") >>> outputs = model(**inputs) >>> # Target image sizes (height, width) to rescale box predictions [batch_size, 2] >>> target_sizes = torch.Tensor([image.size[::-1]]) >>> # Convert outputs (bounding boxes and class logits) to COCO API >>> results = processor.post_process_object_detection(outputs=outputs, target_sizes=target_sizes, threshold=0.1) >>> i = 0 # Retrieve predictions for the first image for the corresponding text queries >>> text = texts[i] >>> boxes, scores, labels = results[i][""boxes""], results[i][""scores""], results[i][""labels""] >>> for box, score, label in zip(boxes, scores, labels): box = [round(i, 2) for i in box.tolist()] print(f""Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}"") Detected a photo of a cat with confidence 0.614 at location [341.67, 17.54, 642.32, 278.51] Detected a photo of a cat with confidence 0.665 at location [6.75, 38.97, 326.62, 354.85] ## Resources - A demo notebook on using OWLv2 for zero- and one-shot (image-guided) object detection can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/OWLv2). - [Zero-shot object detection task guide](../tasks/zero_shot_object_detection) The architecture of OWLv2 is identical to [OWL-ViT](owlvit), however the object detection head now also includes an objectness classifier, which predicts the (query-agnostic) likelihood that a predicted box contains an object (as opposed to background). The objectness score can be used to rank or filter predictions independently of text queries. Usage of OWLv2 is identical to [OWL-ViT](owlvit) with a new, updated image processor ([`Owlv2ImageProcessor`]). ## Owlv2Config [[autodoc]] Owlv2Config - from_text_vision_configs ## Owlv2TextConfig [[autodoc]] Owlv2TextConfig ## Owlv2VisionConfig [[autodoc]] Owlv2VisionConfig ## Owlv2ImageProcessor [[autodoc]] Owlv2ImageProcessor - preprocess - post_process_object_detection - post_process_image_guided_detection ## Owlv2Processor [[autodoc]] Owlv2Processor ## Owlv2Model [[autodoc]] Owlv2Model - forward - get_text_features - get_image_features ## Owlv2TextModel [[autodoc]] Owlv2TextModel - forward ## Owlv2VisionModel [[autodoc]] Owlv2VisionModel - forward ## Owlv2ForObjectDetection [[autodoc]] Owlv2ForObjectDetection - forward - image_guided_detection " model_doc/blenderbot.md," # Blenderbot ## Overview The Blender chatbot model was proposed in [Recipes for building an open-domain chatbot](https://arxiv.org/pdf/2004.13637.pdf) Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston on 30 Apr 2020. The abstract of the paper is the following: *Building open-domain chatbots is a challenging area for machine learning research. 
While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent persona. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.* This model was contributed by [sshleifer](https://huggingface.co/sshleifer). The authors' code can be found [here](https://github.com/facebookresearch/ParlAI) . ## Usage tips and example Blenderbot is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. An example: thon >>> from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration >>> mname = ""facebook/blenderbot-400M-distill"" >>> model = BlenderbotForConditionalGeneration.from_pretrained(mname) >>> tokenizer = BlenderbotTokenizer.from_pretrained(mname) >>> UTTERANCE = ""My friends are cool but they eat too many carbs."" >>> inputs = tokenizer([UTTERANCE], return_tensors=""pt"") >>> reply_ids = model.generate(**inputs) >>> print(tokenizer.batch_decode(reply_ids)) ["" That's unfortunate. Are they trying to lose weight or are they just trying to be healthier?""] ## Implementation Notes - Blenderbot uses a standard [seq2seq model transformer](https://arxiv.org/pdf/1706.03762.pdf) based architecture. - Available checkpoints can be found in the [model hub](https://huggingface.co/models?search=blenderbot). - This is the *default* Blenderbot model class. However, some smaller checkpoints, such as `facebook/blenderbot_small_90M`, have a different architecture and consequently should be used with [BlenderbotSmall](blenderbot-small). 
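As an illustration, a minimal sketch of loading that smaller checkpoint with the BlenderbotSmall classes instead (see the [BlenderbotSmall](blenderbot-small) page for the exact classes and checkpoint name; the name used here is the one mentioned above):

```py
>>> from transformers import BlenderbotSmallTokenizer, BlenderbotSmallForConditionalGeneration

>>> mname = "facebook/blenderbot_small_90M"
>>> model = BlenderbotSmallForConditionalGeneration.from_pretrained(mname)
>>> tokenizer = BlenderbotSmallTokenizer.from_pretrained(mname)

>>> UTTERANCE = "My friends are cool but they eat too many carbs."
>>> inputs = tokenizer([UTTERANCE], return_tensors="pt")
>>> reply_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(reply_ids, skip_special_tokens=True)
```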
## Resources - [Causal language modeling task guide](../tasks/language_modeling) - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## BlenderbotConfig [[autodoc]] BlenderbotConfig ## BlenderbotTokenizer [[autodoc]] BlenderbotTokenizer - build_inputs_with_special_tokens ## BlenderbotTokenizerFast [[autodoc]] BlenderbotTokenizerFast - build_inputs_with_special_tokens ## BlenderbotModel See [`~transformers.BartModel`] for arguments to *forward* and *generate* [[autodoc]] BlenderbotModel - forward ## BlenderbotForConditionalGeneration See [`~transformers.BartForConditionalGeneration`] for arguments to *forward* and *generate* [[autodoc]] BlenderbotForConditionalGeneration - forward ## BlenderbotForCausalLM [[autodoc]] BlenderbotForCausalLM - forward ## TFBlenderbotModel [[autodoc]] TFBlenderbotModel - call ## TFBlenderbotForConditionalGeneration [[autodoc]] TFBlenderbotForConditionalGeneration - call ## FlaxBlenderbotModel [[autodoc]] FlaxBlenderbotModel - __call__ - encode - decode ## FlaxBlenderbotForConditionalGeneration [[autodoc]] FlaxBlenderbotForConditionalGeneration - __call__ - encode - decode " model_doc/mt5.md," # mT5 ## Overview The mT5 model was presented in [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. The abstract from the paper is the following: *The recent ""Text-to-Text Transfer Transformer"" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We detail the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. We also describe a simple technique to prevent ""accidental translation"" in the zero-shot setting, where a generative model chooses to (partially) translate its prediction into the wrong language. All of the code and model checkpoints used in this work are publicly available.* Note: mT5 was only pre-trained on [mC4](https://huggingface.co/datasets/mc4) excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5 model. Since mT5 was pre-trained unsupervisedly, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix. Google has released the following variants: - [google/mt5-small](https://huggingface.co/google/mt5-small) - [google/mt5-base](https://huggingface.co/google/mt5-base) - [google/mt5-large](https://huggingface.co/google/mt5-large) - [google/mt5-xl](https://huggingface.co/google/mt5-xl) - [google/mt5-xxl](https://huggingface.co/google/mt5-xxl). This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://github.com/google-research/multilingual-t5). ## Resources - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## MT5Config [[autodoc]] MT5Config ## MT5Tokenizer [[autodoc]] MT5Tokenizer See [`T5Tokenizer`] for all details. 
## MT5TokenizerFast [[autodoc]] MT5TokenizerFast See [`T5TokenizerFast`] for all details. ## MT5Model [[autodoc]] MT5Model ## MT5ForConditionalGeneration [[autodoc]] MT5ForConditionalGeneration ## MT5EncoderModel [[autodoc]] MT5EncoderModel ## MT5ForSequenceClassification [[autodoc]] MT5ForSequenceClassification ## MT5ForQuestionAnswering [[autodoc]] MT5ForQuestionAnswering ## TFMT5Model [[autodoc]] TFMT5Model ## TFMT5ForConditionalGeneration [[autodoc]] TFMT5ForConditionalGeneration ## TFMT5EncoderModel [[autodoc]] TFMT5EncoderModel ## FlaxMT5Model [[autodoc]] FlaxMT5Model ## FlaxMT5ForConditionalGeneration [[autodoc]] FlaxMT5ForConditionalGeneration ## FlaxMT5EncoderModel [[autodoc]] FlaxMT5EncoderModel " model_doc/mvp.md," # MVP ## Overview The MVP model was proposed in [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. According to the abstract, - MVP follows a standard Transformer encoder-decoder architecture. - MVP is supervised pre-trained using labeled datasets. - MVP also has task-specific soft prompts to stimulate the model's capacity in performing a certain task. - MVP is specially designed for natural language generation and can be adapted to a wide range of generation tasks, including but not limited to summarization, data-to-text generation, open-ended dialogue system, story generation, question answering, question generation, task-oriented dialogue system, commonsense generation, paraphrase generation, text style transfer, and text simplification. Our model can also be adapted to natural language understanding tasks such as sequence classification and (extractive) question answering. This model was contributed by [Tianyi Tang](https://huggingface.co/StevenTang). The detailed information and instructions can be found [here](https://github.com/RUCAIBox/MVP). ## Usage tips - We have released a series of models [here](https://huggingface.co/models?filter=mvp), including MVP, MVP with task-specific prompts, and multi-task pre-trained variants. - If you want to use a model without prompts (standard Transformer), you can load it through `MvpForConditionalGeneration.from_pretrained('RUCAIBox/mvp')`. - If you want to use a model with task-specific prompts, such as summarization, you can load it through `MvpForConditionalGeneration.from_pretrained('RUCAIBox/mvp-summarization')`. - Our model supports lightweight prompt tuning following [Prefix-tuning](https://arxiv.org/abs/2101.00190) with method `set_lightweight_tuning()`. ## Usage examples For summarization, it is an example to use MVP and MVP with summarization-specific prompts. 
thon >>> from transformers import MvpTokenizer, MvpForConditionalGeneration >>> tokenizer = MvpTokenizer.from_pretrained(""RUCAIBox/mvp"") >>> model = MvpForConditionalGeneration.from_pretrained(""RUCAIBox/mvp"") >>> model_with_prompt = MvpForConditionalGeneration.from_pretrained(""RUCAIBox/mvp-summarization"") >>> inputs = tokenizer( ""Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons."", return_tensors=""pt"", ) >>> generated_ids = model.generate(**inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) [""Why You Shouldn't Quit Your Job""] >>> generated_ids = model_with_prompt.generate(**inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) [""Don't do it if these are your reasons""] For data-to-text generation, it is an example to use MVP and multi-task pre-trained variants. thon >>> from transformers import MvpTokenizerFast, MvpForConditionalGeneration >>> tokenizer = MvpTokenizerFast.from_pretrained(""RUCAIBox/mvp"") >>> model = MvpForConditionalGeneration.from_pretrained(""RUCAIBox/mvp"") >>> model_with_mtl = MvpForConditionalGeneration.from_pretrained(""RUCAIBox/mtl-data-to-text"") >>> inputs = tokenizer( ""Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man"", return_tensors=""pt"", ) >>> generated_ids = model.generate(**inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ['Stan Lee created the character of Iron Man, a fictional superhero appearing in American comic'] >>> generated_ids = model_with_mtl.generate(**inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ['Iron Man is a fictional superhero appearing in American comic books published by Marvel Comics.'] For lightweight tuning, *i.e.*, fixing the model and only tuning prompts, you can load MVP with randomly initialized prompts or with task-specific prompts. Our code also supports Prefix-tuning with BART following the [original paper](https://arxiv.org/abs/2101.00190). 
thon >>> from transformers import MvpForConditionalGeneration >>> model = MvpForConditionalGeneration.from_pretrained(""RUCAIBox/mvp"", use_prompt=True) >>> # the number of trainable parameters (full tuning) >>> sum(p.numel() for p in model.parameters() if p.requires_grad) 468116832 >>> # lightweight tuning with randomly initialized prompts >>> model.set_lightweight_tuning() >>> # the number of trainable parameters (lightweight tuning) >>> sum(p.numel() for p in model.parameters() if p.requires_grad) 61823328 >>> # lightweight tuning with task-specific prompts >>> model = MvpForConditionalGeneration.from_pretrained(""RUCAIBox/mtl-data-to-text"") >>> model.set_lightweight_tuning() >>> # original lightweight Prefix-tuning >>> model = MvpForConditionalGeneration.from_pretrained(""facebook/bart-large"", use_prompt=True) >>> model.set_lightweight_tuning() ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## MvpConfig [[autodoc]] MvpConfig ## MvpTokenizer [[autodoc]] MvpTokenizer ## MvpTokenizerFast [[autodoc]] MvpTokenizerFast ## MvpModel [[autodoc]] MvpModel - forward ## MvpForConditionalGeneration [[autodoc]] MvpForConditionalGeneration - forward ## MvpForSequenceClassification [[autodoc]] MvpForSequenceClassification - forward ## MvpForQuestionAnswering [[autodoc]] MvpForQuestionAnswering - forward ## MvpForCausalLM [[autodoc]] MvpForCausalLM - forward " model_doc/swin2sr.md," # Swin2SR ## Overview The Swin2SR model was proposed in [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte. Swin2R improves the [SwinIR](https://github.com/JingyunLiang/SwinIR/) model by incorporating [Swin Transformer v2](swinv2) layers which mitigates issues such as training instability, resolution gaps between pre-training and fine-tuning, and hunger on data. The abstract from the paper is the following: *Compression plays an important role on the efficient transmission and storage of images and videos through band-limited systems such as streaming services, virtual reality or videogames. However, compression unavoidably leads to artifacts and the loss of the original information, which may severely degrade the visual quality. For these reasons, quality enhancement of compressed images has become a popular research topic. While most state-of-the-art image restoration methods are based on convolutional neural networks, other transformers-based methods such as SwinIR, show impressive performance on these tasks. In this paper, we explore the novel Swin Transformer V2, to improve SwinIR for image super-resolution, and in particular, the compressed input scenario. Using this method we can tackle the major issues in training transformer vision models, such as training instability, resolution gaps between pre-training and fine-tuning, and hunger on data. We conduct experiments on three representative tasks: JPEG compression artifacts removal, image super-resolution (classical and lightweight), and compressed image super-resolution. 
Experimental results demonstrate that our method, Swin2SR, can improve the training convergence and performance of SwinIR, and is a top-5 solution at the ""AIM 2022 Challenge on Super-Resolution of Compressed Image and Video"".* Swin2SR architecture. Taken from the original paper. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/mv-lab/swin2sr). ## Resources Demo notebooks for Swin2SR can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Swin2SR). A demo Space for image super-resolution with SwinSR can be found [here](https://huggingface.co/spaces/jjourney1125/swin2sr). ## Swin2SRImageProcessor [[autodoc]] Swin2SRImageProcessor - preprocess ## Swin2SRConfig [[autodoc]] Swin2SRConfig ## Swin2SRModel [[autodoc]] Swin2SRModel - forward ## Swin2SRForImageSuperResolution [[autodoc]] Swin2SRForImageSuperResolution - forward " model_doc/trajectory_transformer.md," # Trajectory Transformer This model is in maintenance mode only, so we won't accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0. You can do so by running the following command: `pip install -U transformers==4.30.0`. ## Overview The Trajectory Transformer model was proposed in [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine. The abstract from the paper is the following: *Reinforcement learning (RL) is typically concerned with estimating stationary policies or single-step models, leveraging the Markov property to factorize problems in time. However, we can also view RL as a generic sequence modeling problem, with the goal being to produce a sequence of actions that leads to a sequence of high rewards. Viewed in this way, it is tempting to consider whether high-capacity sequence prediction models that work well in other domains, such as natural-language processing, can also provide effective solutions to the RL problem. To this end, we explore how RL can be tackled with the tools of sequence modeling, using a Transformer architecture to model distributions over trajectories and repurposing beam search as a planning algorithm. Framing RL as sequence modeling problem simplifies a range of design decisions, allowing us to dispense with many of the components common in offline RL algorithms. We demonstrate the flexibility of this approach across long-horizon dynamics prediction, imitation learning, goal-conditioned RL, and offline RL. Further, we show that this approach can be combined with existing model-free algorithms to yield a state-of-the-art planner in sparse-reward, long-horizon tasks.* This model was contributed by [CarlCochet](https://huggingface.co/CarlCochet). The original code can be found [here](https://github.com/jannerm/trajectory-transformer). ## Usage tips This Transformer is used for deep reinforcement learning. To use it, you need to create sequences from actions, states and rewards from all previous timesteps. This model will treat all these elements together as one big sequence (a trajectory). 
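For illustration, a minimal, hypothetical sketch of feeding such a flattened trajectory to the model (the checkpoint name, sequence length and random token ids are placeholder assumptions; real inputs must be states, actions and rewards discretized as in the original codebase, and the model requires `transformers==4.30.0` as noted above):

```py
>>> import torch
>>> from transformers import TrajectoryTransformerModel

>>> model = TrajectoryTransformerModel.from_pretrained(
...     "CarlCochet/trajectory-transformer-halfcheetah-medium-v2"
... )

>>> # one "trajectory" is a single flat token sequence of interleaved state, action and reward tokens;
>>> # random ids are used here purely as a stand-in for properly discretized data
>>> trajectories = torch.randint(0, model.config.vocab_size, (1, 24))
>>> with torch.no_grad():
...     outputs = model(trajectories)  # outputs contain next-token scores over the discretized vocabulary
```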
## TrajectoryTransformerConfig [[autodoc]] TrajectoryTransformerConfig ## TrajectoryTransformerModel [[autodoc]] TrajectoryTransformerModel - forward " model_doc/unispeech.md," # UniSpeech ## Overview The UniSpeech model was proposed in [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang . The abstract from the paper is the following: *In this paper, we propose a unified pre-training approach called UniSpeech to learn speech representations with both unlabeled and labeled data, in which supervised phonetic CTC learning and phonetically-aware contrastive self-supervised learning are conducted in a multi-task learning manner. The resultant representations can capture information more correlated with phonetic structures and improve the generalization across languages and domains. We evaluate the effectiveness of UniSpeech for cross-lingual representation learning on public CommonVoice corpus. The results show that UniSpeech outperforms self-supervised pretraining and supervised transfer learning for speech recognition by a maximum of 13.4% and 17.8% relative phone error rate reductions respectively (averaged over all testing languages). The transferability of UniSpeech is also demonstrated on a domain-shift speech recognition task, i.e., a relative word error rate reduction of 6% against the previous approach.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The Authors' code can be found [here](https://github.com/microsoft/UniSpeech/tree/main/UniSpeech). ## Usage tips - UniSpeech is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use [`Wav2Vec2Processor`] for the feature extraction. - UniSpeech model can be fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`]. ## Resources - [Audio classification task guide](../tasks/audio_classification) - [Automatic speech recognition task guide](../tasks/asr) ## UniSpeechConfig [[autodoc]] UniSpeechConfig ## UniSpeech specific outputs [[autodoc]] models.unispeech.modeling_unispeech.UniSpeechForPreTrainingOutput ## UniSpeechModel [[autodoc]] UniSpeechModel - forward ## UniSpeechForCTC [[autodoc]] UniSpeechForCTC - forward ## UniSpeechForSequenceClassification [[autodoc]] UniSpeechForSequenceClassification - forward ## UniSpeechForPreTraining [[autodoc]] UniSpeechForPreTraining - forward " model_doc/camembert.md," # CamemBERT ## Overview The CamemBERT model was proposed in [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah, and Benoît Sagot. It is based on Facebook's RoBERTa model released in 2019. It is a model trained on 138GB of French text. The abstract from the paper is the following: *Pretrained language models are now ubiquitous in Natural Language Processing. Despite their success, most available models have either been trained on English data or on the concatenation of data in multiple languages. This makes practical use of such models --in all languages except English-- very limited. Aiming to address this issue for French, we release CamemBERT, a French version of the Bi-directional Encoders for Transformers (BERT). 
We measure the performance of CamemBERT compared to multilingual models in multiple downstream tasks, namely part-of-speech tagging, dependency parsing, named-entity recognition, and natural language inference. CamemBERT improves the state of the art for most of the tasks considered. We release the pretrained model for CamemBERT hoping to foster research and downstream applications for French NLP.* This model was contributed by [camembert](https://huggingface.co/camembert). The original code can be found [here](https://camembert-model.fr/). This implementation is the same as RoBERTa. Refer to the [documentation of RoBERTa](roberta) for usage examples as well as the information relative to the inputs and outputs. ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## CamembertConfig [[autodoc]] CamembertConfig ## CamembertTokenizer [[autodoc]] CamembertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## CamembertTokenizerFast [[autodoc]] CamembertTokenizerFast ## CamembertModel [[autodoc]] CamembertModel ## CamembertForCausalLM [[autodoc]] CamembertForCausalLM ## CamembertForMaskedLM [[autodoc]] CamembertForMaskedLM ## CamembertForSequenceClassification [[autodoc]] CamembertForSequenceClassification ## CamembertForMultipleChoice [[autodoc]] CamembertForMultipleChoice ## CamembertForTokenClassification [[autodoc]] CamembertForTokenClassification ## CamembertForQuestionAnswering [[autodoc]] CamembertForQuestionAnswering ## TFCamembertModel [[autodoc]] TFCamembertModel ## TFCamembertForCasualLM [[autodoc]] TFCamembertForCausalLM ## TFCamembertForMaskedLM [[autodoc]] TFCamembertForMaskedLM ## TFCamembertForSequenceClassification [[autodoc]] TFCamembertForSequenceClassification ## TFCamembertForMultipleChoice [[autodoc]] TFCamembertForMultipleChoice ## TFCamembertForTokenClassification [[autodoc]] TFCamembertForTokenClassification ## TFCamembertForQuestionAnswering [[autodoc]] TFCamembertForQuestionAnswering " model_doc/owlvit.md," # OWL-ViT ## Overview The OWL-ViT (short for Vision Transformer for Open-World Localization) was proposed in [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. OWL-ViT is an open-vocabulary object detection network trained on a variety of (image, text) pairs. It can be used to query an image with one or multiple text queries to search for and detect target objects described in text. The abstract from the paper is the following: *Combining simple architectures with large-scale pre-training has led to massive improvements in image classification. For object detection, pre-training and scaling approaches are less well established, especially in the long-tailed and open-vocabulary setting, where training data is relatively scarce. In this paper, we propose a strong recipe for transferring image-text models to open-vocabulary object detection. 
We use a standard Vision Transformer architecture with minimal modifications, contrastive image-text pre-training, and end-to-end detection fine-tuning. Our analysis of the scaling properties of this setup shows that increasing image-level pre-training and model size yield consistent improvements on the downstream detection task. We provide the adaptation strategies and regularizations needed to attain very strong performance on zero-shot text-conditioned and one-shot image-conditioned object detection. Code and models are available on GitHub.* OWL-ViT architecture. Taken from the original paper. This model was contributed by [adirik](https://huggingface.co/adirik). The original code can be found [here](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit). ## Usage tips OWL-ViT is a zero-shot text-conditioned object detection model. OWL-ViT uses [CLIP](clip) as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. To use CLIP for detection, OWL-ViT removes the final token pooling layer of the vision model and attaches a lightweight classification and box head to each transformer output token. Open-vocabulary classification is enabled by replacing the fixed classification layer weights with the class-name embeddings obtained from the text model. The authors first train CLIP from scratch and fine-tune it end-to-end with the classification and box heads on standard detection datasets using a bipartite matching loss. One or multiple text queries per image can be used to perform zero-shot text-conditioned object detection. [`OwlViTImageProcessor`] can be used to resize (or rescale) and normalize images for the model and [`CLIPTokenizer`] is used to encode the text. [`OwlViTProcessor`] wraps [`OwlViTImageProcessor`] and [`CLIPTokenizer`] into a single instance to both encode the text and prepare the images. The following example shows how to perform object detection using [`OwlViTProcessor`] and [`OwlViTForObjectDetection`]. 
thon >>> import requests >>> from PIL import Image >>> import torch >>> from transformers import OwlViTProcessor, OwlViTForObjectDetection >>> processor = OwlViTProcessor.from_pretrained(""google/owlvit-base-patch32"") >>> model = OwlViTForObjectDetection.from_pretrained(""google/owlvit-base-patch32"") >>> url = ""http://images.cocodataset.org/val2017/000000039769.jpg"" >>> image = Image.open(requests.get(url, stream=True).raw) >>> texts = [[""a photo of a cat"", ""a photo of a dog""]] >>> inputs = processor(text=texts, images=image, return_tensors=""pt"") >>> outputs = model(**inputs) >>> # Target image sizes (height, width) to rescale box predictions [batch_size, 2] >>> target_sizes = torch.Tensor([image.size[::-1]]) >>> # Convert outputs (bounding boxes and class logits) to COCO API >>> results = processor.post_process_object_detection(outputs=outputs, target_sizes=target_sizes, threshold=0.1) >>> i = 0 # Retrieve predictions for the first image for the corresponding text queries >>> text = texts[i] >>> boxes, scores, labels = results[i][""boxes""], results[i][""scores""], results[i][""labels""] >>> for box, score, label in zip(boxes, scores, labels): box = [round(i, 2) for i in box.tolist()] print(f""Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}"") Detected a photo of a cat with confidence 0.707 at location [324.97, 20.44, 640.58, 373.29] Detected a photo of a cat with confidence 0.717 at location [1.46, 55.26, 315.55, 472.17] ## Resources A demo notebook on using OWL-ViT for zero- and one-shot (image-guided) object detection can be found [here](https://github.com/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb). ## OwlViTConfig [[autodoc]] OwlViTConfig - from_text_vision_configs ## OwlViTTextConfig [[autodoc]] OwlViTTextConfig ## OwlViTVisionConfig [[autodoc]] OwlViTVisionConfig ## OwlViTImageProcessor [[autodoc]] OwlViTImageProcessor - preprocess - post_process_object_detection - post_process_image_guided_detection ## OwlViTFeatureExtractor [[autodoc]] OwlViTFeatureExtractor - __call__ - post_process - post_process_image_guided_detection ## OwlViTProcessor [[autodoc]] OwlViTProcessor ## OwlViTModel [[autodoc]] OwlViTModel - forward - get_text_features - get_image_features ## OwlViTTextModel [[autodoc]] OwlViTTextModel - forward ## OwlViTVisionModel [[autodoc]] OwlViTVisionModel - forward ## OwlViTForObjectDetection [[autodoc]] OwlViTForObjectDetection - forward - image_guided_detection " model_doc/electra.md," # ELECTRA ## Overview The ELECTRA model was proposed in the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB). ELECTRA is a new pretraining approach which trains two transformer models: the generator and the discriminator. The generator's role is to replace tokens in a sequence, and is therefore trained as a masked language model. The discriminator, which is the model we're interested in, tries to identify which tokens were replaced by the generator in the sequence. The abstract from the paper is the following: *Masked language modeling (MLM) pretraining methods such as BERT corrupt the input by replacing some tokens with [MASK] and then train a model to reconstruct the original tokens. While they produce good results when transferred to downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a more sample-efficient pretraining task called replaced token detection. 
Instead of masking the input, our approach corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments demonstrate this new pretraining task is more efficient than MLM because the task is defined over all input tokens rather than just the small subset that was masked out. As a result, the contextual representations learned by our approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained using 30x more compute) on the GLUE natural language understanding benchmark. Our approach also works well at scale, where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when using the same amount of compute.* This model was contributed by [lysandre](https://huggingface.co/lysandre). The original code can be found [here](https://github.com/google-research/electra). ## Usage tips - ELECTRA is the pretraining approach, therefore there is nearly no changes done to the underlying model: BERT. The only change is the separation of the embedding size and the hidden size: the embedding size is generally smaller, while the hidden size is larger. An additional projection layer (linear) is used to project the embeddings from their embedding size to the hidden size. In the case where the embedding size is the same as the hidden size, no projection layer is used. - ELECTRA is a transformer model pretrained with the use of another (small) masked language model. The inputs are corrupted by that language model, which takes an input text that is randomly masked and outputs a text in which ELECTRA has to predict which token is an original and which one has been replaced. Like for GAN training, the small language model is trained for a few steps (but with the original texts as objective, not to fool the ELECTRA model like in a traditional GAN setting) then the ELECTRA model is trained for a few steps. - The ELECTRA checkpoints saved using [Google Research's implementation](https://github.com/google-research/electra) contain both the generator and discriminator. The conversion script requires the user to name which model to export into the correct architecture. Once converted to the HuggingFace format, these checkpoints may be loaded into all available ELECTRA models, however. This means that the discriminator may be loaded in the [`ElectraForMaskedLM`] model, and the generator may be loaded in the [`ElectraForPreTraining`] model (the classification head will be randomly initialized as it doesn't exist in the generator). 
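For illustration, a minimal sketch of loading the two published small checkpoints into their natural heads (as noted above, any converted checkpoint can also be loaded into the other ELECTRA classes):

```py
>>> from transformers import ElectraForPreTraining, ElectraForMaskedLM

>>> # the discriminator predicts, for each token, whether it was replaced by the generator
>>> discriminator = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")

>>> # the generator is the small masked language model used to corrupt the inputs
>>> generator = ElectraForMaskedLM.from_pretrained("google/electra-small-generator")
```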
## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## ElectraConfig [[autodoc]] ElectraConfig ## ElectraTokenizer [[autodoc]] ElectraTokenizer ## ElectraTokenizerFast [[autodoc]] ElectraTokenizerFast ## Electra specific outputs [[autodoc]] models.electra.modeling_electra.ElectraForPreTrainingOutput [[autodoc]] models.electra.modeling_tf_electra.TFElectraForPreTrainingOutput ## ElectraModel [[autodoc]] ElectraModel - forward ## ElectraForPreTraining [[autodoc]] ElectraForPreTraining - forward ## ElectraForCausalLM [[autodoc]] ElectraForCausalLM - forward ## ElectraForMaskedLM [[autodoc]] ElectraForMaskedLM - forward ## ElectraForSequenceClassification [[autodoc]] ElectraForSequenceClassification - forward ## ElectraForMultipleChoice [[autodoc]] ElectraForMultipleChoice - forward ## ElectraForTokenClassification [[autodoc]] ElectraForTokenClassification - forward ## ElectraForQuestionAnswering [[autodoc]] ElectraForQuestionAnswering - forward ## TFElectraModel [[autodoc]] TFElectraModel - call ## TFElectraForPreTraining [[autodoc]] TFElectraForPreTraining - call ## TFElectraForMaskedLM [[autodoc]] TFElectraForMaskedLM - call ## TFElectraForSequenceClassification [[autodoc]] TFElectraForSequenceClassification - call ## TFElectraForMultipleChoice [[autodoc]] TFElectraForMultipleChoice - call ## TFElectraForTokenClassification [[autodoc]] TFElectraForTokenClassification - call ## TFElectraForQuestionAnswering [[autodoc]] TFElectraForQuestionAnswering - call ## FlaxElectraModel [[autodoc]] FlaxElectraModel - __call__ ## FlaxElectraForPreTraining [[autodoc]] FlaxElectraForPreTraining - __call__ ## FlaxElectraForCausalLM [[autodoc]] FlaxElectraForCausalLM - __call__ ## FlaxElectraForMaskedLM [[autodoc]] FlaxElectraForMaskedLM - __call__ ## FlaxElectraForSequenceClassification [[autodoc]] FlaxElectraForSequenceClassification - __call__ ## FlaxElectraForMultipleChoice [[autodoc]] FlaxElectraForMultipleChoice - __call__ ## FlaxElectraForTokenClassification [[autodoc]] FlaxElectraForTokenClassification - __call__ ## FlaxElectraForQuestionAnswering [[autodoc]] FlaxElectraForQuestionAnswering - __call__ " model_doc/nezha.md," # Nezha ## Overview The Nezha model was proposed in [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei et al. The abstract from the paper is the following: *The pre-trained language models have achieved great successes in various natural language understanding (NLU) tasks due to its capacity to capture the deep contextualized information in text by pre-training on large-scale corpora. In this technical report, we present our practice of pre-training language models named NEZHA (NEural contextualiZed representation for CHinese lAnguage understanding) on Chinese corpora and finetuning for the Chinese NLU tasks. The current version of NEZHA is based on BERT with a collection of proven improvements, which include Functional Relative Positional Encoding as an effective positional encoding scheme, Whole Word Masking strategy, Mixed Precision Training and the LAMB Optimizer in training the models. 
The experimental results show that NEZHA achieves the state-of-the-art performances when finetuned on several representative Chinese tasks, including named entity recognition (People's Daily NER), sentence matching (LCQMC), Chinese sentiment classification (ChnSenti) and natural language inference (XNLI).* This model was contributed by [sijunhe](https://huggingface.co/sijunhe). The original code can be found [here](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-PyTorch). ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## NezhaConfig [[autodoc]] NezhaConfig ## NezhaModel [[autodoc]] NezhaModel - forward ## NezhaForPreTraining [[autodoc]] NezhaForPreTraining - forward ## NezhaForMaskedLM [[autodoc]] NezhaForMaskedLM - forward ## NezhaForNextSentencePrediction [[autodoc]] NezhaForNextSentencePrediction - forward ## NezhaForSequenceClassification [[autodoc]] NezhaForSequenceClassification - forward ## NezhaForMultipleChoice [[autodoc]] NezhaForMultipleChoice - forward ## NezhaForTokenClassification [[autodoc]] NezhaForTokenClassification - forward ## NezhaForQuestionAnswering [[autodoc]] NezhaForQuestionAnswering - forward" model_doc/mega.md," # MEGA ## Overview The MEGA model was proposed in [Mega: Moving Average Equipped Gated Attention](https://arxiv.org/abs/2209.10655) by Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer. MEGA proposes a new approach to self-attention with each encoder layer having a multi-headed exponential moving average in addition to a single head of standard dot-product attention, giving the attention mechanism stronger positional biases. This allows MEGA to perform competitively to Transformers on standard benchmarks including LRA while also having significantly fewer parameters. MEGA's compute efficiency allows it to scale to very long sequences, making it an attractive option for long-document NLP tasks. The abstract from the paper is the following: *The design choices in the Transformer attention mechanism, including weak inductive bias and quadratic computational complexity, have limited its application for modeling long sequences. In this paper, we introduce Mega, a simple, theoretically grounded, single-head gated attention mechanism equipped with (exponential) moving average to incorporate inductive bias of position-aware local dependencies into the position-agnostic attention mechanism. We further propose a variant of Mega that offers linear time and space complexity yet yields only minimal quality loss, by efficiently splitting the whole sequence into multiple chunks with fixed length. Extensive experiments on a wide range of sequence modeling benchmarks, including the Long Range Arena, neural machine translation, auto-regressive language modeling, and image and speech classification, show that Mega achieves significant improvements over other sequence models, including variants of Transformers and recent state space models. * This model was contributed by [mnaylor](https://huggingface.co/mnaylor). The original code can be found [here](https://github.com/facebookresearch/mega). 
## Usage tips - MEGA can perform quite well with relatively few parameters. See Appendix D in the MEGA paper for examples of architectural specs which perform well in various settings. If using MEGA as a decoder, be sure to set `bidirectional=False` to avoid errors with default bidirectional. - Mega-chunk is a variant of mega that reduces time and spaces complexity from quadratic to linear. Utilize chunking with MegaConfig.use_chunking and control chunk size with MegaConfig.chunk_size ## Implementation Notes - The original implementation of MEGA had an inconsistent expectation of attention masks for padding and causal self-attention between the softmax attention and Laplace/squared ReLU method. This implementation addresses that inconsistency. - The original implementation did not include token type embeddings; this implementation adds support for these, with the option controlled by MegaConfig.add_token_type_embeddings ## MegaConfig [[autodoc]] MegaConfig ## MegaModel [[autodoc]] MegaModel - forward ## MegaForCausalLM [[autodoc]] MegaForCausalLM - forward ## MegaForMaskedLM [[autodoc]] MegaForMaskedLM - forward ## MegaForSequenceClassification [[autodoc]] MegaForSequenceClassification - forward ## MegaForMultipleChoice [[autodoc]] MegaForMultipleChoice - forward ## MegaForTokenClassification [[autodoc]] MegaForTokenClassification - forward ## MegaForQuestionAnswering [[autodoc]] MegaForQuestionAnswering - forward " model_doc/led.md," # LED ## Overview The LED model was proposed in [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. The abstract from the paper is the following: *Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on WikiHop and TriviaQA. We finally introduce the Longformer-Encoder-Decoder (LED), a Longformer variant for supporting long document generative sequence-to-sequence tasks, and demonstrate its effectiveness on the arXiv summarization dataset.* ## Usage tips - [`LEDForConditionalGeneration`] is an extension of [`BartForConditionalGeneration`] exchanging the traditional *self-attention* layer with *Longformer*'s *chunked self-attention* layer. [`LEDTokenizer`] is an alias of [`BartTokenizer`]. - LED works very well on long-range *sequence-to-sequence* tasks where the `input_ids` largely exceed a length of 1024 tokens. - LED pads the `input_ids` to be a multiple of `config.attention_window` if required. Therefore a small speed-up is gained, when [`LEDTokenizer`] is used with the `pad_to_multiple_of` argument. - LED makes use of *global attention* by means of the `global_attention_mask` (see [`LongformerModel`]). 
For summarization, it is advised to put *global attention* only on the first `` token. For question answering, it is advised to put *global attention* on all tokens of the question. - To fine-tune LED on all 16384, *gradient checkpointing* can be enabled in case training leads to out-of-memory (OOM) errors. This can be done by executing `model.gradient_checkpointing_enable()`. Moreover, the `use_cache=False` flag can be used to disable the caching mechanism to save memory. - LED is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). ## Resources - [A notebook showing how to evaluate LED](https://colab.research.google.com/drive/12INTTR6n64TzS4RrXZxMSXfrOd9Xzamo?usp=sharing). - [A notebook showing how to fine-tune LED](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing). - [Text classification task guide](../tasks/sequence_classification) - [Question answering task guide](../tasks/question_answering) - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## LEDConfig [[autodoc]] LEDConfig ## LEDTokenizer [[autodoc]] LEDTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## LEDTokenizerFast [[autodoc]] LEDTokenizerFast ## LED specific outputs [[autodoc]] models.led.modeling_led.LEDEncoderBaseModelOutput [[autodoc]] models.led.modeling_led.LEDSeq2SeqModelOutput [[autodoc]] models.led.modeling_led.LEDSeq2SeqLMOutput [[autodoc]] models.led.modeling_led.LEDSeq2SeqSequenceClassifierOutput [[autodoc]] models.led.modeling_led.LEDSeq2SeqQuestionAnsweringModelOutput [[autodoc]] models.led.modeling_tf_led.TFLEDEncoderBaseModelOutput [[autodoc]] models.led.modeling_tf_led.TFLEDSeq2SeqModelOutput [[autodoc]] models.led.modeling_tf_led.TFLEDSeq2SeqLMOutput ## LEDModel [[autodoc]] LEDModel - forward ## LEDForConditionalGeneration [[autodoc]] LEDForConditionalGeneration - forward ## LEDForSequenceClassification [[autodoc]] LEDForSequenceClassification - forward ## LEDForQuestionAnswering [[autodoc]] LEDForQuestionAnswering - forward ## TFLEDModel [[autodoc]] TFLEDModel - call ## TFLEDForConditionalGeneration [[autodoc]] TFLEDForConditionalGeneration - call " model_doc/fsmt.md," # FSMT ## Overview FSMT (FairSeq MachineTranslation) models were introduced in [Facebook FAIR's WMT19 News Translation Task Submission](https://arxiv.org/abs/1907.06616) by Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, Sergey Edunov. The abstract of the paper is the following: *This paper describes Facebook FAIR's submission to the WMT19 shared news translation task. We participate in two language pairs and four language directions, English <-> German and English <-> Russian. Following our submission from last year, our baseline systems are large BPE-based transformer models trained with the Fairseq sequence modeling toolkit which rely on sampled back-translations. This year we experiment with different bitext data filtering schemes, as well as with adding filtered back-translated data. We also ensemble and fine-tune our models on domain-specific data, then decode using noisy channel model reranking. Our submissions are ranked first in all four directions of the human evaluation campaign. On En->De, our system significantly outperforms other systems as well as human translations. 
This system improves upon our WMT'18 submission by 4.5 BLEU points.* This model was contributed by [stas](https://huggingface.co/stas). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/wmt19). ## Implementation Notes - FSMT uses source and target vocabulary pairs that aren't combined into one. It doesn't share embeddings tokens either. Its tokenizer is very similar to [`XLMTokenizer`] and the main model is derived from [`BartModel`]. ## FSMTConfig [[autodoc]] FSMTConfig ## FSMTTokenizer [[autodoc]] FSMTTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## FSMTModel [[autodoc]] FSMTModel - forward ## FSMTForConditionalGeneration [[autodoc]] FSMTForConditionalGeneration - forward " model_doc/clip.md," # CLIP ## Overview The CLIP model was proposed in [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and 3. The abstract from the paper is the following: *State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at this https URL.* This model was contributed by [valhalla](https://huggingface.co/valhalla). The original code can be found [here](https://github.com/openai/CLIP). ## Usage tips and example CLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image classification. CLIP uses a ViT like transformer to get visual features and a causal language model to get the text features. Both the text and visual features are then projected to a latent space with identical dimension. 
The dot product between the projected image and text features is then used as a similarity score. To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches, which are then linearly embedded. A [CLS] token is added to serve as a representation of an entire image. The authors also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. The [`CLIPImageProcessor`] can be used to resize (or rescale) and normalize images for the model. The [`CLIPTokenizer`] is used to encode the text. The [`CLIPProcessor`] wraps [`CLIPImageProcessor`] and [`CLIPTokenizer`] into a single instance to both encode the text and prepare the images. The following example shows how to get the image-text similarity scores using [`CLIPProcessor`] and [`CLIPModel`]. thon >>> from PIL import Image >>> import requests >>> from transformers import CLIPProcessor, CLIPModel >>> model = CLIPModel.from_pretrained(""openai/clip-vit-base-patch32"") >>> processor = CLIPProcessor.from_pretrained(""openai/clip-vit-base-patch32"") >>> url = ""http://images.cocodataset.org/val2017/000000039769.jpg"" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(text=[""a photo of a cat"", ""a photo of a dog""], images=image, return_tensors=""pt"", padding=True) >>> outputs = model(**inputs) >>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score >>> probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIP. - [Fine tuning CLIP with Remote Sensing (Satellite) images and captions](https://huggingface.co/blog/fine-tune-clip-rsicd), a blog post about how to fine-tune CLIP with the [RSICD dataset](https://github.com/201528014227051/RSICD_optimal) and a comparison of performance changes due to data augmentation. - This [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text) shows how to train a CLIP-like vision-text dual encoder model using a pre-trained vision and text encoder on the [COCO dataset](https://cocodataset.org/#home). - A [notebook](https://colab.research.google.com/drive/1tuoAC5F4sC7qid56Z0ap-stR3rwdk0ZV?usp=sharing) on how to use a pretrained CLIP for inference with beam search for image captioning. 🌎 **Image retrieval** - A [notebook](https://colab.research.google.com/drive/1bLVwVKpAndpEDHqjzxVPr_9nGrSbuOQd?usp=sharing) on image retrieval using pretrained CLIP and computing the MRR (Mean Reciprocal Rank) score. 🌎 - A [notebook](https://colab.research.google.com/github/deep-diver/image_search_with_natural_language/blob/main/notebooks/Image_Search_CLIP.ipynb) on image retrieval and showing the similarity score. 🌎 - A [notebook](https://colab.research.google.com/drive/1xO-wC_m_GNzgjIBQ4a4znvQkvDoZJvH4?usp=sharing) on how to map images and texts to the same vector space using Multilingual CLIP. 🌎 - A [notebook](https://colab.research.google.com/github/vivien000/clip-demo/blob/master/clip.ipynb#scrollTo=uzdFhRGqiWkR) on how to run CLIP on semantic image search using [Unsplash](https://unsplash.com) and [TMDB](https://www.themoviedb.org/) datasets.
🌎 **Explainability** - A [notebook](https://colab.research.google.com/github/hila-chefer/Transformer-MM-Explainability/blob/main/CLIP_explainability.ipynb) on how to visualize similarity between input token and image segment. 🌎 If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource. ## CLIPConfig [[autodoc]] CLIPConfig - from_text_vision_configs ## CLIPTextConfig [[autodoc]] CLIPTextConfig ## CLIPVisionConfig [[autodoc]] CLIPVisionConfig ## CLIPTokenizer [[autodoc]] CLIPTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## CLIPTokenizerFast [[autodoc]] CLIPTokenizerFast ## CLIPImageProcessor [[autodoc]] CLIPImageProcessor - preprocess ## CLIPFeatureExtractor [[autodoc]] CLIPFeatureExtractor ## CLIPProcessor [[autodoc]] CLIPProcessor ## CLIPModel [[autodoc]] CLIPModel - forward - get_text_features - get_image_features ## CLIPTextModel [[autodoc]] CLIPTextModel - forward ## CLIPTextModelWithProjection [[autodoc]] CLIPTextModelWithProjection - forward ## CLIPVisionModelWithProjection [[autodoc]] CLIPVisionModelWithProjection - forward ## CLIPVisionModel [[autodoc]] CLIPVisionModel - forward ## TFCLIPModel [[autodoc]] TFCLIPModel - call - get_text_features - get_image_features ## TFCLIPTextModel [[autodoc]] TFCLIPTextModel - call ## TFCLIPVisionModel [[autodoc]] TFCLIPVisionModel - call ## FlaxCLIPModel [[autodoc]] FlaxCLIPModel - __call__ - get_text_features - get_image_features ## FlaxCLIPTextModel [[autodoc]] FlaxCLIPTextModel - __call__ ## FlaxCLIPTextModelWithProjection [[autodoc]] FlaxCLIPTextModelWithProjection - __call__ ## FlaxCLIPVisionModel [[autodoc]] FlaxCLIPVisionModel - __call__ " model_doc/bark.md," # Bark ## Overview Bark is a transformer-based text-to-speech model proposed by Suno AI in [suno-ai/bark](https://github.com/suno-ai/bark). Bark is made of 4 main models: - [`BarkSemanticModel`] (also referred to as the 'text' model): a causal auto-regressive transformer model that takes as input tokenized text, and predicts semantic text tokens that capture the meaning of the text. - [`BarkCoarseModel`] (also referred to as the 'coarse acoustics' model): a causal autoregressive transformer, that takes as input the results of the [`BarkSemanticModel`] model. It aims at predicting the first two audio codebooks necessary for EnCodec. - [`BarkFineModel`] (the 'fine acoustics' model), this time a non-causal autoencoder transformer, which iteratively predicts the last codebooks based on the sum of the previous codebooks embeddings. - having predicted all the codebook channels from the [`EncodecModel`], Bark uses it to decode the output audio array. It should be noted that each of the first three modules can support conditional speaker embeddings to condition the output sound according to specific predefined voice. This model was contributed by [Yoach Lacombe (ylacombe)](https://huggingface.co/ylacombe) and [Sanchit Gandhi (sanchit-gandhi)](https://github.com/sanchit-gandhi). The original code can be found [here](https://github.com/suno-ai/bark). ### Optimizing Bark Bark can be optimized with just a few extra lines of code, which **significantly reduces its memory footprint** and **accelerates inference**. 
#### Using half-precision You can speed up inference and reduce memory footprint by 50% simply by loading the model in half-precision. thon from transformers import BarkModel import torch device = ""cuda"" if torch.cuda.is_available() else ""cpu"" model = BarkModel.from_pretrained(""suno/bark-small"", torch_dtype=torch.float16).to(device) #### Using CPU offload As mentioned above, Bark is made up of 4 sub-models, which are called up sequentially during audio generation. In other words, while one sub-model is in use, the other sub-models are idle. If you're using a CUDA device, a simple solution to benefit from an 80% reduction in memory footprint is to offload the submodels from GPU to CPU when they're idle. This operation is called *CPU offloading*. You can use it with one line of code as follows: thon model.enable_cpu_offload() Note that 🤗 Accelerate must be installed before using this feature. [Here's how to install it.](https://huggingface.co/docs/accelerate/basic_tutorials/install) #### Using Better Transformer Better Transformer is an 🤗 Optimum feature that performs kernel fusion under the hood. You can gain 20% to 30% in speed with zero performance degradation. It only requires one line of code to export the model to 🤗 Better Transformer: thon model = model.to_bettertransformer() Note that 🤗 Optimum must be installed before using this feature. [Here's how to install it.](https://huggingface.co/docs/optimum/installation) #### Using Flash Attention 2 Flash Attention 2 is an even faster, optimized version of the previous optimization. ##### Installation First, check whether your hardware is compatible with Flash Attention 2. The latest list of compatible hardware can be found in the [official documentation](https://github.com/Dao-AILab/flash-attention#installation-and-features). If your hardware is not compatible with Flash Attention 2, you can still benefit from attention kernel optimisations through Better Transformer support covered [above](https://huggingface.co/docs/transformers/main/en/model_doc/bark#using-better-transformer). Next, [install](https://github.com/Dao-AILab/flash-attention#installation-and-features) the latest version of Flash Attention 2: ```bash pip install -U flash-attn --no-build-isolation ##### Usage To load a model using Flash Attention 2, we can pass the `use_flash_attention_2` flag to [`.from_pretrained`](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel.from_pretrained). We'll also load the model in half-precision (e.g. `torch.float16`), since it results in almost no degradation to audio quality but significantly lower memory usage and faster inference: thon model = BarkModel.from_pretrained(""suno/bark-small"", torch_dtype=torch.float16, use_flash_attention_2=True).to(device) ##### Performance comparison The following diagram shows the latency for the native attention implementation (no optimisation) against Better Transformer and Flash Attention 2. In all cases, we generate 400 semantic tokens on a 40GB A100 GPU with PyTorch 2.1. 
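As a rough, hypothetical sketch of how such a comparison can be timed on your own hardware (assuming a CUDA device, the `suno/bark-small` checkpoint, and an installed `flash-attn` package; this is not the exact benchmark script behind the reported numbers), generation can be measured with and without Flash Attention 2:

```python
# Hypothetical latency sketch: time Bark generation with and without Flash Attention 2.
# Assumes a CUDA device, the "suno/bark-small" checkpoint and flash-attn installed.
import time

import torch
from transformers import AutoProcessor, BarkModel

device = 'cuda'
processor = AutoProcessor.from_pretrained('suno/bark-small')
inputs = processor('Hello, my dog is cute').to(device)

for use_fa2 in (False, True):
    # load in half-precision, toggling the Flash Attention 2 code path
    model = BarkModel.from_pretrained(
        'suno/bark-small',
        torch_dtype=torch.float16,
        use_flash_attention_2=use_fa2,
    ).to(device)

    torch.cuda.synchronize()
    start = time.perf_counter()
    _ = model.generate(**inputs)
    torch.cuda.synchronize()
    print(f'flash_attention_2={use_fa2}: {time.perf_counter() - start:.2f} s')
```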
Flash Attention 2 is also consistently faster than Better Transformer, and its performance improves even more as batch sizes increase: To put this into perspective, on an NVIDIA A100 and when generating 400 semantic tokens with a batch size of 16, you can get 17 times the [throughput](https://huggingface.co/blog/optimizing-bark#throughput) and still be 2 seconds faster than generating sentences one by one with the native model implementation. In other words, all the samples will be generated 17 times faster. At batch size 8, on an NVIDIA A100, Flash Attention 2 is also 10% faster than Better Transformer, and at batch size 16, 25%. #### Combining optimization techniques You can combine optimization techniques, and use CPU offload, half-precision and Flash Attention 2 (or 🤗 Better Transformer) all at once. thon from transformers import BarkModel import torch device = ""cuda"" if torch.cuda.is_available() else ""cpu"" # load in fp16 and use Flash Attention 2 model = BarkModel.from_pretrained(""suno/bark-small"", torch_dtype=torch.float16, use_flash_attention_2=True).to(device) # enable CPU offload model.enable_cpu_offload() Find out more on inference optimization techniques [here](https://huggingface.co/docs/transformers/perf_infer_gpu_one). ### Usage tips Suno offers a library of voice presets in a number of languages [here](https://suno-ai.notion.site/8b8e8749ed514b0cbf3f699013548683?v=bc67cff786b04b50b3ceb756fd05f68c). These presets are also uploaded in the hub [here](https://huggingface.co/suno/bark-small/tree/main/speaker_embeddings) or [here](https://huggingface.co/suno/bark/tree/main/speaker_embeddings). thon >>> from transformers import AutoProcessor, BarkModel >>> processor = AutoProcessor.from_pretrained(""suno/bark"") >>> model = BarkModel.from_pretrained(""suno/bark"") >>> voice_preset = ""v2/en_speaker_6"" >>> inputs = processor(""Hello, my dog is cute"", voice_preset=voice_preset) >>> audio_array = model.generate(**inputs) >>> audio_array = audio_array.cpu().numpy().squeeze() Bark can generate highly realistic, **multilingual** speech as well as other audio - including music, background noise and simple sound effects. thon >>> # Multilingual speech - simplified Chinese >>> inputs = processor(""惊人的!我会说中文"") >>> # Multilingual speech - French - let's use a voice_preset as well >>> inputs = processor(""Incroyable! Je peux générer du son."", voice_preset=""fr_speaker_5"") >>> # Bark can also generate music. You can help it out by adding music notes around your lyrics. >>> inputs = processor(""♪ Hello, my dog is cute ♪"") >>> audio_array = model.generate(**inputs) >>> audio_array = audio_array.cpu().numpy().squeeze() The model can also produce **nonverbal communications** like laughing, sighing and crying. 
thon >>> # Adding non-speech cues to the input text >>> inputs = processor(""Hello uh [clears throat], my dog is cute [laughter]"") >>> audio_array = model.generate(**inputs) >>> audio_array = audio_array.cpu().numpy().squeeze() To save the audio, simply take the sample rate from the model config and some scipy utility: thon >>> from scipy.io.wavfile import write as write_wav >>> # save audio to disk, but first take the sample rate from the model config >>> sample_rate = model.generation_config.sample_rate >>> write_wav(""bark_generation.wav"", sample_rate, audio_array) ## BarkConfig [[autodoc]] BarkConfig - all ## BarkProcessor [[autodoc]] BarkProcessor - all - __call__ ## BarkModel [[autodoc]] BarkModel - generate - enable_cpu_offload ## BarkSemanticModel [[autodoc]] BarkSemanticModel - forward ## BarkCoarseModel [[autodoc]] BarkCoarseModel - forward ## BarkFineModel [[autodoc]] BarkFineModel - forward ## BarkCausalModel [[autodoc]] BarkCausalModel - forward ## BarkCoarseConfig [[autodoc]] BarkCoarseConfig - all ## BarkFineConfig [[autodoc]] BarkFineConfig - all ## BarkSemanticConfig [[autodoc]] BarkSemanticConfig - all " model_doc/speech_to_text_2.md," # Speech2Text2 ## Overview The Speech2Text2 model is used together with [Wav2Vec2](wav2vec2) for Speech Translation models proposed in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. Speech2Text2 is a *decoder-only* transformer model that can be used with any speech *encoder-only*, such as [Wav2Vec2](wav2vec2) or [HuBERT](hubert) for Speech-to-Text tasks. Please refer to the [SpeechEncoderDecoder](speech-encoder-decoder) class on how to combine Speech2Text2 with any speech *encoder-only* model. This model was contributed by [Patrick von Platen](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://github.com/pytorch/fairseq/blob/1f7ef9ed1e1061f8c7f88f8b94c7186834398690/fairseq/models/wav2vec/wav2vec2_asr.py#L266). ## Usage tips - Speech2Text2 achieves state-of-the-art results on the CoVoST Speech Translation dataset. For more information, see the [official models](https://huggingface.co/models?other=speech2text2) . - Speech2Text2 is always used within the [SpeechEncoderDecoder](speech-encoder-decoder) framework. - Speech2Text2's tokenizer is based on [fastBPE](https://github.com/glample/fastBPE). ## Inference Speech2Text2's [`SpeechEncoderDecoderModel`] model accepts raw waveform input values from speech and makes use of [`~generation.GenerationMixin.generate`] to translate the input speech autoregressively to the target language. The [`Wav2Vec2FeatureExtractor`] class is responsible for preprocessing the input speech and [`Speech2Text2Tokenizer`] decodes the generated target tokens to the target string. The [`Speech2Text2Processor`] wraps [`Wav2Vec2FeatureExtractor`] and [`Speech2Text2Tokenizer`] into a single instance to both extract the input features and decode the predicted token ids. 
- Step-by-step Speech Translation thon >>> import torch >>> from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel >>> from datasets import load_dataset >>> import soundfile as sf >>> model = SpeechEncoderDecoderModel.from_pretrained(""facebook/s2t-wav2vec2-large-en-de"") >>> processor = Speech2Text2Processor.from_pretrained(""facebook/s2t-wav2vec2-large-en-de"") >>> def map_to_array(batch): speech, _ = sf.read(batch[""file""]) batch[""speech""] = speech return batch >>> ds = load_dataset(""hf-internal-testing/librispeech_asr_dummy"", ""clean"", split=""validation"") >>> ds = ds.map(map_to_array) >>> inputs = processor(ds[""speech""][0], sampling_rate=16_000, return_tensors=""pt"") >>> generated_ids = model.generate(inputs=inputs[""input_values""], attention_mask=inputs[""attention_mask""]) >>> transcription = processor.batch_decode(generated_ids) - Speech Translation via Pipelines The automatic speech recognition pipeline can also be used to translate speech in just a couple lines of code thon >>> from datasets import load_dataset >>> from transformers import pipeline >>> librispeech_en = load_dataset(""hf-internal-testing/librispeech_asr_dummy"", ""clean"", split=""validation"") >>> asr = pipeline( ""automatic-speech-recognition"", model=""facebook/s2t-wav2vec2-large-en-de"", feature_extractor=""facebook/s2t-wav2vec2-large-en-de"", ) >>> translation_de = asr(librispeech_en[0][""file""]) See [model hub](https://huggingface.co/models?filter=speech2text2) to look for Speech2Text2 checkpoints. ## Resources - [Causal language modeling task guide](../tasks/language_modeling) ## Speech2Text2Config [[autodoc]] Speech2Text2Config ## Speech2TextTokenizer [[autodoc]] Speech2Text2Tokenizer - batch_decode - decode - save_vocabulary ## Speech2Text2Processor [[autodoc]] Speech2Text2Processor - __call__ - from_pretrained - save_pretrained - batch_decode - decode ## Speech2Text2ForCausalLM [[autodoc]] Speech2Text2ForCausalLM - forward " model_doc/fuyu.md," # Fuyu ## Overview The Fuyu model was created by [ADEPT](https://www.adept.ai/blog/fuyu-8b), and authored by Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, Sağnak Taşırlar. The authors introduced Fuyu-8B, a decoder-only multimodal model based on the classic transformers architecture, with query and key normalization. A linear encoder is added to create multimodal embeddings from image inputs. By treating image tokens like text tokens and using a special image-newline character, the model knows when an image line ends. Image positional embeddings are removed. This avoids the need for different training phases for various image resolutions. With 8 billion parameters and licensed under CC-BY-NC, Fuyu-8B is notable for its ability to handle both text and images, its impressive context size of 16K, and its overall performance. The `Fuyu` models were trained using `bfloat16`, but the original inference uses `float16` The checkpoints uploaded on the hub use `torch_dtype = 'float16'` which will be used by the `AutoModel` API to cast the checkpoints from `torch.float32` to `torch.float16`. The `dtype` of the online weights is mostly irrelevant, unless you are using `torch_dtype=""auto""` when initializing a model using `model = AutoModelForCausalLM.from_pretrained(""path"", torch_dtype = ""auto"")`. The reason is that the model will first be downloaded ( using the `dtype` of the checkpoints online) then it will be cast to the default `dtype` of `torch` (becomes `torch.float32`). 
Users should specify the `torch_dtype` they want, and if they don't, it will default to `torch.float32`. Fine-tuning the model in `float16` is not recommended and is known to produce `nan`; as such, the model should be fine-tuned in `bfloat16`. Tips: - To convert the model, you need to clone the original repository using `git clone https://github.com/persimmon-ai-labs/adept-inference`, then get the checkpoints: ```bash git clone https://github.com/persimmon-ai-labs/adept-inference wget path/to/fuyu-8b-model-weights.tar tar -xvf fuyu-8b-model-weights.tar python src/transformers/models/fuyu/convert_fuyu_weights_to_hf.py --input_dir /path/to/downloaded/fuyu/weights/ --output_dir /output/path \ --pt_model_path /path/to/fuyu_8b_release/iter_0001251/mp_rank_00/model_optim_rng.pt --ada_lib_path /path/to/adept-inference For the chat model: ```bash wget https://axtkn4xl5cip.objectstorage.us-phoenix-1.oci.customer-oci.com/n/axtkn4xl5cip/b/adept-public-data/o/8b_chat_model_release.tar tar -xvf 8b_chat_model_release.tar Then, the model can be loaded via: from transformers import FuyuForCausalLM model = FuyuForCausalLM.from_pretrained('/output/path') Inputs need to be passed through a specific Processor to have the correct formats. A processor requires an image_processor and a tokenizer. Hence, inputs can be loaded via: import io import requests from PIL import Image from transformers import AutoTokenizer from transformers.models.fuyu.processing_fuyu import FuyuProcessor from transformers.models.fuyu.image_processing_fuyu import FuyuImageProcessor tokenizer = AutoTokenizer.from_pretrained('adept-hf-collab/fuyu-8b') image_processor = FuyuImageProcessor() processor = FuyuProcessor(image_processor=image_processor, tokenizer=tokenizer) text_prompt = ""Generate a coco-style caption.\\n"" bus_image_url = ""https://huggingface.co/datasets/hf-internal-testing/fixtures-captioning/resolve/main/bus.png"" bus_image_pil = Image.open(io.BytesIO(requests.get(bus_image_url).content)) inputs_to_model = processor(text=text_prompt, images=bus_image_pil) This model was contributed by [Molbap](https://huggingface.co/Molbap). The original code can be found [here](https://github.com/persimmon-ai-labs/adept-inference). - Fuyu uses a `sentencepiece`-based tokenizer, with a `Unigram` model. It supports bytefallback, which is only available in `tokenizers==0.14.0` for the fast tokenizer. The `LlamaTokenizer` is used as it is a standard wrapper around sentencepiece. - The authors suggest using the following prompt for image captioning: `f""Generate a coco-style caption.\\n""` ## FuyuConfig [[autodoc]] FuyuConfig ## FuyuForCausalLM [[autodoc]] FuyuForCausalLM - forward ## FuyuImageProcessor [[autodoc]] FuyuImageProcessor - __call__ ## FuyuProcessor [[autodoc]] FuyuProcessor - __call__ " model_doc/gpt_neox_japanese.md," # GPT-NeoX-Japanese ## Overview We introduce GPT-NeoX-Japanese, which is an autoregressive language model for Japanese, trained on top of [https://github.com/EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox). Japanese is a unique language with its large vocabulary and a combination of hiragana, katakana, and kanji writing scripts. To address this distinct structure of the Japanese language, we use a [special sub-word tokenizer](https://github.com/tanreinama/Japanese-BPEEncoder_V2). We are very grateful to *tanreinama* for open-sourcing this incredibly helpful tokenizer.
Following the recommendations from Google's research on [PaLM](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html), we have removed bias parameters from transformer blocks, achieving better model performance. Please refer to [this article](https://medium.com/ml-abeja/training-a-better-gpt-2-93b157662ae4) for details. Development of the model was led by [Shinya Otani](https://github.com/SO0529), [Takayoshi Makabe](https://github.com/spider-man-tm), [Anuj Arora](https://github.com/Anuj040), and [Kyo Hattori](https://github.com/go5paopao) from [ABEJA, Inc.](https://www.abejainc.com/). For more information on this model-building activity, please see [here (ja)](https://tech-blog.abeja.asia/entry/abeja-gpt-project-202207). ### Usage example The `generate()` method can be used to generate text using the GPT NeoX Japanese model. thon >>> from transformers import GPTNeoXJapaneseForCausalLM, GPTNeoXJapaneseTokenizer >>> model = GPTNeoXJapaneseForCausalLM.from_pretrained(""abeja/gpt-neox-japanese-2.7b"") >>> tokenizer = GPTNeoXJapaneseTokenizer.from_pretrained(""abeja/gpt-neox-japanese-2.7b"") >>> prompt = ""人とAIが協調するためには、"" >>> input_ids = tokenizer(prompt, return_tensors=""pt"").input_ids >>> gen_tokens = model.generate( input_ids, do_sample=True, temperature=0.9, max_length=100, ) >>> gen_text = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)[0] >>> print(gen_text) 人とAIが協調するためには、AIと人が共存し、AIを正しく理解する必要があります。 ## Resources - [Causal language modeling task guide](../tasks/language_modeling) ## GPTNeoXJapaneseConfig [[autodoc]] GPTNeoXJapaneseConfig ## GPTNeoXJapaneseTokenizer [[autodoc]] GPTNeoXJapaneseTokenizer ## GPTNeoXJapaneseModel [[autodoc]] GPTNeoXJapaneseModel - forward ## GPTNeoXJapaneseForCausalLM [[autodoc]] GPTNeoXJapaneseForCausalLM - forward " model_doc/dpr.md," # DPR ## Overview Dense Passage Retrieval (DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. It was introduced in [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih. The abstract from the paper is the following: *Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. When evaluated on a wide range of open-domain QA datasets, our dense retriever outperforms a strong Lucene-BM25 system largely by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art on multiple open-domain QA benchmarks.* This model was contributed by [lhoestq](https://huggingface.co/lhoestq). The original code can be found [here](https://github.com/facebookresearch/DPR). ## Usage tips - DPR consists of three models (a minimal usage sketch follows this list): * Question encoder: encodes questions as vectors * Context encoder: encodes contexts as vectors * Reader: extracts the answer to the questions from the retrieved contexts, along with a relevance score (high if the inferred span actually answers the question).
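The snippet below is a minimal sketch of how the question and context encoders can be used together; it assumes the public `facebook/dpr-question_encoder-single-nq-base` and `facebook/dpr-ctx_encoder-single-nq-base` checkpoints, and scores relevance as the dot product between the pooled question and passage embeddings (the reader is left out for brevity).

```python
import torch
from transformers import (
    DPRContextEncoder,
    DPRContextEncoderTokenizer,
    DPRQuestionEncoder,
    DPRQuestionEncoderTokenizer,
)

# Question encoder: maps a question to a dense vector
q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained('facebook/dpr-question_encoder-single-nq-base')
q_encoder = DPRQuestionEncoder.from_pretrained('facebook/dpr-question_encoder-single-nq-base')

# Context encoder: maps a passage to a dense vector
ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained('facebook/dpr-ctx_encoder-single-nq-base')
ctx_encoder = DPRContextEncoder.from_pretrained('facebook/dpr-ctx_encoder-single-nq-base')

question = 'What is the capital of France?'
passage = 'Paris is the capital and most populous city of France.'

with torch.no_grad():
    q_emb = q_encoder(**q_tokenizer(question, return_tensors='pt')).pooler_output
    ctx_emb = ctx_encoder(**ctx_tokenizer(passage, return_tensors='pt')).pooler_output

# Relevance is scored by the dot product between question and passage embeddings
score = q_emb @ ctx_emb.T
print(score)
```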
## DPRConfig [[autodoc]] DPRConfig ## DPRContextEncoderTokenizer [[autodoc]] DPRContextEncoderTokenizer ## DPRContextEncoderTokenizerFast [[autodoc]] DPRContextEncoderTokenizerFast ## DPRQuestionEncoderTokenizer [[autodoc]] DPRQuestionEncoderTokenizer ## DPRQuestionEncoderTokenizerFast [[autodoc]] DPRQuestionEncoderTokenizerFast ## DPRReaderTokenizer [[autodoc]] DPRReaderTokenizer ## DPRReaderTokenizerFast [[autodoc]] DPRReaderTokenizerFast ## DPR specific outputs [[autodoc]] models.dpr.modeling_dpr.DPRContextEncoderOutput [[autodoc]] models.dpr.modeling_dpr.DPRQuestionEncoderOutput [[autodoc]] models.dpr.modeling_dpr.DPRReaderOutput ## DPRContextEncoder [[autodoc]] DPRContextEncoder - forward ## DPRQuestionEncoder [[autodoc]] DPRQuestionEncoder - forward ## DPRReader [[autodoc]] DPRReader - forward ## TFDPRContextEncoder [[autodoc]] TFDPRContextEncoder - call ## TFDPRQuestionEncoder [[autodoc]] TFDPRQuestionEncoder - call ## TFDPRReader [[autodoc]] TFDPRReader - call " model_doc/align.md," # ALIGN ## Overview The ALIGN model was proposed in [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. ALIGN is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image classification. ALIGN features a dual-encoder architecture with [EfficientNet](efficientnet) as its vision encoder and [BERT](bert) as its text encoder, and learns to align visual and text representations with contrastive learning. Unlike previous work, ALIGN leverages a massive noisy dataset and shows that the scale of the corpus can be used to achieve SOTA representations with a simple recipe. The abstract from the paper is the following: *Pre-trained representations are becoming crucial for many NLP and perception tasks. While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without expensive filtering or post-processing steps in the Conceptual Captions dataset. A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to classification tasks such as ImageNet and VTAB. The aligned visual and language representations enables zero-shot image classification and also set new state-of-the-art results on Flickr30K and MSCOCO image-text retrieval benchmarks, even when compared with more sophisticated cross-attention models. 
The representations also enable cross-modality search with complex text and text + image queries.* This model was contributed by [Alara Dirik](https://huggingface.co/adirik). The original code is not released, this implementation is based on the Kakao Brain implementation based on the original paper. ## Usage example ALIGN uses EfficientNet to get visual features and BERT to get the text features. Both the text and visual features are then projected to a latent space with identical dimension. The dot product between the projected image and text features is then used as a similarity score. [`AlignProcessor`] wraps [`EfficientNetImageProcessor`] and [`BertTokenizer`] into a single instance to both encode the text and preprocess the images. The following example shows how to get the image-text similarity scores using [`AlignProcessor`] and [`AlignModel`]. thon import requests import torch from PIL import Image from transformers import AlignProcessor, AlignModel processor = AlignProcessor.from_pretrained(""kakaobrain/align-base"") model = AlignModel.from_pretrained(""kakaobrain/align-base"") url = ""http://images.cocodataset.org/val2017/000000039769.jpg"" image = Image.open(requests.get(url, stream=True).raw) candidate_labels = [""an image of a cat"", ""an image of a dog""] inputs = processor(text=candidate_labels, images=image, return_tensors=""pt"") with torch.no_grad(): outputs = model(**inputs) # this is the image-text similarity score logits_per_image = outputs.logits_per_image # we can take the softmax to get the label probabilities probs = logits_per_image.softmax(dim=1) print(probs) ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ALIGN. - A blog post on [ALIGN and the COYO-700M dataset](https://huggingface.co/blog/vit-align). - A zero-shot image classification [demo](https://huggingface.co/spaces/adirik/ALIGN-zero-shot-image-classification). - [Model card](https://huggingface.co/kakaobrain/align-base) of `kakaobrain/align-base` model. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource. ## AlignConfig [[autodoc]] AlignConfig - from_text_vision_configs ## AlignTextConfig [[autodoc]] AlignTextConfig ## AlignVisionConfig [[autodoc]] AlignVisionConfig ## AlignProcessor [[autodoc]] AlignProcessor ## AlignModel [[autodoc]] AlignModel - forward - get_text_features - get_image_features ## AlignTextModel [[autodoc]] AlignTextModel - forward ## AlignVisionModel [[autodoc]] AlignVisionModel - forward " model_doc/nat.md," # Neighborhood Attention Transformer ## Overview NAT was proposed in [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. It is a hierarchical vision transformer based on Neighborhood Attention, a sliding-window self attention pattern. The abstract from the paper is the following: *We present Neighborhood Attention (NA), the first efficient and scalable sliding-window attention mechanism for vision. NA is a pixel-wise operation, localizing self attention (SA) to the nearest neighboring pixels, and therefore enjoys a linear time and space complexity compared to the quadratic complexity of SA. 
The sliding-window pattern allows NA's receptive field to grow without needing extra pixel shifts, and preserves translational equivariance, unlike Swin Transformer's Window Self Attention (WSA). We develop NATTEN (Neighborhood Attention Extension), a Python package with efficient C++ and CUDA kernels, which allows NA to run up to 40% faster than Swin's WSA while using up to 25% less memory. We further present Neighborhood Attention Transformer (NAT), a new hierarchical transformer design based on NA that boosts image classification and downstream vision performance. Experimental results on NAT are competitive; NAT-Tiny reaches 83.2% top-1 accuracy on ImageNet, 51.4% mAP on MS-COCO and 48.4% mIoU on ADE20K, which is 1.9% ImageNet accuracy, 1.0% COCO mAP, and 2.6% ADE20K mIoU improvement over a Swin model with similar size. * Neighborhood Attention compared to other attention patterns. Taken from the original paper. This model was contributed by [Ali Hassani](https://huggingface.co/alihassanijr). The original code can be found [here](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer). ## Usage tips - One can use the [`AutoImageProcessor`] API to prepare images for the model. - NAT can be used as a *backbone*. When `output_hidden_states = True`, it will output both `hidden_states` and `reshaped_hidden_states`. The `reshaped_hidden_states` have a shape of `(batch, num_channels, height, width)` rather than `(batch_size, height, width, num_channels)`. Notes: - NAT depends on [NATTEN](https://github.com/SHI-Labs/NATTEN/)'s implementation of Neighborhood Attention. You can install it with pre-built wheels for Linux by referring to [shi-labs.com/natten](https://shi-labs.com/natten), or build on your system by running `pip install natten`. Note that the latter will likely take time to compile. NATTEN does not support Windows devices yet. - Patch size of 4 is only supported at the moment. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with NAT. - [`NatForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## NatConfig [[autodoc]] NatConfig ## NatModel [[autodoc]] NatModel - forward ## NatForImageClassification [[autodoc]] NatForImageClassification - forward " model_doc/pop2piano.md," # Pop2Piano ## Overview The Pop2Piano model was proposed in [Pop2Piano : Pop Audio-based Piano Cover Generation](https://arxiv.org/abs/2211.00895) by Jongho Choi and Kyogu Lee. Piano covers of pop music are widely enjoyed, but generating them from music is not a trivial task. It requires great expertise with playing piano as well as knowing different characteristics and melodies of a song. With Pop2Piano you can directly generate a cover from a song's audio waveform. It is the first model to directly generate a piano cover from pop audio without melody and chord extraction modules. Pop2Piano is an encoder-decoder Transformer model based on [T5](https://arxiv.org/pdf/1910.10683.pdf). 
The input audio is transformed to its waveform and passed to the encoder, which transforms it to a latent representation. The decoder uses these latent representations to generate token ids in an autoregressive way. Each token id corresponds to one of four different token types: time, velocity, note and 'special'. The token ids are then decoded to their equivalent MIDI file. The abstract from the paper is the following: *Piano covers of pop music are enjoyed by many people. However, the task of automatically generating piano covers of pop music is still understudied. This is partly due to the lack of synchronized {Pop, Piano Cover} data pairs, which made it challenging to apply the latest data-intensive deep learning-based methods. To leverage the power of the data-driven approach, we make a large amount of paired and synchronized {Pop, Piano Cover} data using an automated pipeline. In this paper, we present Pop2Piano, a Transformer network that generates piano covers given waveforms of pop music. To the best of our knowledge, this is the first model to generate a piano cover directly from pop audio without using melody and chord extraction modules. We show that Pop2Piano, trained with our dataset, is capable of producing plausible piano covers.* This model was contributed by [Susnato Dhar](https://huggingface.co/susnato). The original code can be found [here](https://github.com/sweetcocoa/pop2piano). ## Usage tips * To use Pop2Piano, you will need to install the 🤗 Transformers library, as well as the following third party modules: pip install pretty-midi==0.2.9 essentia==2.1b6.dev1034 librosa scipy Please note that you may need to restart your runtime after installation. * Pop2Piano is an Encoder-Decoder based model like T5. * Pop2Piano can be used to generate midi-audio files for a given audio sequence. * Choosing different composers in `Pop2PianoForConditionalGeneration.generate()` can lead to variety of different results. * Setting the sampling rate to 44.1 kHz when loading the audio file can give good performance. * Though Pop2Piano was mainly trained on Korean Pop music, it also does pretty well on other Western Pop or Hip Hop songs. ## Examples - Example using HuggingFace Dataset: thon >>> from datasets import load_dataset >>> from transformers import Pop2PianoForConditionalGeneration, Pop2PianoProcessor >>> model = Pop2PianoForConditionalGeneration.from_pretrained(""sweetcocoa/pop2piano"") >>> processor = Pop2PianoProcessor.from_pretrained(""sweetcocoa/pop2piano"") >>> ds = load_dataset(""sweetcocoa/pop2piano_ci"", split=""test"") >>> inputs = processor( audio=ds[""audio""][0][""array""], sampling_rate=ds[""audio""][0][""sampling_rate""], return_tensors=""pt"" ) >>> model_output = model.generate(input_features=inputs[""input_features""], composer=""composer1"") >>> tokenizer_output = processor.batch_decode( token_ids=model_output, feature_extractor_output=inputs )[""pretty_midi_objects""][0] >>> tokenizer_output.write(""./Outputs/midi_output.mid"") - Example using your own audio file: thon >>> import librosa >>> from transformers import Pop2PianoForConditionalGeneration, Pop2PianoProcessor >>> audio, sr = librosa.load("""", sr=44100) # feel free to change the sr to a suitable value. 
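>>> # load the pretrained Pop2Piano checkpoint and its processor from the Hugging Face Hub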
>>> model = Pop2PianoForConditionalGeneration.from_pretrained(""sweetcocoa/pop2piano"") >>> processor = Pop2PianoProcessor.from_pretrained(""sweetcocoa/pop2piano"") >>> inputs = processor(audio=audio, sampling_rate=sr, return_tensors=""pt"") >>> model_output = model.generate(input_features=inputs[""input_features""], composer=""composer1"") >>> tokenizer_output = processor.batch_decode( token_ids=model_output, feature_extractor_output=inputs )[""pretty_midi_objects""][0] >>> tokenizer_output.write(""./Outputs/midi_output.mid"") - Example of processing multiple audio files in batch: thon >>> import librosa >>> from transformers import Pop2PianoForConditionalGeneration, Pop2PianoProcessor >>> # feel free to change the sr to a suitable value. >>> audio1, sr1 = librosa.load("""", sr=44100) >>> audio2, sr2 = librosa.load("""", sr=44100) >>> model = Pop2PianoForConditionalGeneration.from_pretrained(""sweetcocoa/pop2piano"") >>> processor = Pop2PianoProcessor.from_pretrained(""sweetcocoa/pop2piano"") >>> inputs = processor(audio=[audio1, audio2], sampling_rate=[sr1, sr2], return_attention_mask=True, return_tensors=""pt"") >>> # Since we now generating in batch(2 audios) we must pass the attention_mask >>> model_output = model.generate( input_features=inputs[""input_features""], attention_mask=inputs[""attention_mask""], composer=""composer1"", ) >>> tokenizer_output = processor.batch_decode( token_ids=model_output, feature_extractor_output=inputs )[""pretty_midi_objects""] >>> # Since we now have 2 generated MIDI files >>> tokenizer_output[0].write(""./Outputs/midi_output1.mid"") >>> tokenizer_output[1].write(""./Outputs/midi_output2.mid"") - Example of processing multiple audio files in batch (Using `Pop2PianoFeatureExtractor` and `Pop2PianoTokenizer`): thon >>> import librosa >>> from transformers import Pop2PianoForConditionalGeneration, Pop2PianoFeatureExtractor, Pop2PianoTokenizer >>> # feel free to change the sr to a suitable value. >>> audio1, sr1 = librosa.load("""", sr=44100) >>> audio2, sr2 = librosa.load("""", sr=44100) >>> model = Pop2PianoForConditionalGeneration.from_pretrained(""sweetcocoa/pop2piano"") >>> feature_extractor = Pop2PianoFeatureExtractor.from_pretrained(""sweetcocoa/pop2piano"") >>> tokenizer = Pop2PianoTokenizer.from_pretrained(""sweetcocoa/pop2piano"") >>> inputs = feature_extractor( audio=[audio1, audio2], sampling_rate=[sr1, sr2], return_attention_mask=True, return_tensors=""pt"", ) >>> # Since we now generating in batch(2 audios) we must pass the attention_mask >>> model_output = model.generate( input_features=inputs[""input_features""], attention_mask=inputs[""attention_mask""], composer=""composer1"", ) >>> tokenizer_output = tokenizer.batch_decode( token_ids=model_output, feature_extractor_output=inputs )[""pretty_midi_objects""] >>> # Since we now have 2 generated MIDI files >>> tokenizer_output[0].write(""./Outputs/midi_output1.mid"") >>> tokenizer_output[1].write(""./Outputs/midi_output2.mid"") ## Pop2PianoConfig [[autodoc]] Pop2PianoConfig ## Pop2PianoFeatureExtractor [[autodoc]] Pop2PianoFeatureExtractor - __call__ ## Pop2PianoForConditionalGeneration [[autodoc]] Pop2PianoForConditionalGeneration - forward - generate ## Pop2PianoTokenizer [[autodoc]] Pop2PianoTokenizer - __call__ ## Pop2PianoProcessor [[autodoc]] Pop2PianoProcessor - __call__ " model_doc/mctct.md," # M-CTC-T This model is in maintenance mode only, so we won't accept any new PRs changing its code. 
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0. You can do so by running the following command: `pip install -U transformers==4.30.0`. ## Overview The M-CTC-T model was proposed in [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. The model is a 1B-param transformer encoder, with a CTC head over 8065 character labels and a language identification head over 60 language ID labels. It is trained on Common Voice (version 6.1, December 2020 release) and VoxPopuli. After training on Common Voice and VoxPopuli, the model is trained on Common Voice only. The labels are unnormalized character-level transcripts (punctuation and capitalization are not removed). The model takes as input Mel filterbank features from a 16Khz audio signal. The abstract from the paper is the following: *Semi-supervised learning through pseudo-labeling has become a staple of state-of-the-art monolingual speech recognition systems. In this work, we extend pseudo-labeling to massively multilingual speech recognition with 60 languages. We propose a simple pseudo-labeling recipe that works well even with low-resource languages: train a supervised multilingual model, fine-tune it with semi-supervised learning on a target language, generate pseudo-labels for that language, and train a final model using pseudo-labels for all languages, either from scratch or by fine-tuning. Experiments on the labeled Common Voice and unlabeled VoxPopuli datasets show that our recipe can yield a model with better performance for many languages that also transfers well to LibriSpeech.* This model was contributed by [cwkeam](https://huggingface.co/cwkeam). The original code can be found [here](https://github.com/flashlight/wav2letter/tree/main/recipes/mling_pl). ## Usage tips The PyTorch version of this model is only available in torch 1.9 and higher. ## Resources - [Automatic speech recognition task guide](../tasks/asr) ## MCTCTConfig [[autodoc]] MCTCTConfig ## MCTCTFeatureExtractor [[autodoc]] MCTCTFeatureExtractor - __call__ ## MCTCTProcessor [[autodoc]] MCTCTProcessor - __call__ - from_pretrained - save_pretrained - batch_decode - decode ## MCTCTModel [[autodoc]] MCTCTModel - forward ## MCTCTForCTC [[autodoc]] MCTCTForCTC - forward " model_doc/rembert.md," # RemBERT ## Overview The RemBERT model was proposed in [Rethinking Embedding Coupling in Pre-trained Language Models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, Melvin Johnson, Sebastian Ruder. The abstract from the paper is the following: *We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that allocating additional capacity to the output embedding provides benefits to the model that persist through the fine-tuning stage even though the output embedding is discarded after pre-training. 
Our analysis shows that larger output embeddings prevent the model's last layers from overspecializing to the pre-training task and encourage Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the number of parameters at the fine-tuning stage.* ## Usage tips For fine-tuning, RemBERT can be thought of as a bigger version of mBERT with an ALBERT-like factorization of the embedding layer. The embeddings are not tied in pre-training, in contrast with BERT, which enables smaller input embeddings (preserved during fine-tuning) and bigger output embeddings (discarded at fine-tuning). The tokenizer is also similar to the Albert one rather than the BERT one. ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## RemBertConfig [[autodoc]] RemBertConfig ## RemBertTokenizer [[autodoc]] RemBertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## RemBertTokenizerFast [[autodoc]] RemBertTokenizerFast - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## RemBertModel [[autodoc]] RemBertModel - forward ## RemBertForCausalLM [[autodoc]] RemBertForCausalLM - forward ## RemBertForMaskedLM [[autodoc]] RemBertForMaskedLM - forward ## RemBertForSequenceClassification [[autodoc]] RemBertForSequenceClassification - forward ## RemBertForMultipleChoice [[autodoc]] RemBertForMultipleChoice - forward ## RemBertForTokenClassification [[autodoc]] RemBertForTokenClassification - forward ## RemBertForQuestionAnswering [[autodoc]] RemBertForQuestionAnswering - forward ## TFRemBertModel [[autodoc]] TFRemBertModel - call ## TFRemBertForMaskedLM [[autodoc]] TFRemBertForMaskedLM - call ## TFRemBertForCausalLM [[autodoc]] TFRemBertForCausalLM - call ## TFRemBertForSequenceClassification [[autodoc]] TFRemBertForSequenceClassification - call ## TFRemBertForMultipleChoice [[autodoc]] TFRemBertForMultipleChoice - call ## TFRemBertForTokenClassification [[autodoc]] TFRemBertForTokenClassification - call ## TFRemBertForQuestionAnswering [[autodoc]] TFRemBertForQuestionAnswering - call " model_doc/tapex.md," # TAPEX This model is in maintenance mode only, we don't accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0. You can do so by running the following command: `pip install -U transformers==4.30.0`. ## Overview The TAPEX model was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. TAPEX pre-trains a BART model to solve synthetic SQL queries, after which it can be fine-tuned to answer natural language questions related to tabular data, as well as performing table fact checking. 
TAPEX has been fine-tuned on several datasets: - [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253) (Sequential Question Answering by Microsoft) - [WTQ](https://github.com/ppasupat/WikiTableQuestions) (Wiki Table Questions by Stanford University) - [WikiSQL](https://github.com/salesforce/WikiSQL) (by Salesforce) - [TabFact](https://tabfact.github.io/) (by USCB NLP Lab). The abstract from the paper is the following: *Recent progress in language model pre-training has achieved a great success via leveraging large-scale unstructured textual data. However, it is still a challenge to apply pre-training on structured tabular data due to the absence of large-scale high-quality tabular data. In this paper, we propose TAPEX to show that table pre-training can be achieved by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries and their execution outputs. TAPEX addresses the data scarcity challenge via guiding the language model to mimic a SQL executor on the diverse, large-scale and high-quality synthetic corpus. We evaluate TAPEX on four benchmark datasets. Experimental results demonstrate that TAPEX outperforms previous table pre-training approaches by a large margin and achieves new state-of-the-art results on all of them. This includes improvements on the weakly-supervised WikiSQL denotation accuracy to 89.5% (+2.3%), the WikiTableQuestions denotation accuracy to 57.5% (+4.8%), the SQA denotation accuracy to 74.5% (+3.5%), and the TabFact accuracy to 84.2% (+3.2%). To our knowledge, this is the first work to exploit table pre-training via synthetic executable programs and to achieve new state-of-the-art results on various downstream tasks.* ## Usage tips - TAPEX is a generative (seq2seq) model. One can directly plug in the weights of TAPEX into a BART model. - TAPEX has checkpoints on the hub that are either pre-trained only, or fine-tuned on WTQ, SQA, WikiSQL and TabFact. - Sentences + tables are presented to the model as `sentence + "" "" + linearized table`. The linearized table has the following format: `col: col1 | col2 | col 3 row 1 : val1 | val2 | val3 row 2 : `. - TAPEX has its own tokenizer, that allows to prepare all data for the model easily. One can pass Pandas DataFrames and strings to the tokenizer, and it will automatically create the `input_ids` and `attention_mask` (as shown in the usage examples below). ### Usage: inference Below, we illustrate how to use TAPEX for table question answering. As one can see, one can directly plug in the weights of TAPEX into a BART model. We use the [Auto API](auto), which will automatically instantiate the appropriate tokenizer ([`TapexTokenizer`]) and model ([`BartForConditionalGeneration`]) for us, based on the configuration file of the checkpoint on the hub. 
thon >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> import pandas as pd >>> tokenizer = AutoTokenizer.from_pretrained(""microsoft/tapex-large-finetuned-wtq"") >>> model = AutoModelForSeq2SeqLM.from_pretrained(""microsoft/tapex-large-finetuned-wtq"") >>> # prepare table + question >>> data = {""Actors"": [""Brad Pitt"", ""Leonardo Di Caprio"", ""George Clooney""], ""Number of movies"": [""87"", ""53"", ""69""]} >>> table = pd.DataFrame.from_dict(data) >>> question = ""how many movies does Leonardo Di Caprio have?"" >>> encoding = tokenizer(table, question, return_tensors=""pt"") >>> # let the model generate an answer autoregressively >>> outputs = model.generate(**encoding) >>> # decode back to text >>> predicted_answer = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0] >>> print(predicted_answer) 53 Note that [`TapexTokenizer`] also supports batched inference. Hence, one can provide a batch of different tables/questions, or a batch of a single table and multiple questions, or a batch of a single query and multiple tables. Let's illustrate this: thon >>> # prepare table + question >>> data = {""Actors"": [""Brad Pitt"", ""Leonardo Di Caprio"", ""George Clooney""], ""Number of movies"": [""87"", ""53"", ""69""]} >>> table = pd.DataFrame.from_dict(data) >>> questions = [ ""how many movies does Leonardo Di Caprio have?"", ""which actor has 69 movies?"", ""what's the first name of the actor who has 87 movies?"", ] >>> encoding = tokenizer(table, questions, padding=True, return_tensors=""pt"") >>> # let the model generate an answer autoregressively >>> outputs = model.generate(**encoding) >>> # decode back to text >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) [' 53', ' george clooney', ' brad pitt'] In case one wants to do table verification (i.e. the task of determining whether a given sentence is supported or refuted by the contents of a table), one can instantiate a [`BartForSequenceClassification`] model. TAPEX has checkpoints on the hub fine-tuned on TabFact, an important benchmark for table fact checking (it achieves 84% accuracy). The code example below again leverages the [Auto API](auto). thon >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained(""microsoft/tapex-large-finetuned-tabfact"") >>> model = AutoModelForSequenceClassification.from_pretrained(""microsoft/tapex-large-finetuned-tabfact"") >>> # prepare table + sentence >>> data = {""Actors"": [""Brad Pitt"", ""Leonardo Di Caprio"", ""George Clooney""], ""Number of movies"": [""87"", ""53"", ""69""]} >>> table = pd.DataFrame.from_dict(data) >>> sentence = ""George Clooney has 30 movies"" >>> encoding = tokenizer(table, sentence, return_tensors=""pt"") >>> # forward pass >>> outputs = model(**encoding) >>> # print prediction >>> predicted_class_idx = outputs.logits[0].argmax(dim=0).item() >>> print(model.config.id2label[predicted_class_idx]) Refused TAPEX architecture is the same as BART, except for tokenization. Refer to [BART documentation](bart) for information on configuration classes and their parameters. TAPEX-specific tokenizer is documented below. ## TapexTokenizer [[autodoc]] TapexTokenizer - __call__ - save_vocabulary" model_doc/pegasus.md," # Pegasus ## Overview The Pegasus model was proposed in [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019. 
According to the abstract, - Pegasus' pretraining task is intentionally similar to summarization: important sentences are removed/masked from an input document and are generated together as one output sequence from the remaining sentences, similar to an extractive summary. - Pegasus achieves SOTA summarization performance on all 12 downstream tasks, as measured by ROUGE and human eval. This model was contributed by [sshleifer](https://huggingface.co/sshleifer). The Authors' code can be found [here](https://github.com/google-research/pegasus). ## Usage tips - Sequence-to-sequence model with the same encoder-decoder model architecture as BART. Pegasus is pre-trained jointly on two self-supervised objective functions: Masked Language Modeling (MLM) and a novel summarization specific pretraining objective, called Gap Sentence Generation (GSG). * MLM: encoder input tokens are randomly replaced by a mask tokens and have to be predicted by the encoder (like in BERT) * GSG: whole encoder input sentences are replaced by a second mask token and fed to the decoder, but which has a causal mask to hide the future words like a regular auto-regressive transformer decoder. - FP16 is not supported (help/ideas on this appreciated!). - The adafactor optimizer is recommended for pegasus fine-tuning. ## Checkpoints All the [checkpoints](https://huggingface.co/models?search=pegasus) are fine-tuned for summarization, besides *pegasus-large*, whence the other checkpoints are fine-tuned: - Each checkpoint is 2.2 GB on disk and 568M parameters. - FP16 is not supported (help/ideas on this appreciated!). - Summarizing xsum in fp32 takes about 400ms/sample, with default parameters on a v100 GPU. - Full replication results and correctly pre-processed data can be found in this [Issue](https://github.com/huggingface/transformers/issues/6844#issue-689259666). - [Distilled checkpoints](https://huggingface.co/models?search=distill-pegasus) are described in this [paper](https://arxiv.org/abs/2010.13002). ## Implementation Notes - All models are transformer encoder-decoders with 16 layers in each component. - The implementation is completely inherited from [`BartForConditionalGeneration`] - Some key configuration differences: - static, sinusoidal position embeddings - the model starts generating with pad_token_id (which has 0 token_embedding) as the prefix. - more beams are used (`num_beams=8`) - All pretrained pegasus checkpoints are the same besides three attributes: `tokenizer.model_max_length` (maximum input size), `max_length` (the maximum number of tokens to generate) and `length_penalty`. - The code to convert checkpoints trained in the author's [repo](https://github.com/google-research/pegasus) can be found in `convert_pegasus_tf_to_pytorch.py`. ## Usage Example thon >>> from transformers import PegasusForConditionalGeneration, PegasusTokenizer >>> import torch >>> src_text = [ """""" PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. 
Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."""""" ] model_name = ""google/pegasus-xsum"" device = ""cuda"" if torch.cuda.is_available() else ""cpu"" tokenizer = PegasusTokenizer.from_pretrained(model_name) model = PegasusForConditionalGeneration.from_pretrained(model_name).to(device) batch = tokenizer(src_text, truncation=True, padding=""longest"", return_tensors=""pt"").to(device) translated = model.generate(**batch) tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) assert ( tgt_text[0] == ""California's largest electricity provider has turned off power to hundreds of thousands of customers."" ) ## Resources - [Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/seq2seq-distillation/finetune_pegasus_xsum.sh) to fine-tune pegasus on the XSUM dataset. Data download instructions at [examples/pytorch/summarization/](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization/README.md). - [Causal language modeling task guide](../tasks/language_modeling) - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## PegasusConfig [[autodoc]] PegasusConfig ## PegasusTokenizer warning: `add_tokens` does not work at the moment. [[autodoc]] PegasusTokenizer ## PegasusTokenizerFast [[autodoc]] PegasusTokenizerFast ## PegasusModel [[autodoc]] PegasusModel - forward ## PegasusForConditionalGeneration [[autodoc]] PegasusForConditionalGeneration - forward ## PegasusForCausalLM [[autodoc]] PegasusForCausalLM - forward ## TFPegasusModel [[autodoc]] TFPegasusModel - call ## TFPegasusForConditionalGeneration [[autodoc]] TFPegasusForConditionalGeneration - call ## FlaxPegasusModel [[autodoc]] FlaxPegasusModel - __call__ - encode - decode ## FlaxPegasusForConditionalGeneration [[autodoc]] FlaxPegasusForConditionalGeneration - __call__ - encode - decode " model_doc/cpm.md," # CPM ## Overview The CPM model was proposed in [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. The abstract from the paper is the following: *Pre-trained Language Models (PLMs) have proven to be beneficial for various downstream NLP tasks. Recently, GPT-3, with 175 billion parameters and 570GB training data, drew a lot of attention due to the capacity of few-shot (even zero-shot) learning. However, applying GPT-3 to address Chinese NLP tasks is still challenging, as the training corpus of GPT-3 is primarily English, and the parameters are not publicly available. In this technical report, we release the Chinese Pre-trained Language Model (CPM) with generative pre-training on large-scale Chinese training data. To the best of our knowledge, CPM, with 2.6 billion parameters and 100GB Chinese training data, is the largest Chinese pre-trained language model, which could facilitate several downstream Chinese NLP tasks, such as conversation, essay generation, cloze test, and language understanding. 
Extensive experiments demonstrate that CPM achieves strong performance on many NLP tasks in the settings of few-shot (even zero-shot) learning.* This model was contributed by [canwenxu](https://huggingface.co/canwenxu). The original implementation can be found here: https://github.com/TsinghuaAI/CPM-Generate CPM's architecture is the same as GPT-2, except for tokenization method. Refer to [GPT-2 documentation](gpt2) for API reference information. ## CpmTokenizer [[autodoc]] CpmTokenizer ## CpmTokenizerFast [[autodoc]] CpmTokenizerFast " model_doc/swiftformer.md," # SwiftFormer ## Overview The SwiftFormer model was proposed in [SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications](https://arxiv.org/abs/2303.15446) by Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan. The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called 'SwiftFormer' is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2. The abstract from the paper is the following: *Self-attention has become a defacto choice for capturing global context in various vision applications. However, its quadratic computational complexity with respect to image resolution limits its use in real-time applications, especially for deployment on resource-constrained mobile devices. Although hybrid approaches have been proposed to combine the advantages of convolutions and self-attention for a better speed-accuracy trade-off, the expensive matrix multiplication operations in self-attention remain a bottleneck. In this work, we introduce a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations with linear element-wise multiplications. Our design shows that the key-value interaction can be replaced with a linear layer without sacrificing any accuracy. Unlike previous state-of-the-art methods, our efficient formulation of self-attention enables its usage at all stages of the network. Using our proposed efficient additive attention, we build a series of models called ""SwiftFormer"" which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Our small variant achieves 78.5% top-1 ImageNet-1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2x faster compared to MobileViT-v2.* This model was contributed by [shehan97](https://huggingface.co/shehan97). The original code can be found [here](https://github.com/Amshaker/SwiftFormer). ## SwiftFormerConfig [[autodoc]] SwiftFormerConfig ## SwiftFormerModel [[autodoc]] SwiftFormerModel - forward ## SwiftFormerForImageClassification [[autodoc]] SwiftFormerForImageClassification - forward " model_doc/layoutlmv2.md," # LayoutLMV2 ## Overview The LayoutLMV2 model was proposed in [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 
LayoutLMV2 improves [LayoutLM](layoutlm) to obtain state-of-the-art results across several document image understanding benchmarks: - information extraction from scanned documents: the [FUNSD](https://guillaumejaume.github.io/FUNSD/) dataset (a collection of 199 annotated forms comprising more than 30,000 words), the [CORD](https://github.com/clovaai/cord) dataset (a collection of 800 receipts for training, 100 for validation and 100 for testing), the [SROIE](https://rrc.cvc.uab.es/?ch=13) dataset (a collection of 626 receipts for training and 347 receipts for testing) and the [Kleister-NDA](https://github.com/applicaai/kleister-nda) dataset (a collection of non-disclosure agreements from the EDGAR database, including 254 documents for training, 83 documents for validation, and 203 documents for testing). - document image classification: the [RVL-CDIP](https://www.cs.cmu.edu/~aharley/rvl-cdip/) dataset (a collection of 400,000 images belonging to one of 16 classes). - document visual question answering: the [DocVQA](https://arxiv.org/abs/2007.00398) dataset (a collection of 50,000 questions defined on 12,000+ document images). The abstract from the paper is the following: *Pre-training of text and layout has proved effective in a variety of visually-rich document understanding tasks due to its effective model architecture and the advantage of large-scale unlabeled scanned/digital-born documents. In this paper, we present LayoutLMv2 by pre-training text, layout and image in a multi-modal framework, where new model architectures and pre-training tasks are leveraged. Specifically, LayoutLMv2 not only uses the existing masked visual-language modeling task but also the new text-image alignment and text-image matching tasks in the pre-training stage, where cross-modality interaction is better learned. Meanwhile, it also integrates a spatial-aware self-attention mechanism into the Transformer architecture, so that the model can fully understand the relative positional relationship among different text blocks. Experiment results show that LayoutLMv2 outperforms strong baselines and achieves new state-of-the-art results on a wide variety of downstream visually-rich document understanding tasks, including FUNSD (0.7895 -> 0.8420), CORD (0.9493 -> 0.9601), SROIE (0.9524 -> 0.9781), Kleister-NDA (0.834 -> 0.852), RVL-CDIP (0.9443 -> 0.9564), and DocVQA (0.7295 -> 0.8672). The pre-trained LayoutLMv2 model is publicly available at this https URL.* LayoutLMv2 depends on `detectron2`, `torchvision` and `tesseract`. Run the following to install them: python -m pip install 'git+https://github.com/facebookresearch/detectron2.git' python -m pip install torchvision tesseract (If you are developing for LayoutLMv2, note that passing the doctests also requires the installation of these packages.) ## Usage tips - The main difference between LayoutLMv1 and LayoutLMv2 is that the latter incorporates visual embeddings during pre-training (while LayoutLMv1 only adds visual embeddings during fine-tuning). - LayoutLMv2 adds both a relative 1D attention bias as well as a spatial 2D attention bias to the attention scores in the self-attention layers. Details can be found on page 5 of the [paper](https://arxiv.org/abs/2012.14740). - Demo notebooks on how to use the LayoutLMv2 model on RVL-CDIP, FUNSD, DocVQA, CORD can be found [here](https://github.com/NielsRogge/Transformers-Tutorials). - LayoutLMv2 uses Facebook AI's [Detectron2](https://github.com/facebookresearch/detectron2/) package for its visual backbone. 
See [this link](https://detectron2.readthedocs.io/en/latest/tutorials/install.html) for installation instructions. - In addition to `input_ids`, [`~LayoutLMv2Model.forward`] expects 2 additional inputs, namely `image` and `bbox`. The `image` input corresponds to the original document image in which the text tokens occur. The model expects each document image to be of size 224x224. This means that if you have a batch of document images, `image` should be a tensor of shape (batch_size, 3, 224, 224). This can be either a `torch.Tensor` or a `Detectron2.structures.ImageList`. You don't need to normalize the channels, as this is done by the model. Important to note is that the visual backbone expects BGR channels instead of RGB, as all models in Detectron2 are pre-trained using the BGR format. The `bbox` input are the bounding boxes (i.e. 2D-positions) of the input text tokens. This is identical to [`LayoutLMModel`]. These can be obtained using an external OCR engine such as Google's [Tesseract](https://github.com/tesseract-ocr/tesseract) (there's a [Python wrapper](https://pypi.org/project/pytesseract/) available). Each bounding box should be in (x0, y0, x1, y1) format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1, y1) represents the position of the lower right corner. Note that one first needs to normalize the bounding boxes to be on a 0-1000 scale. To normalize, you can use the following function: thon def normalize_bbox(bbox, width, height): return [ int(1000 * (bbox[0] / width)), int(1000 * (bbox[1] / height)), int(1000 * (bbox[2] / width)), int(1000 * (bbox[3] / height)), ] Here, `width` and `height` correspond to the width and height of the original document in which the token occurs (before resizing the image). Those can be obtained using the Python Image Library (PIL) library for example, as follows: thon from PIL import Image image = Image.open( ""name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."" ) width, height = image.size However, this model includes a brand new [`~transformers.LayoutLMv2Processor`] which can be used to directly prepare data for the model (including applying OCR under the hood). More information can be found in the ""Usage"" section below. - Internally, [`~transformers.LayoutLMv2Model`] will send the `image` input through its visual backbone to obtain a lower-resolution feature map, whose shape is equal to the `image_feature_pool_shape` attribute of [`~transformers.LayoutLMv2Config`]. This feature map is then flattened to obtain a sequence of image tokens. As the size of the feature map is 7x7 by default, one obtains 49 image tokens. These are then concatenated with the text tokens, and send through the Transformer encoder. This means that the last hidden states of the model will have a length of 512 + 49 = 561, if you pad the text tokens up to the max length. More generally, the last hidden states will have a shape of `seq_length` + `image_feature_pool_shape[0]` * `config.image_feature_pool_shape[1]`. - When calling [`~transformers.LayoutLMv2Model.from_pretrained`], a warning will be printed with a long list of parameter names that are not initialized. This is not a problem, as these parameters are batch normalization statistics, which are going to have values when fine-tuning on a custom dataset. 
- If you want to train the model in a distributed environment, make sure to call [`synchronize_batch_norm`] on the model in order to properly synchronize the batch normalization layers of the visual backbone. In addition, there's LayoutXLM, which is a multilingual version of LayoutLMv2. More information can be found on [LayoutXLM's documentation page](layoutxlm). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LayoutLMv2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - A notebook on how to [finetune LayoutLMv2 for text-classification on RVL-CDIP dataset](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/RVL-CDIP/Fine_tuning_LayoutLMv2ForSequenceClassification_on_RVL_CDIP.ipynb). - See also: [Text classification task guide](../tasks/sequence_classification) - A notebook on how to [finetune LayoutLMv2 for question-answering on DocVQA dataset](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/DocVQA/Fine_tuning_LayoutLMv2ForQuestionAnswering_on_DocVQA.ipynb). - See also: [Question answering task guide](../tasks/question_answering) - See also: [Document question answering task guide](../tasks/document_question_answering) - A notebook on how to [finetune LayoutLMv2 for token-classification on CORD dataset](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/CORD/Fine_tuning_LayoutLMv2ForTokenClassification_on_CORD.ipynb). - A notebook on how to [finetune LayoutLMv2 for token-classification on FUNSD dataset](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/Fine_tuning_LayoutLMv2ForTokenClassification_on_FUNSD_using_HuggingFace_Trainer.ipynb). - See also: [Token classification task guide](../tasks/token_classification) ## Usage: LayoutLMv2Processor The easiest way to prepare data for the model is to use [`LayoutLMv2Processor`], which internally combines a image processor ([`LayoutLMv2ImageProcessor`]) and a tokenizer ([`LayoutLMv2Tokenizer`] or [`LayoutLMv2TokenizerFast`]). The image processor handles the image modality, while the tokenizer handles the text modality. A processor combines both, which is ideal for a multi-modal model like LayoutLMv2. Note that you can still use both separately, if you only want to handle one modality. thon from transformers import LayoutLMv2ImageProcessor, LayoutLMv2TokenizerFast, LayoutLMv2Processor image_processor = LayoutLMv2ImageProcessor() # apply_ocr is set to True by default tokenizer = LayoutLMv2TokenizerFast.from_pretrained(""microsoft/layoutlmv2-base-uncased"") processor = LayoutLMv2Processor(image_processor, tokenizer) In short, one can provide a document image (and possibly additional data) to [`LayoutLMv2Processor`], and it will create the inputs expected by the model. Internally, the processor first uses [`LayoutLMv2ImageProcessor`] to apply OCR on the image to get a list of words and normalized bounding boxes, as well to resize the image to a given size in order to get the `image` input. The words and normalized bounding boxes are then provided to [`LayoutLMv2Tokenizer`] or [`LayoutLMv2TokenizerFast`], which converts them to token-level `input_ids`, `attention_mask`, `token_type_ids`, `bbox`. 
Optionally, one can provide word labels to the processor, which are turned into token-level `labels`. [`LayoutLMv2Processor`] uses [PyTesseract](https://pypi.org/project/pytesseract/), a Python wrapper around Google's Tesseract OCR engine, under the hood. Note that you can still use your own OCR engine of choice, and provide the words and normalized boxes yourself. This requires initializing [`LayoutLMv2ImageProcessor`] with `apply_ocr` set to `False`. In total, there are 5 use cases that are supported by the processor. Below, we list them all. Note that each of these use cases work for both batched and non-batched inputs (we illustrate them for non-batched inputs). **Use case 1: document image classification (training, inference) + token classification (inference), apply_ocr = True** This is the simplest case, in which the processor (actually the image processor) will perform OCR on the image to get the words and normalized bounding boxes. thon from transformers import LayoutLMv2Processor from PIL import Image processor = LayoutLMv2Processor.from_pretrained(""microsoft/layoutlmv2-base-uncased"") image = Image.open( ""name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."" ).convert(""RGB"") encoding = processor( image, return_tensors=""pt"" ) # you can also add all tokenizer parameters here such as padding, truncation print(encoding.keys()) # dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image']) **Use case 2: document image classification (training, inference) + token classification (inference), apply_ocr=False** In case one wants to do OCR themselves, one can initialize the image processor with `apply_ocr` set to `False`. In that case, one should provide the words and corresponding (normalized) bounding boxes themselves to the processor. thon from transformers import LayoutLMv2Processor from PIL import Image processor = LayoutLMv2Processor.from_pretrained(""microsoft/layoutlmv2-base-uncased"", revision=""no_ocr"") image = Image.open( ""name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."" ).convert(""RGB"") words = [""hello"", ""world""] boxes = [[1, 2, 3, 4], [5, 6, 7, 8]] # make sure to normalize your bounding boxes encoding = processor(image, words, boxes=boxes, return_tensors=""pt"") print(encoding.keys()) # dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image']) **Use case 3: token classification (training), apply_ocr=False** For token classification tasks (such as FUNSD, CORD, SROIE, Kleister-NDA), one can also provide the corresponding word labels in order to train a model. The processor will then convert these into token-level `labels`. By default, it will only label the first wordpiece of a word, and label the remaining wordpieces with -100, which is the `ignore_index` of PyTorch's CrossEntropyLoss. In case you want all wordpieces of a word to be labeled, you can initialize the tokenizer with `only_label_first_subword` set to `False`. thon from transformers import LayoutLMv2Processor from PIL import Image processor = LayoutLMv2Processor.from_pretrained(""microsoft/layoutlmv2-base-uncased"", revision=""no_ocr"") image = Image.open( ""name_of_your_document - can be a png, jpg, etc. 
of your documents (PDFs must be converted to images)."" ).convert(""RGB"") words = [""hello"", ""world""] boxes = [[1, 2, 3, 4], [5, 6, 7, 8]] # make sure to normalize your bounding boxes word_labels = [1, 2] encoding = processor(image, words, boxes=boxes, word_labels=word_labels, return_tensors=""pt"") print(encoding.keys()) # dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'labels', 'image']) **Use case 4: visual question answering (inference), apply_ocr=True** For visual question answering tasks (such as DocVQA), you can provide a question to the processor. By default, the processor will apply OCR on the image, and create [CLS] question tokens [SEP] word tokens [SEP]. thon from transformers import LayoutLMv2Processor from PIL import Image processor = LayoutLMv2Processor.from_pretrained(""microsoft/layoutlmv2-base-uncased"") image = Image.open( ""name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."" ).convert(""RGB"") question = ""What's his name?"" encoding = processor(image, question, return_tensors=""pt"") print(encoding.keys()) # dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image']) **Use case 5: visual question answering (inference), apply_ocr=False** For visual question answering tasks (such as DocVQA), you can provide a question to the processor. If you want to perform OCR yourself, you can provide your own words and (normalized) bounding boxes to the processor. thon from transformers import LayoutLMv2Processor from PIL import Image processor = LayoutLMv2Processor.from_pretrained(""microsoft/layoutlmv2-base-uncased"", revision=""no_ocr"") image = Image.open( ""name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."" ).convert(""RGB"") question = ""What's his name?"" words = [""hello"", ""world""] boxes = [[1, 2, 3, 4], [5, 6, 7, 8]] # make sure to normalize your bounding boxes encoding = processor(image, question, words, boxes=boxes, return_tensors=""pt"") print(encoding.keys()) # dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image']) ## LayoutLMv2Config [[autodoc]] LayoutLMv2Config ## LayoutLMv2FeatureExtractor [[autodoc]] LayoutLMv2FeatureExtractor - __call__ ## LayoutLMv2ImageProcessor [[autodoc]] LayoutLMv2ImageProcessor - preprocess ## LayoutLMv2Tokenizer [[autodoc]] LayoutLMv2Tokenizer - __call__ - save_vocabulary ## LayoutLMv2TokenizerFast [[autodoc]] LayoutLMv2TokenizerFast - __call__ ## LayoutLMv2Processor [[autodoc]] LayoutLMv2Processor - __call__ ## LayoutLMv2Model [[autodoc]] LayoutLMv2Model - forward ## LayoutLMv2ForSequenceClassification [[autodoc]] LayoutLMv2ForSequenceClassification ## LayoutLMv2ForTokenClassification [[autodoc]] LayoutLMv2ForTokenClassification ## LayoutLMv2ForQuestionAnswering [[autodoc]] LayoutLMv2ForQuestionAnswering " model_doc/mbart.md," # MBart and MBart-50 ## Overview of MBart The MBart model was presented in [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. According to the abstract, MBART is a sequence-to-sequence denoising auto-encoder pretrained on large-scale monolingual corpora in many languages using the BART objective. 
mBART is one of the first methods for pretraining a complete sequence-to-sequence model by denoising full texts in multiple languages, while previous approaches have focused only on the encoder, decoder, or reconstructing parts of the text.

This model was contributed by [valhalla](https://huggingface.co/valhalla). The Authors' code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/mbart).

### Training of MBart

MBart is a multilingual encoder-decoder (sequence-to-sequence) model primarily intended for the translation task. As the model is multilingual, it expects the sequences in a different format. A special language id token is added to both the source and target text. The source text format is `X [eos, src_lang_code]`, where `X` is the source text. The target text format is `[tgt_lang_code] X [eos]`. `bos` is never used.

The regular [`~MBartTokenizer.__call__`] will encode the source text format passed as the first argument or with the `text` keyword, and the target text format passed with the `text_target` keyword argument.

- Supervised training

```python
>>> from transformers import MBartForConditionalGeneration, MBartTokenizer

>>> tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO")
>>> example_english_phrase = "UN Chief Says There Is No Military Solution in Syria"
>>> expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria"
>>> inputs = tokenizer(example_english_phrase, text_target=expected_translation_romanian, return_tensors="pt")

>>> model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
>>> # forward pass
>>> model(**inputs)
```

- Generation

  While generating the target text, set the `decoder_start_token_id` to the target language id. The following example shows how to translate English to Romanian using the *facebook/mbart-large-en-ro* model.

```python
>>> from transformers import MBartForConditionalGeneration, MBartTokenizer

>>> tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX")
>>> model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
>>> article = "UN Chief Says There Is No Military Solution in Syria"
>>> inputs = tokenizer(article, return_tensors="pt")
>>> translated_tokens = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"])
>>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
"Şeful ONU declară că nu există o soluţie militară în Siria"
```

## Overview of MBart-50

MBart-50 was introduced in the [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) paper by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. MBart-50 is created using the original *mbart-large-cc25* checkpoint by extending its embedding layers with randomly initialized vectors for an extra set of 25 language tokens and then pretrained on 50 languages.

According to the abstract:

*Multilingual translation models can be created through multilingual finetuning. Instead of finetuning on one direction, a pretrained model is finetuned on many directions at the same time. It demonstrates that pretrained models can be extended to incorporate additional languages without loss of performance.
Multilingual finetuning improves on average 1 BLEU over the strongest baselines (being either multilingual from scratch or bilingual finetuning) while improving 9.3 BLEU on average over bilingual baselines from scratch.* ### Training of MBart-50 The text format for MBart-50 is slightly different from mBART. For MBart-50 the language id token is used as a prefix for both source and target text i.e the text format is `[lang_code] X [eos]`, where `lang_code` is source language id for source text and target language id for target text, with `X` being the source or target text respectively. MBart-50 has its own tokenizer [`MBart50Tokenizer`]. - Supervised training thon from transformers import MBartForConditionalGeneration, MBart50TokenizerFast model = MBartForConditionalGeneration.from_pretrained(""facebook/mbart-large-50"") tokenizer = MBart50TokenizerFast.from_pretrained(""facebook/mbart-large-50"", src_lang=""en_XX"", tgt_lang=""ro_RO"") src_text = "" UN Chief Says There Is No Military Solution in Syria"" tgt_text = ""Şeful ONU declară că nu există o soluţie militară în Siria"" model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors=""pt"") model(**model_inputs) # forward pass - Generation To generate using the mBART-50 multilingual translation models, `eos_token_id` is used as the `decoder_start_token_id` and the target language id is forced as the first generated token. To force the target language id as the first generated token, pass the *forced_bos_token_id* parameter to the *generate* method. The following example shows how to translate between Hindi to French and Arabic to English using the *facebook/mbart-50-large-many-to-many* checkpoint. thon from transformers import MBartForConditionalGeneration, MBart50TokenizerFast article_hi = ""संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"" article_ar = ""الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا."" model = MBartForConditionalGeneration.from_pretrained(""facebook/mbart-large-50-many-to-many-mmt"") tokenizer = MBart50TokenizerFast.from_pretrained(""facebook/mbart-large-50-many-to-many-mmt"") # translate Hindi to French tokenizer.src_lang = ""hi_IN"" encoded_hi = tokenizer(article_hi, return_tensors=""pt"") generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id[""fr_XX""]) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => ""Le chef de l 'ONU affirme qu 'il n 'y a pas de solution militaire en Syria."" # translate Arabic to English tokenizer.src_lang = ""ar_AR"" encoded_ar = tokenizer(article_ar, return_tensors=""pt"") generated_tokens = model.generate(**encoded_ar, forced_bos_token_id=tokenizer.lang_code_to_id[""en_XX""]) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => ""The Secretary-General of the United Nations says there is no military solution in Syria."" ## Documentation resources - [Text classification task guide](../tasks/sequence_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## MBartConfig [[autodoc]] MBartConfig ## MBartTokenizer [[autodoc]] MBartTokenizer - build_inputs_with_special_tokens ## MBartTokenizerFast [[autodoc]] MBartTokenizerFast ## MBart50Tokenizer [[autodoc]] MBart50Tokenizer ## MBart50TokenizerFast 
[[autodoc]] MBart50TokenizerFast ## MBartModel [[autodoc]] MBartModel ## MBartForConditionalGeneration [[autodoc]] MBartForConditionalGeneration ## MBartForQuestionAnswering [[autodoc]] MBartForQuestionAnswering ## MBartForSequenceClassification [[autodoc]] MBartForSequenceClassification ## MBartForCausalLM [[autodoc]] MBartForCausalLM - forward ## TFMBartModel [[autodoc]] TFMBartModel - call ## TFMBartForConditionalGeneration [[autodoc]] TFMBartForConditionalGeneration - call ## FlaxMBartModel [[autodoc]] FlaxMBartModel - __call__ - encode - decode ## FlaxMBartForConditionalGeneration [[autodoc]] FlaxMBartForConditionalGeneration - __call__ - encode - decode ## FlaxMBartForSequenceClassification [[autodoc]] FlaxMBartForSequenceClassification - __call__ - encode - decode ## FlaxMBartForQuestionAnswering [[autodoc]] FlaxMBartForQuestionAnswering - __call__ - encode - decode " model_doc/dit.md," # DiT ## Overview DiT was proposed in [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. DiT applies the self-supervised objective of [BEiT](beit) (BERT pre-training of Image Transformers) to 42 million document images, allowing for state-of-the-art results on tasks including: - document image classification: the [RVL-CDIP](https://www.cs.cmu.edu/~aharley/rvl-cdip/) dataset (a collection of 400,000 images belonging to one of 16 classes). - document layout analysis: the [PubLayNet](https://github.com/ibm-aur-nlp/PubLayNet) dataset (a collection of more than 360,000 document images constructed by automatically parsing PubMed XML files). - table detection: the [ICDAR 2019 cTDaR](https://github.com/cndplab-founder/ICDAR2019_cTDaR) dataset (a collection of 600 training images and 240 testing images). The abstract from the paper is the following: *Image Transformer has recently achieved significant progress for natural image understanding, either using supervised (ViT, DeiT, etc.) or self-supervised (BEiT, MAE, etc.) pre-training techniques. In this paper, we propose DiT, a self-supervised pre-trained Document Image Transformer model using large-scale unlabeled text images for Document AI tasks, which is essential since no supervised counterparts ever exist due to the lack of human labeled document images. We leverage DiT as the backbone network in a variety of vision-based Document AI tasks, including document image classification, document layout analysis, as well as table detection. Experiment results have illustrated that the self-supervised pre-trained DiT model achieves new state-of-the-art results on these downstream tasks, e.g. document image classification (91.11 → 92.69), document layout analysis (91.0 → 94.9) and table detection (94.23 → 96.55). * Summary of the approach. Taken from the [original paper](https://arxiv.org/abs/2203.02378). This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/microsoft/unilm/tree/master/dit). ## Usage tips One can directly use the weights of DiT with the AutoModel API: thon from transformers import AutoModel model = AutoModel.from_pretrained(""microsoft/dit-base"") This will load the model pre-trained on masked image modeling. Note that this won't include the language modeling head on top, used to predict visual tokens. 
To include the head, you can load the weights into a `BeitForMaskedImageModeling` model, like so: thon from transformers import BeitForMaskedImageModeling model = BeitForMaskedImageModeling.from_pretrained(""microsoft/dit-base"") You can also load a fine-tuned model from the [hub](https://huggingface.co/models?other=dit), like so: thon from transformers import AutoModelForImageClassification model = AutoModelForImageClassification.from_pretrained(""microsoft/dit-base-finetuned-rvlcdip"") This particular checkpoint was fine-tuned on [RVL-CDIP](https://www.cs.cmu.edu/~aharley/rvl-cdip/), an important benchmark for document image classification. A notebook that illustrates inference for document image classification can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DiT/Inference_with_DiT_(Document_Image_Transformer)_for_document_image_classification.ipynb). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DiT. - [`BeitForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. As DiT's architecture is equivalent to that of BEiT, one can refer to [BEiT's documentation page](beit) for all tips, code examples and notebooks. " model_doc/imagegpt.md," # ImageGPT ## Overview The ImageGPT model was proposed in [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. ImageGPT (iGPT) is a GPT-2-like model trained to predict the next pixel value, allowing for both unconditional and conditional image generation. The abstract from the paper is the following: *Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models can learn useful representations for images. We train a sequence Transformer to auto-regressively predict pixels, without incorporating knowledge of the 2D input structure. Despite training on low-resolution ImageNet without labels, we find that a GPT-2 scale model learns strong image representations as measured by linear probing, fine-tuning, and low-data classification. On CIFAR-10, we achieve 96.3% accuracy with a linear probe, outperforming a supervised Wide ResNet, and 99.0% accuracy with full fine-tuning, matching the top supervised pre-trained models. We are also competitive with self-supervised benchmarks on ImageNet when substituting pixels for a VQVAE encoding, achieving 69.0% top-1 accuracy on a linear probe of our features.* Summary of the approach. Taken from the [original paper](https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf). This model was contributed by [nielsr](https://huggingface.co/nielsr), based on [this issue](https://github.com/openai/image-gpt/issues/7). The original code can be found [here](https://github.com/openai/image-gpt). 
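Because ImageGPT simply predicts the next color-cluster token, unconditional image generation can be sketched directly with `generate()`. The snippet below is a minimal sketch, assuming the `openai/imagegpt-small` checkpoint; the sampling settings and the cluster-to-RGB mapping are illustrative assumptions (the SOS token and the 512-color vocabulary are described in the usage tips below).

```python
# A minimal sketch of unconditional generation: sample 32*32 color-cluster tokens
# autoregressively, starting from the special SOS token. Sampling 1024 tokens is slow
# on CPU; the checkpoint name and decoding settings are illustrative assumptions.
import torch
from transformers import ImageGPTImageProcessor, ImageGPTForCausalImageModeling

processor = ImageGPTImageProcessor.from_pretrained("openai/imagegpt-small")
model = ImageGPTForCausalImageModeling.from_pretrained("openai/imagegpt-small")

# every sequence starts with the SOS token, which is the last id in the vocabulary
context = torch.full((1, 1), model.config.vocab_size - 1, dtype=torch.long)

output = model.generate(
    input_ids=context,
    max_length=1 + 32 * 32,  # one SOS token followed by 1024 color-cluster ids
    do_sample=True,
    top_k=40,
)

# map the sampled cluster ids back to RGB; assumption: the processor stores the 512
# cluster centers in [-1, 1], so rescale to [0, 255]
clusters = torch.tensor(processor.clusters, dtype=torch.float32)
pixels = ((clusters[output[0, 1:]] + 1.0) * 127.5).clamp(0, 255).to(torch.uint8)
image = pixels.reshape(32, 32, 3).numpy()  # a 32x32 RGB array, e.g. for PIL.Image.fromarray
```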
## Usage tips - ImageGPT is almost exactly the same as [GPT-2](gpt2), with the exception that a different activation function is used (namely ""quick gelu""), and the layer normalization layers don't mean center the inputs. ImageGPT also doesn't have tied input- and output embeddings. - As the time- and memory requirements of the attention mechanism of Transformers scales quadratically in the sequence length, the authors pre-trained ImageGPT on smaller input resolutions, such as 32x32 and 64x64. However, feeding a sequence of 32x32x3=3072 tokens from 0..255 into a Transformer is still prohibitively large. Therefore, the authors applied k-means clustering to the (R,G,B) pixel values with k=512. This way, we only have a 32*32 = 1024-long sequence, but now of integers in the range 0..511. So we are shrinking the sequence length at the cost of a bigger embedding matrix. In other words, the vocabulary size of ImageGPT is 512, + 1 for a special ""start of sentence"" (SOS) token, used at the beginning of every sequence. One can use [`ImageGPTImageProcessor`] to prepare images for the model. - Despite being pre-trained entirely unsupervised (i.e. without the use of any labels), ImageGPT produces fairly performant image features useful for downstream tasks, such as image classification. The authors showed that the features in the middle of the network are the most performant, and can be used as-is to train a linear model (such as a sklearn logistic regression model for example). This is also referred to as ""linear probing"". Features can be easily obtained by first forwarding the image through the model, then specifying `output_hidden_states=True`, and then average-pool the hidden states at whatever layer you like. - Alternatively, one can further fine-tune the entire model on a downstream dataset, similar to BERT. For this, you can use [`ImageGPTForImageClassification`]. - ImageGPT comes in different sizes: there's ImageGPT-small, ImageGPT-medium and ImageGPT-large. The authors did also train an XL variant, which they didn't release. The differences in size are summarized in the following table: | **Model variant** | **Depths** | **Hidden sizes** | **Decoder hidden size** | **Params (M)** | **ImageNet-1k Top 1** | |---|---|---|---|---|---| | MiT-b0 | [2, 2, 2, 2] | [32, 64, 160, 256] | 256 | 3.7 | 70.5 | | MiT-b1 | [2, 2, 2, 2] | [64, 128, 320, 512] | 256 | 14.0 | 78.7 | | MiT-b2 | [3, 4, 6, 3] | [64, 128, 320, 512] | 768 | 25.4 | 81.6 | | MiT-b3 | [3, 4, 18, 3] | [64, 128, 320, 512] | 768 | 45.2 | 83.1 | | MiT-b4 | [3, 8, 27, 3] | [64, 128, 320, 512] | 768 | 62.6 | 83.6 | | MiT-b5 | [3, 6, 40, 3] | [64, 128, 320, 512] | 768 | 82.0 | 83.8 | ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ImageGPT. - Demo notebooks for ImageGPT can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/ImageGPT). - [`ImageGPTForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
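The "linear probing" recipe from the usage tips above can be written as a short script: forward the image, request `output_hidden_states=True`, and average-pool a mid-network layer. The checkpoint name, image path, layer choice and downstream classifier below are illustrative assumptions, not part of the original recipe.

```python
# A minimal sketch of feature extraction for linear probing, as described in the
# usage tips: forward the image, keep a mid-network hidden state, average-pool it.
import torch
from PIL import Image
from transformers import ImageGPTImageProcessor, ImageGPTModel

processor = ImageGPTImageProcessor.from_pretrained("openai/imagegpt-small")
model = ImageGPTModel.from_pretrained("openai/imagegpt-small")

image = Image.open("path/to/your_image.png").convert("RGB")  # hypothetical path
inputs = processor(images=image, return_tensors="pt")  # pixels -> color-cluster ids

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states is a tuple with one tensor per layer (plus the embeddings),
# each of shape (batch_size, sequence_length, hidden_size)
middle = len(outputs.hidden_states) // 2  # "middle of the network", per the tips
features = outputs.hidden_states[middle].mean(dim=1)  # average-pool over the sequence

print(features.shape)  # (1, hidden_size); can now be fed to e.g. a sklearn LogisticRegression
```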
## ImageGPTConfig [[autodoc]] ImageGPTConfig ## ImageGPTFeatureExtractor [[autodoc]] ImageGPTFeatureExtractor - __call__ ## ImageGPTImageProcessor [[autodoc]] ImageGPTImageProcessor - preprocess ## ImageGPTModel [[autodoc]] ImageGPTModel - forward ## ImageGPTForCausalImageModeling [[autodoc]] ImageGPTForCausalImageModeling - forward ## ImageGPTForImageClassification [[autodoc]] ImageGPTForImageClassification - forward " model_doc/bertweet.md," # BERTweet ## Overview The BERTweet model was proposed in [BERTweet: A pre-trained language model for English Tweets](https://www.aclweb.org/anthology/2020.emnlp-demos.2.pdf) by Dat Quoc Nguyen, Thanh Vu, Anh Tuan Nguyen. The abstract from the paper is the following: *We present BERTweet, the first public large-scale pre-trained language model for English Tweets. Our BERTweet, having the same architecture as BERT-base (Devlin et al., 2019), is trained using the RoBERTa pre-training procedure (Liu et al., 2019). Experiments show that BERTweet outperforms strong baselines RoBERTa-base and XLM-R-base (Conneau et al., 2020), producing better performance results than the previous state-of-the-art models on three Tweet NLP tasks: Part-of-speech tagging, Named-entity recognition and text classification.* This model was contributed by [dqnguyen](https://huggingface.co/dqnguyen). The original code can be found [here](https://github.com/VinAIResearch/BERTweet). ## Usage example thon >>> import torch >>> from transformers import AutoModel, AutoTokenizer >>> bertweet = AutoModel.from_pretrained(""vinai/bertweet-base"") >>> # For transformers v4.x+: >>> tokenizer = AutoTokenizer.from_pretrained(""vinai/bertweet-base"", use_fast=False) >>> # For transformers v3.x: >>> # tokenizer = AutoTokenizer.from_pretrained(""vinai/bertweet-base"") >>> # INPUT TWEET IS ALREADY NORMALIZED! >>> line = ""SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:"" >>> input_ids = torch.tensor([tokenizer.encode(line)]) >>> with torch.no_grad(): features = bertweet(input_ids) # Models outputs are now tuples >>> # With TensorFlow 2.0+: >>> # from transformers import TFAutoModel >>> # bertweet = TFAutoModel.from_pretrained(""vinai/bertweet-base"") This implementation is the same as BERT, except for tokenization method. Refer to [BERT documentation](bert) for API reference information. ## BertweetTokenizer [[autodoc]] BertweetTokenizer " model_doc/bros.md," # BROS ## Overview The BROS model was proposed in [BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents](https://arxiv.org/abs/2108.04539) by Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, Sungrae Park. BROS stands for *BERT Relying On Spatiality*. It is an encoder-only Transformer model that takes a sequence of tokens and their bounding boxes as inputs and outputs a sequence of hidden states. BROS encode relative spatial information instead of using absolute spatial information. It is pre-trained with two objectives: a token-masked language modeling objective (TMLM) used in BERT, and a novel area-masked language modeling objective (AMLM) In TMLM, tokens are randomly masked, and the model predicts the masked tokens using spatial information and other unmasked tokens. AMLM is a 2D version of TMLM. It randomly masks text tokens and predicts with the same information as TMLM, but it masks text blocks (areas). `BrosForTokenClassification` has a simple linear layer on top of BrosModel. 
It predicts the label of each token. `BrosSpadeEEForTokenClassification` has an `initial_token_classifier` and a `subsequent_token_classifier` on top of BrosModel. The `initial_token_classifier` is used to predict the first token of each entity, and the `subsequent_token_classifier` is used to predict the next token within an entity. `BrosSpadeELForTokenClassification` has an `entity_linker` on top of BrosModel. The `entity_linker` is used to predict the relation between two entities.

`BrosForTokenClassification` and `BrosSpadeEEForTokenClassification` essentially perform the same job. However, `BrosForTokenClassification` assumes input tokens are perfectly serialized (which is a very challenging task since they exist in a 2D space), while `BrosSpadeEEForTokenClassification` allows for more flexibility in handling serialization errors, as it predicts the next connection tokens from one token.

`BrosSpadeELForTokenClassification` performs the intra-entity linking task. It predicts the relation from one token (of one entity) to another token (of another entity) if these two entities share some relation.

BROS achieves comparable or better results on Key Information Extraction (KIE) benchmarks such as FUNSD, SROIE, CORD and SciTSR, without relying on explicit visual features.

The abstract from the paper is the following:

*Key information extraction (KIE) from document images requires understanding the contextual and spatial semantics of texts in two-dimensional (2D) space. Many recent studies try to solve the task by developing pre-trained language models focusing on combining visual features from document images with texts and their layout. On the other hand, this paper tackles the problem by going back to the basic: effective combination of text and layout. Specifically, we propose a pre-trained language model, named BROS (BERT Relying On Spatiality), that encodes relative positions of texts in 2D space and learns from unlabeled documents with area-masking strategy. With this optimized training scheme for understanding texts in 2D space, BROS shows comparable or better performance compared to previous methods on four KIE benchmarks (FUNSD, SROIE*, CORD, and SciTSR) without relying on visual features. This paper also reveals two real-world challenges in KIE tasks-(1) minimizing the error from incorrect text ordering and (2) efficient learning from fewer downstream examples-and demonstrates the superiority of BROS over previous methods.*

This model was contributed by [jinho8345](https://huggingface.co/jinho8345). The original code can be found [here](https://github.com/clovaai/bros).

## Usage tips and examples

- [`~transformers.BrosModel.forward`] requires `input_ids` and `bbox` (bounding box). Each bounding box should be in (x0, y0, x1, y1) format (top-left corner, bottom-right corner). Obtaining bounding boxes depends on an external OCR system. The `x` coordinate should be normalized by document image width, and the `y` coordinate should be normalized by document image height.

```python
def expand_and_normalize_bbox(bboxes, doc_width, doc_height):
    # here, bboxes are a numpy array

    # normalize bbox -> 0 ~ 1
    bboxes[:, [0, 2]] = bboxes[:, [0, 2]] / doc_width
    bboxes[:, [1, 3]] = bboxes[:, [1, 3]] / doc_height
```

- [`~transformers.BrosForTokenClassification.forward`], [`~transformers.BrosSpadeEEForTokenClassification.forward`] and [`~transformers.BrosSpadeELForTokenClassification.forward`] require not only `input_ids` and `bbox` but also `box_first_token_mask` for loss calculation.
It is a mask to filter out non-first tokens of each box. You can obtain this mask by saving start token indices of bounding boxes when creating `input_ids` from words. You can make `box_first_token_mask` with following code, thon def make_box_first_token_mask(bboxes, words, tokenizer, max_seq_length=512): box_first_token_mask = np.zeros(max_seq_length, dtype=np.bool_) # encode(tokenize) each word from words (List[str]) input_ids_list: List[List[int]] = [tokenizer.encode(e, add_special_tokens=False) for e in words] # get the length of each box tokens_length_list: List[int] = [len(l) for l in input_ids_list] box_end_token_indices = np.array(list(itertools.accumulate(tokens_length_list))) box_start_token_indices = box_end_token_indices - np.array(tokens_length_list) # filter out the indices that are out of max_seq_length box_end_token_indices = box_end_token_indices[box_end_token_indices < max_seq_length - 1] if len(box_start_token_indices) > len(box_end_token_indices): box_start_token_indices = box_start_token_indices[: len(box_end_token_indices)] # set box_start_token_indices to True box_first_token_mask[box_start_token_indices] = True return box_first_token_mask ## Resources - Demo scripts can be found [here](https://github.com/clovaai/bros). ## BrosConfig [[autodoc]] BrosConfig ## BrosProcessor [[autodoc]] BrosProcessor - __call__ ## BrosModel [[autodoc]] BrosModel - forward ## BrosForTokenClassification [[autodoc]] BrosForTokenClassification - forward ## BrosSpadeEEForTokenClassification [[autodoc]] BrosSpadeEEForTokenClassification - forward ## BrosSpadeELForTokenClassification [[autodoc]] BrosSpadeELForTokenClassification - forward " model_doc/trocr.md," # TrOCR ## Overview The TrOCR model was proposed in [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. TrOCR consists of an image Transformer encoder and an autoregressive text Transformer decoder to perform [optical character recognition (OCR)](https://en.wikipedia.org/wiki/Optical_character_recognition). The abstract from the paper is the following: *Text recognition is a long-standing research problem for document digitalization. Existing approaches for text recognition are usually built based on CNN for image understanding and RNN for char-level text generation. In addition, another language model is usually needed to improve the overall accuracy as a post-processing step. In this paper, we propose an end-to-end text recognition approach with pre-trained image Transformer and text Transformer models, namely TrOCR, which leverages the Transformer architecture for both image understanding and wordpiece-level text generation. The TrOCR model is simple but effective, and can be pre-trained with large-scale synthetic data and fine-tuned with human-labeled datasets. Experiments show that the TrOCR model outperforms the current state-of-the-art models on both printed and handwritten text recognition tasks.* TrOCR architecture. Taken from the original paper. Please refer to the [`VisionEncoderDecoder`] class on how to use this model. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/microsoft/unilm/tree/6f60612e7cc86a2a1ae85c47231507a587ab4e01/trocr). 
## Usage tips

- The quickest way to get started with TrOCR is by checking the [tutorial notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/TrOCR), which show how to use the model at inference time as well as fine-tuning on custom data.
- TrOCR is pre-trained in 2 stages before being fine-tuned on downstream datasets. It achieves state-of-the-art results on both printed (e.g. the [SROIE dataset](https://paperswithcode.com/dataset/sroie)) and handwritten (e.g. the [IAM Handwriting dataset](https://fki.tic.heia-fr.ch/databases/iam-handwriting-database)) text recognition tasks. For more information, see the [official models](https://huggingface.co/models?other=trocr).
- TrOCR is always used within the [VisionEncoderDecoder](vision-encoder-decoder) framework.

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with TrOCR. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

- A blog post on [Accelerating Document AI](https://huggingface.co/blog/document-ai) with TrOCR.
- A blog post on how to [Document AI](https://github.com/philschmid/document-ai-transformers) with TrOCR.
- A notebook on how to [finetune TrOCR on IAM Handwriting Database using Seq2SeqTrainer](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_Seq2SeqTrainer.ipynb).
- A notebook on [inference with TrOCR](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Inference_with_TrOCR_%2B_Gradio_demo.ipynb) and Gradio demo.
- A notebook on [finetune TrOCR on the IAM Handwriting Database](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_native_PyTorch.ipynb) using native PyTorch.
- A notebook on [evaluating TrOCR on the IAM test set](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Evaluating_TrOCR_base_handwritten_on_the_IAM_test_set.ipynb).
- [Causal language modeling](https://huggingface.co/docs/transformers/tasks/language_modeling) task guide.

⚡️ Inference

- An interactive demo on [TrOCR handwritten character recognition](https://huggingface.co/spaces/nielsr/TrOCR-handwritten).

## Inference

TrOCR's [`VisionEncoderDecoder`] model accepts images as input and makes use of [`~generation.GenerationMixin.generate`] to autoregressively generate text given the input image.

The [`ViTImageProcessor`/`DeiTImageProcessor`] class is responsible for preprocessing the input image and [`RobertaTokenizer`/`XLMRobertaTokenizer`] decodes the generated target tokens to the target string. The [`TrOCRProcessor`] wraps [`ViTImageProcessor`/`DeiTImageProcessor`] and [`RobertaTokenizer`/`XLMRobertaTokenizer`] into a single instance to both extract the input features and decode the predicted token ids.
- Step-by-step Optical Character Recognition (OCR) ``` py >>> from transformers import TrOCRProcessor, VisionEncoderDecoderModel >>> import requests >>> from PIL import Image >>> processor = TrOCRProcessor.from_pretrained(""microsoft/trocr-base-handwritten"") >>> model = VisionEncoderDecoderModel.from_pretrained(""microsoft/trocr-base-handwritten"") >>> # load image from the IAM dataset >>> url = ""https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg"" >>> image = Image.open(requests.get(url, stream=True).raw).convert(""RGB"") >>> pixel_values = processor(image, return_tensors=""pt"").pixel_values >>> generated_ids = model.generate(pixel_values) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] See the [model hub](https://huggingface.co/models?filter=trocr) to look for TrOCR checkpoints. ## TrOCRConfig [[autodoc]] TrOCRConfig ## TrOCRProcessor [[autodoc]] TrOCRProcessor - __call__ - from_pretrained - save_pretrained - batch_decode - decode ## TrOCRForCausalLM [[autodoc]] TrOCRForCausalLM - forward " model_doc/xlnet.md," # XLNet ## Overview The XLNet model was proposed in [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. XLnet is an extension of the Transformer-XL model pre-trained using an autoregressive method to learn bidirectional contexts by maximizing the expected likelihood over all permutations of the input sequence factorization order. The abstract from the paper is the following: *With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking.* This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/zihangdai/xlnet/). ## Usage tips - The specific attention pattern can be controlled at training and test time using the `perm_mask` input. - Due to the difficulty of training a fully auto-regressive model over various factorization order, XLNet is pretrained using only a sub-set of the output tokens as target which are selected with the `target_mapping` input. - To use XLNet for sequential decoding (i.e. not in fully bi-directional setting), use the `perm_mask` and `target_mapping` inputs to control the attention span and outputs (see examples in *examples/pytorch/text-generation/run_generation.py*) - XLNet is one of the few models that has no sequence length limit. 
- XLNet is not a traditional autoregressive model but uses a training strategy that builds on that. It permutes the tokens in the sentence, then allows the model to use the last n tokens to predict the token n+1. Since this is all done with a mask, the sentence is actually fed in the model in the right order, but instead of masking the first n tokens for n+1, XLNet uses a mask that hides the previous tokens in some given permutation of 1,…,sequence length. - XLNet also uses the same recurrence mechanism as Transformer-XL to build long-term dependencies. ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## XLNetConfig [[autodoc]] XLNetConfig ## XLNetTokenizer [[autodoc]] XLNetTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## XLNetTokenizerFast [[autodoc]] XLNetTokenizerFast ## XLNet specific outputs [[autodoc]] models.xlnet.modeling_xlnet.XLNetModelOutput [[autodoc]] models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput [[autodoc]] models.xlnet.modeling_xlnet.XLNetForSequenceClassificationOutput [[autodoc]] models.xlnet.modeling_xlnet.XLNetForMultipleChoiceOutput [[autodoc]] models.xlnet.modeling_xlnet.XLNetForTokenClassificationOutput [[autodoc]] models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringSimpleOutput [[autodoc]] models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput [[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput [[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput [[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput [[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput [[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput [[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForQuestionAnsweringSimpleOutput ## XLNetModel [[autodoc]] XLNetModel - forward ## XLNetLMHeadModel [[autodoc]] XLNetLMHeadModel - forward ## XLNetForSequenceClassification [[autodoc]] XLNetForSequenceClassification - forward ## XLNetForMultipleChoice [[autodoc]] XLNetForMultipleChoice - forward ## XLNetForTokenClassification [[autodoc]] XLNetForTokenClassification - forward ## XLNetForQuestionAnsweringSimple [[autodoc]] XLNetForQuestionAnsweringSimple - forward ## XLNetForQuestionAnswering [[autodoc]] XLNetForQuestionAnswering - forward ## TFXLNetModel [[autodoc]] TFXLNetModel - call ## TFXLNetLMHeadModel [[autodoc]] TFXLNetLMHeadModel - call ## TFXLNetForSequenceClassification [[autodoc]] TFXLNetForSequenceClassification - call ## TFLNetForMultipleChoice [[autodoc]] TFXLNetForMultipleChoice - call ## TFXLNetForTokenClassification [[autodoc]] TFXLNetForTokenClassification - call ## TFXLNetForQuestionAnsweringSimple [[autodoc]] TFXLNetForQuestionAnsweringSimple - call " model_doc/conditional_detr.md," # Conditional DETR ## Overview The Conditional DETR model was proposed in [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang. Conditional DETR presents a conditional cross-attention mechanism for fast DETR training. Conditional DETR converges 6.7× to 10× faster than DETR. 
The abstract from the paper is the following: *The recently-developed DETR approach applies the transformer encoder and decoder architecture to object detection and achieves promising performance. In this paper, we handle the critical issue, slow training convergence, and present a conditional cross-attention mechanism for fast DETR training. Our approach is motivated by that the cross-attention in DETR relies highly on the content embeddings for localizing the four extremities and predicting the box, which increases the need for high-quality content embeddings and thus the training difficulty. Our approach, named conditional DETR, learns a conditional spatial query from the decoder embedding for decoder multi-head cross-attention. The benefit is that through the conditional spatial query, each cross-attention head is able to attend to a band containing a distinct region, e.g., one object extremity or a region inside the object box. This narrows down the spatial range for localizing the distinct regions for object classification and box regression, thus relaxing the dependence on the content embeddings and easing the training. Empirical results show that conditional DETR converges 6.7× faster for the backbones R50 and R101 and 10× faster for stronger backbones DC5-R50 and DC5-R101. Code is available at https://github.com/Atten4Vis/ConditionalDETR.* Conditional DETR shows much faster convergence compared to the original DETR. Taken from the original paper. This model was contributed by [DepuMeng](https://huggingface.co/DepuMeng). The original code can be found [here](https://github.com/Atten4Vis/ConditionalDETR). ## Resources - [Object detection task guide](../tasks/object_detection) ## ConditionalDetrConfig [[autodoc]] ConditionalDetrConfig ## ConditionalDetrImageProcessor [[autodoc]] ConditionalDetrImageProcessor - preprocess - post_process_object_detection - post_process_instance_segmentation - post_process_semantic_segmentation - post_process_panoptic_segmentation ## ConditionalDetrFeatureExtractor [[autodoc]] ConditionalDetrFeatureExtractor - __call__ - post_process_object_detection - post_process_instance_segmentation - post_process_semantic_segmentation - post_process_panoptic_segmentation ## ConditionalDetrModel [[autodoc]] ConditionalDetrModel - forward ## ConditionalDetrForObjectDetection [[autodoc]] ConditionalDetrForObjectDetection - forward ## ConditionalDetrForSegmentation [[autodoc]] ConditionalDetrForSegmentation - forward " model_doc/gptsan-japanese.md," # GPTSAN-japanese ## Overview The GPTSAN-japanese model was released in the repository by Toshiyuki Sakamoto (tanreinama). GPTSAN is a Japanese language model using Switch Transformer. It has the same structure as the model introduced as Prefix LM in the T5 paper, and support both Text Generation and Masked Language Modeling tasks. These basic tasks similarly can fine-tune for translation or summarization. ### Usage example The `generate()` method can be used to generate text using GPTSAN-Japanese model. 
thon >>> from transformers import AutoModel, AutoTokenizer >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained(""Tanrei/GPTSAN-japanese"") >>> model = AutoModel.from_pretrained(""Tanrei/GPTSAN-japanese"").cuda() >>> x_tok = tokenizer(""は、"", prefix_text=""織田信長"", return_tensors=""pt"") >>> torch.manual_seed(0) >>> gen_tok = model.generate(x_tok.input_ids.cuda(), token_type_ids=x_tok.token_type_ids.cuda(), max_new_tokens=20) >>> tokenizer.decode(gen_tok[0]) '織田信長は、2004年に『戦国BASARA』のために、豊臣秀吉' ## GPTSAN Features GPTSAN has some unique features. It has a model structure of Prefix-LM. It works as a shifted Masked Language Model for Prefix Input tokens. Un-prefixed inputs behave like normal generative models. The Spout vector is a GPTSAN specific input. Spout is pre-trained with random inputs, but you can specify a class of text or an arbitrary vector during fine-tuning. This allows you to indicate the tendency of the generated text. GPTSAN has a sparse Feed Forward based on Switch-Transformer. You can also add other layers and train them partially. See the original GPTSAN repository for details. ### Prefix-LM Model GPTSAN has the structure of the model named Prefix-LM in the `T5` paper. (The original GPTSAN repository calls it `hybrid`) In GPTSAN, the `Prefix` part of Prefix-LM, that is, the input position that can be referenced by both tokens, can be specified with any length. Arbitrary lengths can also be specified differently for each batch. This length applies to the text entered in `prefix_text` for the tokenizer. The tokenizer returns the mask of the `Prefix` part of Prefix-LM as `token_type_ids`. The model treats the part where `token_type_ids` is 1 as a `Prefix` part, that is, the input can refer to both tokens before and after. ## Usage tips Specifying the Prefix part is done with a mask passed to self-attention. When token_type_ids=None or all zero, it is equivalent to regular causal mask for example: >>> x_token = tokenizer(""アイウエ"") input_ids: | SOT | SEG | ア | イ | ウ | エ | token_type_ids: | 1 | 0 | 0 | 0 | 0 | 0 | prefix_lm_mask: SOT | 1 0 0 0 0 0 | SEG | 1 1 0 0 0 0 | ア | 1 1 1 0 0 0 | イ | 1 1 1 1 0 0 | ウ | 1 1 1 1 1 0 | エ | 1 1 1 1 1 1 | >>> x_token = tokenizer("""", prefix_text=""アイウエ"") input_ids: | SOT | ア | イ | ウ | エ | SEG | token_type_ids: | 1 | 1 | 1 | 1 | 1 | 0 | prefix_lm_mask: SOT | 1 1 1 1 1 0 | ア | 1 1 1 1 1 0 | イ | 1 1 1 1 1 0 | ウ | 1 1 1 1 1 0 | エ | 1 1 1 1 1 0 | SEG | 1 1 1 1 1 1 | >>> x_token = tokenizer(""ウエ"", prefix_text=""アイ"") input_ids: | SOT | ア | イ | SEG | ウ | エ | token_type_ids: | 1 | 1 | 1 | 0 | 0 | 0 | prefix_lm_mask: SOT | 1 1 1 0 0 0 | ア | 1 1 1 0 0 0 | イ | 1 1 1 0 0 0 | SEG | 1 1 1 1 0 0 | ウ | 1 1 1 1 1 0 | エ | 1 1 1 1 1 1 | ### Spout Vector A Spout Vector is a special vector for controlling text generation. This vector is treated as the first embedding in self-attention to bring extraneous attention to the generated tokens. In the pre-trained model published from `Tanrei/GPTSAN-japanese`, the Spout Vector is a 128-dimensional vector that passes through 8 fully connected layers in the model and is projected into the space acting as external attention. The Spout Vector projected by the fully connected layer is split to be passed to all self-attentions. 
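As a minimal sketch of supplying a Spout Vector at inference time: the 128-dimensional shape follows the description above for `Tanrei/GPTSAN-japanese`, but the `spout` keyword argument name and the returned fields are assumptions of this sketch rather than something stated on this page.

```python
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
>>> model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese")

>>> # a (batch_size, 128) control vector; random here, but after fine-tuning it could encode
>>> # a text class or an arbitrary control signal (the `spout` keyword name is an assumption)
>>> spout = torch.rand((1, 128))

>>> inputs = tokenizer("は、", prefix_text="織田信長", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs, spout=spout)
>>> outputs.last_hidden_state.shape  # inspect the resulting hidden states
```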
## GPTSanJapaneseConfig [[autodoc]] GPTSanJapaneseConfig ## GPTSanJapaneseTokenizer [[autodoc]] GPTSanJapaneseTokenizer ## GPTSanJapaneseModel [[autodoc]] GPTSanJapaneseModel ## GPTSanJapaneseForConditionalGeneration [[autodoc]] GPTSanJapaneseForConditionalGeneration - forward " model_doc/groupvit.md," # GroupViT ## Overview The GroupViT model was proposed in [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. Inspired by [CLIP](clip), GroupViT is a vision-language model that can perform zero-shot semantic segmentation on any given vocabulary categories. The abstract from the paper is the following: *Grouping and recognition are important components of visual scene understanding, e.g., for object detection and semantic segmentation. With end-to-end deep learning systems, grouping of image regions usually happens implicitly via top-down supervision from pixel-level recognition labels. Instead, in this paper, we propose to bring back the grouping mechanism into deep networks, which allows semantic segments to emerge automatically with only text supervision. We propose a hierarchical Grouping Vision Transformer (GroupViT), which goes beyond the regular grid structure representation and learns to group image regions into progressively larger arbitrary-shaped segments. We train GroupViT jointly with a text encoder on a large-scale image-text dataset via contrastive losses. With only text supervision and without any pixel-level annotations, GroupViT learns to group together semantic regions and successfully transfers to the task of semantic segmentation in a zero-shot manner, i.e., without any further fine-tuning. It achieves a zero-shot accuracy of 52.3% mIoU on the PASCAL VOC 2012 and 22.4% mIoU on PASCAL Context datasets, and performs competitively to state-of-the-art transfer-learning methods requiring greater levels of supervision.* This model was contributed by [xvjiarui](https://huggingface.co/xvjiarui). The TensorFlow version was contributed by [ariG23498](https://huggingface.co/ariG23498) with the help of [Yih-Dar SHIEH](https://huggingface.co/ydshieh), [Amy Roberts](https://huggingface.co/amyeroberts), and [Joao Gante](https://huggingface.co/joaogante). The original code can be found [here](https://github.com/NVlabs/GroupViT). ## Usage tips - You may specify `output_segmentation=True` in the forward of `GroupViTModel` to get the segmentation logits of input texts. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GroupViT. - The quickest way to get started with GroupViT is by checking the [example notebooks](https://github.com/xvjiarui/GroupViT/blob/main/demo/GroupViT_hf_inference_notebook.ipynb) (which showcase zero-shot segmentation inference). - One can also check out the [HuggingFace Spaces demo](https://huggingface.co/spaces/xvjiarui/GroupViT) to play with GroupViT. 
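To make the `output_segmentation=True` usage tip above concrete, here is a minimal zero-shot segmentation sketch. The checkpoint name (`nvidia/groupvit-gcc-yfcc`) and the `segmentation_logits` output attribute are assumptions based on the released GroupViT checkpoints, not something documented on this page.

```python
>>> import requests
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoProcessor, GroupViTModel

>>> # checkpoint name is an assumption
>>> processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> texts = ["a photo of a cat", "a photo of a remote control"]

>>> inputs = processor(text=texts, images=image, padding=True, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs, output_segmentation=True)

>>> # one low-resolution segmentation map per text query (attribute name is an assumption)
>>> outputs.segmentation_logits.shape
```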
## GroupViTConfig [[autodoc]] GroupViTConfig - from_text_vision_configs ## GroupViTTextConfig [[autodoc]] GroupViTTextConfig ## GroupViTVisionConfig [[autodoc]] GroupViTVisionConfig ## GroupViTModel [[autodoc]] GroupViTModel - forward - get_text_features - get_image_features ## GroupViTTextModel [[autodoc]] GroupViTTextModel - forward ## GroupViTVisionModel [[autodoc]] GroupViTVisionModel - forward ## TFGroupViTModel [[autodoc]] TFGroupViTModel - call - get_text_features - get_image_features ## TFGroupViTTextModel [[autodoc]] TFGroupViTTextModel - call ## TFGroupViTVisionModel [[autodoc]] TFGroupViTVisionModel - call " model_doc/longformer.md," # Longformer ## Overview The Longformer model was presented in [Longformer: The Long-Document Transformer](https://arxiv.org/pdf/2004.05150.pdf) by Iz Beltagy, Matthew E. Peters, Arman Cohan. The abstract from the paper is the following: *Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on WikiHop and TriviaQA.* This model was contributed by [beltagy](https://huggingface.co/beltagy). The Authors' code can be found [here](https://github.com/allenai/longformer). ## Usage tips - Since the Longformer is based on RoBERTa, it doesn't have `token_type_ids`. You don't need to indicate which token belongs to which segment. Just separate your segments with the separation token `tokenizer.sep_token` (or ``). - A transformer model replacing the attention matrices by sparse matrices to go faster. Often, the local context (e.g., what are the two tokens left and right?) is enough to take action for a given token. Some preselected input tokens are still given global attention, but the attention matrix has way less parameters, resulting in a speed-up. See the local attention section for more information. ## Longformer Self Attention Longformer self attention employs self attention on both a ""local"" context and a ""global"" context. Most tokens only attend ""locally"" to each other meaning that each token attends to its \\(\frac{1}{2} w\\) previous tokens and \\(\frac{1}{2} w\\) succeeding tokens with \\(w\\) being the window length as defined in `config.attention_window`. Note that `config.attention_window` can be of type `List` to define a different \\(w\\) for each layer. A selected few tokens attend ""globally"" to all other tokens, as it is conventionally done for all tokens in `BertSelfAttention`. Note that ""locally"" and ""globally"" attending tokens are projected by different query, key and value matrices. 
Also note that every ""locally"" attending token not only attends to tokens within its window \\(w\\), but also to all ""globally"" attending tokens so that global attention is *symmetric*. The user can define which tokens attend ""locally"" and which tokens attend ""globally"" by setting the tensor `global_attention_mask` at run-time appropriately. All Longformer models employ the following logic for `global_attention_mask`: - 0: the token attends ""locally"", - 1: the token attends ""globally"". For more information please also refer to [`~LongformerModel.forward`] method. Using Longformer self attention, the memory and time complexity of the query-key matmul operation, which usually represents the memory and time bottleneck, can be reduced from \\(\mathcal{O}(n_s \times n_s)\\) to \\(\mathcal{O}(n_s \times w)\\), with \\(n_s\\) being the sequence length and \\(w\\) being the average window size. It is assumed that the number of ""globally"" attending tokens is insignificant as compared to the number of ""locally"" attending tokens. For more information, please refer to the official [paper](https://arxiv.org/pdf/2004.05150.pdf). ## Training [`LongformerForMaskedLM`] is trained the exact same way [`RobertaForMaskedLM`] is trained and should be used as follows: thon input_ids = tokenizer.encode(""This is a sentence from [MASK] training data"", return_tensors=""pt"") mlm_labels = tokenizer.encode(""This is a sentence from the training data"", return_tensors=""pt"") loss = model(input_ids, labels=input_ids, masked_lm_labels=mlm_labels)[0] ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## LongformerConfig [[autodoc]] LongformerConfig ## LongformerTokenizer [[autodoc]] LongformerTokenizer ## LongformerTokenizerFast [[autodoc]] LongformerTokenizerFast ## Longformer specific outputs [[autodoc]] models.longformer.modeling_longformer.LongformerBaseModelOutput [[autodoc]] models.longformer.modeling_longformer.LongformerBaseModelOutputWithPooling [[autodoc]] models.longformer.modeling_longformer.LongformerMaskedLMOutput [[autodoc]] models.longformer.modeling_longformer.LongformerQuestionAnsweringModelOutput [[autodoc]] models.longformer.modeling_longformer.LongformerSequenceClassifierOutput [[autodoc]] models.longformer.modeling_longformer.LongformerMultipleChoiceModelOutput [[autodoc]] models.longformer.modeling_longformer.LongformerTokenClassifierOutput [[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerBaseModelOutput [[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerBaseModelOutputWithPooling [[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerMaskedLMOutput [[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerQuestionAnsweringModelOutput [[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerSequenceClassifierOutput [[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerMultipleChoiceModelOutput [[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerTokenClassifierOutput ## LongformerModel [[autodoc]] LongformerModel - forward ## LongformerForMaskedLM [[autodoc]] LongformerForMaskedLM - forward ## LongformerForSequenceClassification [[autodoc]] LongformerForSequenceClassification - forward ## LongformerForMultipleChoice 
[[autodoc]] LongformerForMultipleChoice - forward ## LongformerForTokenClassification [[autodoc]] LongformerForTokenClassification - forward ## LongformerForQuestionAnswering [[autodoc]] LongformerForQuestionAnswering - forward ## TFLongformerModel [[autodoc]] TFLongformerModel - call ## TFLongformerForMaskedLM [[autodoc]] TFLongformerForMaskedLM - call ## TFLongformerForQuestionAnswering [[autodoc]] TFLongformerForQuestionAnswering - call ## TFLongformerForSequenceClassification [[autodoc]] TFLongformerForSequenceClassification - call ## TFLongformerForTokenClassification [[autodoc]] TFLongformerForTokenClassification - call ## TFLongformerForMultipleChoice [[autodoc]] TFLongformerForMultipleChoice - call " model_doc/informer.md," # Informer ## Overview The Informer model was proposed in [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting ](https://arxiv.org/abs/2012.07436) by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. This method introduces a Probabilistic Attention mechanism to select the ""active"" queries rather than the ""lazy"" queries and provides a sparse Transformer thus mitigating the quadratic compute and memory requirements of vanilla attention. The abstract from the paper is the following: *Many real-world applications require the prediction of long sequence time-series, such as electricity consumption planning. Long sequence time-series forecasting (LSTF) demands a high prediction capacity of the model, which is the ability to capture precise long-range dependency coupling between output and input efficiently. Recent studies have shown the potential of Transformer to increase the prediction capacity. However, there are several severe issues with Transformer that prevent it from being directly applicable to LSTF, including quadratic time complexity, high memory usage, and inherent limitation of the encoder-decoder architecture. To address these issues, we design an efficient transformer-based model for LSTF, named Informer, with three distinctive characteristics: (i) a ProbSparse self-attention mechanism, which achieves O(L logL) in time complexity and memory usage, and has comparable performance on sequences' dependency alignment. (ii) the self-attention distilling highlights dominating attention by halving cascading layer input, and efficiently handles extreme long input sequences. (iii) the generative style decoder, while conceptually simple, predicts the long time-series sequences at one forward operation rather than a step-by-step way, which drastically improves the inference speed of long-sequence predictions. Extensive experiments on four large-scale datasets demonstrate that Informer significantly outperforms existing methods and provides a new solution to the LSTF problem.* This model was contributed by [elisim](https://huggingface.co/elisim) and [kashif](https://huggingface.co/kashif). The original code can be found [here](https://github.com/zhouhaoyi/Informer2020). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
- Check out the Informer blog-post in HuggingFace blog: [Multivariate Probabilistic Time Series Forecasting with Informer](https://huggingface.co/blog/informer) ## InformerConfig [[autodoc]] InformerConfig ## InformerModel [[autodoc]] InformerModel - forward ## InformerForPrediction [[autodoc]] InformerForPrediction - forward" model_doc/tvlt.md," # TVLT ## Overview The TVLT model was proposed in [TVLT: Textless Vision-Language Transformer](https://arxiv.org/abs/2209.14156) by Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal (the first three authors contributed equally). The Textless Vision-Language Transformer (TVLT) is a model that uses raw visual and audio inputs for vision-and-language representation learning, without using text-specific modules such as tokenization or automatic speech recognition (ASR). It can perform various audiovisual and vision-language tasks like retrieval, question answering, etc. The abstract from the paper is the following: *In this work, we present the Textless Vision-Language Transformer (TVLT), where homogeneous transformer blocks take raw visual and audio inputs for vision-and-language representation learning with minimal modality-specific design, and do not use text-specific modules such as tokenization or automatic speech recognition (ASR). TVLT is trained by reconstructing masked patches of continuous video frames and audio spectrograms (masked autoencoding) and contrastive modeling to align video and audio. TVLT attains performance comparable to its text-based counterpart on various multimodal tasks, such as visual question answering, image retrieval, video retrieval, and multimodal sentiment analysis, with 28x faster inference speed and only 1/3 of the parameters. Our findings suggest the possibility of learning compact and efficient visual-linguistic representations from low-level visual and audio signals without assuming the prior existence of text.* TVLT architecture. Taken from the original paper. The original code can be found [here](https://github.com/zinengtang/TVLT). This model was contributed by [Zineng Tang](https://huggingface.co/ZinengTang). ## Usage tips - TVLT is a model that takes both `pixel_values` and `audio_values` as input. One can use [`TvltProcessor`] to prepare data for the model. This processor wraps an image processor (for the image/video modality) and an audio feature extractor (for the audio modality) into one. - TVLT is trained with images/videos and audios of various sizes: the authors resize and crop the input images/videos to 224 and limit the length of audio spectrogram to 2048. To make batching of videos and audios possible, the authors use a `pixel_mask` that indicates which pixels are real/padding and `audio_mask` that indicates which audio values are real/padding. - The design of TVLT is very similar to that of a standard Vision Transformer (ViT) and masked autoencoder (MAE) as in [ViTMAE](vitmae). The difference is that the model includes embedding layers for the audio modality. - The PyTorch version of this model is only available in torch 1.10 and higher. 
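As a rough sketch of preparing multimodal inputs with [`TvltProcessor`] as described in the tips above: the checkpoint name (`ZinengTang/tvlt-base`) and the exact argument names (`images`, `audio`, `sampling_rate`) are assumptions, so treat this as illustrative only.

```python
>>> import numpy as np
>>> from transformers import TvltProcessor

>>> # checkpoint name is an assumption
>>> processor = TvltProcessor.from_pretrained("ZinengTang/tvlt-base")

>>> # dummy inputs: 8 RGB video frames of size 224x224 and ~2 s of 44.1 kHz mono audio
>>> video_frames = list(np.random.rand(8, 224, 224, 3))
>>> audio = np.random.rand(88200)

>>> # the wrapped image processor handles the frames and the feature extractor handles the audio;
>>> # the keyword names below are assumptions
>>> inputs = processor(images=video_frames, audio=audio, sampling_rate=44100, return_tensors="pt")
>>> print({name: tensor.shape for name, tensor in inputs.items()})
```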
## TvltConfig [[autodoc]] TvltConfig ## TvltProcessor [[autodoc]] TvltProcessor - __call__ ## TvltImageProcessor [[autodoc]] TvltImageProcessor - preprocess ## TvltFeatureExtractor [[autodoc]] TvltFeatureExtractor - __call__ ## TvltModel [[autodoc]] TvltModel - forward ## TvltForPreTraining [[autodoc]] TvltForPreTraining - forward ## TvltForAudioVisualClassification [[autodoc]] TvltForAudioVisualClassification - forward " model_doc/table-transformer.md," # Table Transformer ## Overview The Table Transformer model was proposed in [PubTables-1M: Towards comprehensive table extraction from unstructured documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham. The authors introduce a new dataset, PubTables-1M, to benchmark progress in table extraction from unstructured documents, as well as table structure recognition and functional analysis. The authors train 2 [DETR](detr) models, one for table detection and one for table structure recognition, dubbed Table Transformers. The abstract from the paper is the following: *Recently, significant progress has been made applying machine learning to the problem of table structure inference and extraction from unstructured documents. However, one of the greatest challenges remains the creation of datasets with complete, unambiguous ground truth at scale. To address this, we develop a new, more comprehensive dataset for table extraction, called PubTables-1M. PubTables-1M contains nearly one million tables from scientific articles, supports multiple input modalities, and contains detailed header and location information for table structures, making it useful for a wide variety of modeling approaches. It also addresses a significant source of ground truth inconsistency observed in prior datasets called oversegmentation, using a novel canonicalization procedure. We demonstrate that these improvements lead to a significant increase in training performance and a more reliable estimate of model performance at evaluation for table structure recognition. Further, we show that transformer-based object detection models trained on PubTables-1M produce excellent results for all three tasks of detection, structure recognition, and functional analysis without the need for any special customization for these tasks.* Table detection and table structure recognition clarified. Taken from the original paper. The authors released 2 models, one for [table detection](https://huggingface.co/microsoft/table-transformer-detection) in documents, one for [table structure recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) (the task of recognizing the individual rows, columns etc. in a table). This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/microsoft/table-transformer). ## Resources - A demo notebook for the Table Transformer can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Table%20Transformer). - It turns out padding of images is quite important for detection. An interesting Github thread with replies from the authors can be found [here](https://github.com/microsoft/table-transformer/issues/68). 
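The following is a minimal sketch of running table detection with the checkpoint mentioned above and post-processing the outputs into bounding boxes; the `hf_hub_download` call pointing at a sample PDF page is an assumption of this sketch, and any document image can be substituted.

```python
>>> import torch
>>> from huggingface_hub import hf_hub_download
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, TableTransformerForObjectDetection

>>> # sample page image; the repo id and filename here are illustrative assumptions
>>> file_path = hf_hub_download(repo_id="nielsr/example-pdf", repo_type="dataset", filename="example_pdf.png")
>>> image = Image.open(file_path).convert("RGB")

>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/table-transformer-detection")
>>> model = TableTransformerForObjectDetection.from_pretrained("microsoft/table-transformer-detection")

>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # convert raw outputs to boxes in (x_min, y_min, x_max, y_max) format at the original image size
>>> target_sizes = torch.tensor([image.size[::-1]])
>>> results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
>>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
...     print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```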
## TableTransformerConfig [[autodoc]] TableTransformerConfig ## TableTransformerModel [[autodoc]] TableTransformerModel - forward ## TableTransformerForObjectDetection [[autodoc]] TableTransformerForObjectDetection - forward " model_doc/wav2vec2-conformer.md," # Wav2Vec2-Conformer ## Overview The Wav2Vec2-Conformer was added to an updated version of [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino. The official results of the model can be found in Table 3 and Table 4 of the paper. The Wav2Vec2-Conformer weights were released by the Meta AI team within the [Fairseq library](https://github.com/pytorch/fairseq/blob/main/examples/wav2vec/README.md#pre-trained-models). This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec). ## Usage tips - Wav2Vec2-Conformer follows the same architecture as Wav2Vec2, but replaces the *Attention*-block with a *Conformer*-block as introduced in [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100). - For the same number of layers, Wav2Vec2-Conformer requires more parameters than Wav2Vec2, but also yields an improved word error rate. - Wav2Vec2-Conformer uses the same tokenizer and feature extractor as Wav2Vec2. - Wav2Vec2-Conformer can use either no relative position embeddings, Transformer-XL-like position embeddings, or rotary position embeddings by setting the correct `config.position_embeddings_type`. ## Resources - [Audio classification task guide](../tasks/audio_classification) - [Automatic speech recognition task guide](../tasks/asr) ## Wav2Vec2ConformerConfig [[autodoc]] Wav2Vec2ConformerConfig ## Wav2Vec2Conformer specific outputs [[autodoc]] models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput ## Wav2Vec2ConformerModel [[autodoc]] Wav2Vec2ConformerModel - forward ## Wav2Vec2ConformerForCTC [[autodoc]] Wav2Vec2ConformerForCTC - forward ## Wav2Vec2ConformerForSequenceClassification [[autodoc]] Wav2Vec2ConformerForSequenceClassification - forward ## Wav2Vec2ConformerForAudioFrameClassification [[autodoc]] Wav2Vec2ConformerForAudioFrameClassification - forward ## Wav2Vec2ConformerForXVector [[autodoc]] Wav2Vec2ConformerForXVector - forward ## Wav2Vec2ConformerForPreTraining [[autodoc]] Wav2Vec2ConformerForPreTraining - forward " model_doc/swin.md," # Swin Transformer ## Overview The Swin Transformer was proposed in [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. The abstract from the paper is the following: *This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with \bold{S}hifted \bold{win}dows. 
The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures.* Swin Transformer architecture. Taken from the original paper. This model was contributed by [novice03](https://huggingface.co/novice03). The Tensorflow version of this model was contributed by [amyeroberts](https://huggingface.co/amyeroberts). The original code can be found [here](https://github.com/microsoft/Swin-Transformer). ## Usage tips - Swin pads the inputs supporting any input height and width (if divisible by `32`). - Swin can be used as a *backbone*. When `output_hidden_states = True`, it will output both `hidden_states` and `reshaped_hidden_states`. The `reshaped_hidden_states` have a shape of `(batch, num_channels, height, width)` rather than `(batch_size, sequence_length, num_channels)`. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Swin Transformer. - [`SwinForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) Besides that: - [`SwinForMaskedImageModeling`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining). If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## SwinConfig [[autodoc]] SwinConfig ## SwinModel [[autodoc]] SwinModel - forward ## SwinForMaskedImageModeling [[autodoc]] SwinForMaskedImageModeling - forward ## SwinForImageClassification [[autodoc]] transformers.SwinForImageClassification - forward ## TFSwinModel [[autodoc]] TFSwinModel - call ## TFSwinForMaskedImageModeling [[autodoc]] TFSwinForMaskedImageModeling - call ## TFSwinForImageClassification [[autodoc]] transformers.TFSwinForImageClassification - call " model_doc/bert-generation.md," # BertGeneration ## Overview The BertGeneration model is a BERT model that can be leveraged for sequence-to-sequence tasks using [`EncoderDecoderModel`] as proposed in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 
The abstract from the paper is the following: *Unsupervised pretraining of large neural models has recently revolutionized Natural Language Processing. By warm-starting from the publicly released checkpoints, NLP practitioners have pushed the state-of-the-art on multiple benchmarks while saving significant amounts of compute time. So far the focus has been mainly on the Natural Language Understanding tasks. In this paper, we demonstrate the efficacy of pre-trained checkpoints for Sequence Generation. We developed a Transformer-based sequence-to-sequence model that is compatible with publicly available pre-trained BERT, GPT-2 and RoBERTa checkpoints and conducted an extensive empirical study on the utility of initializing our model, both encoder and decoder, with these checkpoints. Our models result in new state-of-the-art results on Machine Translation, Text Summarization, Sentence Splitting, and Sentence Fusion.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://tfhub.dev/s?module-type=text-generation&subtype=module,placeholder). ## Usage examples and tips The model can be used in combination with the [`EncoderDecoderModel`] to leverage two pretrained BERT checkpoints for subsequent fine-tuning: thon >>> # leverage checkpoints for Bert2Bert model >>> # use BERT's cls token as BOS token and sep token as EOS token >>> encoder = BertGenerationEncoder.from_pretrained(""bert-large-uncased"", bos_token_id=101, eos_token_id=102) >>> # add cross attention layers and use BERT's cls token as BOS token and sep token as EOS token >>> decoder = BertGenerationDecoder.from_pretrained( ""bert-large-uncased"", add_cross_attention=True, is_decoder=True, bos_token_id=101, eos_token_id=102 ) >>> bert2bert = EncoderDecoderModel(encoder=encoder, decoder=decoder) >>> # create tokenizer >>> tokenizer = BertTokenizer.from_pretrained(""bert-large-uncased"") >>> input_ids = tokenizer( ""This is a long article to summarize"", add_special_tokens=False, return_tensors=""pt"" ).input_ids >>> labels = tokenizer(""This is a short summary"", return_tensors=""pt"").input_ids >>> # train >>> loss = bert2bert(input_ids=input_ids, decoder_input_ids=labels, labels=labels).loss >>> loss.backward() Pretrained [`EncoderDecoderModel`] are also directly available in the model hub, e.g.: thon >>> # instantiate sentence fusion model >>> sentence_fuser = EncoderDecoderModel.from_pretrained(""google/roberta2roberta_L-24_discofuse"") >>> tokenizer = AutoTokenizer.from_pretrained(""google/roberta2roberta_L-24_discofuse"") >>> input_ids = tokenizer( ""This is the first sentence. This is the second sentence."", add_special_tokens=False, return_tensors=""pt"" ).input_ids >>> outputs = sentence_fuser.generate(input_ids) >>> print(tokenizer.decode(outputs[0])) Tips: - [`BertGenerationEncoder`] and [`BertGenerationDecoder`] should be used in combination with [`EncoderDecoder`]. - For summarization, sentence splitting, sentence fusion and translation, no special tokens are required for the input. Therefore, no EOS token should be added to the end of the input. 
## BertGenerationConfig [[autodoc]] BertGenerationConfig ## BertGenerationTokenizer [[autodoc]] BertGenerationTokenizer - save_vocabulary ## BertGenerationEncoder [[autodoc]] BertGenerationEncoder - forward ## BertGenerationDecoder [[autodoc]] BertGenerationDecoder - forward " model_doc/open-llama.md," # Open-Llama This model is in maintenance mode only, we don't accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.31.0. You can do so by running the following command: `pip install -U transformers==4.31.0`. This model differs from the [OpenLLaMA models](https://huggingface.co/models?search=openllama) on the Hugging Face Hub, which primarily use the [LLaMA](llama) architecture. ## Overview The Open-Llama model was proposed in the open source Open-Llama project by community developer s-JoL. The model is mainly based on LLaMA with some modifications, incorporating memory-efficient attention from Xformers, stable embedding from Bloom, and shared input-output embedding from PaLM. And the model is pre-trained on both Chinese and English, which gives it better performance on Chinese language tasks. This model was contributed by [s-JoL](https://huggingface.co/s-JoL). The original code was released on GitHub by [s-JoL](https://github.com/s-JoL), but is now removed. ## OpenLlamaConfig [[autodoc]] OpenLlamaConfig ## OpenLlamaModel [[autodoc]] OpenLlamaModel - forward ## OpenLlamaForCausalLM [[autodoc]] OpenLlamaForCausalLM - forward ## OpenLlamaForSequenceClassification [[autodoc]] OpenLlamaForSequenceClassification - forward " model_doc/deplot.md," # DePlot ## Overview DePlot was proposed in the paper [DePlot: One-shot visual language reasoning by plot-to-table translation](https://arxiv.org/abs/2212.10505) from Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun. The abstract of the paper states the following: *Visual language such as charts and plots is ubiquitous in the human world. Comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples and their reasoning capabilities are still much limited, especially on complex human-written queries. This paper presents the first one-shot solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over the translated text. The key in this method is a modality conversion module, named as DePlot, which translates the image of a plot or chart to a linearized table. The output of DePlot can then be directly used to prompt a pretrained large language model (LLM), exploiting the few-shot reasoning capabilities of LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing unified task formats and metrics, and train DePlot end-to-end on this task. DePlot can then be used off-the-shelf together with LLMs in a plug-and-play fashion. Compared with a SOTA model finetuned on more than >28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over finetuned SOTA on human-written queries from the task of chart QA.* DePlot is a model that is trained using `Pix2Struct` architecture. 
You can find more information about `Pix2Struct` in the [Pix2Struct documentation](https://huggingface.co/docs/transformers/main/en/model_doc/pix2struct). DePlot is a Visual Question Answering subset of `Pix2Struct` architecture. It renders the input question on the image and predicts the answer. ## Usage example Currently one checkpoint is available for DePlot: - `google/deplot`: DePlot fine-tuned on ChartQA dataset thon from transformers import AutoProcessor, Pix2StructForConditionalGeneration import requests from PIL import Image model = Pix2StructForConditionalGeneration.from_pretrained(""google/deplot"") processor = AutoProcessor.from_pretrained(""google/deplot"") url = ""https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/5090.png"" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(images=image, text=""Generate underlying data table of the figure below:"", return_tensors=""pt"") predictions = model.generate(**inputs, max_new_tokens=512) print(processor.decode(predictions[0], skip_special_tokens=True)) ## Fine-tuning To fine-tune DePlot, refer to the pix2struct [fine-tuning notebook](https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_pix2struct.ipynb). For `Pix2Struct` models, we have found out that fine-tuning the model with Adafactor and cosine learning rate scheduler leads to faster convergence: thon from transformers.optimization import Adafactor, get_cosine_schedule_with_warmup optimizer = Adafactor(self.parameters(), scale_parameter=False, relative_step=False, lr=0.01, weight_decay=1e-05) scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=1000, num_training_steps=40000) DePlot is a model trained using `Pix2Struct` architecture. For API reference, see [`Pix2Struct` documentation](pix2struct). " model_doc/bartpho.md," # BARTpho ## Overview The BARTpho model was proposed in [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. The abstract from the paper is the following: *We present BARTpho with two versions -- BARTpho_word and BARTpho_syllable -- the first public large-scale monolingual sequence-to-sequence models pre-trained for Vietnamese. Our BARTpho uses the ""large"" architecture and pre-training scheme of the sequence-to-sequence denoising model BART, thus especially suitable for generative NLP tasks. Experiments on a downstream task of Vietnamese text summarization show that in both automatic and human evaluations, our BARTpho outperforms the strong baseline mBART and improves the state-of-the-art. We release BARTpho to facilitate future research and applications of generative Vietnamese NLP tasks.* This model was contributed by [dqnguyen](https://huggingface.co/dqnguyen). The original code can be found [here](https://github.com/VinAIResearch/BARTpho). 
## Usage example thon >>> import torch >>> from transformers import AutoModel, AutoTokenizer >>> bartpho = AutoModel.from_pretrained(""vinai/bartpho-syllable"") >>> tokenizer = AutoTokenizer.from_pretrained(""vinai/bartpho-syllable"") >>> line = ""Chúng tôi là những nghiên cứu viên."" >>> input_ids = tokenizer(line, return_tensors=""pt"") >>> with torch.no_grad(): features = bartpho(**input_ids) # Models outputs are now tuples >>> # With TensorFlow 2.0+: >>> from transformers import TFAutoModel >>> bartpho = TFAutoModel.from_pretrained(""vinai/bartpho-syllable"") >>> input_ids = tokenizer(line, return_tensors=""tf"") >>> features = bartpho(**input_ids) ## Usage tips - Following mBART, BARTpho uses the ""large"" architecture of BART with an additional layer-normalization layer on top of both the encoder and decoder. Thus, usage examples in the [documentation of BART](bart), when adapting to use with BARTpho, should be adjusted by replacing the BART-specialized classes with the mBART-specialized counterparts. For example: thon >>> from transformers import MBartForConditionalGeneration >>> bartpho = MBartForConditionalGeneration.from_pretrained(""vinai/bartpho-syllable"") >>> TXT = ""Chúng tôi là nghiên cứu viên."" >>> input_ids = tokenizer([TXT], return_tensors=""pt"")[""input_ids""] >>> logits = bartpho(input_ids).logits >>> masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item() >>> probs = logits[0, masked_index].softmax(dim=0) >>> values, predictions = probs.topk(5) >>> print(tokenizer.decode(predictions).split()) - This implementation is only for tokenization: ""monolingual_vocab_file"" consists of Vietnamese-specialized types extracted from the pre-trained SentencePiece model ""vocab_file"" that is available from the multilingual XLM-RoBERTa. Other languages, if employing this pre-trained multilingual SentencePiece model ""vocab_file"" for subword segmentation, can reuse BartphoTokenizer with their own language-specialized ""monolingual_vocab_file"". ## BartphoTokenizer [[autodoc]] BartphoTokenizer " model_doc/big_bird.md," # BigBird ## Overview The BigBird model was proposed in [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon, Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and others. BigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. In addition to sparse attention, BigBird also applies global attention as well as random attention to the input sequence. Theoretically, it has been shown that applying sparse, global, and random attention approximates full attention, while being computationally much more efficient for longer sequences. As a consequence of the capability to handle longer context, BigBird has shown improved performance on various long document NLP tasks, such as question answering and summarization, compared to BERT or RoBERTa. The abstract from the paper is the following: *Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. 
We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.* This model was contributed by [vasudevgupta](https://huggingface.co/vasudevgupta). The original code can be found [here](https://github.com/google-research/bigbird). ## Usage tips - For an in-detail explanation on how BigBird's attention works, see [this blog post](https://huggingface.co/blog/big-bird). - BigBird comes with 2 implementations: **original_full** & **block_sparse**. For the sequence length < 1024, using **original_full** is advised as there is no benefit in using **block_sparse** attention. - The code currently uses window size of 3 blocks and 2 global blocks. - Sequence length must be divisible by block size. - Current implementation supports only **ITC**. - Current implementation doesn't support **num_random_blocks = 0** - BigBird is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## BigBirdConfig [[autodoc]] BigBirdConfig ## BigBirdTokenizer [[autodoc]] BigBirdTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## BigBirdTokenizerFast [[autodoc]] BigBirdTokenizerFast ## BigBird specific outputs [[autodoc]] models.big_bird.modeling_big_bird.BigBirdForPreTrainingOutput ## BigBirdModel [[autodoc]] BigBirdModel - forward ## BigBirdForPreTraining [[autodoc]] BigBirdForPreTraining - forward ## BigBirdForCausalLM [[autodoc]] BigBirdForCausalLM - forward ## BigBirdForMaskedLM [[autodoc]] BigBirdForMaskedLM - forward ## BigBirdForSequenceClassification [[autodoc]] BigBirdForSequenceClassification - forward ## BigBirdForMultipleChoice [[autodoc]] BigBirdForMultipleChoice - forward ## BigBirdForTokenClassification [[autodoc]] BigBirdForTokenClassification - forward ## BigBirdForQuestionAnswering [[autodoc]] BigBirdForQuestionAnswering - forward ## FlaxBigBirdModel [[autodoc]] FlaxBigBirdModel - __call__ ## FlaxBigBirdForPreTraining [[autodoc]] FlaxBigBirdForPreTraining - __call__ ## FlaxBigBirdForCausalLM [[autodoc]] FlaxBigBirdForCausalLM - __call__ ## FlaxBigBirdForMaskedLM [[autodoc]] FlaxBigBirdForMaskedLM - __call__ ## FlaxBigBirdForSequenceClassification [[autodoc]] FlaxBigBirdForSequenceClassification - __call__ ## FlaxBigBirdForMultipleChoice [[autodoc]] FlaxBigBirdForMultipleChoice - __call__ ## FlaxBigBirdForTokenClassification [[autodoc]] FlaxBigBirdForTokenClassification - __call__ ## 
FlaxBigBirdForQuestionAnswering [[autodoc]] FlaxBigBirdForQuestionAnswering - __call__ " model_doc/t5.md," # T5 ## Overview The T5 model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by [Colin Raffel](https://huggingface.co/craffel), Noam Shazeer, [Adam Roberts](https://huggingface.co/adarob), Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, [Peter J. Liu](https://huggingface.co/peterjliu). The abstract from the paper is the following: *Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pretraining objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new ""Colossal Clean Crawled Corpus"", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.* All checkpoints can be found on the [hub](https://huggingface.co/models?search=t5). This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/google-research/text-to-text-transfer-transformer). ## Usage tips - T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format. T5 works well on a variety of tasks out-of-the-box by prepending a different prefix to the input corresponding to each task, e.g., for translation: *translate English to German: *, for summarization: *summarize: *. - The pretraining includes both supervised and self-supervised training. Supervised training is conducted on downstream tasks provided by the GLUE and SuperGLUE benchmarks (converting them into text-to-text tasks as explained above). - Self-supervised training uses corrupted tokens, by randomly removing 15% of the tokens and replacing them with individual sentinel tokens (if several consecutive tokens are marked for removal, the whole group is replaced with a single sentinel token). The input of the encoder is the corrupted sentence, the input of the decoder is the original sentence and the target is then the dropped out tokens delimited by their sentinel tokens. - T5 uses relative scalar embeddings. Encoder input padding can be done on the left and on the right. - See the [training](#training), [inference](#inference) and [scripts](#scripts) sections below for all details regarding usage. T5 comes in different sizes: - [t5-small](https://huggingface.co/t5-small) - [t5-base](https://huggingface.co/t5-base) - [t5-large](https://huggingface.co/t5-large) - [t5-3b](https://huggingface.co/t5-3b) - [t5-11b](https://huggingface.co/t5-11b). 
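As a quick way to see the prefix-based multi-task behaviour described in the usage tips above, a translation pipeline can be used with a T5 checkpoint. That the pipeline picks up the `translate English to German: ` prefix from the checkpoint's `task_specific_params` is an assumption of this sketch; the prefix can always be prepended manually instead.

```python
>>> from transformers import pipeline

>>> translator = pipeline("translation_en_to_de", model="t5-small")
>>> translator("The house is wonderful.")  # e.g. [{'translation_text': 'Das Haus ist wunderbar.'}]
```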
Based on the original T5 model, Google has released some follow-up works: - **T5v1.1**: T5v1.1 is an improved version of T5 with some architectural tweaks, and is pre-trained on C4 only without mixing in the supervised tasks. Refer to the documentation of T5v1.1 which can be found [here](t5v1.1). - **mT5**: mT5 is a multilingual T5 model. It is pre-trained on the mC4 corpus, which includes 101 languages. Refer to the documentation of mT5 which can be found [here](mt5). - **byT5**: byT5 is a T5 model pre-trained on byte sequences rather than SentencePiece subword token sequences. Refer to the documentation of byT5 which can be found [here](byt5). - **UL2**: UL2 is a T5 like model pretrained on various denoising objectives - **Flan-T5**: Flan is a pretraining methods that is based on prompting. The Flan-T5 are T5 models trained on the Flan collection of datasets which include: `taskmaster2`, `djaym7/wiki_dialog`, `deepmind/code_contests`, `lambada`, `gsm8k`, `aqua_rat`, `esnli`, `quasc` and `qed`. - **FLan-UL2** : the UL2 model finetuned using the ""Flan"" prompt tuning and dataset collection. - **UMT5**: UmT5 is a multilingual T5 model trained on an improved and refreshed mC4 multilingual corpus, 29 trillion characters across 107 language, using a new sampling method, UniMax. Refer to the documentation of mT5 which can be found [here](umt5). ## Training T5 is an encoder-decoder model and converts all NLP problems into a text-to-text format. It is trained using teacher forcing. This means that for training, we always need an input sequence and a corresponding target sequence. The input sequence is fed to the model using `input_ids`. The target sequence is shifted to the right, i.e., prepended by a start-sequence token and fed to the decoder using the `decoder_input_ids`. In teacher-forcing style, the target sequence is then appended by the EOS token and corresponds to the `labels`. The PAD token is hereby used as the start-sequence token. T5 can be trained / fine-tuned both in a supervised and unsupervised fashion. One can use [`T5ForConditionalGeneration`] (or the Tensorflow/Flax variant), which includes the language modeling head on top of the decoder. - Unsupervised denoising training In this setup, spans of the input sequence are masked by so-called sentinel tokens (*a.k.a* unique mask tokens) and the output sequence is formed as a concatenation of the same sentinel tokens and the *real* masked tokens. Each sentinel token represents a unique mask token for this sentence and should start with ``, ``, up to ``. As a default, 100 sentinel tokens are available in [`T5Tokenizer`]. For instance, the sentence ""The cute dog walks in the park"" with the masks put on ""cute dog"" and ""the"" should be processed as follows: thon >>> from transformers import T5Tokenizer, T5ForConditionalGeneration >>> tokenizer = T5Tokenizer.from_pretrained(""t5-small"") >>> model = T5ForConditionalGeneration.from_pretrained(""t5-small"") >>> input_ids = tokenizer(""The walks in park"", return_tensors=""pt"").input_ids >>> labels = tokenizer("" cute dog the "", return_tensors=""pt"").input_ids >>> # the forward function automatically creates the correct decoder_input_ids >>> loss = model(input_ids=input_ids, labels=labels).loss >>> loss.item() 3.7837 If you're interested in pre-training T5 on a new corpus, check out the [run_t5_mlm_flax.py](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling) script in the Examples directory. 
- Supervised training

In this setup, the input sequence and output sequence form a standard sequence-to-sequence input-output mapping. Suppose we want to fine-tune the model for translation and we have a training example: the input sequence "The house is wonderful." and the output sequence "Das Haus ist wunderbar.". They should be prepared for the model as follows:

```python
>>> from transformers import T5Tokenizer, T5ForConditionalGeneration

>>> tokenizer = T5Tokenizer.from_pretrained("t5-small")
>>> model = T5ForConditionalGeneration.from_pretrained("t5-small")

>>> input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
>>> labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids

>>> # the forward function automatically creates the correct decoder_input_ids
>>> loss = model(input_ids=input_ids, labels=labels).loss
>>> loss.item()
0.2542
```

As you can see, only 2 inputs are required for the model in order to compute a loss: `input_ids` (which are the `input_ids` of the encoded input sequence) and `labels` (which are the `input_ids` of the encoded target sequence). The model will automatically create the `decoder_input_ids` based on the `labels`, by shifting them one position to the right and prepending the `config.decoder_start_token_id`, which for T5 is equal to 0 (i.e. the id of the pad token). Also note the task prefix: we prepend the input sequence with 'translate English to German: ' before encoding it. This will help in improving the performance, as this task prefix was used during T5's pre-training.

However, the example above only shows a single training example. In practice, one trains deep learning models in batches. This entails that we must pad/truncate examples to the same length. For encoder-decoder models, one typically defines a `max_source_length` and a `max_target_length`, which determine the maximum length of the input and output sequences respectively (otherwise they are truncated). These should be carefully set depending on the task.

In addition, we must make sure that padding token ids of the `labels` are not taken into account by the loss function. In PyTorch and TensorFlow, this can be done by replacing them with -100, which is the `ignore_index` of the `CrossEntropyLoss`. In Flax, one can use the `decoder_attention_mask` to ignore padded tokens from the loss (see the [Flax summarization script](https://github.com/huggingface/transformers/tree/main/examples/flax/summarization) for details). We also pass `attention_mask` as an additional input to the model, which makes sure that padding tokens of the inputs are ignored. The code example below illustrates all of this.
```python
>>> from transformers import T5Tokenizer, T5ForConditionalGeneration
>>> import torch

>>> tokenizer = T5Tokenizer.from_pretrained("t5-small")
>>> model = T5ForConditionalGeneration.from_pretrained("t5-small")

>>> # the following 2 hyperparameters are task-specific
>>> max_source_length = 512
>>> max_target_length = 128

>>> # Suppose we have the following 2 training examples:
>>> input_sequence_1 = "Welcome to NYC"
>>> output_sequence_1 = "Bienvenue à NYC"

>>> input_sequence_2 = "HuggingFace is a company"
>>> output_sequence_2 = "HuggingFace est une entreprise"

>>> # encode the inputs
>>> task_prefix = "translate English to French: "
>>> input_sequences = [input_sequence_1, input_sequence_2]

>>> encoding = tokenizer(
...     [task_prefix + sequence for sequence in input_sequences],
...     padding="longest",
...     max_length=max_source_length,
...     truncation=True,
...     return_tensors="pt",
... )

>>> input_ids, attention_mask = encoding.input_ids, encoding.attention_mask

>>> # encode the targets
>>> target_encoding = tokenizer(
...     [output_sequence_1, output_sequence_2],
...     padding="longest",
...     max_length=max_target_length,
...     truncation=True,
...     return_tensors="pt",
... )
>>> labels = target_encoding.input_ids

>>> # replace padding token ids of the labels by -100 so they are ignored by the loss
>>> labels[labels == tokenizer.pad_token_id] = -100

>>> # forward pass
>>> loss = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels).loss
>>> loss.item()
0.188
```

Additional training tips:

- T5 models need a slightly higher learning rate than the default one set in the `Trainer` when using the AdamW optimizer. Typically, 1e-4 and 3e-4 work well for most problems (classification, summarization, translation, question answering, question generation). Note that T5 was pre-trained using the AdaFactor optimizer.
- According to [this forum post](https://discuss.huggingface.co/t/t5-finetuning-tips/684), task prefixes matter when (1) doing multi-task training (2) your task is similar or related to one of the supervised tasks used in T5's pre-training mixture (see Appendix D of the [paper](https://arxiv.org/pdf/1910.10683.pdf) for the task prefixes used).
- If training on TPU, it is recommended to pad all examples of the dataset to the same length, or to make use of *pad_to_multiple_of* so that a small number of predefined bucket sizes fit all examples. Dynamically padding batches to the longest example is not recommended on TPU, as it triggers a recompilation for every batch shape encountered during training and therefore significantly slows down training.

## Inference

At inference time, it is recommended to use [`~generation.GenerationMixin.generate`]. This method takes care of encoding the input and feeding the encoded hidden states via cross-attention layers to the decoder and auto-regressively generates the decoder output. Check out [this blog post](https://huggingface.co/blog/how-to-generate) to know all the details about generating text with Transformers. There's also [this blog post](https://huggingface.co/blog/encoder-decoder#encoder-decoder) which explains how generation works in general in encoder-decoder models.
```python
>>> from transformers import T5Tokenizer, T5ForConditionalGeneration

>>> tokenizer = T5Tokenizer.from_pretrained("t5-small")
>>> model = T5ForConditionalGeneration.from_pretrained("t5-small")

>>> input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
>>> outputs = model.generate(input_ids)
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Das Haus ist wunderbar.
```

Note that T5 uses the `pad_token_id` as the `decoder_start_token_id`, so when doing generation without using [`~generation.GenerationMixin.generate`], make sure you start it with the `pad_token_id`.

The example above only shows a single example. You can also do batched inference, like so:

```python
>>> from transformers import T5Tokenizer, T5ForConditionalGeneration

>>> tokenizer = T5Tokenizer.from_pretrained("t5-small")
>>> model = T5ForConditionalGeneration.from_pretrained("t5-small")

>>> task_prefix = "translate English to German: "
>>> # use different length sentences to test batching
>>> sentences = ["The house is wonderful.", "I like to work in NYC."]

>>> inputs = tokenizer([task_prefix + sentence for sentence in sentences], return_tensors="pt", padding=True)

>>> output_sequences = model.generate(
...     input_ids=inputs["input_ids"],
...     attention_mask=inputs["attention_mask"],
...     do_sample=False,  # disable sampling to test if batching affects output
... )

>>> print(tokenizer.batch_decode(output_sequences, skip_special_tokens=True))
['Das Haus ist wunderbar.', 'Ich arbeite gerne in NYC.']
```

Because T5 has been trained with the span-mask denoising objective, it can be used to predict the sentinel (masked-out) tokens during inference. The predicted tokens will then be placed between the sentinel tokens.

```python
>>> from transformers import T5Tokenizer, T5ForConditionalGeneration

>>> tokenizer = T5Tokenizer.from_pretrained("t5-small")
>>> model = T5ForConditionalGeneration.from_pretrained("t5-small")

>>> input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
>>> sequence_ids = model.generate(input_ids)
>>> sequences = tokenizer.batch_decode(sequence_ids)
>>> sequences
['<pad><extra_id_0> park offers<extra_id_1> the<extra_id_2> park.</s>']
```

## Performance

If you'd like faster training and inference, install [apex](https://github.com/NVIDIA/apex#quick-start) and then the model will automatically use `apex.normalization.FusedRMSNorm` instead of `T5LayerNorm`. The former uses an optimized fused kernel which is several times faster than the latter.

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with T5. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

- A notebook for how to [finetune T5 for classification and multiple choice](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb).
- A notebook for how to [finetune T5 for sentiment span extraction](https://colab.research.google.com/github/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb). 🌎
- A notebook for how to [finetune T5 for named entity recognition](https://colab.research.google.com/drive/1obr78FY_cBmWY5ODViCmzdY6O1KB65Vc?usp=sharing).
🌎 - A notebook for [Finetuning CodeT5 for generating docstrings from Ruby code](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/T5/Fine_tune_CodeT5_for_generating_docstrings_from_Ruby_code.ipynb). - A notebook to [Finetune T5-base-dutch to perform Dutch abstractive summarization on a TPU](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/T5/Fine_tuning_Dutch_T5_base_on_CNN_Daily_Mail_for_summarization_(on_TPU_using_HuggingFace_Accelerate).ipynb). - A notebook for how to [finetune T5 for summarization in PyTorch and track experiments with WandB](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb#scrollTo=OKRpFvYhBauC). 🌎 - A blog post on [Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker](https://huggingface.co/blog/sagemaker-distributed-training-seq2seq). - [`T5ForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization.ipynb). - [`TFT5ForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb). - [`FlaxT5ForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/summarization). - [Summarization](https://huggingface.co/course/chapter7/5?fw=pt#summarization) chapter of the 🤗 Hugging Face course. - [Summarization task guide](../tasks/summarization) - [`FlaxT5ForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#t5-like-span-masked-language-modeling) for training T5 with a span-masked language model objective. The script also shows how to train a T5 tokenizer. [`FlaxT5ForConditionalGeneration`] is also supported by this [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb). - [`T5ForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb). - [`TFT5ForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/translation) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb). - [Translation task guide](../tasks/translation) - A notebook on how to [finetune T5 for question answering with TensorFlow 2](https://colab.research.google.com/github/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-Datasets%20Training.ipynb). 🌎 - A notebook on how to [finetune T5 for question answering on a TPU](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb#scrollTo=QLGiFCDqvuil). 🚀 **Deploy** - A blog post on how to deploy [T5 11B for inference for less than $500](https://www.philschmid.de/deploy-t5-11b). 
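As noted in the inference section above, T5 uses the pad token as `decoder_start_token_id`, so decoding without [`~generation.GenerationMixin.generate`] must start from it. A minimal greedy-decoding sketch of what that looks like (illustrative only, with no beam search or key/value caching):

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
# start the decoder with the pad token, which T5 uses as decoder_start_token_id
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])

with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids).logits
        next_token = logits[:, -1:].argmax(-1)  # greedy pick of the next token
        decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=-1)
        if next_token.item() == model.config.eos_token_id:
            break

print(tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True))
```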
## T5Config [[autodoc]] T5Config ## T5Tokenizer [[autodoc]] T5Tokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## T5TokenizerFast [[autodoc]] T5TokenizerFast ## T5Model [[autodoc]] T5Model - forward ## T5ForConditionalGeneration [[autodoc]] T5ForConditionalGeneration - forward ## T5EncoderModel [[autodoc]] T5EncoderModel - forward ## T5ForSequenceClassification [[autodoc]] T5ForSequenceClassification - forward ## T5ForQuestionAnswering [[autodoc]] T5ForQuestionAnswering - forward ## TFT5Model [[autodoc]] TFT5Model - call ## TFT5ForConditionalGeneration [[autodoc]] TFT5ForConditionalGeneration - call ## TFT5EncoderModel [[autodoc]] TFT5EncoderModel - call ## FlaxT5Model [[autodoc]] FlaxT5Model - __call__ - encode - decode ## FlaxT5ForConditionalGeneration [[autodoc]] FlaxT5ForConditionalGeneration - __call__ - encode - decode ## FlaxT5EncoderModel [[autodoc]] FlaxT5EncoderModel - __call__ " model_doc/mluke.md," # mLUKE ## Overview The mLUKE model was proposed in [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. It's a multilingual extension of the [LUKE model](https://arxiv.org/abs/2010.01057) trained on the basis of XLM-RoBERTa. It is based on XLM-RoBERTa and adds entity embeddings, which helps improve performance on various downstream tasks involving reasoning about entities such as named entity recognition, extractive question answering, relation classification, cloze-style knowledge completion. The abstract from the paper is the following: *Recent studies have shown that multilingual pretrained language models can be effectively improved with cross-lingual alignment information from Wikipedia entities. However, existing methods only exploit entity information in pretraining and do not explicitly use entities in downstream tasks. In this study, we explore the effectiveness of leveraging entity representations for downstream cross-lingual tasks. We train a multilingual language model with 24 languages with entity representations and show the model consistently outperforms word-based pretrained models in various cross-lingual transfer tasks. We also analyze the model and the key insight is that incorporating entity representations into the input allows us to extract more language-agnostic features. We also evaluate the model with a multilingual cloze prompt task with the mLAMA dataset. We show that entity-based prompt elicits correct factual knowledge more likely than using only word representations.* This model was contributed by [ryo0634](https://huggingface.co/ryo0634). The original code can be found [here](https://github.com/studio-ousia/luke). ## Usage tips One can directly plug in the weights of mLUKE into a LUKE model, like so: thon from transformers import LukeModel model = LukeModel.from_pretrained(""studio-ousia/mluke-base"") Note that mLUKE has its own tokenizer, [`MLukeTokenizer`]. You can initialize it as follows: thon from transformers import MLukeTokenizer tokenizer = MLukeTokenizer.from_pretrained(""studio-ousia/mluke-base"") As mLUKE's architecture is equivalent to that of LUKE, one can refer to [LUKE's documentation page](luke) for all tips, code examples and notebooks. 
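To make the usage tips above concrete, here is a minimal sketch of passing entity spans through the mLUKE tokenizer and model; the example sentence and character offsets are illustrative:

```python
from transformers import MLukeTokenizer, LukeModel

tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base")
model = LukeModel.from_pretrained("studio-ousia/mluke-base")

text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7), (17, 28)]  # character spans of "Beyoncé" and "Los Angeles"

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)

print(outputs.last_hidden_state.shape)         # contextualized word token representations
print(outputs.entity_last_hidden_state.shape)  # contextualized entity representations
```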
## MLukeTokenizer [[autodoc]] MLukeTokenizer - __call__ - save_vocabulary " model_doc/rag.md," # RAG ## Overview Retrieval-augmented generation (""RAG"") models combine the powers of pretrained dense retrieval (DPR) and sequence-to-sequence models. RAG models retrieve documents, pass them to a seq2seq model, then marginalize to generate outputs. The retriever and seq2seq modules are initialized from pretrained models, and fine-tuned jointly, allowing both retrieval and generation to adapt to downstream tasks. It is based on the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela. The abstract from the paper is the following: *Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre-trained models with a differentiable access mechanism to explicit nonparametric memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) — models which combine pre-trained parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages across the whole generated sequence, the other can use different passages per token. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline.* This model was contributed by [ola13](https://huggingface.co/ola13). ## Usage tips Retrieval-augmented generation (""RAG"") models combine the powers of pretrained dense retrieval (DPR) and Seq2Seq models. RAG models retrieve docs, pass them to a seq2seq model, then marginalize to generate outputs. The retriever and seq2seq modules are initialized from pretrained models, and fine-tuned jointly, allowing both retrieval and generation to adapt to downstream tasks. 
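As a concrete illustration of the retrieve-then-generate flow described above, a minimal sketch with the `facebook/rag-token-nq` checkpoint (the dummy retrieval index keeps the download small; swap in the full index for real results):

```python
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
# use_dummy_dataset=True loads a tiny toy index instead of the full Wikipedia index
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)

inputs = tokenizer("who holds the record in 100m freestyle", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```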
## RagConfig [[autodoc]] RagConfig ## RagTokenizer [[autodoc]] RagTokenizer ## Rag specific outputs [[autodoc]] models.rag.modeling_rag.RetrievAugLMMarginOutput [[autodoc]] models.rag.modeling_rag.RetrievAugLMOutput ## RagRetriever [[autodoc]] RagRetriever ## RagModel [[autodoc]] RagModel - forward ## RagSequenceForGeneration [[autodoc]] RagSequenceForGeneration - forward - generate ## RagTokenForGeneration [[autodoc]] RagTokenForGeneration - forward - generate ## TFRagModel [[autodoc]] TFRagModel - call ## TFRagSequenceForGeneration [[autodoc]] TFRagSequenceForGeneration - call - generate ## TFRagTokenForGeneration [[autodoc]] TFRagTokenForGeneration - call - generate " model_doc/bort.md," # BORT This model is in maintenance mode only, we do not accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0. You can do so by running the following command: `pip install -U transformers==4.30.0`. ## Overview The BORT model was proposed in [Optimal Subarchitecture Extraction for BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry. It is an optimal subset of architectural parameters for the BERT, which the authors refer to as ""Bort"". The abstract from the paper is the following: *We extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by applying recent breakthroughs in algorithms for neural architecture search. This optimal subset, which we refer to as ""Bort"", is demonstrably smaller, having an effective (that is, not counting the embedding layer) size of 5.5% the original BERT-large architecture, and 16% of the net size. Bort is also able to be pretrained in 288 GPU hours, which is 1.2% of the time required to pretrain the highest-performing BERT parametric architectural variant, RoBERTa-large (Liu et al., 2019), and about 33% of that of the world-record, in GPU hours, required to train BERT-large on the same hardware. It is also 7.9x faster on a CPU, as well as being better performing than other compressed variants of the architecture, and some of the non-compressed variants: it obtains performance improvements of between 0.3% and 31%, absolute, with respect to BERT-large, on multiple public natural language understanding (NLU) benchmarks.* This model was contributed by [stefan-it](https://huggingface.co/stefan-it). The original code can be found [here](https://github.com/alexa/bort/). ## Usage tips - BORT's model architecture is based on BERT, refer to [BERT's documentation page](bert) for the model's API reference as well as usage examples. - BORT uses the RoBERTa tokenizer instead of the BERT tokenizer, refer to [RoBERTa's documentation page](roberta) for the tokenizer's API reference as well as usage examples. - BORT requires a specific fine-tuning algorithm, called [Agora](https://adewynter.github.io/notes/bort_algorithms_and_applications.html#fine-tuning-with-algebraic-topology) , that is sadly not open-sourced yet. It would be very useful for the community, if someone tries to implement the algorithm to make BORT fine-tuning work. " model_doc/speecht5.md," # SpeechT5 ## Overview The SpeechT5 model was proposed in [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei. 
The abstract from the paper is the following: *Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder. Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder. Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.* This model was contributed by [Matthijs](https://huggingface.co/Matthijs). The original code can be found [here](https://github.com/microsoft/SpeechT5). ## SpeechT5Config [[autodoc]] SpeechT5Config ## SpeechT5HifiGanConfig [[autodoc]] SpeechT5HifiGanConfig ## SpeechT5Tokenizer [[autodoc]] SpeechT5Tokenizer - __call__ - save_vocabulary - decode - batch_decode ## SpeechT5FeatureExtractor [[autodoc]] SpeechT5FeatureExtractor - __call__ ## SpeechT5Processor [[autodoc]] SpeechT5Processor - __call__ - pad - from_pretrained - save_pretrained - batch_decode - decode ## SpeechT5Model [[autodoc]] SpeechT5Model - forward ## SpeechT5ForSpeechToText [[autodoc]] SpeechT5ForSpeechToText - forward ## SpeechT5ForTextToSpeech [[autodoc]] SpeechT5ForTextToSpeech - forward - generate ## SpeechT5ForSpeechToSpeech [[autodoc]] SpeechT5ForSpeechToSpeech - forward - generate_speech ## SpeechT5HifiGan [[autodoc]] SpeechT5HifiGan - forward " model_doc/clipseg.md," # CLIPSeg ## Overview The CLIPSeg model was proposed in [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker. CLIPSeg adds a minimal decoder on top of a frozen [CLIP](clip) model for zero- and one-shot image segmentation. The abstract from the paper is the following: *Image segmentation is usually addressed by training a model for a fixed set of object classes. Incorporating additional classes or more complex queries later is expensive as it requires re-training the model on a dataset that encompasses these expressions. Here we propose a system that can generate image segmentations based on arbitrary prompts at test time. A prompt can be either a text or an image. This approach enables us to create a unified model (trained once) for three common segmentation tasks, which come with distinct challenges: referring expression segmentation, zero-shot segmentation and one-shot segmentation. We build upon the CLIP model as a backbone which we extend with a transformer-based decoder that enables dense prediction. 
After training on an extended version of the PhraseCut dataset, our system generates a binary segmentation map for an image based on a free-text prompt or on an additional image expressing the query. We analyze different variants of the latter image-based prompts in detail. This novel hybrid input allows for dynamic adaptation not only to the three segmentation tasks mentioned above, but to any binary segmentation task where a text or image query can be formulated. Finally, we find our system to adapt well to generalized queries involving affordances or properties* CLIPSeg overview. Taken from the original paper. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/timojl/clipseg). ## Usage tips - [`CLIPSegForImageSegmentation`] adds a decoder on top of [`CLIPSegModel`]. The latter is identical to [`CLIPModel`]. - [`CLIPSegForImageSegmentation`] can generate image segmentations based on arbitrary prompts at test time. A prompt can be either a text (provided to the model as `input_ids`) or an image (provided to the model as `conditional_pixel_values`). One can also provide custom conditional embeddings (provided to the model as `conditional_embeddings`). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIPSeg. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - A notebook that illustrates [zero-shot image segmentation with CLIPSeg](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/CLIPSeg/Zero_shot_image_segmentation_with_CLIPSeg.ipynb). ## CLIPSegConfig [[autodoc]] CLIPSegConfig - from_text_vision_configs ## CLIPSegTextConfig [[autodoc]] CLIPSegTextConfig ## CLIPSegVisionConfig [[autodoc]] CLIPSegVisionConfig ## CLIPSegProcessor [[autodoc]] CLIPSegProcessor ## CLIPSegModel [[autodoc]] CLIPSegModel - forward - get_text_features - get_image_features ## CLIPSegTextModel [[autodoc]] CLIPSegTextModel - forward ## CLIPSegVisionModel [[autodoc]] CLIPSegVisionModel - forward ## CLIPSegForImageSegmentation [[autodoc]] CLIPSegForImageSegmentation - forward" model_doc/mobilenet_v1.md," # MobileNet V1 ## Overview The MobileNet model was proposed in [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. The abstract from the paper is the following: *We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. 
We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.* This model was contributed by [matthijs](https://huggingface.co/Matthijs). The original code and weights can be found [here](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md). ## Usage tips - The checkpoints are named **mobilenet\_v1\_*depth*\_*size***, for example **mobilenet\_v1\_1.0\_224**, where **1.0** is the depth multiplier (sometimes also referred to as ""alpha"" or the width multiplier) and **224** is the resolution of the input images the model was trained on. - Even though the checkpoint is trained on images of specific size, the model will work on images of any size. The smallest supported image size is 32x32. - One can use [`MobileNetV1ImageProcessor`] to prepare images for the model. - The available image classification checkpoints are pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k) (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). However, the model predicts 1001 classes: the 1000 classes from ImageNet plus an extra “background” class (index 0). - The original TensorFlow checkpoints use different padding rules than PyTorch, requiring the model to determine the padding amount at inference time, since this depends on the input image size. To use native PyTorch padding behavior, create a [`MobileNetV1Config`] with `tf_padding = False`. Unsupported features: - The [`MobileNetV1Model`] outputs a globally pooled version of the last hidden state. In the original model it is possible to use a 7x7 average pooling layer with stride 2 instead of global pooling. For larger inputs, this gives a pooled output that is larger than 1x1 pixel. The HuggingFace implementation does not support this. - It is currently not possible to specify an `output_stride`. For smaller output strides, the original model invokes dilated convolution to prevent the spatial resolution from being reduced further. The output stride of the HuggingFace model is always 32. - The original TensorFlow checkpoints include quantized models. We do not support these models as they include additional ""FakeQuantization"" operations to unquantize the weights. - It's common to extract the output from the pointwise layers at indices 5, 11, 12, 13 for downstream purposes. Using `output_hidden_states=True` returns the output from all intermediate layers. There is currently no way to limit this to specific layers. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with MobileNetV1. - [`MobileNetV1ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
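Building on the tips above (image preparation with [`MobileNetV1ImageProcessor`] and the 1001-class output), a minimal classification sketch, assuming the `google/mobilenet_v1_1.0_224` checkpoint:

```python
import torch
import requests
from PIL import Image
from transformers import MobileNetV1ImageProcessor, MobileNetV1ForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = MobileNetV1ImageProcessor.from_pretrained("google/mobilenet_v1_1.0_224")
model = MobileNetV1ForImageClassification.from_pretrained("google/mobilenet_v1_1.0_224")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# remember that index 0 is the extra "background" class mentioned above
print(model.config.id2label[logits.argmax(-1).item()])
```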
## MobileNetV1Config [[autodoc]] MobileNetV1Config ## MobileNetV1FeatureExtractor [[autodoc]] MobileNetV1FeatureExtractor - preprocess ## MobileNetV1ImageProcessor [[autodoc]] MobileNetV1ImageProcessor - preprocess ## MobileNetV1Model [[autodoc]] MobileNetV1Model - forward ## MobileNetV1ForImageClassification [[autodoc]] MobileNetV1ForImageClassification - forward " model_doc/openai-gpt.md," # OpenAI GPT ## Overview OpenAI GPT model was proposed in [Improving Language Understanding by Generative Pre-Training](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. It's a causal (unidirectional) transformer pre-trained using language modeling on a large corpus will long range dependencies, the Toronto Book Corpus. The abstract from the paper is the following: *Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pretraining of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied.* [Write With Transformer](https://transformer.huggingface.co/doc/gpt) is a webapp created and hosted by Hugging Face showcasing the generative capabilities of several models. GPT is one of them. This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/openai/finetune-transformer-lm). ## Usage tips - GPT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - GPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows GPT-2 to generate syntactically coherent text as it can be observed in the *run_generation.py* example script. Note: If you want to reproduce the original tokenization process of the *OpenAI GPT* paper, you will need to install `ftfy` and `SpaCy`: ```bash pip install spacy ftfy==4.4.3 python -m spacy download en If you don't install `ftfy` and `SpaCy`, the [`OpenAIGPTTokenizer`] will default to tokenize using BERT's `BasicTokenizer` followed by Byte-Pair Encoding (which should be fine for most usage, don't worry). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OpenAI GPT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! 
The resource should ideally demonstrate something new instead of duplicating an existing resource. - A blog post on [outperforming OpenAI GPT-3 with SetFit for text-classification](https://www.philschmid.de/getting-started-setfit). - See also: [Text classification task guide](../tasks/sequence_classification) - A blog on how to [Finetune a non-English GPT-2 Model with Hugging Face](https://www.philschmid.de/fine-tune-a-non-english-gpt-2-model-with-huggingface). - A blog on [How to generate text: using different decoding methods for language generation with Transformers](https://huggingface.co/blog/how-to-generate) with GPT-2. - A blog on [Training CodeParrot 🦜 from Scratch](https://huggingface.co/blog/codeparrot), a large GPT-2 model. - A blog on [Faster Text Generation with TensorFlow and XLA](https://huggingface.co/blog/tf-xla-generate) with GPT-2. - A blog on [How to train a Language Model with Megatron-LM](https://huggingface.co/blog/megatron-training) with a GPT-2 model. - A notebook on how to [finetune GPT2 to generate lyrics in the style of your favorite artist](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb). 🌎 - A notebook on how to [finetune GPT2 to generate tweets in the style of your favorite Twitter user](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb). 🌎 - [Causal language modeling](https://huggingface.co/course/en/chapter7/6?fw=pt#training-a-causal-language-model-from-scratch) chapter of the 🤗 Hugging Face Course. - [`OpenAIGPTLMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling), [text generation example script](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-generation/run_generation.py) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFOpenAIGPTLMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_clmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - See also: [Causal language modeling task guide](../tasks/language_modeling) - A course material on [Byte-Pair Encoding tokenization](https://huggingface.co/course/en/chapter6/5). 
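Since GPT was trained with a causal language modeling objective (see the usage tips above), a minimal text-generation sketch with the `openai-gpt` checkpoint:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="openai-gpt")
# continue a prompt; the exact continuation will vary with the generation settings
print(generator("Transfer learning is", max_length=30, num_return_sequences=1))
```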
## OpenAIGPTConfig [[autodoc]] OpenAIGPTConfig ## OpenAIGPTTokenizer [[autodoc]] OpenAIGPTTokenizer - save_vocabulary ## OpenAIGPTTokenizerFast [[autodoc]] OpenAIGPTTokenizerFast ## OpenAI specific outputs [[autodoc]] models.openai.modeling_openai.OpenAIGPTDoubleHeadsModelOutput [[autodoc]] models.openai.modeling_tf_openai.TFOpenAIGPTDoubleHeadsModelOutput ## OpenAIGPTModel [[autodoc]] OpenAIGPTModel - forward ## OpenAIGPTLMHeadModel [[autodoc]] OpenAIGPTLMHeadModel - forward ## OpenAIGPTDoubleHeadsModel [[autodoc]] OpenAIGPTDoubleHeadsModel - forward ## OpenAIGPTForSequenceClassification [[autodoc]] OpenAIGPTForSequenceClassification - forward ## TFOpenAIGPTModel [[autodoc]] TFOpenAIGPTModel - call ## TFOpenAIGPTLMHeadModel [[autodoc]] TFOpenAIGPTLMHeadModel - call ## TFOpenAIGPTDoubleHeadsModel [[autodoc]] TFOpenAIGPTDoubleHeadsModel - call ## TFOpenAIGPTForSequenceClassification [[autodoc]] TFOpenAIGPTForSequenceClassification - call " model_doc/matcha.md," # MatCha ## Overview MatCha has been proposed in the paper [MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering](https://arxiv.org/abs/2212.09662), from Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, Julian Martin Eisenschlos. The abstract of the paper states the following: *Visual language data such as plots, charts, and infographics are ubiquitous in the human world. However, state-of-the-art vision-language models do not perform well on these data. We propose MatCha (Math reasoning and Chart derendering pretraining) to enhance visual language models' capabilities in jointly modeling charts/plots and language data. Specifically, we propose several pretraining tasks that cover plot deconstruction and numerical reasoning which are the key capabilities in visual language modeling. We perform the MatCha pretraining starting from Pix2Struct, a recently proposed image-to-text visual language model. On standard benchmarks such as PlotQA and ChartQA, the MatCha model outperforms state-of-the-art methods by as much as nearly 20%. We also examine how well MatCha pretraining transfers to domains such as screenshots, textbook diagrams, and document figures and observe overall improvement, verifying the usefulness of MatCha pretraining on broader visual language tasks.* ## Model description MatCha is a model that is trained using `Pix2Struct` architecture. You can find more information about `Pix2Struct` in the [Pix2Struct documentation](https://huggingface.co/docs/transformers/main/en/model_doc/pix2struct). MatCha is a Visual Question Answering subset of `Pix2Struct` architecture. It renders the input question on the image and predicts the answer. ## Usage Currently 6 checkpoints are available for MatCha: - `google/matcha`: the base MatCha model, used to fine-tune MatCha on downstream tasks - `google/matcha-chartqa`: MatCha model fine-tuned on ChartQA dataset. It can be used to answer questions about charts. - `google/matcha-plotqa-v1`: MatCha model fine-tuned on PlotQA dataset. It can be used to answer questions about plots. - `google/matcha-plotqa-v2`: MatCha model fine-tuned on PlotQA dataset. It can be used to answer questions about plots. - `google/matcha-chart2text-statista`: MatCha model fine-tuned on Statista dataset. - `google/matcha-chart2text-pew`: MatCha model fine-tuned on Pew dataset. 
The models finetuned on `chart2text-pew` and `chart2text-statista` are more suited for summarization, whereas the models finetuned on `plotqa` and `chartqa` are more suited for question answering. You can use these models as follows (example on a ChartQA dataset):

```python
from transformers import AutoProcessor, Pix2StructForConditionalGeneration
import requests
from PIL import Image

model = Pix2StructForConditionalGeneration.from_pretrained("google/matcha-chartqa").to(0)
processor = AutoProcessor.from_pretrained("google/matcha-chartqa")
url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/20294671002019.png"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, text="Is the sum of all 4 places greater than Laos?", return_tensors="pt").to(0)
predictions = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(predictions[0], skip_special_tokens=True))
```

## Fine-tuning

To fine-tune MatCha, refer to the pix2struct [fine-tuning notebook](https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_pix2struct.ipynb). For `Pix2Struct` models, we have found that fine-tuning the model with Adafactor and a cosine learning rate scheduler leads to faster convergence:

```python
from transformers.optimization import Adafactor, get_cosine_schedule_with_warmup

# `model` is the Pix2StructForConditionalGeneration instance being fine-tuned
optimizer = Adafactor(model.parameters(), scale_parameter=False, relative_step=False, lr=0.01, weight_decay=1e-05)
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=1000, num_training_steps=40000)
```

MatCha is a model that is trained using the `Pix2Struct` architecture. You can find more information about `Pix2Struct` in the [Pix2Struct documentation](https://huggingface.co/docs/transformers/main/en/model_doc/pix2struct). " model_doc/timesformer.md," # TimeSformer ## Overview The TimeSformer model was proposed in [TimeSformer: Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Facebook Research. This work is a milestone in the action-recognition field, being the first video transformer. It inspired many transformer-based video understanding and classification papers. The abstract from the paper is the following: *We present a convolution-free approach to video classification built exclusively on self-attention over space and time. Our method, named "TimeSformer," adapts the standard Transformer architecture to video by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Our experimental study compares different self-attention schemes and suggests that "divided attention," where temporal attention and spatial attention are separately applied within each block, leads to the best video classification accuracy among the design choices considered. Despite the radically new design, TimeSformer achieves state-of-the-art results on several action recognition benchmarks, including the best reported accuracy on Kinetics-400 and Kinetics-600. Finally, compared to 3D convolutional networks, our model is faster to train, it can achieve dramatically higher test efficiency (at a small drop in accuracy), and it can also be applied to much longer video clips (over one minute long). Code and models are available at: [this https URL](https://github.com/facebookresearch/TimeSformer).* This model was contributed by [fcakyon](https://huggingface.co/fcakyon).
The original code can be found [here](https://github.com/facebookresearch/TimeSformer). ## Usage tips There are many pretrained variants. Select your pretrained model based on the dataset it is trained on. Moreover, the number of input frames per clip changes based on the model size so you should consider this parameter while selecting your pretrained model. ## Resources - [Video classification task guide](../tasks/video_classification) ## TimesformerConfig [[autodoc]] TimesformerConfig ## TimesformerModel [[autodoc]] TimesformerModel - forward ## TimesformerForVideoClassification [[autodoc]] TimesformerForVideoClassification - forward" model_doc/efficientformer.md," # EfficientFormer ## Overview The EfficientFormer model was proposed in [EfficientFormer: Vision Transformers at MobileNet Speed](https://arxiv.org/abs/2206.01191) by Yanyu Li, Geng Yuan, Yang Wen, Eric Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren. EfficientFormer proposes a dimension-consistent pure transformer that can be run on mobile devices for dense prediction tasks like image classification, object detection and semantic segmentation. The abstract from the paper is the following: *Vision Transformers (ViT) have shown rapid progress in computer vision tasks, achieving promising results on various benchmarks. However, due to the massive number of parameters and model design, e.g., attention mechanism, ViT-based models are generally times slower than lightweight convolutional networks. Therefore, the deployment of ViT for real-time applications is particularly challenging, especially on resource-constrained hardware such as mobile devices. Recent efforts try to reduce the computation complexity of ViT through network architecture search or hybrid design with MobileNet block, yet the inference speed is still unsatisfactory. This leads to an important question: can transformers run as fast as MobileNet while obtaining high performance? To answer this, we first revisit the network architecture and operators used in ViT-based models and identify inefficient designs. Then we introduce a dimension-consistent pure transformer (without MobileNet blocks) as a design paradigm. Finally, we perform latency-driven slimming to get a series of final models dubbed EfficientFormer. Extensive experiments show the superiority of EfficientFormer in performance and speed on mobile devices. Our fastest model, EfficientFormer-L1, achieves 79.2% top-1 accuracy on ImageNet-1K with only 1.6 ms inference latency on iPhone 12 (compiled with CoreML), which { runs as fast as MobileNetV2×1.4 (1.6 ms, 74.7% top-1),} and our largest model, EfficientFormer-L7, obtains 83.3% accuracy with only 7.0 ms latency. Our work proves that properly designed transformers can reach extremely low latency on mobile devices while maintaining high performance.* This model was contributed by [novice03](https://huggingface.co/novice03) and [Bearnardd](https://huggingface.co/Bearnardd). The original code can be found [here](https://github.com/snap-research/EfficientFormer). The TensorFlow version of this model was added by [D-Roberts](https://huggingface.co/D-Roberts). 
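To complement the overview, a minimal image-classification sketch; the `snap-research/efficientformer-l1-300` checkpoint is assumed here for illustration:

```python
import torch
import requests
from PIL import Image
from transformers import EfficientFormerImageProcessor, EfficientFormerForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = EfficientFormerImageProcessor.from_pretrained("snap-research/efficientformer-l1-300")
model = EfficientFormerForImageClassification.from_pretrained("snap-research/efficientformer-l1-300")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```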
## Documentation resources - [Image classification task guide](../tasks/image_classification) ## EfficientFormerConfig [[autodoc]] EfficientFormerConfig ## EfficientFormerImageProcessor [[autodoc]] EfficientFormerImageProcessor - preprocess ## EfficientFormerModel [[autodoc]] EfficientFormerModel - forward ## EfficientFormerForImageClassification [[autodoc]] EfficientFormerForImageClassification - forward ## EfficientFormerForImageClassificationWithTeacher [[autodoc]] EfficientFormerForImageClassificationWithTeacher - forward ## TFEfficientFormerModel [[autodoc]] TFEfficientFormerModel - call ## TFEfficientFormerForImageClassification [[autodoc]] TFEfficientFormerForImageClassification - call ## TFEfficientFormerForImageClassificationWithTeacher [[autodoc]] TFEfficientFormerForImageClassificationWithTeacher - call " model_doc/layoutlmv3.md," # LayoutLMv3 ## Overview The LayoutLMv3 model was proposed in [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. LayoutLMv3 simplifies [LayoutLMv2](layoutlmv2) by using patch embeddings (as in [ViT](vit)) instead of leveraging a CNN backbone, and pre-trains the model on 3 objectives: masked language modeling (MLM), masked image modeling (MIM) and word-patch alignment (WPA). The abstract from the paper is the following: *Self-supervised pre-training techniques have achieved remarkable progress in Document AI. Most multimodal pre-trained models use a masked language modeling objective to learn bidirectional representations on the text modality, but they differ in pre-training objectives for the image modality. This discrepancy adds difficulty to multimodal representation learning. In this paper, we propose LayoutLMv3 to pre-train multimodal Transformers for Document AI with unified text and image masking. Additionally, LayoutLMv3 is pre-trained with a word-patch alignment objective to learn cross-modal alignment by predicting whether the corresponding image patch of a text word is masked. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model for both text-centric and image-centric Document AI tasks. Experimental results show that LayoutLMv3 achieves state-of-the-art performance not only in text-centric tasks, including form understanding, receipt understanding, and document visual question answering, but also in image-centric tasks such as document image classification and document layout analysis.* LayoutLMv3 architecture. Taken from the original paper. This model was contributed by [nielsr](https://huggingface.co/nielsr). The TensorFlow version of this model was added by [chriskoo](https://huggingface.co/chriskoo), [tokec](https://huggingface.co/tokec), and [lre](https://huggingface.co/lre). The original code can be found [here](https://github.com/microsoft/unilm/tree/master/layoutlmv3). ## Usage tips - In terms of data processing, LayoutLMv3 is identical to its predecessor [LayoutLMv2](layoutlmv2), except that: - images need to be resized and normalized with channels in regular RGB format. LayoutLMv2 on the other hand normalizes the images internally and expects the channels in BGR format. - text is tokenized using byte-pair encoding (BPE), as opposed to WordPiece. 
Due to these differences in data preprocessing, one can use [`LayoutLMv3Processor`] which internally combines a [`LayoutLMv3ImageProcessor`] (for the image modality) and a [`LayoutLMv3Tokenizer`]/[`LayoutLMv3TokenizerFast`] (for the text modality) to prepare all data for the model. - Regarding usage of [`LayoutLMv3Processor`], we refer to the [usage guide](layoutlmv2#usage-layoutlmv2processor) of its predecessor. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LayoutLMv3. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. LayoutLMv3 is nearly identical to LayoutLMv2, so we've also included LayoutLMv2 resources you can adapt for LayoutLMv3 tasks. For these notebooks, take care to use [`LayoutLMv2Processor`] instead when preparing data for the model! - Demo notebooks for LayoutLMv3 can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv3). - Demo scripts can be found [here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/layoutlmv3). - [`LayoutLMv2ForSequenceClassification`] is supported by this [notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/RVL-CDIP/Fine_tuning_LayoutLMv2ForSequenceClassification_on_RVL_CDIP.ipynb). - [Text classification task guide](../tasks/sequence_classification) - [`LayoutLMv3ForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/layoutlmv3) and [notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv3/Fine_tune_LayoutLMv3_on_FUNSD_(HuggingFace_Trainer).ipynb). - A [notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/Inference_with_LayoutLMv2ForTokenClassification.ipynb) for how to perform inference with [`LayoutLMv2ForTokenClassification`] and a [notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/True_inference_with_LayoutLMv2ForTokenClassification_%2B_Gradio_demo.ipynb) for how to perform inference when no labels are available with [`LayoutLMv2ForTokenClassification`]. - A [notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/Fine_tuning_LayoutLMv2ForTokenClassification_on_FUNSD_using_HuggingFace_Trainer.ipynb) for how to finetune [`LayoutLMv2ForTokenClassification`] with the 🤗 Trainer. - [Token classification task guide](../tasks/token_classification) - [`LayoutLMv2ForQuestionAnswering`] is supported by this [notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/DocVQA/Fine_tuning_LayoutLMv2ForQuestionAnswering_on_DocVQA.ipynb). 
- [Question answering task guide](../tasks/question_answering) **Document question answering** - [Document question answering task guide](../tasks/document_question_answering) ## LayoutLMv3Config [[autodoc]] LayoutLMv3Config ## LayoutLMv3FeatureExtractor [[autodoc]] LayoutLMv3FeatureExtractor - __call__ ## LayoutLMv3ImageProcessor [[autodoc]] LayoutLMv3ImageProcessor - preprocess ## LayoutLMv3Tokenizer [[autodoc]] LayoutLMv3Tokenizer - __call__ - save_vocabulary ## LayoutLMv3TokenizerFast [[autodoc]] LayoutLMv3TokenizerFast - __call__ ## LayoutLMv3Processor [[autodoc]] LayoutLMv3Processor - __call__ ## LayoutLMv3Model [[autodoc]] LayoutLMv3Model - forward ## LayoutLMv3ForSequenceClassification [[autodoc]] LayoutLMv3ForSequenceClassification - forward ## LayoutLMv3ForTokenClassification [[autodoc]] LayoutLMv3ForTokenClassification - forward ## LayoutLMv3ForQuestionAnswering [[autodoc]] LayoutLMv3ForQuestionAnswering - forward ## TFLayoutLMv3Model [[autodoc]] TFLayoutLMv3Model - call ## TFLayoutLMv3ForSequenceClassification [[autodoc]] TFLayoutLMv3ForSequenceClassification - call ## TFLayoutLMv3ForTokenClassification [[autodoc]] TFLayoutLMv3ForTokenClassification - call ## TFLayoutLMv3ForQuestionAnswering [[autodoc]] TFLayoutLMv3ForQuestionAnswering - call " model_doc/umt5.md," # UMT5 ## Overview The UMT5 model was proposed in [UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining](https://openreview.net/forum?id=kXwdL1cWOAi) by Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant. The abstract from the paper is the following: *Pretrained multilingual large language models have typically used heuristic temperature-based sampling to balance between different languages. However previous work has not systematically evaluated the efficacy of different pretraining language distributions across model scales. In this paper, we propose a new sampling method, UniMax, that delivers more uniform coverage of head languages while mitigating overfitting on tail languages by explicitly capping the number of repeats over each language's corpus. We perform an extensive series of ablations testing a range of sampling strategies on a suite of multilingual benchmarks, while varying model scale. We find that UniMax outperforms standard temperature-based sampling, and the benefits persist as scale increases. As part of our contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion characters across 107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained with UniMax sampling.* Google has released the following variants: - [google/umt5-small](https://huggingface.co/google/umt5-small) - [google/umt5-base](https://huggingface.co/google/umt5-base) - [google/umt5-xl](https://huggingface.co/google/umt5-xl) - [google/umt5-xxl](https://huggingface.co/google/umt5-xxl). This model was contributed by [agemagician](https://huggingface.co/agemagician) and [stefan-it](https://huggingface.co/stefan-it). The original code can be found [here](https://github.com/google-research/t5x). ## Usage tips - UMT5 was only pre-trained on [mC4](https://huggingface.co/datasets/mc4) excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5 model. 
- Since umT5 was pre-trained in an unsupervised manner, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.

## Differences with mT5

`UmT5` is based on mT5, with a non-shared relative positional bias that is computed for each layer. This means that the model sets `has_relative_bias` for each layer. The conversion script is also different because the model was saved in t5x's latest checkpointing format.

## Sample usage

```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

>>> model = AutoModelForSeq2SeqLM.from_pretrained("google/umt5-small")
>>> tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")

>>> inputs = tokenizer(
...     "A <extra_id_0> walks into a bar and orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>.",
...     return_tensors="pt",
... )
>>> outputs = model.generate(**inputs)
>>> print(tokenizer.batch_decode(outputs))
['<pad><extra_id_0>nyone who drink a alcohol A A. This I']
```

Refer to [T5's documentation page](t5) for more tips, code examples and notebooks.

## UMT5Config
[[autodoc]] UMT5Config

## UMT5Model
[[autodoc]] UMT5Model - forward

## UMT5ForConditionalGeneration
[[autodoc]] UMT5ForConditionalGeneration - forward

## UMT5EncoderModel
[[autodoc]] UMT5EncoderModel - forward

## UMT5ForSequenceClassification
[[autodoc]] UMT5ForSequenceClassification - forward

## UMT5ForQuestionAnswering
[[autodoc]] UMT5ForQuestionAnswering - forward
"
model_doc/megatron-bert.md," # MegatronBERT

## Overview

The MegatronBERT model was proposed in [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.

The abstract from the paper is the following:

*Recent work in language modeling demonstrates that training large transformer models advances the state of the art in Natural Language Processing applications. However, very large models can be quite difficult to train due to memory constraints. In this work, we present our techniques for training very large transformer models and implement a simple, efficient intra-layer model parallel approach that enables training transformer models with billions of parameters. Our approach does not require a new compiler or library changes, is orthogonal and complimentary to pipeline model parallelism, and can be fully implemented with the insertion of a few communication operations in native PyTorch. We illustrate this approach by converging transformer based models up to 8.3 billion parameters using 512 GPUs. We sustain 15.1 PetaFLOPs across the entire application with 76% scaling efficiency when compared to a strong single GPU baseline that sustains 39 TeraFLOPs, which is 30% of peak FLOPs. To demonstrate that large language models can further advance the state of the art (SOTA), we train an 8.3 billion parameter transformer language model similar to GPT-2 and a 3.9 billion parameter model similar to BERT. We show that careful attention to the placement of layer normalization in BERT-like models is critical to achieving increased performance as the model size grows. Using the GPT-2 model we achieve SOTA results on the WikiText103 (10.8 compared to SOTA perplexity of 15.8) and LAMBADA (66.5% compared to SOTA accuracy of 63.2%) datasets. Our BERT model achieves SOTA results on the RACE dataset (90.9% compared to SOTA accuracy of 89.4%).*

This model was contributed by [jdemouth](https://huggingface.co/jdemouth).
The original code can be found [here](https://github.com/NVIDIA/Megatron-LM). That repository contains a multi-GPU and multi-node implementation of the Megatron Language models. In particular, it contains a hybrid model parallel approach using ""tensor parallel"" and ""pipeline parallel"" techniques. ## Usage tips We have provided pretrained [BERT-345M](https://ngc.nvidia.com/catalog/models/nvidia:megatron_bert_345m) checkpoints for use to evaluate or finetuning downstream tasks. To access these checkpoints, first [sign up](https://ngc.nvidia.com/signup) for and setup the NVIDIA GPU Cloud (NGC) Registry CLI. Further documentation for downloading models can be found in the [NGC documentation](https://docs.nvidia.com/dgx/ngc-registry-cli-user-guide/index.html#topic_6_4_1). Alternatively, you can directly download the checkpoints using: BERT-345M-uncased: ```bash wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_uncased/zip -O megatron_bert_345m_v0_1_uncased.zip BERT-345M-cased: ```bash wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_cased/zip -O megatron_bert_345m_v0_1_cased.zip Once you have obtained the checkpoints from NVIDIA GPU Cloud (NGC), you have to convert them to a format that will easily be loaded by Hugging Face Transformers and our port of the BERT code. The following commands allow you to do the conversion. We assume that the folder `models/megatron_bert` contains `megatron_bert_345m_v0_1_{cased, uncased}.zip` and that the commands are run from inside that folder: ```bash python3 $PATH_TO_TRANSFORMERS/models/megatron_bert/convert_megatron_bert_checkpoint.py megatron_bert_345m_v0_1_uncased.zip ```bash python3 $PATH_TO_TRANSFORMERS/models/megatron_bert/convert_megatron_bert_checkpoint.py megatron_bert_345m_v0_1_cased.zip ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## MegatronBertConfig [[autodoc]] MegatronBertConfig ## MegatronBertModel [[autodoc]] MegatronBertModel - forward ## MegatronBertForMaskedLM [[autodoc]] MegatronBertForMaskedLM - forward ## MegatronBertForCausalLM [[autodoc]] MegatronBertForCausalLM - forward ## MegatronBertForNextSentencePrediction [[autodoc]] MegatronBertForNextSentencePrediction - forward ## MegatronBertForPreTraining [[autodoc]] MegatronBertForPreTraining - forward ## MegatronBertForSequenceClassification [[autodoc]] MegatronBertForSequenceClassification - forward ## MegatronBertForMultipleChoice [[autodoc]] MegatronBertForMultipleChoice - forward ## MegatronBertForTokenClassification [[autodoc]] MegatronBertForTokenClassification - forward ## MegatronBertForQuestionAnswering [[autodoc]] MegatronBertForQuestionAnswering - forward " model_doc/ibert.md," # I-BERT ## Overview The I-BERT model was proposed in [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney and Kurt Keutzer. It's a quantized version of RoBERTa running inference up to four times faster. 
The abstract from the paper is the following: *Transformer based models, like BERT and RoBERTa, have achieved state-of-the-art results in many Natural Language Processing tasks. However, their memory footprint, inference latency, and power consumption are prohibitive for efficient inference at the edge, and even at the data center. While quantization can be a viable solution for this, previous work on quantizing Transformer based models use floating-point arithmetic during inference, which cannot efficiently utilize integer-only logical units such as the recent Turing Tensor Cores, or traditional integer-only ARM processors. In this work, we propose I-BERT, a novel quantization scheme for Transformer based models that quantizes the entire inference with integer-only arithmetic. Based on lightweight integer-only approximation methods for nonlinear operations, e.g., GELU, Softmax, and Layer Normalization, I-BERT performs an end-to-end integer-only BERT inference without any floating point calculation. We evaluate our approach on GLUE downstream tasks using RoBERTa-Base/Large. We show that for both cases, I-BERT achieves similar (and slightly higher) accuracy as compared to the full-precision baseline. Furthermore, our preliminary implementation of I-BERT shows a speedup of 2.4 - 4.0x for INT8 inference on a T4 GPU system as compared to FP32 inference. The framework has been developed in PyTorch and has been open-sourced.* This model was contributed by [kssteven](https://huggingface.co/kssteven). The original code can be found [here](https://github.com/kssteven418/I-BERT). ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/masked_language_modeling) ## IBertConfig [[autodoc]] IBertConfig ## IBertModel [[autodoc]] IBertModel - forward ## IBertForMaskedLM [[autodoc]] IBertForMaskedLM - forward ## IBertForSequenceClassification [[autodoc]] IBertForSequenceClassification - forward ## IBertForMultipleChoice [[autodoc]] IBertForMultipleChoice - forward ## IBertForTokenClassification [[autodoc]] IBertForTokenClassification - forward ## IBertForQuestionAnswering [[autodoc]] IBertForQuestionAnswering - forward " model_doc/swinv2.md," # Swin Transformer V2 ## Overview The Swin Transformer V2 model was proposed in [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo. The abstract from the paper is the following: *Large-scale NLP models have been shown to significantly improve the performance on language tasks with no signs of saturation. They also demonstrate amazing few-shot capabilities like that of human beings. This paper aims to explore large-scale models in computer vision. We tackle three major issues in training and application of large vision models, including training instability, resolution gaps between pre-training and fine-tuning, and hunger on labelled data. 
Three main techniques are proposed: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) A log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) A self-supervised pre-training method, SimMIM, to reduce the needs of vast labeled images. Through these techniques, this paper successfully trained a 3 billion-parameter Swin Transformer V2 model, which is the largest dense vision model to date, and makes it capable of training with images of up to 1,536×1,536 resolution. It set new performance records on 4 representative vision tasks, including ImageNet-V2 image classification, COCO object detection, ADE20K semantic segmentation, and Kinetics-400 video action classification. Also note our training is much more efficient than that in Google's billion-level visual models, which consumes 40 times less labelled data and 40 times less training time.* This model was contributed by [nandwalritik](https://huggingface.co/nandwalritik). The original code can be found [here](https://github.com/microsoft/Swin-Transformer). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Swin Transformer v2. - [`Swinv2ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) Besides that: - [`Swinv2ForMaskedImageModeling`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining). If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## Swinv2Config [[autodoc]] Swinv2Config ## Swinv2Model [[autodoc]] Swinv2Model - forward ## Swinv2ForMaskedImageModeling [[autodoc]] Swinv2ForMaskedImageModeling - forward ## Swinv2ForImageClassification [[autodoc]] transformers.Swinv2ForImageClassification - forward " model_doc/vits.md," # VITS ## Overview The VITS model was proposed in [Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech](https://arxiv.org/abs/2106.06103) by Jaehyeon Kim, Jungil Kong, Juhee Son. VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior. A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text. 
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform. The abstract from the paper is the following: *Several recent end-to-end text-to-speech (TTS) models enabling single-stage training and parallel sampling have been proposed, but their sample quality does not match that of two-stage TTS systems. In this work, we present a parallel end-to-end TTS method that generates more natural sounding audio than current two-stage models. Our method adopts variational inference augmented with normalizing flows and an adversarial training process, which improves the expressive power of generative modeling. We also propose a stochastic duration predictor to synthesize speech with diverse rhythms from input text. With the uncertainty modeling over latent variables and the stochastic duration predictor, our method expresses the natural one-to-many relationship in which a text input can be spoken in multiple ways with different pitches and rhythms. A subjective human evaluation (mean opinion score, or MOS) on the LJ Speech, a single speaker dataset, shows that our method outperforms the best publicly available TTS systems and achieves a MOS comparable to ground truth.* This model can also be used with TTS checkpoints from [Massively Multilingual Speech (MMS)](https://arxiv.org/abs/2305.13516) as these checkpoints use the same architecture and a slightly modified tokenizer. This model was contributed by [Matthijs](https://huggingface.co/Matthijs) and [sanchit-gandhi](https://huggingface.co/sanchit-gandhi). The original code can be found [here](https://github.com/jaywalnut310/vits). ## Usage examples Both the VITS and MMS-TTS checkpoints can be used with the same API. Since the flow-based model is non-deterministic, it is good practice to set a seed to ensure reproducibility of the outputs. For languages with a Roman alphabet, such as English or French, the tokenizer can be used directly to pre-process the text inputs. The following code example runs a forward pass using the MMS-TTS English checkpoint: thon import torch from transformers import VitsTokenizer, VitsModel, set_seed tokenizer = VitsTokenizer.from_pretrained(""facebook/mms-tts-eng"") model = VitsModel.from_pretrained(""facebook/mms-tts-eng"") inputs = tokenizer(text=""Hello - my dog is cute"", return_tensors=""pt"") set_seed(555) # make deterministic with torch.no_grad(): outputs = model(**inputs) waveform = outputs.waveform[0] The resulting waveform can be saved as a `.wav` file: thon import scipy scipy.io.wavfile.write(""techno.wav"", rate=model.config.sampling_rate, data=waveform) Or displayed in a Jupyter Notebook / Google Colab: thon from IPython.display import Audio Audio(waveform, rate=model.config.sampling_rate) For certain languages with a non-Roman alphabet, such as Arabic, Mandarin or Hindi, the [`uroman`](https://github.com/isi-nlp/uroman) perl package is required to pre-process the text inputs to the Roman alphabet. 
You can check whether you require the `uroman` package for your language by inspecting the `is_uroman` attribute of the pre-trained `tokenizer`:

```python
from transformers import VitsTokenizer

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
print(tokenizer.is_uroman)
```

If required, you should apply the uroman package to your text inputs **prior** to passing them to the `VitsTokenizer`, since currently the tokenizer does not support performing the pre-processing itself.

To do this, first clone the uroman repository to your local machine and set the bash variable `UROMAN` to the local path:

```bash
git clone https://github.com/isi-nlp/uroman.git
cd uroman
export UROMAN=$(pwd)
```

You can then pre-process the text input using the following code snippet. You can either rely on using the bash variable `UROMAN` to point to the uroman repository, or you can pass the uroman directory as an argument to the `uromanize` function:

```python
import os
import subprocess

import torch
from transformers import VitsTokenizer, VitsModel, set_seed

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-kor")
model = VitsModel.from_pretrained("facebook/mms-tts-kor")

def uromanize(input_string, uroman_path):
    """Convert non-Roman strings to Roman using the `uroman` perl package."""
    script_path = os.path.join(uroman_path, "bin", "uroman.pl")
    command = ["perl", script_path]

    process = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    # Execute the perl command
    stdout, stderr = process.communicate(input=input_string.encode())

    if process.returncode != 0:
        raise ValueError(f"Error {process.returncode}: {stderr.decode()}")

    # Return the output as a string and skip the new-line character at the end
    return stdout.decode()[:-1]

text = "이봐 무슨 일이야"
uromanized_text = uromanize(text, uroman_path=os.environ["UROMAN"])

inputs = tokenizer(text=uromanized_text, return_tensors="pt")

set_seed(555)  # make deterministic
with torch.no_grad():
    outputs = model(inputs["input_ids"])

waveform = outputs.waveform[0]
```

## VitsConfig
[[autodoc]] VitsConfig

## VitsTokenizer
[[autodoc]] VitsTokenizer - __call__ - save_vocabulary

## VitsModel
[[autodoc]] VitsModel - forward
"
model_doc/resnet.md," # ResNet

## Overview

The ResNet model was proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun. Our implementation follows the small changes made by [Nvidia](https://catalog.ngc.nvidia.com/orgs/nvidia/resources/resnet_50_v1_5_for_pytorch): we apply `stride=2` for downsampling in the bottleneck's `3x3` conv and not in the first `1x1`. This is generally known as "ResNet v1.5".

ResNet introduced residual connections, which allow training networks with a previously unseen number of layers (up to 1000). ResNet won the 2015 ILSVRC & COCO competition, an important milestone in deep computer vision.

The abstract from the paper is the following:

*Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.* The figure below illustrates the architecture of ResNet. Taken from the [original paper](https://arxiv.org/abs/1512.03385). This model was contributed by [Francesco](https://huggingface.co/Francesco). The TensorFlow version of this model was added by [amyeroberts](https://huggingface.co/amyeroberts). The original code can be found [here](https://github.com/KaimingHe/deep-residual-networks). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ResNet. - [`ResNetForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## ResNetConfig [[autodoc]] ResNetConfig ## ResNetModel [[autodoc]] ResNetModel - forward ## ResNetForImageClassification [[autodoc]] ResNetForImageClassification - forward ## TFResNetModel [[autodoc]] TFResNetModel - call ## TFResNetForImageClassification [[autodoc]] TFResNetForImageClassification - call ## FlaxResNetModel [[autodoc]] FlaxResNetModel - __call__ ## FlaxResNetForImageClassification [[autodoc]] FlaxResNetForImageClassification - __call__ " model_doc/chinese_clip.md," # Chinese-CLIP ## Overview The Chinese-CLIP model was proposed in [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou. Chinese-CLIP is an implementation of CLIP (Radford et al., 2021) on a large-scale dataset of Chinese image-text pairs. It is capable of performing cross-modal retrieval and also playing as a vision backbone for vision tasks like zero-shot image classification, open-domain object detection, etc. The original Chinese-CLIP code is released [at this link](https://github.com/OFA-Sys/Chinese-CLIP). The abstract from the paper is the following: *The tremendous success of CLIP (Radford et al., 2021) has promoted the research and application of contrastive learning for vision-language pretraining. In this work, we construct a large-scale dataset of image-text pairs in Chinese, where most data are retrieved from publicly available datasets, and we pretrain Chinese CLIP models on the new dataset. 
We develop 5 Chinese CLIP models of multiple sizes, spanning from 77 to 958 million parameters. Furthermore, we propose a two-stage pretraining method, where the model is first trained with the image encoder frozen and then trained with all parameters being optimized, to achieve enhanced model performance. Our comprehensive experiments demonstrate that Chinese CLIP can achieve the state-of-the-art performance on MUGE, Flickr30K-CN, and COCO-CN in the setups of zero-shot learning and finetuning, and it is able to achieve competitive performance in zero-shot image classification based on the evaluation on the ELEVATER benchmark (Li et al., 2022). Our codes, pretrained models, and demos have been released.* The Chinese-CLIP model was contributed by [OFA-Sys](https://huggingface.co/OFA-Sys). ## Usage example The code snippet below shows how to compute image & text features and similarities: thon >>> from PIL import Image >>> import requests >>> from transformers import ChineseCLIPProcessor, ChineseCLIPModel >>> model = ChineseCLIPModel.from_pretrained(""OFA-Sys/chinese-clip-vit-base-patch16"") >>> processor = ChineseCLIPProcessor.from_pretrained(""OFA-Sys/chinese-clip-vit-base-patch16"") >>> url = ""https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"" >>> image = Image.open(requests.get(url, stream=True).raw) >>> # Squirtle, Bulbasaur, Charmander, Pikachu in English >>> texts = [""杰尼龟"", ""妙蛙种子"", ""小火龙"", ""皮卡丘""] >>> # compute image feature >>> inputs = processor(images=image, return_tensors=""pt"") >>> image_features = model.get_image_features(**inputs) >>> image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True) # normalize >>> # compute text features >>> inputs = processor(text=texts, padding=True, return_tensors=""pt"") >>> text_features = model.get_text_features(**inputs) >>> text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True) # normalize >>> # compute image-text similarity scores >>> inputs = processor(text=texts, images=image, return_tensors=""pt"", padding=True) >>> outputs = model(**inputs) >>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score >>> probs = logits_per_image.softmax(dim=1) # probs: [[1.2686e-03, 5.4499e-02, 6.7968e-04, 9.4355e-01]] Currently, following scales of pretrained Chinese-CLIP models are available on 🤗 Hub: - [OFA-Sys/chinese-clip-vit-base-patch16](https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16) - [OFA-Sys/chinese-clip-vit-large-patch14](https://huggingface.co/OFA-Sys/chinese-clip-vit-large-patch14) - [OFA-Sys/chinese-clip-vit-large-patch14-336px](https://huggingface.co/OFA-Sys/chinese-clip-vit-large-patch14-336px) - [OFA-Sys/chinese-clip-vit-huge-patch14](https://huggingface.co/OFA-Sys/chinese-clip-vit-huge-patch14) ## ChineseCLIPConfig [[autodoc]] ChineseCLIPConfig - from_text_vision_configs ## ChineseCLIPTextConfig [[autodoc]] ChineseCLIPTextConfig ## ChineseCLIPVisionConfig [[autodoc]] ChineseCLIPVisionConfig ## ChineseCLIPImageProcessor [[autodoc]] ChineseCLIPImageProcessor - preprocess ## ChineseCLIPFeatureExtractor [[autodoc]] ChineseCLIPFeatureExtractor ## ChineseCLIPProcessor [[autodoc]] ChineseCLIPProcessor ## ChineseCLIPModel [[autodoc]] ChineseCLIPModel - forward - get_text_features - get_image_features ## ChineseCLIPTextModel [[autodoc]] ChineseCLIPTextModel - forward ## ChineseCLIPVisionModel [[autodoc]] ChineseCLIPVisionModel - forward" 
model_doc/mpt.md," # MPT ## Overview The MPT model was proposed by the [MosaicML](https://www.mosaicml.com/) team and released with multiple sizes and finetuned variants. The MPT models is a series of open source and commercially usable LLMs pre-trained on 1T tokens. MPT models are GPT-style decoder-only transformers with several improvements: performance-optimized layer implementations, architecture changes that provide greater training stability, and the elimination of context length limits by replacing positional embeddings with ALiBi. - MPT base: MPT base pre-trained models on next token prediction - MPT instruct: MPT base models fine-tuned on instruction based tasks - MPT storywriter: MPT base models fine-tuned for 2500 steps on 65k-token excerpts of fiction books contained in the books3 corpus, this enables the model to handle very long sequences The original code is available at the [`llm-foundry`](https://github.com/mosaicml/llm-foundry/tree/main) repository. Read more about it [in the release blogpost](https://www.mosaicml.com/blog/mpt-7b) ## Usage tips - Learn more about some techniques behind training of the model [in this section of llm-foundry repository](https://github.com/mosaicml/llm-foundry/blob/main/TUTORIAL.md#faqs) - If you want to use the advanced version of the model (triton kernels, direct flash attention integration), you can still use the original model implementation by adding `trust_remote_code=True` when calling `from_pretrained`. ## Resources - [Fine-tuning Notebook](https://colab.research.google.com/drive/1HCpQkLL7UXW8xJUJJ29X7QAeNJKO0frZ?usp=sharing) on how to fine-tune MPT-7B on a free Google Colab instance to turn the model into a Chatbot. ## MptConfig [[autodoc]] MptConfig - all ## MptModel [[autodoc]] MptModel - forward ## MptForCausalLM [[autodoc]] MptForCausalLM - forward ## MptForSequenceClassification [[autodoc]] MptForSequenceClassification - forward ## MptForTokenClassification [[autodoc]] MptForTokenClassification - forward ## MptForQuestionAnswering [[autodoc]] MptForQuestionAnswering - forward " model_doc/xlm-roberta-xl.md," # XLM-RoBERTa-XL ## Overview The XLM-RoBERTa-XL model was proposed in [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. The abstract from the paper is the following: *Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models dubbed XLM-R XL and XLM-R XXL outperform XLM-R by 1.8% and 2.4% average accuracy on XNLI. Our model also outperforms the RoBERTa-Large model on several English tasks of the GLUE benchmark by 0.3% on average while handling 99 more languages. This suggests pretrained models with larger capacity may obtain both strong performance on high-resource languages while greatly improving low-resource languages. We make our code and models publicly available.* This model was contributed by [Soonhwan-Kwon](https://github.com/Soonhwan-Kwon) and [stefan-it](https://huggingface.co/stefan-it). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/xlmr). ## Usage tips XLM-RoBERTa-XL is a multilingual model trained on 100 different languages. 
Unlike some XLM multilingual models, it does not require `lang` tensors to understand which language is used, and should be able to determine the correct language from the input ids. ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## XLMRobertaXLConfig [[autodoc]] XLMRobertaXLConfig ## XLMRobertaXLModel [[autodoc]] XLMRobertaXLModel - forward ## XLMRobertaXLForCausalLM [[autodoc]] XLMRobertaXLForCausalLM - forward ## XLMRobertaXLForMaskedLM [[autodoc]] XLMRobertaXLForMaskedLM - forward ## XLMRobertaXLForSequenceClassification [[autodoc]] XLMRobertaXLForSequenceClassification - forward ## XLMRobertaXLForMultipleChoice [[autodoc]] XLMRobertaXLForMultipleChoice - forward ## XLMRobertaXLForTokenClassification [[autodoc]] XLMRobertaXLForTokenClassification - forward ## XLMRobertaXLForQuestionAnswering [[autodoc]] XLMRobertaXLForQuestionAnswering - forward " model_doc/van.md," # VAN This model is in maintenance mode only, we don't accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0. You can do so by running the following command: `pip install -U transformers==4.30.0`. ## Overview The VAN model was proposed in [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. This paper introduces a new attention layer based on convolution operations able to capture both local and distant relationships. This is done by combining normal and large kernel convolution layers. The latter uses a dilated convolution to capture distant correlations. The abstract from the paper is the following: *While originally designed for natural language processing tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple, VAN outperforms the state-of-the-art vision transformers and convolutional neural networks with a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc. Code is available at [this https URL](https://github.com/Visual-Attention-Network/VAN-Classification).* Tips: - VAN does not have an embedding layer, thus the `hidden_states` will have a length equal to the number of stages. The figure below illustrates the architecture of a Visual Aattention Layer. Taken from the [original paper](https://arxiv.org/abs/2202.09741). This model was contributed by [Francesco](https://huggingface.co/Francesco). 
The original code can be found [here](https://github.com/Visual-Attention-Network/VAN-Classification). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with VAN. - [`VanForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## VanConfig [[autodoc]] VanConfig ## VanModel [[autodoc]] VanModel - forward ## VanForImageClassification [[autodoc]] VanForImageClassification - forward " model_doc/mask2former.md," # Mask2Former ## Overview The Mask2Former model was proposed in [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar. Mask2Former is a unified framework for panoptic, instance and semantic segmentation and features significant performance and efficiency improvements over [MaskFormer](maskformer). The abstract from the paper is the following: *Image segmentation groups pixels with different semantics, e.g., category or instance membership. Each choice of semantics defines a task. While only the semantics of each task differ, current research focuses on designing specialized architectures for each task. We present Masked-attention Mask Transformer (Mask2Former), a new architecture capable of addressing any image segmentation task (panoptic, instance or semantic). Its key components include masked attention, which extracts localized features by constraining cross-attention within predicted mask regions. In addition to reducing the research effort by at least three times, it outperforms the best specialized architectures by a significant margin on four popular datasets. Most notably, Mask2Former sets a new state-of-the-art for panoptic segmentation (57.8 PQ on COCO), instance segmentation (50.1 AP on COCO) and semantic segmentation (57.7 mIoU on ADE20K).* Mask2Former architecture. Taken from the original paper. This model was contributed by [Shivalika Singh](https://huggingface.co/shivi) and [Alara Dirik](https://huggingface.co/adirik). The original code can be found [here](https://github.com/facebookresearch/Mask2Former). ## Usage tips - Mask2Former uses the same preprocessing and postprocessing steps as [MaskFormer](maskformer). Use [`Mask2FormerImageProcessor`] or [`AutoImageProcessor`] to prepare images and optional targets for the model. - To get the final segmentation, depending on the task, you can call [`~Mask2FormerImageProcessor.post_process_semantic_segmentation`] or [`~Mask2FormerImageProcessor.post_process_instance_segmentation`] or [`~Mask2FormerImageProcessor.post_process_panoptic_segmentation`]. All three tasks can be solved using [`Mask2FormerForUniversalSegmentation`] output, panoptic segmentation accepts an optional `label_ids_to_fuse` argument to fuse instances of the target object/s (e.g. sky) together. 
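The snippet below is a minimal sketch of that workflow for semantic segmentation. The checkpoint name `facebook/mask2former-swin-tiny-ade-semantic` and the sample image URL are assumptions for illustration; any Mask2Former checkpoint fine-tuned for the task of interest can be substituted.

```python
>>> import requests
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

>>> # assumed checkpoint fine-tuned for semantic segmentation on ADE20k
>>> checkpoint = "facebook/mask2former-swin-tiny-ade-semantic"
>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint)
>>> model = Mask2FormerForUniversalSegmentation.from_pretrained(checkpoint)

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # prepare the image and run a forward pass
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # post-process into a (height, width) map of semantic class ids
>>> segmentation = image_processor.post_process_semantic_segmentation(
...     outputs, target_sizes=[image.size[::-1]]
... )[0]
>>> print(segmentation.shape)
```

For instance or panoptic segmentation, swap in the corresponding `post_process_*` method together with a checkpoint fine-tuned for that task.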
## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Mask2Former. - Demo notebooks regarding inference + fine-tuning Mask2Former on custom data can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Mask2Former). If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource. ## Mask2FormerConfig [[autodoc]] Mask2FormerConfig ## MaskFormer specific outputs [[autodoc]] models.mask2former.modeling_mask2former.Mask2FormerModelOutput [[autodoc]] models.mask2former.modeling_mask2former.Mask2FormerForUniversalSegmentationOutput ## Mask2FormerModel [[autodoc]] Mask2FormerModel - forward ## Mask2FormerForUniversalSegmentation [[autodoc]] Mask2FormerForUniversalSegmentation - forward ## Mask2FormerImageProcessor [[autodoc]] Mask2FormerImageProcessor - preprocess - encode_inputs - post_process_semantic_segmentation - post_process_instance_segmentation - post_process_panoptic_segmentation" model_doc/byt5.md," # ByT5 ## Overview The ByT5 model was presented in [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. The abstract from the paper is the following: *Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://github.com/google-research/byt5). ByT5's architecture is based on the T5v1.1 model, refer to [T5v1.1's documentation page](t5v1.1) for the API reference. They only differ in how inputs should be prepared for the model, see the code examples below. Since ByT5 was pre-trained unsupervisedly, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix. 
## Usage example ByT5 works on raw UTF-8 bytes, so it can be used without a tokenizer: thon >>> from transformers import T5ForConditionalGeneration >>> import torch >>> model = T5ForConditionalGeneration.from_pretrained(""google/byt5-small"") >>> num_special_tokens = 3 >>> # Model has 3 special tokens which take up the input ids 0,1,2 of ByT5. >>> # => Need to shift utf-8 character encodings by 3 before passing ids to model. >>> input_ids = torch.tensor([list(""Life is like a box of chocolates."".encode(""utf-8""))]) + num_special_tokens >>> labels = torch.tensor([list(""La vie est comme une boîte de chocolat."".encode(""utf-8""))]) + num_special_tokens >>> loss = model(input_ids, labels=labels).loss >>> loss.item() 2.66 For batched inference and training it is however recommended to make use of the tokenizer: thon >>> from transformers import T5ForConditionalGeneration, AutoTokenizer >>> model = T5ForConditionalGeneration.from_pretrained(""google/byt5-small"") >>> tokenizer = AutoTokenizer.from_pretrained(""google/byt5-small"") >>> model_inputs = tokenizer( [""Life is like a box of chocolates."", ""Today is Monday.""], padding=""longest"", return_tensors=""pt"" ) >>> labels_dict = tokenizer( [""La vie est comme une boîte de chocolat."", ""Aujourd'hui c'est lundi.""], padding=""longest"", return_tensors=""pt"" ) >>> labels = labels_dict.input_ids >>> loss = model(**model_inputs, labels=labels).loss >>> loss.item() 17.9 Similar to [T5](t5), ByT5 was trained on the span-mask denoising task. However, since the model works directly on characters, the pretraining task is a bit different. Let's corrupt some characters of the input sentence `""The dog chases a ball in the park.""` and ask ByT5 to predict them for us. thon >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained(""google/byt5-base"") >>> model = AutoModelForSeq2SeqLM.from_pretrained(""google/byt5-base"") >>> input_ids_prompt = ""The dog chases a ball in the park."" >>> input_ids = tokenizer(input_ids_prompt).input_ids >>> # Note that we cannot add ""{extra_id_}"" to the string directly >>> # as the Byte tokenizer would incorrectly merge the tokens >>> # For ByT5, we need to work directly on the character level >>> # Contrary to T5, ByT5 does not use sentinel tokens for masking, but instead >>> # uses final utf character ids. >>> # UTF-8 is represented by 8 bits and ByT5 has 3 special tokens. >>> # => There are 2**8+2 = 259 input ids and mask tokens count down from index 258. >>> # => mask to ""The dog [258]a ball [257]park."" >>> input_ids = torch.tensor([input_ids[:8] + [258] + input_ids[14:21] + [257] + input_ids[28:]]) >>> input_ids tensor([[ 87, 107, 104, 35, 103, 114, 106, 35, 258, 35, 100, 35, 101, 100, 111, 111, 257, 35, 115, 100, 117, 110, 49, 1]]) >>> # ByT5 produces only one char at a time so we need to produce many more output characters here -> set `max_length=100`. 
>>> output_ids = model.generate(input_ids, max_length=100)[0].tolist() >>> output_ids [0, 258, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 257, 35, 108, 113, 35, 119, 107, 104, 35, 103, 108, 118, 102, 114, 256, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49, 35, 87, 107, 104, 35, 103, 114, 106, 35, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 35, 100, 35, 101, 100, 111, 111, 35, 108, 113, 255, 35, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49] >>> # ^- Note how 258 descends to 257, 256, 255 >>> # Now we need to split on the sentinel tokens, let's write a short loop for this >>> output_ids_list = [] >>> start_token = 0 >>> sentinel_token = 258 >>> while sentinel_token in output_ids: split_idx = output_ids.index(sentinel_token) output_ids_list.append(output_ids[start_token:split_idx]) start_token = split_idx sentinel_token -= 1 >>> output_ids_list.append(output_ids[start_token:]) >>> output_string = tokenizer.batch_decode(output_ids_list) >>> output_string ['', 'is the one who does', ' in the disco', 'in the park. The dog is the one who does a ball in', ' in the park.'] ## ByT5Tokenizer [[autodoc]] ByT5Tokenizer See [`ByT5Tokenizer`] for all details. " model_doc/roformer.md," # RoFormer ## Overview The RoFormer model was proposed in [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. The abstract from the paper is the following: *Position encoding in transformer architecture provides supervision for dependency modeling between elements at different positions in the sequence. We investigate various methods to encode positional information in transformer-based language models and propose a novel implementation named Rotary Position Embedding(RoPE). The proposed RoPE encodes absolute positional information with rotation matrix and naturally incorporates explicit relative position dependency in self-attention formulation. Notably, RoPE comes with valuable properties such as flexibility of being expand to any sequence lengths, decaying inter-token dependency with increasing relative distances, and capability of equipping the linear self-attention with relative position encoding. As a result, the enhanced transformer with rotary position embedding, or RoFormer, achieves superior performance in tasks with long texts. We release the theoretical analysis along with some preliminary experiment results on Chinese data. The undergoing experiment for English benchmark will soon be updated.* This model was contributed by [junnyu](https://huggingface.co/junnyu). The original code can be found [here](https://github.com/ZhuiyiTechnology/roformer). ## Usage tips RoFormer is a BERT-like autoencoding model with rotary position embeddings. Rotary position embeddings have shown improved performance on classification tasks with long texts. 
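As a quick illustration of the point above, the sketch below runs masked-token prediction with a RoFormer checkpoint. The checkpoint name `junnyu/roformer_chinese_base` and the example sentence are assumptions for illustration, and the tokenizer is assumed to rely on the `rjieba` word-segmentation package (`pip install rjieba`).

```python
>>> import torch
>>> from transformers import AutoTokenizer, RoFormerForMaskedLM

>>> # assumed Chinese RoFormer checkpoint; the tokenizer is assumed to need rjieba
>>> tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
>>> model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_base")

>>> inputs = tokenizer("今天[MASK]很好，我想去公园玩。", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # pick the highest-scoring token for the masked position
>>> mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
>>> predicted_id = logits[0, mask_index].argmax(dim=-1)
>>> print(tokenizer.decode(predicted_id))
```

The same checkpoint can also be loaded through `RoFormerModel` when only the encoder hidden states are needed, for example as features for a downstream classifier.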
## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## RoFormerConfig [[autodoc]] RoFormerConfig ## RoFormerTokenizer [[autodoc]] RoFormerTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## RoFormerTokenizerFast [[autodoc]] RoFormerTokenizerFast - build_inputs_with_special_tokens ## RoFormerModel [[autodoc]] RoFormerModel - forward ## RoFormerForCausalLM [[autodoc]] RoFormerForCausalLM - forward ## RoFormerForMaskedLM [[autodoc]] RoFormerForMaskedLM - forward ## RoFormerForSequenceClassification [[autodoc]] RoFormerForSequenceClassification - forward ## RoFormerForMultipleChoice [[autodoc]] RoFormerForMultipleChoice - forward ## RoFormerForTokenClassification [[autodoc]] RoFormerForTokenClassification - forward ## RoFormerForQuestionAnswering [[autodoc]] RoFormerForQuestionAnswering - forward ## TFRoFormerModel [[autodoc]] TFRoFormerModel - call ## TFRoFormerForMaskedLM [[autodoc]] TFRoFormerForMaskedLM - call ## TFRoFormerForCausalLM [[autodoc]] TFRoFormerForCausalLM - call ## TFRoFormerForSequenceClassification [[autodoc]] TFRoFormerForSequenceClassification - call ## TFRoFormerForMultipleChoice [[autodoc]] TFRoFormerForMultipleChoice - call ## TFRoFormerForTokenClassification [[autodoc]] TFRoFormerForTokenClassification - call ## TFRoFormerForQuestionAnswering [[autodoc]] TFRoFormerForQuestionAnswering - call ## FlaxRoFormerModel [[autodoc]] FlaxRoFormerModel - __call__ ## FlaxRoFormerForMaskedLM [[autodoc]] FlaxRoFormerForMaskedLM - __call__ ## FlaxRoFormerForSequenceClassification [[autodoc]] FlaxRoFormerForSequenceClassification - __call__ ## FlaxRoFormerForMultipleChoice [[autodoc]] FlaxRoFormerForMultipleChoice - __call__ ## FlaxRoFormerForTokenClassification [[autodoc]] FlaxRoFormerForTokenClassification - __call__ ## FlaxRoFormerForQuestionAnswering [[autodoc]] FlaxRoFormerForQuestionAnswering - __call__ " model_doc/flan-ul2.md," # FLAN-UL2 ## Overview Flan-UL2 is an encoder decoder model based on the T5 architecture. It uses the same configuration as the [UL2](ul2) model released earlier last year. It was fine tuned using the ""Flan"" prompt tuning and dataset collection. Similar to `Flan-T5`, one can directly use FLAN-UL2 weights without finetuning the model: According to the original blog here are the notable improvements: - The original UL2 model was only trained with receptive field of 512, which made it non-ideal for N-shot prompting where N is large. - The Flan-UL2 checkpoint uses a receptive field of 2048 which makes it more usable for few-shot in-context learning. - The original UL2 model also had mode switch tokens that was rather mandatory to get good performance. However, they were a little cumbersome as this requires often some changes during inference or finetuning. In this update/change, we continue training UL2 20B for an additional 100k steps (with small batch) to forget “mode tokens” before applying Flan instruction tuning. This Flan-UL2 checkpoint does not require mode tokens anymore. 
Google has released the following variants: The original checkpoints can be found [here](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-ul2-checkpoints). ## Running on low resource devices The model is pretty heavy (~40GB in half precision) so if you just want to run the model, make sure you load your model in 8bit, and use `device_map=""auto""` to make sure you don't have any OOM issue! thon >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer >>> model = AutoModelForSeq2SeqLM.from_pretrained(""google/flan-ul2"", load_in_8bit=True, device_map=""auto"") >>> tokenizer = AutoTokenizer.from_pretrained(""google/flan-ul2"") >>> inputs = tokenizer(""A step by step recipe to make bolognese pasta:"", return_tensors=""pt"") >>> outputs = model.generate(**inputs) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['In a large skillet, brown the ground beef and onion over medium heat. Add the garlic'] Refer to [T5's documentation page](t5) for API reference, tips, code examples and notebooks. " model_doc/vilt.md," # ViLT ## Overview The ViLT model was proposed in [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim. ViLT incorporates text embeddings into a Vision Transformer (ViT), allowing it to have a minimal design for Vision-and-Language Pre-training (VLP). The abstract from the paper is the following: *Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision (e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet with competitive or better downstream task performance.* ViLT architecture. Taken from the original paper. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/dandelin/ViLT). ## Usage tips - The quickest way to get started with ViLT is by checking the [example notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/ViLT) (which showcase both inference and fine-tuning on custom data). - ViLT is a model that takes both `pixel_values` and `input_ids` as input. One can use [`ViltProcessor`] to prepare data for the model. This processor wraps a image processor (for the image modality) and a tokenizer (for the language modality) into one. - ViLT is trained with images of various sizes: the authors resize the shorter edge of input images to 384 and limit the longer edge to under 640 while preserving the aspect ratio. To make batching of images possible, the authors use a `pixel_mask` that indicates which pixel values are real and which are padding. 
[`ViltProcessor`] automatically creates this for you. - The design of ViLT is very similar to that of a standard Vision Transformer (ViT). The only difference is that the model includes additional embedding layers for the language modality. - The PyTorch version of this model is only available in torch 1.10 and higher. ## ViltConfig [[autodoc]] ViltConfig ## ViltFeatureExtractor [[autodoc]] ViltFeatureExtractor - __call__ ## ViltImageProcessor [[autodoc]] ViltImageProcessor - preprocess ## ViltProcessor [[autodoc]] ViltProcessor - __call__ ## ViltModel [[autodoc]] ViltModel - forward ## ViltForMaskedLM [[autodoc]] ViltForMaskedLM - forward ## ViltForQuestionAnswering [[autodoc]] ViltForQuestionAnswering - forward ## ViltForImagesAndTextClassification [[autodoc]] ViltForImagesAndTextClassification - forward ## ViltForImageAndTextRetrieval [[autodoc]] ViltForImageAndTextRetrieval - forward ## ViltForTokenClassification [[autodoc]] ViltForTokenClassification - forward " model_doc/rwkv.md," # RWKV ## Overview The RWKV model was proposed in [this repo](https://github.com/BlinkDL/RWKV-LM) It suggests a tweak in the traditional Transformer attention to make it linear. This way, the model can be used as recurrent network: passing inputs for timestamp 0 and timestamp 1 together is the same as passing inputs at timestamp 0, then inputs at timestamp 1 along with the state of timestamp 0 (see example below). This can be more efficient than a regular Transformer and can deal with sentence of any length (even if the model uses a fixed context length for training). This model was contributed by [sgugger](https://huggingface.co/sgugger). The original code can be found [here](https://github.com/BlinkDL/RWKV-LM). ## Usage example import torch from transformers import AutoTokenizer, RwkvConfig, RwkvModel model = RwkvModel.from_pretrained(""sgugger/rwkv-430M-pile"") tokenizer = AutoTokenizer.from_pretrained(""sgugger/rwkv-430M-pile"") inputs = tokenizer(""This is an example."", return_tensors=""pt"") # Feed everything to the model outputs = model(inputs[""input_ids""]) output_whole = outputs.last_hidden_state outputs = model(inputs[""input_ids""][:, :2]) output_one = outputs.last_hidden_state # Using the state computed on the first inputs, we will get the same output outputs = model(inputs[""input_ids""][:, 2:], state=outputs.state) output_two = outputs.last_hidden_state torch.allclose(torch.cat([output_one, output_two], dim=1), output_whole, atol=1e-5) If you want to make sure the model stops generating when `'\n\n'` is detected, we recommend using the following stopping criteria: thon from transformers import StoppingCriteria class RwkvStoppingCriteria(StoppingCriteria): def __init__(self, eos_sequence = [187,187], eos_token_id = 537): self.eos_sequence = eos_sequence self.eos_token_id = eos_token_id def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: last_2_ids = input_ids[:,-2:].tolist() return self.eos_sequence in last_2_ids output = model.generate(inputs[""input_ids""], max_new_tokens=64, stopping_criteria = [RwkvStoppingCriteria()]) ## RwkvConfig [[autodoc]] RwkvConfig ## RwkvModel [[autodoc]] RwkvModel - forward ## RwkvLMHeadModel [[autodoc]] RwkvForCausalLM - forward ## Rwkv attention and the recurrent formulas In a traditional auto-regressive Transformer, attention is written as $$O = \hbox{softmax}(QK^{T} / \sqrt{d}) V$$ with \\(Q\\), \\(K\\) and \\(V\\) are matrices of shape `seq_len x hidden_size` named query, key and value (they are 
actually bigger matrices with a batch dimension and an attention head dimension but we're only interested in the last two, which is where the matrix product is taken, so for the sake of simplicity we only consider those two). The product \\(QK^{T}\\) then has shape `seq_len x seq_len` and we can take the matrix product with \\(V\\) to get the output \\(O\\) of the same shape as the others. Replacing the softmax by its value gives: $$O_{i} = \frac{\sum_{j=1}^{i} e^{Q_{i} K_{j}^{T} / \sqrt{d}} V_{j}}{\sum_{j=1}^{i} e^{Q_{i} K_{j}^{T} / \sqrt{d}}}$$ Note that the entries in \\(QK^{T}\\) corresponding to \\(j > i\\) are masked (the sum stops at \\(j = i\\)) because the attention is not allowed to look at future tokens (only past ones). In comparison, the RWKV attention is given by $$O_{i} = \sigma(R_{i}) \frac{\sum_{j=1}^{i} e^{W_{i-j} + K_{j}} V_{j}}{\sum_{j=1}^{i} e^{W_{i-j} + K_{j}}}$$ where \\(R\\) is a new matrix called receptance by the author, \\(K\\) and \\(V\\) are still the key and value (\\(\sigma\\) here is the sigmoid function). \\(W\\) is a new vector that represents the position of the token and is given by $$W_{0} = u \hbox{ and } W_{k} = (k-1)w \hbox{ for } k \geq 1$$ with \\(u\\) and \\(w\\) learnable parameters called in the code `time_first` and `time_decay` respectively. The numerator and denominator can both be expressed recursively. Naming them \\(N_{i}\\) and \\(D_{i}\\), we have: $$N_{i} = e^{u + K_{i}} V_{i} + \hat{N}_{i} \hbox{ where } \hat{N}_{i} = e^{K_{i-1}} V_{i-1} + e^{w + K_{i-2}} V_{i-2} + \cdots + e^{(i-2)w + K_{1}} V_{1}$$ so \\(\hat{N}_{i}\\) (called `numerator_state` in the code) satisfies $$\hat{N}_{0} = 0 \hbox{ and } \hat{N}_{j+1} = e^{K_{j}} V_{j} + e^{w} \hat{N}_{j}$$ and $$D_{i} = e^{u + K_{i}} + \hat{D}_{i} \hbox{ where } \hat{D}_{i} = e^{K_{i-1}} + e^{w + K_{i-2}} + \cdots + e^{(i-2)w + K_{1}}$$ so \\(\hat{D}_{i}\\) (called `denominator_state` in the code) satisfies $$\hat{D}_{0} = 0 \hbox{ and } \hat{D}_{j+1} = e^{K_{j}} + e^{w} \hat{D}_{j}$$ The actual recurrent formulas used are a tiny bit more complex, as for numerical stability we don't want to compute exponentials of big numbers. Usually the softmax is not computed as is; instead, the exponential of the maximum term is divided out of the numerator and denominator: $$\frac{e^{x_{i}}}{\sum_{j=1}^{n} e^{x_{j}}} = \frac{e^{x_{i} - M}}{\sum_{j=1}^{n} e^{x_{j} - M}}$$ with \\(M\\) the maximum of all \\(x_{j}\\). So here, on top of saving the numerator state (\\(\hat{N}\\)) and the denominator state (\\(\hat{D}\\)), we also keep track of the maximum of all terms encountered in the exponentials. So we actually use $$\tilde{N}_{i} = e^{-M_{i}} \hat{N}_{i} \hbox{ and } \tilde{D}_{i} = e^{-M_{i}} \hat{D}_{i}$$ defined by the following recurrent formulas: $$\tilde{N}_{0} = 0 \hbox{ and } \tilde{N}_{j+1} = e^{K_{j} - q} V_{j} + e^{w + M_{j} - q} \tilde{N}_{j} \hbox{ where } q = \max(K_{j}, w + M_{j})$$ and $$\tilde{D}_{0} = 0 \hbox{ and } \tilde{D}_{j+1} = e^{K_{j} - q} + e^{w + M_{j} - q} \tilde{D}_{j} \hbox{ where } q = \max(K_{j}, w + M_{j})$$ and \\(M_{j+1} = q\\).
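For illustration, the sketch below is a toy NumPy version of this stable recurrence under stated assumptions (single sequence, element-wise per-channel operations, hypothetical function name); it is not the kernel actually used by `RwkvModel`, and its output step is the one spelled out in the next paragraph:

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def rwkv_linear_attention(r, k, v, u, w):
    """Toy sketch of the stable RWKV recurrence. r, k, v: (seq_len, hidden); u, w: (hidden,)."""
    out = np.empty_like(v)
    num = np.zeros_like(u)        # \tilde{N}
    den = np.zeros_like(u)        # \tilde{D}
    m = np.full_like(u, -np.inf)  # running maximum M
    for i in range(v.shape[0]):
        # output at step i: numerator and denominator share a factor e^{-q} that cancels in the ratio
        q = np.maximum(u + k[i], m)
        n_i = np.exp(u + k[i] - q) * v[i] + np.exp(m - q) * num
        d_i = np.exp(u + k[i] - q) + np.exp(m - q) * den
        out[i] = sigmoid(r[i]) * n_i / d_i
        # state update: \tilde{N}_{i+1}, \tilde{D}_{i+1} and M_{i+1}
        q = np.maximum(k[i], w + m)
        num = np.exp(k[i] - q) * v[i] + np.exp(w + m - q) * num
        den = np.exp(k[i] - q) + np.exp(w + m - q) * den
        m = q
    return out
```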
With those, we can then compute $$N_{i} = e^{u + K_{i} - q} V_{i} + e^{M_{i} - q} \tilde{N}_{i} \hbox{ where } q = \max(u + K_{i}, M_{i})$$ and $$D_{i} = e^{u + K_{i} - q} + e^{M_{i} - q} \tilde{D}_{i} \hbox{ where } q = \max(u + K_{i}, M_{i})$$ (both rescaled by the common factor \\(e^{-q}\\), which cancels in the ratio), which finally gives us $$O_{i} = \sigma(R_{i}) \frac{N_{i}}{D_{i}}$$" model_doc/xlsr_wav2vec2.md," # XLSR-Wav2Vec2 ## Overview The XLSR-Wav2Vec2 model was proposed in [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli. The abstract from the paper is the following: *This paper presents XLSR which learns cross-lingual speech representations by pretraining a single model from the raw waveform of speech in multiple languages. We build on wav2vec 2.0 which is trained by solving a contrastive task over masked latent speech representations and jointly learns a quantization of the latents shared across languages. The resulting model is fine-tuned on labeled data and experiments show that cross-lingual pretraining significantly outperforms monolingual pretraining. On the CommonVoice benchmark, XLSR shows a relative phoneme error rate reduction of 72% compared to the best known results. On BABEL, our approach improves word error rate by 16% relative compared to a comparable system. Our approach enables a single multilingual speech recognition model which is competitive to strong individual models. Analysis shows that the latent discrete speech representations are shared across languages with increased sharing for related languages. We hope to catalyze research in low-resource speech understanding by releasing XLSR-53, a large model pretrained in 53 languages.* The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/fairseq/models/wav2vec). ## Usage tips - XLSR-Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. - The XLSR-Wav2Vec2 model was trained using connectionist temporal classification (CTC), so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`]. XLSR-Wav2Vec2's architecture is based on the Wav2Vec2 model, so one can refer to [Wav2Vec2's documentation page](wav2vec2). " model_doc/jukebox.md," # Jukebox ## Overview The Jukebox model was proposed in [Jukebox: A generative model for music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever. It introduces a generative music model which can produce minute-long samples that can be conditioned on an artist, genres and lyrics. The abstract from the paper is the following: *We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio using a multiscale VQ-VAE to compress it to discrete codes, and modeling those using autoregressive Transformers. We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes. We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We are releasing thousands of non cherry-picked samples, along with model weights and code.* As shown on the following figure, Jukebox is made of 3 `priors` which are decoder only models.
They follow the architecture described in [Generating Long Sequences with Sparse Transformers](https://arxiv.org/abs/1904.10509), modified to support longer context length. First, an autoencoder is used to encode the text lyrics. Next, the first (also called `top_prior`) prior attends to the last hidden states extracted from the lyrics encoder. Each prior is linked to the previous one via an `AudioConditioner` module. The `AudioConditioner` upsamples the outputs of the previous prior to raw tokens at a certain audio frames-per-second resolution. Metadata such as *artist, genre and timing* are passed to each prior, in the form of a start token and a positional embedding for the timing data. The hidden states are mapped to the closest codebook vector from the VQVAE in order to convert them to raw audio. ![JukeboxModel](https://gist.githubusercontent.com/ArthurZucker/92c1acaae62ebf1b6a951710bdd8b6af/raw/c9c517bf4eff61393f6c7dec9366ef02bdd059a3/jukebox.svg) This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/openai/jukebox). ## Usage tips - This model only supports inference. This is for a few reasons, mostly because training it requires a very large amount of memory. Feel free to open a PR and add what's missing to have a full integration with the Hugging Face Trainer! - This model is very slow, and takes 8 hours to generate a minute-long audio sample using the 5b top prior on a V100 GPU. In order to automatically handle the device on which the model should execute, use `accelerate`. - Contrary to the paper, the order of the priors goes from `0` to `1` as it felt more intuitive: we sample starting from `0`. - Primed sampling (conditioning the sampling on raw audio) requires more memory than ancestral sampling and should be used with `fp16` set to `True`. ## JukeboxConfig [[autodoc]] JukeboxConfig ## JukeboxPriorConfig [[autodoc]] JukeboxPriorConfig ## JukeboxVQVAEConfig [[autodoc]] JukeboxVQVAEConfig ## JukeboxTokenizer [[autodoc]] JukeboxTokenizer - save_vocabulary ## JukeboxModel [[autodoc]] JukeboxModel - ancestral_sample - primed_sample - continue_sample - upsample - _sample ## JukeboxPrior [[autodoc]] JukeboxPrior - sample - forward ## JukeboxVQVAE [[autodoc]] JukeboxVQVAE - forward - encode - decode " model_doc/oneformer.md," # OneFormer ## Overview The OneFormer model was proposed in [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi. OneFormer is a universal image segmentation framework that can be trained on a single panoptic dataset to perform semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and task-dynamic for inference. The abstract from the paper is the following: *Universal Image Segmentation is not a new concept. Past attempts to unify image segmentation in the last decades include scene parsing, panoptic segmentation, and, more recently, new panoptic architectures.
However, such panoptic architectures do not truly unify image segmentation because they need to be trained individually on the semantic, instance, or panoptic segmentation to achieve the best performance. Ideally, a truly universal framework should be trained only once and achieve SOTA performance across all three image segmentation tasks. To that end, we propose OneFormer, a universal image segmentation framework that unifies segmentation with a multi-task train-once design. We first propose a task-conditioned joint training strategy that enables training on ground truths of each domain (semantic, instance, and panoptic segmentation) within a single multi-task training process. Secondly, we introduce a task token to condition our model on the task at hand, making our model task-dynamic to support multi-task training and inference. Thirdly, we propose using a query-text contrastive loss during training to establish better inter-task and inter-class distinctions. Notably, our single OneFormer model outperforms specialized Mask2Former models across all three segmentation tasks on ADE20k, CityScapes, and COCO, despite the latter being trained on each of the three tasks individually with three times the resources. With new ConvNeXt and DiNAT backbones, we observe even more performance improvement. We believe OneFormer is a significant step towards making image segmentation more universal and accessible.* The figure below illustrates the architecture of OneFormer. Taken from the [original paper](https://arxiv.org/abs/2211.06220). This model was contributed by [Jitesh Jain](https://huggingface.co/praeclarumjj3). The original code can be found [here](https://github.com/SHI-Labs/OneFormer). ## Usage tips - OneFormer requires two inputs during inference: *image* and *task token*. - During training, OneFormer only uses panoptic annotations. - If you want to train the model in a distributed environment across multiple nodes, then one should update the `get_num_masks` function inside in the `OneFormerLoss` class of `modeling_oneformer.py`. When training on multiple nodes, this should be set to the average number of target masks across all nodes, as can be seen in the original implementation [here](https://github.com/SHI-Labs/OneFormer/blob/33ebb56ed34f970a30ae103e786c0cb64c653d9a/oneformer/modeling/criterion.py#L287). - One can use [`OneFormerProcessor`] to prepare input images and task inputs for the model and optional targets for the model. [`OneformerProcessor`] wraps [`OneFormerImageProcessor`] and [`CLIPTokenizer`] into a single instance to both prepare the images and encode the task inputs. - To get the final segmentation, depending on the task, you can call [`~OneFormerProcessor.post_process_semantic_segmentation`] or [`~OneFormerImageProcessor.post_process_instance_segmentation`] or [`~OneFormerImageProcessor.post_process_panoptic_segmentation`]. All three tasks can be solved using [`OneFormerForUniversalSegmentation`] output, panoptic segmentation accepts an optional `label_ids_to_fuse` argument to fuse instances of the target object/s (e.g. sky) together. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OneFormer. - Demo notebooks regarding inference + fine-tuning on custom data can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/OneFormer). If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. 
The resource should ideally demonstrate something new instead of duplicating an existing resource. ## OneFormer specific outputs [[autodoc]] models.oneformer.modeling_oneformer.OneFormerModelOutput [[autodoc]] models.oneformer.modeling_oneformer.OneFormerForUniversalSegmentationOutput ## OneFormerConfig [[autodoc]] OneFormerConfig ## OneFormerImageProcessor [[autodoc]] OneFormerImageProcessor - preprocess - encode_inputs - post_process_semantic_segmentation - post_process_instance_segmentation - post_process_panoptic_segmentation ## OneFormerProcessor [[autodoc]] OneFormerProcessor ## OneFormerModel [[autodoc]] OneFormerModel - forward ## OneFormerForUniversalSegmentation [[autodoc]] OneFormerForUniversalSegmentation - forward " model_doc/mobilevit.md," # MobileViT ## Overview The MobileViT model was proposed in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari. MobileViT introduces a new layer that replaces local processing in convolutions with global processing using transformers. The abstract from the paper is the following: *Light-weight convolutional neural networks (CNNs) are the de-facto for mobile vision tasks. Their spatial inductive biases allow them to learn representations with fewer parameters across different vision tasks. However, these networks are spatially local. To learn global representations, self-attention-based vision trans-formers (ViTs) have been adopted. Unlike CNNs, ViTs are heavy-weight. In this paper, we ask the following question: is it possible to combine the strengths of CNNs and ViTs to build a light-weight and low latency network for mobile vision tasks? Towards this end, we introduce MobileViT, a light-weight and general-purpose vision transformer for mobile devices. MobileViT presents a different perspective for the global processing of information with transformers, i.e., transformers as convolutions. Our results show that MobileViT significantly outperforms CNN- and ViT-based networks across different tasks and datasets. On the ImageNet-1k dataset, MobileViT achieves top-1 accuracy of 78.4% with about 6 million parameters, which is 3.2% and 6.2% more accurate than MobileNetv3 (CNN-based) and DeIT (ViT-based) for a similar number of parameters. On the MS-COCO object detection task, MobileViT is 5.7% more accurate than MobileNetv3 for a similar number of parameters.* This model was contributed by [matthijs](https://huggingface.co/Matthijs). The TensorFlow version of the model was contributed by [sayakpaul](https://huggingface.co/sayakpaul). The original code and weights can be found [here](https://github.com/apple/ml-cvnets). ## Usage tips - MobileViT is more like a CNN than a Transformer model. It does not work on sequence data but on batches of images. Unlike ViT, there are no embeddings. The backbone model outputs a feature map. You can follow [this tutorial](https://keras.io/examples/vision/mobilevit) for a lightweight introduction. - One can use [`MobileViTImageProcessor`] to prepare images for the model. Note that if you do your own preprocessing, the pretrained checkpoints expect images to be in BGR pixel order (not RGB). - The available image classification checkpoints are pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k) (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). 
- The segmentation model uses a [DeepLabV3](https://arxiv.org/abs/1706.05587) head. The available semantic segmentation checkpoints are pre-trained on [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/). - As the name suggests MobileViT was designed to be performant and efficient on mobile phones. The TensorFlow versions of the MobileViT models are fully compatible with [TensorFlow Lite](https://www.tensorflow.org/lite). You can use the following code to convert a MobileViT checkpoint (be it image classification or semantic segmentation) to generate a TensorFlow Lite model: from transformers import TFMobileViTForImageClassification import tensorflow as tf model_ckpt = ""apple/mobilevit-xx-small"" model = TFMobileViTForImageClassification.from_pretrained(model_ckpt) converter = tf.lite.TFLiteConverter.from_keras_model(model) converter.optimizations = [tf.lite.Optimize.DEFAULT] converter.target_spec.supported_ops = [ tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS, ] tflite_model = converter.convert() tflite_filename = model_ckpt.split(""/"")[-1] + "".tflite"" with open(tflite_filename, ""wb"") as f: f.write(tflite_model) The resulting model will be just **about an MB** making it a good fit for mobile applications where resources and network bandwidth can be constrained. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with MobileViT. - [`MobileViTForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) **Semantic segmentation** - [Semantic segmentation task guide](../tasks/semantic_segmentation) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## MobileViTConfig [[autodoc]] MobileViTConfig ## MobileViTFeatureExtractor [[autodoc]] MobileViTFeatureExtractor - __call__ - post_process_semantic_segmentation ## MobileViTImageProcessor [[autodoc]] MobileViTImageProcessor - preprocess - post_process_semantic_segmentation ## MobileViTModel [[autodoc]] MobileViTModel - forward ## MobileViTForImageClassification [[autodoc]] MobileViTForImageClassification - forward ## MobileViTForSemanticSegmentation [[autodoc]] MobileViTForSemanticSegmentation - forward ## TFMobileViTModel [[autodoc]] TFMobileViTModel - call ## TFMobileViTForImageClassification [[autodoc]] TFMobileViTForImageClassification - call ## TFMobileViTForSemanticSegmentation [[autodoc]] TFMobileViTForSemanticSegmentation - call " model_doc/xls_r.md," # XLS-R ## Overview The XLS-R model was proposed in [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. The abstract from the paper is the following: *This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0. 
We train models with up to 2B parameters on nearly half a million hours of publicly available speech audio in 128 languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range of tasks, domains, data regimes and languages, both high and low-resource. On the CoVoST-2 speech translation benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice as well as VoxPopuli, lowering error rates by 14-34% relative on average. XLS-R also sets a new state of the art on VoxLingua107 language identification. Moreover, we show that with sufficient model size, cross-lingual pretraining can outperform English-only pretraining when translating English speech into other languages, a setting which favors monolingual pretraining. We hope XLS-R can help to improve speech processing tasks for many more languages of the world.* Relevant checkpoints can be found under https://huggingface.co/models?other=xls_r. The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/fairseq/models/wav2vec). ## Usage tips - XLS-R is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. - XLS-R model was trained using connectionist temporal classification (CTC) so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`]. XLS-R's architecture is based on the Wav2Vec2 model, refer to [Wav2Vec2's documentation page](wav2vec2) for API reference. " model_doc/retribert.md," # RetriBERT This model is in maintenance mode only, so we won't accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0. You can do so by running the following command: `pip install -U transformers==4.30.0`. ## Overview The RetriBERT model was proposed in the blog post [Explain Anything Like I'm Five: A Model for Open Domain Long Form Question Answering](https://yjernite.github.io/lfqa.html). RetriBERT is a small model that uses either a single or pair of BERT encoders with lower-dimension projection for dense semantic indexing of text. This model was contributed by [yjernite](https://huggingface.co/yjernite). Code to train and use the model can be found [here](https://github.com/huggingface/transformers/tree/main/examples/research-projects/distillation). ## RetriBertConfig [[autodoc]] RetriBertConfig ## RetriBertTokenizer [[autodoc]] RetriBertTokenizer ## RetriBertTokenizerFast [[autodoc]] RetriBertTokenizerFast ## RetriBertModel [[autodoc]] RetriBertModel - forward " model_doc/blip-2.md," # BLIP-2 ## Overview The BLIP-2 model was proposed in [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi. BLIP-2 leverages frozen pre-trained image encoders and large language models (LLMs) by training a lightweight, 12-layer Transformer encoder in between them, achieving state-of-the-art performance on various vision-language tasks. Most notably, BLIP-2 improves upon [Flamingo](https://arxiv.org/abs/2204.14198), an 80 billion parameter model, by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. 
The abstract from the paper is the following: *The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps vision-language pre-training from off-the-shelf frozen pre-trained image encoders and frozen large language models. BLIP-2 bridges the modality gap with a lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen language model. BLIP-2 achieves state-of-the-art performance on various vision-language tasks, despite having significantly fewer trainable parameters than existing methods. For example, our model outperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. We also demonstrate the model's emerging capabilities of zero-shot image-to-text generation that can follow natural language instructions.* BLIP-2 architecture. Taken from the original paper. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/salesforce/LAVIS/tree/5ee63d688ba4cebff63acee04adaef2dee9af207). ## Usage tips - BLIP-2 can be used for conditional text generation given an image and an optional text prompt. At inference time, it's recommended to use the [`generate`] method. - One can use [`Blip2Processor`] to prepare images for the model, and decode the predicted tokens ID's back to text. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BLIP-2. - Demo notebooks for BLIP-2 for image captioning, visual question answering (VQA) and chat-like conversations can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/BLIP-2). If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## Blip2Config [[autodoc]] Blip2Config - from_vision_qformer_text_configs ## Blip2VisionConfig [[autodoc]] Blip2VisionConfig ## Blip2QFormerConfig [[autodoc]] Blip2QFormerConfig ## Blip2Processor [[autodoc]] Blip2Processor ## Blip2VisionModel [[autodoc]] Blip2VisionModel - forward ## Blip2QFormerModel [[autodoc]] Blip2QFormerModel - forward ## Blip2Model [[autodoc]] Blip2Model - forward - get_text_features - get_image_features - get_qformer_features ## Blip2ForConditionalGeneration [[autodoc]] Blip2ForConditionalGeneration - forward - generate" model_doc/tapas.md," # TAPAS ## Overview The TAPAS model was proposed in [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://www.aclweb.org/anthology/2020.acl-main.398) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos. It's a BERT-based model specifically designed (and pre-trained) for answering questions about tabular data. Compared to BERT, TAPAS uses relative position embeddings and has 7 token types that encode tabular structure. TAPAS is pre-trained on the masked language modeling (MLM) objective on a large dataset comprising millions of tables from English Wikipedia and corresponding texts. 
For question answering, TAPAS has 2 heads on top: a cell selection head and an aggregation head, for (optionally) performing aggregations (such as counting or summing) among selected cells. TAPAS has been fine-tuned on several datasets: - [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253) (Sequential Question Answering by Microsoft) - [WTQ](https://github.com/ppasupat/WikiTableQuestions) (Wiki Table Questions by Stanford University) - [WikiSQL](https://github.com/salesforce/WikiSQL) (by Salesforce). It achieves state-of-the-art on both SQA and WTQ, while having comparable performance to SOTA on WikiSQL, with a much simpler architecture. The abstract from the paper is the following: *Answering natural language questions over tables is usually seen as a semantic parsing task. To alleviate the collection cost of full logical forms, one popular approach focuses on weak supervision consisting of denotations instead of logical forms. However, training semantic parsers from weak supervision poses difficulties, and in addition, the generated logical forms are only used as an intermediate step prior to retrieving the denotation. In this paper, we present TAPAS, an approach to question answering over tables without generating logical forms. TAPAS trains from weak supervision, and predicts the denotation by selecting table cells and optionally applying a corresponding aggregation operator to such selection. TAPAS extends BERT's architecture to encode tables as input, initializes from an effective joint pre-training of text segments and tables crawled from Wikipedia, and is trained end-to-end. We experiment with three different semantic parsing datasets, and find that TAPAS outperforms or rivals semantic parsing models by improving state-of-the-art accuracy on SQA from 55.1 to 67.2 and performing on par with the state-of-the-art on WIKISQL and WIKITQ, but with a simpler model architecture. We additionally find that transfer learning, which is trivial in our setting, from WIKISQL to WIKITQ, yields 48.7 accuracy, 4.2 points above the state-of-the-art.* In addition, the authors have further pre-trained TAPAS to recognize **table entailment**, by creating a balanced dataset of millions of automatically created training examples which are learned in an intermediate step prior to fine-tuning. The authors of TAPAS call this further pre-training intermediate pre-training (since TAPAS is first pre-trained on MLM, and then on another dataset). They found that intermediate pre-training further improves performance on SQA, achieving a new state-of-the-art as well as state-of-the-art on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking), a large-scale dataset with 16k Wikipedia tables for table entailment (a binary classification task). For more details, see their follow-up paper: [Understanding tables with intermediate pre-training](https://www.aclweb.org/anthology/2020.findings-emnlp.27/) by Julian Martin Eisenschlos, Syrine Krichene and Thomas Müller. TAPAS architecture. Taken from the original blog post. This model was contributed by [nielsr](https://huggingface.co/nielsr). The Tensorflow version of this model was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/google-research/tapas). ## Usage tips - TAPAS is a model that uses relative position embeddings by default (restarting the position embeddings at every cell of the table). 
Note that this is something that was added after the publication of the original TAPAS paper. According to the authors, this usually results in a slightly better performance, and allows you to encode longer sequences without running out of embeddings. This is reflected in the `reset_position_index_per_cell` parameter of [`TapasConfig`], which is set to `True` by default. The default versions of the models available on the [hub](https://huggingface.co/models?search=tapas) all use relative position embeddings. You can still use the ones with absolute position embeddings by passing in an additional argument `revision=""no_reset""` when calling the `from_pretrained()` method. Note that it's usually advised to pad the inputs on the right rather than the left. - TAPAS is based on BERT, so `TAPAS-base` for example corresponds to a `BERT-base` architecture. Of course, `TAPAS-large` will result in the best performance (the results reported in the paper are from `TAPAS-large`). Results of the various sized models are shown on the [original GitHub repository](https://github.com/google-research/tapas). - TAPAS has checkpoints fine-tuned on SQA, which are capable of answering questions related to a table in a conversational set-up. This means that you can ask follow-up questions such as ""what is his age?"" related to the previous question. Note that the forward pass of TAPAS is a bit different in case of a conversational set-up: in that case, you have to feed every table-question pair one by one to the model, such that the `prev_labels` token type ids can be overwritten by the predicted `labels` of the model to the previous question. See ""Usage"" section for more info. - TAPAS is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained with a causal language modeling (CLM) objective are better in that regard. Note that TAPAS can be used as an encoder in the EncoderDecoderModel framework, to combine it with an autoregressive text decoder such as GPT-2. ## Usage: fine-tuning Here we explain how you can fine-tune [`TapasForQuestionAnswering`] on your own dataset. **STEP 1: Choose one of the 3 ways in which you can use TAPAS - or experiment** Basically, there are 3 different ways in which one can fine-tune [`TapasForQuestionAnswering`], corresponding to the different datasets on which Tapas was fine-tuned: 1. SQA: if you're interested in asking follow-up questions related to a table, in a conversational set-up. For example if you first ask ""what's the name of the first actor?"" then you can ask a follow-up question such as ""how old is he?"". Here, questions do not involve any aggregation (all questions are cell selection questions). 2. WTQ: if you're not interested in asking questions in a conversational set-up, but rather just asking questions related to a table, which might involve aggregation, such as counting a number of rows, summing up cell values or averaging cell values. You can then for example ask ""what's the total number of goals Cristiano Ronaldo made in his career?"". This case is also called **weak supervision**, since the model itself must learn the appropriate aggregation operator (SUM/COUNT/AVERAGE/NONE) given only the answer to the question as supervision. 3. WikiSQL-supervised: this dataset is based on WikiSQL with the model being given the ground truth aggregation operator during training. 
This is also called **strong supervision**. Here, learning the appropriate aggregation operator is much easier. To summarize: | **Task** | **Example dataset** | **Description** | |-------------------------------------|---------------------|---------------------------------------------------------------------------------------------------------| | Conversational | SQA | Conversational, only cell selection questions | | Weak supervision for aggregation | WTQ | Questions might involve aggregation, and the model must learn this given only the answer as supervision | | Strong supervision for aggregation | WikiSQL-supervised | Questions might involve aggregation, and the model must learn this given the gold aggregation operator | Initializing a model with a pre-trained base and randomly initialized classification heads from the hub can be done as shown below. >>> from transformers import TapasConfig, TapasForQuestionAnswering >>> # for example, the base sized model with default SQA configuration >>> model = TapasForQuestionAnswering.from_pretrained(""google/tapas-base"") >>> # or, the base sized model with WTQ configuration >>> config = TapasConfig.from_pretrained(""google/tapas-base-finetuned-wtq"") >>> model = TapasForQuestionAnswering.from_pretrained(""google/tapas-base"", config=config) >>> # or, the base sized model with WikiSQL configuration >>> config = TapasConfig(""google-base-finetuned-wikisql-supervised"") >>> model = TapasForQuestionAnswering.from_pretrained(""google/tapas-base"", config=config) Of course, you don't necessarily have to follow one of these three ways in which TAPAS was fine-tuned. You can also experiment by defining any hyperparameters you want when initializing [`TapasConfig`], and then create a [`TapasForQuestionAnswering`] based on that configuration. For example, if you have a dataset that has both conversational questions and questions that might involve aggregation, then you can do it this way. Here's an example: >>> from transformers import TapasConfig, TapasForQuestionAnswering >>> # you can initialize the classification heads any way you want (see docs of TapasConfig) >>> config = TapasConfig(num_aggregation_labels=3, average_logits_per_cell=True) >>> # initializing the pre-trained base sized model with our custom classification heads >>> model = TapasForQuestionAnswering.from_pretrained(""google/tapas-base"", config=config) Initializing a model with a pre-trained base and randomly initialized classification heads from the hub can be done as shown below. Be sure to have installed the [tensorflow_probability](https://github.com/tensorflow/probability) dependency: >>> from transformers import TapasConfig, TFTapasForQuestionAnswering >>> # for example, the base sized model with default SQA configuration >>> model = TFTapasForQuestionAnswering.from_pretrained(""google/tapas-base"") >>> # or, the base sized model with WTQ configuration >>> config = TapasConfig.from_pretrained(""google/tapas-base-finetuned-wtq"") >>> model = TFTapasForQuestionAnswering.from_pretrained(""google/tapas-base"", config=config) >>> # or, the base sized model with WikiSQL configuration >>> config = TapasConfig(""google-base-finetuned-wikisql-supervised"") >>> model = TFTapasForQuestionAnswering.from_pretrained(""google/tapas-base"", config=config) Of course, you don't necessarily have to follow one of these three ways in which TAPAS was fine-tuned. 
You can also experiment by defining any hyperparameters you want when initializing [`TapasConfig`], and then create a [`TFTapasForQuestionAnswering`] based on that configuration. For example, if you have a dataset that has both conversational questions and questions that might involve aggregation, then you can do it this way. Here's an example: >>> from transformers import TapasConfig, TFTapasForQuestionAnswering >>> # you can initialize the classification heads any way you want (see docs of TapasConfig) >>> config = TapasConfig(num_aggregation_labels=3, average_logits_per_cell=True) >>> # initializing the pre-trained base sized model with our custom classification heads >>> model = TFTapasForQuestionAnswering.from_pretrained(""google/tapas-base"", config=config) What you can also do is start from an already fine-tuned checkpoint. A note here is that the already fine-tuned checkpoint on WTQ has some issues due to the L2-loss which is somewhat brittle. See [here](https://github.com/google-research/tapas/issues/91#issuecomment-735719340) for more info. For a list of all pre-trained and fine-tuned TAPAS checkpoints available on HuggingFace's hub, see [here](https://huggingface.co/models?search=tapas). **STEP 2: Prepare your data in the SQA format** Second, no matter what you picked above, you should prepare your dataset in the [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253) format. This format is a TSV/CSV file with the following columns: - `id`: optional, id of the table-question pair, for bookkeeping purposes. - `annotator`: optional, id of the person who annotated the table-question pair, for bookkeeping purposes. - `position`: integer indicating if the question is the first, second, third, related to the table. Only required in case of conversational setup (SQA). You don't need this column in case you're going for WTQ/WikiSQL-supervised. - `question`: string - `table_file`: string, name of a csv file containing the tabular data - `answer_coordinates`: list of one or more tuples (each tuple being a cell coordinate, i.e. row, column pair that is part of the answer) - `answer_text`: list of one or more strings (each string being a cell value that is part of the answer) - `aggregation_label`: index of the aggregation operator. Only required in case of strong supervision for aggregation (the WikiSQL-supervised case) - `float_answer`: the float answer to the question, if there is one (np.nan if there isn't). Only required in case of weak supervision for aggregation (such as WTQ and WikiSQL) The tables themselves should be present in a folder, each table being a separate csv file. Note that the authors of the TAPAS algorithm used conversion scripts with some automated logic to convert the other datasets (WTQ, WikiSQL) into the SQA format. The author explains this [here](https://github.com/google-research/tapas/issues/50#issuecomment-705465960). A conversion of this script that works with HuggingFace's implementation can be found [here](https://github.com/NielsRogge/tapas_utils). Interestingly, these conversion scripts are not perfect (the `answer_coordinates` and `float_answer` fields are populated based on the `answer_text`), meaning that WTQ and WikiSQL results could actually be improved. 
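To make the expected layout concrete, here is a rough, purely illustrative sketch (hypothetical file names and toy values; weak supervision case, so `float_answer` is filled in and no `aggregation_label` column is needed) of how such a TSV and its accompanying table file could be written with pandas:

```python
import pandas as pd

# one row per table-question pair, following the SQA-style columns described above
rows = [
    {
        "id": "example-0",
        "annotator": 0,
        "position": 0,  # only needed for the conversational (SQA) setup
        "question": "How many movies has George Clooney played in?",
        "table_file": "actors.csv",  # lives in the folder containing all table csv files
        "answer_coordinates": [(2, 1)],
        "answer_text": ["69"],
        "float_answer": 69.0,  # NaN if the answer is not a float
    },
]
pd.DataFrame(rows).to_csv("train.tsv", sep="\t", index=False)

# the referenced table is stored as its own csv file
table = pd.DataFrame(
    {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
)
table.to_csv("actors.csv", index=False)
```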
**STEP 3: Convert your data into tensors using TapasTokenizer** Third, given that you've prepared your data in this TSV/CSV format (and corresponding CSV files containing the tabular data), you can then use [`TapasTokenizer`] to convert table-question pairs into `input_ids`, `attention_mask`, `token_type_ids` and so on. Again, based on which of the three cases you picked above, [`TapasForQuestionAnswering`] requires different inputs to be fine-tuned: | **Task** | **Required inputs** | |------------------------------------|---------------------------------------------------------------------------------------------------------------------| | Conversational | `input_ids`, `attention_mask`, `token_type_ids`, `labels` | | Weak supervision for aggregation | `input_ids`, `attention_mask`, `token_type_ids`, `labels`, `numeric_values`, `numeric_values_scale`, `float_answer` | | Strong supervision for aggregation | `input ids`, `attention mask`, `token type ids`, `labels`, `aggregation_labels` | [`TapasTokenizer`] creates the `labels`, `numeric_values` and `numeric_values_scale` based on the `answer_coordinates` and `answer_text` columns of the TSV file. The `float_answer` and `aggregation_labels` are already in the TSV file of step 2. Here's an example: >>> from transformers import TapasTokenizer >>> import pandas as pd >>> model_name = ""google/tapas-base"" >>> tokenizer = TapasTokenizer.from_pretrained(model_name) >>> data = {""Actors"": [""Brad Pitt"", ""Leonardo Di Caprio"", ""George Clooney""], ""Number of movies"": [""87"", ""53"", ""69""]} >>> queries = [ ""What is the name of the first actor?"", ""How many movies has George Clooney played in?"", ""What is the total number of movies?"", ] >>> answer_coordinates = [[(0, 0)], [(2, 1)], [(0, 1), (1, 1), (2, 1)]] >>> answer_text = [[""Brad Pitt""], [""69""], [""209""]] >>> table = pd.DataFrame.from_dict(data) >>> inputs = tokenizer( table=table, queries=queries, answer_coordinates=answer_coordinates, answer_text=answer_text, padding=""max_length"", return_tensors=""pt"", ) >>> inputs {'input_ids': tensor([[ ]]), 'attention_mask': tensor([[]]), 'token_type_ids': tensor([[[]]]), 'numeric_values': tensor([[ ]]), 'numeric_values_scale: tensor([[ ]]), labels: tensor([[ ]])} Note that [`TapasTokenizer`] expects the data of the table to be **text-only**. You can use `.astype(str)` on a dataframe to turn it into text-only data. Of course, this only shows how to encode a single training example. 
It is advised to create a dataloader to iterate over batches: >>> import torch >>> import pandas as pd >>> tsv_path = ""your_path_to_the_tsv_file"" >>> table_csv_path = ""your_path_to_a_directory_containing_all_csv_files"" >>> class TableDataset(torch.utils.data.Dataset): def __init__(self, data, tokenizer): self.data = data self.tokenizer = tokenizer def __getitem__(self, idx): item = data.iloc[idx] table = pd.read_csv(table_csv_path + item.table_file).astype( str ) # be sure to make your table data text only encoding = self.tokenizer( table=table, queries=item.question, answer_coordinates=item.answer_coordinates, answer_text=item.answer_text, truncation=True, padding=""max_length"", return_tensors=""pt"", ) # remove the batch dimension which the tokenizer adds by default encoding = {key: val.squeeze(0) for key, val in encoding.items()} # add the float_answer which is also required (weak supervision for aggregation case) encoding[""float_answer""] = torch.tensor(item.float_answer) return encoding def __len__(self): return len(self.data) >>> data = pd.read_csv(tsv_path, sep=""\t"") >>> train_dataset = TableDataset(data, tokenizer) >>> train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=32) Third, given that you've prepared your data in this TSV/CSV format (and corresponding CSV files containing the tabular data), you can then use [`TapasTokenizer`] to convert table-question pairs into `input_ids`, `attention_mask`, `token_type_ids` and so on. Again, based on which of the three cases you picked above, [`TFTapasForQuestionAnswering`] requires different inputs to be fine-tuned: | **Task** | **Required inputs** | |------------------------------------|---------------------------------------------------------------------------------------------------------------------| | Conversational | `input_ids`, `attention_mask`, `token_type_ids`, `labels` | | Weak supervision for aggregation | `input_ids`, `attention_mask`, `token_type_ids`, `labels`, `numeric_values`, `numeric_values_scale`, `float_answer` | | Strong supervision for aggregation | `input ids`, `attention mask`, `token type ids`, `labels`, `aggregation_labels` | [`TapasTokenizer`] creates the `labels`, `numeric_values` and `numeric_values_scale` based on the `answer_coordinates` and `answer_text` columns of the TSV file. The `float_answer` and `aggregation_labels` are already in the TSV file of step 2. Here's an example: >>> from transformers import TapasTokenizer >>> import pandas as pd >>> model_name = ""google/tapas-base"" >>> tokenizer = TapasTokenizer.from_pretrained(model_name) >>> data = {""Actors"": [""Brad Pitt"", ""Leonardo Di Caprio"", ""George Clooney""], ""Number of movies"": [""87"", ""53"", ""69""]} >>> queries = [ ""What is the name of the first actor?"", ""How many movies has George Clooney played in?"", ""What is the total number of movies?"", ] >>> answer_coordinates = [[(0, 0)], [(2, 1)], [(0, 1), (1, 1), (2, 1)]] >>> answer_text = [[""Brad Pitt""], [""69""], [""209""]] >>> table = pd.DataFrame.from_dict(data) >>> inputs = tokenizer( table=table, queries=queries, answer_coordinates=answer_coordinates, answer_text=answer_text, padding=""max_length"", return_tensors=""tf"", ) >>> inputs {'input_ids': tensor([[ ]]), 'attention_mask': tensor([[]]), 'token_type_ids': tensor([[[]]]), 'numeric_values': tensor([[ ]]), 'numeric_values_scale: tensor([[ ]]), labels: tensor([[ ]])} Note that [`TapasTokenizer`] expects the data of the table to be **text-only**. 
You can use `.astype(str)` on a dataframe to turn it into text-only data. Of course, this only shows how to encode a single training example. It is advised to create a dataloader to iterate over batches: >>> import tensorflow as tf >>> import pandas as pd >>> tsv_path = ""your_path_to_the_tsv_file"" >>> table_csv_path = ""your_path_to_a_directory_containing_all_csv_files"" >>> class TableDataset: def __init__(self, data, tokenizer): self.data = data self.tokenizer = tokenizer def __iter__(self): for idx in range(self.__len__()): item = self.data.iloc[idx] table = pd.read_csv(table_csv_path + item.table_file).astype( str ) # be sure to make your table data text only encoding = self.tokenizer( table=table, queries=item.question, answer_coordinates=item.answer_coordinates, answer_text=item.answer_text, truncation=True, padding=""max_length"", return_tensors=""tf"", ) # remove the batch dimension which the tokenizer adds by default encoding = {key: tf.squeeze(val, 0) for key, val in encoding.items()} # add the float_answer which is also required (weak supervision for aggregation case) encoding[""float_answer""] = tf.convert_to_tensor(item.float_answer, dtype=tf.float32) yield encoding[""input_ids""], encoding[""attention_mask""], encoding[""numeric_values""], encoding[ ""numeric_values_scale"" ], encoding[""token_type_ids""], encoding[""labels""], encoding[""float_answer""] def __len__(self): return len(self.data) >>> data = pd.read_csv(tsv_path, sep=""\t"") >>> train_dataset = TableDataset(data, tokenizer) >>> output_signature = ( tf.TensorSpec(shape=(512,), dtype=tf.int32), tf.TensorSpec(shape=(512,), dtype=tf.int32), tf.TensorSpec(shape=(512,), dtype=tf.float32), tf.TensorSpec(shape=(512,), dtype=tf.float32), tf.TensorSpec(shape=(512, 7), dtype=tf.int32), tf.TensorSpec(shape=(512,), dtype=tf.int32), tf.TensorSpec(shape=(512,), dtype=tf.float32), ) >>> train_dataloader = tf.data.Dataset.from_generator(train_dataset, output_signature=output_signature).batch(32) Note that here, we encode each table-question pair independently. This is fine as long as your dataset is **not conversational**. In case your dataset involves conversational questions (such as in SQA), then you should first group together the `queries`, `answer_coordinates` and `answer_text` per table (in the order of their `position` index) and batch encode each table with its questions. This will make sure that the `prev_labels` token types (see docs of [`TapasTokenizer`]) are set correctly. See [this notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) for more info. See [this notebook](https://github.com/kamalkraj/Tapas-Tutorial/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) for more info regarding using the TensorFlow model. 
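As a rough sketch of that grouping step (PyTorch tensors; the placeholder paths are the same hypothetical ones used above, and the `ast.literal_eval` parsing is an assumption about how the list-valued TSV columns were serialized):

```python
import ast

import pandas as pd
from transformers import TapasTokenizer

tokenizer = TapasTokenizer.from_pretrained("google/tapas-base")
tsv_path = "your_path_to_the_tsv_file"
table_csv_path = "your_path_to_a_directory_containing_all_csv_files"

data = pd.read_csv(tsv_path, sep="\t")
# list-valued columns may need to be parsed back from strings, depending on how the TSV was written
data["answer_coordinates"] = data["answer_coordinates"].apply(ast.literal_eval)
data["answer_text"] = data["answer_text"].apply(ast.literal_eval)

encodings = []
for table_file, group in data.groupby("table_file", sort=False):
    group = group.sort_values("position")  # keep the conversational order of the questions
    table = pd.read_csv(table_csv_path + table_file).astype(str)  # table data must be text only
    # encode the whole conversation for this table in one call, so prev_labels are set correctly
    encodings.append(
        tokenizer(
            table=table,
            queries=group["question"].tolist(),
            answer_coordinates=group["answer_coordinates"].tolist(),
            answer_text=group["answer_text"].tolist(),
            truncation=True,
            padding="max_length",
            return_tensors="pt",
        )
    )
```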
**STEP 4: Train (fine-tune) the model You can then fine-tune [`TapasForQuestionAnswering`] as follows (shown here for the weak supervision for aggregation case): >>> from transformers import TapasConfig, TapasForQuestionAnswering, AdamW >>> # this is the default WTQ configuration >>> config = TapasConfig( num_aggregation_labels=4, use_answer_as_supervision=True, answer_loss_cutoff=0.664694, cell_selection_preference=0.207951, huber_loss_delta=0.121194, init_cell_selection_weights_to_zero=True, select_one_column=True, allow_empty_column_selection=False, temperature=0.0352513, ) >>> model = TapasForQuestionAnswering.from_pretrained(""google/tapas-base"", config=config) >>> optimizer = AdamW(model.parameters(), lr=5e-5) >>> model.train() >>> for epoch in range(2): # loop over the dataset multiple times for batch in train_dataloader: # get the inputs; input_ids = batch[""input_ids""] attention_mask = batch[""attention_mask""] token_type_ids = batch[""token_type_ids""] labels = batch[""labels""] numeric_values = batch[""numeric_values""] numeric_values_scale = batch[""numeric_values_scale""] float_answer = batch[""float_answer""] # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = model( input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, labels=labels, numeric_values=numeric_values, numeric_values_scale=numeric_values_scale, float_answer=float_answer, ) loss = outputs.loss loss.backward() optimizer.step() You can then fine-tune [`TFTapasForQuestionAnswering`] as follows (shown here for the weak supervision for aggregation case): >>> import tensorflow as tf >>> from transformers import TapasConfig, TFTapasForQuestionAnswering >>> # this is the default WTQ configuration >>> config = TapasConfig( num_aggregation_labels=4, use_answer_as_supervision=True, answer_loss_cutoff=0.664694, cell_selection_preference=0.207951, huber_loss_delta=0.121194, init_cell_selection_weights_to_zero=True, select_one_column=True, allow_empty_column_selection=False, temperature=0.0352513, ) >>> model = TFTapasForQuestionAnswering.from_pretrained(""google/tapas-base"", config=config) >>> optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5) >>> for epoch in range(2): # loop over the dataset multiple times for batch in train_dataloader: # get the inputs; input_ids = batch[0] attention_mask = batch[1] token_type_ids = batch[4] labels = batch[-1] numeric_values = batch[2] numeric_values_scale = batch[3] float_answer = batch[6] # forward + backward + optimize with tf.GradientTape() as tape: outputs = model( input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, labels=labels, numeric_values=numeric_values, numeric_values_scale=numeric_values_scale, float_answer=float_answer, ) grads = tape.gradient(outputs.loss, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights)) ## Usage: inference Here we explain how you can use [`TapasForQuestionAnswering`] or [`TFTapasForQuestionAnswering`] for inference (i.e. making predictions on new data). For inference, only `input_ids`, `attention_mask` and `token_type_ids` (which you can obtain using [`TapasTokenizer`]) have to be provided to the model to obtain the logits. Next, you can use the handy [`~models.tapas.tokenization_tapas.convert_logits_to_predictions`] method to convert these into predicted coordinates and optional aggregation indices. However, note that inference is **different** depending on whether or not the setup is conversational. 
In a non-conversational set-up, inference can be done in parallel on all table-question pairs of a batch. Here's an example of that: >>> from transformers import TapasTokenizer, TapasForQuestionAnswering >>> import pandas as pd >>> model_name = ""google/tapas-base-finetuned-wtq"" >>> model = TapasForQuestionAnswering.from_pretrained(model_name) >>> tokenizer = TapasTokenizer.from_pretrained(model_name) >>> data = {""Actors"": [""Brad Pitt"", ""Leonardo Di Caprio"", ""George Clooney""], ""Number of movies"": [""87"", ""53"", ""69""]} >>> queries = [ ""What is the name of the first actor?"", ""How many movies has George Clooney played in?"", ""What is the total number of movies?"", ] >>> table = pd.DataFrame.from_dict(data) >>> inputs = tokenizer(table=table, queries=queries, padding=""max_length"", return_tensors=""pt"") >>> outputs = model(**inputs) >>> predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions( inputs, outputs.logits.detach(), outputs.logits_aggregation.detach() ) >>> # let's print out the results: >>> id2aggregation = {0: ""NONE"", 1: ""SUM"", 2: ""AVERAGE"", 3: ""COUNT""} >>> aggregation_predictions_string = [id2aggregation[x] for x in predicted_aggregation_indices] >>> answers = [] >>> for coordinates in predicted_answer_coordinates: if len(coordinates) == 1: # only a single cell: answers.append(table.iat[coordinates[0]]) else: # multiple cells cell_values = [] for coordinate in coordinates: cell_values.append(table.iat[coordinate]) answers.append("", "".join(cell_values)) >>> display(table) >>> print("""") >>> for query, answer, predicted_agg in zip(queries, answers, aggregation_predictions_string): print(query) if predicted_agg == ""NONE"": print(""Predicted answer: "" + answer) else: print(""Predicted answer: "" + predicted_agg + "" > "" + answer) What is the name of the first actor? Predicted answer: Brad Pitt How many movies has George Clooney played in? Predicted answer: COUNT > 69 What is the total number of movies? Predicted answer: SUM > 87, 53, 69 Here we explain how you can use [`TFTapasForQuestionAnswering`] for inference (i.e. making predictions on new data). For inference, only `input_ids`, `attention_mask` and `token_type_ids` (which you can obtain using [`TapasTokenizer`]) have to be provided to the model to obtain the logits. Next, you can use the handy [`~models.tapas.tokenization_tapas.convert_logits_to_predictions`] method to convert these into predicted coordinates and optional aggregation indices. However, note that inference is **different** depending on whether or not the setup is conversational. In a non-conversational set-up, inference can be done in parallel on all table-question pairs of a batch. 
Here's an example of that: >>> from transformers import TapasTokenizer, TFTapasForQuestionAnswering >>> import pandas as pd >>> model_name = ""google/tapas-base-finetuned-wtq"" >>> model = TFTapasForQuestionAnswering.from_pretrained(model_name) >>> tokenizer = TapasTokenizer.from_pretrained(model_name) >>> data = {""Actors"": [""Brad Pitt"", ""Leonardo Di Caprio"", ""George Clooney""], ""Number of movies"": [""87"", ""53"", ""69""]} >>> queries = [ ""What is the name of the first actor?"", ""How many movies has George Clooney played in?"", ""What is the total number of movies?"", ] >>> table = pd.DataFrame.from_dict(data) >>> inputs = tokenizer(table=table, queries=queries, padding=""max_length"", return_tensors=""tf"") >>> outputs = model(**inputs) >>> predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions( inputs, outputs.logits, outputs.logits_aggregation ) >>> # let's print out the results: >>> id2aggregation = {0: ""NONE"", 1: ""SUM"", 2: ""AVERAGE"", 3: ""COUNT""} >>> aggregation_predictions_string = [id2aggregation[x] for x in predicted_aggregation_indices] >>> answers = [] >>> for coordinates in predicted_answer_coordinates: if len(coordinates) == 1: # only a single cell: answers.append(table.iat[coordinates[0]]) else: # multiple cells cell_values = [] for coordinate in coordinates: cell_values.append(table.iat[coordinate]) answers.append("", "".join(cell_values)) >>> display(table) >>> print("""") >>> for query, answer, predicted_agg in zip(queries, answers, aggregation_predictions_string): print(query) if predicted_agg == ""NONE"": print(""Predicted answer: "" + answer) else: print(""Predicted answer: "" + predicted_agg + "" > "" + answer) What is the name of the first actor? Predicted answer: Brad Pitt How many movies has George Clooney played in? Predicted answer: COUNT > 69 What is the total number of movies? Predicted answer: SUM > 87, 53, 69 In case of a conversational set-up, then each table-question pair must be provided **sequentially** to the model, such that the `prev_labels` token types can be overwritten by the predicted `labels` of the previous table-question pair. Again, more info can be found in [this notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) (for PyTorch) and [this notebook](https://github.com/kamalkraj/Tapas-Tutorial/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) (for TensorFlow). 
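For the conversational case, a rough PyTorch sketch of that sequential loop is shown below. It assumes that the `prev_labels` ids are the fourth of the seven token type id slices (matching the default ordering in [`TapasConfig`]) and uses a toy follow-up question; the linked notebooks remain the reference implementation:

```python
import pandas as pd
import torch
from transformers import TapasForQuestionAnswering, TapasTokenizer

model_name = "google/tapas-base-finetuned-sqa"
model = TapasForQuestionAnswering.from_pretrained(model_name)
tokenizer = TapasTokenizer.from_pretrained(model_name)

data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
queries = ["What is the name of the first actor?", "How many movies has he played in?"]

prev_coordinates = None
for query in queries:
    inputs = tokenizer(table=table, queries=query, padding="max_length", return_tensors="pt")
    if prev_coordinates is not None:
        # overwrite the prev_labels slice with the cells answered in the previous turn
        token_type_ids = inputs["token_type_ids"]
        column_ids, row_ids = token_type_ids[0, :, 1], token_type_ids[0, :, 2]
        for row, col in prev_coordinates:
            # row_ids/column_ids are 1-indexed, answer coordinates are 0-indexed
            token_type_ids[0, (row_ids == row + 1) & (column_ids == col + 1), 3] = 1
    with torch.no_grad():
        outputs = model(**inputs)
    predicted = tokenizer.convert_logits_to_predictions(inputs, outputs.logits.detach())
    prev_coordinates = predicted[0][0]  # coordinates predicted for the single query in this batch
    print(query, "->", [table.iat[coordinate] for coordinate in prev_coordinates])
```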
## Resources - [Text classification task guide](../tasks/sequence_classification) - [Masked language modeling task guide](../tasks/masked_language_modeling) ## TAPAS specific outputs [[autodoc]] models.tapas.modeling_tapas.TableQuestionAnsweringOutput ## TapasConfig [[autodoc]] TapasConfig ## TapasTokenizer [[autodoc]] TapasTokenizer - __call__ - convert_logits_to_predictions - save_vocabulary ## TapasModel [[autodoc]] TapasModel - forward ## TapasForMaskedLM [[autodoc]] TapasForMaskedLM - forward ## TapasForSequenceClassification [[autodoc]] TapasForSequenceClassification - forward ## TapasForQuestionAnswering [[autodoc]] TapasForQuestionAnswering - forward ## TFTapasModel [[autodoc]] TFTapasModel - call ## TFTapasForMaskedLM [[autodoc]] TFTapasForMaskedLM - call ## TFTapasForSequenceClassification [[autodoc]] TFTapasForSequenceClassification - call ## TFTapasForQuestionAnswering [[autodoc]] TFTapasForQuestionAnswering - call " model_doc/wav2vec2.md," # Wav2Vec2 ## Overview The Wav2Vec2 model was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. The abstract from the paper is the following: *We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). ## Usage tips - Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. - Wav2Vec2 model was trained using connectionist temporal classification (CTC) so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`]. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Wav2Vec2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - A notebook on how to [leverage a pretrained Wav2Vec2 model for emotion classification](https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb). 🌎 - [`Wav2Vec2ForCTC`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb). 
- [Audio classification task guide](../tasks/audio_classification) - A blog post on [boosting Wav2Vec2 with n-grams in 🤗 Transformers](https://huggingface.co/blog/wav2vec2-with-ngram). - A blog post on how to [finetune Wav2Vec2 for English ASR with 🤗 Transformers](https://huggingface.co/blog/fine-tune-wav2vec2-english). - A blog post on [finetuning XLS-R for Multi-Lingual ASR with 🤗 Transformers](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2). - A notebook on how to [create YouTube captions from any video by transcribing audio with Wav2Vec2](https://colab.research.google.com/github/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb). 🌎 - [`Wav2Vec2ForCTC`] is supported by a notebook on [how to finetune a speech recognition model in English](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/speech_recognition.ipynb), and [how to finetune a speech recognition model in any language](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multi_lingual_speech_recognition.ipynb). - [Automatic speech recognition task guide](../tasks/asr) 🚀 Deploy - A blog post on how to deploy Wav2Vec2 for [Automatic Speech Recogntion with Hugging Face's Transformers & Amazon SageMaker](https://www.philschmid.de/automatic-speech-recognition-sagemaker). ## Wav2Vec2Config [[autodoc]] Wav2Vec2Config ## Wav2Vec2CTCTokenizer [[autodoc]] Wav2Vec2CTCTokenizer - __call__ - save_vocabulary - decode - batch_decode - set_target_lang ## Wav2Vec2FeatureExtractor [[autodoc]] Wav2Vec2FeatureExtractor - __call__ ## Wav2Vec2Processor [[autodoc]] Wav2Vec2Processor - __call__ - pad - from_pretrained - save_pretrained - batch_decode - decode ## Wav2Vec2ProcessorWithLM [[autodoc]] Wav2Vec2ProcessorWithLM - __call__ - pad - from_pretrained - save_pretrained - batch_decode - decode ### Decoding multiple audios If you are planning to decode multiple batches of audios, you should consider using [`~Wav2Vec2ProcessorWithLM.batch_decode`] and passing an instantiated `multiprocessing.Pool`. Otherwise, [`~Wav2Vec2ProcessorWithLM.batch_decode`] performance will be slower than calling [`~Wav2Vec2ProcessorWithLM.decode`] for each audio individually, as it internally instantiates a new `Pool` for every call. 
See the example below: thon >>> # Let's see how to use a user-managed pool for batch decoding multiple audios >>> from multiprocessing import get_context >>> from transformers import AutoTokenizer, AutoProcessor, AutoModelForCTC >>> from datasets import load_dataset >>> import datasets >>> import torch >>> # import model, feature extractor, tokenizer >>> model = AutoModelForCTC.from_pretrained(""patrickvonplaten/wav2vec2-base-100h-with-lm"").to(""cuda"") >>> processor = AutoProcessor.from_pretrained(""patrickvonplaten/wav2vec2-base-100h-with-lm"") >>> # load example dataset >>> dataset = load_dataset(""hf-internal-testing/librispeech_asr_dummy"", ""clean"", split=""validation"") >>> dataset = dataset.cast_column(""audio"", datasets.Audio(sampling_rate=16_000)) >>> def map_to_array(batch): batch[""speech""] = batch[""audio""][""array""] return batch >>> # prepare speech data for batch inference >>> dataset = dataset.map(map_to_array, remove_columns=[""audio""]) >>> def map_to_pred(batch, pool): inputs = processor(batch[""speech""], sampling_rate=16_000, padding=True, return_tensors=""pt"") inputs = {k: v.to(""cuda"") for k, v in inputs.items()} with torch.no_grad(): logits = model(**inputs).logits transcription = processor.batch_decode(logits.cpu().numpy(), pool).text batch[""transcription""] = transcription return batch >>> # note: pool should be instantiated *after* `Wav2Vec2ProcessorWithLM`. >>> # otherwise, the LM won't be available to the pool's sub-processes >>> # select number of processes and batch_size based on number of CPU cores available and on dataset size >>> with get_context(""fork"").Pool(processes=2) as pool: result = dataset.map( map_to_pred, batched=True, batch_size=2, fn_kwargs={""pool"": pool}, remove_columns=[""speech""] ) >>> result[""transcription""][:2] ['MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL', ""NOR IS MISTER COULTER'S MANNER LESS INTERESTING THAN HIS MATTER""] ## Wav2Vec2 specific outputs [[autodoc]] models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput [[autodoc]] models.wav2vec2.modeling_wav2vec2.Wav2Vec2BaseModelOutput [[autodoc]] models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput [[autodoc]] models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput [[autodoc]] models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput ## Wav2Vec2Model [[autodoc]] Wav2Vec2Model - forward ## Wav2Vec2ForCTC [[autodoc]] Wav2Vec2ForCTC - forward - load_adapter ## Wav2Vec2ForSequenceClassification [[autodoc]] Wav2Vec2ForSequenceClassification - forward ## Wav2Vec2ForAudioFrameClassification [[autodoc]] Wav2Vec2ForAudioFrameClassification - forward ## Wav2Vec2ForXVector [[autodoc]] Wav2Vec2ForXVector - forward ## Wav2Vec2ForPreTraining [[autodoc]] Wav2Vec2ForPreTraining - forward ## TFWav2Vec2Model [[autodoc]] TFWav2Vec2Model - call ## TFWav2Vec2ForSequenceClassification [[autodoc]] TFWav2Vec2ForSequenceClassification - call ## TFWav2Vec2ForCTC [[autodoc]] TFWav2Vec2ForCTC - call ## FlaxWav2Vec2Model [[autodoc]] FlaxWav2Vec2Model - __call__ ## FlaxWav2Vec2ForCTC [[autodoc]] FlaxWav2Vec2ForCTC - __call__ ## FlaxWav2Vec2ForPreTraining [[autodoc]] FlaxWav2Vec2ForPreTraining - __call__ " model_doc/vitmatte.md," # ViTMatte ## Overview The ViTMatte model was proposed in [Boosting Image Matting with Pretrained Plain Vision Transformers](https://arxiv.org/abs/2305.15272) by Jingfeng Yao, Xinggang Wang, Shusheng Yang, Baoyuan Wang. 
ViTMatte leverages plain [Vision Transformers](vit) for the task of image matting, which is the process of accurately estimating the foreground object in images and videos. The abstract from the paper is the following: *Recently, plain vision Transformers (ViTs) have shown impressive performance on various computer vision tasks, thanks to their strong modeling capacity and large-scale pretraining. However, they have not yet conquered the problem of image matting. We hypothesize that image matting could also be boosted by ViTs and present a new efficient and robust ViT-based matting system, named ViTMatte. Our method utilizes (i) a hybrid attention mechanism combined with a convolution neck to help ViTs achieve an excellent performance-computation trade-off in matting tasks. (ii) Additionally, we introduce the detail capture module, which just consists of simple lightweight convolutions to complement the detailed information required by matting. To the best of our knowledge, ViTMatte is the first work to unleash the potential of ViT on image matting with concise adaptation. It inherits many superior properties from ViT to matting, including various pretraining strategies, concise architecture design, and flexible inference strategies. We evaluate ViTMatte on Composition-1k and Distinctions-646, the most commonly used benchmark for image matting, our method achieves state-of-the-art performance and outperforms prior matting works by a large margin.* This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/hustvl/ViTMatte). ViTMatte high-level overview. Taken from the original paper. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViTMatte. - A demo notebook regarding inference with [`VitMatteForImageMatting`], including background replacement, can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/ViTMatte). The model expects both the image and trimap (concatenated) as input. Use [`ViTMatteImageProcessor`] for this purpose. ## VitMatteConfig [[autodoc]] VitMatteConfig ## VitMatteImageProcessor [[autodoc]] VitMatteImageProcessor - preprocess ## VitMatteForImageMatting [[autodoc]] VitMatteForImageMatting - forward" model_doc/mistral.md," # Mistral ## Overview Mistral-7B-v0.1 is Mistral AI's first Large Language Model (LLM). ### Model Details Mistral-7B-v0.1 is a decoder-based LM with the following architectural choices: * Sliding Window Attention - Trained with 8k context length and fixed cache size, with a theoretical attention span of 128K tokens * GQA (Grouped Query Attention) - allowing faster inference and lower cache size. * Byte-fallback BPE tokenizer - ensures that characters are never mapped to out of vocabulary tokens. We also provide an instruction fine-tuned model: `Mistral-7B-Instruct-v0.1` which can be used for chat-based inference. For more details please read our [release blog post](https://mistral.ai/news/announcing-mistral-7b/) ### License Both `Mistral-7B-v0.1` and `Mistral-7B-Instruct-v0.1` are released under the Apache 2.0 license. 
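Since the instruction-tuned `Mistral-7B-Instruct-v0.1` checkpoint mentioned above is meant for chat-based inference, here is a minimal sketch of how it could be prompted through the tokenizer's chat template. This is an illustrative example rather than the official recipe: it assumes a recent `transformers` version that provides `apply_chat_template` and that the checkpoint ships a chat template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model_id = "mistralai/Mistral-7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(device)

# a single-turn conversation; the checkpoint's chat template wraps it in the expected [INST] ... [/INST] markers
messages = [{"role": "user", "content": "What is your favourite condiment?"}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)

generated_ids = model.generate(input_ids, max_new_tokens=100, do_sample=True)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```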
## Usage tips

`Mistral-7B-v0.1` and `Mistral-7B-Instruct-v0.1` can be found on the [Huggingface Hub](https://huggingface.co/mistralai).

These ready-to-use checkpoints can be downloaded and used via the HuggingFace Hub:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> device = "cuda"  # the device to load the model onto

>>> model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

>>> prompt = "My favourite condiment is"

>>> model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
>>> model.to(device)

>>> generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
>>> tokenizer.batch_decode(generated_ids)[0]
"The expected output"
```

Raw weights for `Mistral-7B-v0.1` and `Mistral-7B-Instruct-v0.1` can be downloaded from:

| Model Name                 | Checkpoint                                                                                |
|----------------------------|-------------------------------------------------------------------------------------------|
| `Mistral-7B-v0.1`          | [Raw Checkpoint](https://files.mistral-7b-v0-1.mistral.ai/mistral-7B-v0.1.tar)            |
| `Mistral-7B-Instruct-v0.1` | [Raw Checkpoint](https://files.mistral-7b-v0-1.mistral.ai/mistral-7B-instruct-v0.1.tar)   |

To use these raw checkpoints with HuggingFace you can use the `convert_mistral_weights_to_hf.py` script to convert them to the HuggingFace format:

```bash
python src/transformers/models/mistral/convert_mistral_weights_to_hf.py \
    --input_dir /path/to/downloaded/mistral/weights --model_size 7B --output_dir /output/path
```

You can then load the converted model from the `output/path`:

```python
from transformers import MistralForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("/output/path")
model = MistralForCausalLM.from_pretrained("/output/path")
```

## Combining Mistral and Flash Attention 2

First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature.

```bash
pip install -U flash-attn --no-build-isolation
```

Also make sure that your hardware is compatible with Flash Attention 2. Read more about it in the official documentation of the [`flash-attn`](https://github.com/Dao-AILab/flash-attention) repository. Also make sure to load your model in half-precision (e.g. `torch.float16`).

To load and run a model using Flash Attention 2, refer to the snippet below:

```python
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> device = "cuda"  # the device to load the model onto

>>> model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, use_flash_attention_2=True)
>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

>>> prompt = "My favourite condiment is"

>>> model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
>>> model.to(device)

>>> generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
>>> tokenizer.batch_decode(generated_ids)[0]
"The expected output"
```

### Expected speedups

Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the `mistralai/Mistral-7B-v0.1` checkpoint and the Flash Attention 2 version of the model.

### Sliding window Attention

The current implementation supports the sliding window attention mechanism and memory efficient cache management.
To enable sliding window attention, just make sure to have a `flash-attn` version that is compatible with sliding window attention (`>=2.3.0`). The Flash Attention-2 model uses also a more memory efficient cache slicing mechanism - as recommended per the official implementation of Mistral model that use rolling cache mechanism we keep the cache size fixed (`self.config.sliding_window`), support batched generation only for `padding_side=""left""` and use the absolute position of the current token to compute the positional embedding. ## The Mistral Team Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. ## MistralConfig [[autodoc]] MistralConfig ## MistralModel [[autodoc]] MistralModel - forward ## MistralForCausalLM [[autodoc]] MistralForCausalLM - forward ## MistralForSequenceClassification [[autodoc]] MistralForSequenceClassification - forward " model_doc/mms.md," # MMS ## Overview The MMS model was proposed in [Scaling Speech Technology to 1,000+ Languages](https://arxiv.org/abs/2305.13516) by Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli The abstract from the paper is the following: *Expanding the language coverage of speech technology has the potential to improve access to information for many more people. However, current speech technology is restricted to about one hundred languages which is a small fraction of the over 7,000 languages spoken around the world. The Massively Multilingual Speech (MMS) project increases the number of supported languages by 10-40x, depending on the task. The main ingredients are a new dataset based on readings of publicly available religious texts and effectively leveraging self-supervised learning. We built pre-trained wav2vec 2.0 models covering 1,406 languages, a single multilingual automatic speech recognition model for 1,107 languages, speech synthesis models for the same number of languages, as well as a language identification model for 4,017 languages. Experiments show that our multilingual speech recognition model more than halves the word error rate of Whisper on 54 languages of the FLEURS benchmark while being trained on a small fraction of the labeled data.* Here are the different models open sourced in the MMS project. The models and code are originally released [here](https://github.com/facebookresearch/fairseq/tree/main/examples/mms). We have add them to the `transformers` framework, making them easier to use. ### Automatic Speech Recognition (ASR) The ASR model checkpoints can be found here : [mms-1b-fl102](https://huggingface.co/facebook/mms-1b-fl102), [mms-1b-l1107](https://huggingface.co/facebook/mms-1b-l1107), [mms-1b-all](https://huggingface.co/facebook/mms-1b-all). For best accuracy, use the `mms-1b-all` model. Tips: - All ASR models accept a float array corresponding to the raw waveform of the speech signal. The raw waveform should be pre-processed with [`Wav2Vec2FeatureExtractor`]. - The models were trained using connectionist temporal classification (CTC) so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`]. 
- You can load different language adapter weights for different languages via [`~Wav2Vec2PreTrainedModel.load_adapter`]. Language adapters only consists of roughly 2 million parameters and can therefore be efficiently loaded on the fly when needed. #### Loading By default MMS loads adapter weights for English. If you want to load adapter weights of another language make sure to specify `target_lang=` as well as `""ignore_mismatched_sizes=True`. The `ignore_mismatched_sizes=True` keyword has to be passed to allow the language model head to be resized according to the vocabulary of the specified language. Similarly, the processor should be loaded with the same target language from transformers import Wav2Vec2ForCTC, AutoProcessor model_id = ""facebook/mms-1b-all"" target_lang = ""fra"" processor = AutoProcessor.from_pretrained(model_id, target_lang=target_lang) model = Wav2Vec2ForCTC.from_pretrained(model_id, target_lang=target_lang, ignore_mismatched_sizes=True) You can safely ignore a warning such as: ```text Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/mms-1b-all and are newly initialized because the shapes did not match: - lm_head.bias: found shape torch.Size([154]) in the checkpoint and torch.Size([314]) in the model instantiated - lm_head.weight: found shape torch.Size([154, 1280]) in the checkpoint and torch.Size([314, 1280]) in the model instantiated You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. If you want to use the ASR pipeline, you can load your chosen target language as such: from transformers import pipeline model_id = ""facebook/mms-1b-all"" target_lang = ""fra"" pipe = pipeline(model=model_id, model_kwargs={""target_lang"": ""fra"", ""ignore_mismatched_sizes"": True}) #### Inference Next, let's look at how we can run MMS in inference and change adapter layers after having called [`~PretrainedModel.from_pretrained`] First, we load audio data in different languages using the [Datasets](https://github.com/huggingface/datasets). from datasets import load_dataset, Audio # English stream_data = load_dataset(""mozilla-foundation/common_voice_13_0"", ""en"", split=""test"", streaming=True) stream_data = stream_data.cast_column(""audio"", Audio(sampling_rate=16000)) en_sample = next(iter(stream_data))[""audio""][""array""] # French stream_data = load_dataset(""mozilla-foundation/common_voice_13_0"", ""fr"", split=""test"", streaming=True) stream_data = stream_data.cast_column(""audio"", Audio(sampling_rate=16000)) fr_sample = next(iter(stream_data))[""audio""][""array""] Next, we load the model and processor from transformers import Wav2Vec2ForCTC, AutoProcessor import torch model_id = ""facebook/mms-1b-all"" processor = AutoProcessor.from_pretrained(model_id) model = Wav2Vec2ForCTC.from_pretrained(model_id) Now we process the audio data, pass the processed audio data to the model and transcribe the model output, just like we usually do for [`Wav2Vec2ForCTC`]. inputs = processor(en_sample, sampling_rate=16_000, return_tensors=""pt"") with torch.no_grad(): outputs = model(**inputs).logits ids = torch.argmax(outputs, dim=-1)[0] transcription = processor.decode(ids) # 'joe keton disapproved of films and buster also had reservations about the media' We can now keep the same model in memory and simply switch out the language adapters by calling the convenient [`~Wav2Vec2ForCTC.load_adapter`] function for the model and [`~Wav2Vec2CTCTokenizer.set_target_lang`] for the tokenizer. 
We pass the target language as an input - `""fra""` for French. processor.tokenizer.set_target_lang(""fra"") model.load_adapter(""fra"") inputs = processor(fr_sample, sampling_rate=16_000, return_tensors=""pt"") with torch.no_grad(): outputs = model(**inputs).logits ids = torch.argmax(outputs, dim=-1)[0] transcription = processor.decode(ids) # ""ce dernier est volé tout au long de l'histoire romaine"" In the same way the language can be switched out for all other supported languages. Please have a look at: processor.tokenizer.vocab.keys() to see all supported languages. To further improve performance from ASR models, language model decoding can be used. See the documentation [here](https://huggingface.co/facebook/mms-1b-all) for further details. ### Speech Synthesis (TTS) MMS-TTS uses the same model architecture as VITS, which was added to 🤗 Transformers in v4.33. MMS trains a separate model checkpoint for each of the 1100+ languages in the project. All available checkpoints can be found on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts), and the inference documentation under [VITS](https://huggingface.co/docs/transformers/main/en/model_doc/vits). #### Inference To use the MMS model, first update to the latest version of the Transformers library: ```bash pip install --upgrade transformers accelerate Since the flow-based model in VITS is non-deterministic, it is good practice to set a seed to ensure reproducibility of the outputs. - For languages with a Roman alphabet, such as English or French, the tokenizer can be used directly to pre-process the text inputs. The following code example runs a forward pass using the MMS-TTS English checkpoint: thon import torch from transformers import VitsTokenizer, VitsModel, set_seed tokenizer = VitsTokenizer.from_pretrained(""facebook/mms-tts-eng"") model = VitsModel.from_pretrained(""facebook/mms-tts-eng"") inputs = tokenizer(text=""Hello - my dog is cute"", return_tensors=""pt"") set_seed(555) # make deterministic with torch.no_grad(): outputs = model(**inputs) waveform = outputs.waveform[0] The resulting waveform can be saved as a `.wav` file: thon import scipy scipy.io.wavfile.write(""synthesized_speech.wav"", rate=model.config.sampling_rate, data=waveform) Or displayed in a Jupyter Notebook / Google Colab: thon from IPython.display import Audio Audio(waveform, rate=model.config.sampling_rate) For certain languages with non-Roman alphabets, such as Arabic, Mandarin or Hindi, the [`uroman`](https://github.com/isi-nlp/uroman) perl package is required to pre-process the text inputs to the Roman alphabet. You can check whether you require the `uroman` package for your language by inspecting the `is_uroman` attribute of the pre-trained `tokenizer`: thon from transformers import VitsTokenizer tokenizer = VitsTokenizer.from_pretrained(""facebook/mms-tts-eng"") print(tokenizer.is_uroman) If required, you should apply the uroman package to your text inputs **prior** to passing them to the `VitsTokenizer`, since currently the tokenizer does not support performing the pre-processing itself. To do this, first clone the uroman repository to your local machine and set the bash variable `UROMAN` to the local path: ```bash git clone https://github.com/isi-nlp/uroman.git cd uroman export UROMAN=$(pwd) You can then pre-process the text input using the following code snippet. 
You can either rely on using the bash variable `UROMAN` to point to the uroman repository, or you can pass the uroman directory as an argument to the `uromanize` function:

```python
import torch
from transformers import VitsTokenizer, VitsModel, set_seed
import os
import subprocess

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-kor")
model = VitsModel.from_pretrained("facebook/mms-tts-kor")

def uromanize(input_string, uroman_path):
    """Convert non-Roman strings to Roman using the `uroman` perl package."""
    script_path = os.path.join(uroman_path, "bin", "uroman.pl")

    command = ["perl", script_path]

    process = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    # Execute the perl command
    stdout, stderr = process.communicate(input=input_string.encode())

    if process.returncode != 0:
        raise ValueError(f"Error {process.returncode}: {stderr.decode()}")

    # Return the output as a string and skip the new-line character at the end
    return stdout.decode()[:-1]

text = "이봐 무슨 일이야"
uromanized_text = uromanize(text, uroman_path=os.environ["UROMAN"])

inputs = tokenizer(text=uromanized_text, return_tensors="pt")

set_seed(555)  # make deterministic
with torch.no_grad():
    outputs = model(inputs["input_ids"])

waveform = outputs.waveform[0]
```

**Tips:**

* The MMS-TTS checkpoints are trained on lower-cased, un-punctuated text. By default, the `VitsTokenizer` *normalizes* the inputs by removing any casing and punctuation, to avoid passing out-of-vocabulary characters to the model. Hence, the model is agnostic to casing and punctuation, so these should be avoided in the text prompt. You can disable normalisation by setting `normalize=False` in the call to the tokenizer, but this will lead to unexpected behaviour and is discouraged.
* The speaking rate can be varied by setting the attribute `model.speaking_rate` to a chosen value. Likewise, the randomness of the noise is controlled by `model.noise_scale`:

```python
import torch
from transformers import VitsTokenizer, VitsModel, set_seed

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
model = VitsModel.from_pretrained("facebook/mms-tts-eng")

inputs = tokenizer(text="Hello - my dog is cute", return_tensors="pt")

# make deterministic
set_seed(555)

# make speech faster and more noisy
model.speaking_rate = 1.5
model.noise_scale = 0.8

with torch.no_grad():
    outputs = model(**inputs)
```

### Language Identification (LID)

Different LID models are available based on the number of languages they can recognize - [126](https://huggingface.co/facebook/mms-lid-126), [256](https://huggingface.co/facebook/mms-lid-256), [512](https://huggingface.co/facebook/mms-lid-512), [1024](https://huggingface.co/facebook/mms-lid-1024), [2048](https://huggingface.co/facebook/mms-lid-2048), [4017](https://huggingface.co/facebook/mms-lid-4017).

#### Inference

First, we install transformers and some other libraries:

```bash
pip install torch accelerate datasets[audio]
pip install --upgrade transformers
```

Next, we load a couple of audio samples via `datasets`. Make sure that the audio data is sampled at 16 kHz (16,000 Hz).
from datasets import load_dataset, Audio # English stream_data = load_dataset(""mozilla-foundation/common_voice_13_0"", ""en"", split=""test"", streaming=True) stream_data = stream_data.cast_column(""audio"", Audio(sampling_rate=16000)) en_sample = next(iter(stream_data))[""audio""][""array""] # Arabic stream_data = load_dataset(""mozilla-foundation/common_voice_13_0"", ""ar"", split=""test"", streaming=True) stream_data = stream_data.cast_column(""audio"", Audio(sampling_rate=16000)) ar_sample = next(iter(stream_data))[""audio""][""array""] Next, we load the model and processor from transformers import Wav2Vec2ForSequenceClassification, AutoFeatureExtractor import torch model_id = ""facebook/mms-lid-126"" processor = AutoFeatureExtractor.from_pretrained(model_id) model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id) Now we process the audio data, pass the processed audio data to the model to classify it into a language, just like we usually do for Wav2Vec2 audio classification models such as [ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition](https://huggingface.co/harshit345/xlsr-wav2vec-speech-emotion-recognition) # English inputs = processor(en_sample, sampling_rate=16_000, return_tensors=""pt"") with torch.no_grad(): outputs = model(**inputs).logits lang_id = torch.argmax(outputs, dim=-1)[0].item() detected_lang = model.config.id2label[lang_id] # 'eng' # Arabic inputs = processor(ar_sample, sampling_rate=16_000, return_tensors=""pt"") with torch.no_grad(): outputs = model(**inputs).logits lang_id = torch.argmax(outputs, dim=-1)[0].item() detected_lang = model.config.id2label[lang_id] # 'ara' To see all the supported languages of a checkpoint, you can print out the language ids as follows: processor.id2label.values() ### Audio Pretrained Models Pretrained models are available for two different sizes - [300M](https://huggingface.co/facebook/mms-300m) , [1Bil](https://huggingface.co/facebook/mms-1b). The MMS for ASR architecture is based on the Wav2Vec2 model, refer to [Wav2Vec2's documentation page](wav2vec2) for further details on how to finetune with models for various downstream tasks. MMS-TTS uses the same model architecture as VITS, refer to [VITS's documentation page](vits) for API reference. " model_doc/videomae.md," # VideoMAE ## Overview The VideoMAE model was proposed in [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang. VideoMAE extends masked auto encoders ([MAE](vit_mae)) to video, claiming state-of-the-art performance on several video classification benchmarks. The abstract from the paper is the following: *Pre-training video transformers on extra large-scale datasets is generally required to achieve premier performance on relatively small datasets. In this paper, we show that video masked autoencoders (VideoMAE) are data-efficient learners for self-supervised video pre-training (SSVP). We are inspired by the recent ImageMAE and propose customized video tube masking and reconstruction. These simple designs turn out to be effective for overcoming information leakage caused by the temporal correlation during video reconstruction. We obtain three important findings on SSVP: (1) An extremely high proportion of masking ratio (i.e., 90% to 95%) still yields favorable performance of VideoMAE. The temporally redundant video content enables higher masking ratio than that of images. 
(2) VideoMAE achieves impressive results on very small datasets (i.e., around 3k-4k videos) without using any extra data. This is partially ascribed to the challenging task of video reconstruction to enforce high-level structure learning. (3) VideoMAE shows that data quality is more important than data quantity for SSVP. Domain shift between pre-training and target datasets are important issues in SSVP. Notably, our VideoMAE with the vanilla ViT backbone can achieve 83.9% on Kinects-400, 75.3% on Something-Something V2, 90.8% on UCF101, and 61.1% on HMDB51 without using any extra data.* VideoMAE pre-training. Taken from the original paper. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/MCG-NJU/VideoMAE). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with VideoMAE. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. **Video classification** - [A notebook](https://github.com/huggingface/notebooks/blob/main/examples/video_classification.ipynb) that shows how to fine-tune a VideoMAE model on a custom dataset. - [Video classification task guide](../tasks/video_classification) - [A 🤗 Space](https://huggingface.co/spaces/sayakpaul/video-classification-ucf101-subset) showing how to perform inference with a video classification model. ## VideoMAEConfig [[autodoc]] VideoMAEConfig ## VideoMAEFeatureExtractor [[autodoc]] VideoMAEFeatureExtractor - __call__ ## VideoMAEImageProcessor [[autodoc]] VideoMAEImageProcessor - preprocess ## VideoMAEModel [[autodoc]] VideoMAEModel - forward ## VideoMAEForPreTraining `VideoMAEForPreTraining` includes the decoder on top for self-supervised pre-training. [[autodoc]] transformers.VideoMAEForPreTraining - forward ## VideoMAEForVideoClassification [[autodoc]] transformers.VideoMAEForVideoClassification - forward " model_doc/ernie.md," # ERNIE ## Overview ERNIE is a series of powerful models proposed by baidu, especially in Chinese tasks, including [ERNIE1.0](https://arxiv.org/abs/1904.09223), [ERNIE2.0](https://ojs.aaai.org/index.php/AAAI/article/view/6428), [ERNIE3.0](https://arxiv.org/abs/2107.02137), [ERNIE-Gram](https://arxiv.org/abs/2010.12148), [ERNIE-health](https://arxiv.org/abs/2110.07244), etc. These models are contributed by [nghuyong](https://huggingface.co/nghuyong) and the official code can be found in [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) (in PaddlePaddle). 
### Usage example Take `ernie-1.0-base-zh` as an example: ```Python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained(""nghuyong/ernie-1.0-base-zh"") model = AutoModel.from_pretrained(""nghuyong/ernie-1.0-base-zh"") ### Model checkpoints | Model Name | Language | Description | |:-------------------:|:--------:|:-------------------------------:| | ernie-1.0-base-zh | Chinese | Layer:12, Heads:12, Hidden:768 | | ernie-2.0-base-en | English | Layer:12, Heads:12, Hidden:768 | | ernie-2.0-large-en | English | Layer:24, Heads:16, Hidden:1024 | | ernie-3.0-base-zh | Chinese | Layer:12, Heads:12, Hidden:768 | | ernie-3.0-medium-zh | Chinese | Layer:6, Heads:12, Hidden:768 | | ernie-3.0-mini-zh | Chinese | Layer:6, Heads:12, Hidden:384 | | ernie-3.0-micro-zh | Chinese | Layer:4, Heads:12, Hidden:384 | | ernie-3.0-nano-zh | Chinese | Layer:4, Heads:12, Hidden:312 | | ernie-health-zh | Chinese | Layer:12, Heads:12, Hidden:768 | | ernie-gram-zh | Chinese | Layer:12, Heads:12, Hidden:768 | You can find all the supported models from huggingface's model hub: [huggingface.co/nghuyong](https://huggingface.co/nghuyong), and model details from paddle's official repo: [PaddleNLP](https://paddlenlp.readthedocs.io/zh/latest/model_zoo/transformers/ERNIE/contents.html) and [ERNIE](https://github.com/PaddlePaddle/ERNIE/blob/repro). ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## ErnieConfig [[autodoc]] ErnieConfig - all ## Ernie specific outputs [[autodoc]] models.ernie.modeling_ernie.ErnieForPreTrainingOutput ## ErnieModel [[autodoc]] ErnieModel - forward ## ErnieForPreTraining [[autodoc]] ErnieForPreTraining - forward ## ErnieForCausalLM [[autodoc]] ErnieForCausalLM - forward ## ErnieForMaskedLM [[autodoc]] ErnieForMaskedLM - forward ## ErnieForNextSentencePrediction [[autodoc]] ErnieForNextSentencePrediction - forward ## ErnieForSequenceClassification [[autodoc]] ErnieForSequenceClassification - forward ## ErnieForMultipleChoice [[autodoc]] ErnieForMultipleChoice - forward ## ErnieForTokenClassification [[autodoc]] ErnieForTokenClassification - forward ## ErnieForQuestionAnswering [[autodoc]] ErnieForQuestionAnswering - forward" model_doc/deberta.md," # DeBERTa ## Overview The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen It is based on Google's BERT model released in 2018 and Facebook's RoBERTa model released in 2019. It builds on RoBERTa with disentangled attention and enhanced mask decoder training with half of the data used in RoBERTa. The abstract from the paper is the following: *Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. 
The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to predict the masked tokens for model pretraining. We show that these two techniques significantly improve the efficiency of model pretraining and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). The DeBERTa code and pre-trained models will be made publicly available at https://github.com/microsoft/DeBERTa.* This model was contributed by [DeBERTa](https://huggingface.co/DeBERTa). This model TF 2.0 implementation was contributed by [kamalkraj](https://huggingface.co/kamalkraj) . The original code can be found [here](https://github.com/microsoft/DeBERTa). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DeBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - A blog post on how to [Accelerate Large Model Training using DeepSpeed](https://huggingface.co/blog/accelerate-deepspeed) with DeBERTa. - A blog post on [Supercharged Customer Service with Machine Learning](https://huggingface.co/blog/supercharge-customer-service-with-machine-learning) with DeBERTa. - [`DebertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb). - [`TFDebertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb). - [Text classification task guide](../tasks/sequence_classification) - [`DebertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb). - [`TFDebertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb). - [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course. - [Byte-Pair Encoding tokenization](https://huggingface.co/course/chapter6/5?fw=pt) chapter of the 🤗 Hugging Face Course. 
- [Token classification task guide](../tasks/token_classification) - [`DebertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFDebertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course. - [Masked language modeling task guide](../tasks/masked_language_modeling) - [`DebertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb). - [`TFDebertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb). - [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course. - [Question answering task guide](../tasks/question_answering) ## DebertaConfig [[autodoc]] DebertaConfig ## DebertaTokenizer [[autodoc]] DebertaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## DebertaTokenizerFast [[autodoc]] DebertaTokenizerFast - build_inputs_with_special_tokens - create_token_type_ids_from_sequences ## DebertaModel [[autodoc]] DebertaModel - forward ## DebertaPreTrainedModel [[autodoc]] DebertaPreTrainedModel ## DebertaForMaskedLM [[autodoc]] DebertaForMaskedLM - forward ## DebertaForSequenceClassification [[autodoc]] DebertaForSequenceClassification - forward ## DebertaForTokenClassification [[autodoc]] DebertaForTokenClassification - forward ## DebertaForQuestionAnswering [[autodoc]] DebertaForQuestionAnswering - forward ## TFDebertaModel [[autodoc]] TFDebertaModel - call ## TFDebertaPreTrainedModel [[autodoc]] TFDebertaPreTrainedModel - call ## TFDebertaForMaskedLM [[autodoc]] TFDebertaForMaskedLM - call ## TFDebertaForSequenceClassification [[autodoc]] TFDebertaForSequenceClassification - call ## TFDebertaForTokenClassification [[autodoc]] TFDebertaForTokenClassification - call ## TFDebertaForQuestionAnswering [[autodoc]] TFDebertaForQuestionAnswering - call " model_doc/biogpt.md," # BioGPT ## Overview The BioGPT model was proposed in [BioGPT: generative pre-trained transformer for biomedical text generation and mining ](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a domain-specific generative pre-trained Transformer language model for biomedical text generation and mining. BioGPT follows the Transformer language model backbone, and is pre-trained on 15M PubMed abstracts from scratch. 
The abstract from the paper is the following: *Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.* This model was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/microsoft/BioGPT). ## Usage tips - BioGPT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - BioGPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows BioGPT to generate syntactically coherent text as it can be observed in the run_generation.py example script. - The model can take the `past_key_values` (for PyTorch) as input, which is the previously computed key/value attention pairs. Using this (past_key_values or past) value prevents the model from re-computing pre-computed values in the context of text generation. For PyTorch, see past_key_values argument of the BioGptForCausalLM.forward() method for more information on its usage. ## Resources - [Causal language modeling task guide](../tasks/language_modeling) ## BioGptConfig [[autodoc]] BioGptConfig ## BioGptTokenizer [[autodoc]] BioGptTokenizer - save_vocabulary ## BioGptModel [[autodoc]] BioGptModel - forward ## BioGptForCausalLM [[autodoc]] BioGptForCausalLM - forward ## BioGptForTokenClassification [[autodoc]] BioGptForTokenClassification - forward ## BioGptForSequenceClassification [[autodoc]] BioGptForSequenceClassification - forward" model_doc/xlm.md," # XLM ## Overview The XLM model was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample, Alexis Conneau. It's a transformer pretrained using one of the following objectives: - a causal language modeling (CLM) objective (next token prediction), - a masked language modeling (MLM) objective (BERT-like), or - a Translation Language Modeling (TLM) object (extension of BERT's MLM to multiple language inputs) The abstract from the paper is the following: *Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding. In this work, we extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. 
We propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain state-of-the-art results on cross-lingual classification, unsupervised and supervised machine translation. On XNLI, our approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine translation, we obtain 34.3 BLEU on WMT'16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised machine translation, we obtain a new state of the art of 38.5 BLEU on WMT'16 Romanian-English, outperforming the previous best approach by more than 4 BLEU. Our code and pretrained models will be made publicly available.* This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/facebookresearch/XLM/). ## Usage tips - XLM has many different checkpoints, which were trained using different objectives: CLM, MLM or TLM. Make sure to select the correct objective for your task (e.g. MLM checkpoints are not suitable for generation). - XLM has multilingual checkpoints which leverage a specific `lang` parameter. Check out the [multi-lingual](../multilingual) page for more information. - A transformer model trained on several languages. There are three different type of training for this model and the library provides checkpoints for all of them: * Causal language modeling (CLM) which is the traditional autoregressive training (so this model could be in the previous section as well). One of the languages is selected for each training sample, and the model input is a sentence of 256 tokens, that may span over several documents in one of those languages. * Masked language modeling (MLM) which is like RoBERTa. One of the languages is selected for each training sample, and the model input is a sentence of 256 tokens, that may span over several documents in one of those languages, with dynamic masking of the tokens. * A combination of MLM and translation language modeling (TLM). This consists of concatenating a sentence in two different languages, with random masking. To predict one of the masked tokens, the model can use both, the surrounding context in language 1 and the context given by language 2. 
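As a quick illustration of the `lang` parameter mentioned in the tips above, the sketch below follows the multilingual usage pattern and assumes the `xlm-clm-enfr-1024` checkpoint: every token is tagged with the id of its language via a `langs` tensor of the same shape as the input ids.

```python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel

tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024")
model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024")

input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")])  # batch size of 1

# map the language to its id and build a `langs` tensor of the same shape as the input ids
language_id = tokenizer.lang2id["en"]
langs = torch.full_like(input_ids, language_id)

outputs = model(input_ids, langs=langs)
```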
## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## XLMConfig [[autodoc]] XLMConfig ## XLMTokenizer [[autodoc]] XLMTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## XLM specific outputs [[autodoc]] models.xlm.modeling_xlm.XLMForQuestionAnsweringOutput ## XLMModel [[autodoc]] XLMModel - forward ## XLMWithLMHeadModel [[autodoc]] XLMWithLMHeadModel - forward ## XLMForSequenceClassification [[autodoc]] XLMForSequenceClassification - forward ## XLMForMultipleChoice [[autodoc]] XLMForMultipleChoice - forward ## XLMForTokenClassification [[autodoc]] XLMForTokenClassification - forward ## XLMForQuestionAnsweringSimple [[autodoc]] XLMForQuestionAnsweringSimple - forward ## XLMForQuestionAnswering [[autodoc]] XLMForQuestionAnswering - forward ## TFXLMModel [[autodoc]] TFXLMModel - call ## TFXLMWithLMHeadModel [[autodoc]] TFXLMWithLMHeadModel - call ## TFXLMForSequenceClassification [[autodoc]] TFXLMForSequenceClassification - call ## TFXLMForMultipleChoice [[autodoc]] TFXLMForMultipleChoice - call ## TFXLMForTokenClassification [[autodoc]] TFXLMForTokenClassification - call ## TFXLMForQuestionAnsweringSimple [[autodoc]] TFXLMForQuestionAnsweringSimple - call " model_doc/funnel.md," # Funnel Transformer ## Overview The Funnel Transformer model was proposed in the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236). It is a bidirectional transformer model, like BERT, but with a pooling operation after each block of layers, a bit like in traditional convolutional neural networks (CNN) in computer vision. The abstract from the paper is the following: *With the success of language pretraining, it is highly desirable to develop more efficient architectures of good scalability that can exploit the abundant unlabeled data at a lower cost. To improve the efficiency, we examine the much-overlooked redundancy in maintaining a full-length token-level presentation, especially for tasks that only require a single-vector presentation of the sequence. With this intuition, we propose Funnel-Transformer which gradually compresses the sequence of hidden states to a shorter one and hence reduces the computation cost. More importantly, by re-investing the saved FLOPs from length reduction in constructing a deeper or wider model, we further improve the model capacity. In addition, to perform token-level predictions as required by common pretraining objectives, Funnel-Transformer is able to recover a deep representation for each token from the reduced hidden sequence via a decoder. Empirically, with comparable or fewer FLOPs, Funnel-Transformer outperforms the standard Transformer on a wide variety of sequence-level prediction tasks, including text classification, language understanding, and reading comprehension.* This model was contributed by [sgugger](https://huggingface.co/sgugger). The original code can be found [here](https://github.com/laiguokun/Funnel-Transformer). 
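To make the effect of the pooling described above concrete, here is a small sketch comparing the final hidden-state length of a "-base" checkpoint (pooled encoder only) with that of the full model, which, as the usage tips below explain, upsamples back to the input length. The checkpoint names are assumptions based on the `funnel-transformer` family on the Hub; verify the exact output shapes locally.

```python
from transformers import AutoTokenizer, FunnelBaseModel, FunnelModel

tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small")
inputs = tokenizer("Funnel Transformer pools hidden states between blocks.", return_tensors="pt")

# "-base" checkpoints keep only the pooled encoder: the final sequence length is reduced
base_model = FunnelBaseModel.from_pretrained("funnel-transformer/small-base")
print(base_model(**inputs).last_hidden_state.shape)

# full checkpoints add a decoder that upsamples back to the input sequence length
full_model = FunnelModel.from_pretrained("funnel-transformer/small")
print(full_model(**inputs).last_hidden_state.shape)
```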
## Usage tips - Since Funnel Transformer uses pooling, the sequence length of the hidden states changes after each block of layers. This way, their length is divided by 2, which speeds up the computation of the next hidden states. The base model therefore has a final sequence length that is a quarter of the original one. This model can be used directly for tasks that just require a sentence summary (like sequence classification or multiple choice). For other tasks, the full model is used; this full model has a decoder that upsamples the final hidden states to the same sequence length as the input. - For tasks such as classification, this is not a problem, but for tasks like masked language modeling or token classification, we need a hidden state with the same sequence length as the original input. In those cases, the final hidden states are upsampled to the input sequence length and go through two additional layers. That's why there are two versions of each checkpoint. The version suffixed with “-base” contains only the three blocks, while the version without that suffix contains the three blocks and the upsampling head with its additional layers. - The Funnel Transformer checkpoints are all available with a full version and a base version. The first ones should be used for [`FunnelModel`], [`FunnelForPreTraining`], [`FunnelForMaskedLM`], [`FunnelForTokenClassification`] and [`FunnelForQuestionAnswering`]. The second ones should be used for [`FunnelBaseModel`], [`FunnelForSequenceClassification`] and [`FunnelForMultipleChoice`]. ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## FunnelConfig [[autodoc]] FunnelConfig ## FunnelTokenizer [[autodoc]] FunnelTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## FunnelTokenizerFast [[autodoc]] FunnelTokenizerFast ## Funnel specific outputs [[autodoc]] models.funnel.modeling_funnel.FunnelForPreTrainingOutput [[autodoc]] models.funnel.modeling_tf_funnel.TFFunnelForPreTrainingOutput ## FunnelBaseModel [[autodoc]] FunnelBaseModel - forward ## FunnelModel [[autodoc]] FunnelModel - forward ## FunnelModelForPreTraining [[autodoc]] FunnelForPreTraining - forward ## FunnelForMaskedLM [[autodoc]] FunnelForMaskedLM - forward ## FunnelForSequenceClassification [[autodoc]] FunnelForSequenceClassification - forward ## FunnelForMultipleChoice [[autodoc]] FunnelForMultipleChoice - forward ## FunnelForTokenClassification [[autodoc]] FunnelForTokenClassification - forward ## FunnelForQuestionAnswering [[autodoc]] FunnelForQuestionAnswering - forward ## TFFunnelBaseModel [[autodoc]] TFFunnelBaseModel - call ## TFFunnelModel [[autodoc]] TFFunnelModel - call ## TFFunnelModelForPreTraining [[autodoc]] TFFunnelForPreTraining - call ## TFFunnelForMaskedLM [[autodoc]] TFFunnelForMaskedLM - call ## TFFunnelForSequenceClassification [[autodoc]] TFFunnelForSequenceClassification - call ## TFFunnelForMultipleChoice [[autodoc]] TFFunnelForMultipleChoice - call ## TFFunnelForTokenClassification [[autodoc]] TFFunnelForTokenClassification - call ## TFFunnelForQuestionAnswering [[autodoc]] TFFunnelForQuestionAnswering - call " model_doc/longt5.md," # LongT5 ## Overview The LongT5 model was proposed in [LongT5: 
Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung and Yinfei Yang. It's an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting. LongT5 model is an extension of T5 model, and it enables using one of the two different efficient attention mechanisms - (1) Local attention, or (2) Transient-Global attention. The abstract from the paper is the following: *Recent work has shown that either (1) increasing the input length or (2) increasing model size can improve the performance of Transformer-based neural models. In this paper, we present a new model, called LongT5, with which we explore the effects of scaling both the input length and model size at the same time. Specifically, we integrated attention ideas from long-input transformers (ETC), and adopted pre-training strategies from summarization pre-training (PEGASUS) into the scalable T5 architecture. The result is a new attention mechanism we call {\em Transient Global} (TGlobal), which mimics ETC's local/global attention mechanism, but without requiring additional side-inputs. We are able to achieve state-of-the-art results on several summarization tasks and outperform the original T5 models on question answering tasks.* This model was contributed by [stancld](https://huggingface.co/stancld). The original code can be found [here](https://github.com/google-research/longt5). ## Usage tips - [`LongT5ForConditionalGeneration`] is an extension of [`T5ForConditionalGeneration`] exchanging the traditional encoder *self-attention* layer with efficient either *local* attention or *transient-global* (*tglobal*) attention. - Unlike the T5 model, LongT5 does not use a task prefix. Furthermore, it uses a different pre-training objective inspired by the pre-training of [`PegasusForConditionalGeneration`]. - LongT5 model is designed to work efficiently and very well on long-range *sequence-to-sequence* tasks where the input sequence exceeds commonly used 512 tokens. It is capable of handling input sequences of a length up to 16,384 tokens. - For *Local Attention*, the sparse sliding-window local attention operation allows a given token to attend only `r` tokens to the left and right of it (with `r=127` by default). *Local Attention* does not introduce any new parameters to the model. The complexity of the mechanism is linear in input sequence length `l`: `O(l*r)`. - *Transient Global Attention* is an extension of the *Local Attention*. It, furthermore, allows each input token to interact with all other tokens in the layer. This is achieved via splitting an input sequence into blocks of a fixed length `k` (with a default `k=16`). Then, a global token for such a block is obtained via summing and normalizing the embeddings of every token in the block. Thanks to this, the attention allows each token to attend to both nearby tokens like in Local attention, and also every global token like in the case of standard global attention (*transient* represents the fact the global tokens are constructed dynamically within each attention operation). As a consequence, *TGlobal* attention introduces a few new parameters -- global relative position biases and a layer normalization for global token's embedding. The complexity of this mechanism is `O(l(r + l/k))`. 
- An example showing how to evaluate a fine-tuned LongT5 model on the [pubmed dataset](https://huggingface.co/datasets/scientific_papers) is below.

```python
>>> import evaluate
>>> from datasets import load_dataset
>>> from transformers import AutoTokenizer, LongT5ForConditionalGeneration

>>> dataset = load_dataset("scientific_papers", "pubmed", split="validation")
>>> model = (
...     LongT5ForConditionalGeneration.from_pretrained("Stancld/longt5-tglobal-large-16384-pubmed-3k_steps")
...     .to("cuda")
...     .half()
... )
>>> tokenizer = AutoTokenizer.from_pretrained("Stancld/longt5-tglobal-large-16384-pubmed-3k_steps")

>>> def generate_answers(batch):
...     inputs_dict = tokenizer(
...         batch["article"], max_length=16384, padding="max_length", truncation=True, return_tensors="pt"
...     )
...     input_ids = inputs_dict.input_ids.to("cuda")
...     attention_mask = inputs_dict.attention_mask.to("cuda")
...     output_ids = model.generate(input_ids, attention_mask=attention_mask, max_length=512, num_beams=2)
...     batch["predicted_abstract"] = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
...     return batch

>>> result = dataset.map(generate_answers, batched=True, batch_size=2)
>>> rouge = evaluate.load("rouge")
>>> rouge.compute(predictions=result["predicted_abstract"], references=result["abstract"])
```

## Resources - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## LongT5Config [[autodoc]] LongT5Config ## LongT5Model [[autodoc]] LongT5Model - forward ## LongT5ForConditionalGeneration [[autodoc]] LongT5ForConditionalGeneration - forward ## LongT5EncoderModel [[autodoc]] LongT5EncoderModel - forward ## FlaxLongT5Model [[autodoc]] FlaxLongT5Model - __call__ - encode - decode ## FlaxLongT5ForConditionalGeneration [[autodoc]] FlaxLongT5ForConditionalGeneration - __call__ - encode - decode " model_doc/glpn.md," # GLPN This is a recently introduced model so the API hasn't been tested extensively. There may be some bugs or slight breaking changes to fix it in the future. If you see something strange, file a [Github Issue](https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title). ## Overview The GLPN model was proposed in [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. GLPN combines [SegFormer](segformer)'s hierarchical mix-Transformer with a lightweight decoder for monocular depth estimation. The proposed decoder shows better performance than the previously proposed decoders, with considerably less computational complexity. The abstract from the paper is the following: *Depth estimation from a single image is an important task that can be applied to various fields in computer vision, and has grown rapidly with the development of convolutional neural networks. In this paper, we propose a novel structure and training strategy for monocular depth estimation to further improve the prediction accuracy of the network. We deploy a hierarchical transformer encoder to capture and convey the global context, and design a lightweight yet powerful decoder to generate an estimated depth map while considering local connectivity. By constructing connected paths between multi-scale local features and the global decoding stream with our proposed selective feature fusion module, the network can integrate both representations and recover fine details.
In addition, the proposed decoder shows better performance than the previously proposed decoders, with considerably less computational complexity. Furthermore, we improve the depth-specific augmentation method by utilizing an important observation in depth estimation to enhance the model. Our network achieves state-of-the-art performance over the challenging depth dataset NYU Depth V2. Extensive experiments have been conducted to validate and show the effectiveness of the proposed approach. Finally, our model shows better generalisation ability and robustness than other comparative models.* Summary of the approach. Taken from the original paper. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/vinvino02/GLPDepth). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GLPN. - Demo notebooks for [`GLPNForDepthEstimation`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/GLPN). - [Monocular depth estimation task guide](../tasks/monocular_depth_estimation) ## GLPNConfig [[autodoc]] GLPNConfig ## GLPNFeatureExtractor [[autodoc]] GLPNFeatureExtractor - __call__ ## GLPNImageProcessor [[autodoc]] GLPNImageProcessor - preprocess ## GLPNModel [[autodoc]] GLPNModel - forward ## GLPNForDepthEstimation [[autodoc]] GLPNForDepthEstimation - forward " model_doc/speech_to_text.md," # Speech2Text ## Overview The Speech2Text model was proposed in [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. It's a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the transcripts/translations autoregressively. Speech2Text has been fine-tuned on several datasets for ASR and ST: [LibriSpeech](http://www.openslr.org/12), [CoVoST 2](https://github.com/facebookresearch/covost), [MuST-C](https://ict.fbk.eu/must-c/). This model was contributed by [valhalla](https://huggingface.co/valhalla). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text). ## Inference Speech2Text is a speech model that accepts a float tensor of log-mel filter-bank features extracted from the speech signal. It's a transformer-based seq2seq model, so the transcripts/translations are generated autoregressively. The `generate()` method can be used for inference. The [`Speech2TextFeatureExtractor`] class is responsible for extracting the log-mel filter-bank features. The [`Speech2TextProcessor`] wraps [`Speech2TextFeatureExtractor`] and [`Speech2TextTokenizer`] into a single instance to both extract the input features and decode the predicted token ids. The feature extractor depends on `torchaudio` and the tokenizer depends on `sentencepiece` so be sure to install those packages before running the examples. You could either install those as extra speech dependencies with `pip install transformers""[speech, sentencepiece]""` or install the packages separately with `pip install torchaudio sentencepiece`. 
Also `torchaudio` requires the development version of the [libsndfile](http://www.mega-nerd.com/libsndfile/) package which can be installed via a system package manager. On Ubuntu it can be installed as follows: `apt install libsndfile1-dev` - ASR and Speech Translation thon >>> import torch >>> from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration >>> from datasets import load_dataset >>> model = Speech2TextForConditionalGeneration.from_pretrained(""facebook/s2t-small-librispeech-asr"") >>> processor = Speech2TextProcessor.from_pretrained(""facebook/s2t-small-librispeech-asr"") >>> ds = load_dataset(""hf-internal-testing/librispeech_asr_demo"", ""clean"", split=""validation"") >>> inputs = processor(ds[0][""audio""][""array""], sampling_rate=ds[0][""audio""][""sampling_rate""], return_tensors=""pt"") >>> generated_ids = model.generate(inputs[""input_features""], attention_mask=inputs[""attention_mask""]) >>> transcription = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> transcription ['mister quilter is the apostle of the middle classes and we are glad to welcome his gospel'] - Multilingual speech translation For multilingual speech translation models, `eos_token_id` is used as the `decoder_start_token_id` and the target language id is forced as the first generated token. To force the target language id as the first generated token, pass the `forced_bos_token_id` parameter to the `generate()` method. The following example shows how to transate English speech to French text using the *facebook/s2t-medium-mustc-multilingual-st* checkpoint. thon >>> import torch >>> from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration >>> from datasets import load_dataset >>> model = Speech2TextForConditionalGeneration.from_pretrained(""facebook/s2t-medium-mustc-multilingual-st"") >>> processor = Speech2TextProcessor.from_pretrained(""facebook/s2t-medium-mustc-multilingual-st"") >>> ds = load_dataset(""hf-internal-testing/librispeech_asr_demo"", ""clean"", split=""validation"") >>> inputs = processor(ds[0][""audio""][""array""], sampling_rate=ds[0][""audio""][""sampling_rate""], return_tensors=""pt"") >>> generated_ids = model.generate( inputs[""input_features""], attention_mask=inputs[""attention_mask""], forced_bos_token_id=processor.tokenizer.lang_code_to_id[""fr""], ) >>> translation = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> translation [""(Vidéo) Si M. Kilder est l'apossible des classes moyennes, et nous sommes heureux d'être accueillis dans son évangile.""] See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for Speech2Text checkpoints. 
## Speech2TextConfig [[autodoc]] Speech2TextConfig ## Speech2TextTokenizer [[autodoc]] Speech2TextTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## Speech2TextFeatureExtractor [[autodoc]] Speech2TextFeatureExtractor - __call__ ## Speech2TextProcessor [[autodoc]] Speech2TextProcessor - __call__ - from_pretrained - save_pretrained - batch_decode - decode ## Speech2TextModel [[autodoc]] Speech2TextModel - forward ## Speech2TextForConditionalGeneration [[autodoc]] Speech2TextForConditionalGeneration - forward ## TFSpeech2TextModel [[autodoc]] TFSpeech2TextModel - call ## TFSpeech2TextForConditionalGeneration [[autodoc]] TFSpeech2TextForConditionalGeneration - call " model_doc/efficientnet.md," # EfficientNet ## Overview The EfficientNet model was proposed in [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le. EfficientNets are a family of image classification models, which achieve state-of-the-art accuracy, yet being an order-of-magnitude smaller and faster than previous models. The abstract from the paper is the following: *Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet. To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters.* This model was contributed by [adirik](https://huggingface.co/adirik). The original code can be found [here](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet). ## EfficientNetConfig [[autodoc]] EfficientNetConfig ## EfficientNetImageProcessor [[autodoc]] EfficientNetImageProcessor - preprocess ## EfficientNetModel [[autodoc]] EfficientNetModel - forward ## EfficientNetForImageClassification [[autodoc]] EfficientNetForImageClassification - forward " model_doc/layoutlm.md," # LayoutLM ## Overview The LayoutLM model was proposed in the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. It's a simple but effective pretraining method of text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. 
It obtains state-of-the-art results on several downstream tasks: - form understanding: the [FUNSD](https://guillaumejaume.github.io/FUNSD/) dataset (a collection of 199 annotated forms comprising more than 30,000 words). - receipt understanding: the [SROIE](https://rrc.cvc.uab.es/?ch=13) dataset (a collection of 626 receipts for training and 347 receipts for testing). - document image classification: the [RVL-CDIP](https://www.cs.cmu.edu/~aharley/rvl-cdip/) dataset (a collection of 400,000 images belonging to one of 16 classes). The abstract from the paper is the following: *Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the widespread use of pretraining models for NLP applications, they almost exclusively focus on text-level manipulation, while neglecting layout and style information that is vital for document image understanding. In this paper, we propose the LayoutLM to jointly model interactions between text and layout information across scanned document images, which is beneficial for a great number of real-world document image understanding tasks such as information extraction from scanned documents. Furthermore, we also leverage image features to incorporate words' visual information into LayoutLM. To the best of our knowledge, this is the first time that text and layout are jointly learned in a single framework for document-level pretraining. It achieves new state-of-the-art results in several downstream tasks, including form understanding (from 70.72 to 79.27), receipt understanding (from 94.02 to 95.24) and document image classification (from 93.07 to 94.42).* ## Usage tips - In addition to *input_ids*, [`~transformers.LayoutLMModel.forward`] also expects the input `bbox`, which are the bounding boxes (i.e. 2D-positions) of the input tokens. These can be obtained using an external OCR engine such as Google's [Tesseract](https://github.com/tesseract-ocr/tesseract) (there's a [Python wrapper](https://pypi.org/project/pytesseract/) available). Each bounding box should be in (x0, y0, x1, y1) format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1, y1) represents the position of the lower right corner. Note that one first needs to normalize the bounding boxes to be on a 0-1000 scale. To normalize, you can use the following function: thon def normalize_bbox(bbox, width, height): return [ int(1000 * (bbox[0] / width)), int(1000 * (bbox[1] / height)), int(1000 * (bbox[2] / width)), int(1000 * (bbox[3] / height)), ] Here, `width` and `height` correspond to the width and height of the original document in which the token occurs. Those can be obtained using the Python Image Library (PIL) library for example, as follows: thon from PIL import Image # Document can be a png, jpg, etc. PDFs must be converted to images. image = Image.open(name_of_your_document).convert(""RGB"") width, height = image.size ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LayoutLM. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - A blog post on [fine-tuning LayoutLM for document-understanding using Keras & Hugging Face Transformers](https://www.philschmid.de/fine-tuning-layoutlm-keras). 
- A blog post on how to [fine-tune LayoutLM for document-understanding using only Hugging Face Transformers](https://www.philschmid.de/fine-tuning-layoutlm). - A notebook on how to [fine-tune LayoutLM on the FUNSD dataset with image embeddings](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Add_image_embeddings_to_LayoutLM.ipynb). - See also: [Document question answering task guide](../tasks/document_question_answering) - A notebook on how to [fine-tune LayoutLM for sequence classification on the RVL-CDIP dataset](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb). - [Text classification task guide](../tasks/sequence_classification) - A notebook on how to [ fine-tune LayoutLM for token classification on the FUNSD dataset](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb). - [Token classification task guide](../tasks/token_classification) **Other resources** - [Masked language modeling task guide](../tasks/masked_language_modeling) 🚀 Deploy - A blog post on how to [Deploy LayoutLM with Hugging Face Inference Endpoints](https://www.philschmid.de/inference-endpoints-layoutlm). ## LayoutLMConfig [[autodoc]] LayoutLMConfig ## LayoutLMTokenizer [[autodoc]] LayoutLMTokenizer ## LayoutLMTokenizerFast [[autodoc]] LayoutLMTokenizerFast ## LayoutLMModel [[autodoc]] LayoutLMModel ## LayoutLMForMaskedLM [[autodoc]] LayoutLMForMaskedLM ## LayoutLMForSequenceClassification [[autodoc]] LayoutLMForSequenceClassification ## LayoutLMForTokenClassification [[autodoc]] LayoutLMForTokenClassification ## LayoutLMForQuestionAnswering [[autodoc]] LayoutLMForQuestionAnswering ## TFLayoutLMModel [[autodoc]] TFLayoutLMModel ## TFLayoutLMForMaskedLM [[autodoc]] TFLayoutLMForMaskedLM ## TFLayoutLMForSequenceClassification [[autodoc]] TFLayoutLMForSequenceClassification ## TFLayoutLMForTokenClassification [[autodoc]] TFLayoutLMForTokenClassification ## TFLayoutLMForQuestionAnswering [[autodoc]] TFLayoutLMForQuestionAnswering " model_doc/vitdet.md," # ViTDet ## Overview The ViTDet model was proposed in [Exploring Plain Vision Transformer Backbones for Object Detection](https://arxiv.org/abs/2203.16527) by Yanghao Li, Hanzi Mao, Ross Girshick, Kaiming He. VitDet leverages the plain [Vision Transformer](vit) for the task of object detection. The abstract from the paper is the following: *We explore the plain, non-hierarchical Vision Transformer (ViT) as a backbone network for object detection. This design enables the original ViT architecture to be fine-tuned for object detection without needing to redesign a hierarchical backbone for pre-training. With minimal adaptations for fine-tuning, our plain-backbone detector can achieve competitive results. Surprisingly, we observe: (i) it is sufficient to build a simple feature pyramid from a single-scale feature map (without the common FPN design) and (ii) it is sufficient to use window attention (without shifting) aided with very few cross-window propagation blocks. With plain ViT backbones pre-trained as Masked Autoencoders (MAE), our detector, named ViTDet, can compete with the previous leading methods that were all based on hierarchical backbones, reaching up to 61.3 AP_box on the COCO dataset using only ImageNet-1K pre-training. 
We hope our study will draw attention to research on plain-backbone detectors.* This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/facebookresearch/detectron2/tree/main/projects/ViTDet). Tips: - At the moment, only the backbone is available. ## VitDetConfig [[autodoc]] VitDetConfig ## VitDetModel [[autodoc]] VitDetModel - forward" model_doc/autoformer.md," # Autoformer ## Overview The Autoformer model was proposed in [Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting](https://arxiv.org/abs/2106.13008) by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long. This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process. The abstract from the paper is the following: *Extending the forecasting time is a critical demand for real applications, such as extreme weather early warning and long-term energy consumption planning. This paper studies the long-term forecasting problem of time series. Prior Transformer-based models adopt various self-attention mechanisms to discover the long-range dependencies. However, intricate temporal patterns of the long-term future prohibit the model from finding reliable dependencies. Also, Transformers have to adopt the sparse versions of point-wise self-attentions for long series efficiency, resulting in the information utilization bottleneck. Going beyond Transformers, we design Autoformer as a novel decomposition architecture with an Auto-Correlation mechanism. We break with the pre-processing convention of series decomposition and renovate it as a basic inner block of deep models. This design empowers Autoformer with progressive decomposition capacities for complex time series. Further, inspired by the stochastic process theory, we design the Auto-Correlation mechanism based on the series periodicity, which conducts the dependencies discovery and representation aggregation at the sub-series level. Auto-Correlation outperforms self-attention in both efficiency and accuracy. In long-term forecasting, Autoformer yields state-of-the-art accuracy, with a 38% relative improvement on six benchmarks, covering five practical applications: energy, traffic, economics, weather and disease.* This model was contributed by [elisim](https://huggingface.co/elisim) and [kashif](https://huggingface.co/kashif). The original code can be found [here](https://github.com/thuml/Autoformer). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
- Check out the Autoformer blog-post in HuggingFace blog: [Yes, Transformers are Effective for Time Series Forecasting (+ Autoformer)](https://huggingface.co/blog/autoformer) ## AutoformerConfig [[autodoc]] AutoformerConfig ## AutoformerModel [[autodoc]] AutoformerModel - forward ## AutoformerForPrediction [[autodoc]] AutoformerForPrediction - forward " model_doc/persimmon.md," # Persimmon ## Overview The Persimmon model was created by [ADEPT](https://www.adept.ai/blog/persimmon-8b), and authored by Erich Elsen, Augustus Odena, Maxwell Nye, Sağnak Taşırlar, Tri Dao, Curtis Hawthorne, Deepak Moparthi, Arushi Somani. The authors introduced Persimmon-8B, a decoder model based on the classic transformers architecture, with query and key normalization. Persimmon-8B is a fully permissively-licensed model with approximately 8 billion parameters, released under the Apache license. Some of the key attributes of Persimmon-8B are long context size (16K), performance, and capabilities for multimodal extensions. The authors showcase their approach to model evaluation, focusing on practical text generation, mirroring how users interact with language models. The work also includes a comparative analysis, pitting Persimmon-8B against other prominent models (MPT 7B Instruct and Llama 2 Base 7B 1-Shot), across various evaluation tasks. The results demonstrate Persimmon-8B's competitive performance, even with limited training data. In terms of model details, the work outlines the architecture and training methodology of Persimmon-8B, providing insights into its design choices, sequence length, and dataset composition. The authors present a fast inference code that outperforms traditional implementations through operator fusion and CUDA graph utilization while maintaining code coherence. They express their anticipation of how the community will leverage this contribution to drive innovation, hinting at further upcoming releases as part of an ongoing series of developments. This model was contributed by [ArthurZ](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/persimmon-ai-labs/adept-inference). ## Usage tips The `Persimmon` models were trained using `bfloat16`, but the original inference uses `float16` The checkpoints uploaded on the hub use `torch_dtype = 'float16'` which will be used by the `AutoModel` API to cast the checkpoints from `torch.float32` to `torch.float16`. The `dtype` of the online weights is mostly irrelevant, unless you are using `torch_dtype=""auto""` when initializing a model using `model = AutoModelForCausalLM.from_pretrained(""path"", torch_dtype = ""auto"")`. The reason is that the model will first be downloaded ( using the `dtype` of the checkpoints online) then it will be cast to the default `dtype` of `torch` (becomes `torch.float32`). Users should specify the `torch_dtype` they want, and if they don't it will be `torch.float32`. Finetuning the model in `float16` is not recommended and known to produce `nan`, as such the model should be fine-tuned in `bfloat16`. 
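As a concrete illustration of the casting rules above, the snippet below loads a checkpoint with the relevant `torch_dtype` settings (in practice you would pick one of them). The Hub id `adept/persimmon-8b-base` is used as an example name; a locally converted checkpoint behaves the same way:

```python
import torch
from transformers import AutoModelForCausalLM

# Example Hub id; substitute your own converted Persimmon checkpoint if needed.
checkpoint = "adept/persimmon-8b-base"

# Default behaviour: weights are downloaded in the dtype stored online (float16 here),
# then cast to torch.float32 because no torch_dtype is given.
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# torch_dtype="auto" keeps the dtype stored in the checkpoint (float16),
# matching the original inference setup.
model_half = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto")

# For fine-tuning, load in bfloat16 explicitly, since float16 training is known to produce NaNs.
model_bf16 = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16)

print(next(model_bf16.parameters()).dtype)  # torch.bfloat16
```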
Tips:
- To convert the model, you need to clone the original repository using `git clone https://github.com/persimmon-ai-labs/adept-inference`, then get the checkpoints:

```bash
git clone https://github.com/persimmon-ai-labs/adept-inference
wget https://axtkn4xl5cip.objectstorage.us-phoenix-1.oci.customer-oci.com/n/axtkn4xl5cip/b/adept-public-data/o/8b_base_model_release.tar
tar -xvf 8b_base_model_release.tar
python src/transformers/models/persimmon/convert_persimmon_weights_to_hf.py --input_dir /path/to/downloaded/persimmon/weights/ --output_dir /output/path \
    --pt_model_path /path/to/8b_chat_model_release/iter_0001251/mp_rank_00/model_optim_rng.pt --ada_lib_path /path/to/adept-inference
```

For the chat model:

```bash
wget https://axtkn4xl5cip.objectstorage.us-phoenix-1.oci.customer-oci.com/n/axtkn4xl5cip/b/adept-public-data/o/8b_chat_model_release.tar
tar -xvf 8b_chat_model_release.tar
```

Thereafter, models can be loaded via:

```python
from transformers import PersimmonForCausalLM, PersimmonTokenizer

model = PersimmonForCausalLM.from_pretrained("/output/path")
tokenizer = PersimmonTokenizer.from_pretrained("/output/path")
```

- Persimmon uses a `sentencepiece` based tokenizer, with a `Unigram` model. It supports bytefallback, which is only available in `tokenizers==0.14.0` for the fast tokenizer. The `LlamaTokenizer` is used as it is a standard wrapper around sentencepiece. The `chat` template will be updated with the templating functions in a follow-up PR!
- The authors suggest using the following prompt format for the chat mode: `f"human: {prompt}\n\nadept:"`

## PersimmonConfig [[autodoc]] PersimmonConfig ## PersimmonModel [[autodoc]] PersimmonModel - forward ## PersimmonForCausalLM [[autodoc]] PersimmonForCausalLM - forward ## PersimmonForSequenceClassification [[autodoc]] PersimmonForSequenceClassification - forward " model_doc/vit_mae.md," # ViTMAE ## Overview The ViTMAE model was proposed in [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377v2) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick. The paper shows that, by pre-training a Vision Transformer (ViT) to reconstruct pixel values for masked patches, one can get results after fine-tuning that outperform supervised pre-training. The abstract from the paper is the following: *This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels. It is based on two core designs. First, we develop an asymmetric encoder-decoder architecture, with an encoder that operates only on the visible subset of patches (without mask tokens), along with a lightweight decoder that reconstructs the original image from the latent representation and mask tokens. Second, we find that masking a high proportion of the input image, e.g., 75%, yields a nontrivial and meaningful self-supervisory task. Coupling these two designs enables us to train large models efficiently and effectively: we accelerate training (by 3x or more) and improve accuracy. Our scalable approach allows for learning high-capacity models that generalize well: e.g., a vanilla ViT-Huge model achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data. Transfer performance in downstream tasks outperforms supervised pre-training and shows promising scaling behavior.* MAE architecture. Taken from the original paper.
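The masking-and-reconstruction objective summarized above can be tried out directly with [`ViTMAEForPreTraining`]; the sketch below assumes the `facebook/vit-mae-base` checkpoint and uses an arbitrary test image URL:

```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, ViTMAEForPreTraining

>>> # Any RGB image works; this URL is only an example.
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> processor = AutoImageProcessor.from_pretrained("facebook/vit-mae-base")
>>> model = ViTMAEForPreTraining.from_pretrained("facebook/vit-mae-base")

>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # The reconstruction loss is computed on the randomly masked patches;
>>> # `mask` marks which patches were hidden from the encoder (about 75% by default).
>>> outputs.loss, outputs.mask.float().mean()
```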
This model was contributed by [nielsr](https://huggingface.co/nielsr). TensorFlow version of the model was contributed by [sayakpaul](https://github.com/sayakpaul) and [ariG23498](https://github.com/ariG23498) (equal contribution). The original code can be found [here](https://github.com/facebookresearch/mae). ## Usage tips - MAE (masked auto encoding) is a method for self-supervised pre-training of Vision Transformers (ViTs). The pre-training objective is relatively simple: by masking a large portion (75%) of the image patches, the model must reconstruct raw pixel values. One can use [`ViTMAEForPreTraining`] for this purpose. - After pre-training, one ""throws away"" the decoder used to reconstruct pixels, and one uses the encoder for fine-tuning/linear probing. This means that after fine-tuning, one can directly plug in the weights into a [`ViTForImageClassification`]. - One can use [`ViTImageProcessor`] to prepare images for the model. See the code examples for more info. - Note that the encoder of MAE is only used to encode the visual patches. The encoded patches are then concatenated with mask tokens, which the decoder (which also consists of Transformer blocks) takes as input. Each mask token is a shared, learned vector that indicates the presence of a missing patch to be predicted. Fixed sin/cos position embeddings are added both to the input of the encoder and the decoder. - For a visual understanding of how MAEs work you can check out this [post](https://keras.io/examples/vision/masked_image_modeling/). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViTMAE. - [`ViTMAEForPreTraining`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining), allowing you to pre-train the model from scratch/further pre-train the model on custom data. - A notebook that illustrates how to visualize reconstructed pixel values with [`ViTMAEForPreTraining`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/ViTMAE/ViT_MAE_visualization_demo.ipynb). If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## ViTMAEConfig [[autodoc]] ViTMAEConfig ## ViTMAEModel [[autodoc]] ViTMAEModel - forward ## ViTMAEForPreTraining [[autodoc]] transformers.ViTMAEForPreTraining - forward ## TFViTMAEModel [[autodoc]] TFViTMAEModel - call ## TFViTMAEForPreTraining [[autodoc]] transformers.TFViTMAEForPreTraining - call " model_doc/clvp.md," # CLVP ## Overview The CLVP (Contrastive Language-Voice Pretrained Transformer) model was proposed in [Better speech synthesis through scaling](https://arxiv.org/abs/2305.07243) by James Betker. The abstract from the paper is the following: *In recent years, the field of image generation has been revolutionized by the application of autoregressive transformers and DDPMs. These approaches model the process of image generation as a step-wise probabilistic processes and leverage large amounts of compute and data to learn the image distribution. This methodology of improving performance need not be confined to images. This paper describes a way to apply advances in the image generative domain to speech synthesis. 
The result is TorToise - an expressive, multi-voice text-to-speech system.* This model was contributed by [Susnato Dhar](https://huggingface.co/susnato). The original code can be found [here](https://github.com/neonbjb/tortoise-tts). ## Usage tips 1. CLVP is an integral part of the Tortoise TTS model. 2. CLVP can be used to compare different generated speech candidates with the provided text, and the best speech tokens are forwarded to the diffusion model. 3. The use of the [`ClvpModelForConditionalGeneration.generate()`] method is strongly recommended for tortoise usage. 4. Note that the CLVP model expects the audio to be sampled at 22.05 kHz contrary to other audio models which expects 16 kHz. ## Brief Explanation: - The [`ClvpTokenizer`] tokenizes the text input, and the [`ClvpFeatureExtractor`] extracts the log mel-spectrogram from the desired audio. - [`ClvpConditioningEncoder`] takes those text tokens and audio representations and converts them into embeddings conditioned on the text and audio. - The [`ClvpForCausalLM`] uses those embeddings to generate multiple speech candidates. - Each speech candidate is passed through the speech encoder ([`ClvpEncoder`]) which converts them into a vector representation, and the text encoder ([`ClvpEncoder`]) converts the text tokens into the same latent space. - At the end, we compare each speech vector with the text vector to see which speech vector is most similar to the text vector. - [`ClvpModelForConditionalGeneration.generate()`] compresses all of the logic described above into a single method. Example : thon >>> import datasets >>> from transformers import ClvpProcessor, ClvpModelForConditionalGeneration >>> # Define the Text and Load the Audio (We are taking an audio example from HuggingFace Hub using `datasets` library). >>> text = ""This is an example text."" >>> ds = datasets.load_dataset(""hf-internal-testing/librispeech_asr_dummy"", ""clean"", split=""validation"") >>> ds = ds.cast_column(""audio"", datasets.Audio(sampling_rate=22050)) >>> sample = ds[0][""audio""] >>> # Define processor and model. >>> processor = ClvpProcessor.from_pretrained(""susnato/clvp_dev"") >>> model = ClvpModelForConditionalGeneration.from_pretrained(""susnato/clvp_dev"") >>> # Generate processor output and model output. >>> processor_output = processor(raw_speech=sample[""array""], sampling_rate=sample[""sampling_rate""], text=text, return_tensors=""pt"") >>> generated_output = model.generate(**processor_output) ## ClvpConfig [[autodoc]] ClvpConfig - from_sub_model_configs ## ClvpEncoderConfig [[autodoc]] ClvpEncoderConfig ## ClvpDecoderConfig [[autodoc]] ClvpDecoderConfig ## ClvpTokenizer [[autodoc]] ClvpTokenizer - save_vocabulary ## ClvpFeatureExtractor [[autodoc]] ClvpFeatureExtractor - __call__ ## ClvpProcessor [[autodoc]] ClvpProcessor - __call__ - decode - batch_decode ## ClvpModelForConditionalGeneration [[autodoc]] ClvpModelForConditionalGeneration - forward - generate - get_text_features - get_speech_features ## ClvpForCausalLM [[autodoc]] ClvpForCausalLM ## ClvpModel [[autodoc]] ClvpModel ## ClvpEncoder [[autodoc]] ClvpEncoder ## ClvpDecoder [[autodoc]] ClvpDecoder " model_doc/phobert.md," # PhoBERT ## Overview The PhoBERT model was proposed in [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92.pdf) by Dat Quoc Nguyen, Anh Tuan Nguyen. 
The abstract from the paper is the following: *We present PhoBERT with two versions, PhoBERT-base and PhoBERT-large, the first public large-scale monolingual language models pre-trained for Vietnamese. Experimental results show that PhoBERT consistently outperforms the recent best pre-trained multilingual model XLM-R (Conneau et al., 2020) and improves the state-of-the-art in multiple Vietnamese-specific NLP tasks including Part-of-speech tagging, Dependency parsing, Named-entity recognition and Natural language inference.* This model was contributed by [dqnguyen](https://huggingface.co/dqnguyen). The original code can be found [here](https://github.com/VinAIResearch/PhoBERT).

## Usage example

```python
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer

>>> phobert = AutoModel.from_pretrained("vinai/phobert-base")
>>> tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")

>>> # INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
>>> line = "Tôi là sinh_viên trường đại_học Công_nghệ ."

>>> input_ids = torch.tensor([tokenizer.encode(line)])

>>> with torch.no_grad():
...     features = phobert(input_ids)  # Model outputs are now tuples

>>> # With TensorFlow 2.0+:
>>> # from transformers import TFAutoModel
>>> # phobert = TFAutoModel.from_pretrained("vinai/phobert-base")
```

The PhoBERT implementation is the same as BERT, except for tokenization. Refer to the [BERT documentation](bert) for information on configuration classes and their parameters. The PhoBERT-specific tokenizer is documented below.

## PhobertTokenizer [[autodoc]] PhobertTokenizer " model_doc/ul2.md," # UL2 ## Overview The UL2 model was presented in [Unifying Language Learning Paradigms](https://arxiv.org/pdf/2205.05131v1.pdf) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler. The abstract from the paper is the following: *Existing pre-trained models are generally geared towards a particular class of problems. To date, there seems to be still no consensus on what the right architecture and pre-training setup should be. This paper presents a unified framework for pre-training models that are universally effective across datasets and setups. We begin by disentangling architectural archetypes with pre-training objectives -- two concepts that are commonly conflated. Next, we present a generalized and unified perspective for self-supervision in NLP and show how different pre-training objectives can be cast as one another and how interpolating between different objectives can be effective. We then propose Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms together. We furthermore introduce a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes. We conduct extensive ablative experiments to compare multiple pre-training objectives and find that our method pushes the Pareto-frontier by outperforming T5 and/or GPT-like models across multiple diverse setups. Finally, by scaling our model up to 20B parameters, we achieve SOTA performance on 50 well-established supervised NLP tasks ranging from language generation (with automated and human evaluation), language understanding, text classification, question answering, commonsense reasoning, long text reasoning, structured knowledge grounding and information retrieval.
Our model also achieve strong results at in-context learning, outperforming 175B GPT-3 on zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot summarization.* This model was contributed by [DanielHesslow](https://huggingface.co/Seledorn). The original code can be found [here](https://github.com/google-research/google-research/tree/master/ul2). ## Usage tips - UL2 is an encoder-decoder model pre-trained on a mixture of denoising functions as well as fine-tuned on an array of downstream tasks. - UL2 has the same architecture as [T5v1.1](t5v1.1) but uses the Gated-SiLU activation function instead of Gated-GELU. - The authors release checkpoints of one architecture which can be seen [here](https://huggingface.co/google/ul2) As UL2 has the same architecture as T5v1.1, refer to [T5's documentation page](t5) for API reference, tips, code examples and notebooks. " model_doc/mpnet.md," # MPNet ## Overview The MPNet model was proposed in [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. MPNet adopts a novel pre-training method, named masked and permuted language modeling, to inherit the advantages of masked language modeling and permuted language modeling for natural language understanding. The abstract from the paper is the following: *BERT adopts masked language modeling (MLM) for pre-training and is one of the most successful pre-training models. Since BERT neglects dependency among predicted tokens, XLNet introduces permuted language modeling (PLM) for pre-training to address this problem. However, XLNet does not leverage the full position information of a sentence and thus suffers from position discrepancy between pre-training and fine-tuning. In this paper, we propose MPNet, a novel pre-training method that inherits the advantages of BERT and XLNet and avoids their limitations. MPNet leverages the dependency among predicted tokens through permuted language modeling (vs. MLM in BERT), and takes auxiliary position information as input to make the model see a full sentence and thus reducing the position discrepancy (vs. PLM in XLNet). We pre-train MPNet on a large-scale dataset (over 160GB text corpora) and fine-tune on a variety of down-streaming tasks (GLUE, SQuAD, etc). Experimental results show that MPNet outperforms MLM and PLM by a large margin, and achieves better results on these tasks compared with previous state-of-the-art pre-trained methods (e.g., BERT, XLNet, RoBERTa) under the same model setting.* The original code can be found [here](https://github.com/microsoft/MPNet). ## Usage tips MPNet doesn't have `token_type_ids`, you don't need to indicate which token belongs to which segment. Just separate your segments with the separation token `tokenizer.sep_token` (or `[sep]`). 
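In practice this means you can simply pass two texts to the tokenizer and let it insert the separator for you; a minimal sketch, assuming the `microsoft/mpnet-base` checkpoint:

```python
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")

>>> # Passing two texts: the tokenizer joins them with the separator token,
>>> # and there is no need to mark which token belongs to which segment.
>>> encoding = tokenizer("Hello world.", "This is the second segment.")
>>> tokenizer.decode(encoding["input_ids"])
```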
## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## MPNetConfig [[autodoc]] MPNetConfig ## MPNetTokenizer [[autodoc]] MPNetTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## MPNetTokenizerFast [[autodoc]] MPNetTokenizerFast ## MPNetModel [[autodoc]] MPNetModel - forward ## MPNetForMaskedLM [[autodoc]] MPNetForMaskedLM - forward ## MPNetForSequenceClassification [[autodoc]] MPNetForSequenceClassification - forward ## MPNetForMultipleChoice [[autodoc]] MPNetForMultipleChoice - forward ## MPNetForTokenClassification [[autodoc]] MPNetForTokenClassification - forward ## MPNetForQuestionAnswering [[autodoc]] MPNetForQuestionAnswering - forward ## TFMPNetModel [[autodoc]] TFMPNetModel - call ## TFMPNetForMaskedLM [[autodoc]] TFMPNetForMaskedLM - call ## TFMPNetForSequenceClassification [[autodoc]] TFMPNetForSequenceClassification - call ## TFMPNetForMultipleChoice [[autodoc]] TFMPNetForMultipleChoice - call ## TFMPNetForTokenClassification [[autodoc]] TFMPNetForTokenClassification - call ## TFMPNetForQuestionAnswering [[autodoc]] TFMPNetForQuestionAnswering - call " model_doc/mobilenet_v2.md," # MobileNet V2 ## Overview The MobileNet model was proposed in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. The abstract from the paper is the following: *In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3.* *The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers opposite to traditional residual models which use expanded representations in the input an MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on Imagenet classification, COCO object detection, VOC image segmentation. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as the number of parameters.* This model was contributed by [matthijs](https://huggingface.co/Matthijs). 
The original code and weights can be found [here for the main model](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet) and [here for DeepLabV3+](https://github.com/tensorflow/models/tree/master/research/deeplab). ## Usage tips - The checkpoints are named **mobilenet\_v2\_*depth*\_*size***, for example **mobilenet\_v2\_1.0\_224**, where **1.0** is the depth multiplier (sometimes also referred to as ""alpha"" or the width multiplier) and **224** is the resolution of the input images the model was trained on. - Even though the checkpoint is trained on images of specific size, the model will work on images of any size. The smallest supported image size is 32x32. - One can use [`MobileNetV2ImageProcessor`] to prepare images for the model. - The available image classification checkpoints are pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k) (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). However, the model predicts 1001 classes: the 1000 classes from ImageNet plus an extra “background” class (index 0). - The segmentation model uses a [DeepLabV3+](https://arxiv.org/abs/1802.02611) head. The available semantic segmentation checkpoints are pre-trained on [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/). - The original TensorFlow checkpoints use different padding rules than PyTorch, requiring the model to determine the padding amount at inference time, since this depends on the input image size. To use native PyTorch padding behavior, create a [`MobileNetV2Config`] with `tf_padding = False`. Unsupported features: - The [`MobileNetV2Model`] outputs a globally pooled version of the last hidden state. In the original model it is possible to use an average pooling layer with a fixed 7x7 window and stride 1 instead of global pooling. For inputs that are larger than the recommended image size, this gives a pooled output that is larger than 1x1. The Hugging Face implementation does not support this. - The original TensorFlow checkpoints include quantized models. We do not support these models as they include additional ""FakeQuantization"" operations to unquantize the weights. - It's common to extract the output from the expansion layers at indices 10 and 13, as well as the output from the final 1x1 convolution layer, for downstream purposes. Using `output_hidden_states=True` returns the output from all intermediate layers. There is currently no way to limit this to specific layers. - The DeepLabV3+ segmentation head does not use the final convolution layer from the backbone, but this layer gets computed anyway. There is currently no way to tell [`MobileNetV2Model`] up to which layer it should run. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with MobileNetV2. - [`MobileNetV2ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) **Semantic segmentation** - [Semantic segmentation task guide](../tasks/semantic_segmentation) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! 
The resource should ideally demonstrate something new instead of duplicating an existing resource. ## MobileNetV2Config [[autodoc]] MobileNetV2Config ## MobileNetV2FeatureExtractor [[autodoc]] MobileNetV2FeatureExtractor - preprocess - post_process_semantic_segmentation ## MobileNetV2ImageProcessor [[autodoc]] MobileNetV2ImageProcessor - preprocess - post_process_semantic_segmentation ## MobileNetV2Model [[autodoc]] MobileNetV2Model - forward ## MobileNetV2ForImageClassification [[autodoc]] MobileNetV2ForImageClassification - forward ## MobileNetV2ForSemanticSegmentation [[autodoc]] MobileNetV2ForSemanticSegmentation - forward " model_doc/beit.md," # BEiT ## Overview The BEiT model was proposed in [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei. Inspired by BERT, BEiT is the first paper that makes self-supervised pre-training of Vision Transformers (ViTs) outperform supervised pre-training. Rather than pre-training the model to predict the class of an image (as done in the [original ViT paper](https://arxiv.org/abs/2010.11929)), BEiT models are pre-trained to predict visual tokens from the codebook of OpenAI's [DALL-E model](https://arxiv.org/abs/2102.12092) given masked patches. The abstract from the paper is the following: *We introduce a self-supervised vision representation model BEiT, which stands for Bidirectional Encoder representation from Image Transformers. Following BERT developed in the natural language processing area, we propose a masked image modeling task to pretrain vision Transformers. Specifically, each image has two views in our pre-training, i.e, image patches (such as 16x16 pixels), and visual tokens (i.e., discrete tokens). We first ""tokenize"" the original image into visual tokens. Then we randomly mask some image patches and fed them into the backbone Transformer. The pre-training objective is to recover the original visual tokens based on the corrupted image patches. After pre-training BEiT, we directly fine-tune the model parameters on downstream tasks by appending task layers upon the pretrained encoder. Experimental results on image classification and semantic segmentation show that our model achieves competitive results with previous pre-training methods. For example, base-size BEiT achieves 83.2% top-1 accuracy on ImageNet-1K, significantly outperforming from-scratch DeiT training (81.8%) with the same setup. Moreover, large-size BEiT obtains 86.3% only using ImageNet-1K, even outperforming ViT-L with supervised pre-training on ImageNet-22K (85.2%).* This model was contributed by [nielsr](https://huggingface.co/nielsr). The JAX/FLAX version of this model was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/microsoft/unilm/tree/master/beit). ## Usage tips - BEiT models are regular Vision Transformers, but pre-trained in a self-supervised way rather than supervised. They outperform both the [original model (ViT)](vit) as well as [Data-efficient Image Transformers (DeiT)](deit) when fine-tuned on ImageNet-1K and CIFAR-100. You can check out demo notebooks regarding inference as well as fine-tuning on custom data [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/VisionTransformer) (you can just replace [`ViTFeatureExtractor`] by [`BeitImageProcessor`] and [`ViTForImageClassification`] by [`BeitForImageClassification`]). 
- There's also a demo notebook available which showcases how to combine DALL-E's image tokenizer with BEiT for performing masked image modeling. You can find it [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/BEiT). - As the BEiT models expect each image to be of the same size (resolution), one can use [`BeitImageProcessor`] to resize (or rescale) and normalize images for the model. - Both the patch resolution and image resolution used during pre-training or fine-tuning are reflected in the name of each checkpoint. For example, `microsoft/beit-base-patch16-224` refers to a base-sized architecture with patch resolution of 16x16 and fine-tuning resolution of 224x224. All checkpoints can be found on the [hub](https://huggingface.co/models?search=microsoft/beit). - The available checkpoints are either (1) pre-trained on [ImageNet-22k](http://www.image-net.org/) (a collection of 14 million images and 22k classes) only, (2) also fine-tuned on ImageNet-22k or (3) also fine-tuned on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/) (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). - BEiT uses relative position embeddings, inspired by the T5 model. During pre-training, the authors shared the relative position bias among the several self-attention layers. During fine-tuning, each layer's relative position bias is initialized with the shared relative position bias obtained after pre-training. Note that, if one wants to pre-train a model from scratch, one needs to either set the `use_relative_position_bias` or the `use_relative_position_bias` attribute of [`BeitConfig`] to `True` in order to add position embeddings. BEiT pre-training. Taken from the original paper. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BEiT. - [`BeitForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) **Semantic segmentation** - [Semantic segmentation task guide](../tasks/semantic_segmentation) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
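As a quick end-to-end illustration of the preprocessing and checkpoint-naming conventions described in the tips, the sketch below classifies an image with the `microsoft/beit-base-patch16-224` checkpoint (the image URL is only an example):

```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import BeitImageProcessor, BeitForImageClassification

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # any RGB image works
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # The processor resizes to the 224x224 fine-tuning resolution encoded in the checkpoint name.
>>> processor = BeitImageProcessor.from_pretrained("microsoft/beit-base-patch16-224")
>>> model = BeitForImageClassification.from_pretrained("microsoft/beit-base-patch16-224")

>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class = logits.argmax(-1).item()
>>> model.config.id2label[predicted_class]
```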
## BEiT specific outputs [[autodoc]] models.beit.modeling_beit.BeitModelOutputWithPooling [[autodoc]] models.beit.modeling_flax_beit.FlaxBeitModelOutputWithPooling ## BeitConfig [[autodoc]] BeitConfig ## BeitFeatureExtractor [[autodoc]] BeitFeatureExtractor - __call__ - post_process_semantic_segmentation ## BeitImageProcessor [[autodoc]] BeitImageProcessor - preprocess - post_process_semantic_segmentation ## BeitModel [[autodoc]] BeitModel - forward ## BeitForMaskedImageModeling [[autodoc]] BeitForMaskedImageModeling - forward ## BeitForImageClassification [[autodoc]] BeitForImageClassification - forward ## BeitForSemanticSegmentation [[autodoc]] BeitForSemanticSegmentation - forward ## FlaxBeitModel [[autodoc]] FlaxBeitModel - __call__ ## FlaxBeitForMaskedImageModeling [[autodoc]] FlaxBeitForMaskedImageModeling - __call__ ## FlaxBeitForImageClassification [[autodoc]] FlaxBeitForImageClassification - __call__ " model_doc/flan-t5.md," # FLAN-T5 ## Overview FLAN-T5 was released in the paper [Scaling Instruction-Finetuned Language Models](https://arxiv.org/pdf/2210.11416.pdf) - it is an enhanced version of T5 that has been finetuned in a mixture of tasks. One can directly use FLAN-T5 weights without finetuning the model: thon >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer >>> model = AutoModelForSeq2SeqLM.from_pretrained(""google/flan-t5-small"") >>> tokenizer = AutoTokenizer.from_pretrained(""google/flan-t5-small"") >>> inputs = tokenizer(""A step by step recipe to make bolognese pasta:"", return_tensors=""pt"") >>> outputs = model.generate(**inputs) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['Pour a cup of bolognese into a large bowl and add the pasta'] FLAN-T5 includes the same improvements as T5 version 1.1 (see [here](https://huggingface.co/docs/transformers/model_doc/t5v1.1) for the full details of the model's improvements.) Google has released the following variants: - [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) - [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) - [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) - [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl) - [google/flan-t5-xxl](https://huggingface.co/google/flan-t5-xxl). The original checkpoints can be found [here](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints). Refer to [T5's documentation page](t5) for all API reference, code examples and notebooks. For more details regarding training and evaluation of the FLAN-T5, refer to the model card. " model_doc/blip.md," # BLIP ## Overview The BLIP model was proposed in [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi. BLIP is a model that is able to perform various multi-modal tasks including: - Visual Question Answering - Image-Text retrieval (Image-text matching) - Image Captioning The abstract from the paper is the following: *Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. 
Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.* ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) This model was contributed by [ybelkada](https://huggingface.co/ybelkada). The original code can be found [here](https://github.com/salesforce/BLIP). ## Resources - [Jupyter notebook](https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_blip.ipynb) on how to fine-tune BLIP for image captioning on a custom dataset ## BlipConfig [[autodoc]] BlipConfig - from_text_vision_configs ## BlipTextConfig [[autodoc]] BlipTextConfig ## BlipVisionConfig [[autodoc]] BlipVisionConfig ## BlipProcessor [[autodoc]] BlipProcessor ## BlipImageProcessor [[autodoc]] BlipImageProcessor - preprocess ## BlipModel [[autodoc]] BlipModel - forward - get_text_features - get_image_features ## BlipTextModel [[autodoc]] BlipTextModel - forward ## BlipVisionModel [[autodoc]] BlipVisionModel - forward ## BlipForConditionalGeneration [[autodoc]] BlipForConditionalGeneration - forward ## BlipForImageTextRetrieval [[autodoc]] BlipForImageTextRetrieval - forward ## BlipForQuestionAnswering [[autodoc]] BlipForQuestionAnswering - forward ## TFBlipModel [[autodoc]] TFBlipModel - call - get_text_features - get_image_features ## TFBlipTextModel [[autodoc]] TFBlipTextModel - call ## TFBlipVisionModel [[autodoc]] TFBlipVisionModel - call ## TFBlipForConditionalGeneration [[autodoc]] TFBlipForConditionalGeneration - call ## TFBlipForImageTextRetrieval [[autodoc]] TFBlipForImageTextRetrieval - call ## TFBlipForQuestionAnswering [[autodoc]] TFBlipForQuestionAnswering - call " model_doc/falcon.md," # Falcon ## Overview Falcon is a class of causal decoder-only models built by [TII](https://www.tii.ae/). The largest Falcon checkpoints have been trained on >=1T tokens of text, with a particular emphasis on the [RefinedWeb](https://arxiv.org/abs/2306.01116) corpus. They are made available under the Apache 2.0 license. Falcon's architecture is modern and optimized for inference, with multi-query attention and support for efficient attention variants like `FlashAttention`. Both 'base' models trained only as causal language models as well as 'instruct' models that have received further fine-tuning are available. Falcon models are (as of 2023) some of the largest and most powerful open-source language models, and consistently rank highly in the [OpenLLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). ## Converting custom checkpoints Falcon models were initially added to the Hugging Face Hub as custom code checkpoints. However, Falcon is now fully supported in the Transformers library. 
If you fine-tuned a model from a custom code checkpoint, we recommend converting your checkpoint to the new in-library format, as this should give significant improvements to stability and performance, especially for generation, as well as removing the need to use `trust_remote_code=True`! You can convert custom code checkpoints to full Transformers checkpoints using the `convert_custom_code_checkpoint.py` script located in the [Falcon model directory](https://github.com/huggingface/transformers/tree/main/src/transformers/models/falcon) of the Transformers library. To use this script, simply call it with `python convert_custom_code_checkpoint.py --checkpoint_dir my_model`. This will convert your checkpoint in-place, and you can immediately load it from the directory afterwards with e.g. `from_pretrained()`. If your model hasn't been uploaded to the Hub, we recommend making a backup before attempting the conversion, just in case! ## FalconConfig [[autodoc]] FalconConfig - all ## FalconModel [[autodoc]] FalconModel - forward ## FalconForCausalLM [[autodoc]] FalconForCausalLM - forward ## FalconForSequenceClassification [[autodoc]] FalconForSequenceClassification - forward ## FalconForTokenClassification [[autodoc]] FalconForTokenClassification - forward ## FalconForQuestionAnswering [[autodoc]] FalconForQuestionAnswering - forward " model_doc/deta.md," # DETA ## Overview The DETA model was proposed in [NMS Strikes Back](https://arxiv.org/abs/2212.06137) by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl. DETA (short for Detection Transformers with Assignment) improves [Deformable DETR](deformable_detr) by replacing the one-to-one bipartite Hungarian matching loss with one-to-many label assignments used in traditional detectors with non-maximum suppression (NMS). This leads to significant gains of up to 2.5 mAP. The abstract from the paper is the following: *Detection Transformer (DETR) directly transforms queries to unique objects by using one-to-one bipartite matching during training and enables end-to-end object detection. Recently, these models have surpassed traditional detectors on COCO with undeniable elegance. However, they differ from traditional detectors in multiple designs, including model architecture and training schedules, and thus the effectiveness of one-to-one matching is not fully understood. In this work, we conduct a strict comparison between the one-to-one Hungarian matching in DETRs and the one-to-many label assignments in traditional detectors with non-maximum supervision (NMS). Surprisingly, we observe one-to-many assignments with NMS consistently outperform standard one-to-one matching under the same setting, with a significant gain of up to 2.5 mAP. Our detector that trains Deformable-DETR with traditional IoU-based label assignment achieved 50.2 COCO mAP within 12 epochs (1x schedule) with ResNet50 backbone, outperforming all existing traditional or transformer-based detectors in this setting. On multiple datasets, schedules, and architectures, we consistently show bipartite matching is unnecessary for performant detection transformers. Furthermore, we attribute the success of detection transformers to their expressive transformer architecture.* DETA overview. Taken from the original paper. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/jozhang97/DETA). 
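As a quick orientation before the resources below, here is a minimal, hedged inference sketch for DETA. The checkpoint name `jozhang97/deta-resnet-50` and the image URL are illustrative assumptions (any DETA checkpoint on the Hub can be substituted); this is not an official example from this page.

```python
# Minimal DETA object-detection sketch; checkpoint name and image URL are illustrative only.
import requests
import torch
from PIL import Image
from transformers import DetaForObjectDetection, DetaImageProcessor

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = DetaImageProcessor.from_pretrained("jozhang97/deta-resnet-50")  # assumed checkpoint
model = DetaForObjectDetection.from_pretrained("jozhang97/deta-resnet-50")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw predictions into (score, label, box) triples above a confidence threshold
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 2), [round(c, 1) for c in box.tolist()])
```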
## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DETA. - Demo notebooks for DETA can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DETA). - See also: [Object detection task guide](../tasks/object_detection) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## DetaConfig [[autodoc]] DetaConfig ## DetaImageProcessor [[autodoc]] DetaImageProcessor - preprocess - post_process_object_detection ## DetaModel [[autodoc]] DetaModel - forward ## DetaForObjectDetection [[autodoc]] DetaForObjectDetection - forward " model_doc/ernie_m.md," # ErnieM ## Overview The ErnieM model was proposed in [ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora](https://arxiv.org/abs/2012.15674) by Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang. The abstract from the paper is the following: *Recent studies have demonstrated that pre-trained cross-lingual models achieve impressive performance in downstream cross-lingual tasks. This improvement benefits from learning a large amount of monolingual and parallel corpora. Although it is generally acknowledged that parallel corpora are critical for improving the model performance, existing methods are often constrained by the size of parallel corpora, especially for lowresource languages. In this paper, we propose ERNIE-M, a new training method that encourages the model to align the representation of multiple languages with monolingual corpora, to overcome the constraint that the parallel corpus size places on the model performance. Our key insight is to integrate back-translation into the pre-training process. We generate pseudo-parallel sentence pairs on a monolingual corpus to enable the learning of semantic alignments between different languages, thereby enhancing the semantic modeling of cross-lingual models. Experimental results show that ERNIE-M outperforms existing cross-lingual models and delivers new state-of-the-art results in various cross-lingual downstream tasks.* This model was contributed by [Susnato Dhar](https://huggingface.co/susnato). The original code can be found [here](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/paddlenlp/transformers/ernie_m). ## Usage tips - Ernie-M is a BERT-like model so it is a stacked Transformer Encoder. - Instead of using MaskedLM for pretraining (like BERT) the authors used two novel techniques: `Cross-attention Masked Language Modeling` and `Back-translation Masked Language Modeling`. For now these two LMHead objectives are not implemented here. - It is a multilingual language model. - Next Sentence Prediction was not used in pretraining process. 
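To make the tips above concrete, below is a minimal, hedged sketch of extracting hidden states with ERNIE-M. The checkpoint name `susnato/ernie-m-base_pytorch` is an assumption based on the contributor mentioned above and should be replaced with whichever ERNIE-M checkpoint you actually use; the tokenizer relies on sentencepiece being installed.

```python
# Minimal ERNIE-M feature-extraction sketch; the checkpoint name is illustrative only.
import torch
from transformers import ErnieMModel, ErnieMTokenizer

checkpoint = "susnato/ernie-m-base_pytorch"  # assumption: substitute your ERNIE-M checkpoint
tokenizer = ErnieMTokenizer.from_pretrained(checkpoint)
model = ErnieMModel.from_pretrained(checkpoint)

# ERNIE-M is multilingual, so inputs in different languages share the same encoder
inputs = tokenizer("ERNIE-M aligns representations across languages.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```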
## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Multiple choice task guide](../tasks/multiple_choice) ## ErnieMConfig [[autodoc]] ErnieMConfig ## ErnieMTokenizer [[autodoc]] ErnieMTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## ErnieMModel [[autodoc]] ErnieMModel - forward ## ErnieMForSequenceClassification [[autodoc]] ErnieMForSequenceClassification - forward ## ErnieMForMultipleChoice [[autodoc]] ErnieMForMultipleChoice - forward ## ErnieMForTokenClassification [[autodoc]] ErnieMForTokenClassification - forward ## ErnieMForQuestionAnswering [[autodoc]] ErnieMForQuestionAnswering - forward ## ErnieMForInformationExtraction [[autodoc]] ErnieMForInformationExtraction - forward " model_doc/marian.md," # MarianMT ## Overview A framework for translation models, using the same models as BART. Translations should be similar, but not identical to output in the test set linked to in each model card. This model was contributed by [sshleifer](https://huggingface.co/sshleifer). ## Implementation Notes - Each model is about 298 MB on disk, there are more than 1,000 models. - The list of supported language pairs can be found [here](https://huggingface.co/Helsinki-NLP). - Models were originally trained by [Jörg Tiedemann](https://researchportal.helsinki.fi/en/persons/j%C3%B6rg-tiedemann) using the [Marian](https://marian-nmt.github.io/) C++ library, which supports fast training and translation. - All models are transformer encoder-decoders with 6 layers in each component. Each model's performance is documented in a model card. - The 80 opus models that require BPE preprocessing are not supported. - The modeling code is the same as [`BartForConditionalGeneration`] with a few minor modifications: - static (sinusoid) positional embeddings (`MarianConfig.static_position_embeddings=True`) - no layernorm_embedding (`MarianConfig.normalize_embedding=False`) - the model starts generating with `pad_token_id` (which has 0 as a token_embedding) as the prefix (Bart uses ``), - Code to bulk convert models can be found in `convert_marian_to_pytorch.py`. ## Naming - All model names use the following format: `Helsinki-NLP/opus-mt-{src}-{tgt}` - The language codes used to name models are inconsistent. Two digit codes can usually be found [here](https://developers.google.com/admin-sdk/directory/v1/languages), three digit codes require googling ""language code {code}"". - Codes formatted like `es_AR` are usually `code_{region}`. That one is Spanish from Argentina. - The models were converted in two stages. The first 1000 models use ISO-639-2 codes to identify languages, the second group use a combination of ISO-639-5 codes and ISO-639-2 codes. ## Examples - Since Marian models are smaller than many other translation models available in the library, they can be useful for fine-tuning experiments and integration tests. - [Fine-tune on GPU](https://github.com/huggingface/transformers/blob/master/examples/legacy/seq2seq/train_distil_marian_enro.sh) ## Multilingual Models - All model names use the following format: `Helsinki-NLP/opus-mt-{src}-{tgt}`: - If a model can output multiple languages, and you should specify a language code by prepending the desired output language to the `src_text`. 
- You can see a models's supported language codes in its model card, under target constituents, like in [opus-mt-en-roa](https://huggingface.co/Helsinki-NLP/opus-mt-en-roa). - Note that if a model is only multilingual on the source side, like `Helsinki-NLP/opus-mt-roa-en`, no language codes are required. New multi-lingual models from the [Tatoeba-Challenge repo](https://github.com/Helsinki-NLP/Tatoeba-Challenge) require 3 character language codes: thon >>> from transformers import MarianMTModel, MarianTokenizer >>> src_text = [ "">>fra<< this is a sentence in english that we want to translate to french"", "">>por<< This should go to portuguese"", "">>esp<< And this to Spanish"", ] >>> model_name = ""Helsinki-NLP/opus-mt-en-roa"" >>> tokenizer = MarianTokenizer.from_pretrained(model_name) >>> print(tokenizer.supported_language_codes) ['>>zlm_Latn<<', '>>mfe<<', '>>hat<<', '>>pap<<', '>>ast<<', '>>cat<<', '>>ind<<', '>>glg<<', '>>wln<<', '>>spa<<', '>>fra<<', '>>ron<<', '>>por<<', '>>ita<<', '>>oci<<', '>>arg<<', '>>min<<'] >>> model = MarianMTModel.from_pretrained(model_name) >>> translated = model.generate(**tokenizer(src_text, return_tensors=""pt"", padding=True)) >>> [tokenizer.decode(t, skip_special_tokens=True) for t in translated] [""c'est une phrase en anglais que nous voulons traduire en français"", 'Isto deve ir para o português.', 'Y esto al español'] Here is the code to see all available pretrained models on the hub: thon from huggingface_hub import list_models model_list = list_models() org = ""Helsinki-NLP"" model_ids = [x.modelId for x in model_list if x.modelId.startswith(org)] suffix = [x.split(""/"")[1] for x in model_ids] old_style_multi_models = [f""{org}/{s}"" for s in suffix if s != s.lower()] ## Old Style Multi-Lingual Models These are the old style multi-lingual models ported from the OPUS-MT-Train repo: and the members of each language group: thon no-style ['Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU', 'Helsinki-NLP/opus-mt-ROMANCE-en', 'Helsinki-NLP/opus-mt-SCANDINAVIA-SCANDINAVIA', 'Helsinki-NLP/opus-mt-de-ZH', 'Helsinki-NLP/opus-mt-en-CELTIC', 'Helsinki-NLP/opus-mt-en-ROMANCE', 'Helsinki-NLP/opus-mt-es-NORWAY', 'Helsinki-NLP/opus-mt-fi-NORWAY', 'Helsinki-NLP/opus-mt-fi-ZH', 'Helsinki-NLP/opus-mt-fi_nb_no_nn_ru_sv_en-SAMI', 'Helsinki-NLP/opus-mt-sv-NORWAY', 'Helsinki-NLP/opus-mt-sv-ZH'] GROUP_MEMBERS = { 'ZH': ['cmn', 'cn', 'yue', 'ze_zh', 'zh_cn', 'zh_CN', 'zh_HK', 'zh_tw', 'zh_TW', 'zh_yue', 'zhs', 'zht', 'zh'], 'ROMANCE': ['fr', 'fr_BE', 'fr_CA', 'fr_FR', 'wa', 'frp', 'oc', 'ca', 'rm', 'lld', 'fur', 'lij', 'lmo', 'es', 'es_AR', 'es_CL', 'es_CO', 'es_CR', 'es_DO', 'es_EC', 'es_ES', 'es_GT', 'es_HN', 'es_MX', 'es_NI', 'es_PA', 'es_PE', 'es_PR', 'es_SV', 'es_UY', 'es_VE', 'pt', 'pt_br', 'pt_BR', 'pt_PT', 'gl', 'lad', 'an', 'mwl', 'it', 'it_IT', 'co', 'nap', 'scn', 'vec', 'sc', 'ro', 'la'], 'NORTH_EU': ['de', 'nl', 'fy', 'af', 'da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'], 'SCANDINAVIA': ['da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'], 'SAMI': ['se', 'sma', 'smj', 'smn', 'sms'], 'NORWAY': ['nb_NO', 'nb', 'nn_NO', 'nn', 'nog', 'no_nb', 'no'], 'CELTIC': ['ga', 'cy', 'br', 'gd', 'kw', 'gv'] } Example of translating english to many romance languages, using old-style 2 character language codes thon >>> from transformers import MarianMTModel, MarianTokenizer >>> src_text = [ "">>fr<< this is a sentence in english that we want to translate to french"", "">>pt<< This should go to portuguese"", "">>es<< And this to Spanish"", ] >>> model_name = 
""Helsinki-NLP/opus-mt-en-ROMANCE"" >>> tokenizer = MarianTokenizer.from_pretrained(model_name) >>> model = MarianMTModel.from_pretrained(model_name) >>> translated = model.generate(**tokenizer(src_text, return_tensors=""pt"", padding=True)) >>> tgt_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated] [""c'est une phrase en anglais que nous voulons traduire en français"", 'Isto deve ir para o português.', 'Y esto al español'] ## Resources - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) - [Causal language modeling task guide](../tasks/language_modeling) ## MarianConfig [[autodoc]] MarianConfig ## MarianTokenizer [[autodoc]] MarianTokenizer - build_inputs_with_special_tokens ## MarianModel [[autodoc]] MarianModel - forward ## MarianMTModel [[autodoc]] MarianMTModel - forward ## MarianForCausalLM [[autodoc]] MarianForCausalLM - forward ## TFMarianModel [[autodoc]] TFMarianModel - call ## TFMarianMTModel [[autodoc]] TFMarianMTModel - call ## FlaxMarianModel [[autodoc]] FlaxMarianModel - __call__ ## FlaxMarianMTModel [[autodoc]] FlaxMarianMTModel - __call__ " model_doc/vision-text-dual-encoder.md," # VisionTextDualEncoder ## Overview The [`VisionTextDualEncoderModel`] can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder (*e.g.* [ViT](vit), [BEiT](beit), [DeiT](deit)) and any pretrained text autoencoding model as the text encoder (*e.g.* [RoBERTa](roberta), [BERT](bert)). Two projection layers are added on top of both the vision and text encoder to project the output embeddings to a shared latent space. The projection layers are randomly initialized so the model should be fine-tuned on a downstream task. This model can be used to align the vision-text embeddings using CLIP like contrastive image-text training and then can be used for zero-shot vision tasks such image-classification or retrieval. In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how leveraging pre-trained (locked/frozen) image and text model for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval. ## VisionTextDualEncoderConfig [[autodoc]] VisionTextDualEncoderConfig ## VisionTextDualEncoderProcessor [[autodoc]] VisionTextDualEncoderProcessor ## VisionTextDualEncoderModel [[autodoc]] VisionTextDualEncoderModel - forward ## FlaxVisionTextDualEncoderModel [[autodoc]] FlaxVisionTextDualEncoderModel - __call__ ## TFVisionTextDualEncoderModel [[autodoc]] TFVisionTextDualEncoderModel - call " model_doc/ctrl.md," # CTRL ## Overview CTRL model was proposed in [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. It's a causal (unidirectional) transformer pre-trained using language modeling on a very large corpus of ~140 GB of text data with the first token reserved as a control code (such as Links, Books, Wikipedia etc.). The abstract from the paper is the following: *Large-scale language models show promising text generation capabilities, but users cannot easily control particular aspects of the generated text. We release CTRL, a 1.63 billion-parameter conditional transformer language model, trained to condition on control codes that govern style, content, and task-specific behavior. 
Control codes were derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised learning while providing more explicit control over text generation. These codes also allow CTRL to predict which parts of the training data are most likely given a sequence. This provides a potential method for analyzing large amounts of data via model-based source attribution.* This model was contributed by [keskarnitishr](https://huggingface.co/keskarnitishr). The original code can be found [here](https://github.com/salesforce/ctrl). ## Usage tips - CTRL makes use of control codes to generate text: it requires generations to be started by certain words, sentences or links to generate coherent text. Refer to the [original implementation](https://github.com/salesforce/ctrl) for more information. - CTRL is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - CTRL was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows CTRL to generate syntactically coherent text as it can be observed in the *run_generation.py* example script. - The PyTorch models can take the `past_key_values` as input, which is the previously computed key/value attention pairs. TensorFlow models accepts `past` as input. Using the `past_key_values` value prevents the model from re-computing pre-computed values in the context of text generation. See the [`forward`](model_doc/ctrl#transformers.CTRLModel.forward) method for more information on the usage of this argument. ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Causal language modeling task guide](../tasks/language_modeling) ## CTRLConfig [[autodoc]] CTRLConfig ## CTRLTokenizer [[autodoc]] CTRLTokenizer - save_vocabulary ## CTRLModel [[autodoc]] CTRLModel - forward ## CTRLLMHeadModel [[autodoc]] CTRLLMHeadModel - forward ## CTRLForSequenceClassification [[autodoc]] CTRLForSequenceClassification - forward ## TFCTRLModel [[autodoc]] TFCTRLModel - call ## TFCTRLLMHeadModel [[autodoc]] TFCTRLLMHeadModel - call ## TFCTRLForSequenceClassification [[autodoc]] TFCTRLForSequenceClassification - call " model_doc/convnextv2.md," # ConvNeXt V2 ## Overview The ConvNeXt V2 model was proposed in [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie. ConvNeXt V2 is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, and a successor of [ConvNeXT](convnext). The abstract from the paper is the following: *Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. 
In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.* ConvNeXt V2 architecture. Taken from the original paper. This model was contributed by [adirik](https://huggingface.co/adirik). The original code can be found [here](https://github.com/facebookresearch/ConvNeXt-V2). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ConvNeXt V2. - [`ConvNextV2ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## ConvNextV2Config [[autodoc]] ConvNextV2Config ## ConvNextV2Model [[autodoc]] ConvNextV2Model - forward ## ConvNextV2ForImageClassification [[autodoc]] ConvNextV2ForImageClassification - forward ## TFConvNextV2Model [[autodoc]] TFConvNextV2Model - call ## TFConvNextV2ForImageClassification [[autodoc]] TFConvNextV2ForImageClassification - call " model_doc/fnet.md," # FNet ## Overview The FNet model was proposed in [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. The model replaces the self-attention layer in a BERT model with a fourier transform which returns only the real parts of the transform. The model is significantly faster than the BERT model because it has fewer parameters and is more memory efficient. The model achieves about 92-97% accuracy of BERT counterparts on GLUE benchmark, and trains much faster than the BERT model. The abstract from the paper is the following: *We show that Transformer encoder architectures can be sped up, with limited accuracy costs, by replacing the self-attention sublayers with simple linear transformations that ""mix"" input tokens. These linear mixers, along with standard nonlinearities in feed-forward layers, prove competent at modeling semantic relationships in several text classification tasks. Most surprisingly, we find that replacing the self-attention sublayer in a Transformer encoder with a standard, unparameterized Fourier Transform achieves 92-97% of the accuracy of BERT counterparts on the GLUE benchmark, but trains 80% faster on GPUs and 70% faster on TPUs at standard 512 input lengths. 
At longer input lengths, our FNet model is significantly faster: when compared to the ""efficient"" Transformers on the Long Range Arena benchmark, FNet matches the accuracy of the most accurate models, while outpacing the fastest models across all sequence lengths on GPUs (and across relatively shorter lengths on TPUs). Finally, FNet has a light memory footprint and is particularly efficient at smaller model sizes; for a fixed speed and accuracy budget, small FNet models outperform Transformer counterparts.* This model was contributed by [gchhablani](https://huggingface.co/gchhablani). The original code can be found [here](https://github.com/google-research/google-research/tree/master/f_net). ## Usage tips The model was trained without an attention mask as it is based on Fourier Transform. The model was trained with maximum sequence length 512 which includes pad tokens. Hence, it is highly recommended to use the same maximum sequence length for fine-tuning and inference. ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## FNetConfig [[autodoc]] FNetConfig ## FNetTokenizer [[autodoc]] FNetTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## FNetTokenizerFast [[autodoc]] FNetTokenizerFast ## FNetModel [[autodoc]] FNetModel - forward ## FNetForPreTraining [[autodoc]] FNetForPreTraining - forward ## FNetForMaskedLM [[autodoc]] FNetForMaskedLM - forward ## FNetForNextSentencePrediction [[autodoc]] FNetForNextSentencePrediction - forward ## FNetForSequenceClassification [[autodoc]] FNetForSequenceClassification - forward ## FNetForMultipleChoice [[autodoc]] FNetForMultipleChoice - forward ## FNetForTokenClassification [[autodoc]] FNetForTokenClassification - forward ## FNetForQuestionAnswering [[autodoc]] FNetForQuestionAnswering - forward " model_doc/pvt.md," # Pyramid Vision Transformer (PVT) ## Overview The PVT model was proposed in [Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions](https://arxiv.org/abs/2102.12122) by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao. The PVT is a type of vision transformer that utilizes a pyramid structure to make it an effective backbone for dense prediction tasks. Specifically it allows for more fine-grained inputs (4 x 4 pixels per patch) to be used, while simultaneously shrinking the sequence length of the Transformer as it deepens - reducing the computational cost. Additionally, a spatial-reduction attention (SRA) layer is used to further reduce the resource consumption when learning high-resolution features. The abstract from the paper is the following: *Although convolutional neural networks (CNNs) have achieved great success in computer vision, this work investigates a simpler, convolution-free backbone network useful for many dense prediction tasks. Unlike the recently proposed Vision Transformer (ViT) that was designed for image classification specifically, we introduce the Pyramid Vision Transformer (PVT), which overcomes the difficulties of porting Transformer to various dense prediction tasks. PVT has several merits compared to current state of the arts. 
Different from ViT that typically yields low resolution outputs and incurs high computational and memory costs, PVT not only can be trained on dense partitions of an image to achieve high output resolution, which is important for dense prediction, but also uses a progressive shrinking pyramid to reduce the computations of large feature maps. PVT inherits the advantages of both CNN and Transformer, making it a unified backbone for various vision tasks without convolutions, where it can be used as a direct replacement for CNN backbones. We validate PVT through extensive experiments, showing that it boosts the performance of many downstream tasks, including object detection, instance and semantic segmentation. For example, with a comparable number of parameters, PVT+RetinaNet achieves 40.4 AP on the COCO dataset, surpassing ResNet50+RetinNet (36.3 AP) by 4.1 absolute AP (see Figure 2). We hope that PVT could serve as an alternative and useful backbone for pixel-level predictions and facilitate future research.* This model was contributed by [Xrenya](>> from transformers import LukeTokenizer, LukeModel, LukeForEntityPairClassification >>> model = LukeModel.from_pretrained(""studio-ousia/luke-base"") >>> tokenizer = LukeTokenizer.from_pretrained(""studio-ousia/luke-base"") # Example 1: Computing the contextualized entity representation corresponding to the entity mention ""Beyoncé"" >>> text = ""Beyoncé lives in Los Angeles."" >>> entity_spans = [(0, 7)] # character-based entity span corresponding to ""Beyoncé"" >>> inputs = tokenizer(text, entity_spans=entity_spans, add_prefix_space=True, return_tensors=""pt"") >>> outputs = model(**inputs) >>> word_last_hidden_state = outputs.last_hidden_state >>> entity_last_hidden_state = outputs.entity_last_hidden_state # Example 2: Inputting Wikipedia entities to obtain enriched contextualized representations >>> entities = [ ""Beyoncé"", ""Los Angeles"", ] # Wikipedia entity titles corresponding to the entity mentions ""Beyoncé"" and ""Los Angeles"" >>> entity_spans = [(0, 7), (17, 28)] # character-based entity spans corresponding to ""Beyoncé"" and ""Los Angeles"" >>> inputs = tokenizer(text, entities=entities, entity_spans=entity_spans, add_prefix_space=True, return_tensors=""pt"") >>> outputs = model(**inputs) >>> word_last_hidden_state = outputs.last_hidden_state >>> entity_last_hidden_state = outputs.entity_last_hidden_state # Example 3: Classifying the relationship between two entities using LukeForEntityPairClassification head model >>> model = LukeForEntityPairClassification.from_pretrained(""studio-ousia/luke-large-finetuned-tacred"") >>> tokenizer = LukeTokenizer.from_pretrained(""studio-ousia/luke-large-finetuned-tacred"") >>> entity_spans = [(0, 7), (17, 28)] # character-based entity spans corresponding to ""Beyoncé"" and ""Los Angeles"" >>> inputs = tokenizer(text, entity_spans=entity_spans, return_tensors=""pt"") >>> outputs = model(**inputs) >>> logits = outputs.logits >>> predicted_class_idx = int(logits[0].argmax()) >>> print(""Predicted class:"", model.config.id2label[predicted_class_idx]) ## Resources - [A demo notebook on how to fine-tune [`LukeForEntityPairClassification`] for relation classification](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LUKE) - [Notebooks showcasing how you to reproduce the results as reported in the paper with the HuggingFace implementation of LUKE](https://github.com/studio-ousia/luke/tree/master/notebooks) - [Text classification task guide](../tasks/sequence_classification) - [Token 
classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## LukeConfig [[autodoc]] LukeConfig ## LukeTokenizer [[autodoc]] LukeTokenizer - __call__ - save_vocabulary ## LukeModel [[autodoc]] LukeModel - forward ## LukeForMaskedLM [[autodoc]] LukeForMaskedLM - forward ## LukeForEntityClassification [[autodoc]] LukeForEntityClassification - forward ## LukeForEntityPairClassification [[autodoc]] LukeForEntityPairClassification - forward ## LukeForEntitySpanClassification [[autodoc]] LukeForEntitySpanClassification - forward ## LukeForSequenceClassification [[autodoc]] LukeForSequenceClassification - forward ## LukeForMultipleChoice [[autodoc]] LukeForMultipleChoice - forward ## LukeForTokenClassification [[autodoc]] LukeForTokenClassification - forward ## LukeForQuestionAnswering [[autodoc]] LukeForQuestionAnswering - forward " model_doc/gpt_neox.md," # GPT-NeoX ## Overview We introduce GPT-NeoX-20B, a 20 billion parameter autoregressive language model trained on the Pile, whose weights will be made freely and openly available to the public through a permissive license. It is, to the best of our knowledge, the largest dense autoregressive model that has publicly available weights at the time of submission. In this work, we describe GPT-NeoX-20B's architecture and training and evaluate its performance on a range of language-understanding, mathematics, and knowledge-based tasks. We find that GPT-NeoX-20B is a particularly powerful few-shot reasoner and gains far more in performance when evaluated five-shot than similarly sized GPT-3 and FairSeq models. We open-source the training and evaluation code, as well as the model weights, at [https://github.com/EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox). Development of the model was led by Sid Black, Stella Biderman and Eric Hallahan, and the model was trained with the generous support of [CoreWeave](https://www.coreweave.com/). GPT-NeoX-20B was trained with fp16, thus it is recommended to initialize the model as follows: thon model = GPTNeoXForCausalLM.from_pretrained(""EleutherAI/gpt-neox-20b"").half().cuda() GPT-NeoX-20B also has a different tokenizer from the one used in GPT-J-6B and GPT-Neo. The new tokenizer allocates additional tokens to whitespace characters, making the model more suitable for certain tasks like code generation. ## Usage example The `generate()` method can be used to generate text using the GPT-NeoX model. 
thon >>> from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast >>> model = GPTNeoXForCausalLM.from_pretrained(""EleutherAI/gpt-neox-20b"") >>> tokenizer = GPTNeoXTokenizerFast.from_pretrained(""EleutherAI/gpt-neox-20b"") >>> prompt = ""GPTNeoX20B is a 20B-parameter autoregressive Transformer model developed by EleutherAI."" >>> input_ids = tokenizer(prompt, return_tensors=""pt"").input_ids >>> gen_tokens = model.generate( input_ids, do_sample=True, temperature=0.9, max_length=100, ) >>> gen_text = tokenizer.batch_decode(gen_tokens)[0] ## Resources - [Causal language modeling task guide](../tasks/language_modeling) ## GPTNeoXConfig [[autodoc]] GPTNeoXConfig ## GPTNeoXTokenizerFast [[autodoc]] GPTNeoXTokenizerFast ## GPTNeoXModel [[autodoc]] GPTNeoXModel - forward ## GPTNeoXForCausalLM [[autodoc]] GPTNeoXForCausalLM - forward ## GPTNeoXForQuestionAnswering [[autodoc]] GPTNeoXForQuestionAnswering - forward ## GPTNeoXForSequenceClassification [[autodoc]] GPTNeoXForSequenceClassification - forward ## GPTNeoXForTokenClassification [[autodoc]] GPTNeoXForTokenClassification - forward " model_doc/t5v1.1.md," # T5v1.1 ## Overview T5v1.1 was released in the [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) repository by Colin Raffel et al. It's an improved version of the original T5 model. This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511). ## Usage tips One can directly plug in the weights of T5v1.1 into a T5 model, like so: thon >>> from transformers import T5ForConditionalGeneration >>> model = T5ForConditionalGeneration.from_pretrained(""google/t5-v1_1-base"") T5 Version 1.1 includes the following improvements compared to the original T5 model: - GEGLU activation in the feed-forward hidden layer, rather than ReLU. See [this paper](https://arxiv.org/abs/2002.05202). - Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning. - Pre-trained on C4 only without mixing in the downstream tasks. - No parameter sharing between the embedding and classifier layer. - ""xl"" and ""xxl"" replace ""3B"" and ""11B"". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`. Note: T5 Version 1.1 was only pre-trained on [C4](https://huggingface.co/datasets/c4) excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5 model. Since t5v1.1 was pre-trained unsupervisedly, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix. Google has released the following variants: - [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) - [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) - [google/t5-v1_1-large](https://huggingface.co/google/t5-v1_1-large) - [google/t5-v1_1-xl](https://huggingface.co/google/t5-v1_1-xl) - [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl). Refer to [T5's documentation page](t5) for all API reference, tips, code examples and notebooks. 
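In addition to the examples on the T5 documentation page, the snippet below is a minimal, hedged sketch of a single fine-tuning step without a task prefix (per the note above): since T5v1.1 was pre-trained unsupervisedly, it has to be fine-tuned before use. The toy input/target pair and the use of `AutoTokenizer` are illustrative assumptions, not an official recipe.

```python
# Minimal single-task fine-tuning step for T5v1.1 (no task prefix); the data is a toy placeholder.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-base")
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")

inputs = tokenizer("Studies have shown that owning a dog is good for you.", return_tensors="pt")
labels = tokenizer("Owning a dog is good for you.", return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss  # with padded batches, pad-token labels should be set to -100
loss.backward()  # an optimizer step would follow in a real training loop
```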
" model_doc/wav2vec2_phoneme.md," # Wav2Vec2Phoneme ## Overview The Wav2Vec2Phoneme model was proposed in [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition (Xu et al., 2021](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli. The abstract from the paper is the following: *Recent progress in self-training, self-supervised pretraining and unsupervised learning enabled well performing speech recognition systems without any labeled data. However, in many cases there is labeled data available for related languages which is not utilized by these methods. This paper extends previous work on zero-shot cross-lingual transfer learning by fine-tuning a multilingually pretrained wav2vec 2.0 model to transcribe unseen languages. This is done by mapping phonemes of the training languages to the target language using articulatory features. Experiments show that this simple method significantly outperforms prior work which introduced task-specific architectures and used only part of a monolingually pretrained model.* Relevant checkpoints can be found under https://huggingface.co/models?other=phoneme-recognition. This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten) The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/fairseq/models/wav2vec). ## Usage tips - Wav2Vec2Phoneme uses the exact same architecture as Wav2Vec2 - Wav2Vec2Phoneme is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. - Wav2Vec2Phoneme model was trained using connectionist temporal classification (CTC) so the model output has to be decoded using [`Wav2Vec2PhonemeCTCTokenizer`]. - Wav2Vec2Phoneme can be fine-tuned on multiple language at once and decode unseen languages in a single forward pass to a sequence of phonemes - By default, the model outputs a sequence of phonemes. In order to transform the phonemes to a sequence of words one should make use of a dictionary and language model. Wav2Vec2Phoneme's architecture is based on the Wav2Vec2 model, for API reference, check out [`Wav2Vec2`](wav2vec2)'s documentation page except for the tokenizer. ## Wav2Vec2PhonemeCTCTokenizer [[autodoc]] Wav2Vec2PhonemeCTCTokenizer - __call__ - batch_decode - decode - phonemize " model_doc/pix2struct.md," # Pix2Struct ## Overview The Pix2Struct model was proposed in [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347) by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova. The abstract from the paper is the following: > Visually-situated language is ubiquitous -- sources range from textbooks with diagrams to web pages with images and tables, to mobile apps with buttons and forms. Perhaps due to this diversity, previous work has typically relied on domain-specific recipes with limited sharing of the underlying data, model architectures, and objectives. We present Pix2Struct, a pretrained image-to-text model for purely visual language understanding, which can be finetuned on tasks containing visually-situated language. Pix2Struct is pretrained by learning to parse masked screenshots of web pages into simplified HTML. The web, with its richness of visual elements cleanly reflected in the HTML structure, provides a large source of pretraining data well suited to the diversity of downstream tasks. 
Intuitively, this objective subsumes common pretraining signals such as OCR, language modeling, image captioning. In addition to the novel pretraining strategy, we introduce a variable-resolution input representation and a more flexible integration of language and vision inputs, where language prompts such as questions are rendered directly on top of the input image. For the first time, we show that a single pretrained model can achieve state-of-the-art results in six out of nine tasks across four domains: documents, illustrations, user interfaces, and natural images. Tips: Pix2Struct has been fine tuned on a variety of tasks and datasets, ranging from image captioning, visual question answering (VQA) over different inputs (books, charts, science diagrams), captioning UI components etc. The full list can be found in Table 1 of the paper. We therefore advise you to use these models for the tasks they have been fine tuned on. For instance, if you want to use Pix2Struct for UI captioning, you should use the model fine tuned on the UI dataset. If you want to use Pix2Struct for image captioning, you should use the model fine tuned on the natural images captioning dataset and so on. If you want to use the model to perform conditional text captioning, make sure to use the processor with `add_special_tokens=False`. This model was contributed by [ybelkada](https://huggingface.co/ybelkada). The original code can be found [here](https://github.com/google-research/pix2struct). ## Resources - [Fine-tuning Notebook](https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_pix2struct.ipynb) - [All models](https://huggingface.co/models?search=pix2struct) ## Pix2StructConfig [[autodoc]] Pix2StructConfig - from_text_vision_configs ## Pix2StructTextConfig [[autodoc]] Pix2StructTextConfig ## Pix2StructVisionConfig [[autodoc]] Pix2StructVisionConfig ## Pix2StructProcessor [[autodoc]] Pix2StructProcessor ## Pix2StructImageProcessor [[autodoc]] Pix2StructImageProcessor - preprocess ## Pix2StructTextModel [[autodoc]] Pix2StructTextModel - forward ## Pix2StructVisionModel [[autodoc]] Pix2StructVisionModel - forward ## Pix2StructForConditionalGeneration [[autodoc]] Pix2StructForConditionalGeneration - forward " model_doc/transfo-xl.md," # Transformer XL This model is in maintenance mode only, so we won't accept any new PRs changing its code. This model was deprecated due to security issues linked to `pickle.load`. We recommend switching to more recent models for improved security. In case you would still like to use `TransfoXL` in your experiments, we recommend using the [Hub checkpoint](https://huggingface.co/transfo-xl-wt103) with a specific revision to ensure you are downloading safe files from the Hub: from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel checkpoint = 'transfo-xl-wt103' revision = '40a186da79458c9f9de846edfaea79c412137f97' tokenizer = TransfoXLTokenizer.from_pretrained(checkpoint, revision=revision) model = TransfoXLLMHeadModel.from_pretrained(checkpoint, revision=revision) If you run into any issues running this model, please reinstall the last version that supported this model: v4.35.0. You can do so by running the following command: `pip install -U transformers==4.35.0`. ## Overview The Transformer-XL model was proposed in [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. 
Le, Ruslan Salakhutdinov. It's a causal (uni-directional) transformer with relative positioning (sinusoïdal) embeddings which can reuse previously computed hidden-states to attend to longer context (memory). This model also uses adaptive softmax inputs and outputs (tied). The abstract from the paper is the following: *Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the setting of language modeling. We propose a novel neural architecture Transformer-XL that enables learning dependency beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and 450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results of bpc/perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably coherent, novel text articles with thousands of tokens.* This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/kimiyoung/transformer-xl). ## Usage tips - Transformer-XL uses relative sinusoidal positional embeddings. Padding can be done on the left or on the right. The original implementation trains on SQuAD with padding on the left, therefore the padding defaults are set to left. - Transformer-XL is one of the few models that has no sequence length limit. - Same as a regular GPT model, but introduces a recurrence mechanism for two consecutive segments (similar to a regular RNNs with two consecutive inputs). In this context, a segment is a number of consecutive tokens (for instance 512) that may span across multiple documents, and segments are fed in order to the model. - Basically, the hidden states of the previous segment are concatenated to the current input to compute the attention scores. This allows the model to pay attention to information that was in the previous segment as well as the current one. By stacking multiple attention layers, the receptive field can be increased to multiple previous segments. - This changes the positional embeddings to positional relative embeddings (as the regular positional embeddings would give the same results in the current input and the current hidden state at a given position) and needs to make some adjustments in the way attention scores are computed. 
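The recurrence mechanism described in the tips above can be exercised directly by passing the `mems` returned for one segment into the forward pass of the next. The sketch below reuses the pinned checkpoint and revision from the deprecation notice at the top of this page and is only a minimal illustration (the segment texts are placeholders); the tokenizer may additionally require the `sacremoses` package.

```python
# Minimal sketch of segment-level recurrence: hidden states ("mems") from one segment
# are fed back so the next segment can attend to the previous one.
import torch
from transformers import TransfoXLLMHeadModel, TransfoXLTokenizer

checkpoint = "transfo-xl-wt103"
revision = "40a186da79458c9f9de846edfaea79c412137f97"  # pinned revision from the notice above
tokenizer = TransfoXLTokenizer.from_pretrained(checkpoint, revision=revision)
model = TransfoXLLMHeadModel.from_pretrained(checkpoint, revision=revision)

segments = ["The quick brown fox", "jumps over the lazy dog"]  # placeholder segments
mems = None  # no memory before the first segment
for text in segments:
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        outputs = model(input_ids, mems=mems)
    mems = outputs.mems  # carry the cached hidden states over to the next segment
```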
TransformerXL does **not** work with *torch.nn.DataParallel* due to a bug in PyTorch, see [issue #36035](https://github.com/pytorch/pytorch/issues/36035) ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Causal language modeling task guide](../tasks/language_modeling) ## TransfoXLConfig [[autodoc]] TransfoXLConfig ## TransfoXLTokenizer [[autodoc]] TransfoXLTokenizer - save_vocabulary ## TransfoXL specific outputs [[autodoc]] models.deprecated.transfo_xl.modeling_transfo_xl.TransfoXLModelOutput [[autodoc]] models.deprecated.transfo_xl.modeling_transfo_xl.TransfoXLLMHeadModelOutput [[autodoc]] models.deprecated.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLModelOutput [[autodoc]] models.deprecated.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLLMHeadModelOutput ## TransfoXLModel [[autodoc]] TransfoXLModel - forward ## TransfoXLLMHeadModel [[autodoc]] TransfoXLLMHeadModel - forward ## TransfoXLForSequenceClassification [[autodoc]] TransfoXLForSequenceClassification - forward ## TFTransfoXLModel [[autodoc]] TFTransfoXLModel - call ## TFTransfoXLLMHeadModel [[autodoc]] TFTransfoXLLMHeadModel - call ## TFTransfoXLForSequenceClassification [[autodoc]] TFTransfoXLForSequenceClassification - call ## Internal Layers [[autodoc]] AdaptiveEmbedding [[autodoc]] TFAdaptiveEmbedding " model_doc/gpt-sw3.md," # GPT-Sw3 ## Overview The GPT-Sw3 model was first proposed in [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren. Since that first paper the authors have extended their work and trained new models on their new 1.2TB corpora named The Nordic Pile. GPT-Sw3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-Sw3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation. This model was contributed by [AI Sweden](https://huggingface.co/AI-Sweden). ## Usage example thon >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained(""AI-Sweden/gpt-sw3-356m"") >>> model = AutoModelForCausalLM.from_pretrained(""AI-Sweden/gpt-sw3-356m"") >>> input_ids = tokenizer(""Träd är fina för att"", return_tensors=""pt"")[""input_ids""] >>> generated_token_ids = model.generate(inputs=input_ids, max_new_tokens=10, do_sample=True)[0] >>> print(tokenizer.decode(generated_token_ids)) Träd är fina för att de är färgstarka. Men ibland är det fint ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Causal language modeling task guide](../tasks/language_modeling) The implementation uses the `GPT2Model` coupled with our `GPTSw3Tokenizer`. Refer to [GPT2Model documentation](gpt2) for API reference and examples. 
Note that sentencepiece is required to use our tokenizer and can be installed with `pip install transformers[sentencepiece]` or `pip install sentencepiece` ## GPTSw3Tokenizer [[autodoc]] GPTSw3Tokenizer - save_vocabulary " model_doc/visual_bert.md," # VisualBERT ## Overview The VisualBERT model was proposed in [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. VisualBERT is a neural network trained on a variety of (image, text) pairs. The abstract from the paper is the following: *We propose VisualBERT, a simple and flexible framework for modeling a broad range of vision-and-language tasks. VisualBERT consists of a stack of Transformer layers that implicitly align elements of an input text and regions in an associated input image with self-attention. We further propose two visually-grounded language model objectives for pre-training VisualBERT on image caption data. Experiments on four vision-and-language tasks including VQA, VCR, NLVR2, and Flickr30K show that VisualBERT outperforms or rivals with state-of-the-art models while being significantly simpler. Further analysis demonstrates that VisualBERT can ground elements of language to image regions without any explicit supervision and is even sensitive to syntactic relationships, tracking, for example, associations between verbs and image regions corresponding to their arguments.* This model was contributed by [gchhablani](https://huggingface.co/gchhablani). The original code can be found [here](https://github.com/uclanlp/visualbert). ## Usage tips 1. Most of the checkpoints provided work with the [`VisualBertForPreTraining`] configuration. Other checkpoints provided are the fine-tuned checkpoints for down-stream tasks - VQA ('visualbert-vqa'), VCR ('visualbert-vcr'), NLVR2 ('visualbert-nlvr2'). Hence, if you are not working on these downstream tasks, it is recommended that you use the pretrained checkpoints. 2. For the VCR task, the authors use a fine-tuned detector for generating visual embeddings, for all the checkpoints. We do not provide the detector and its weights as a part of the package, but it will be available in the research projects, and the states can be loaded directly into the detector provided. VisualBERT is a multi-modal vision and language model. It can be used for visual question answering, multiple choice, visual reasoning and region-to-phrase correspondence tasks. VisualBERT uses a BERT-like transformer to prepare embeddings for image-text pairs. Both the text and visual features are then projected to a latent space with identical dimension. To feed images to the model, each image is passed through a pre-trained object detector and the regions and the bounding boxes are extracted. The authors use the features generated after passing these regions through a pre-trained CNN like ResNet as visual embeddings. They also add absolute position embeddings, and feed the resulting sequence of vectors to a standard BERT model. The text input is concatenated in the front of the visual embeddings in the embedding layer, and is expected to be bound by [CLS] and a [SEP] tokens, as in BERT. The segment IDs must also be set appropriately for the textual and visual parts. The [`BertTokenizer`] is used to encode the text. A custom detector/image processor must be used to get the visual embeddings. 
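As a purely illustrative sketch (not from the original documentation), the snippet below uses random tensors in place of real detector features to show the shapes and dtypes [`VisualBertModel`] expects for the visual inputs; in practice `visual_embeds` would come from a pre-trained object detector as described above.

```python
import torch
from transformers import BertTokenizer, VisualBertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = VisualBertModel.from_pretrained('uclanlp/visualbert-vqa-coco-pre')

inputs = tokenizer('What is the man eating?', return_tensors='pt')

num_regions = 36  # the number of detected regions is arbitrary here
dim = model.config.visual_embedding_dim
visual_embeds = torch.randn(1, num_regions, dim)  # stand-in for detector features

inputs.update(
    {
        'visual_embeds': visual_embeds,
        'visual_token_type_ids': torch.ones(visual_embeds.shape[:-1], dtype=torch.long),
        'visual_attention_mask': torch.ones(visual_embeds.shape[:-1], dtype=torch.float),
    }
)
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, text_length + num_regions, hidden_size)
```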
The following example notebooks show how to use VisualBERT with Detectron-like models: - [VisualBERT VQA demo notebook](https://github.com/huggingface/transformers/tree/main/examples/research_projects/visual_bert) : This notebook contains an example on VisualBERT VQA. - [Generate Embeddings for VisualBERT (Colab Notebook)](https://colab.research.google.com/drive/1bLGxKdldwqnMVA5x4neY7-l_8fKGWQYI?usp=sharing) : This notebook contains an example on how to generate visual embeddings. The following example shows how to get the last hidden state using [`VisualBertModel`]: thon >>> import torch >>> from transformers import BertTokenizer, VisualBertModel >>> model = VisualBertModel.from_pretrained(""uclanlp/visualbert-vqa-coco-pre"") >>> tokenizer = BertTokenizer.from_pretrained(""bert-base-uncased"") >>> inputs = tokenizer(""What is the man eating?"", return_tensors=""pt"") >>> # this is a custom function that returns the visual embeddings given the image path >>> visual_embeds = get_visual_embeddings(image_path) >>> visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long) >>> visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float) >>> inputs.update( { ""visual_embeds"": visual_embeds, ""visual_token_type_ids"": visual_token_type_ids, ""visual_attention_mask"": visual_attention_mask, } ) >>> outputs = model(**inputs) >>> last_hidden_state = outputs.last_hidden_state ## VisualBertConfig [[autodoc]] VisualBertConfig ## VisualBertModel [[autodoc]] VisualBertModel - forward ## VisualBertForPreTraining [[autodoc]] VisualBertForPreTraining - forward ## VisualBertForQuestionAnswering [[autodoc]] VisualBertForQuestionAnswering - forward ## VisualBertForMultipleChoice [[autodoc]] VisualBertForMultipleChoice - forward ## VisualBertForVisualReasoning [[autodoc]] VisualBertForVisualReasoning - forward ## VisualBertForRegionToPhraseAlignment [[autodoc]] VisualBertForRegionToPhraseAlignment - forward " model_doc/dinov2.md," # DINOv2 ## Overview The DINOv2 model was proposed in [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski. DINOv2 is an upgrade of [DINO](https://arxiv.org/abs/2104.14294), a self-supervised method applied on [Vision Transformers](vit). This method enables all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. The abstract from the paper is the following: *The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision. These models could greatly simplify the use of images in any system by producing all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources. We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size. 
Most of the technical contributions aim at accelerating and stabilizing the training at scale. In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature. In terms of models, we train a ViT model (Dosovitskiy et al., 2020) with 1B parameters and distill it into a series of smaller models that surpass the best available all-purpose features, OpenCLIP (Ilharco et al., 2021) on most of the benchmarks at image and pixel levels.* This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/facebookresearch/dinov2). ## Usage tips The model can be traced using `torch.jit.trace` which leverages JIT compilation to optimize the model making it faster to run. Note this still produces some mis-matched elements and the difference between the original model and the traced model is of the order of 1e-4. thon import torch from transformers import AutoImageProcessor, AutoModel from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) processor = AutoImageProcessor.from_pretrained('facebook/dinov2-base') model = AutoModel.from_pretrained('facebook/dinov2-base') inputs = processor(images=image, return_tensors=""pt"") outputs = model(**inputs) last_hidden_states = outputs[0] # We have to force return_dict=False for tracing model.config.return_dict = False with torch.no_grad(): traced_model = torch.jit.trace(model, [inputs.pixel_values]) traced_outputs = traced_model(inputs.pixel_values) print((last_hidden_states - traced_outputs[0]).abs().max()) ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DPT. - Demo notebooks for DINOv2 can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DINOv2). 🌎 - [`Dinov2ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## Dinov2Config [[autodoc]] Dinov2Config ## Dinov2Model [[autodoc]] Dinov2Model - forward ## Dinov2ForImageClassification [[autodoc]] Dinov2ForImageClassification - forward " model_doc/canine.md," # CANINE ## Overview The CANINE model was proposed in [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. It's among the first papers that trains a Transformer without using an explicit tokenization step (such as Byte Pair Encoding (BPE), WordPiece or SentencePiece). Instead, the model is trained directly at a Unicode character-level. Training at a character-level inevitably comes with a longer sequence length, which CANINE solves with an efficient downsampling strategy, before applying a deep Transformer encoder. 
The abstract from the paper is the following: *Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models still require an explicit tokenization step. While recent tokenization approaches based on data-derived subword lexicons are less brittle than manually engineered tokenizers, these techniques are not equally suited to all languages, and the use of any fixed vocabulary may limit a model's ability to adapt. In this paper, we present CANINE, a neural encoder that operates directly on character sequences, without explicit tokenization or vocabulary, and a pre-training strategy that operates either directly on characters or optionally uses subwords as a soft inductive bias. To use its finer-grained input effectively and efficiently, CANINE combines downsampling, which reduces the input sequence length, with a deep transformer stack, which encodes context. CANINE outperforms a comparable mBERT model by 2.8 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28% fewer model parameters.* This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/google-research/language/tree/master/language/canine). ## Usage tips - CANINE uses no less than 3 Transformer encoders internally: 2 ""shallow"" encoders (which only consist of a single layer) and 1 ""deep"" encoder (which is a regular BERT encoder). First, a ""shallow"" encoder is used to contextualize the character embeddings, using local attention. Next, after downsampling, a ""deep"" encoder is applied. Finally, after upsampling, a ""shallow"" encoder is used to create the final character embeddings. Details regarding up- and downsampling can be found in the paper. - CANINE uses a max sequence length of 2048 characters by default. One can use [`CanineTokenizer`] to prepare text for the model. - Classification can be done by placing a linear layer on top of the final hidden state of the special [CLS] token (which has a predefined Unicode code point). For token classification tasks however, the downsampled sequence of tokens needs to be upsampled again to match the length of the original character sequence (which is 2048). The details for this can be found in the paper. Model checkpoints: - [google/canine-c](https://huggingface.co/google/canine-c): Pre-trained with autoregressive character loss, 12-layer, 768-hidden, 12-heads, 121M parameters (size ~500 MB). - [google/canine-s](https://huggingface.co/google/canine-s): Pre-trained with subword loss, 12-layer, 768-hidden, 12-heads, 121M parameters (size ~500 MB). 
## Usage example CANINE works on raw characters, so it can be used **without a tokenizer**: thon >>> from transformers import CanineModel >>> import torch >>> model = CanineModel.from_pretrained(""google/canine-c"") # model pre-trained with autoregressive character loss >>> text = ""hello world"" >>> # use Python's built-in ord() function to turn each character into its unicode code point id >>> input_ids = torch.tensor([[ord(char) for char in text]]) >>> outputs = model(input_ids) # forward pass >>> pooled_output = outputs.pooler_output >>> sequence_output = outputs.last_hidden_state For batched inference and training, it is however recommended to make use of the tokenizer (to pad/truncate all sequences to the same length): thon >>> from transformers import CanineTokenizer, CanineModel >>> model = CanineModel.from_pretrained(""google/canine-c"") >>> tokenizer = CanineTokenizer.from_pretrained(""google/canine-c"") >>> inputs = [""Life is like a box of chocolates."", ""You never know what you gonna get.""] >>> encoding = tokenizer(inputs, padding=""longest"", truncation=True, return_tensors=""pt"") >>> outputs = model(**encoding) # forward pass >>> pooled_output = outputs.pooler_output >>> sequence_output = outputs.last_hidden_state ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Multiple choice task guide](../tasks/multiple_choice) ## CanineConfig [[autodoc]] CanineConfig ## CanineTokenizer [[autodoc]] CanineTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences ## CANINE specific outputs [[autodoc]] models.canine.modeling_canine.CanineModelOutputWithPooling ## CanineModel [[autodoc]] CanineModel - forward ## CanineForSequenceClassification [[autodoc]] CanineForSequenceClassification - forward ## CanineForMultipleChoice [[autodoc]] CanineForMultipleChoice - forward ## CanineForTokenClassification [[autodoc]] CanineForTokenClassification - forward ## CanineForQuestionAnswering [[autodoc]] CanineForQuestionAnswering - forward " model_doc/upernet.md," # UPerNet ## Overview The UPerNet model was proposed in [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun. UPerNet is a general framework to effectively segment a wide range of concepts from images, leveraging any vision backbone like [ConvNeXt](convnext) or [Swin](swin). The abstract from the paper is the following: *Humans recognize the visual world at multiple levels: we effortlessly categorize scenes and detect objects inside, while also identifying the textures and surfaces of the objects along with their different compositional parts. In this paper, we study a new task called Unified Perceptual Parsing, which requires the machine vision systems to recognize as many visual concepts as possible from a given image. A multi-task framework called UPerNet and a training strategy are developed to learn from heterogeneous image annotations. We benchmark our framework on Unified Perceptual Parsing and show that it is able to effectively segment a wide range of concepts from images. The trained networks are further applied to discover visual knowledge in natural scenes.* UPerNet framework. Taken from the original paper. This model was contributed by [nielsr](https://huggingface.co/nielsr). 
The original code is based on OpenMMLab's mmsegmentation [here](https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/models/decode_heads/uper_head.py). ## Usage examples UPerNet is a general framework for semantic segmentation. It can be used with any vision backbone, like so: from transformers import SwinConfig, UperNetConfig, UperNetForSemanticSegmentation backbone_config = SwinConfig(out_features=[""stage1"", ""stage2"", ""stage3"", ""stage4""]) config = UperNetConfig(backbone_config=backbone_config) model = UperNetForSemanticSegmentation(config) To use another vision backbone, like [ConvNeXt](convnext), simply instantiate the model with the appropriate backbone: from transformers import ConvNextConfig, UperNetConfig, UperNetForSemanticSegmentation backbone_config = ConvNextConfig(out_features=[""stage1"", ""stage2"", ""stage3"", ""stage4""]) config = UperNetConfig(backbone_config=backbone_config) model = UperNetForSemanticSegmentation(config) Note that this will randomly initialize all the weights of the model. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with UPerNet. - Demo notebooks for UPerNet can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/UPerNet). - [`UperNetForSemanticSegmentation`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/semantic-segmentation) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/semantic_segmentation.ipynb). - See also: [Semantic segmentation task guide](../tasks/semantic_segmentation) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## UperNetConfig [[autodoc]] UperNetConfig ## UperNetForSemanticSegmentation [[autodoc]] UperNetForSemanticSegmentation - forward" model_doc/phi.md," # Phi ## Overview The Phi-1 model was proposed in [Textbooks Are All You Need](https://arxiv.org/abs/2306.11644) by Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee and Yuanzhi Li. The Phi-1.5 model was proposed in [Textbooks Are All You Need II: phi-1.5 technical report](https://arxiv.org/abs/2309.05463) by Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar and Yin Tat Lee. ### Summary In Phi-1 and Phi-1.5 papers, the authors showed how important the quality of the data is in training relative to the model size. They selected high quality ""textbook"" data alongside with synthetically generated data for training their small sized Transformer based model Phi-1 with 1.3B parameters. Despite this small scale, phi-1 attains pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. They follow the same strategy for Phi-1.5 and created another 1.3B parameter model with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs. Phi-1.5 exhibits many of the traits of much larger LLMs such as the ability to “think step by step” or perform some rudimentary in-context learning. 
With these two experiments the authors successfully showed the huge impact of quality of training data when training machine learning models. The abstract from the Phi-1 paper is the following: *We introduce phi-1, a new large language model for code, with significantly smaller size than competing models: phi-1 is a Transformer-based model with 1.3B parameters, trained for 4 days on 8 A100s, using a selection of “textbook quality” data from the web (6B tokens) and synthetically generated textbooks and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays surprising emergent properties compared to phi-1-base, our model before our finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller model with 350M parameters trained with the same pipeline as phi-1 that still achieves 45% on HumanEval.* The abstract from the Phi-1.5 paper is the following: *We continue the investigation into the power of smaller Transformer-based language models as initiated by TinyStories – a 10 million parameter model that can produce coherent English – and the follow-up work on phi-1, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate “textbook quality” data as a way to enhance the learning process compared to traditional web data. We follow the “Textbooks Are All You Need” approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named phi-1.5, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, phi-1.5 exhibits many of the traits of much larger LLMs, both good –such as the ability to “think step by step” or perform some rudimentary in-context learning– and bad, including hallucinations and the potential for toxic and biased generations –encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source phi-1.5 to promote further research on these urgent topics.* This model was contributed by [Susnato Dhar](https://huggingface.co/susnato). The original code for Phi-1 and Phi-1.5 can be found [here](https://huggingface.co/microsoft/phi-1/blob/main/modeling_mixformer_sequential.py) and [here](https://huggingface.co/microsoft/phi-1_5/blob/main/modeling_mixformer_sequential.py) respectively. ## Usage tips - This model is quite similar to `Llama` with the main difference in [`PhiDecoderLayer`], where they used [`PhiAttention`] and [`PhiMLP`] layers in parallel configuration. - The tokenizer used for this model is identical to the [`CodeGenTokenizer`]. ### Example : thon >>> from transformers import PhiForCausalLM, AutoTokenizer >>> # define the model and tokenzier. >>> model = PhiForCausalLM.from_pretrained(""susnato/phi-1_5_dev"") >>> tokenizer = AutoTokenizer.from_pretrained(""susnato/phi-1_5_dev"") >>> # feel free to change the prompt to your liking. >>> prompt = ""If I were an AI that had just achieved"" >>> # apply the tokenizer. >>> tokens = tokenizer(prompt, return_tensors=""pt"") >>> # use the model to generate new tokens. 
>>> generated_output = model.generate(**tokens, use_cache=True, max_new_tokens=10) >>> tokenizer.batch_decode(generated_output)[0] 'If I were an AI that had just achieved a breakthrough in machine learning, I would be thrilled' ## PhiConfig [[autodoc]] PhiConfig ## PhiModel [[autodoc]] PhiModel - forward ## PhiForCausalLM [[autodoc]] PhiForCausalLM - forward - generate ## PhiForSequenceClassification [[autodoc]] PhiForSequenceClassification - forward ## PhiForTokenClassification [[autodoc]] PhiForTokenClassification - forward " model_doc/idefics.md," # IDEFICS ## Overview The IDEFICS model was proposed in [OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents ](https://huggingface.co/papers/2306.16527 ) by Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh The abstract from the paper is the following: *Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks that require reasoning over one or multiple images to generate a text. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELISC, we train an 80 billion parameters vision and language model on the dataset and obtain competitive performance on various multimodal benchmarks. We release the code to reproduce the dataset along with the dataset itself.* This model was contributed by [HuggingFaceM4](https://huggingface.co/HuggingFaceM4). The original code can be found [here](). (TODO: don't have a public link yet). IDEFICS modeling code in Transformers is for finetuning and inferencing the pre-trained IDEFICS models. To train a new IDEFICS model from scratch use the m4 codebase (a link will be provided once it's made public) ## IdeficsConfig [[autodoc]] IdeficsConfig ## IdeficsModel [[autodoc]] IdeficsModel - forward ## IdeficsForVisionText2Text [[autodoc]] IdeficsForVisionText2Text - forward ## IdeficsImageProcessor [[autodoc]] IdeficsImageProcessor - preprocess ## IdeficsProcessor [[autodoc]] IdeficsProcessor - __call__ " model_doc/mra.md," # MRA ## Overview The MRA model was proposed in [Multi Resolution Analysis (MRA) for Approximate Self-Attention](https://arxiv.org/abs/2207.10284) by Zhanpeng Zeng, Sourav Pal, Jeffery Kline, Glenn M Fung, and Vikas Singh. The abstract from the paper is the following: *Transformers have emerged as a preferred model for many tasks in natural language processing and vision. Recent efforts on training and deploying Transformers more efficiently have identified many strategies to approximate the self-attention matrix, a key module in a Transformer architecture. Effective ideas include various prespecified sparsity patterns, low-rank basis expansions and combinations thereof. In this paper, we revisit classical Multiresolution Analysis (MRA) concepts such as Wavelets, whose potential value in this setting remains underexplored thus far. 
We show that simple approximations based on empirical feedback and design choices informed by modern hardware and implementation challenges, eventually yield a MRA-based approach for self-attention with an excellent performance profile across most criteria of interest. We undertake an extensive set of experiments and demonstrate that this multi-resolution scheme outperforms most efficient self-attention proposals and is favorable for both short and long sequences. Code is available at https://github.com/mlpen/mra-attention.* This model was contributed by [novice03](https://huggingface.co/novice03). The original code can be found [here](https://github.com/mlpen/mra-attention). ## MraConfig [[autodoc]] MraConfig ## MraModel [[autodoc]] MraModel - forward ## MraForMaskedLM [[autodoc]] MraForMaskedLM - forward ## MraForSequenceClassification [[autodoc]] MraForSequenceClassification - forward ## MraForMultipleChoice [[autodoc]] MraForMultipleChoice - forward ## MraForTokenClassification [[autodoc]] MraForTokenClassification - forward ## MraForQuestionAnswering [[autodoc]] MraForQuestionAnswering - forward" model_doc/gptj.md," # GPT-J ## Overview The GPT-J model was released in the [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax) repository by Ben Wang and Aran Komatsuzaki. It is a GPT-2-like causal language model trained on [the Pile](https://pile.eleuther.ai/) dataset. This model was contributed by [Stella Biderman](https://huggingface.co/stellaathena). ## Usage tips - To load [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B) in float32 one would need at least 2x model size RAM: 1x for initial weights and another 1x to load the checkpoint. So for GPT-J it would take at least 48GB RAM to just load the model. To reduce the RAM usage there are a few options. The `torch_dtype` argument can be used to initialize the model in half-precision on a CUDA device only. There is also a fp16 branch which stores the fp16 weights, which could be used to further minimize the RAM usage: thon >>> from transformers import GPTJForCausalLM >>> import torch >>> device = ""cuda"" >>> model = GPTJForCausalLM.from_pretrained( ""EleutherAI/gpt-j-6B"", revision=""float16"", torch_dtype=torch.float16, ).to(device) - The model should fit on 16GB GPU for inference. For training/fine-tuning it would take much more GPU RAM. Adam optimizer for example makes four copies of the model: model, gradients, average and squared average of the gradients. So it would need at least 4x model size GPU memory, even with mixed precision as gradient updates are in fp32. This is not including the activations and data batches, which would again require some more GPU RAM. So one should explore solutions such as DeepSpeed, to train/fine-tune the model. Another option is to use the original codebase to train/fine-tune the model on TPU and then convert the model to Transformers format for inference. Instructions for that could be found [here](https://github.com/kingoflolz/mesh-transformer-jax/blob/master/howto_finetune.md) - Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer. These extra tokens are added for the sake of efficiency on TPUs. To avoid the mismatch between embedding matrix size and vocab size, the tokenizer for [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B) contains 143 extra tokens `<|extratoken_1|> <|extratoken_143|>`, so the `vocab_size` of tokenizer also becomes 50400. 
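The vocabulary padding described above can be checked directly from the tokenizer without downloading the model weights. A small sketch (the exact token strings in the comments are what the tip above implies):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-j-6B')
print(len(tokenizer))  # 50400: 50257 GPT-2 entries plus 143 padding tokens
# the extra entries are expected to be the <|extratoken_*|> placeholders
print(tokenizer.convert_ids_to_tokens([50257, 50399]))  # ['<|extratoken_1|>', '<|extratoken_143|>']
```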
## Usage examples The [`~generation.GenerationMixin.generate`] method can be used to generate text using GPT-J model. thon >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> model = AutoModelForCausalLM.from_pretrained(""EleutherAI/gpt-j-6B"") >>> tokenizer = AutoTokenizer.from_pretrained(""EleutherAI/gpt-j-6B"") >>> prompt = ( ""In a shocking finding, scientists discovered a herd of unicorns living in a remote, "" ""previously unexplored valley, in the Andes Mountains. Even more surprising to the "" ""researchers was the fact that the unicorns spoke perfect English."" ) >>> input_ids = tokenizer(prompt, return_tensors=""pt"").input_ids >>> gen_tokens = model.generate( input_ids, do_sample=True, temperature=0.9, max_length=100, ) >>> gen_text = tokenizer.batch_decode(gen_tokens)[0] or in float16 precision: thon >>> from transformers import GPTJForCausalLM, AutoTokenizer >>> import torch >>> device = ""cuda"" >>> model = GPTJForCausalLM.from_pretrained(""EleutherAI/gpt-j-6B"", torch_dtype=torch.float16).to(device) >>> tokenizer = AutoTokenizer.from_pretrained(""EleutherAI/gpt-j-6B"") >>> prompt = ( ""In a shocking finding, scientists discovered a herd of unicorns living in a remote, "" ""previously unexplored valley, in the Andes Mountains. Even more surprising to the "" ""researchers was the fact that the unicorns spoke perfect English."" ) >>> input_ids = tokenizer(prompt, return_tensors=""pt"").input_ids.to(device) >>> gen_tokens = model.generate( input_ids, do_sample=True, temperature=0.9, max_length=100, ) >>> gen_text = tokenizer.batch_decode(gen_tokens)[0] ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GPT-J. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - Description of [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B). - A blog on how to [Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker](https://huggingface.co/blog/gptj-sagemaker). - A blog on how to [Accelerate GPT-J inference with DeepSpeed-Inference on GPUs](https://www.philschmid.de/gptj-deepspeed-inference). - A blog post introducing [GPT-J-6B: 6B JAX-Based Transformer](https://arankomatsuzaki.wordpress.com/2021/06/04/gpt-j/). 🌎 - A notebook for [GPT-J-6B Inference Demo](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb). 🌎 - Another notebook demonstrating [Inference with GPT-J-6B](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/GPT-J-6B/Inference_with_GPT_J_6B.ipynb). - [Causal language modeling](https://huggingface.co/course/en/chapter7/6?fw=pt#training-a-causal-language-model-from-scratch) chapter of the 🤗 Hugging Face Course. - [`GPTJForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling), [text generation example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation), and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). 
- [`TFGPTJForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_clmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - [`FlaxGPTJForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#causal-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/causal_language_modeling_flax.ipynb). **Documentation resources** - [Text classification task guide](../tasks/sequence_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) ## GPTJConfig [[autodoc]] GPTJConfig - all ## GPTJModel [[autodoc]] GPTJModel - forward ## GPTJForCausalLM [[autodoc]] GPTJForCausalLM - forward ## GPTJForSequenceClassification [[autodoc]] GPTJForSequenceClassification - forward ## GPTJForQuestionAnswering [[autodoc]] GPTJForQuestionAnswering - forward ## TFGPTJModel [[autodoc]] TFGPTJModel - call ## TFGPTJForCausalLM [[autodoc]] TFGPTJForCausalLM - call ## TFGPTJForSequenceClassification [[autodoc]] TFGPTJForSequenceClassification - call ## TFGPTJForQuestionAnswering [[autodoc]] TFGPTJForQuestionAnswering - call ## FlaxGPTJModel [[autodoc]] FlaxGPTJModel - __call__ ## FlaxGPTJForCausalLM [[autodoc]] FlaxGPTJForCausalLM - __call__ " model_doc/clap.md," # CLAP ## Overview The CLAP model was proposed in [Large Scale Contrastive Language-Audio pretraining with feature fusion and keyword-to-caption augmentation](https://arxiv.org/pdf/2211.06687.pdf) by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov. CLAP (Contrastive Language-Audio Pretraining) is a neural network trained on a variety of (audio, text) pairs. It can be instructed in to predict the most relevant text snippet, given an audio, without directly optimizing for the task. The CLAP model uses a SWINTransformer to get audio features from a log-Mel spectrogram input, and a RoBERTa model to get text features. Both the text and audio features are then projected to a latent space with identical dimension. The dot product between the projected audio and text features is then used as a similar score. The abstract from the paper is the following: *Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this target, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate the feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and enhance the performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in text-to-audio retrieval task. 
In audio classification tasks, the model achieves state-of-the-art performance in the zero-shot setting and is able to obtain performance comparable to models' results in the non-zero-shot setting.* This model was contributed by [Younes Belkada](https://huggingface.co/ybelkada) and [Arthur Zucker](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/LAION-AI/Clap). ## ClapConfig [[autodoc]] ClapConfig - from_text_audio_configs ## ClapTextConfig [[autodoc]] ClapTextConfig ## ClapAudioConfig [[autodoc]] ClapAudioConfig ## ClapFeatureExtractor [[autodoc]] ClapFeatureExtractor ## ClapProcessor [[autodoc]] ClapProcessor ## ClapModel [[autodoc]] ClapModel - forward - get_text_features - get_audio_features ## ClapTextModel [[autodoc]] ClapTextModel - forward ## ClapTextModelWithProjection [[autodoc]] ClapTextModelWithProjection - forward ## ClapAudioModel [[autodoc]] ClapAudioModel - forward ## ClapAudioModelWithProjection [[autodoc]] ClapAudioModelWithProjection - forward " model_doc/roberta-prelayernorm.md," # RoBERTa-PreLayerNorm ## Overview The RoBERTa-PreLayerNorm model was proposed in [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli. It is identical to using the `--encoder-normalize-before` flag in [fairseq](https://fairseq.readthedocs.io/). The abstract from the paper is the following: *fairseq is an open-source sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks. The toolkit is based on PyTorch and supports distributed training across multiple GPUs and machines. We also support fast mixed-precision training and inference on modern GPUs.* This model was contributed by [andreasmadsen](https://huggingface.co/andreasmadsen). The original code can be found [here](https://github.com/princeton-nlp/DinkyTrain). ## Usage tips - The implementation is the same as [Roberta](roberta) except that instead of using _Add and Norm_ it does _Norm and Add_. _Add_ and _Norm_ refer to the addition and layer normalization described in [Attention Is All You Need](https://arxiv.org/abs/1706.03762). - This is identical to using the `--encoder-normalize-before` flag in [fairseq](https://fairseq.readthedocs.io/).
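A minimal masked-language-modeling sketch, assuming the `andreasmadsen/efficient_mlm_m0.40` checkpoint released by the model contributor is available (any RoBERTa-PreLayerNorm checkpoint can be substituted):

```python
import torch
from transformers import AutoTokenizer, RobertaPreLayerNormForMaskedLM

# checkpoint name assumed from the contributor's releases; swap in your own checkpoint if needed
model_name = 'andreasmadsen/efficient_mlm_m0.40'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = RobertaPreLayerNormForMaskedLM.from_pretrained(model_name)

inputs = tokenizer('The capital of France is <mask>.', return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits

# decode the highest-scoring token at the masked position
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
print(tokenizer.decode(logits[0, mask_index].argmax(-1)))
```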
## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## RobertaPreLayerNormConfig [[autodoc]] RobertaPreLayerNormConfig ## RobertaPreLayerNormModel [[autodoc]] RobertaPreLayerNormModel - forward ## RobertaPreLayerNormForCausalLM [[autodoc]] RobertaPreLayerNormForCausalLM - forward ## RobertaPreLayerNormForMaskedLM [[autodoc]] RobertaPreLayerNormForMaskedLM - forward ## RobertaPreLayerNormForSequenceClassification [[autodoc]] RobertaPreLayerNormForSequenceClassification - forward ## RobertaPreLayerNormForMultipleChoice [[autodoc]] RobertaPreLayerNormForMultipleChoice - forward ## RobertaPreLayerNormForTokenClassification [[autodoc]] RobertaPreLayerNormForTokenClassification - forward ## RobertaPreLayerNormForQuestionAnswering [[autodoc]] RobertaPreLayerNormForQuestionAnswering - forward ## TFRobertaPreLayerNormModel [[autodoc]] TFRobertaPreLayerNormModel - call ## TFRobertaPreLayerNormForCausalLM [[autodoc]] TFRobertaPreLayerNormForCausalLM - call ## TFRobertaPreLayerNormForMaskedLM [[autodoc]] TFRobertaPreLayerNormForMaskedLM - call ## TFRobertaPreLayerNormForSequenceClassification [[autodoc]] TFRobertaPreLayerNormForSequenceClassification - call ## TFRobertaPreLayerNormForMultipleChoice [[autodoc]] TFRobertaPreLayerNormForMultipleChoice - call ## TFRobertaPreLayerNormForTokenClassification [[autodoc]] TFRobertaPreLayerNormForTokenClassification - call ## TFRobertaPreLayerNormForQuestionAnswering [[autodoc]] TFRobertaPreLayerNormForQuestionAnswering - call ## FlaxRobertaPreLayerNormModel [[autodoc]] FlaxRobertaPreLayerNormModel - __call__ ## FlaxRobertaPreLayerNormForCausalLM [[autodoc]] FlaxRobertaPreLayerNormForCausalLM - __call__ ## FlaxRobertaPreLayerNormForMaskedLM [[autodoc]] FlaxRobertaPreLayerNormForMaskedLM - __call__ ## FlaxRobertaPreLayerNormForSequenceClassification [[autodoc]] FlaxRobertaPreLayerNormForSequenceClassification - __call__ ## FlaxRobertaPreLayerNormForMultipleChoice [[autodoc]] FlaxRobertaPreLayerNormForMultipleChoice - __call__ ## FlaxRobertaPreLayerNormForTokenClassification [[autodoc]] FlaxRobertaPreLayerNormForTokenClassification - __call__ ## FlaxRobertaPreLayerNormForQuestionAnswering [[autodoc]] FlaxRobertaPreLayerNormForQuestionAnswering - __call__ " model_doc/herbert.md," # HerBERT ## Overview The HerBERT model was proposed in [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, and Ireneusz Gawlik. It is a BERT-based Language Model trained on Polish Corpora using only MLM objective with dynamic masking of whole words. The abstract from the paper is the following: *In recent years, a series of Transformer-based models unlocked major improvements in general natural language understanding (NLU) tasks. Such a fast pace of research would not be possible without general NLU benchmarks, which allow for a fair comparison of the proposed methods. However, such benchmarks are available only for a handful of languages. To alleviate this issue, we introduce a comprehensive multi-task benchmark for the Polish language understanding, accompanied by an online leaderboard. 
It consists of a diverse set of tasks, adopted from existing datasets for named entity recognition, question-answering, textual entailment, and others. We also introduce a new sentiment analysis task for the e-commerce domain, named Allegro Reviews (AR). To ensure a common evaluation scheme and promote models that generalize to different NLU tasks, the benchmark includes datasets from varying domains and applications. Additionally, we release HerBERT, a Transformer-based model trained specifically for the Polish language, which has the best average performance and obtains the best results for three out of nine tasks. Finally, we provide an extensive evaluation, including several standard baselines and recently proposed, multilingual Transformer-based models.* This model was contributed by [rmroczkowski](https://huggingface.co/rmroczkowski). The original code can be found [here](https://github.com/allegro/HerBERT). ## Usage example thon >>> from transformers import HerbertTokenizer, RobertaModel >>> tokenizer = HerbertTokenizer.from_pretrained(""allegro/herbert-klej-cased-tokenizer-v1"") >>> model = RobertaModel.from_pretrained(""allegro/herbert-klej-cased-v1"") >>> encoded_input = tokenizer.encode(""Kto ma lepszą sztukę, ma lepszy rząd – to jasne."", return_tensors=""pt"") >>> outputs = model(encoded_input) >>> # HerBERT can also be loaded using AutoTokenizer and AutoModel: >>> import torch >>> from transformers import AutoModel, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""allegro/herbert-klej-cased-tokenizer-v1"") >>> model = AutoModel.from_pretrained(""allegro/herbert-klej-cased-v1"") Herbert implementation is the same as `BERT` except for the tokenization method. Refer to [BERT documentation](bert) for API reference and examples. ## HerbertTokenizer [[autodoc]] HerbertTokenizer ## HerbertTokenizerFast [[autodoc]] HerbertTokenizerFast " model_doc/bridgetower.md," # BridgeTower ## Overview The BridgeTower model was proposed in [BridgeTower: Building Bridges Between Encoders in Vision-Language Representative Learning](https://arxiv.org/abs/2206.08657) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. The goal of this model is to build a bridge between each uni-modal encoder and the cross-modal encoder to enable comprehensive and detailed interaction at each layer of the cross-modal encoder thus achieving remarkable performance on various downstream tasks with almost negligible additional performance and computational costs. This paper has been accepted to the [AAAI'23](https://aaai.org/Conferences/AAAI-23/) conference. The abstract from the paper is the following: *Vision-Language (VL) models with the TWO-TOWER architecture have dominated visual-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BRIDGETOWER, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the crossmodal encoder. 
This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BRIDGETOWER achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BRIDGETOWER achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BRIDGETOWER achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets.* BridgeTower architecture. Taken from the original paper. This model was contributed by [Anahita Bhiwandiwalla](https://huggingface.co/anahita-b), [Tiep Le](https://huggingface.co/Tile) and [Shaoyen Tseng](https://huggingface.co/shaoyent). The original code can be found [here](https://github.com/microsoft/BridgeTower). ## Usage tips and examples BridgeTower consists of a visual encoder, a textual encoder and cross-modal encoder with multiple lightweight bridge layers. The goal of this approach was to build a bridge between each uni-modal encoder and the cross-modal encoder to enable comprehensive and detailed interaction at each layer of the cross-modal encoder. In principle, one can apply any visual, textual or cross-modal encoder in the proposed architecture. The [`BridgeTowerProcessor`] wraps [`RobertaTokenizer`] and [`BridgeTowerImageProcessor`] into a single instance to both encode the text and prepare the images respectively. The following example shows how to run contrastive learning using [`BridgeTowerProcessor`] and [`BridgeTowerForContrastiveLearning`]. thon >>> from transformers import BridgeTowerProcessor, BridgeTowerForContrastiveLearning >>> import requests >>> from PIL import Image >>> url = ""http://images.cocodataset.org/val2017/000000039769.jpg"" >>> image = Image.open(requests.get(url, stream=True).raw) >>> texts = [""An image of two cats chilling on a couch"", ""A football player scoring a goal""] >>> processor = BridgeTowerProcessor.from_pretrained(""BridgeTower/bridgetower-large-itm-mlm-itc"") >>> model = BridgeTowerForContrastiveLearning.from_pretrained(""BridgeTower/bridgetower-large-itm-mlm-itc"") >>> # forward pass >>> scores = dict() >>> for text in texts: # prepare inputs encoding = processor(image, text, return_tensors=""pt"") outputs = model(**encoding) scores[text] = outputs The following example shows how to run image-text retrieval using [`BridgeTowerProcessor`] and [`BridgeTowerForImageAndTextRetrieval`]. 
thon >>> from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval >>> import requests >>> from PIL import Image >>> url = ""http://images.cocodataset.org/val2017/000000039769.jpg"" >>> image = Image.open(requests.get(url, stream=True).raw) >>> texts = [""An image of two cats chilling on a couch"", ""A football player scoring a goal""] >>> processor = BridgeTowerProcessor.from_pretrained(""BridgeTower/bridgetower-base-itm-mlm"") >>> model = BridgeTowerForImageAndTextRetrieval.from_pretrained(""BridgeTower/bridgetower-base-itm-mlm"") >>> # forward pass >>> scores = dict() >>> for text in texts: # prepare inputs encoding = processor(image, text, return_tensors=""pt"") outputs = model(**encoding) scores[text] = outputs.logits[0, 1].item() The following example shows how to run masked language modeling using [`BridgeTowerProcessor`] and [`BridgeTowerForMaskedLM`]. thon >>> from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM >>> from PIL import Image >>> import requests >>> url = ""http://images.cocodataset.org/val2017/000000360943.jpg"" >>> image = Image.open(requests.get(url, stream=True).raw).convert(""RGB"") >>> text = ""a looking out of the window"" >>> processor = BridgeTowerProcessor.from_pretrained(""BridgeTower/bridgetower-base-itm-mlm"") >>> model = BridgeTowerForMaskedLM.from_pretrained(""BridgeTower/bridgetower-base-itm-mlm"") >>> # prepare inputs >>> encoding = processor(image, text, return_tensors=""pt"") >>> # forward pass >>> outputs = model(**encoding) >>> results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist()) >>> print(results) .a cat looking out of the window. Tips: - This implementation of BridgeTower uses [`RobertaTokenizer`] to generate text embeddings and OpenAI's CLIP/ViT model to compute visual embeddings. - Checkpoints for pre-trained [bridgeTower-base](https://huggingface.co/BridgeTower/bridgetower-base) and [bridgetower masked language modeling and image text matching](https://huggingface.co/BridgeTower/bridgetower-base-itm-mlm) are released. - Please refer to [Table 5](https://arxiv.org/pdf/2206.08657.pdf) for BridgeTower's performance on Image Retrieval and other down stream tasks. - The PyTorch version of this model is only available in torch 1.10 and higher. ## BridgeTowerConfig [[autodoc]] BridgeTowerConfig ## BridgeTowerTextConfig [[autodoc]] BridgeTowerTextConfig ## BridgeTowerVisionConfig [[autodoc]] BridgeTowerVisionConfig ## BridgeTowerImageProcessor [[autodoc]] BridgeTowerImageProcessor - preprocess ## BridgeTowerProcessor [[autodoc]] BridgeTowerProcessor - __call__ ## BridgeTowerModel [[autodoc]] BridgeTowerModel - forward ## BridgeTowerForContrastiveLearning [[autodoc]] BridgeTowerForContrastiveLearning - forward ## BridgeTowerForMaskedLM [[autodoc]] BridgeTowerForMaskedLM - forward ## BridgeTowerForImageAndTextRetrieval [[autodoc]] BridgeTowerForImageAndTextRetrieval - forward " model_doc/cpmant.md," # CPMAnt ## Overview CPM-Ant is an open-source Chinese pre-trained language model (PLM) with 10B parameters. It is also the first milestone of the live training process of CPM-Live. The training process is cost-effective and environment-friendly. CPM-Ant also achieves promising results with delta tuning on the CUGE benchmark. Besides the full model, we also provide various compressed versions to meet the requirements of different hardware configurations. 
[See more](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live) This model was contributed by [OpenBMB](https://huggingface.co/openbmb). The original code can be found [here](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live). ## Resources - A tutorial on [CPM-Live](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live). ## CpmAntConfig [[autodoc]] CpmAntConfig - all ## CpmAntTokenizer [[autodoc]] CpmAntTokenizer - all ## CpmAntModel [[autodoc]] CpmAntModel - all ## CpmAntForCausalLM [[autodoc]] CpmAntForCausalLM - all" model_doc/focalnet.md," # FocalNet ## Overview The FocalNet model was proposed in [Focal Modulation Networks](https://arxiv.org/abs/2203.11926) by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao. FocalNets completely replace self-attention (used in models like [ViT](vit) and [Swin](swin)) by a focal modulation mechanism for modeling token interactions in vision. The authors claim that FocalNets outperform self-attention based models with similar computational costs on the tasks of image classification, object detection, and segmentation. The abstract from the paper is the following: *We propose focal modulation networks (FocalNets in short), where self-attention (SA) is completely replaced by a focal modulation mechanism for modeling token interactions in vision. Focal modulation comprises three components: (i) hierarchical contextualization, implemented using a stack of depth-wise convolutional layers, to encode visual contexts from short to long ranges, (ii) gated aggregation to selectively gather contexts for each query token based on its content, and (iii) element-wise modulation or affine transformation to inject the aggregated context into the query. Extensive experiments show FocalNets outperform the state-of-the-art SA counterparts (e.g., Swin and Focal Transformers) with similar computational costs on the tasks of image classification, object detection, and segmentation. Specifically, FocalNets with tiny and base size achieve 82.3% and 83.9% top-1 accuracy on ImageNet-1K. After pretrained on ImageNet-22K in 224 resolution, it attains 86.5% and 87.3% top-1 accuracy when finetuned with resolution 224 and 384, respectively. When transferred to downstream tasks, FocalNets exhibit clear superiority. For object detection with Mask R-CNN, FocalNet base trained with 1\times outperforms the Swin counterpart by 2.1 points and already surpasses Swin trained with 3\times schedule (49.0 v.s. 48.5). For semantic segmentation with UPerNet, FocalNet base at single-scale outperforms Swin by 2.4, and beats Swin at multi-scale (50.5 v.s. 49.7). Using large FocalNet and Mask2former, we achieve 58.5 mIoU for ADE20K semantic segmentation, and 57.9 PQ for COCO Panoptic Segmentation. Using huge FocalNet and DINO, we achieved 64.3 and 64.4 mAP on COCO minival and test-dev, respectively, establishing new SoTA on top of much larger attention-based models like Swinv2-G and BEIT-3.* This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/microsoft/FocalNet). 
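A short image-classification inference sketch (not part of the original documentation), assuming the `microsoft/focalnet-tiny` checkpoint is available on the Hub:

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, FocalNetForImageClassification

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained('microsoft/focalnet-tiny')
model = FocalNetForImageClassification.from_pretrained('microsoft/focalnet-tiny')

inputs = processor(images=image, return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # predicted ImageNet-1k label
```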
## FocalNetConfig [[autodoc]] FocalNetConfig ## FocalNetModel [[autodoc]] FocalNetModel - forward ## FocalNetForMaskedImageModeling [[autodoc]] FocalNetForMaskedImageModeling - forward ## FocalNetForImageClassification [[autodoc]] FocalNetForImageClassification - forward " model_doc/opt.md," # OPT ## Overview The OPT model was proposed in [Open Pre-trained Transformer Language Models](https://arxiv.org/pdf/2205.01068) by Meta AI. OPT is a series of open-sourced large causal language models which perform similar in performance to GPT3. The abstract from the paper is the following: *Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.* This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ), [Younes Belkada](https://huggingface.co/ybelkada), and [Patrick Von Platen](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://github.com/facebookresearch/metaseq). Tips: - OPT has the same architecture as [`BartDecoder`]. - Contrary to GPT2, OPT adds the EOS token `` to the beginning of every prompt. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OPT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource. - A notebook on [fine-tuning OPT with PEFT, bitsandbytes, and Transformers](https://colab.research.google.com/drive/1jCkpikz0J2o20FBQmYmAGdiKmJGOMo-o?usp=sharing). 🌎 - A blog post on [decoding strategies with OPT](https://huggingface.co/blog/introducing-csearch#62-example-two---opt). - [Causal language modeling](https://huggingface.co/course/en/chapter7/6?fw=pt#training-a-causal-language-model-from-scratch) chapter of the 🤗 Hugging Face Course. - [`OPTForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFOPTForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_clmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). 
- [`FlaxOPTForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#causal-language-modeling).

- [Text classification task guide](../tasks/sequence_classification)
- [`OPTForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb).

- [`OPTForQuestionAnswering`] is supported by this [question answering example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb).
- [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course.

⚡️ Inference

- A blog post on [How 🤗 Accelerate runs very large models thanks to PyTorch](https://huggingface.co/blog/accelerate-large-models) with OPT.

## Combining OPT and Flash Attention 2

First, make sure to install the latest version of Flash Attention 2:

```bash
pip install -U flash-attn --no-build-isolation
```

Also make sure that your hardware is compatible with Flash Attention 2 (see the official documentation of the flash-attn repository for details), and load your model in half-precision (e.g. `torch.float16`).

To load and run a model using Flash Attention 2, refer to the snippet below:

```python
>>> import torch
>>> from transformers import OPTForCausalLM, GPT2Tokenizer
>>> device = ""cuda""  # the device to load the model onto

>>> model = OPTForCausalLM.from_pretrained(""facebook/opt-350m"", torch_dtype=torch.float16, use_flash_attention_2=True)
>>> tokenizer = GPT2Tokenizer.from_pretrained(""facebook/opt-350m"")

>>> prompt = (""A chat between a curious human and the Statue of Liberty.\n\nHuman: What is your name?\nStatue: I am the ""
...           ""Statue of Liberty.\nHuman: Where do you live?\nStatue: New York City.\nHuman: How long have you lived ""
...           ""there?"")

>>> model_inputs = tokenizer([prompt], return_tensors=""pt"").to(device)
>>> model.to(device)

>>> generated_ids = model.generate(**model_inputs, max_new_tokens=30, do_sample=False)
>>> tokenizer.batch_decode(generated_ids)[0]
'A chat between a curious human and the Statue of Liberty.\n\nHuman: What is your name?\nStatue: I am the Statue of Liberty.\nHuman: Where do you live?\nStatue: New York City.\nHuman: How long have you lived there?\nStatue: I have lived here for about a year.\nHuman: What is your favorite place to eat?\nStatue: I love'
```

### Expected speedups

Expected speedup diagrams (not reproduced here) compare pure inference time between the native implementation in transformers and the Flash Attention 2 version of the model for the `facebook/opt-2.7b` and `facebook/opt-350m` checkpoints, using two different sequence lengths.
## OPTConfig [[autodoc]] OPTConfig ## OPTModel [[autodoc]] OPTModel - forward ## OPTForCausalLM [[autodoc]] OPTForCausalLM - forward ## OPTForSequenceClassification [[autodoc]] OPTForSequenceClassification - forward ## OPTForQuestionAnswering [[autodoc]] OPTForQuestionAnswering - forward ## TFOPTModel [[autodoc]] TFOPTModel - call ## TFOPTForCausalLM [[autodoc]] TFOPTForCausalLM - call ## FlaxOPTModel [[autodoc]] FlaxOPTModel - __call__ ## FlaxOPTForCausalLM [[autodoc]] FlaxOPTForCausalLM - __call__ " model_doc/blenderbot-small.md," # Blenderbot Small Note that [`BlenderbotSmallModel`] and [`BlenderbotSmallForConditionalGeneration`] are only used in combination with the checkpoint [facebook/blenderbot-90M](https://huggingface.co/facebook/blenderbot-90M). Larger Blenderbot checkpoints should instead be used with [`BlenderbotModel`] and [`BlenderbotForConditionalGeneration`] ## Overview The Blender chatbot model was proposed in [Recipes for building an open-domain chatbot](https://arxiv.org/pdf/2004.13637.pdf) Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston on 30 Apr 2020. The abstract of the paper is the following: *Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent persona. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The authors' code can be found [here](https://github.com/facebookresearch/ParlAI). ## Usage tips Blenderbot Small is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. 
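To make the tip above concrete, here is a minimal generation sketch with the 90M checkpoint referenced on this page; the utterance and generation settings are arbitrary choices for illustration, and the exact reply will depend on the checkpoint and decoding parameters:

```python
from transformers import BlenderbotSmallForConditionalGeneration, BlenderbotSmallTokenizer

# the checkpoint referenced above
checkpoint = "facebook/blenderbot-90M"
tokenizer = BlenderbotSmallTokenizer.from_pretrained(checkpoint)
model = BlenderbotSmallForConditionalGeneration.from_pretrained(checkpoint)

utterance = "My friends are cool but they eat too many carbs."
inputs = tokenizer([utterance], return_tensors="pt")  # pad on the right when batching inputs
reply_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```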
## Resources - [Causal language modeling task guide](../tasks/language_modeling) - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## BlenderbotSmallConfig [[autodoc]] BlenderbotSmallConfig ## BlenderbotSmallTokenizer [[autodoc]] BlenderbotSmallTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## BlenderbotSmallTokenizerFast [[autodoc]] BlenderbotSmallTokenizerFast ## BlenderbotSmallModel [[autodoc]] BlenderbotSmallModel - forward ## BlenderbotSmallForConditionalGeneration [[autodoc]] BlenderbotSmallForConditionalGeneration - forward ## BlenderbotSmallForCausalLM [[autodoc]] BlenderbotSmallForCausalLM - forward ## TFBlenderbotSmallModel [[autodoc]] TFBlenderbotSmallModel - call ## TFBlenderbotSmallForConditionalGeneration [[autodoc]] TFBlenderbotSmallForConditionalGeneration - call ## FlaxBlenderbotSmallModel [[autodoc]] FlaxBlenderbotSmallModel - __call__ - encode - decode ## FlaxBlenderbotForConditionalGeneration [[autodoc]] FlaxBlenderbotSmallForConditionalGeneration - __call__ - encode - decode " model_doc/mobilevitv2.md," # MobileViTV2 ## Overview The MobileViTV2 model was proposed in [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680) by Sachin Mehta and Mohammad Rastegari. MobileViTV2 is the second version of MobileViT, constructed by replacing the multi-headed self-attention in MobileViT with separable self-attention. The abstract from the paper is the following: *Mobile vision transformers (MobileViT) can achieve state-of-the-art performance across several mobile vision tasks, including classification and detection. Though these models have fewer parameters, they have high latency as compared to convolutional neural network-based models. The main efficiency bottleneck in MobileViT is the multi-headed self-attention (MHA) in transformers, which requires O(k2) time complexity with respect to the number of tokens (or patches) k. Moreover, MHA requires costly operations (e.g., batch-wise matrix multiplication) for computing self-attention, impacting latency on resource-constrained devices. This paper introduces a separable self-attention method with linear complexity, i.e. O(k). A simple yet effective characteristic of the proposed method is that it uses element-wise operations for computing self-attention, making it a good choice for resource-constrained devices. The improved model, MobileViTV2, is state-of-the-art on several mobile vision tasks, including ImageNet object classification and MS-COCO object detection. With about three million parameters, MobileViTV2 achieves a top-1 accuracy of 75.6% on the ImageNet dataset, outperforming MobileViT by about 1% while running 3.2× faster on a mobile device.* This model was contributed by [shehan97](https://huggingface.co/shehan97). The original code can be found [here](https://github.com/apple/ml-cvnets). ## Usage tips - MobileViTV2 is more like a CNN than a Transformer model. It does not work on sequence data but on batches of images. Unlike ViT, there are no embeddings. The backbone model outputs a feature map. - One can use [`MobileViTImageProcessor`] to prepare images for the model. Note that if you do your own preprocessing, the pretrained checkpoints expect images to be in BGR pixel order (not RGB). 
- The available image classification checkpoints are pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k) (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). - The segmentation model uses a [DeepLabV3](https://arxiv.org/abs/1706.05587) head. The available semantic segmentation checkpoints are pre-trained on [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/). ## MobileViTV2Config [[autodoc]] MobileViTV2Config ## MobileViTV2Model [[autodoc]] MobileViTV2Model - forward ## MobileViTV2ForImageClassification [[autodoc]] MobileViTV2ForImageClassification - forward ## MobileViTV2ForSemanticSegmentation [[autodoc]] MobileViTV2ForSemanticSegmentation - forward " model_doc/cvt.md," # Convolutional Vision Transformer (CvT) ## Overview The CvT model was proposed in [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan and Lei Zhang. The Convolutional vision Transformer (CvT) improves the [Vision Transformer (ViT)](vit) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. The abstract from the paper is the following: *We present in this paper a new architecture, named Convolutional vision Transformer (CvT), that improves Vision Transformer (ViT) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. This is accomplished through two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs) to the ViT architecture (\ie shift, scale, and distortion invariance) while maintaining the merits of Transformers (\ie dynamic attention, global context, and better generalization). We validate CvT by conducting extensive experiments, showing that this approach achieves state-of-the-art performance over other Vision Transformers and ResNets on ImageNet-1k, with fewer parameters and lower FLOPs. In addition, performance gains are maintained when pretrained on larger datasets (\eg ImageNet-22k) and fine-tuned to downstream tasks. Pre-trained on ImageNet-22k, our CvT-W24 obtains a top-1 accuracy of 87.7\% on the ImageNet-1k val set. Finally, our results show that the positional encoding, a crucial component in existing Vision Transformers, can be safely removed in our model, simplifying the design for higher resolution vision tasks.* This model was contributed by [anugunj](https://huggingface.co/anugunj). The original code can be found [here](https://github.com/microsoft/CvT). ## Usage tips - CvT models are regular Vision Transformers, but trained with convolutions. They outperform the [original model (ViT)](vit) when fine-tuned on ImageNet-1K and CIFAR-100. - You can check out demo notebooks regarding inference as well as fine-tuning on custom data [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/VisionTransformer) (you can just replace [`ViTFeatureExtractor`] by [`AutoImageProcessor`] and [`ViTForImageClassification`] by [`CvtForImageClassification`]). 
- The available checkpoints are either (1) pre-trained on [ImageNet-22k](http://www.image-net.org/) (a collection of 14 million images and 22k classes) only, (2) also fine-tuned on ImageNet-22k or (3) also fine-tuned on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/) (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CvT. - [`CvtForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## CvtConfig [[autodoc]] CvtConfig ## CvtModel [[autodoc]] CvtModel - forward ## CvtForImageClassification [[autodoc]] CvtForImageClassification - forward ## TFCvtModel [[autodoc]] TFCvtModel - call ## TFCvtForImageClassification [[autodoc]] TFCvtForImageClassification - call " model_doc/data2vec.md," # Data2Vec ## Overview The Data2Vec model was proposed in [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/pdf/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli. Data2Vec proposes a unified framework for self-supervised learning across different data modalities - text, audio and images. Importantly, predicted targets for pre-training are contextualized latent representations of the inputs, rather than modality-specific, context-independent targets. The abstract from the paper is the following: *While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a selfdistillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches. Models and code are available at www.github.com/pytorch/fairseq/tree/master/examples/data2vec.* This model was contributed by [edugp](https://huggingface.co/edugp) and [patrickvonplaten](https://huggingface.co/patrickvonplaten). [sayakpaul](https://github.com/sayakpaul) and [Rocketknight1](https://github.com/Rocketknight1) contributed Data2Vec for vision in TensorFlow. The original code (for NLP and Speech) can be found [here](https://github.com/pytorch/fairseq/tree/main/examples/data2vec). 
The original code for vision can be found [here](https://github.com/facebookresearch/data2vec_vision/tree/main/beit). ## Usage tips - Data2VecAudio, Data2VecText, and Data2VecVision have all been trained using the same self-supervised learning method. - For Data2VecAudio, preprocessing is identical to [`Wav2Vec2Model`], including feature extraction - For Data2VecText, preprocessing is identical to [`RobertaModel`], including tokenization. - For Data2VecVision, preprocessing is identical to [`BeitModel`], including feature extraction. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Data2Vec. - [`Data2VecVisionForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - To fine-tune [`TFData2VecVisionForImageClassification`] on a custom dataset, see [this notebook](https://colab.research.google.com/github/sayakpaul/TF-2.0-Hacks/blob/master/data2vec_vision_image_classification.ipynb). **Data2VecText documentation resources** - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) **Data2VecAudio documentation resources** - [Audio classification task guide](../tasks/audio_classification) - [Automatic speech recognition task guide](../tasks/asr) **Data2VecVision documentation resources** - [Image classification](../tasks/image_classification) - [Semantic segmentation](../tasks/semantic_segmentation) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
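Because Data2VecText preprocessing mirrors RoBERTa, a masked language modeling call looks just like it would for a RoBERTa checkpoint. The sketch below assumes the `facebook/data2vec-text-base` checkpoint purely for illustration:

```python
import torch
from transformers import AutoTokenizer, Data2VecTextForMaskedLM

# assumed checkpoint name; any Data2VecText masked-LM checkpoint should behave the same way
checkpoint = "facebook/data2vec-text-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)  # RoBERTa-style tokenizer
model = Data2VecTextForMaskedLM.from_pretrained(checkpoint)

inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# pick the highest-scoring token for the masked position
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(-1)
print(tokenizer.decode(predicted_id))
```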
## Data2VecTextConfig [[autodoc]] Data2VecTextConfig ## Data2VecAudioConfig [[autodoc]] Data2VecAudioConfig ## Data2VecVisionConfig [[autodoc]] Data2VecVisionConfig ## Data2VecAudioModel [[autodoc]] Data2VecAudioModel - forward ## Data2VecAudioForAudioFrameClassification [[autodoc]] Data2VecAudioForAudioFrameClassification - forward ## Data2VecAudioForCTC [[autodoc]] Data2VecAudioForCTC - forward ## Data2VecAudioForSequenceClassification [[autodoc]] Data2VecAudioForSequenceClassification - forward ## Data2VecAudioForXVector [[autodoc]] Data2VecAudioForXVector - forward ## Data2VecTextModel [[autodoc]] Data2VecTextModel - forward ## Data2VecTextForCausalLM [[autodoc]] Data2VecTextForCausalLM - forward ## Data2VecTextForMaskedLM [[autodoc]] Data2VecTextForMaskedLM - forward ## Data2VecTextForSequenceClassification [[autodoc]] Data2VecTextForSequenceClassification - forward ## Data2VecTextForMultipleChoice [[autodoc]] Data2VecTextForMultipleChoice - forward ## Data2VecTextForTokenClassification [[autodoc]] Data2VecTextForTokenClassification - forward ## Data2VecTextForQuestionAnswering [[autodoc]] Data2VecTextForQuestionAnswering - forward ## Data2VecVisionModel [[autodoc]] Data2VecVisionModel - forward ## Data2VecVisionForImageClassification [[autodoc]] Data2VecVisionForImageClassification - forward ## Data2VecVisionForSemanticSegmentation [[autodoc]] Data2VecVisionForSemanticSegmentation - forward ## TFData2VecVisionModel [[autodoc]] TFData2VecVisionModel - call ## TFData2VecVisionForImageClassification [[autodoc]] TFData2VecVisionForImageClassification - call ## TFData2VecVisionForSemanticSegmentation [[autodoc]] TFData2VecVisionForSemanticSegmentation - call " model_doc/nllb.md," # NLLB ## Updated tokenizer behavior **DISCLAIMER:** The default behaviour for the tokenizer was fixed and thus changed in April 2023. The previous version adds `[self.eos_token_id, self.cur_lang_code]` at the end of the token sequence for both target and source tokenization. This is wrong as the NLLB paper mentions (page 48, 6.1.1. Model Architecture) : *Note that we prefix the source sequence with the source language, as opposed to the target language as previously done in several works (Arivazhagan et al., 2019; Johnson et al., 2017). This is primarily because we prioritize optimizing zero-shot performance of our model on any pair of 200 languages at a minor cost to supervised performance.* Previous behaviour: thon >>> from transformers import NllbTokenizer >>> tokenizer = NllbTokenizer.from_pretrained(""facebook/nllb-200-distilled-600M"") >>> tokenizer(""How was your day?"").input_ids [13374, 1398, 4260, 4039, 248130, 2, 256047] >>> # 2: '' >>> # 256047 : 'eng_Latn' New behaviour thon >>> from transformers import NllbTokenizer >>> tokenizer = NllbTokenizer.from_pretrained(""facebook/nllb-200-distilled-600M"") >>> tokenizer(""How was your day?"").input_ids [256047, 13374, 1398, 4260, 4039, 248130, 2] Enabling the old behaviour can be done as follows: thon >>> from transformers import NllbTokenizer >>> tokenizer = NllbTokenizer.from_pretrained(""facebook/nllb-200-distilled-600M"", legacy_behaviour=True) For more details, feel free to check the linked [PR](https://github.com/huggingface/transformers/pull/22313) and [Issue](https://github.com/huggingface/transformers/issues/19943). ## Overview The NLLB model was presented in [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by Marta R. 
Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. The abstract of the paper is the following: *Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system.* This implementation contains the dense models available on release. **The sparse model NLLB-MoE (Mixture of Expert) is now available! More details [here](nllb-moe)** This model was contributed by [Lysandre](https://huggingface.co/lysandre). The authors' code can be found [here](https://github.com/facebookresearch/fairseq/tree/nllb). ## Generating with NLLB While generating the target text set the `forced_bos_token_id` to the target language id. The following example shows how to translate English to French using the *facebook/nllb-200-distilled-600M* model. Note that we're using the BCP-47 code for French `fra_Latn`. See [here](https://github.com/facebookresearch/flores/blob/main/flores200/README.md#languages-in-flores-200) for the list of all BCP-47 in the Flores 200 dataset. 
thon >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(""facebook/nllb-200-distilled-600M"") >>> model = AutoModelForSeq2SeqLM.from_pretrained(""facebook/nllb-200-distilled-600M"") >>> article = ""UN Chief says there is no military solution in Syria"" >>> inputs = tokenizer(article, return_tensors=""pt"") >>> translated_tokens = model.generate( **inputs, forced_bos_token_id=tokenizer.lang_code_to_id[""fra_Latn""], max_length=30 ) >>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] Le chef de l'ONU dit qu'il n'y a pas de solution militaire en Syrie ### Generating from any other language than English English (`eng_Latn`) is set as the default language from which to translate. In order to specify that you'd like to translate from a different language, you should specify the BCP-47 code in the `src_lang` keyword argument of the tokenizer initialization. See example below for a translation from romanian to german: >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained( ""facebook/nllb-200-distilled-600M"", token=True, src_lang=""ron_Latn"" ) >>> model = AutoModelForSeq2SeqLM.from_pretrained(""facebook/nllb-200-distilled-600M"", token=True) >>> article = ""Şeful ONU spune că nu există o soluţie militară în Siria"" >>> inputs = tokenizer(article, return_tensors=""pt"") >>> translated_tokens = model.generate( **inputs, forced_bos_token_id=tokenizer.lang_code_to_id[""deu_Latn""], max_length=30 ) >>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] UN-Chef sagt, es gibt keine militärische Lösung in Syrien ## Resources - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## NllbTokenizer [[autodoc]] NllbTokenizer - build_inputs_with_special_tokens ## NllbTokenizerFast [[autodoc]] NllbTokenizerFast " model_doc/m2m_100.md," # M2M100 ## Overview The M2M100 model was proposed in [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. The abstract from the paper is the following: *Existing work in translation demonstrated the potential of massively multilingual machine translation by training a single model able to translate between any pair of languages. However, much of this work is English-Centric by training only on data which was translated from or to English. While this is supported by large sources of training data, it does not reflect translation needs worldwide. In this work, we create a true Many-to-Many multilingual translation model that can translate directly between any pair of 100 languages. We build and open source a training dataset that covers thousands of language directions with supervised data, created through large-scale mining. Then, we explore how to effectively increase model capacity through a combination of dense scaling and language-specific sparse parameters to create high quality models. Our focus on non-English-Centric models brings gains of more than 10 BLEU when directly translating between non-English directions while performing competitively to the best single systems of WMT. 
We open-source our scripts so that others may reproduce the data, evaluation, and final M2M-100 model.* This model was contributed by [valhalla](https://huggingface.co/valhalla). ## Usage tips and examples M2M100 is a multilingual encoder-decoder (seq-to-seq) model primarily intended for translation tasks. As the model is multilingual it expects the sequences in a certain format: A special language id token is used as prefix in both the source and target text. The source text format is `[lang_code] X [eos]`, where `lang_code` is source language id for source text and target language id for target text, with `X` being the source or target text. The [`M2M100Tokenizer`] depends on `sentencepiece` so be sure to install it before running the examples. To install `sentencepiece` run `pip install sentencepiece`. **Supervised Training** thon from transformers import M2M100Config, M2M100ForConditionalGeneration, M2M100Tokenizer model = M2M100ForConditionalGeneration.from_pretrained(""facebook/m2m100_418M"") tokenizer = M2M100Tokenizer.from_pretrained(""facebook/m2m100_418M"", src_lang=""en"", tgt_lang=""fr"") src_text = ""Life is like a box of chocolates."" tgt_text = ""La vie est comme une boîte de chocolat."" model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors=""pt"") loss = model(**model_inputs).loss # forward pass **Generation** M2M100 uses the `eos_token_id` as the `decoder_start_token_id` for generation with the target language id being forced as the first generated token. To force the target language id as the first generated token, pass the *forced_bos_token_id* parameter to the *generate* method. The following example shows how to translate between Hindi to French and Chinese to English using the *facebook/m2m100_418M* checkpoint. 
thon >>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer >>> hi_text = ""जीवन एक चॉकलेट बॉक्स की तरह है।"" >>> chinese_text = ""生活就像一盒巧克力。"" >>> model = M2M100ForConditionalGeneration.from_pretrained(""facebook/m2m100_418M"") >>> tokenizer = M2M100Tokenizer.from_pretrained(""facebook/m2m100_418M"") >>> # translate Hindi to French >>> tokenizer.src_lang = ""hi"" >>> encoded_hi = tokenizer(hi_text, return_tensors=""pt"") >>> generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id(""fr"")) >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) ""La vie est comme une boîte de chocolat."" >>> # translate Chinese to English >>> tokenizer.src_lang = ""zh"" >>> encoded_zh = tokenizer(chinese_text, return_tensors=""pt"") >>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id(""en"")) >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) ""Life is like a box of chocolate."" ## Resources - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## M2M100Config [[autodoc]] M2M100Config ## M2M100Tokenizer [[autodoc]] M2M100Tokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## M2M100Model [[autodoc]] M2M100Model - forward ## M2M100ForConditionalGeneration [[autodoc]] M2M100ForConditionalGeneration - forward " model_doc/perceiver.md," # Perceiver ## Overview The Perceiver IO model was proposed in [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira. Perceiver IO is a generalization of [Perceiver](https://arxiv.org/abs/2103.03206) to handle arbitrary outputs in addition to arbitrary inputs. The original Perceiver only produced a single classification label. In addition to classification labels, Perceiver IO can produce (for example) language, optical flow, and multimodal videos with audio. This is done using the same building blocks as the original Perceiver. The computational complexity of Perceiver IO is linear in the input and output size and the bulk of the processing occurs in the latent space, allowing us to process inputs and outputs that are much larger than can be handled by standard Transformers. This means, for example, Perceiver IO can do BERT-style masked language modeling directly using bytes instead of tokenized inputs. The abstract from the paper is the following: *The recently-proposed Perceiver model obtains good results on several domains (images, audio, multimodal, point clouds) while scaling linearly in compute and memory with the input size. While the Perceiver supports many kinds of inputs, it can only produce very simple outputs such as class scores. Perceiver IO overcomes this limitation without sacrificing the original's appealing properties by learning to flexibly query the model's latent space to produce outputs of arbitrary size and semantics. Perceiver IO still decouples model depth from data size and still scales linearly with data size, but now with respect to both input and output sizes. 
The full Perceiver IO model achieves strong results on tasks with highly structured output spaces, such as natural language and visual understanding, StarCraft II, and multi-task and multi-modal domains. As highlights, Perceiver IO matches a Transformer-based BERT baseline on the GLUE language benchmark without the need for input tokenization and achieves state-of-the-art performance on Sintel optical flow estimation.* Here's a TLDR explaining how Perceiver works: The main problem with the self-attention mechanism of the Transformer is that the time and memory requirements scale quadratically with the sequence length. Hence, models like BERT and RoBERTa are limited to a max sequence length of 512 tokens. Perceiver aims to solve this issue by, instead of performing self-attention on the inputs, perform it on a set of latent variables, and only use the inputs for cross-attention. In this way, the time and memory requirements don't depend on the length of the inputs anymore, as one uses a fixed amount of latent variables, like 256 or 512. These are randomly initialized, after which they are trained end-to-end using backpropagation. Internally, [`PerceiverModel`] will create the latents, which is a tensor of shape `(batch_size, num_latents, d_latents)`. One must provide `inputs` (which could be text, images, audio, you name it!) to the model, which it will use to perform cross-attention with the latents. The output of the Perceiver encoder is a tensor of the same shape. One can then, similar to BERT, convert the last hidden states of the latents to classification logits by averaging along the sequence dimension, and placing a linear layer on top of that to project the `d_latents` to `num_labels`. This was the idea of the original Perceiver paper. However, it could only output classification logits. In a follow-up work, PerceiverIO, they generalized it to let the model also produce outputs of arbitrary size. How, you might ask? The idea is actually relatively simple: one defines outputs of an arbitrary size, and then applies cross-attention with the last hidden states of the latents, using the outputs as queries, and the latents as keys and values. So let's say one wants to perform masked language modeling (BERT-style) with the Perceiver. As the Perceiver's input length will not have an impact on the computation time of the self-attention layers, one can provide raw bytes, providing `inputs` of length 2048 to the model. If one now masks out certain of these 2048 tokens, one can define the `outputs` as being of shape: `(batch_size, 2048, 768)`. Next, one performs cross-attention with the final hidden states of the latents to update the `outputs` tensor. After cross-attention, one still has a tensor of shape `(batch_size, 2048, 768)`. One can then place a regular language modeling head on top, to project the last dimension to the vocabulary size of the model, i.e. creating logits of shape `(batch_size, 2048, 262)` (as Perceiver uses a vocabulary size of 262 byte IDs). Perceiver IO architecture. Taken from the original paper This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/deepmind/deepmind-research/tree/master/perceiver). 
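To make the byte-level masked language modeling described above concrete, here is a minimal sketch with [`PerceiverForMaskedLM`]; the checkpoint name and the exact byte positions being masked are assumptions chosen for illustration:

```python
import torch
from transformers import PerceiverTokenizer, PerceiverForMaskedLM

# assumed checkpoint: the byte-level masked-LM Perceiver released by the authors
tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
model = PerceiverForMaskedLM.from_pretrained("deepmind/language-perceiver")

text = "This is an incomplete sentence where some words are missing."
inputs = tokenizer(text, padding="max_length", return_tensors="pt")

# mask " missing." at the byte level (the tokenizer works on raw UTF-8 bytes plus a few special tokens)
inputs.input_ids[0, 52:61] = tokenizer.mask_token_id

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, sequence_length, 262)

# greedily decode the masked byte positions
predicted_ids = logits[0, 52:61].argmax(-1)
print(tokenizer.decode(predicted_ids))
```

Because the tokenizer operates on raw bytes, the last dimension of the logits is the 262-entry byte vocabulary mentioned above rather than a word-piece vocabulary.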
Perceiver does **not** work with `torch.nn.DataParallel` due to a bug in PyTorch, see [issue #36035](https://github.com/pytorch/pytorch/issues/36035) ## Resources - The quickest way to get started with the Perceiver is by checking the [tutorial notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Perceiver). - Refer to the [blog post](https://huggingface.co/blog/perceiver) if you want to fully understand how the model works and is implemented in the library. Note that the models available in the library only showcase some examples of what you can do with the Perceiver. There are many more use cases, including question answering, named-entity recognition, object detection, audio classification, video classification, etc. - [Text classification task guide](../tasks/sequence_classification) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Image classification task guide](../tasks/image_classification) ## Perceiver specific outputs [[autodoc]] models.perceiver.modeling_perceiver.PerceiverModelOutput [[autodoc]] models.perceiver.modeling_perceiver.PerceiverDecoderOutput [[autodoc]] models.perceiver.modeling_perceiver.PerceiverMaskedLMOutput [[autodoc]] models.perceiver.modeling_perceiver.PerceiverClassifierOutput ## PerceiverConfig [[autodoc]] PerceiverConfig ## PerceiverTokenizer [[autodoc]] PerceiverTokenizer - __call__ ## PerceiverFeatureExtractor [[autodoc]] PerceiverFeatureExtractor - __call__ ## PerceiverImageProcessor [[autodoc]] PerceiverImageProcessor - preprocess ## PerceiverTextPreprocessor [[autodoc]] models.perceiver.modeling_perceiver.PerceiverTextPreprocessor ## PerceiverImagePreprocessor [[autodoc]] models.perceiver.modeling_perceiver.PerceiverImagePreprocessor ## PerceiverOneHotPreprocessor [[autodoc]] models.perceiver.modeling_perceiver.PerceiverOneHotPreprocessor ## PerceiverAudioPreprocessor [[autodoc]] models.perceiver.modeling_perceiver.PerceiverAudioPreprocessor ## PerceiverMultimodalPreprocessor [[autodoc]] models.perceiver.modeling_perceiver.PerceiverMultimodalPreprocessor ## PerceiverProjectionDecoder [[autodoc]] models.perceiver.modeling_perceiver.PerceiverProjectionDecoder ## PerceiverBasicDecoder [[autodoc]] models.perceiver.modeling_perceiver.PerceiverBasicDecoder ## PerceiverClassificationDecoder [[autodoc]] models.perceiver.modeling_perceiver.PerceiverClassificationDecoder ## PerceiverOpticalFlowDecoder [[autodoc]] models.perceiver.modeling_perceiver.PerceiverOpticalFlowDecoder ## PerceiverBasicVideoAutoencodingDecoder [[autodoc]] models.perceiver.modeling_perceiver.PerceiverBasicVideoAutoencodingDecoder ## PerceiverMultimodalDecoder [[autodoc]] models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder ## PerceiverProjectionPostprocessor [[autodoc]] models.perceiver.modeling_perceiver.PerceiverProjectionPostprocessor ## PerceiverAudioPostprocessor [[autodoc]] models.perceiver.modeling_perceiver.PerceiverAudioPostprocessor ## PerceiverClassificationPostprocessor [[autodoc]] models.perceiver.modeling_perceiver.PerceiverClassificationPostprocessor ## PerceiverMultimodalPostprocessor [[autodoc]] models.perceiver.modeling_perceiver.PerceiverMultimodalPostprocessor ## PerceiverModel [[autodoc]] PerceiverModel - forward ## PerceiverForMaskedLM [[autodoc]] PerceiverForMaskedLM - forward ## PerceiverForSequenceClassification [[autodoc]] PerceiverForSequenceClassification - forward ## PerceiverForImageClassificationLearned [[autodoc]] PerceiverForImageClassificationLearned - forward ## 
PerceiverForImageClassificationFourier [[autodoc]] PerceiverForImageClassificationFourier - forward ## PerceiverForImageClassificationConvProcessing [[autodoc]] PerceiverForImageClassificationConvProcessing - forward ## PerceiverForOpticalFlow [[autodoc]] PerceiverForOpticalFlow - forward ## PerceiverForMultimodalAutoencoding [[autodoc]] PerceiverForMultimodalAutoencoding - forward " model_doc/yolos.md," # YOLOS ## Overview The YOLOS model was proposed in [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu. YOLOS proposes to just leverage the plain [Vision Transformer (ViT)](vit) for object detection, inspired by DETR. It turns out that a base-sized encoder-only Transformer can also achieve 42 AP on COCO, similar to DETR and much more complex frameworks such as Faster R-CNN. The abstract from the paper is the following: *Can Transformer perform 2D object- and region-level recognition from a pure sequence-to-sequence perspective with minimal knowledge about the 2D spatial structure? To answer this question, we present You Only Look at One Sequence (YOLOS), a series of object detection models based on the vanilla Vision Transformer with the fewest possible modifications, region priors, as well as inductive biases of the target task. We find that YOLOS pre-trained on the mid-sized ImageNet-1k dataset only can already achieve quite competitive performance on the challenging COCO object detection benchmark, e.g., YOLOS-Base directly adopted from BERT-Base architecture can obtain 42.0 box AP on COCO val. We also discuss the impacts as well as limitations of current pre-train schemes and model scaling strategies for Transformer in vision through YOLOS.* YOLOS architecture. Taken from the original paper. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/hustvl/YOLOS). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with YOLOS. - All example notebooks illustrating inference + fine-tuning [`YolosForObjectDetection`] on a custom dataset can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/YOLOS). - See also: [Object detection task guide](../tasks/object_detection) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. Use [`YolosImageProcessor`] for preparing images (and optional targets) for the model. Contrary to [DETR](detr), YOLOS doesn't require a `pixel_mask` to be created. 
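As a concrete illustration of the preprocessing note above, here is a minimal inference sketch; the `hustvl/yolos-tiny` checkpoint name, the example image URL, and the 0.9 confidence threshold are assumptions chosen for illustration:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, YolosForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# assumed checkpoint name; any YOLOS detection checkpoint works the same way
processor = AutoImageProcessor.from_pretrained("hustvl/yolos-tiny")
model = YolosForObjectDetection.from_pretrained("hustvl/yolos-tiny")

inputs = processor(images=image, return_tensors="pt")  # note: no pixel_mask is created or needed
with torch.no_grad():
    outputs = model(**inputs)

# convert raw outputs to scores, labels, and (x_min, y_min, x_max, y_max) boxes in image coordinates
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```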
## YolosConfig [[autodoc]] YolosConfig ## YolosImageProcessor [[autodoc]] YolosImageProcessor - preprocess - pad - post_process_object_detection ## YolosFeatureExtractor [[autodoc]] YolosFeatureExtractor - __call__ - pad - post_process_object_detection ## YolosModel [[autodoc]] YolosModel - forward ## YolosForObjectDetection [[autodoc]] YolosForObjectDetection - forward " model_doc/vision-encoder-decoder.md," # Vision Encoder Decoder Models ## Overview The [`VisionEncoderDecoderModel`] can be used to initialize an image-to-text model with any pretrained Transformer-based vision model as the encoder (*e.g.* [ViT](vit), [BEiT](beit), [DeiT](deit), [Swin](swin)) and any pretrained language model as the decoder (*e.g.* [RoBERTa](roberta), [GPT2](gpt2), [BERT](bert), [DistilBERT](distilbert)). The effectiveness of initializing image-to-text-sequence models with pretrained checkpoints has been shown in (for example) [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. After such a [`VisionEncoderDecoderModel`] has been trained/fine-tuned, it can be saved/loaded just like any other models (see the examples below for more information). An example application is image captioning, in which the encoder is used to encode the image, after which an autoregressive language model generates the caption. Another example is optical character recognition. Refer to [TrOCR](trocr), which is an instance of [`VisionEncoderDecoderModel`]. ## Randomly initializing `VisionEncoderDecoderModel` from model configurations. [`VisionEncoderDecoderModel`] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [`ViTModel`] configuration for the encoder and the default [`BertForCausalLM`] configuration for the decoder. thon >>> from transformers import BertConfig, ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel >>> config_encoder = ViTConfig() >>> config_decoder = BertConfig() >>> config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder) >>> model = VisionEncoderDecoderModel(config=config) ## Initialising `VisionEncoderDecoderModel` from a pretrained encoder and a pretrained decoder. [`VisionEncoderDecoderModel`] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based vision model, *e.g.* [Swin](swin), can serve as the encoder and both pretrained auto-encoding models, *e.g.* BERT, pretrained causal language models, *e.g.* GPT2, as well as the pretrained decoder part of sequence-to-sequence models, *e.g.* decoder of BART, can be used as the decoder. Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized. Initializing [`VisionEncoderDecoderModel`] from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in [the *Warm-starting-encoder-decoder blog post*](https://huggingface.co/blog/warm-starting-encoder-decoder). To do so, the `VisionEncoderDecoderModel` class provides a [`VisionEncoderDecoderModel.from_encoder_decoder_pretrained`] method. 
```python
>>> from transformers import VisionEncoderDecoderModel

>>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     ""microsoft/swin-base-patch4-window7-224-in22k"", ""bert-base-uncased""
... )
```

## Loading an existing `VisionEncoderDecoderModel` checkpoint and performing inference.

To load fine-tuned checkpoints of the `VisionEncoderDecoderModel` class, [`VisionEncoderDecoderModel`] provides the `from_pretrained()` method just like any other model architecture in Transformers. To perform inference, one uses the [`generate`] method, which autoregressively generates text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling.

```python
>>> import requests
>>> from PIL import Image

>>> from transformers import GPT2TokenizerFast, ViTImageProcessor, VisionEncoderDecoderModel

>>> # load a fine-tuned image captioning model and corresponding tokenizer and image processor
>>> model = VisionEncoderDecoderModel.from_pretrained(""nlpconnect/vit-gpt2-image-captioning"")
>>> tokenizer = GPT2TokenizerFast.from_pretrained(""nlpconnect/vit-gpt2-image-captioning"")
>>> image_processor = ViTImageProcessor.from_pretrained(""nlpconnect/vit-gpt2-image-captioning"")

>>> # let's perform inference on an image
>>> url = ""http://images.cocodataset.org/val2017/000000039769.jpg""
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> pixel_values = image_processor(image, return_tensors=""pt"").pixel_values

>>> # autoregressively generate caption (uses greedy decoding by default)
>>> generated_ids = model.generate(pixel_values)
>>> generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
>>> print(generated_text)
a cat laying on a blanket next to a cat laying on a bed
```

## Loading a PyTorch checkpoint into `TFVisionEncoderDecoderModel`.

[`TFVisionEncoderDecoderModel.from_pretrained`] currently doesn't support initializing the model from a PyTorch checkpoint. Passing `from_pt=True` to this method will throw an exception. If there are only PyTorch checkpoints for a particular vision encoder-decoder model, a workaround is:

```python
>>> from transformers import VisionEncoderDecoderModel, TFVisionEncoderDecoderModel

>>> _model = VisionEncoderDecoderModel.from_pretrained(""nlpconnect/vit-gpt2-image-captioning"")

>>> _model.encoder.save_pretrained(""./encoder"")
>>> _model.decoder.save_pretrained(""./decoder"")

>>> model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     ""./encoder"", ""./decoder"", encoder_from_pt=True, decoder_from_pt=True
... )
>>> # This is only for copying some specific attributes of this particular model.
>>> model.config = _model.config
```

## Training

Once the model is created, it can be fine-tuned similarly to BART, T5 or any other encoder-decoder model on a dataset of (image, text) pairs. As you can see, only two inputs are required for the model in order to compute a loss: `pixel_values` (which are the images) and `labels` (which are the `input_ids` of the encoded target sequence).
thon >>> from transformers import ViTImageProcessor, BertTokenizer, VisionEncoderDecoderModel >>> from datasets import load_dataset >>> image_processor = ViTImageProcessor.from_pretrained(""google/vit-base-patch16-224-in21k"") >>> tokenizer = BertTokenizer.from_pretrained(""bert-base-uncased"") >>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained( ""google/vit-base-patch16-224-in21k"", ""bert-base-uncased"" ) >>> model.config.decoder_start_token_id = tokenizer.cls_token_id >>> model.config.pad_token_id = tokenizer.pad_token_id >>> dataset = load_dataset(""huggingface/cats-image"") >>> image = dataset[""test""][""image""][0] >>> pixel_values = image_processor(image, return_tensors=""pt"").pixel_values >>> labels = tokenizer( ""an image of two cats chilling on a couch"", return_tensors=""pt"", ).input_ids >>> # the forward function automatically creates the correct decoder_input_ids >>> loss = model(pixel_values=pixel_values, labels=labels).loss This model was contributed by [nielsr](https://github.com/nielsrogge). This model's TensorFlow and Flax versions were contributed by [ydshieh](https://github.com/ydshieh). ## VisionEncoderDecoderConfig [[autodoc]] VisionEncoderDecoderConfig ## VisionEncoderDecoderModel [[autodoc]] VisionEncoderDecoderModel - forward - from_encoder_decoder_pretrained ## TFVisionEncoderDecoderModel [[autodoc]] TFVisionEncoderDecoderModel - call - from_encoder_decoder_pretrained ## FlaxVisionEncoderDecoderModel [[autodoc]] FlaxVisionEncoderDecoderModel - __call__ - from_encoder_decoder_pretrained " model_doc/codegen.md," # CodeGen ## Overview The CodeGen model was proposed in [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. CodeGen is an autoregressive language model for program synthesis trained sequentially on [The Pile](https://pile.eleuther.ai/), BigQuery, and BigPython. The abstract from the paper is the following: *Program synthesis strives to generate a computer program as a solution to a given problem specification. We propose a conversational program synthesis approach via large language models, which addresses the challenges of searching over a vast program space and user intent specification faced in prior approaches. Our new approach casts the process of writing a specification and program as a multi-turn conversation between a user and a system. It treats program synthesis as a sequence prediction problem, in which the specification is expressed in natural language and the desired program is conditionally sampled. We train a family of large language models, called CodeGen, on natural language and programming language data. With weak supervision in the data and the scaling up of data size and model size, conversational capacities emerge from the simple autoregressive language modeling. To study the model behavior on conversational program synthesis, we develop a multi-turn programming benchmark (MTPB), where solving each problem requires multi-step synthesis via multi-turn conversation between the user and the model. Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm. In addition, our model CodeGen (with up to 16B parameters trained on TPU-v4) outperforms OpenAI's Codex on the HumanEval benchmark. 
We make the training library JaxFormer including checkpoints available as open source contribution: [this https URL](https://github.com/salesforce/codegen).* This model was contributed by [Hiroaki Hayashi](https://huggingface.co/rooa). The original code can be found [here](https://github.com/salesforce/codegen). ## Checkpoint Naming * CodeGen model [checkpoints](https://huggingface.co/models?other=codegen) are available on different pre-training data with variable sizes. * The format is: `Salesforce/codegen-{size}-{data}`, where * `size`: `350M`, `2B`, `6B`, `16B` * `data`: * `nl`: Pre-trained on the Pile * `multi`: Initialized with `nl`, then further pre-trained on multiple programming languages data * `mono`: Initialized with `multi`, then further pre-trained on Python data * For example, `Salesforce/codegen-350M-mono` offers a 350 million-parameter checkpoint pre-trained sequentially on the Pile, multiple programming languages, and Python. ## Usage example thon >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> checkpoint = ""Salesforce/codegen-350M-mono"" >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> text = ""def hello_world():"" >>> completion = model.generate(**tokenizer(text, return_tensors=""pt"")) >>> print(tokenizer.decode(completion[0])) def hello_world(): print(""Hello World"") hello_world() ## Resources - [Causal language modeling task guide](../tasks/language_modeling) ## CodeGenConfig [[autodoc]] CodeGenConfig - all ## CodeGenTokenizer [[autodoc]] CodeGenTokenizer - save_vocabulary ## CodeGenTokenizerFast [[autodoc]] CodeGenTokenizerFast ## CodeGenModel [[autodoc]] CodeGenModel - forward ## CodeGenForCausalLM [[autodoc]] CodeGenForCausalLM - forward " model_doc/dpt.md," # DPT ## Overview The DPT model was proposed in [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun. DPT is a model that leverages the [Vision Transformer (ViT)](vit) as backbone for dense prediction tasks like semantic segmentation and depth estimation. The abstract from the paper is the following: *We introduce dense vision transformers, an architecture that leverages vision transformers in place of convolutional networks as a backbone for dense prediction tasks. We assemble tokens from various stages of the vision transformer into image-like representations at various resolutions and progressively combine them into full-resolution predictions using a convolutional decoder. The transformer backbone processes representations at a constant and relatively high resolution and has a global receptive field at every stage. These properties allow the dense vision transformer to provide finer-grained and more globally coherent predictions when compared to fully-convolutional networks. Our experiments show that this architecture yields substantial improvements on dense prediction tasks, especially when a large amount of training data is available. For monocular depth estimation, we observe an improvement of up to 28% in relative performance when compared to a state-of-the-art fully-convolutional network. When applied to semantic segmentation, dense vision transformers set a new state of the art on ADE20K with 49.02% mIoU. We further show that the architecture can be fine-tuned on smaller datasets such as NYUv2, KITTI, and Pascal Context where it also sets the new state of the art.* DPT architecture. 
Taken from the original paper. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/isl-org/DPT). ## Usage tips DPT is compatible with the [`AutoBackbone`] class. This allows to use the DPT framework with various computer vision backbones available in the library, such as [`VitDetBackbone`] or [`Dinov2Backbone`]. One can create it as follows: thon from transformers import Dinov2Config, DPTConfig, DPTForDepthEstimation # initialize with a Transformer-based backbone such as DINOv2 # in that case, we also specify `reshape_hidden_states=False` to get feature maps of shape (batch_size, num_channels, height, width) backbone_config = Dinov2Config.from_pretrained(""facebook/dinov2-base"", out_features=[""stage1"", ""stage2"", ""stage3"", ""stage4""], reshape_hidden_states=False) config = DPTConfig(backbone_config=backbone_config) model = DPTForDepthEstimation(config=config) ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DPT. - Demo notebooks for [`DPTForDepthEstimation`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DPT). - [Semantic segmentation task guide](../tasks/semantic_segmentation) - [Monocular depth estimation task guide](../tasks/monocular_depth_estimation) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## DPTConfig [[autodoc]] DPTConfig ## DPTFeatureExtractor [[autodoc]] DPTFeatureExtractor - __call__ - post_process_semantic_segmentation ## DPTImageProcessor [[autodoc]] DPTImageProcessor - preprocess - post_process_semantic_segmentation ## DPTModel [[autodoc]] DPTModel - forward ## DPTForDepthEstimation [[autodoc]] DPTForDepthEstimation - forward ## DPTForSemanticSegmentation [[autodoc]] DPTForSemanticSegmentation - forward " main_classes/deepspeed.md," # DeepSpeed Integration [DeepSpeed](https://github.com/microsoft/DeepSpeed) implements everything described in the [ZeRO paper](https://arxiv.org/abs/1910.02054). Currently it provides full support for: 1. Optimizer state partitioning (ZeRO stage 1) 2. Gradient partitioning (ZeRO stage 2) 3. Parameter partitioning (ZeRO stage 3) 4. Custom mixed precision training handling 5. A range of fast CUDA-extension-based optimizers 6. ZeRO-Offload to CPU and NVMe ZeRO-Offload has its own dedicated paper: [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://arxiv.org/abs/2101.06840). And NVMe-support is described in the paper [ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning](https://arxiv.org/abs/2104.07857). DeepSpeed ZeRO-2 is primarily used only for training, as its features are of no use to inference. DeepSpeed ZeRO-3 can be used for inference as well, since it allows huge models to be loaded on multiple GPUs, which won't be possible on a single GPU. 🤗 Transformers integrates [DeepSpeed](https://github.com/microsoft/DeepSpeed) via 2 options: 1. Integration of the core DeepSpeed features via [`Trainer`]. This is an everything-done-for-you type of integration - just supply your custom config file or use our template and you have nothing else to do. Most of this document is focused on this feature. 2. 
If you don't use [`Trainer`] and want to use your own Trainer where you integrated DeepSpeed yourself, core functionality functions like `from_pretrained` and `from_config` include integration of essential parts of DeepSpeed like `zero.Init` for ZeRO stage 3 and higher. To tap into this feature read the docs on [non-Trainer DeepSpeed Integration](#nontrainer-deepspeed-integration). What is integrated: Training: 1. DeepSpeed ZeRO training supports the full ZeRO stages 1, 2 and 3 with ZeRO-Infinity (CPU and NVME offload). Inference: 1. DeepSpeed ZeRO Inference supports ZeRO stage 3 with ZeRO-Infinity. It uses the same ZeRO protocol as training, but it doesn't use an optimizer and a lr scheduler and only stage 3 is relevant. For more details see: [zero-inference](#zero-inference). There is also DeepSpeed Inference - this is a totally different technology which uses Tensor Parallelism instead of ZeRO (coming soon). ## Trainer Deepspeed Integration ### Installation Install the library via pypi: ```bash pip install deepspeed or via `transformers`' `extras`: ```bash pip install transformers[deepspeed] or find more details on [the DeepSpeed's GitHub page](https://github.com/microsoft/deepspeed#installation) and [advanced install](https://www.deepspeed.ai/tutorials/advanced-install/). If you're still struggling with the build, first make sure to read [CUDA Extension Installation Notes](trainer#cuda-extension-installation-notes). If you don't prebuild the extensions and rely on them to be built at run time and you tried all of the above solutions to no avail, the next thing to try is to pre-build the modules before installing them. To make a local build for DeepSpeed: ```bash git clone https://github.com/microsoft/DeepSpeed/ cd DeepSpeed rm -rf build TORCH_CUDA_ARCH_LIST=""8.6"" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 pip install . \ --global-option=""build_ext"" --global-option=""-j8"" --no-cache -v \ --disable-pip-version-check 2>&1 | tee build.log If you intend to use NVMe offload you will also need to include `DS_BUILD_AIO=1` in the instructions above (and also install *libaio-dev* system-wide). Edit `TORCH_CUDA_ARCH_LIST` to insert the code for the architectures of the GPU cards you intend to use. Assuming all your cards are the same you can get the arch via: ```bash CUDA_VISIBLE_DEVICES=0 python -c ""import torch; print(torch.cuda.get_device_capability())"" So if you get `8, 6`, then use `TORCH_CUDA_ARCH_LIST=""8.6""`. If you have multiple different cards, you can list all of them like so `TORCH_CUDA_ARCH_LIST=""6.1;8.6""` If you need to use the same setup on multiple machines, make a binary wheel: ```bash git clone https://github.com/microsoft/DeepSpeed/ cd DeepSpeed rm -rf build TORCH_CUDA_ARCH_LIST=""8.6"" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 \ python setup.py build_ext -j8 bdist_wheel it will generate something like `dist/deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl` which now you can install as `pip install deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl` locally or on any other machine. Again, remember to ensure to adjust `TORCH_CUDA_ARCH_LIST` to the target architectures. You can find the complete list of NVIDIA GPUs and their corresponding **Compute Capabilities** (same as arch in this context) [here](https://developer.nvidia.com/cuda-gpus). You can check the archs pytorch was built with using: ```bash python -c ""import torch; print(torch.cuda.get_arch_list())"" Here is how to find out the arch for one of the installed GPUs. 
For example, for GPU 0: ```bash CUDA_VISIBLE_DEVICES=0 python -c ""import torch; \ print(torch.cuda.get_device_properties(torch.device('cuda')))"" If the output is: ```bash _CudaDeviceProperties(name='GeForce RTX 3090', major=8, minor=6, total_memory=24268MB, multi_processor_count=82) then you know that this card's arch is `8.6`. You can also leave `TORCH_CUDA_ARCH_LIST` out completely and then the build program will automatically query the architecture of the GPUs the build is made on. This may or may not match the GPUs on the target machines, that's why it's best to specify the desired archs explicitly. If after trying everything suggested you still encounter build issues, please, proceed with the GitHub Issue of [Deepspeed](https://github.com/microsoft/DeepSpeed/issues), ### Deployment with multiple GPUs To deploy the DeepSpeed integration adjust the [`Trainer`] command line arguments to include a new argument `--deepspeed ds_config.json`, where `ds_config.json` is the DeepSpeed configuration file as documented [here](https://www.deepspeed.ai/docs/config-json/). The file naming is up to you. It's recommended to use DeepSpeed's `add_config_arguments` utility to add the necessary command line arguments to your code. For more information please see [DeepSpeed's Argument Parsing](https://deepspeed.readthedocs.io/en/latest/initialize.html#argument-parsing) doc. You can use a launcher of your choice here. You can continue using the pytorch launcher: ```bash torch.distributed.run --nproc_per_node=2 your_program.py --deepspeed ds_config.json or use the launcher provided by `deepspeed`: ```bash deepspeed --num_gpus=2 your_program.py --deepspeed ds_config.json As you can see the arguments aren't the same, but for most needs either of them works. The full details on how to configure various nodes and GPUs can be found [here](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node). When you use the `deepspeed` launcher and you want to use all available gpus you can just omit the `--num_gpus` flag. Here is an example of running `run_translation.py` under DeepSpeed deploying all available GPUs: ```bash deepspeed examples/pytorch/translation/run_translation.py \ --deepspeed tests/deepspeed/ds_config_zero3.json \ --model_name_or_path t5-small --per_device_train_batch_size 1 \ --output_dir output_dir --overwrite_output_dir --fp16 \ --do_train --max_train_samples 500 --num_train_epochs 1 \ --dataset_name wmt16 --dataset_config ""ro-en"" \ --source_lang en --target_lang ro Note that in the DeepSpeed documentation you are likely to see `--deepspeed --deepspeed_config ds_config.json` - i.e. two DeepSpeed-related arguments, but for the sake of simplicity, and since there are already so many arguments to deal with, we combined the two into a single argument. For some practical usage examples, please, see this [post](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400). 
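If you are launching your own script rather than one of the example scripts, the only requirement is that it parses the `--deepspeed` argument into [`TrainingArguments`], as the example scripts do via [`HfArgumentParser`]. Below is a minimal sketch of what such a `your_program.py` could look like; the model, dataset and preprocessing choices are illustrative placeholders, not part of the official examples:

```python
# minimal_launcher_script.py - a hypothetical stand-in for `your_program.py` above.
# Launch it with either launcher shown earlier, e.g.:
#   deepspeed --num_gpus=2 minimal_launcher_script.py --deepspeed ds_config.json --output_dir output_dir
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    HfArgumentParser,
    Trainer,
    TrainingArguments,
)


def main():
    # HfArgumentParser turns --deepspeed, --output_dir, --fp16, ... into TrainingArguments
    (training_args,) = HfArgumentParser(TrainingArguments).parse_args_into_dataclasses()

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

    # any small dataset works for a smoke test; MRPC is used here purely as a placeholder
    train_dataset = load_dataset("glue", "mrpc", split="train").map(
        lambda examples: tokenizer(
            examples["sentence1"], examples["sentence2"], truncation=True, max_length=128
        ),
        batched=True,
    )

    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset,
        tokenizer=tokenizer,  # enables dynamic padding via the default data collator
    )
    trainer.train()


if __name__ == "__main__":
    main()
```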
### Deployment with one GPU To deploy DeepSpeed with one GPU adjust the [`Trainer`] command line arguments as follows: ```bash deepspeed --num_gpus=1 examples/pytorch/translation/run_translation.py \ --deepspeed tests/deepspeed/ds_config_zero2.json \ --model_name_or_path t5-small --per_device_train_batch_size 1 \ --output_dir output_dir --overwrite_output_dir --fp16 \ --do_train --max_train_samples 500 --num_train_epochs 1 \ --dataset_name wmt16 --dataset_config ""ro-en"" \ --source_lang en --target_lang ro This is almost the same as with multiple GPUs, but here we tell DeepSpeed explicitly to use just one GPU via `--num_gpus=1`. By default, DeepSpeed deploys all GPUs it can see on the given node. If you have only 1 GPU to start with, then you don't need this argument. The following [documentation](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node) discusses the launcher options. Why would you want to use DeepSpeed with just one GPU? 1. It has a ZeRO-offload feature which can delegate some computations and memory to the host's CPU and RAM, and thus leave more GPU resources for the model's needs - e.g. a larger batch size, or fitting a very big model which normally won't fit. 2. It provides a smart GPU memory management system that minimizes memory fragmentation, which again allows you to fit bigger models and data batches. While we are going to discuss the configuration in detail next, the key to getting a huge improvement on a single GPU with DeepSpeed is to have at least the following configuration in the configuration file: ```json { ""zero_optimization"": { ""stage"": 2, ""offload_optimizer"": { ""device"": ""cpu"", ""pin_memory"": true }, ""allgather_partitions"": true, ""allgather_bucket_size"": 2e8, ""reduce_scatter"": true, ""reduce_bucket_size"": 2e8, ""overlap_comm"": true, ""contiguous_gradients"": true } } which enables optimizer offload and some other important features. You may experiment with the buffer sizes; you will find more details in the discussion below. For a practical usage example of this type of deployment, please, see this [post](https://github.com/huggingface/transformers/issues/8771#issuecomment-759176685). You may also try ZeRO-3 with CPU and NVMe offload as explained further in this document. Notes: - if you need to run on a specific GPU other than GPU 0, you can't use `CUDA_VISIBLE_DEVICES` to limit the visible scope of available GPUs. Instead, you have to use the following syntax: ```bash deepspeed --include localhost:1 examples/pytorch/translation/run_translation.py In this example, we tell DeepSpeed to use GPU 1 (the second GPU). ### Deployment with multiple Nodes The information in this section isn't specific to the DeepSpeed integration and is applicable to any multi-node program. But DeepSpeed provides a `deepspeed` launcher that is easier to use than other launchers unless you are in a SLURM environment. For the duration of this section let's assume that you have 2 nodes with 8 GPUs each. You can reach the first node with `ssh hostname1` and the second node with `ssh hostname2`, and both must be able to reach each other via ssh locally without a password. Of course, you will need to replace these host (node) names with the actual host names you are working with.
#### The torch.distributed.run launcher For example, to use `torch.distributed.run`, you could do: ```bash python -m torch.distributed.run --nproc_per_node=8 --nnode=2 --node_rank=0 --master_addr=hostname1 \ --master_port=9901 your_program.py --deepspeed ds_config.json You have to ssh to each node and run this same command on each one of them! There is no rush, the launcher will wait until both nodes will synchronize. For more information please see [torchrun](https://pytorch.org/docs/stable/elastic/run.html). Incidentally, this is also the launcher that replaced `torch.distributed.launch` a few pytorch versions back. #### The deepspeed launcher To use the `deepspeed` launcher instead, you have to first create a `hostfile` file: hostname1 slots=8 hostname2 slots=8 and then you can launch it as: ```bash deepspeed --num_gpus 8 --num_nodes 2 --hostfile hostfile --master_addr hostname1 --master_port=9901 \ your_program.py --deepspeed ds_config.json Unlike the `torch.distributed.run` launcher, `deepspeed` will automatically launch this command on both nodes! For more information please see [Resource Configuration (multi-node)](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node). #### Launching in a SLURM environment In the SLURM environment the following approach can be used. The following is a slurm script `launch.slurm` which you will need to adapt it to your specific SLURM environment. ```bash #SBATCH --job-name=test-nodes # name #SBATCH --nodes=2 # nodes #SBATCH --ntasks-per-node=1 # crucial - only 1 task per dist per node! #SBATCH --cpus-per-task=10 # number of cores per tasks #SBATCH --gres=gpu:8 # number of gpus #SBATCH --time 20:00:00 # maximum execution time (HH:MM:SS) #SBATCH --output=%x-%j.out # output file name export GPUS_PER_NODE=8 export MASTER_ADDR=$(scontrol show hostnames $SLURM_JOB_NODELIST | head -n 1) export MASTER_PORT=9901 srun --jobid $SLURM_JOBID bash -c 'python -m torch.distributed.run \ --nproc_per_node $GPUS_PER_NODE --nnodes $SLURM_NNODES --node_rank $SLURM_PROCID \ --master_addr $MASTER_ADDR --master_port $MASTER_PORT \ your_program.py --deepspeed ds_config.json' All is left is to schedule it to run: ```bash sbatch launch.slurm `srun` will take care of launching the program simultaneously on all nodes. #### Use of Non-shared filesystem By default DeepSpeed expects that a multi-node environment uses a shared storage. If this is not the case and each node can only see the local filesystem, you need to adjust the config file to include a [`checkpoint`_section](https://www.deepspeed.ai/docs/config-json/#checkpoint-options) with the following setting: ```json { ""checkpoint"": { ""use_node_local_storage"": true } } Alternatively, you can also use the [`Trainer`]'s `--save_on_each_node` argument, and the above config will be added automatically for you. ### Deployment in Notebooks The problem with running notebook cells as a script is that there is no normal `deepspeed` launcher to rely on, so under certain setups we have to emulate it. If you're using only 1 GPU, here is how you'd have to adjust your training code in the notebook to use DeepSpeed. thon # DeepSpeed requires a distributed environment even when only one process is used. 
# This emulates a launcher in the notebook import os os.environ[""MASTER_ADDR""] = ""localhost"" os.environ[""MASTER_PORT""] = ""9994"" # modify if RuntimeError: Address already in use os.environ[""RANK""] = ""0"" os.environ[""LOCAL_RANK""] = ""0"" os.environ[""WORLD_SIZE""] = ""1"" # Now proceed as normal, plus pass the deepspeed config file training_args = TrainingArguments(, deepspeed=""ds_config_zero3.json"") trainer = Trainer() trainer.train() Note: `` stands for the normal arguments that you'd pass to the functions. If you want to use more than 1 GPU, you must use a multi-process environment for DeepSpeed to work. That is, you have to use the launcher for that purpose and this cannot be accomplished by emulating the distributed environment presented at the beginning of this section. If you want to create the config file on the fly in the notebook in the current directory, you could have a dedicated cell with: thon no-style %%bash cat <<'EOT' > ds_config_zero3.json { ""fp16"": { ""enabled"": ""auto"", ""loss_scale"": 0, ""loss_scale_window"": 1000, ""initial_scale_power"": 16, ""hysteresis"": 2, ""min_loss_scale"": 1 }, ""optimizer"": { ""type"": ""AdamW"", ""params"": { ""lr"": ""auto"", ""betas"": ""auto"", ""eps"": ""auto"", ""weight_decay"": ""auto"" } }, ""scheduler"": { ""type"": ""WarmupLR"", ""params"": { ""warmup_min_lr"": ""auto"", ""warmup_max_lr"": ""auto"", ""warmup_num_steps"": ""auto"" } }, ""zero_optimization"": { ""stage"": 3, ""offload_optimizer"": { ""device"": ""cpu"", ""pin_memory"": true }, ""offload_param"": { ""device"": ""cpu"", ""pin_memory"": true }, ""overlap_comm"": true, ""contiguous_gradients"": true, ""sub_group_size"": 1e9, ""reduce_bucket_size"": ""auto"", ""stage3_prefetch_bucket_size"": ""auto"", ""stage3_param_persistence_threshold"": ""auto"", ""stage3_max_live_parameters"": 1e9, ""stage3_max_reuse_distance"": 1e9, ""stage3_gather_16bit_weights_on_model_save"": true }, ""gradient_accumulation_steps"": ""auto"", ""gradient_clipping"": ""auto"", ""steps_per_print"": 2000, ""train_batch_size"": ""auto"", ""train_micro_batch_size_per_gpu"": ""auto"", ""wall_clock_breakdown"": false } EOT If the training script is in a normal file and not in the notebook cells, you can launch `deepspeed` normally via shell from a cell. For example, to use `run_translation.py` you would launch it with: thon no-style !git clone https://github.com/huggingface/transformers !cd transformers; deepspeed examples/pytorch/translation/run_translation.py or with `%%bash` magic, where you can write a multi-line code for the shell program to run: thon no-style %%bash git clone https://github.com/huggingface/transformers cd transformers deepspeed examples/pytorch/translation/run_translation.py In such case you don't need any of the code presented at the beginning of this section. Note: While `%%bash` magic is neat, but currently it buffers the output so you won't see the logs until the process completes. ### Configuration For the complete guide to the DeepSpeed configuration options that can be used in its configuration file please refer to the [following documentation](https://www.deepspeed.ai/docs/config-json/). You can find dozens of DeepSpeed configuration examples that address various practical needs in [the DeepSpeedExamples repo](https://github.com/microsoft/DeepSpeedExamples): ```bash git clone https://github.com/microsoft/DeepSpeedExamples cd DeepSpeedExamples find . -name '*json' Continuing the code from above, let's say you're looking to configure the Lamb optimizer. 
So you can search through the example `.json` files with: ```bash grep -i Lamb $(find . -name '*json') Some more examples are to be found in the [main repo](https://github.com/microsoft/DeepSpeed) as well. When using DeepSpeed you always need to supply a DeepSpeed configuration file, yet some configuration parameters have to be configured via the command line. You will find the nuances in the rest of this guide. To get an idea of what DeepSpeed configuration file looks like, here is one that activates ZeRO stage 2 features, including optimizer states cpu offload, uses `AdamW` optimizer and `WarmupLR` scheduler and will enable mixed precision training if `--fp16` is passed: ```json { ""fp16"": { ""enabled"": ""auto"", ""loss_scale"": 0, ""loss_scale_window"": 1000, ""initial_scale_power"": 16, ""hysteresis"": 2, ""min_loss_scale"": 1 }, ""optimizer"": { ""type"": ""AdamW"", ""params"": { ""lr"": ""auto"", ""betas"": ""auto"", ""eps"": ""auto"", ""weight_decay"": ""auto"" } }, ""scheduler"": { ""type"": ""WarmupLR"", ""params"": { ""warmup_min_lr"": ""auto"", ""warmup_max_lr"": ""auto"", ""warmup_num_steps"": ""auto"" } }, ""zero_optimization"": { ""stage"": 2, ""offload_optimizer"": { ""device"": ""cpu"", ""pin_memory"": true }, ""allgather_partitions"": true, ""allgather_bucket_size"": 2e8, ""overlap_comm"": true, ""reduce_scatter"": true, ""reduce_bucket_size"": 2e8, ""contiguous_gradients"": true }, ""gradient_accumulation_steps"": ""auto"", ""gradient_clipping"": ""auto"", ""train_batch_size"": ""auto"", ""train_micro_batch_size_per_gpu"": ""auto"", } When you execute the program, DeepSpeed will log the configuration it received from the [`Trainer`] to the console, so you can see exactly what was the final configuration passed to it. ### Passing Configuration As discussed in this document normally the DeepSpeed configuration is passed as a path to a json file, but if you're not using the command line interface to configure the training, and instead instantiate the [`Trainer`] via [`TrainingArguments`] then for the `deepspeed` argument you can pass a nested `dict`. This allows you to create the configuration on the fly and doesn't require you to write it to the file system before passing it to [`TrainingArguments`]. To summarize you can do: thon TrainingArguments(, deepspeed=""/path/to/ds_config.json"") or: thon ds_config_dict = dict(scheduler=scheduler_params, optimizer=optimizer_params) TrainingArguments(, deepspeed=ds_config_dict) ### Shared Configuration This section is a must-read Some configuration values are required by both the [`Trainer`] and DeepSpeed to function correctly, therefore, to prevent conflicting definitions, which could lead to hard to detect errors, we chose to configure those via the [`Trainer`] command line arguments. Additionally, some configuration values are derived automatically based on the model's configuration, so instead of remembering to manually adjust multiple values, it's the best to let the [`Trainer`] do the majority of configuration for you. Therefore, in the rest of this guide you will find a special configuration value: `auto`, which when set will be automatically replaced with the correct or most efficient value. Please feel free to choose to ignore this recommendation and set the values explicitly, in which case be very careful that your the [`Trainer`] arguments and DeepSpeed configurations agree. For example, are you using the same learning rate, or batch size, or gradient accumulation settings? 
if these mismatch the training may fail in very difficult to detect ways. You have been warned. There are multiple other values that are specific to DeepSpeed-only and those you will have to set manually to suit your needs. In your own programs, you can also use the following approach if you'd like to modify the DeepSpeed config as a master and configure [`TrainingArguments`] based on that. The steps are: 1. Create or load the DeepSpeed configuration to be used as a master configuration 2. Create the [`TrainingArguments`] object based on these values Do note that some values, such as `scheduler.params.total_num_steps` are calculated by [`Trainer`] during `train`, but you can of course do the math yourself. ### ZeRO [Zero Redundancy Optimizer (ZeRO)](https://www.deepspeed.ai/tutorials/zero/) is the workhorse of DeepSpeed. It supports 3 different levels (stages) of optimization. The first one is not quite interesting for scalability purposes, therefore this document focuses on stages 2 and 3. Stage 3 is further improved by the latest addition of ZeRO-Infinity. You will find more indepth information in the DeepSpeed documentation. The `zero_optimization` section of the configuration file is the most important part ([docs](https://www.deepspeed.ai/docs/config-json/#zero-optimizations-for-fp16-training)), since that is where you define which ZeRO stages you want to enable and how to configure them. You will find the explanation for each parameter in the DeepSpeed docs. This section has to be configured exclusively via DeepSpeed configuration - the [`Trainer`] provides no equivalent command line arguments. Note: currently DeepSpeed doesn't validate parameter names, so if you misspell any, it'll use the default setting for the parameter that got misspelled. You can watch the DeepSpeed engine start up log messages to see what values it is going to use. #### ZeRO-2 Config The following is an example of configuration for ZeRO stage 2: ```json { ""zero_optimization"": { ""stage"": 2, ""offload_optimizer"": { ""device"": ""cpu"", ""pin_memory"": true }, ""allgather_partitions"": true, ""allgather_bucket_size"": 5e8, ""overlap_comm"": true, ""reduce_scatter"": true, ""reduce_bucket_size"": 5e8, ""contiguous_gradients"": true } } **Performance tuning:** - enabling `offload_optimizer` should reduce GPU RAM usage (it requires `""stage"": 2`) - `""overlap_comm"": true` trades off increased GPU RAM usage to lower all-reduce latency. `overlap_comm` uses 4.5x the `allgather_bucket_size` and `reduce_bucket_size` values. So if they are set to 5e8, this requires a 9GB footprint (`5e8 x 2Bytes x 2 x 4.5`). Therefore, if you have a GPU with 8GB or less RAM, to avoid getting OOM-errors you will need to reduce those parameters to about `2e8`, which would require 3.6GB. You will want to do the same on larger capacity GPU as well, if you're starting to hit OOM. - when reducing these buffers you're trading communication speed to avail more GPU RAM. The smaller the buffer size is, the slower the communication gets, and the more GPU RAM will be available to other tasks. So if a bigger batch size is important, getting a slightly slower training time could be a good trade. Additionally, `deepspeed==0.4.4` added a new option `round_robin_gradients` which you can enable with: ```json { ""zero_optimization"": { ""round_robin_gradients"": true } } This is a stage 2 optimization for CPU offloading that parallelizes gradient copying to CPU memory among ranks by fine-grained gradient partitioning. 
Performance benefit grows with gradient accumulation steps (more copying between optimizer steps) or GPU count (increased parallelism). #### ZeRO-3 Config The following is an example of configuration for ZeRO stage 3: ```json { ""zero_optimization"": { ""stage"": 3, ""offload_optimizer"": { ""device"": ""cpu"", ""pin_memory"": true }, ""offload_param"": { ""device"": ""cpu"", ""pin_memory"": true }, ""overlap_comm"": true, ""contiguous_gradients"": true, ""sub_group_size"": 1e9, ""reduce_bucket_size"": ""auto"", ""stage3_prefetch_bucket_size"": ""auto"", ""stage3_param_persistence_threshold"": ""auto"", ""stage3_max_live_parameters"": 1e9, ""stage3_max_reuse_distance"": 1e9, ""stage3_gather_16bit_weights_on_model_save"": true } } If you are getting OOMs, because your model or activations don't fit into the GPU memory and you have unutilized CPU memory offloading the optimizer states and parameters to CPU memory with `""device"": ""cpu""` may solve this limitation. If you don't want to offload to CPU memory, use `none` instead of `cpu` for the `device` entry. Offloading to NVMe is discussed further down. Pinned memory is enabled with `pin_memory` set to `true`. This feature can improve the throughput at the cost of making less memory available to other processes. Pinned memory is set aside to the specific process that requested it and its typically accessed much faster than normal CPU memory. **Performance tuning:** - `stage3_max_live_parameters`: `1e9` - `stage3_max_reuse_distance`: `1e9` If hitting OOM reduce `stage3_max_live_parameters` and `stage3_max_reuse_distance`. They should have minimal impact on performance unless you are doing activation checkpointing. `1e9` would consume ~2GB. The memory is shared by `stage3_max_live_parameters` and `stage3_max_reuse_distance`, so it's not additive, it's just 2GB total. `stage3_max_live_parameters` is the upper limit on how many full parameters you want to keep on the GPU at any given time. ""reuse distance"" is a metric we are using to figure out when will a parameter be used again in the future, and we use the `stage3_max_reuse_distance` to decide whether to throw away the parameter or to keep it. If a parameter is going to be used again in near future (less than `stage3_max_reuse_distance`) then we keep it to reduce communication overhead. This is super helpful when you have activation checkpointing enabled, where we do a forward recompute and backward passes a single layer granularity and want to keep the parameter in the forward recompute till the backward The following configuration values depend on the model's hidden size: - `reduce_bucket_size`: `hidden_size*hidden_size` - `stage3_prefetch_bucket_size`: `0.9 * hidden_size * hidden_size` - `stage3_param_persistence_threshold`: `10 * hidden_size` therefore set these values to `auto` and the [`Trainer`] will automatically assign the recommended values. But, of course, feel free to set these explicitly as well. `stage3_gather_16bit_weights_on_model_save` enables model fp16 weights consolidation when model gets saved. With large models and multiple GPUs this is an expensive operation both in terms of memory and speed. It's currently required if you plan to resume the training. Watch out for future updates that will remove this limitation and make things more flexible. If you're migrating from ZeRO-2 configuration note that `allgather_partitions`, `allgather_bucket_size` and `reduce_scatter` configuration parameters are not used in ZeRO-3. 
If you keep these in the config file they will just be ignored. - `sub_group_size`: `1e9` `sub_group_size` controls the granularity in which parameters are updated during optimizer steps. Parameters are grouped into buckets of `sub_group_size` and each buckets is updated one at a time. When used with NVMe offload in ZeRO-Infinity, `sub_group_size` therefore controls the granularity in which model states are moved in and out of CPU memory from NVMe during the optimizer step. This prevents running out of CPU memory for extremely large models. You can leave `sub_group_size` to its default value of *1e9* when not using NVMe offload. You may want to change its default value in the following cases: 1. Running into OOM during optimizer step: Reduce `sub_group_size` to reduce memory utilization of temporary buffers 2. Optimizer Step is taking a long time: Increase `sub_group_size` to improve bandwidth utilization as a result of the increased data buffers. #### ZeRO-0 Config Note that we're listing Stage 0 and 1 last since they are rarely used. Stage 0 is disabling all types of sharding and just using DeepSpeed as DDP. You can turn it on with: ```json { ""zero_optimization"": { ""stage"": 0 } } This will essentially disable ZeRO without you needing to change anything else. #### ZeRO-1 Config Stage 1 is Stage 2 minus gradient sharding. You can always try it to speed things a tiny bit to only shard the optimizer states with: ```json { ""zero_optimization"": { ""stage"": 1 } } ### NVMe Support ZeRO-Infinity allows for training incredibly large models by extending GPU and CPU memory with NVMe memory. Thanks to smart partitioning and tiling algorithms each GPU needs to send and receive very small amounts of data during offloading so modern NVMe proved to be fit to allow for an even larger total memory pool available to your training process. ZeRO-Infinity requires ZeRO-3 enabled. The following configuration example enables NVMe to offload both optimizer states and the params: ```json { ""zero_optimization"": { ""stage"": 3, ""offload_optimizer"": { ""device"": ""nvme"", ""nvme_path"": ""/local_nvme"", ""pin_memory"": true, ""buffer_count"": 4, ""fast_init"": false }, ""offload_param"": { ""device"": ""nvme"", ""nvme_path"": ""/local_nvme"", ""pin_memory"": true, ""buffer_count"": 5, ""buffer_size"": 1e8, ""max_in_cpu"": 1e9 }, ""aio"": { ""block_size"": 262144, ""queue_depth"": 32, ""thread_count"": 1, ""single_submit"": false, ""overlap_events"": true }, ""overlap_comm"": true, ""contiguous_gradients"": true, ""sub_group_size"": 1e9, ""reduce_bucket_size"": ""auto"", ""stage3_prefetch_bucket_size"": ""auto"", ""stage3_param_persistence_threshold"": ""auto"", ""stage3_max_live_parameters"": 1e9, ""stage3_max_reuse_distance"": 1e9, ""stage3_gather_16bit_weights_on_model_save"": true }, } You can choose to offload both optimizer states and params to NVMe, or just one of them or none. For example, if you have copious amounts of CPU memory available, by all means offload to CPU memory only as it'd be faster (hint: *""device"": ""cpu""*). Here is the full documentation for offloading [optimizer states](https://www.deepspeed.ai/docs/config-json/#optimizer-offloading) and [parameters](https://www.deepspeed.ai/docs/config-json/#parameter-offloading). Make sure that your `nvme_path` is actually an NVMe, since it will work with the normal hard drive or SSD, but it'll be much much slower. 
The fast scalable training was designed with modern NVMe transfer speeds in mind (as of this writing one can have ~3.5GB/s read, ~3GB/s write peak speeds). In order to figure out the optimal `aio` configuration block you must run a benchmark on your target setup, as [explained here](https://github.com/microsoft/DeepSpeed/issues/998). #### ZeRO-2 vs ZeRO-3 Performance ZeRO-3 is likely to be slower than ZeRO-2 if everything else is configured the same because the former has to gather model weights in addition to what ZeRO-2 does. If ZeRO-2 meets your needs and you don't need to scale beyond a few GPUs then you may choose to stick to it. It's important to understand that ZeRO-3 enables a much higher scalability capacity at a cost of speed. It's possible to adjust ZeRO-3 configuration to make it perform closer to ZeRO-2: - set `stage3_param_persistence_threshold` to a very large number - larger than the largest parameter, e.g., `6 * hidden_size * hidden_size`. This will keep the parameters on the GPUs. - turn off `offload_params` since ZeRO-2 doesn't have that option. The performance will likely improve significantly with just `offload_params` turned off, even if you don't change `stage3_param_persistence_threshold`. Of course, these changes will impact the size of the model you can train. So these help you to trade scalability for speed depending on your needs. #### ZeRO-2 Example Here is a full ZeRO-2 auto-configuration file `ds_config_zero2.json`: ```json { ""fp16"": { ""enabled"": ""auto"", ""loss_scale"": 0, ""loss_scale_window"": 1000, ""initial_scale_power"": 16, ""hysteresis"": 2, ""min_loss_scale"": 1 }, ""optimizer"": { ""type"": ""AdamW"", ""params"": { ""lr"": ""auto"", ""betas"": ""auto"", ""eps"": ""auto"", ""weight_decay"": ""auto"" } }, ""scheduler"": { ""type"": ""WarmupLR"", ""params"": { ""warmup_min_lr"": ""auto"", ""warmup_max_lr"": ""auto"", ""warmup_num_steps"": ""auto"" } }, ""zero_optimization"": { ""stage"": 2, ""offload_optimizer"": { ""device"": ""cpu"", ""pin_memory"": true }, ""allgather_partitions"": true, ""allgather_bucket_size"": 2e8, ""overlap_comm"": true, ""reduce_scatter"": true, ""reduce_bucket_size"": 2e8, ""contiguous_gradients"": true }, ""gradient_accumulation_steps"": ""auto"", ""gradient_clipping"": ""auto"", ""steps_per_print"": 2000, ""train_batch_size"": ""auto"", ""train_micro_batch_size_per_gpu"": ""auto"", ""wall_clock_breakdown"": false } Here is a full ZeRO-2 all-enabled manually set configuration file. It is here mainly for you to see what the typical values look like, but we highly recommend using the one with multiple `auto` settings in it. 
```json { ""fp16"": { ""enabled"": true, ""loss_scale"": 0, ""loss_scale_window"": 1000, ""initial_scale_power"": 16, ""hysteresis"": 2, ""min_loss_scale"": 1 }, ""optimizer"": { ""type"": ""AdamW"", ""params"": { ""lr"": 3e-5, ""betas"": [0.8, 0.999], ""eps"": 1e-8, ""weight_decay"": 3e-7 } }, ""scheduler"": { ""type"": ""WarmupLR"", ""params"": { ""warmup_min_lr"": 0, ""warmup_max_lr"": 3e-5, ""warmup_num_steps"": 500 } }, ""zero_optimization"": { ""stage"": 2, ""offload_optimizer"": { ""device"": ""cpu"", ""pin_memory"": true }, ""allgather_partitions"": true, ""allgather_bucket_size"": 2e8, ""overlap_comm"": true, ""reduce_scatter"": true, ""reduce_bucket_size"": 2e8, ""contiguous_gradients"": true }, ""steps_per_print"": 2000, ""wall_clock_breakdown"": false } #### ZeRO-3 Example Here is a full ZeRO-3 auto-configuration file `ds_config_zero3.json`: ```json { ""fp16"": { ""enabled"": ""auto"", ""loss_scale"": 0, ""loss_scale_window"": 1000, ""initial_scale_power"": 16, ""hysteresis"": 2, ""min_loss_scale"": 1 }, ""optimizer"": { ""type"": ""AdamW"", ""params"": { ""lr"": ""auto"", ""betas"": ""auto"", ""eps"": ""auto"", ""weight_decay"": ""auto"" } }, ""scheduler"": { ""type"": ""WarmupLR"", ""params"": { ""warmup_min_lr"": ""auto"", ""warmup_max_lr"": ""auto"", ""warmup_num_steps"": ""auto"" } }, ""zero_optimization"": { ""stage"": 3, ""offload_optimizer"": { ""device"": ""cpu"", ""pin_memory"": true }, ""offload_param"": { ""device"": ""cpu"", ""pin_memory"": true }, ""overlap_comm"": true, ""contiguous_gradients"": true, ""sub_group_size"": 1e9, ""reduce_bucket_size"": ""auto"", ""stage3_prefetch_bucket_size"": ""auto"", ""stage3_param_persistence_threshold"": ""auto"", ""stage3_max_live_parameters"": 1e9, ""stage3_max_reuse_distance"": 1e9, ""stage3_gather_16bit_weights_on_model_save"": true }, ""gradient_accumulation_steps"": ""auto"", ""gradient_clipping"": ""auto"", ""steps_per_print"": 2000, ""train_batch_size"": ""auto"", ""train_micro_batch_size_per_gpu"": ""auto"", ""wall_clock_breakdown"": false } Here is a full ZeRO-3 all-enabled manually set configuration file. It is here mainly for you to see what the typical values look like, but we highly recommend using the one with multiple `auto` settings in it. ```json { ""fp16"": { ""enabled"": true, ""loss_scale"": 0, ""loss_scale_window"": 1000, ""initial_scale_power"": 16, ""hysteresis"": 2, ""min_loss_scale"": 1 }, ""optimizer"": { ""type"": ""AdamW"", ""params"": { ""lr"": 3e-5, ""betas"": [0.8, 0.999], ""eps"": 1e-8, ""weight_decay"": 3e-7 } }, ""scheduler"": { ""type"": ""WarmupLR"", ""params"": { ""warmup_min_lr"": 0, ""warmup_max_lr"": 3e-5, ""warmup_num_steps"": 500 } }, ""zero_optimization"": { ""stage"": 3, ""offload_optimizer"": { ""device"": ""cpu"", ""pin_memory"": true }, ""offload_param"": { ""device"": ""cpu"", ""pin_memory"": true }, ""overlap_comm"": true, ""contiguous_gradients"": true, ""sub_group_size"": 1e9, ""reduce_bucket_size"": 1e6, ""stage3_prefetch_bucket_size"": 0.94e6, ""stage3_param_persistence_threshold"": 1e4, ""stage3_max_live_parameters"": 1e9, ""stage3_max_reuse_distance"": 1e9, ""stage3_gather_16bit_weights_on_model_save"": true }, ""steps_per_print"": 2000, ""wall_clock_breakdown"": false } #### How to Choose Which ZeRO Stage and Offloads To Use For Best Performance So now you know there are all these different stages. How to decide which of them to use? This section will attempt to address this question. 
In general the following applies: - Speed-wise (left is faster than right) Stage 0 (DDP) > Stage 1 > Stage 2 > Stage 2 + offload > Stage 3 > Stage 3 + offloads - GPU Memory usage-wise (right is more GPU memory efficient than left) Stage 0 (DDP) < Stage 1 < Stage 2 < Stage 2 + offload < Stage 3 < Stage 3 + offloads So when you want to get the fastest execution while fitting into minimal number of GPUs, here is the process you could follow. We start with the fastest approach and if running into GPU OOM we then go to the next slower approach, but which will use less GPU memory. And so on and so forth. First of all set batch size to 1 (you can always use gradient accumulation for any desired effective batch size). 1. Enable `--gradient_checkpointing 1` (HF Trainer) or directly `model.gradient_checkpointing_enable()` - if OOM then 2. Try ZeRO stage 2 first. if OOM then 3. Try ZeRO stage 2 + `offload_optimizer` - if OOM then 4. Switch to ZeRO stage 3 - if OOM then 5. Enable `offload_param` to `cpu` - if OOM then 6. Enable `offload_optimizer` to `cpu` - if OOM then 7. If you still can't fit a batch size of 1 first check various default values and lower them if you can. For example, if you use `generate` and you don't use a wide search beam make it narrower as it'd take a lot of memory. 8. Definitely use mixed half-precision over fp32 - so bf16 on Ampere and higher GPUs and fp16 on older gpu architectures. 9. If you still OOM you could add more hardware or enable ZeRO-Infinity - that is switch offloads `offload_param` and `offload_optimizer` to `nvme`. You need to make sure it's a very fast nvme. As an anecdote I was able to infer BLOOM-176B on a tiny GPU using ZeRO-Infinity except it was extremely slow. But it worked! You can, of course, work through these steps in reverse by starting with the most GPU memory efficient config and then going backwards. Or try bi-secting it. Once you have your batch size 1 not leading to OOM, measure your effective throughput. Next try to increase the batch size to as large as you can, since the higher the batch size the more efficient the GPUs are as they perform the best when matrices they multiply are huge. Now the performance optimization game starts. You can turn off some offload features or step down in ZeRO stages and increase/decrease batch size and again measure your effective throughput. Rinse and repeat until satisfied. Don't spend forever on it, but if you're about to start a 3 months training - do spend a few days on it to find the most effective throughput-wise setup. So that your training cost will be the lowest and you will finish training faster. In the current crazy-paced ML world, if it takes you an extra month to train something you are likely to miss a golden opportunity. Of course, this is only me sharing an observation and in no way I'm trying to rush you. Before beginning to train BLOOM-176B I spent 2 days on this process and was able to increase throughput from 90 to 150 TFLOPs! This effort saved us more than one month of training time. These notes were written primarily for the training mode, but they should mostly apply for inference as well. For example, during inference Gradient Checkpointing is a no-op since it is only useful during training. Additionally, we found out that if you are doing a multi-GPU inference and not using [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/), [Accelerate](https://huggingface.co/blog/bloom-inference-pytorch-scripts) should provide a superior performance. 
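As a reference point for the throughput measurements mentioned above, one possible (unofficial) way to compare configurations is to run a short trial with a capped number of steps and read the speed metrics that [`Trainer`] already reports. In the sketch below, `max_steps=200` and the model/dataset objects are assumed to be whatever you already have wired into your training script:

```python
# A rough sketch for comparing DeepSpeed configurations by effective throughput.
# Assumes `model` and `train_dataset` are already defined; max_steps=200 is an arbitrary trial length.
from transformers import Trainer, TrainingArguments


def trial_run(model, train_dataset, ds_config, per_device_batch_size):
    args = TrainingArguments(
        output_dir="throughput_trial",
        max_steps=200,                      # short run, just long enough to measure speed
        per_device_train_batch_size=per_device_batch_size,
        deepspeed=ds_config,                # path to the DeepSpeed config being evaluated
        report_to="none",
    )
    metrics = Trainer(model=model, args=args, train_dataset=train_dataset).train().metrics
    # Trainer reports these speed metrics for every training run
    print(
        f"{ds_config}: {metrics['train_samples_per_second']:.1f} samples/s "
        f"({metrics['train_steps_per_second']:.2f} steps/s)"
    )
    return metrics
```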
Other quick related performance notes: - if you are training something from scratch always try to have tensors with shapes that are divisible by 16 (e.g. hidden size). For the batch size try to make it divisible by at least 2. There are hardware-specific [wave and tile quantization](https://developer.nvidia.com/blog/optimizing-gpu-performance-tensor-cores/) divisibility constraints as well, if you want to squeeze even higher performance from your GPUs. ### Activation Checkpointing or Gradient Checkpointing Activation checkpointing and gradient checkpointing are two distinct terms that refer to the same methodology. It's very confusing but this is how it is. Gradient checkpointing allows one to trade speed for GPU memory, which either lets you overcome a GPU OOM or increase the batch size, which often leads to better performance. HF Transformers models don't know anything about DeepSpeed's activation checkpointing, so if you try to enable that feature in the DeepSpeed config file, nothing will happen. Therefore you have two ways to take advantage of this very beneficial feature: 1. If you want to use a HF Transformers model you can do `model.gradient_checkpointing_enable()` or use `--gradient_checkpointing` in the HF Trainer, which will automatically enable this for you. `torch.utils.checkpoint` is used there. 2. If you write your own model and you want to use DeepSpeed's activation checkpointing you can use the [API prescribed there](https://deepspeed.readthedocs.io/en/latest/activation-checkpointing.html). You can also take the HF Transformers modeling code and replace `torch.utils.checkpoint` with DeepSpeed's API. The latter is more flexible since it allows you to offload the forward activations to the CPU memory instead of recalculating them. ### Optimizer and Scheduler As long as you don't enable `offload_optimizer` you can mix and match DeepSpeed and HuggingFace schedulers and optimizers, with the exception of using the combination of HuggingFace scheduler and DeepSpeed optimizer: | Combos | HF Scheduler | DS Scheduler | |:-------------|:-------------|:-------------| | HF Optimizer | Yes | Yes | | DS Optimizer | No | Yes | It is possible to use a non-DeepSpeed optimizer when `offload_optimizer` is enabled, as long as it has both a CPU and a GPU implementation (except LAMB). #### Optimizer DeepSpeed's main optimizers are Adam, AdamW, OneBitAdam, and Lamb. These have been thoroughly tested with ZeRO and are thus recommended to be used. DeepSpeed can, however, import other optimizers from `torch`. The full documentation is [here](https://www.deepspeed.ai/docs/config-json/#optimizer-parameters). If you don't configure the `optimizer` entry in the configuration file, the [`Trainer`] will automatically set it to `AdamW` and will use the supplied values or the defaults for the following command line arguments: `--learning_rate`, `--adam_beta1`, `--adam_beta2`, `--adam_epsilon` and `--weight_decay`. Here is an example of the auto-configured `optimizer` entry for `AdamW`: ```json { ""optimizer"": { ""type"": ""AdamW"", ""params"": { ""lr"": ""auto"", ""betas"": ""auto"", ""eps"": ""auto"", ""weight_decay"": ""auto"" } } } Note that the command line arguments will set the values in the configuration file. This is so that there is one definitive source of the values and to avoid hard-to-find errors when, for example, the learning rate is set to different values in different places. Command line rules.
The values that get overridden are: - `lr` with the value of `--learning_rate` - `betas` with the value of `--adam_beta1 --adam_beta2` - `eps` with the value of `--adam_epsilon` - `weight_decay` with the value of `--weight_decay` Therefore please remember to tune the shared hyperparameters on the command line. You can also set the values explicitly: ```json { ""optimizer"": { ""type"": ""AdamW"", ""params"": { ""lr"": 0.001, ""betas"": [0.8, 0.999], ""eps"": 1e-8, ""weight_decay"": 3e-7 } } } But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration. If you want to use another optimizer which is not listed above, you will have to add to the top level configuration. ```json { ""zero_allow_untested_optimizer"": true } Similarly to `AdamW`, you can configure other officially supported optimizers. Just remember that those may have different config values. e.g. for Adam you will want `weight_decay` around `0.01`. Additionally, offload works the best when it's used with Deepspeed's CPU Adam optimizer. If you want to use a different optimizer with offload, since `deepspeed==0.8.3` you need to also add: ```json { ""zero_force_ds_cpu_optimizer"": false } to the top level configuration. #### Scheduler DeepSpeed supports `LRRangeTest`, `OneCycle`, `WarmupLR` and `WarmupDecayLR` learning rate schedulers. The full documentation is [here](https://www.deepspeed.ai/docs/config-json/#scheduler-parameters). Here is where the schedulers overlap between 🤗 Transformers and DeepSpeed: - `WarmupLR` via `--lr_scheduler_type constant_with_warmup` - `WarmupDecayLR` via `--lr_scheduler_type linear`. This is also the default value for `--lr_scheduler_type`, therefore, if you don't configure the scheduler this is scheduler that will get configured by default. If you don't configure the `scheduler` entry in the configuration file, the [`Trainer`] will use the values of `--lr_scheduler_type`, `--learning_rate` and `--warmup_steps` or `--warmup_ratio` to configure a 🤗 Transformers version of it. Here is an example of the auto-configured `scheduler` entry for `WarmupLR`: ```json { ""scheduler"": { ""type"": ""WarmupLR"", ""params"": { ""warmup_min_lr"": ""auto"", ""warmup_max_lr"": ""auto"", ""warmup_num_steps"": ""auto"" } } } Since *""auto""* is used the [`Trainer`] arguments will set the correct values in the configuration file. This is so that there is one definitive source of the values and to avoid hard to find errors when, for example, the learning rate is set to different values in different places. Command line rules. The values that get set are: - `warmup_min_lr` with the value of `0`. - `warmup_max_lr` with the value of `--learning_rate`. - `warmup_num_steps` with the value of `--warmup_steps` if provided. Otherwise will use `--warmup_ratio` multiplied by the number of training steps and rounded up. - `total_num_steps` with either the value of `--max_steps` or if it is not provided, derived automatically at run time based on the environment and the size of the dataset and other command line arguments (needed for `WarmupDecayLR`). You can, of course, take over any or all of the configuration values and set those yourself: ```json { ""scheduler"": { ""type"": ""WarmupLR"", ""params"": { ""warmup_min_lr"": 0, ""warmup_max_lr"": 0.001, ""warmup_num_steps"": 1000 } } } But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration. 
For example, for `WarmupDecayLR`, you can use the following entry: ```json { ""scheduler"": { ""type"": ""WarmupDecayLR"", ""params"": { ""last_batch_iteration"": -1, ""total_num_steps"": ""auto"", ""warmup_min_lr"": ""auto"", ""warmup_max_lr"": ""auto"", ""warmup_num_steps"": ""auto"" } } } and `total_num_steps`, `warmup_max_lr`, `warmup_num_steps` and `total_num_steps` will be set at loading time. ### fp32 Precision Deepspeed supports the full fp32 and the fp16 mixed precision. Because of the much reduced memory needs and faster speed one gets with the fp16 mixed precision, the only time you will want to not use it is when the model you're using doesn't behave well under this training mode. Typically this happens when the model wasn't pretrained in the fp16 mixed precision (e.g. often this happens with bf16-pretrained models). Such models may overflow or underflow leading to `NaN` loss. If this is your case then you will want to use the full fp32 mode, by explicitly disabling the otherwise default fp16 mixed precision mode with: ```json { ""fp16"": { ""enabled"": false, } } If you're using the Ampere-architecture based GPU, pytorch version 1.7 and higher will automatically switch to using the much more efficient tf32 format for some operations, but the results will still be in fp32. For details and benchmarks, please, see [TensorFloat-32(TF32) on Ampere devices](https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices). The document includes instructions on how to disable this automatic conversion if for some reason you prefer not to use it. With the 🤗 Trainer you can use `--tf32` to enable it, or disable it with `--tf32 0` or `--no_tf32`. By default the PyTorch default is used. ### Automatic Mixed Precision You can use automatic mixed precision with either a pytorch-like AMP way or the apex-like way: ### fp16 To configure pytorch AMP-like mode with fp16 (float16) set: ```json { ""fp16"": { ""enabled"": ""auto"", ""loss_scale"": 0, ""loss_scale_window"": 1000, ""initial_scale_power"": 16, ""hysteresis"": 2, ""min_loss_scale"": 1 } } and the [`Trainer`] will automatically enable or disable it based on the value of `args.fp16_backend`. The rest of config values are up to you. This mode gets enabled when `--fp16 --fp16_backend amp` or `--fp16_full_eval` command line args are passed. You can also enable/disable this mode explicitly: ```json { ""fp16"": { ""enabled"": true, ""loss_scale"": 0, ""loss_scale_window"": 1000, ""initial_scale_power"": 16, ""hysteresis"": 2, ""min_loss_scale"": 1 } } But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration. Here is the [documentation](https://www.deepspeed.ai/docs/config-json/#fp16-training-options). ### bf16 If bf16 (bfloat16) is desired instead of fp16 then the following configuration section is to be used: ```json { ""bf16"": { ""enabled"": ""auto"" } } bf16 has the same dynamic range as fp32 and thus doesn't require loss scaling. This mode gets enabled when `--bf16` or `--bf16_full_eval` command line args are passed. You can also enable/disable this mode explicitly: ```json { ""bf16"": { ""enabled"": true } } As of `deepspeed==0.6.0` the bf16 support is new and experimental. If you use [gradient accumulation](#gradient-accumulation) with bf16-enabled, you need to be aware that it'll accumulate gradients in bf16, which may not be what you want due to this format's low precision, as it may lead to a lossy accumulation. 
Work is being done to fix that and provide an option to use a higher precision `dtype` (fp16 or fp32). ### NCCL Collectives There is the `dtype` of the training regime and there is a separate `dtype` that is used for communication collectives like various reduction and gathering/scattering operations. All gather/scatter ops are performed in the same `dtype` the data is in, so if you're using a bf16 training regime it gets gathered in bf16 - gathering is a non-lossy operation. Various reduce operations can be quite lossy, for example when gradients are averaged across multiple GPUs: if the communications are done in fp16 or bf16 the outcome is likely to be lossy, since adding multiple numbers in low precision gives an inexact result. More so with bf16 as it has a lower precision than fp16. Often fp16 is good enough as the loss is minimal when averaging grads which are typically very small. Therefore, fp16 is used as the default for reduction operations in half precision training. But you have full control over this functionality and if you choose you can add a small overhead and ensure that reductions use fp32 as the accumulation dtype, and only when the result is ready will it get downcast to the half precision `dtype` you're training in. In order to override the default you simply add a new configuration entry: ```json { ""communication_data_type"": ""fp32"" } The valid values as of this writing are ""fp16"", ""bfp16"", ""fp32"". Note: ZeRO stage 3 had a bug with regards to the bf16 comm dtype that was fixed in `deepspeed==0.8.1` ### apex To configure apex AMP-like mode set: ```json ""amp"": { ""enabled"": ""auto"", ""opt_level"": ""auto"" } and the [`Trainer`] will automatically configure it based on the values of `args.fp16_backend` and `args.fp16_opt_level`. This mode gets enabled when `--fp16 --fp16_backend apex --fp16_opt_level O1` command line args are passed. You can also configure this mode explicitly: ```json { ""amp"": { ""enabled"": true, ""opt_level"": ""O1"" } } But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration. Here is the [documentation](https://www.deepspeed.ai/docs/config-json/#automatic-mixed-precision-amp-training-options). ### Batch Size To configure batch size, use: ```json { ""train_batch_size"": ""auto"", ""train_micro_batch_size_per_gpu"": ""auto"" } and the [`Trainer`] will automatically set `train_micro_batch_size_per_gpu` to the value of `args.per_device_train_batch_size` and `train_batch_size` to `args.world_size * args.per_device_train_batch_size * args.gradient_accumulation_steps`. You can also set the values explicitly: ```json { ""train_batch_size"": 12, ""train_micro_batch_size_per_gpu"": 4 } But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration. ### Gradient Accumulation To configure gradient accumulation set: ```json { ""gradient_accumulation_steps"": ""auto"" } and the [`Trainer`] will automatically set it to the value of `args.gradient_accumulation_steps`. You can also set the value explicitly: ```json { ""gradient_accumulation_steps"": 3 } But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration. ### Gradient Clipping To configure gradient clipping set: ```json { ""gradient_clipping"": ""auto"" } and the [`Trainer`] will automatically set it to the value of `args.max_grad_norm`.
You can also set the value explicitly: ```json { ""gradient_clipping"": 1.0 } But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration. ### Getting The Model Weights Out As long as you continue training and resuming using DeepSpeed you don't need to worry about anything. DeepSpeed stores fp32 master weights in its custom checkpoint optimizer files, which are `global_step*/*optim_states.pt` (this is glob pattern), and are saved under the normal checkpoint. **FP16 Weights:** When a model is saved under ZeRO-2, you end up having the normal `pytorch_model.bin` file with the model weights, but they are only the fp16 version of the weights. Under ZeRO-3, things are much more complicated, since the model weights are partitioned out over multiple GPUs, therefore `""stage3_gather_16bit_weights_on_model_save"": true` is required to get the `Trainer` to save the fp16 version of the weights. If this setting is `False` `pytorch_model.bin` won't be created. This is because by default DeepSpeed's `state_dict` contains a placeholder and not the real weights. If we were to save this `state_dict` it won't be possible to load it back. ```json { ""zero_optimization"": { ""stage3_gather_16bit_weights_on_model_save"": true } } **FP32 Weights:** While the fp16 weights are fine for resuming training, if you finished finetuning your model and want to upload it to the [models hub](https://huggingface.co/models) or pass it to someone else you most likely will want to get the fp32 weights. This ideally shouldn't be done during training since this is a process that requires a lot of memory, and therefore best to be performed offline after the training is complete. But if desired and you have plenty of free CPU memory it can be done in the same training script. The following sections will discuss both approaches. **Live FP32 Weights Recovery:** This approach may not work if you model is large and you have little free CPU memory left, at the end of the training. If you have saved at least one checkpoint, and you want to use the latest one, you can do the following: thon from transformers.trainer_utils import get_last_checkpoint from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint checkpoint_dir = get_last_checkpoint(trainer.args.output_dir) fp32_model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir) If you're using the `--load_best_model_at_end` class:*~transformers.TrainingArguments* argument (to track the best checkpoint), then you can finish the training by first saving the final model explicitly and then do the same as above: thon from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint checkpoint_dir = os.path.join(trainer.args.output_dir, ""checkpoint-final"") trainer.deepspeed.save_checkpoint(checkpoint_dir) fp32_model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir) Note, that once `load_state_dict_from_zero_checkpoint` was run, the `model` will no longer be usable in the DeepSpeed context of the same application. i.e. you will need to re-initialize the deepspeed engine, since `model.load_state_dict(state_dict)` will remove all the DeepSpeed magic from it. So do this only at the very end of the training. Of course, you don't have to use class:*~transformers.Trainer* and you can adjust the examples above to your own trainer. 
If for some reason you want more refinement, you can also extract the fp32 `state_dict` of the weights and apply these yourself as is shown in the following example: thon from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir) # already on cpu model = model.cpu() model.load_state_dict(state_dict) **Offline FP32 Weights Recovery:** DeepSpeed creates a special conversion script `zero_to_fp32.py` which it places in the top-level of the checkpoint folder. Using this script you can extract the weights at any point. The script is standalone and you no longer need to have the configuration file or a `Trainer` to do the extraction. Let's say your checkpoint folder looks like this: ```bash $ ls -l output_dir/checkpoint-1/ -rw-rw-r-- 1 stas stas 1.4K Mar 27 20:42 config.json drwxrwxr-x 2 stas stas 4.0K Mar 25 19:52 global_step1/ -rw-rw-r-- 1 stas stas 12 Mar 27 13:16 latest -rw-rw-r-- 1 stas stas 827K Mar 27 20:42 optimizer.pt -rw-rw-r-- 1 stas stas 231M Mar 27 20:42 pytorch_model.bin -rw-rw-r-- 1 stas stas 623 Mar 27 20:42 scheduler.pt -rw-rw-r-- 1 stas stas 1.8K Mar 27 20:42 special_tokens_map.json -rw-rw-r-- 1 stas stas 774K Mar 27 20:42 spiece.model -rw-rw-r-- 1 stas stas 1.9K Mar 27 20:42 tokenizer_config.json -rw-rw-r-- 1 stas stas 339 Mar 27 20:42 trainer_state.json -rw-rw-r-- 1 stas stas 2.3K Mar 27 20:42 training_args.bin -rwxrw-r-- 1 stas stas 5.5K Mar 27 13:16 zero_to_fp32.py* In this example there is just one DeepSpeed checkpoint sub-folder *global_step1*. Therefore to reconstruct the fp32 weights just run: ```bash python zero_to_fp32.py . pytorch_model.bin This is it. `pytorch_model.bin` will now contain the full fp32 model weights consolidated from multiple GPUs. The script will automatically be able to handle either a ZeRO-2 or ZeRO-3 checkpoint. `python zero_to_fp32.py -h` will give you usage details. The script will auto-discover the deepspeed sub-folder using the contents of the file `latest`, which in the current example will contain `global_step1`. Note: currently the script requires 2x general RAM of the final fp32 model weights. ### ZeRO-3 and Infinity Nuances ZeRO-3 is quite different from ZeRO-2 because of its param sharding feature. ZeRO-Infinity further extends ZeRO-3 to support NVMe memory and multiple other speed and scalability improvements. While all the efforts were made for things to just work without needing any special changes to your models, in certain circumstances you may find the following information to be needed. #### Constructing Massive Models DeepSpeed/ZeRO-3 can handle models with Trillions of parameters which may not fit onto the existing RAM. In such cases, but also if you want the initialization to happen much faster, initialize the model using *deepspeed.zero.Init()* context manager (which is also a function decorator), like so: thon from transformers import T5ForConditionalGeneration, T5Config import deepspeed with deepspeed.zero.Init(): config = T5Config.from_pretrained(""t5-small"") model = T5ForConditionalGeneration(config) As you can see this gives you a randomly initialized model. If you want to use a pretrained model, `model_class.from_pretrained` will activate this feature as long as `is_deepspeed_zero3_enabled()` returns `True`, which currently is setup by the [`TrainingArguments`] object if the passed DeepSpeed configuration file contains ZeRO-3 config section. 
Thus you must create the [`TrainingArguments`] object **before** calling `from_pretrained`. Here is an example of a possible sequence: thon from transformers import AutoModel, Trainer, TrainingArguments training_args = TrainingArguments(, deepspeed=ds_config) model = AutoModel.from_pretrained(""t5-small"") trainer = Trainer(model=model, args=training_args, ) If you're using the official example scripts and your command line arguments include `--deepspeed ds_config.json` with ZeRO-3 config enabled, then everything is already done for you, since this is how example scripts are written. Note: If the fp16 weights of the model can't fit onto the memory of a single GPU this feature must be used. For full details on this method and other related features please refer to [Constructing Massive Models](https://deepspeed.readthedocs.io/en/latest/zero3.html#constructing-massive-models). Also when loading fp16-pretrained models, you will want to tell `from_pretrained` to use `torch_dtype=torch.float16`. For details, please, see [from_pretrained-torch-dtype](#from_pretrained-torch-dtype). #### Gathering Parameters Under ZeRO-3 on multiple GPUs no single GPU has all the parameters unless it's the parameters for the currently executing layer. So if you need to access all parameters from all layers at once there is a specific method to do it. Most likely you won't need it, but if you do please refer to [Gathering Parameters](https://deepspeed.readthedocs.io/en/latest/zero3.html#manual-parameter-coordination) We do however use it internally in several places, one such example is when loading pretrained model weights in `from_pretrained`. We load one layer at a time and immediately partition it to all participating GPUs, as for very large models it won't be possible to load it on one GPU and then spread it out to multiple GPUs, due to memory limitations. Also under ZeRO-3, if you write your own code and run into a model parameter weight that looks like: thon tensor([1.0], device=""cuda:0"", dtype=torch.float16, requires_grad=True) stress on `tensor([1.])`, or if you get an error where it says the parameter is of size `1`, instead of some much larger multi-dimensional shape, this means that the parameter is partitioned and what you see is a ZeRO-3 placeholder. ### ZeRO Inference ZeRO Inference uses the same config as ZeRO-3 Training. You just don't need the optimizer and scheduler sections. In fact you can leave these in the config file if you want to share the same one with the training. They will just be ignored. Otherwise you just need to pass the usual [`TrainingArguments`] arguments. For example: ```bash deepspeed --num_gpus=2 your_program.py --do_eval --deepspeed ds_config.json The only important thing is that you need to use a ZeRO-3 configuration, since ZeRO-2 provides no benefit whatsoever for the inference as only ZeRO-3 performs sharding of parameters, whereas ZeRO-1 shards gradients and optimizer states. 
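To make that concrete, a minimal inference config could look like the following sketch - essentially a ZeRO-3 training config with the `optimizer` and `scheduler` sections left out. The `auto` values are filled in by the [`Trainer`], the CPU offload entry is optional, and the exact set of keys shown here is an illustrative starting point rather than a canonical config:

```json
{
    "fp16": {
        "enabled": "auto"
    },
    "bf16": {
        "enabled": "auto"
    },
    "zero_optimization": {
        "stage": 3,
        "offload_param": {
            "device": "cpu",
            "pin_memory": true
        },
        "stage3_param_persistence_threshold": "auto",
        "stage3_gather_16bit_weights_on_model_save": true
    },
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto"
}
```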
Here is an example of running `run_translation.py` under DeepSpeed deploying all available GPUs: ```bash deepspeed examples/pytorch/translation/run_translation.py \ --deepspeed tests/deepspeed/ds_config_zero3.json \ --model_name_or_path t5-small --output_dir output_dir \ --do_eval --max_eval_samples 50 --warmup_steps 50 \ --max_source_length 128 --val_max_target_length 128 \ --overwrite_output_dir --per_device_eval_batch_size 4 \ --predict_with_generate --dataset_config ""ro-en"" --fp16 \ --source_lang en --target_lang ro --dataset_name wmt16 \ --source_prefix ""translate English to Romanian: "" Since for inference there is no need for additional large memory used by the optimizer states and the gradients you should be able to fit much larger batches and/or sequence length onto the same hardware. Additionally DeepSpeed is currently developing a related product called Deepspeed-Inference which has no relationship to the ZeRO technology, but instead uses tensor parallelism to scale models that can't fit onto a single GPU. This is a work in progress and we will provide the integration once that product is complete. ### Memory Requirements Since Deepspeed ZeRO can offload memory to CPU (and NVMe) the framework provides utils that allow one to tell how much CPU and GPU memory will be needed depending on the number of GPUs being used. Let's estimate how much memory is needed to finetune ""bigscience/T0_3B"" on a single GPU: ```bash $ python -c 'from transformers import AutoModel; \ from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \ model = AutoModel.from_pretrained(""bigscience/T0_3B""); \ estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=1, num_nodes=1)' [] Estimated memory needed for params, optim states and gradients for a: HW: Setup with 1 node, 1 GPU per node. SW: Model with 2783M total params, 65M largest layer params. per CPU | per GPU | Options 70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=1 70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=0 62.23GB | 5.43GB | offload_param=none, offload_optimizer=cpu , zero_init=1 62.23GB | 5.43GB | offload_param=none, offload_optimizer=cpu , zero_init=0 0.37GB | 46.91GB | offload_param=none, offload_optimizer=none, zero_init=1 15.56GB | 46.91GB | offload_param=none, offload_optimizer=none, zero_init=0 So you can fit it on a single 80GB GPU and no CPU offload, or a tiny 8GB GPU but then need ~60GB of CPU memory. (Remember this is just the memory for params, optimizer states and gradients - you will need a bit more memory for cuda kernels, activations and temps.) Then it's a tradeoff of cost vs speed. It'll be cheaper to buy/rent a smaller GPU (or less GPUs since you can use multiple GPUs with Deepspeed ZeRO. But then it'll be slower, so even if you don't care about how fast something will be done, the slowdown has a direct impact on the duration of using the GPU and thus bigger cost. So experiment and compare which works the best. If you have enough GPU memory make sure to disable the CPU/NVMe offload as it'll make everything faster. 
For example, let's repeat the same for 2 GPUs: ```bash $ python -c 'from transformers import AutoModel; \ from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \ model = AutoModel.from_pretrained(""bigscience/T0_3B""); \ estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=2, num_nodes=1)' [] Estimated memory needed for params, optim states and gradients for a: HW: Setup with 1 node, 2 GPUs per node. SW: Model with 2783M total params, 65M largest layer params. per CPU | per GPU | Options 70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=1 70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=0 62.23GB | 2.84GB | offload_param=none, offload_optimizer=cpu , zero_init=1 62.23GB | 2.84GB | offload_param=none, offload_optimizer=cpu , zero_init=0 0.74GB | 23.58GB | offload_param=none, offload_optimizer=none, zero_init=1 31.11GB | 23.58GB | offload_param=none, offload_optimizer=none, zero_init=0 So here you'd want 2x 32GB GPUs or higher without offloading to CPU. For full information please see [memory estimators](https://deepspeed.readthedocs.io/en/latest/memory.html). ### Filing Issues Here is how to file an issue so that we could quickly get to the bottom of the issue and help you to unblock your work. In your report please always include: 1. the full Deepspeed config file in the report 2. either the command line arguments if you were using the [`Trainer`] or [`TrainingArguments`] arguments if you were scripting the Trainer setup yourself. Please do not dump the [`TrainingArguments`] as it has dozens of entries that are irrelevant. 3. Output of: ```bash python -c 'import torch; print(f""torch: {torch.__version__}"")' python -c 'import transformers; print(f""transformers: {transformers.__version__}"")' python -c 'import deepspeed; print(f""deepspeed: {deepspeed.__version__}"")' 4. If possible include a link to a Google Colab notebook that we can reproduce the problem with. You can use this [notebook](https://github.com/stas00/porting/blob/master/transformers/deepspeed/DeepSpeed_on_colab_CLI.ipynb) as a starting point. 5. Unless it's impossible please always use a standard dataset that we can use and not something custom. 6. If possible try to use one of the existing [examples](https://github.com/huggingface/transformers/tree/main/examples/pytorch) to reproduce the problem with. Things to consider: - Deepspeed is often not the cause of the problem. Some of the filed issues proved to be Deepspeed-unrelated. That is once Deepspeed was removed from the setup, the problem was still there. Therefore, if it's not absolutely obvious it's a DeepSpeed-related problem, as in you can see that there is an exception and you can see that DeepSpeed modules are involved, first re-test your setup without DeepSpeed in it. And only if the problem persists then do mentioned Deepspeed and supply all the required details. - If it's clear to you that the issue is in the DeepSpeed core and not the integration part, please file the Issue directly with [Deepspeed](https://github.com/microsoft/DeepSpeed/). If you aren't sure, please do not worry, either Issue tracker will do, we will figure it out once you posted it and redirect you to another Issue tracker if need be. 
### Troubleshooting #### the `deepspeed` process gets killed at startup without a traceback If the `deepspeed` process gets killed at launch time without a traceback, that usually means that the program tried to allocate more CPU memory than your system has or your process is allowed to allocate and the OS kernel killed that process. This is because your configuration file most likely has either `offload_optimizer` or `offload_param` or both configured to offload to `cpu`. If you have NVMe, experiment with offloading to NVMe if you're running under ZeRO-3. Here is how you can [estimate how much memory is needed for a specific model](https://deepspeed.readthedocs.io/en/latest/memory.html). #### training and/or eval/predict loss is `NaN` This often happens when one takes a model pre-trained in bf16 mixed precision mode and tries to use it under fp16 (with or without mixed precision). Most models trained on TPU and often the ones released by Google are in this category (e.g. almost all t5-based models). Here the solution is to either use fp32 or bf16 if your hardware supports it (TPU, Ampere GPUs or newer). The other problem may have to do with using fp16. When you configure this section: ```json { ""fp16"": { ""enabled"": ""auto"", ""loss_scale"": 0, ""loss_scale_window"": 1000, ""initial_scale_power"": 16, ""hysteresis"": 2, ""min_loss_scale"": 1 } } and you see in your log that Deepspeed reports `OVERFLOW!` as follows: 0%| | 0/189 [00:00, ?it/s] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144, reducing to 262144 1%|▌ | 1/189 [00:00<01:26, 2.17it/s] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144, reducing to 131072.0 1%|█▏ [] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1 14%|████████████████▌ | 27/189 [00:14<01:13, 2.21it/s] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1 15%|█████████████████▏ | 28/189 [00:14<01:13, 2.18it/s] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1 15%|█████████████████▊ | 29/189 [00:15<01:13, 2.18it/s] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1 [] that means that the Deepspeed loss scaler can't figure out a scaling co-efficient that overcomes loss overflow. (the log was massaged to be more readable here.) In this case you usually need to raise the value of `initial_scale_power`. Setting it to `""initial_scale_power"": 32` will typically resolve the problem. ### Notes - DeepSpeed works with the PyTorch [`Trainer`] but not TF [`TFTrainer`]. - While DeepSpeed has a pip installable PyPI package, it is highly recommended that it gets installed from [source](https://github.com/microsoft/deepspeed#installation) to best match your hardware and also if you need to enable certain features, like 1-bit Adam, which aren't available in the pypi distribution. - You don't have to use the [`Trainer`] to use DeepSpeed with 🤗 Transformers - you can use any model with your own trainer, and you will have to adapt the latter according to [the DeepSpeed integration instructions](https://www.deepspeed.ai/getting-started/#writing-deepspeed-models). ## Non-Trainer Deepspeed Integration The [`~integrations.HfDeepSpeedConfig`] is used to integrate Deepspeed into the 🤗 Transformers core functionality, when [`Trainer`] is not used. The only thing that it does is handling Deepspeed ZeRO-3 param gathering and automatically splitting the model onto multiple gpus during `from_pretrained` call. 
Everything else you have to do by yourself. When using [`Trainer`] everything is automatically taken care of. When not using [`Trainer`], to efficiently deploy DeepSpeed ZeRO-3, you must instantiate the [`~integrations.HfDeepSpeedConfig`] object before instantiating the model and keep that object alive. If you're using Deepspeed ZeRO-1 or ZeRO-2 you don't need to use `HfDeepSpeedConfig` at all. For example for a pretrained model: thon from transformers.integrations import HfDeepSpeedConfig from transformers import AutoModel import deepspeed ds_config = {} # deepspeed config object or path to the file # must run before instantiating the model to detect zero 3 dschf = HfDeepSpeedConfig(ds_config) # keep this object alive model = AutoModel.from_pretrained(""gpt2"") engine = deepspeed.initialize(model=model, config_params=ds_config, ) or for non-pretrained model: thon from transformers.integrations import HfDeepSpeedConfig from transformers import AutoModel, AutoConfig import deepspeed ds_config = {} # deepspeed config object or path to the file # must run before instantiating the model to detect zero 3 dschf = HfDeepSpeedConfig(ds_config) # keep this object alive config = AutoConfig.from_pretrained(""gpt2"") model = AutoModel.from_config(config) engine = deepspeed.initialize(model=model, config_params=ds_config, ) Please note that if you're not using the [`Trainer`] integration, you're completely on your own. Basically follow the documentation on the [Deepspeed](https://www.deepspeed.ai/) website. Also you have to configure explicitly the config file - you can't use `""auto""` values and you will have to put real values instead. ## HfDeepSpeedConfig [[autodoc]] integrations.HfDeepSpeedConfig - all ### Custom DeepSpeed ZeRO Inference Here is an example of how one could do DeepSpeed ZeRO Inference without using [`Trainer`] when one can't fit a model onto a single GPU. The solution includes using additional GPUs or/and offloading GPU memory to CPU memory. The important nuance to understand here is that the way ZeRO is designed you can process different inputs on different GPUs in parallel. The example has copious notes and is self-documenting. Make sure to: 1. disable CPU offload if you have enough GPU memory (since it slows things down) 2. enable bf16 if you own an Ampere or a newer GPU to make things faster. If you don't have that hardware you may enable fp16 as long as you don't use any model that was pre-trained in bf16 mixed precision (such as most t5 models). These usually overflow in fp16 and you will see garbage as output. thon #!/usr/bin/env python # This script demonstrates how to use Deepspeed ZeRO in an inference mode when one can't fit a model # into a single GPU # # 1. Use 1 GPU with CPU offload # 2. Or use multiple GPUs instead # # First you need to install deepspeed: pip install deepspeed # # Here we use a 3B ""bigscience/T0_3B"" model which needs about 15GB GPU RAM - so 1 largish or 2 # small GPUs can handle it. or 1 small GPU and a lot of CPU memory. # # To use a larger model like ""bigscience/T0"" which needs about 50GB, unless you have an 80GB GPU - # you will need 2-4 gpus. And then you can adapt the script to handle more gpus if you want to # process multiple inputs at once. # # The provided deepspeed config also activates CPU memory offloading, so chances are that if you # have a lot of available CPU memory and you don't mind a slowdown you should be able to load a # model that doesn't normally fit into a single GPU. 
If you have enough GPU memory the program will # run faster if you don't want offload to CPU - so disable that section then. # # To deploy on 1 gpu: # # deepspeed --num_gpus 1 t0.py # or: # python -m torch.distributed.run --nproc_per_node=1 t0.py # # To deploy on 2 gpus: # # deepspeed --num_gpus 2 t0.py # or: # python -m torch.distributed.run --nproc_per_node=2 t0.py from transformers import AutoTokenizer, AutoConfig, AutoModelForSeq2SeqLM from transformers.integrations import HfDeepSpeedConfig import deepspeed import os import torch os.environ[""TOKENIZERS_PARALLELISM""] = ""false"" # To avoid warnings about parallelism in tokenizers # distributed setup local_rank = int(os.getenv(""LOCAL_RANK"", ""0"")) world_size = int(os.getenv(""WORLD_SIZE"", ""1"")) torch.cuda.set_device(local_rank) deepspeed.init_distributed() model_name = ""bigscience/T0_3B"" config = AutoConfig.from_pretrained(model_name) model_hidden_size = config.d_model # batch size has to be divisible by world_size, but can be bigger than world_size train_batch_size = 1 * world_size # ds_config notes # # - enable bf16 if you use Ampere or higher GPU - this will run in mixed precision and will be # faster. # # - for older GPUs you can enable fp16, but it'll only work for non-bf16 pretrained models - e.g. # all official t5 models are bf16-pretrained # # - set offload_param.device to ""none"" or completely remove the `offload_param` section if you don't # - want CPU offload # # - if using `offload_param` you can manually finetune stage3_param_persistence_threshold to control # - which params should remain on gpus - the larger the value the smaller the offload size # # For indepth info on Deepspeed config see # https://huggingface.co/docs/transformers/main/main_classes/deepspeed # keeping the same format as json for consistency, except it uses lower case for true/false # fmt: off ds_config = { ""fp16"": { ""enabled"": False }, ""bf16"": { ""enabled"": False }, ""zero_optimization"": { ""stage"": 3, ""offload_param"": { ""device"": ""cpu"", ""pin_memory"": True }, ""overlap_comm"": True, ""contiguous_gradients"": True, ""reduce_bucket_size"": model_hidden_size * model_hidden_size, ""stage3_prefetch_bucket_size"": 0.9 * model_hidden_size * model_hidden_size, ""stage3_param_persistence_threshold"": 10 * model_hidden_size }, ""steps_per_print"": 2000, ""train_batch_size"": train_batch_size, ""train_micro_batch_size_per_gpu"": 1, ""wall_clock_breakdown"": False } # fmt: on # next line instructs transformers to partition the model directly over multiple gpus using # deepspeed.zero.Init when model's `from_pretrained` method is called. # # **it has to be run before loading the model AutoModelForSeq2SeqLM.from_pretrained(model_name)** # # otherwise the model will first be loaded normally and only partitioned at forward time which is # less efficient and when there is little CPU RAM may fail dschf = HfDeepSpeedConfig(ds_config) # keep this object alive # now a model can be loaded. model = AutoModelForSeq2SeqLM.from_pretrained(model_name) # initialise Deepspeed ZeRO and store only the engine object ds_engine = deepspeed.initialize(model=model, config_params=ds_config)[0] ds_engine.module.eval() # inference # Deepspeed ZeRO can process unrelated inputs on each GPU. So for 2 gpus you process 2 inputs at once. # If you use more GPUs adjust for more. # And of course if you have just one input to process you then need to pass the same string to both gpus # If you use only one GPU, then you will have only rank 0. 
rank = torch.distributed.get_rank()
if rank == 0:
    text_in = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
elif rank == 1:
    text_in = "Is this review positive or negative? Review: this is the worst restaurant ever"

tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer.encode(text_in, return_tensors="pt").to(device=local_rank)
with torch.no_grad():
    outputs = ds_engine.module.generate(inputs, synced_gpus=True)
text_out = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"rank{rank}:\n   in={text_in}\n  out={text_out}")

Let's save it as `t0.py` and run it:

$ deepspeed --num_gpus 2 t0.py
rank0:
   in=Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy
   out=Positive
rank1:
   in=Is this review positive or negative? Review: this is the worst restaurant ever
   out=negative

This was a very basic example and you will want to adapt it to your needs.

### `generate` nuances

When using multiple GPUs with ZeRO Stage-3, one has to synchronize the GPUs by calling `generate(..., synced_gpus=True)`. If this is not done and one GPU finishes generating before the others, the whole system will hang, as the remaining GPUs will not be able to receive the shard of weights from the GPU that stopped generating.

Starting from `transformers>=4.28`, if `synced_gpus` isn't explicitly specified, it'll be set to `True` automatically if these conditions are detected. But you can still override the value of `synced_gpus` if you need to.

## Testing Deepspeed Integration

If you submit a PR that involves DeepSpeed integration, please note our CircleCI PR CI setup has no GPUs, so we only run tests requiring GPUs on a different CI nightly. Therefore if you get a green CI report in your PR it doesn't mean the DeepSpeed tests pass.

To run the DeepSpeed tests, please run at least:

RUN_SLOW=1 pytest tests/deepspeed/test_deepspeed.py

If you changed any of the modeling or PyTorch examples code, then run the model zoo tests as well. The following will run all DeepSpeed tests:

RUN_SLOW=1 pytest tests/deepspeed

## Main DeepSpeed Resources

- [Project's github](https://github.com/microsoft/deepspeed)
- [Usage docs](https://www.deepspeed.ai/getting-started/)
- [API docs](https://deepspeed.readthedocs.io/en/latest/index.html)
- [Blog posts](https://www.microsoft.com/en-us/research/search/?q=deepspeed)

Papers:

- [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models](https://arxiv.org/abs/1910.02054)
- [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://arxiv.org/abs/2101.06840)
- [ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning](https://arxiv.org/abs/2104.07857)

Finally, please remember that the HuggingFace [`Trainer`] only integrates DeepSpeed, therefore if you have any problems or questions with regards to DeepSpeed usage, please file an issue with [DeepSpeed GitHub](https://github.com/microsoft/DeepSpeed/issues).
" main_classes/data_collator.md," # Data Collator

Data collators are objects that will form a batch by using a list of dataset elements as input. These elements are of the same type as the elements of `train_dataset` or `eval_dataset`.

To be able to build batches, data collators may apply some processing (like padding). Some of them (like [`DataCollatorForLanguageModeling`]) also apply some random data augmentation (like random masking) on the formed batch.

Examples of use can be found in the [example scripts](../examples) or [example notebooks](../notebooks).
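For instance, here is a minimal sketch of a padding collator turning a list of tokenized examples into a rectangular batch. The checkpoint name is only illustrative; any tokenizer with a padding token works:

```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="pt")

# elements as they would come out of a tokenized `train_dataset`
features = [tokenizer("a short example"), tokenizer("a somewhat longer example sentence")]

batch = collator(features)
print(batch["input_ids"].shape)  # both sequences are padded to the same length
```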
## Default data collator [[autodoc]] data.data_collator.default_data_collator ## DefaultDataCollator [[autodoc]] data.data_collator.DefaultDataCollator ## DataCollatorWithPadding [[autodoc]] data.data_collator.DataCollatorWithPadding ## DataCollatorForTokenClassification [[autodoc]] data.data_collator.DataCollatorForTokenClassification ## DataCollatorForSeq2Seq [[autodoc]] data.data_collator.DataCollatorForSeq2Seq ## DataCollatorForLanguageModeling [[autodoc]] data.data_collator.DataCollatorForLanguageModeling - numpy_mask_tokens - tf_mask_tokens - torch_mask_tokens ## DataCollatorForWholeWordMask [[autodoc]] data.data_collator.DataCollatorForWholeWordMask - numpy_mask_tokens - tf_mask_tokens - torch_mask_tokens ## DataCollatorForPermutationLanguageModeling [[autodoc]] data.data_collator.DataCollatorForPermutationLanguageModeling - numpy_mask_tokens - tf_mask_tokens - torch_mask_tokens " main_classes/model.md," # Models The base classes [`PreTrainedModel`], [`TFPreTrainedModel`], and [`FlaxPreTrainedModel`] implement the common methods for loading/saving a model either from a local file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS S3 repository). [`PreTrainedModel`] and [`TFPreTrainedModel`] also implement a few methods which are common among all the models to: - resize the input token embeddings when new tokens are added to the vocabulary - prune the attention heads of the model. The other methods that are common to each model are defined in [`~modeling_utils.ModuleUtilsMixin`] (for the PyTorch models) and [`~modeling_tf_utils.TFModuleUtilsMixin`] (for the TensorFlow models) or for text generation, [`~generation.GenerationMixin`] (for the PyTorch models), [`~generation.TFGenerationMixin`] (for the TensorFlow models) and [`~generation.FlaxGenerationMixin`] (for the Flax/JAX models). ## PreTrainedModel [[autodoc]] PreTrainedModel - push_to_hub - all ### Large model loading In Transformers 4.20.0, the [`~PreTrainedModel.from_pretrained`] method has been reworked to accommodate large models using [Accelerate](https://huggingface.co/docs/accelerate/big_modeling). This requires Accelerate >= 0.9.0 and PyTorch >= 1.9.0. Instead of creating the full model, then loading the pretrained weights inside it (which takes twice the size of the model in RAM, one for the randomly initialized model, one for the weights), there is an option to create the model as an empty shell, then only materialize its parameters when the pretrained weights are loaded. This option can be activated with `low_cpu_mem_usage=True`. The model is first created on the Meta device (with empty weights) and the state dict is then loaded inside it (shard by shard in the case of a sharded checkpoint). This way the maximum RAM used is the full size of the model only. from transformers import AutoModelForSeq2SeqLM t0pp = AutoModelForSeq2SeqLM.from_pretrained(""bigscience/T0pp"", low_cpu_mem_usage=True) Moreover, you can directly place the model on different devices if it doesn't fully fit in RAM (only works for inference for now). With `device_map=""auto""`, Accelerate will determine where to put each layer to maximize the use of your fastest devices (GPUs) and offload the rest on the CPU, or even the hard drive if you don't have enough GPU RAM (or CPU RAM). Even if the model is split across several devices, it will run as you would normally expect. 
When passing a `device_map`, `low_cpu_mem_usage` is automatically set to `True`, so you don't need to specify it: from transformers import AutoModelForSeq2SeqLM t0pp = AutoModelForSeq2SeqLM.from_pretrained(""bigscience/T0pp"", device_map=""auto"") You can inspect how the model was split across devices by looking at its `hf_device_map` attribute: t0pp.hf_device_map thon out {'shared': 0, 'decoder.embed_tokens': 0, 'encoder': 0, 'decoder.block.0': 0, 'decoder.block.1': 1, 'decoder.block.2': 1, 'decoder.block.3': 1, 'decoder.block.4': 1, 'decoder.block.5': 1, 'decoder.block.6': 1, 'decoder.block.7': 1, 'decoder.block.8': 1, 'decoder.block.9': 1, 'decoder.block.10': 1, 'decoder.block.11': 1, 'decoder.block.12': 1, 'decoder.block.13': 1, 'decoder.block.14': 1, 'decoder.block.15': 1, 'decoder.block.16': 1, 'decoder.block.17': 1, 'decoder.block.18': 1, 'decoder.block.19': 1, 'decoder.block.20': 1, 'decoder.block.21': 1, 'decoder.block.22': 'cpu', 'decoder.block.23': 'cpu', 'decoder.final_layer_norm': 'cpu', 'decoder.dropout': 'cpu', 'lm_head': 'cpu'} You can also write your own device map following the same format (a dictionary layer name to device). It should map all parameters of the model to a given device, but you don't have to detail where all the submodules of one layer go if that layer is entirely on the same device. For instance, the following device map would work properly for T0pp (as long as you have the GPU memory): thon device_map = {""shared"": 0, ""encoder"": 0, ""decoder"": 1, ""lm_head"": 1} Another way to minimize the memory impact of your model is to instantiate it at a lower precision dtype (like `torch.float16`) or use direct quantization techniques as described below. ### Model Instantiation dtype Under Pytorch a model normally gets instantiated with `torch.float32` format. This can be an issue if one tries to load a model whose weights are in fp16, since it'd require twice as much memory. To overcome this limitation, you can either explicitly pass the desired `dtype` using `torch_dtype` argument: thon model = T5ForConditionalGeneration.from_pretrained(""t5"", torch_dtype=torch.float16) or, if you want the model to always load in the most optimal memory pattern, you can use the special value `""auto""`, and then `dtype` will be automatically derived from the model's weights: thon model = T5ForConditionalGeneration.from_pretrained(""t5"", torch_dtype=""auto"") Models instantiated from scratch can also be told which `dtype` to use with: thon config = T5Config.from_pretrained(""t5"") model = AutoModel.from_config(config) Due to Pytorch design, this functionality is only available for floating dtypes. ## ModuleUtilsMixin [[autodoc]] modeling_utils.ModuleUtilsMixin ## TFPreTrainedModel [[autodoc]] TFPreTrainedModel - push_to_hub - all ## TFModelUtilsMixin [[autodoc]] modeling_tf_utils.TFModelUtilsMixin ## FlaxPreTrainedModel [[autodoc]] FlaxPreTrainedModel - push_to_hub - all ## Pushing to the Hub [[autodoc]] utils.PushToHubMixin ## Sharded checkpoints [[autodoc]] modeling_utils.load_sharded_checkpoint " main_classes/processors.md," # Processors Processors can mean two different things in the Transformers library: - the objects that pre-process inputs for multi-modal models such as [Wav2Vec2](../model_doc/wav2vec2) (speech and text) or [CLIP](../model_doc/clip) (text and vision) - deprecated objects that were used in older versions of the library to preprocess data for GLUE or SQUAD. 
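As a quick sketch of the first kind before going into details, a multi-modal processor prepares all modalities in a single call. The CLIP checkpoint and image URL below are the ones commonly used in the library's examples and are shown here only as an illustration:

```python
import requests
from PIL import Image
from transformers import CLIPProcessor

# the processor bundles a tokenizer (text) and an image processor (vision)
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# one call prepares both modalities for the model
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
print(inputs.keys())  # input_ids, attention_mask, pixel_values
```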
## Multi-modal processors Any multi-modal model will require an object to encode or decode the data that groups several modalities (among text, vision and audio). This is handled by objects called processors, which group together two or more processing objects such as tokenizers (for the text modality), image processors (for vision) and feature extractors (for audio). Those processors inherit from the following base class that implements the saving and loading functionality: [[autodoc]] ProcessorMixin ## Deprecated processors All processors follow the same architecture which is that of the [`~data.processors.utils.DataProcessor`]. The processor returns a list of [`~data.processors.utils.InputExample`]. These [`~data.processors.utils.InputExample`] can be converted to [`~data.processors.utils.InputFeatures`] in order to be fed to the model. [[autodoc]] data.processors.utils.DataProcessor [[autodoc]] data.processors.utils.InputExample [[autodoc]] data.processors.utils.InputFeatures ## GLUE [General Language Understanding Evaluation (GLUE)](https://gluebenchmark.com/) is a benchmark that evaluates the performance of models across a diverse set of existing NLU tasks. It was released together with the paper [GLUE: A multi-task benchmark and analysis platform for natural language understanding](https://openreview.net/pdf?id=rJ4km2R5t7) This library hosts a total of 10 processors for the following tasks: MRPC, MNLI, MNLI (mismatched), CoLA, SST2, STSB, QQP, QNLI, RTE and WNLI. Those processors are: - [`~data.processors.utils.MrpcProcessor`] - [`~data.processors.utils.MnliProcessor`] - [`~data.processors.utils.MnliMismatchedProcessor`] - [`~data.processors.utils.Sst2Processor`] - [`~data.processors.utils.StsbProcessor`] - [`~data.processors.utils.QqpProcessor`] - [`~data.processors.utils.QnliProcessor`] - [`~data.processors.utils.RteProcessor`] - [`~data.processors.utils.WnliProcessor`] Additionally, the following method can be used to load values from a data file and convert them to a list of [`~data.processors.utils.InputExample`]. [[autodoc]] data.processors.glue.glue_convert_examples_to_features ## XNLI [The Cross-Lingual NLI Corpus (XNLI)](https://www.nyu.edu/projects/bowman/xnli/) is a benchmark that evaluates the quality of cross-lingual text representations. XNLI is crowd-sourced dataset based on [*MultiNLI*](http://www.nyu.edu/projects/bowman/multinli/): pairs of text are labeled with textual entailment annotations for 15 different languages (including both high-resource language such as English and low-resource languages such as Swahili). It was released together with the paper [XNLI: Evaluating Cross-lingual Sentence Representations](https://arxiv.org/abs/1809.05053) This library hosts the processor to load the XNLI data: - [`~data.processors.utils.XnliProcessor`] Please note that since the gold labels are available on the test set, evaluation is performed on the test set. An example using these processors is given in the [run_xnli.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification/run_xnli.py) script. ## SQuAD [The Stanford Question Answering Dataset (SQuAD)](https://rajpurkar.github.io/SQuAD-explorer//) is a benchmark that evaluates the performance of models on question answering. Two versions are available, v1.1 and v2.0. The first version (v1.1) was released together with the paper [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250). 
The second version (v2.0) was released alongside the paper [Know What You Don't Know: Unanswerable Questions for SQuAD](https://arxiv.org/abs/1806.03822). This library hosts a processor for each of the two versions: ### Processors Those processors are: - [`~data.processors.utils.SquadV1Processor`] - [`~data.processors.utils.SquadV2Processor`] They both inherit from the abstract class [`~data.processors.utils.SquadProcessor`] [[autodoc]] data.processors.squad.SquadProcessor - all Additionally, the following method can be used to convert SQuAD examples into [`~data.processors.utils.SquadFeatures`] that can be used as model inputs. [[autodoc]] data.processors.squad.squad_convert_examples_to_features These processors as well as the aforementioned method can be used with files containing the data as well as with the *tensorflow_datasets* package. Examples are given below. ### Example usage Here is an example using the processors as well as the conversion method using data files: thon # Loading a V2 processor processor = SquadV2Processor() examples = processor.get_dev_examples(squad_v2_data_dir) # Loading a V1 processor processor = SquadV1Processor() examples = processor.get_dev_examples(squad_v1_data_dir) features = squad_convert_examples_to_features( examples=examples, tokenizer=tokenizer, max_seq_length=max_seq_length, doc_stride=args.doc_stride, max_query_length=max_query_length, is_training=not evaluate, ) Using *tensorflow_datasets* is as easy as using a data file: thon # tensorflow_datasets only handle Squad V1. tfds_examples = tfds.load(""squad"") examples = SquadV1Processor().get_examples_from_dataset(tfds_examples, evaluate=evaluate) features = squad_convert_examples_to_features( examples=examples, tokenizer=tokenizer, max_seq_length=max_seq_length, doc_stride=args.doc_stride, max_query_length=max_query_length, is_training=not evaluate, ) Another example using these processors is given in the [run_squad.py](https://github.com/huggingface/transformers/tree/main/examples/legacy/question-answering/run_squad.py) script. " main_classes/tokenizer.md," # Tokenizer A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most of the tokenizers are available in two flavors: a full python implementation and a ""Fast"" implementation based on the Rust library [🤗 Tokenizers](https://github.com/huggingface/tokenizers). The ""Fast"" implementations allows: 1. a significant speed-up in particular when doing batched tokenization and 2. additional methods to map between the original string (character and words) and the token space (e.g. getting the index of the token comprising a given character or the span of characters corresponding to a given token). The base classes [`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`] implement the common methods for encoding string inputs in model inputs (see below) and instantiating/saving python and ""Fast"" tokenizers either from a local file or directory or from a pretrained tokenizer provided by the library (downloaded from HuggingFace's AWS S3 repository). They both rely on [`~tokenization_utils_base.PreTrainedTokenizerBase`] that contains the common methods, and [`~tokenization_utils_base.SpecialTokensMixin`]. [`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`] thus implement the main methods for using all the tokenizers: - Tokenizing (splitting strings in sub-word token strings), converting tokens strings to ids and back, and encoding/decoding (i.e., tokenizing and converting to integers). 
- Adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece). - Managing special tokens (like mask, beginning-of-sentence, etc.): adding them, assigning them to attributes in the tokenizer for easy access and making sure they are not split during tokenization. [`BatchEncoding`] holds the output of the [`~tokenization_utils_base.PreTrainedTokenizerBase`]'s encoding methods (`__call__`, `encode_plus` and `batch_encode_plus`) and is derived from a Python dictionary. When the tokenizer is a pure python tokenizer, this class behaves just like a standard python dictionary and holds the various model inputs computed by these methods (`input_ids`, `attention_mask`). When the tokenizer is a ""Fast"" tokenizer (i.e., backed by HuggingFace [tokenizers library](https://github.com/huggingface/tokenizers)), this class provides in addition several advanced alignment methods which can be used to map between the original string (character and words) and the token space (e.g., getting the index of the token comprising a given character or the span of characters corresponding to a given token). ## PreTrainedTokenizer [[autodoc]] PreTrainedTokenizer - __call__ - add_tokens - add_special_tokens - apply_chat_template - batch_decode - decode - encode - push_to_hub - all ## PreTrainedTokenizerFast The [`PreTrainedTokenizerFast`] depend on the [tokenizers](https://huggingface.co/docs/tokenizers) library. The tokenizers obtained from the 🤗 tokenizers library can be loaded very simply into 🤗 transformers. Take a look at the [Using tokenizers from 🤗 tokenizers](../fast_tokenizers) page to understand how this is done. [[autodoc]] PreTrainedTokenizerFast - __call__ - add_tokens - add_special_tokens - apply_chat_template - batch_decode - decode - encode - push_to_hub - all ## BatchEncoding [[autodoc]] BatchEncoding " main_classes/trainer.md," # Trainer The [`Trainer`] class provides an API for feature-complete training in PyTorch for most standard use cases. It's used in most of the [example scripts](https://github.com/huggingface/transformers/tree/main/examples). If you're looking to fine-tune a language model like Llama-2 or Mistral on a text dataset using autoregressive techniques, consider using [`trl`](https://github.com/huggingface/trl)'s [`~trl.SFTTrainer`]. The [`~trl.SFTTrainer`] wraps the [`Trainer`] and is specially optimized for this particular task and supports sequence packing, LoRA, quantization, and DeepSpeed for efficient scaling to any model size. On the other hand, the [`Trainer`] is a more versatile option, suitable for a broader spectrum of tasks. Before instantiating your [`Trainer`], create a [`TrainingArguments`] to access all the points of customization during training. The API supports distributed training on multiple GPUs/TPUs, mixed precision through [NVIDIA Apex](https://github.com/NVIDIA/apex) and Native AMP for PyTorch. The [`Trainer`] contains the basic training loop which supports the above features. To inject custom behavior you can subclass them and override the following methods: - **get_train_dataloader** -- Creates the training DataLoader. - **get_eval_dataloader** -- Creates the evaluation DataLoader. - **get_test_dataloader** -- Creates the test DataLoader. - **log** -- Logs information on the various objects watching training. - **create_optimizer_and_scheduler** -- Sets up the optimizer and learning rate scheduler if they were not passed at init. 
Note, that you can also subclass or override the `create_optimizer` and `create_scheduler` methods separately. - **create_optimizer** -- Sets up the optimizer if it wasn't passed at init. - **create_scheduler** -- Sets up the learning rate scheduler if it wasn't passed at init. - **compute_loss** - Computes the loss on a batch of training inputs. - **training_step** -- Performs a training step. - **prediction_step** -- Performs an evaluation/test step. - **evaluate** -- Runs an evaluation loop and returns metrics. - **predict** -- Returns predictions (with metrics if labels are available) on a test set. The [`Trainer`] class is optimized for 🤗 Transformers models and can have surprising behaviors when you use it on other models. When using it on your own model, make sure: - your model always return tuples or subclasses of [`~utils.ModelOutput`]. - your model can compute the loss if a `labels` argument is provided and that loss is returned as the first element of the tuple (if your model returns tuples) - your model can accept multiple label arguments (use the `label_names` in your [`TrainingArguments`] to indicate their name to the [`Trainer`]) but none of them should be named `""label""`. Here is an example of how to customize [`Trainer`] to use a weighted loss (useful when you have an unbalanced training set): thon from torch import nn from transformers import Trainer class CustomTrainer(Trainer): def compute_loss(self, model, inputs, return_outputs=False): labels = inputs.pop(""labels"") # forward pass outputs = model(**inputs) logits = outputs.get(""logits"") # compute custom loss (suppose one has 3 labels with different weights) loss_fct = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0, 3.0], device=model.device)) loss = loss_fct(logits.view(-1, self.model.config.num_labels), labels.view(-1)) return (loss, outputs) if return_outputs else loss Another way to customize the training loop behavior for the PyTorch [`Trainer`] is to use [callbacks](callback) that can inspect the training loop state (for progress reporting, logging on TensorBoard or other ML platforms) and take decisions (like early stopping). ## Trainer [[autodoc]] Trainer - all ## Seq2SeqTrainer [[autodoc]] Seq2SeqTrainer - evaluate - predict ## TrainingArguments [[autodoc]] TrainingArguments - all ## Seq2SeqTrainingArguments [[autodoc]] Seq2SeqTrainingArguments - all ## Checkpoints By default, [`Trainer`] will save all checkpoints in the `output_dir` you set in the [`TrainingArguments`] you are using. Those will go in subfolder named `checkpoint-xxx` with xxx being the step at which the training was at. Resuming training from a checkpoint can be done when calling [`Trainer.train`] with either: - `resume_from_checkpoint=True` which will resume training from the latest checkpoint - `resume_from_checkpoint=checkpoint_dir` which will resume training from the specific checkpoint in the directory passed. In addition, you can easily save your checkpoints on the Model Hub when using `push_to_hub=True`. By default, all the models saved in intermediate checkpoints are saved in different commits, but not the optimizer state. You can adapt the `hub-strategy` value of your [`TrainingArguments`] to either: - `""checkpoint""`: the latest checkpoint is also pushed in a subfolder named last-checkpoint, allowing you to resume training easily with `trainer.train(resume_from_checkpoint=""output_dir/last-checkpoint"")`. 
- `""all_checkpoints""`: all checkpoints are pushed like they appear in the output folder (so you will get one checkpoint folder per folder in your final repository) ## Logging By default [`Trainer`] will use `logging.INFO` for the main process and `logging.WARNING` for the replicas if any. These defaults can be overridden to use any of the 5 `logging` levels with [`TrainingArguments`]'s arguments: - `log_level` - for the main process - `log_level_replica` - for the replicas Further, if [`TrainingArguments`]'s `log_on_each_node` is set to `False` only the main node will use the log level settings for its main process, all other nodes will use the log level settings for replicas. Note that [`Trainer`] is going to set `transformers`'s log level separately for each node in its [`Trainer.__init__`]. So you may want to set this sooner (see the next example) if you tap into other `transformers` functionality before creating the [`Trainer`] object. Here is an example of how this can be used in an application: thon [] logger = logging.getLogger(__name__) # Setup logging logging.basicConfig( format=""%(asctime)s - %(levelname)s - %(name)s - %(message)s"", datefmt=""%m/%d/%Y %H:%M:%S"", handlers=[logging.StreamHandler(sys.stdout)], ) # set the main code and the modules it uses to the same log-level according to the node log_level = training_args.get_process_log_level() logger.setLevel(log_level) datasets.utils.logging.set_verbosity(log_level) transformers.utils.logging.set_verbosity(log_level) trainer = Trainer() And then if you only want to see warnings on the main node and all other nodes to not print any most likely duplicated warnings you could run it as: ```bash my_app.py --log_level warning --log_level_replica error In the multi-node environment if you also don't want the logs to repeat for each node's main process, you will want to change the above to: ```bash my_app.py --log_level warning --log_level_replica error --log_on_each_node 0 and then only the main process of the first node will log at the ""warning"" level, and all other processes on the main node and all processes on other nodes will log at the ""error"" level. If you need your application to be as quiet as possible you could do: ```bash my_app.py --log_level error --log_level_replica error --log_on_each_node 0 (add `--log_on_each_node 0` if on multi-node environment) ## Randomness When resuming from a checkpoint generated by [`Trainer`] all efforts are made to restore the _python_, _numpy_ and _pytorch_ RNG states to the same states as they were at the moment of saving that checkpoint, which should make the ""stop and resume"" style of training as close as possible to non-stop training. However, due to various default non-deterministic pytorch settings this might not fully work. If you want full determinism please refer to [Controlling sources of randomness](https://pytorch.org/docs/stable/notes/randomness). As explained in the document, that some of those settings that make things deterministic (.e.g., `torch.backends.cudnn.deterministic`) may slow things down, therefore this can't be done by default, but you can enable those yourself if needed. ## Specific GPUs Selection Let's discuss how you can tell your program which GPUs are to be used and in what order. When using [`DistributedDataParallel`](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html) to use only a subset of your GPUs, you simply specify the number of GPUs to use. 
For example, if you have 4 GPUs, but you wish to use the first 2 you can do: ```bash python -m torch.distributed.launch --nproc_per_node=2 trainer-program.py if you have either [`accelerate`](https://github.com/huggingface/accelerate) or [`deepspeed`](https://github.com/microsoft/DeepSpeed) installed you can also accomplish the same by using one of: ```bash accelerate launch --num_processes 2 trainer-program.py ```bash deepspeed --num_gpus 2 trainer-program.py You don't need to use the Accelerate or [the Deepspeed integration](deepspeed) features to use these launchers. Until now you were able to tell the program how many GPUs to use. Now let's discuss how to select specific GPUs and control their order. The following environment variables help you control which GPUs to use and their order. **`CUDA_VISIBLE_DEVICES`** If you have multiple GPUs and you'd like to use only 1 or a few of those GPUs, set the environment variable `CUDA_VISIBLE_DEVICES` to a list of the GPUs to be used. For example, let's say you have 4 GPUs: 0, 1, 2 and 3. To run only on the physical GPUs 0 and 2, you can do: ```bash CUDA_VISIBLE_DEVICES=0,2 python -m torch.distributed.launch trainer-program.py So now pytorch will see only 2 GPUs, where your physical GPUs 0 and 2 are mapped to `cuda:0` and `cuda:1` correspondingly. You can even change their order: ```bash CUDA_VISIBLE_DEVICES=2,0 python -m torch.distributed.launch trainer-program.py Here your physical GPUs 0 and 2 are mapped to `cuda:1` and `cuda:0` correspondingly. The above examples were all for `DistributedDataParallel` use pattern, but the same method works for [`DataParallel`](https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html) as well: ```bash CUDA_VISIBLE_DEVICES=2,0 python trainer-program.py To emulate an environment without GPUs simply set this environment variable to an empty value like so: ```bash CUDA_VISIBLE_DEVICES= python trainer-program.py As with any environment variable you can, of course, export those instead of adding these to the command line, as in: ```bash export CUDA_VISIBLE_DEVICES=0,2 python -m torch.distributed.launch trainer-program.py but this approach can be confusing since you may forget you set up the environment variable earlier and not understand why the wrong GPUs are used. Therefore, it's a common practice to set the environment variable just for a specific run on the same command line as it's shown in most examples of this section. **`CUDA_DEVICE_ORDER`** There is an additional environment variable `CUDA_DEVICE_ORDER` that controls how the physical devices are ordered. The two choices are: 1. ordered by PCIe bus IDs (matches `nvidia-smi`'s order) - this is the default. ```bash export CUDA_DEVICE_ORDER=PCI_BUS_ID 2. ordered by GPU compute capabilities ```bash export CUDA_DEVICE_ORDER=FASTEST_FIRST Most of the time you don't need to care about this environment variable, but it's very helpful if you have a lopsided setup where you have an old and a new GPUs physically inserted in such a way so that the slow older card appears to be first. One way to fix that is to swap the cards. But if you can't swap the cards (e.g., if the cooling of the devices gets impacted) then setting `CUDA_DEVICE_ORDER=FASTEST_FIRST` will always put the newer faster card first. It'll be somewhat confusing though since `nvidia-smi` will still report them in the PCIe order. 
The other solution to swapping the order is to use: ```bash export CUDA_VISIBLE_DEVICES=1,0 In this example we are working with just 2 GPUs, but of course the same would apply to as many GPUs as your computer has. Also if you do set this environment variable it's the best to set it in your `~/.bashrc` file or some other startup config file and forget about it. ## Trainer Integrations The [`Trainer`] has been extended to support libraries that may dramatically improve your training time and fit much bigger models. Currently it supports third party solutions, [DeepSpeed](https://github.com/microsoft/DeepSpeed) and [PyTorch FSDP](https://pytorch.org/docs/stable/fsdp.html), which implement parts of the paper [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models, by Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He](https://arxiv.org/abs/1910.02054). This provided support is new and experimental as of this writing. While the support for DeepSpeed and PyTorch FSDP is active and we welcome issues around it, we don't support the FairScale integration anymore since it has been integrated in PyTorch main (see the [PyTorch FSDP integration](#pytorch-fully-sharded-data-parallel)) ### CUDA Extension Installation Notes As of this writing, Deepspeed require compilation of CUDA C++ code, before it can be used. While all installation issues should be dealt with through the corresponding GitHub Issues of [Deepspeed](https://github.com/microsoft/DeepSpeed/issues), there are a few common issues that one may encounter while building any PyTorch extension that needs to build CUDA extensions. Therefore, if you encounter a CUDA-related build issue while doing the following: ```bash pip install deepspeed please, read the following notes first. In these notes we give examples for what to do when `pytorch` has been built with CUDA `10.2`. If your situation is different remember to adjust the version number to the one you are after. #### Possible problem #1 While, Pytorch comes with its own CUDA toolkit, to build these two projects you must have an identical version of CUDA installed system-wide. For example, if you installed `pytorch` with `cudatoolkit==10.2` in the Python environment, you also need to have CUDA `10.2` installed system-wide. The exact location may vary from system to system, but `/usr/local/cuda-10.2` is the most common location on many Unix systems. When CUDA is correctly set up and added to the `PATH` environment variable, one can find the installation location by doing: ```bash which nvcc If you don't have CUDA installed system-wide, install it first. You will find the instructions by using your favorite search engine. For example, if you're on Ubuntu you may want to search for: [ubuntu cuda 10.2 install](https://www.google.com/search?q=ubuntu+cuda+10.2+install). #### Possible problem #2 Another possible common problem is that you may have more than one CUDA toolkit installed system-wide. For example you may have: ```bash /usr/local/cuda-10.2 /usr/local/cuda-11.0 Now, in this situation you need to make sure that your `PATH` and `LD_LIBRARY_PATH` environment variables contain the correct paths to the desired CUDA version. Typically, package installers will set these to contain whatever the last version was installed. If you encounter the problem, where the package build fails because it can't find the right CUDA version despite you having it installed system-wide, it means that you need to adjust the 2 aforementioned environment variables. 
First, look at their contents:

```bash
echo $PATH
echo $LD_LIBRARY_PATH
```

so you get an idea of what is inside. It's possible that `LD_LIBRARY_PATH` is empty. `PATH` lists the locations where executables can be found, and `LD_LIBRARY_PATH` lists where shared libraries are looked for. In both cases, earlier entries have priority over later ones. `:` is used to separate multiple entries. Now, to tell the build program where to find the specific CUDA toolkit, insert the desired paths so that they are listed first:

```bash
export PATH=/usr/local/cuda-10.2/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH
```

Note that we aren't overwriting the existing values, but prepending to them. Of course, adjust the version number and the full path if need be, and check that the directories you assign actually exist. The `lib64` sub-directory is where the various CUDA `.so` objects, like `libcudart.so`, reside. It's unlikely that your system will have it named differently, but if it does, adjust the path to reflect your reality.

#### Possible problem #3

Some older CUDA versions may refuse to build with newer compilers. For example, you may have `gcc-9`, but CUDA wants `gcc-7`. There are various ways to deal with it. If you can install the latest CUDA toolkit, it typically should support the newer compiler. Alternatively, you could install the lower version of the compiler in addition to the one you already have, or you may already have it but it's not the default one, so the build system can't see it. If you have `gcc-7` installed but the build system complains it can't find it, the following might do the trick:

```bash
sudo ln -s /usr/bin/gcc-7 /usr/local/cuda-10.2/bin/gcc
sudo ln -s /usr/bin/g++-7 /usr/local/cuda-10.2/bin/g++
```

Here, we are making a symlink to `gcc-7` from `/usr/local/cuda-10.2/bin/gcc`, and since `/usr/local/cuda-10.2/bin/` should be in the `PATH` environment variable (see the previous problem's solution), it should find `gcc-7` (and `g++-7`) and then the build will succeed. As always, make sure to edit the paths in the example to match your situation.

### PyTorch Fully Sharded Data Parallel

To accelerate training huge models on larger batch sizes, we can use a fully sharded data parallel model. This type of data parallel paradigm enables fitting more data and larger models by sharding the optimizer states, gradients and parameters. To read more about it and its benefits, check out the [Fully Sharded Data Parallel blog](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/). We have integrated the latest PyTorch Fully Sharded Data Parallel (FSDP) training feature. All you need to do is enable it through the config.

**Required PyTorch version for FSDP support**: PyTorch >=2.1.0

**Usage**:

- Make sure you have added the distributed launcher `-m torch.distributed.launch --nproc_per_node=NUMBER_OF_GPUS_YOU_HAVE` if you haven't been using it already.
- **Sharding Strategy**:
  - FULL_SHARD : Shards optimizer states + gradients + model parameters across data parallel workers/GPUs. For this, add `--fsdp full_shard` to the command line arguments.
  - SHARD_GRAD_OP : Shards optimizer states + gradients across data parallel workers/GPUs. For this, add `--fsdp shard_grad_op` to the command line arguments.
  - NO_SHARD : No sharding. For this, add `--fsdp no_shard` to the command line arguments.
  - HYBRID_SHARD : Applies FULL_SHARD within a node and replicates parameters across nodes. For this, add `--fsdp hybrid_shard` to the command line arguments.
  - HYBRID_SHARD_ZERO2 : Applies SHARD_GRAD_OP within a node and replicates parameters across nodes.
For this, add `--fsdp hybrid_shard_zero2` to the command line arguments.
- To offload the parameters and gradients to the CPU, add `--fsdp "full_shard offload"` or `--fsdp "shard_grad_op offload"` to the command line arguments.
- To automatically and recursively wrap layers with FSDP using `default_auto_wrap_policy`, add `--fsdp "full_shard auto_wrap"` or `--fsdp "shard_grad_op auto_wrap"` to the command line arguments.
- To enable both CPU offloading and auto wrapping, add `--fsdp "full_shard offload auto_wrap"` or `--fsdp "shard_grad_op offload auto_wrap"` to the command line arguments.
- The remaining FSDP config is passed via `--fsdp_config <path_to_fsdp_config.json>`. It is either the location of an FSDP json config file (e.g., `fsdp_config.json`) or an already loaded json file as a `dict`.
- If auto wrapping is enabled, you can either use a transformer based auto wrap policy or a size based auto wrap policy.
  - For the transformer based auto wrap policy, it is recommended to specify `transformer_layer_cls_to_wrap` in the config file. If not specified, the default value is `model._no_split_modules` when available. This specifies the list of transformer layer class names (case-sensitive) to wrap, e.g., [`BertLayer`], [`GPTJBlock`], [`T5Block`]. This is important because submodules that share weights (e.g., the embedding layer) should not end up in different FSDP wrapped units. Using this policy, wrapping happens for each block containing Multi-Head Attention followed by a couple of MLP layers. The remaining layers, including the shared embeddings, are conveniently wrapped in the same outermost FSDP unit. Therefore, use this for transformer based models.
  - For the size based auto wrap policy, please add `min_num_params` in the config file. It specifies FSDP's minimum number of parameters for auto wrapping.
- `backward_prefetch` can be specified in the config file. It controls when to prefetch the next set of parameters. `backward_pre` and `backward_post` are the available options. For more information, refer to `torch.distributed.fsdp.fully_sharded_data_parallel.BackwardPrefetch`.
- `forward_prefetch` can be specified in the config file. It controls when to prefetch the next set of parameters. If `"True"`, FSDP explicitly prefetches the next upcoming all-gather while executing in the forward pass.
- `limit_all_gathers` can be specified in the config file. If `"True"`, FSDP explicitly synchronizes the CPU thread to prevent too many in-flight all-gathers.
- `activation_checkpointing` can be specified in the config file. If `"True"`, FSDP activation checkpointing is used, a technique that reduces memory usage by clearing activations of certain layers and recomputing them during the backward pass. Effectively, this trades extra computation time for reduced memory usage.
- `use_orig_params` can be specified in the config file. If `True`, it allows non-uniform `requires_grad` during init, which means support for interspersed frozen and trainable parameters. This is useful in cases such as parameter-efficient fine-tuning, and it also enables having different optimizer param groups. This should be `True` when creating the optimizer object before preparing/wrapping the model with FSDP. Please refer to this [blog](https://dev-discuss.pytorch.org/t/rethinking-pytorch-fully-sharded-data-parallel-fsdp-from-first-principles/1019).

**Saving and loading**

Saving entire intermediate checkpoints using the `FULL_STATE_DICT` state_dict_type with CPU offloading on rank 0 takes a lot of time and often results in NCCL Timeout errors due to indefinite hanging during broadcasting.
However, at the end of training, we want the whole model state dict instead of the sharded state dict, which is only compatible with FSDP. Use the `SHARDED_STATE_DICT` (default) state_dict_type to save the intermediate checkpoints and optimizer states in this format, as recommended by the PyTorch team. Saving the final checkpoint in the transformers format, using the default `safetensors` format, requires the changes below.

```python
if trainer.is_fsdp_enabled:
    trainer.accelerator.state.fsdp_plugin.set_state_dict_type("FULL_STATE_DICT")

trainer.save_model(script_args.output_dir)
```

**A few caveats to be aware of**

- It is incompatible with `generate`, and thus incompatible with `--predict_with_generate` in all seq2seq/clm scripts (translation/summarization/clm etc.). Please refer to issue [#21667](https://github.com/huggingface/transformers/issues/21667).

### PyTorch/XLA Fully Sharded Data Parallel

For all the TPU users, great news! PyTorch/XLA now supports FSDP. All of the latest Fully Sharded Data Parallel (FSDP) training features are supported. For more information, refer to [Scaling PyTorch models on Cloud TPUs with FSDP](https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/) and the [PyTorch/XLA implementation of FSDP](https://github.com/pytorch/xla/tree/master/torch_xla/distributed/fsdp). All you need to do is enable it through the config.

**Required PyTorch/XLA version for FSDP support**: >=2.0

**Usage**:

Pass `--fsdp "full_shard"` along with the following changes to be made in `--fsdp_config <path_to_fsdp_config.json>`:
- `xla` should be set to `True` to enable PyTorch/XLA FSDP.
- `xla_fsdp_settings` The value is a dictionary which stores the XLA FSDP wrapping parameters. For a complete list of options, please see [here](https://github.com/pytorch/xla/blob/master/torch_xla/distributed/fsdp/xla_fully_sharded_data_parallel.py).
- `xla_fsdp_grad_ckpt`. When `True`, uses gradient checkpointing over each nested XLA FSDP wrapped layer. This setting can only be used when the xla flag is set to true, and an auto wrapping policy is specified through `min_num_params` or `transformer_layer_cls_to_wrap`.
- You can either use a transformer based auto wrap policy or a size based auto wrap policy.
  - For the transformer based auto wrap policy, it is recommended to specify `transformer_layer_cls_to_wrap` in the config file. If not specified, the default value is `model._no_split_modules` when available. This specifies the list of transformer layer class names (case-sensitive) to wrap, e.g., [`BertLayer`], [`GPTJBlock`], [`T5Block`]. This is important because submodules that share weights (e.g., the embedding layer) should not end up in different FSDP wrapped units. Using this policy, wrapping happens for each block containing Multi-Head Attention followed by a couple of MLP layers. The remaining layers, including the shared embeddings, are conveniently wrapped in the same outermost FSDP unit. Therefore, use this for transformer based models.
  - For the size based auto wrap policy, please add `min_num_params` in the config file. It specifies FSDP's minimum number of parameters for auto wrapping.

### Using Trainer for accelerated PyTorch Training on Mac

With the PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training. This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac. Apple's Metal Performance Shaders (MPS) backend for PyTorch enables this and can be used via the new `"mps"` device.
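As a quick sanity check before training, a minimal sketch (assuming a recent PyTorch build on Apple silicon) verifies that the MPS device is usable:

```python
import torch

# Confirm the MPS backend is built in and the device can allocate tensors.
if torch.backends.mps.is_available():
    x = torch.ones(2, 2, device="mps")
    print("MPS is available:", x.device)
else:
    print("MPS not available - check your macOS and PyTorch versions.")
```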
The MPS backend maps computational graphs and primitives onto the MPS Graph framework and onto tuned kernels provided by MPS. For more information, please refer to the official documents [Introducing Accelerated PyTorch Training on Mac](https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/) and [MPS BACKEND](https://pytorch.org/docs/stable/notes/mps.html).

We strongly recommend installing PyTorch >= 1.13 (a nightly version at the time of writing) on your macOS machine. It has major fixes related to model correctness and performance improvements for transformer based models. Please refer to https://github.com/pytorch/pytorch/issues/82707 for more details.

**Benefits of Training and Inference using Apple Silicon Chips**

1. Enables users to train larger networks or batch sizes locally.
2. Reduces data retrieval latency and provides the GPU with direct access to the full memory store due to the unified memory architecture, thereby improving end-to-end performance.
3. Reduces costs associated with cloud-based development or the need for additional local GPUs.

**Pre-requisites**: To install torch with mps support, please follow this nice medium article [GPU-Acceleration Comes to PyTorch on M1 Macs](https://medium.com/towards-data-science/gpu-acceleration-comes-to-pytorch-on-m1-macs-195c399efcc1).

**Usage**:
The `mps` device will be used by default if available, similar to the way the `cuda` device is used, so no action from the user is required. For example, you can run the official GLUE text classification task (from the root folder) on an Apple Silicon GPU with the command below:

```bash
export TASK_NAME=mrpc

python examples/pytorch/text-classification/run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name $TASK_NAME \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/$TASK_NAME/ \
  --overwrite_output_dir
```

**A few caveats to be aware of**

1. Some PyTorch operations have not been implemented in mps and will throw an error. One way to get around that is to set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1`, which will fall back to the CPU for these operations. It still throws a UserWarning, however.
2. The distributed setups `gloo` and `nccl` do not work with the `mps` device. This means that currently only a single GPU of the `mps` device type can be used.

Finally, please remember that 🤗 `Trainer` only integrates the MPS backend, so if you have any problems or questions about MPS backend usage, please file an issue with [PyTorch GitHub](https://github.com/pytorch/pytorch/issues).

## Using Accelerate Launcher with Trainer

Accelerate now powers Trainer. In terms of what users should expect:
- They can keep using the Trainer integrations such as FSDP and DeepSpeed via trainer arguments without any changes on their part.
- They can now use the Accelerate Launcher with Trainer (recommended).

Steps to use the Accelerate Launcher with Trainer:

1. Make sure 🤗 Accelerate is installed; you can't use the `Trainer` without it anyway. If not, run `pip install accelerate`. You may also need to update your version of Accelerate: `pip install accelerate --upgrade`.
2. Run `accelerate config` and fill in the questionnaire. Below are example accelerate configs: a.
DDP Multi-node Multi-GPU config: ```yaml compute_environment: LOCAL_MACHINE distributed_type: MULTI_GPU downcast_bf16: 'no' gpu_ids: all machine_rank: 0 #change rank as per the node main_process_ip: 192.168.20.1 main_process_port: 9898 main_training_function: main mixed_precision: fp16 num_machines: 2 num_processes: 8 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false b. FSDP config: ```yaml compute_environment: LOCAL_MACHINE distributed_type: FSDP downcast_bf16: 'no' fsdp_config: fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP fsdp_backward_prefetch_policy: BACKWARD_PRE fsdp_forward_prefetch: true fsdp_offload_params: false fsdp_sharding_strategy: 1 fsdp_state_dict_type: FULL_STATE_DICT fsdp_sync_module_states: true fsdp_transformer_layer_cls_to_wrap: BertLayer fsdp_use_orig_params: true machine_rank: 0 main_training_function: main mixed_precision: bf16 num_machines: 1 num_processes: 2 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false c. DeepSpeed config pointing to a file: ```yaml compute_environment: LOCAL_MACHINE deepspeed_config: deepspeed_config_file: /home/user/configs/ds_zero3_config.json zero3_init_flag: true distributed_type: DEEPSPEED downcast_bf16: 'no' machine_rank: 0 main_training_function: main num_machines: 1 num_processes: 4 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false d. DeepSpeed config using accelerate plugin: ```yaml compute_environment: LOCAL_MACHINE deepspeed_config: gradient_accumulation_steps: 1 gradient_clipping: 0.7 offload_optimizer_device: cpu offload_param_device: cpu zero3_init_flag: true zero_stage: 2 distributed_type: DEEPSPEED downcast_bf16: 'no' machine_rank: 0 main_training_function: main mixed_precision: bf16 num_machines: 1 num_processes: 4 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false 3. Run the Trainer script with args other than the ones handled above by accelerate config or launcher args. Below is an example to run `run_glue.py` using `accelerate launcher` with FSDP config from above. ```bash cd transformers accelerate launch \ ./examples/pytorch/text-classification/run_glue.py \ --model_name_or_path bert-base-cased \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 16 \ --learning_rate 5e-5 \ --num_train_epochs 3 \ --output_dir /tmp/$TASK_NAME/ \ --overwrite_output_dir 4. You can also directly use the cmd args for `accelerate launch`. Above example would map to: ```bash cd transformers accelerate launch --num_processes=2 \ --use_fsdp \ --mixed_precision=bf16 \ --fsdp_auto_wrap_policy=TRANSFORMER_BASED_WRAP \ --fsdp_transformer_layer_cls_to_wrap=""BertLayer"" \ --fsdp_sharding_strategy=1 \ --fsdp_state_dict_type=FULL_STATE_DICT \ ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 16 \ --learning_rate 5e-5 \ --num_train_epochs 3 \ --output_dir /tmp/$TASK_NAME/ \ --overwrite_output_dir For more information, please refer the 🤗 Accelerate CLI guide: [Launching your 🤗 Accelerate scripts](https://huggingface.co/docs/accelerate/basic_tutorials/launch). 
Sections that were moved: [ DeepSpeed | Installation | Deployment with multiple GPUs | Deployment with one GPU | Deployment in Notebooks | Configuration | Passing Configuration | Shared Configuration | ZeRO | ZeRO-2 Config | ZeRO-3 Config | NVMe Support | ZeRO-2 vs ZeRO-3 Performance | ZeRO-2 Example | ZeRO-3 Example | Optimizer | Scheduler | fp32 Precision | Automatic Mixed Precision | Batch Size | Gradient Accumulation | Gradient Clipping | Getting The Model Weights Out ] ## Boost your fine-tuning performances using NEFTune NEFTune is a technique to boost the performance of chat models and was introduced by the paper “NEFTune: Noisy Embeddings Improve Instruction Finetuning” from Jain et al. it consists of adding noise to the embedding vectors during training. According to the abstract of the paper: > Standard finetuning of LLaMA-2-7B using Alpaca achieves 29.79% on AlpacaEval, which rises to 64.69% using noisy embeddings. NEFTune also improves over strong baselines on modern instruction datasets. Models trained with Evol-Instruct see a 10% improvement, with ShareGPT an 8% improvement, and with OpenPlatypus an 8% improvement. Even powerful models further refined with RLHF such as LLaMA-2-Chat benefit from additional training with NEFTune. To use it in `Trainer` simply pass `neftune_noise_alpha` when creating your `TrainingArguments` instance. Note that to avoid any surprising behaviour, NEFTune is disabled after training to retrieve back the original behaviour of the embedding layer. thon from transformers import Trainer, TrainingArguments args = TrainingArguments(, neftune_noise_alpha=0.1) trainer = Trainer(, args=args) trainer.train() " main_classes/onnx.md," # Exporting 🤗 Transformers models to ONNX 🤗 Transformers provides a `transformers.onnx` package that enables you to convert model checkpoints to an ONNX graph by leveraging configuration objects. See the [guide](../serialization) on exporting 🤗 Transformers models for more details. ## ONNX Configurations We provide three abstract classes that you should inherit from, depending on the type of model architecture you wish to export: * Encoder-based models inherit from [`~onnx.config.OnnxConfig`] * Decoder-based models inherit from [`~onnx.config.OnnxConfigWithPast`] * Encoder-decoder models inherit from [`~onnx.config.OnnxSeq2SeqConfigWithPast`] ### OnnxConfig [[autodoc]] onnx.config.OnnxConfig ### OnnxConfigWithPast [[autodoc]] onnx.config.OnnxConfigWithPast ### OnnxSeq2SeqConfigWithPast [[autodoc]] onnx.config.OnnxSeq2SeqConfigWithPast ## ONNX Features Each ONNX configuration is associated with a set of _features_ that enable you to export models for different types of topologies or tasks. 
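For instance, a minimal sketch (using the `FeaturesManager` documented below; the model type is illustrative) lists which export features are registered for a given architecture:

```python
from transformers.onnx.features import FeaturesManager

# Each feature maps to an OnnxConfig constructor for that model type.
supported_features = FeaturesManager.get_supported_features_for_model_type("bert")
print(sorted(supported_features))  # e.g. "default", "sequence-classification", ...
```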
### FeaturesManager [[autodoc]] onnx.features.FeaturesManager " main_classes/optimizer_schedules.md," # Optimization The `.optimization` module provides: - an optimizer with weight decay fixed that can be used to fine-tuned models, and - several schedules in the form of schedule objects that inherit from `_LRSchedule`: - a gradient accumulation class to accumulate the gradients of multiple batches ## AdamW (PyTorch) [[autodoc]] AdamW ## AdaFactor (PyTorch) [[autodoc]] Adafactor ## AdamWeightDecay (TensorFlow) [[autodoc]] AdamWeightDecay [[autodoc]] create_optimizer ## Schedules ### Learning Rate Schedules (Pytorch) [[autodoc]] SchedulerType [[autodoc]] get_scheduler [[autodoc]] get_constant_schedule [[autodoc]] get_constant_schedule_with_warmup [[autodoc]] get_cosine_schedule_with_warmup [[autodoc]] get_cosine_with_hard_restarts_schedule_with_warmup [[autodoc]] get_linear_schedule_with_warmup [[autodoc]] get_polynomial_decay_schedule_with_warmup [[autodoc]] get_inverse_sqrt_schedule ### Warmup (TensorFlow) [[autodoc]] WarmUp ## Gradient Strategies ### GradientAccumulator (TensorFlow) [[autodoc]] GradientAccumulator " main_classes/feature_extractor.md," # Feature Extractor A feature extractor is in charge of preparing input features for audio or vision models. This includes feature extraction from sequences, e.g., pre-processing audio files to generate Log-Mel Spectrogram features, feature extraction from images, e.g., cropping image files, but also padding, normalization, and conversion to NumPy, PyTorch, and TensorFlow tensors. ## FeatureExtractionMixin [[autodoc]] feature_extraction_utils.FeatureExtractionMixin - from_pretrained - save_pretrained ## SequenceFeatureExtractor [[autodoc]] SequenceFeatureExtractor - pad ## BatchFeature [[autodoc]] BatchFeature ## ImageFeatureExtractionMixin [[autodoc]] image_utils.ImageFeatureExtractionMixin " main_classes/text_generation.md," # Generation Each framework has a generate method for text generation implemented in their respective `GenerationMixin` class: - PyTorch [`~generation.GenerationMixin.generate`] is implemented in [`~generation.GenerationMixin`]. - TensorFlow [`~generation.TFGenerationMixin.generate`] is implemented in [`~generation.TFGenerationMixin`]. - Flax/JAX [`~generation.FlaxGenerationMixin.generate`] is implemented in [`~generation.FlaxGenerationMixin`]. Regardless of your framework of choice, you can parameterize the generate method with a [`~generation.GenerationConfig`] class instance. Please refer to this class for the complete list of generation parameters, which control the behavior of the generation method. To learn how to inspect a model's generation configuration, what are the defaults, how to change the parameters ad hoc, and how to create and save a customized generation configuration, refer to the [text generation strategies guide](../generation_strategies). The guide also explains how to use related features, like token streaming. 
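As a minimal sketch (the model choice and parameters are illustrative), a `GenerationConfig` can be built once and passed to `generate`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Bundle the generation parameters instead of passing them ad hoc.
generation_config = GenerationConfig(max_new_tokens=30, do_sample=True, top_k=50)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```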
## GenerationConfig [[autodoc]] generation.GenerationConfig - from_pretrained - from_model_config - save_pretrained ## GenerationMixin [[autodoc]] generation.GenerationMixin - generate - compute_transition_scores - greedy_search - sample - beam_search - beam_sample - contrastive_search - group_beam_search - constrained_beam_search ## TFGenerationMixin [[autodoc]] generation.TFGenerationMixin - generate - compute_transition_scores ## FlaxGenerationMixin [[autodoc]] generation.FlaxGenerationMixin - generate " main_classes/configuration.md," # Configuration The base class [`PretrainedConfig`] implements the common methods for loading/saving a configuration either from a local file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS S3 repository). Each derived config class implements model specific attributes. Common attributes present in all config classes are: `hidden_size`, `num_attention_heads`, and `num_hidden_layers`. Text models further implement: `vocab_size`. ## PretrainedConfig [[autodoc]] PretrainedConfig - push_to_hub - all " main_classes/callback.md," # Callbacks Callbacks are objects that can customize the behavior of the training loop in the PyTorch [`Trainer`] (this feature is not yet implemented in TensorFlow) that can inspect the training loop state (for progress reporting, logging on TensorBoard or other ML platforms) and take decisions (like early stopping). Callbacks are ""read only"" pieces of code, apart from the [`TrainerControl`] object they return, they cannot change anything in the training loop. For customizations that require changes in the training loop, you should subclass [`Trainer`] and override the methods you need (see [trainer](trainer) for examples). By default, `TrainingArguments.report_to` is set to `""all""`, so a [`Trainer`] will use the following callbacks. - [`DefaultFlowCallback`] which handles the default behavior for logging, saving and evaluation. - [`PrinterCallback`] or [`ProgressCallback`] to display progress and print the logs (the first one is used if you deactivate tqdm through the [`TrainingArguments`], otherwise it's the second one). - [`~integrations.TensorBoardCallback`] if tensorboard is accessible (either through PyTorch >= 1.4 or tensorboardX). - [`~integrations.WandbCallback`] if [wandb](https://www.wandb.com/) is installed. - [`~integrations.CometCallback`] if [comet_ml](https://www.comet.ml/site/) is installed. - [`~integrations.MLflowCallback`] if [mlflow](https://www.mlflow.org/) is installed. - [`~integrations.NeptuneCallback`] if [neptune](https://neptune.ai/) is installed. - [`~integrations.AzureMLCallback`] if [azureml-sdk](https://pypi.org/project/azureml-sdk/) is installed. - [`~integrations.CodeCarbonCallback`] if [codecarbon](https://pypi.org/project/codecarbon/) is installed. - [`~integrations.ClearMLCallback`] if [clearml](https://github.com/allegroai/clearml) is installed. - [`~integrations.DagsHubCallback`] if [dagshub](https://dagshub.com/) is installed. - [`~integrations.FlyteCallback`] if [flyte](https://flyte.org/) is installed. - [`~integrations.DVCLiveCallback`] if [dvclive](https://dvc.org/doc/dvclive) is installed. If a package is installed but you don't wish to use the accompanying integration, you can change `TrainingArguments.report_to` to a list of just those integrations you want to use (e.g. `[""azure_ml"", ""wandb""]`). The main class that implements callbacks is [`TrainerCallback`]. 
It gets the [`TrainingArguments`] used to instantiate the [`Trainer`], can access that Trainer's internal state via [`TrainerState`], and can take some actions on the training loop via [`TrainerControl`]. ## Available Callbacks Here is the list of the available [`TrainerCallback`] in the library: [[autodoc]] integrations.CometCallback - setup [[autodoc]] DefaultFlowCallback [[autodoc]] PrinterCallback [[autodoc]] ProgressCallback [[autodoc]] EarlyStoppingCallback [[autodoc]] integrations.TensorBoardCallback [[autodoc]] integrations.WandbCallback - setup [[autodoc]] integrations.MLflowCallback - setup [[autodoc]] integrations.AzureMLCallback [[autodoc]] integrations.CodeCarbonCallback [[autodoc]] integrations.NeptuneCallback [[autodoc]] integrations.ClearMLCallback [[autodoc]] integrations.DagsHubCallback [[autodoc]] integrations.FlyteCallback [[autodoc]] integrations.DVCLiveCallback - setup ## TrainerCallback [[autodoc]] TrainerCallback Here is an example of how to register a custom callback with the PyTorch [`Trainer`]: thon class MyCallback(TrainerCallback): ""A callback that prints a message at the beginning of training"" def on_train_begin(self, args, state, control, **kwargs): print(""Starting training"") trainer = Trainer( model, args, train_dataset=train_dataset, eval_dataset=eval_dataset, callbacks=[MyCallback], # We can either pass the callback class this way or an instance of it (MyCallback()) ) Another way to register a callback is to call `trainer.add_callback()` as follows: thon trainer = Trainer() trainer.add_callback(MyCallback) # Alternatively, we can pass an instance of the callback class trainer.add_callback(MyCallback()) ## TrainerState [[autodoc]] TrainerState ## TrainerControl [[autodoc]] TrainerControl " main_classes/quantization.md," # Quantize 🤗 Transformers models ## AWQ integration AWQ method has been introduced in the [*AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration* paper](https://arxiv.org/abs/2306.00978). With AWQ you can run models in 4-bit precision, while preserving its original quality (i.e. no performance degradation) with a superior throughput that other quantization methods presented below - reaching similar throughput as pure `float16` inference. We now support inference with any AWQ model, meaning anyone can load and use AWQ weights that are pushed on the Hub or saved locally. Note that using AWQ requires to have access to a NVIDIA GPU. CPU inference is not supported yet. ### Quantizing a model We advise users to look at different existing tools in the ecosystem to quantize their models with AWQ algorithm, such as: - [`llm-awq`](https://github.com/mit-han-lab/llm-awq) from MIT Han Lab - [`autoawq`](https://github.com/casper-hansen/AutoAWQ) from [`casper-hansen`](https://github.com/casper-hansen) - Intel neural compressor from Intel - through [`optimum-intel`](https://huggingface.co/docs/optimum/main/en/intel/optimization_inc) Many other tools might exist in the ecosystem, please feel free to open a PR to add them to the list. Currently the integration with 🤗 Transformers is only available for models that have been quantized using `autoawq` library and `llm-awq`. 
Most of the models quantized with `auto-awq` can be found under [`TheBloke`](https://huggingface.co/TheBloke) namespace of 🤗 Hub, and to quantize models with `llm-awq` please refer to the [`convert_to_hf.py`](https://github.com/mit-han-lab/llm-awq/blob/main/examples/convert_to_hf.py) script in the examples folder of [`llm-awq`](https://github.com/mit-han-lab/llm-awq/). ### Load a quantized model You can load a quantized model from the Hub using the `from_pretrained` method. Make sure that the pushed weights are quantized, by checking that the attribute `quantization_config` is present in the model's configuration file (`configuration.json`). You can confirm that the model is quantized in the AWQ format by checking the field `quantization_config.quant_method` which should be set to `""awq""`. Note that loading the model will set other weights in `float16` by default for performance reasons. If you want to change that behavior, you can pass `torch_dtype` argument to `torch.float32` or `torch.bfloat16`. You can find in the sections below some example snippets and notebook. ## Example usage First, you need to install [`autoawq`](https://github.com/casper-hansen/AutoAWQ) library ```bash pip install autoawq thon from transformers import AutoModelForCausalLM, AutoTokenizer model_id = ""TheBloke/zephyr-7B-alpha-AWQ"" model = AutoModelForCausalLM.from_pretrained(model_id, device_map=""cuda:0"") In case you first load your model on CPU, make sure to move it to your GPU device before using thon from transformers import AutoModelForCausalLM, AutoTokenizer model_id = ""TheBloke/zephyr-7B-alpha-AWQ"" model = AutoModelForCausalLM.from_pretrained(model_id).to(""cuda:0"") ### Combining AWQ and Flash Attention You can combine AWQ quantization with Flash Attention to get a model that is both quantized and faster. Simply load the model using `from_pretrained` and pass `use_flash_attention_2=True` argument. thon from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained(""TheBloke/zephyr-7B-alpha-AWQ"", use_flash_attention_2=True, device_map=""cuda:0"") ### Benchmarks We performed some speed, throughput and latency benchmarks using [`optimum-benchmark`](https://github.com/huggingface/optimum-benchmark) library. Note at that time of writing this documentation section, the available quantization methods were: `awq`, `gptq` and `bitsandbytes`. The benchmark was run on a NVIDIA-A100 instance and the model used was [`TheBloke/Mistral-7B-v0.1-AWQ`](https://huggingface.co/TheBloke/Mistral-7B-v0.1-AWQ) for the AWQ model, [`TheBloke/Mistral-7B-v0.1-GPTQ`](https://huggingface.co/TheBloke/Mistral-7B-v0.1-GPTQ) for the GPTQ model. We also benchmarked it against `bitsandbytes` quantization methods and native `float16` model. Some results are shown below: You can find the full results together with packages versions in [this link](https://github.com/huggingface/optimum-benchmark/tree/main/examples/running-mistrals). From the results it appears that AWQ quantization method is the fastest quantization method for inference, text generation and among the lowest peak memory for text generation. However, AWQ seems to have the largest forward latency per batch size. ### Google colab demo Check out how to use this integration throughout this [Google Colab demo](https://colab.research.google.com/drive/1HzZH89yAXJaZgwJDhQj9LqSBux932BvY)! 
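As a quick end-to-end usage sketch (reusing the checkpoint from the snippets above; a CUDA GPU and `autoawq` are assumed), generation works the same as with any causal language model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/zephyr-7B-alpha-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda:0")

inputs = tokenizer("What is AWQ quantization?", return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```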
### AwqConfig [[autodoc]] AwqConfig ## `AutoGPTQ` Integration 🤗 Transformers has integrated `optimum` API to perform GPTQ quantization on language models. You can load and quantize your model in 8, 4, 3 or even 2 bits without a big drop of performance and faster inference speed! This is supported by most GPU hardwares. To learn more about the quantization model, check out: - the [GPTQ](https://arxiv.org/pdf/2210.17323.pdf) paper - the `optimum` [guide](https://huggingface.co/docs/optimum/llm_quantization/usage_guides/quantization) on GPTQ quantization - the [`AutoGPTQ`](https://github.com/PanQiWei/AutoGPTQ) library used as the backend ### Requirements You need to have the following requirements installed to run the code below: - Install latest `AutoGPTQ` library `pip install auto-gptq` - Install latest `optimum` from source `pip install git+https://github.com/huggingface/optimum.git` - Install latest `transformers` from source `pip install git+https://github.com/huggingface/transformers.git` - Install latest `accelerate` library `pip install --upgrade accelerate` Note that GPTQ integration supports for now only text models and you may encounter unexpected behaviour for vision, speech or multi-modal models. ### Load and quantize a model GPTQ is a quantization method that requires weights calibration before using the quantized models. If you want to quantize transformers model from scratch, it might take some time before producing the quantized model (~5 min on a Google colab for `facebook/opt-350m` model). Hence, there are two different scenarios where you want to use GPTQ-quantized models. The first use case would be to load models that has been already quantized by other users that are available on the Hub, the second use case would be to quantize your model from scratch and save it or push it on the Hub so that other users can also use it. #### GPTQ Configuration In order to load and quantize a model, you need to create a [`GPTQConfig`]. You need to pass the number of `bits`, a `dataset` in order to calibrate the quantization and the `tokenizer` of the model in order prepare the dataset. thon model_id = ""facebook/opt-125m"" tokenizer = AutoTokenizer.from_pretrained(model_id) gptq_config = GPTQConfig(bits=4, dataset = ""c4"", tokenizer=tokenizer) Note that you can pass your own dataset as a list of string. However, it is highly recommended to use the dataset from the GPTQ paper. thon dataset = [""auto-gptq is an easy-to-use model quantization library with user-friendly apis, based on GPTQ algorithm.""] quantization = GPTQConfig(bits=4, dataset = dataset, tokenizer=tokenizer) #### Quantization You can quantize a model by using `from_pretrained` and setting the `quantization_config`. thon from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=gptq_config) Note that you will need a GPU to quantize a model. We will put the model in the cpu and move the modules back and forth to the gpu in order to quantize them. If you want to maximize your gpus usage while using cpu offload, you can set `device_map = ""auto""`. thon from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained(model_id, device_map=""auto"", quantization_config=gptq_config) Note that disk offload is not supported. Furthermore, if you are out of memory because of the dataset, you may have to pass `max_memory` in `from_pretained`. 
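For illustration, a sketch of what that could look like is below; the memory budgets are made up, so adjust them to your hardware:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)

# Cap per-device memory so the calibration data and weights fit.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    max_memory={0: "20GiB", "cpu": "60GiB"},
    quantization_config=gptq_config,
)
```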
Checkout this [guide](https://huggingface.co/docs/accelerate/usage_guides/big_modeling#designing-a-device-map) to learn more about `device_map` and `max_memory`. GPTQ quantization only works for text model for now. Futhermore, the quantization process can a lot of time depending on one's hardware (175B model = 4 gpu hours using NVIDIA A100). Please check on the hub if there is not a GPTQ quantized version of the model. If not, you can submit a demand on github. ### Push quantized model to 🤗 Hub You can push the quantized model like any 🤗 model to Hub with `push_to_hub`. The quantization config will be saved and pushed along the model. thon quantized_model.push_to_hub(""opt-125m-gptq"") tokenizer.push_to_hub(""opt-125m-gptq"") If you want to save your quantized model on your local machine, you can also do it with `save_pretrained`: thon quantized_model.save_pretrained(""opt-125m-gptq"") tokenizer.save_pretrained(""opt-125m-gptq"") Note that if you have quantized your model with a `device_map`, make sure to move the entire model to one of your gpus or the `cpu` before saving it. thon quantized_model.to(""cpu"") quantized_model.save_pretrained(""opt-125m-gptq"") ### Load a quantized model from the 🤗 Hub You can load a quantized model from the Hub by using `from_pretrained`. Make sure that the pushed weights are quantized, by checking that the attribute `quantization_config` is present in the model configuration object. thon from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained(""{your_username}/opt-125m-gptq"") If you want to load a model faster and without allocating more memory than needed, the `device_map` argument also works with quantized model. Make sure that you have `accelerate` library installed. thon from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained(""{your_username}/opt-125m-gptq"", device_map=""auto"") ### Exllama kernels for faster inference For 4-bit model, you can use the exllama kernels in order to a faster inference speed. It is activated by default. You can change that behavior by passing `use_exllama` in [`GPTQConfig`]. This will overwrite the quantization config stored in the config. Note that you will only be able to overwrite the attributes related to the kernels. Furthermore, you need to have the entire model on gpus if you want to use exllama kernels. Also, you can perform CPU inference using Auto-GPTQ for Auto-GPTQ version > 0.4.2 by passing `device_map` = ""cpu"". For CPU inference, you have to pass `use_exllama = False` in the `GPTQConfig.` import torch gptq_config = GPTQConfig(bits=4) model = AutoModelForCausalLM.from_pretrained(""{your_username}/opt-125m-gptq"", device_map=""auto"", quantization_config=gptq_config) With the release of the exllamav2 kernels, you can get faster inference speed compared to the exllama kernels. You just need to pass `exllama_config={""version"": 2}` in [`GPTQConfig`]: import torch gptq_config = GPTQConfig(bits=4, exllama_config={""version"":2}) model = AutoModelForCausalLM.from_pretrained(""{your_username}/opt-125m-gptq"", device_map=""auto"", quantization_config = gptq_config) Note that only 4-bit models are supported for now. Furthermore, it is recommended to deactivate the exllama kernels if you are finetuning a quantized model with peft. 
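For example, a minimal sketch of loading a quantized checkpoint with the exllama kernels deactivated before fine-tuning (the repo name is a placeholder):

```python
from transformers import AutoModelForCausalLM, GPTQConfig

# Disable the exllama kernels, e.g. before fine-tuning with peft.
gptq_config = GPTQConfig(bits=4, use_exllama=False)
model = AutoModelForCausalLM.from_pretrained(
    "{your_username}/opt-125m-gptq",
    device_map="auto",
    quantization_config=gptq_config,
)
```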
You can find the benchmark of these kernels [here](https://github.com/huggingface/optimum/tree/main/tests/benchmark#gptq-benchmark) #### Fine-tune a quantized model With the official support of adapters in the Hugging Face ecosystem, you can fine-tune models that have been quantized with GPTQ. Please have a look at [`peft`](https://github.com/huggingface/peft) library for more details. ### Example demo Check out the Google Colab [notebook](https://colab.research.google.com/drive/1_TIrmuKOFhuRRiTWN94iLKUFu6ZX4ceb?usp=sharing) to learn how to quantize your model with GPTQ and how finetune the quantized model with peft. ### GPTQConfig [[autodoc]] GPTQConfig ## `bitsandbytes` Integration 🤗 Transformers is closely integrated with most used modules on `bitsandbytes`. You can load your model in 8-bit precision with few lines of code. This is supported by most of the GPU hardwares since the `0.37.0` release of `bitsandbytes`. Learn more about the quantization method in the [LLM.int8()](https://arxiv.org/abs/2208.07339) paper, or the [blogpost](https://huggingface.co/blog/hf-bitsandbytes-integration) about the collaboration. Since its `0.39.0` release, you can load any model that supports `device_map` using 4-bit quantization, leveraging FP4 data type. If you want to quantize your own pytorch model, check out this [documentation](https://huggingface.co/docs/accelerate/main/en/usage_guides/quantization) from 🤗 Accelerate library. Here are the things you can do using `bitsandbytes` integration ### General usage You can quantize a model by using the `load_in_8bit` or `load_in_4bit` argument when calling the [`~PreTrainedModel.from_pretrained`] method as long as your model supports loading with 🤗 Accelerate and contains `torch.nn.Linear` layers. This should work for any modality as well. thon from transformers import AutoModelForCausalLM model_8bit = AutoModelForCausalLM.from_pretrained(""facebook/opt-350m"", load_in_8bit=True) model_4bit = AutoModelForCausalLM.from_pretrained(""facebook/opt-350m"", load_in_4bit=True) By default all other modules (e.g. `torch.nn.LayerNorm`) will be converted in `torch.float16`, but if you want to change their `dtype` you can overwrite the `torch_dtype` argument: thon >>> import torch >>> from transformers import AutoModelForCausalLM >>> model_8bit = AutoModelForCausalLM.from_pretrained(""facebook/opt-350m"", load_in_8bit=True, torch_dtype=torch.float32) >>> model_8bit.model.decoder.layers[-1].final_layer_norm.weight.dtype torch.float32 ### FP4 quantization #### Requirements Make sure that you have installed the requirements below before running any of the code snippets below. - Latest `bitsandbytes` library `pip install bitsandbytes>=0.39.0` - Install latest `accelerate` `pip install --upgrade accelerate` - Install latest `transformers` `pip install --upgrade transformers` #### Tips and best practices - **Advanced usage:** Refer to [this Google Colab notebook](https://colab.research.google.com/drive/1ge2F1QSK8Q7h0hn3YKuBCOAS0bK8E0wf) for advanced usage of 4-bit quantization with all the possible options. - **Faster inference with `batch_size=1` :** Since the `0.40.0` release of bitsandbytes, for `batch_size=1` you can benefit from fast inference. Check out [these release notes](https://github.com/TimDettmers/bitsandbytes/releases/tag/0.40.0) and make sure to have a version that is greater than `0.40.0` to benefit from this feature out of the box. 
- **Training:** According to [QLoRA paper](https://arxiv.org/abs/2305.14314), for training 4-bit base models (e.g. using LoRA adapters) one should use `bnb_4bit_quant_type='nf4'`. - **Inference:** For inference, `bnb_4bit_quant_type` does not have a huge impact on the performance. However for consistency with the model's weights, make sure you use the same `bnb_4bit_compute_dtype` and `torch_dtype` arguments. #### Load a large model in 4bit By using `load_in_4bit=True` when calling the `.from_pretrained` method, you can divide your memory use by 4 (roughly). thon # pip install transformers accelerate bitsandbytes from transformers import AutoModelForCausalLM, AutoTokenizer model_id = ""bigscience/bloom-1b7"" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, device_map=""auto"", load_in_4bit=True) Note that once a model has been loaded in 4-bit it is currently not possible to push the quantized weights on the Hub. Note also that you cannot train 4-bit weights as this is not supported yet. However you can use 4-bit models to train extra parameters, this will be covered in the next section. ### Load a large model in 8bit You can load a model by roughly halving the memory requirements by using `load_in_8bit=True` argument when calling `.from_pretrained` method thon # pip install transformers accelerate bitsandbytes from transformers import AutoModelForCausalLM, AutoTokenizer model_id = ""bigscience/bloom-1b7"" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, device_map=""auto"", load_in_8bit=True) Then, use your model as you would usually use a [`PreTrainedModel`]. You can check the memory footprint of your model with `get_memory_footprint` method. thon print(model.get_memory_footprint()) With this integration we were able to load large models on smaller devices and run them without any issue. Note that once a model has been loaded in 8-bit it is currently not possible to push the quantized weights on the Hub except if you use the latest `transformers` and `bitsandbytes`. Note also that you cannot train 8-bit weights as this is not supported yet. However you can use 8-bit models to train extra parameters, this will be covered in the next section. Note also that `device_map` is optional but setting `device_map = 'auto'` is prefered for inference as it will dispatch efficiently the model on the available ressources. #### Advanced use cases Here we will cover some advanced use cases you can perform with FP4 quantization ##### Change the compute dtype The compute dtype is used to change the dtype that will be used during computation. For example, hidden states could be in `float32` but computation can be set to bf16 for speedups. By default, the compute dtype is set to `float32`. thon import torch from transformers import BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16) ##### Using NF4 (Normal Float 4) data type You can also use the NF4 data type, which is a new 4bit datatype adapted for weights that have been initialized using a normal distribution. For that run: thon from transformers import BitsAndBytesConfig nf4_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type=""nf4"", ) model_nf4 = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=nf4_config) ##### Use nested quantization for more memory efficient inference We also advise users to use the nested quantization technique. 
This saves more memory at no additional performance - from our empirical observations, this enables fine-tuning llama-13b model on an NVIDIA-T4 16GB with a sequence length of 1024, batch size of 1 and gradient accumulation steps of 4. thon from transformers import BitsAndBytesConfig double_quant_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, ) model_double_quant = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=double_quant_config) ### Push quantized models on the 🤗 Hub You can push a quantized model on the Hub by naively using `push_to_hub` method. This will first push the quantization configuration file, then push the quantized model weights. Make sure to use `bitsandbytes>0.37.2` (at this time of writing, we tested it on `bitsandbytes==0.38.0.post1`) to be able to use this feature. thon from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained(""bigscience/bloom-560m"", device_map=""auto"", load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained(""bigscience/bloom-560m"") model.push_to_hub(""bloom-560m-8bit"") Pushing 8bit models on the Hub is strongely encouraged for large models. This will allow the community to benefit from the memory footprint reduction and loading for example large models on a Google Colab. ### Load a quantized model from the 🤗 Hub You can load a quantized model from the Hub by using `from_pretrained` method. Make sure that the pushed weights are quantized, by checking that the attribute `quantization_config` is present in the model configuration object. thon from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained(""{your_username}/bloom-560m-8bit"", device_map=""auto"") Note that in this case, you don't need to specify the arguments `load_in_8bit=True`, but you need to make sure that `bitsandbytes` and `accelerate` are installed. Note also that `device_map` is optional but setting `device_map = 'auto'` is prefered for inference as it will dispatch efficiently the model on the available ressources. ### Advanced use cases This section is intended to advanced users, that want to explore what it is possible to do beyond loading and running 8-bit models. #### Offload between `cpu` and `gpu` One of the advanced use case of this is being able to load a model and dispatch the weights between `CPU` and `GPU`. Note that the weights that will be dispatched on CPU **will not** be converted in 8-bit, thus kept in `float32`. This feature is intended for users that want to fit a very large model and dispatch the model between GPU and CPU. First, load a [`BitsAndBytesConfig`] from `transformers` and set the attribute `llm_int8_enable_fp32_cpu_offload` to `True`: thon from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True) Let's say you want to load `bigscience/bloom-1b7` model, and you have just enough GPU RAM to fit the entire model except the `lm_head`. Therefore write a custom device_map as follows: thon device_map = { ""transformer.word_embeddings"": 0, ""transformer.word_embeddings_layernorm"": 0, ""lm_head"": ""cpu"", ""transformer.h"": 0, ""transformer.ln_f"": 0, } And load your model as follows: thon model_8bit = AutoModelForCausalLM.from_pretrained( ""bigscience/bloom-1b7"", device_map=device_map, quantization_config=quantization_config, ) And that's it! Enjoy your model! 
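As a quick usage check, a minimal sketch continuing from the snippet above runs generation with the partially offloaded 8-bit model:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b7")
inputs = tokenizer("Hello, my name is", return_tensors="pt")

# `model_8bit` comes from the loading snippet above.
outputs = model_8bit.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```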
#### Play with `llm_int8_threshold` You can play with the `llm_int8_threshold` argument to change the threshold of the outliers. An ""outlier"" is a hidden state value that is greater than a certain threshold. This corresponds to the outlier threshold for outlier detection as described in `LLM.int8()` paper. Any hidden states value that is above this threshold will be considered an outlier and the operation on those values will be done in fp16. Values are usually normally distributed, that is, most values are in the range [-3.5, 3.5], but there are some exceptional systematic outliers that are very differently distributed for large models. These outliers are often in the interval [-60, -6] or [6, 60]. Int8 quantization works well for values of magnitude ~5, but beyond that, there is a significant performance penalty. A good default threshold is 6, but a lower threshold might be needed for more unstable models (small models, fine-tuning). This argument can impact the inference speed of the model. We suggest to play with this parameter to find which one is the best for your use case. thon from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig model_id = ""bigscience/bloom-1b7"" quantization_config = BitsAndBytesConfig( llm_int8_threshold=10, ) model_8bit = AutoModelForCausalLM.from_pretrained( model_id, device_map=device_map, quantization_config=quantization_config, ) tokenizer = AutoTokenizer.from_pretrained(model_id) #### Skip the conversion of some modules Some models has several modules that needs to be not converted in 8-bit to ensure stability. For example Jukebox model has several `lm_head` modules that should be skipped. Play with `llm_int8_skip_modules` thon from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig model_id = ""bigscience/bloom-1b7"" quantization_config = BitsAndBytesConfig( llm_int8_skip_modules=[""lm_head""], ) model_8bit = AutoModelForCausalLM.from_pretrained( model_id, device_map=device_map, quantization_config=quantization_config, ) tokenizer = AutoTokenizer.from_pretrained(model_id) #### Fine-tune a model that has been loaded in 8-bit With the official support of adapters in the Hugging Face ecosystem, you can fine-tune models that have been loaded in 8-bit. This enables fine-tuning large models such as `flan-t5-large` or `facebook/opt-6.7b` in a single google Colab. Please have a look at [`peft`](https://github.com/huggingface/peft) library for more details. Note that you don't need to pass `device_map` when loading the model for training. It will automatically load your model on your GPU. You can also set the device map to a specific device if needed (e.g. `cuda:0`, `0`, `torch.device('cuda:0')`). Please note that `device_map=auto` should be used for inference only. ### BitsAndBytesConfig [[autodoc]] BitsAndBytesConfig ## Quantization with 🤗 `optimum` Please have a look at [Optimum documentation](https://huggingface.co/docs/optimum/index) to learn more about quantization methods that are supported by `optimum` and see if these are applicable for your use case. " main_classes/pipelines.md," # Pipelines The pipelines are a great and easy way to use models for inference. These pipelines are objects that abstract most of the complex code from the library, offering a simple API dedicated to several tasks, including Named Entity Recognition, Masked Language Modeling, Sentiment Analysis, Feature Extraction and Question Answering. See the [task summary](../task_summary) for examples of use. 
There are two categories of pipeline abstractions to be aware about: - The [`pipeline`] which is the most powerful object encapsulating all other pipelines. - Task-specific pipelines are available for [audio](#audio), [computer vision](#computer-vision), [natural language processing](#natural-language-processing), and [multimodal](#multimodal) tasks. ## The pipeline abstraction The *pipeline* abstraction is a wrapper around all the other available pipelines. It is instantiated as any other pipeline but can provide additional quality of life. Simple call on one item: thon >>> pipe = pipeline(""text-classification"") >>> pipe(""This restaurant is awesome"") [{'label': 'POSITIVE', 'score': 0.9998743534088135}] If you want to use a specific model from the [hub](https://huggingface.co) you can ignore the task if the model on the hub already defines it: thon >>> pipe = pipeline(model=""roberta-large-mnli"") >>> pipe(""This restaurant is awesome"") [{'label': 'NEUTRAL', 'score': 0.7313136458396912}] To call a pipeline on many items, you can call it with a *list*. thon >>> pipe = pipeline(""text-classification"") >>> pipe([""This restaurant is awesome"", ""This restaurant is awful""]) [{'label': 'POSITIVE', 'score': 0.9998743534088135}, {'label': 'NEGATIVE', 'score': 0.9996669292449951}] To iterate over full datasets it is recommended to use a `dataset` directly. This means you don't need to allocate the whole dataset at once, nor do you need to do batching yourself. This should work just as fast as custom loops on GPU. If it doesn't don't hesitate to create an issue. thon import datasets from transformers import pipeline from transformers.pipelines.pt_utils import KeyDataset from tqdm.auto import tqdm pipe = pipeline(""automatic-speech-recognition"", model=""facebook/wav2vec2-base-960h"", device=0) dataset = datasets.load_dataset(""superb"", name=""asr"", split=""test"") # KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item # as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset for out in tqdm(pipe(KeyDataset(dataset, ""file""))): print(out) # {""text"": ""NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND""} # {""text"": .} # . For ease of use, a generator is also possible: thon from transformers import pipeline pipe = pipeline(""text-classification"") def data(): while True: # This could come from a dataset, a database, a queue or HTTP request # in a server # Caveat: because this is iterative, you cannot use `num_workers > 1` variable # to use multiple threads to preprocess data. You can still have 1 thread that # does the preprocessing while the main runs the big inference yield ""This is a test"" for out in pipe(data()): print(out) # {""text"": ""NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND""} # {""text"": .} # . [[autodoc]] pipeline ## Pipeline batching All pipelines can use batching. This will work whenever the pipeline uses its streaming ability (so when passing lists or `Dataset` or `generator`). 
thon from transformers import pipeline from transformers.pipelines.pt_utils import KeyDataset import datasets dataset = datasets.load_dataset(""imdb"", name=""plain_text"", split=""unsupervised"") pipe = pipeline(""text-classification"", device=0) for out in pipe(KeyDataset(dataset, ""text""), batch_size=8, truncation=""only_first""): print(out) # [{'label': 'POSITIVE', 'score': 0.9998743534088135}] # Exactly the same output as before, but the content are passed # as batches to the model However, this is not automatically a win for performance. It can be either a 10x speedup or 5x slowdown depending on hardware, data and the actual model being used. Example where it's mostly a speedup: thon from transformers import pipeline from torch.utils.data import Dataset from tqdm.auto import tqdm pipe = pipeline(""text-classification"", device=0) class MyDataset(Dataset): def __len__(self): return 5000 def __getitem__(self, i): return ""This is a test"" dataset = MyDataset() for batch_size in [1, 8, 64, 256]: print(""-"" * 30) print(f""Streaming batch_size={batch_size}"") for out in tqdm(pipe(dataset, batch_size=batch_size), total=len(dataset)): pass # On GTX 970 ------------------------------ Streaming no batching 100%|██████████████████████████████████████████████████████████████████████| 5000/5000 [00:26<00:00, 187.52it/s] ------------------------------ Streaming batch_size=8 100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:04<00:00, 1205.95it/s] ------------------------------ Streaming batch_size=64 100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:02<00:00, 2478.24it/s] ------------------------------ Streaming batch_size=256 100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:01<00:00, 2554.43it/s] (diminishing returns, saturated the GPU) Example where it's most a slowdown: thon class MyDataset(Dataset): def __len__(self): return 5000 def __getitem__(self, i): if i % 64 == 0: n = 100 else: n = 1 return ""This is a test"" * n This is a occasional very long sentence compared to the other. In that case, the **whole** batch will need to be 400 tokens long, so the whole batch will be [64, 400] instead of [64, 4], leading to the high slowdown. Even worse, on bigger batches, the program simply crashes. ------------------------------ Streaming no batching 100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:05<00:00, 183.69it/s] ------------------------------ Streaming batch_size=8 100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:03<00:00, 265.74it/s] ------------------------------ Streaming batch_size=64 100%|██████████████████████████████████████████████████████████████████████| 1000/1000 [00:26<00:00, 37.80it/s] ------------------------------ Streaming batch_size=256 0%| | 0/1000 [00:00, ?it/s] Traceback (most recent call last): File ""/home/nicolas/src/transformers/test.py"", line 42, in