Transformers documentation

CPU


A modern CPU is capable of efficiently training large models by leveraging the underlying optimizations built into the hardware and training on fp16 or bf16 data types.

This guide focuses on how to train large models on an Intel CPU using mixed precision and the Intel Extension for PyTorch (IPEX) library.

Find your PyTorch version by running the command below.

pip list | grep torch
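You can also check the version from Python; a minimal sketch using PyTorch's standard `__version__` attribute:

```python
import torch

# torch.__version__ reports the installed PyTorch version,
# e.g. "2.3.0+cpu"; match the IPEX wheel to this version.
print(torch.__version__)
```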

Install IPEX with the PyTorch version from above.

pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu

Refer to the IPEX installation guide for more details.

IPEX provides additional performance optimizations for Intel CPUs. These include extra CPU instruction set architecture (ISA) support such as Intel AVX512-VNNI and Intel AMX, both of which are designed to accelerate matrix multiplication. Older AMD and Intel CPUs with only Intel AVX2, however, aren't guaranteed better performance with IPEX.

IPEX also supports Auto Mixed Precision (AMP) training with the fp16 and bf16 data types. Reduced precision speeds up training and lowers memory usage because less computation is required, and the accuracy loss compared to full precision is minimal. 3rd, 4th, and 5th generation Intel Xeon Scalable processors natively support bf16, and the 6th generation processor natively supports fp16 in addition to bf16.

AMP is enabled for the CPU backend in PyTorch.
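Outside of Trainer, AMP on a CPU can be used directly through `torch.autocast`; a minimal sketch with bf16 (the tensor shapes here are arbitrary):

```python
import torch

x = torch.randn(8, 16)
w = torch.randn(16, 4)

# Ops inside the autocast region run in bf16 where supported;
# matmul is one of the ops autocast lowers to bf16 on CPU.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = x @ w

print(y.dtype)  # torch.bfloat16
```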

Trainer supports AMP training on a CPU by adding the --use_cpu, --use_ipex, and --bf16 parameters. The example below demonstrates the run_qa.py script.

python run_qa.py \
 --model_name_or_path google-bert/bert-base-uncased \
 --dataset_name squad \
 --do_train \
 --do_eval \
 --per_device_train_batch_size 12 \
 --learning_rate 3e-5 \
 --num_train_epochs 2 \
 --max_seq_length 384 \
 --doc_stride 128 \
 --output_dir /tmp/debug_squad/ \
 --use_ipex \
 --bf16 \
 --use_cpu

These parameters can also be added to TrainingArguments as shown below.

training_args = TrainingArguments(
    output_dir="./outputs",
    bf16=True,
    use_ipex=True,
    use_cpu=True,
)

Resources

Learn more about training on Intel CPUs in the Accelerating PyTorch Transformers with Intel Sapphire Rapids blog post.
