transformers, PyTorch, and NVIDIA PyTorch container versions that are needed
tl;dr You need:
- transformers v4.54.0.dev0 (or greater?)
- PyTorch 2.7 (or greater?)
- or, if you're running the NVIDIA PyTorch container (see https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch/tags), tag 25.02 or greater (for PyTorch 2.7)
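A quick runtime sanity check against those floors can be sketched like this. It deliberately avoids importing torch or transformers so it runs anywhere; the version floors ("4.54.0" and "2.7") are the ones from the notes above, and whether later versions also work is untested:

```python
# Quick sanity check for the minimum versions listed above.
# Pass it e.g. transformers.__version__ or torch.__version__ at runtime.

def parse_version(v: str) -> tuple:
    """Parse a version like '4.54.0.dev0' or '2.7.1+cu126' into a
    comparable tuple of its leading numeric components."""
    core = v.split("+")[0]        # drop a local suffix like '+cu126'
    parts = []
    for piece in core.split("."):
        if piece.isdigit():
            parts.append(int(piece))
        else:                      # stop at 'dev0', 'rc1', etc.
            break
    return tuple(parts)

def meets_floor(installed: str, floor: str) -> bool:
    """True if `installed` is at least version `floor`."""
    return parse_version(installed) >= parse_version(floor)

print(meets_floor("4.52.0", "4.54.0"))    # False -> the `lfm2` error below
print(meets_floor("2.7.1+cu126", "2.7"))  # True
```

Tuple comparison makes `2.7.1 >= 2.7` work without pulling in a third-party version parser.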
FYI, you do indeed need to "install Hugging Face transformers from source (v4.54.0.dev0)." On, say, transformers v4.52.0, it will complain hard:
E ValueError: The checkpoint you are trying to load has model type `lfm2` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
E
E You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git`
Then on PyTorch 2.6, it'll complain:
/usr/lib/python3.12/importlib/__init__.py:90: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/ThirdParty/transformers/src/transformers/generation/utils.py:49: in <module>
from ..masking_utils import create_masks_for_generate
/ThirdParty/transformers/src/transformers/masking_utils.py:38: in <module>
from torch._dynamo._trace_wrapped_higher_order_op import TransformGetItemToIndex
E ImportError: cannot import name 'TransformGetItemToIndex' from 'torch._dynamo._trace_wrapped_higher_order_op' (/usr/local/lib/python3.12/dist-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py)
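Whether that symbol exists depends on the installed torch version, so a small guarded probe (my own helper, not part of transformers) can diagnose this without crashing on machines where torch isn't installed at all:

```python
import importlib

def has_symbol(module_name: str, symbol: str) -> bool:
    """Return True if `symbol` can be imported from `module_name`,
    False if the module is missing or lacks the symbol."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(mod, symbol)

# The exact import that failed in masking_utils.py above:
ok = has_symbol("torch._dynamo._trace_wrapped_higher_order_op",
                "TransformGetItemToIndex")
print("torch new enough for this transformers build:", ok)
```

On PyTorch 2.6 this prints `False`, matching the ImportError above; on 2.7 it should print `True`.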
My hope is to normalize giving the full, required versions of dependencies, to help with deploying Hugging Face models locally.
Also, I want to help normalize providing hardware requirements and a list of the hardware a model has actually been run on.
I can definitively say that an NVIDIA GeForce GTX 1050 won't work with this one :) That's made obvious by the CUDA compute capability requirements:
else:
# Note: don't use named arguments in `torch.isin`, see https://github.com/pytorch/pytorch/issues/126045
> return torch.isin(elements, test_elements)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E RuntimeError: CUDA error: no kernel image is available for execution on the device
E CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
E For debugging consider passing CUDA_LAUNCH_BLOCKING=1
E Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
/ThirdParty/transformers/src/transformers/pytorch_utils.py:339: RuntimeError
=============================================================================================== warnings summary ===============================================================================================
tests/integration_tests/Wrappers/Models/LLMs/test_LiquidAI_LFM2-1-2B.py::test_generate_works
/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py:210: UserWarning:
Found GPU0 NVIDIA GeForce GTX 1050 which is of cuda capability 6.1.
PyTorch no longer supports this GPU because it is too old.
The minimum cuda capability supported by this library is 7.5.
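Putting a number on that hardware requirement: the warning says the floor is compute capability 7.5, and the GTX 1050 is 6.1. A minimal preflight sketch, with the floor taken from the warning above (in real code you'd feed it the pair returned by `torch.cuda.get_device_capability(0)`; it's left out here so the sketch runs without a GPU):

```python
# Preflight check against the compute-capability floor from the warning above.
MIN_CAPABILITY = (7, 5)  # minimum supported by recent PyTorch builds, per the warning

def capability_ok(major: int, minor: int) -> bool:
    """True if a GPU of compute capability (major, minor) meets the floor.
    Feed this the tuple from torch.cuda.get_device_capability(0)."""
    return (major, minor) >= MIN_CAPABILITY

print(capability_ok(6, 1))  # False -> GTX 1050, the "no kernel image" error above
print(capability_ok(7, 5))  # True
```

Running a check like this up front turns the opaque "no kernel image is available" CUDA error into an actionable message before any model weights are loaded.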