# llm_topic_modelling / requirements_gpu.txt
pandas==2.3.2
gradio==5.44.1
huggingface_hub[hf_xet]==0.34.4
transformers==4.56.0
spaces==0.40.1
boto3==1.40.22
pyarrow==21.0.0
openpyxl==3.1.5
markdown==3.7
tabulate==0.9.0
lxml==5.3.0
google-genai==1.33.0
azure-ai-inference==1.0.0b9
azure-core==1.35.0
html5lib==1.1
beautifulsoup4==4.12.3
rapidfuzz==3.13.0
python-dotenv==1.1.0
# Torch/Unsloth
# Latest compatible with CUDA 12.4
# Note: pip expects --extra-index-url on its own line in a requirements file
--extra-index-url https://download.pytorch.org/whl/cu124
torch==2.6.0
unsloth[cu124-torch260]==2025.9.4
unsloth_zoo==2025.9.5
# Additional for Windows and CUDA 12.4 with older GPUs (RTX 3x or similar):
# triton-windows<3.3
timm==1.0.19
# Llama CPP Python (prebuilt CUDA 12.4 wheels for Python 3.11)
# For Linux:
# https://github.com/abetlen/llama-cpp-python/releases/download/v0.3.16-cu124/llama_cpp_python-0.3.16-cp311-cp311-linux_x86_64.whl
# For Windows:
https://github.com/seanpedrick-case/llama-cpp-python-whl-builder/releases/download/v0.1.0/llama_cpp_python-0.3.16-cp311-cp311-win_amd64.whl