This repo provides a prebuilt Flash Attention 3 wheel for the Hopper architecture, targeting the current default CUDA build on ZeroGPU Spaces (at the moment, Torch 2.8.0+cu128).
You can add the following line to the `requirements.txt` file of your Space:
flash-attn-3 @ https://huggingface.co/alexnasa/flash-attn-3/resolve/main/128/flash_attn_3-3.0.0b1-cp39-abi3-linux_x86_64.whl
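
Once the wheel is installed, a quick way to confirm it works is to import it and run a single attention call. The sketch below is a minimal smoke test, assuming the wheel exposes the `flash_attn_interface` module with a `flash_attn_func(q, k, v, ...)` entry point, as in the upstream Hopper build of flash-attention; adjust the import if your wheel's API differs. On a ZeroGPU Space, the call must run inside a function decorated with `@spaces.GPU`, since the GPU is only attached on demand.

```python
# Minimal smoke test for the FA3 wheel on a ZeroGPU Space.
# Assumption: the wheel exposes `flash_attn_interface.flash_attn_func`,
# as in the upstream hopper build of flash-attention.
import spaces
import torch
import flash_attn_interface


@spaces.GPU  # ZeroGPU only attaches a GPU inside decorated functions
def attention_smoke_test():
    # bf16 tensors of shape (batch, seqlen, num_heads, head_dim)
    q, k, v = (
        torch.randn(1, 128, 8, 64, dtype=torch.bfloat16, device="cuda")
        for _ in range(3)
    )
    out = flash_attn_interface.flash_attn_func(q, k, v, causal=True)
    # FA3 may return (output, lse); keep only the attention output
    out = out[0] if isinstance(out, tuple) else out
    return out.shape


print(attention_smoke_test())  # expected: torch.Size([1, 128, 8, 64])
```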