kernels-community/paged-attention
kernel · License: apache-2.0
5 contributors · History: 16 commits
Latest commit: Build (AArch64) by danieldk (HF Staff), revision 20990f8, 3 months ago
Name              Size       Last commit                    Updated
build/            -          Build (AArch64)                3 months ago
cuda-utils/       -          Port vLLM attention kernels    5 months ago
paged-attention/  -          Port vLLM attention kernels    5 months ago
tests/            -          Rename to paged-attention      5 months ago
torch-ext/        -          Update for build.toml changes  4 months ago
.gitattributes    1.56 kB    Port vLLM attention kernels    5 months ago
README.md         128 Bytes  Update README.md (#1)          5 months ago
build.toml        1.13 kB    Use default CUDA capabilities  3 months ago
flake.lock        3.03 kB    Update flake inputs            3 months ago
flake.nix         332 Bytes  Fix flake input                3 months ago
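
The layout above (build.toml, a torch-ext directory, and prebuilt variants under build/) follows the Hugging Face kernel-builder convention, so the kernel can presumably be fetched at runtime with the kernels Python package. The sketch below is an assumption based on that convention rather than on this repository's README, and the attribute names exposed by the loaded module are not shown on this page, so it only loads and inspects the kernel.

# Minimal sketch, assuming this repo is consumable via the Hugging Face
# `kernels` package (pip install kernels). The ops exposed by the kernel
# are not documented on this page, so we just load it and list them.
from kernels import get_kernel

# Downloads and loads the prebuilt binary that matches the local
# torch/CUDA (or AArch64) environment from the Hub repository.
paged_attention = get_kernel("kernels-community/paged-attention")

# Inspect which of the ported vLLM attention ops the module provides.
print(dir(paged_attention))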