kernels-community / paged-attention

kernel · License: apache-2.0
3 contributors · History: 14 commits

Latest commit 0f86240: "Use default CUDA capabilities" (danieldk, HF Staff, 17 days ago)
build/              Build (aarch64)                  19 days ago
cuda-utils/         Port vLLM attention kernels      3 months ago
paged-attention/    Port vLLM attention kernels      3 months ago
tests/              Rename to paged-attention        3 months ago
torch-ext/          Update for build.toml changes    2 months ago
.gitattributes      1.56 kB     Port vLLM attention kernels      3 months ago
README.md           128 Bytes   Update README.md (#1)            2 months ago
build.toml          1.13 kB     Use default CUDA capabilities    17 days ago
flake.lock          3.03 kB     Fix flake input                  19 days ago
flake.nix           332 Bytes   Fix flake input                  19 days ago