---
title: Kernel Community
emoji: 🔥
colorFrom: gray
colorTo: green
sdk: static
pinned: false
---

# Kernel Community

The Kernel Hub allows Python libraries and applications to **load optimized compute kernels directly from the Hugging Face Hub**. Think of it like the Model Hub, but for low-level, high-performance code snippets (kernels) that accelerate specific operations, often on GPUs.

Instead of manually managing complex dependencies, wrestling with compilation flags, or building libraries like Triton or CUTLASS from source, you can use the `kernels` library to instantly fetch and run pre-compiled, optimized kernels.

## Projects

The Kernel Hub team maintains two projects to make interacting with the Kernel Hub as easy as possible.
- **kernel-builder**: creates compliant kernels that meet strict criteria for portability and compatibility.
- **kernels**: a Python library to load compute kernels directly from the Hub. A short usage sketch follows.
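
Loading a kernel takes only a couple of lines. The sketch below is illustrative: it assumes the `kernels-community/activation` repository, which exposes a `gelu_fast` function, and a CUDA-capable GPU; any compliant kernel repository on the Hub can be loaded the same way.

```python
import torch
from kernels import get_kernel

# Fetch (and cache) the pre-compiled kernel from the Hugging Face Hub.
# "kernels-community/activation" is an example repository.
activation = get_kernel("kernels-community/activation")

# Call a function exposed by the kernel: a fast GELU that writes its
# result into a pre-allocated output tensor.
x = torch.randn(10, 10, dtype=torch.float16, device="cuda")
out = torch.empty_like(x)
activation.gelu_fast(out, x)
print(out)
```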

## What are Compliant Kernels?

Kernels on the Hub are designed to be:

- **Portable**: load from paths outside `PYTHONPATH`
- **Unique**: multiple versions can run in the same process
- **Compatible**: support various Python versions, PyTorch builds, and C++ ABIs
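
To illustrate the portability and uniqueness guarantees, the sketch below loads two revisions of the same kernel repository in a single process. It assumes that `get_kernel` accepts a Hub-style `revision` argument and that the tag shown exists; both are placeholders rather than guaranteed API details.

```python
from kernels import get_kernel

# Kernels are loaded from their own cache paths rather than PYTHONPATH,
# so two revisions of the same repository can coexist in one process.
# The repository and the "v0.0.2" tag below are placeholders.
stable = get_kernel("kernels-community/activation", revision="main")
pinned = get_kernel("kernels-community/activation", revision="v0.0.2")

# Each module is usable independently, e.g. stable.gelu_fast(out, x)
# and pinned.gelu_fast(out, x).
```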