flashinfer-ai / flashinfer
FlashInfer: Kernel Library for LLM Serving
DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling (a per-tile scaling sketch follows this list)
CUDA-accelerated rasterization of Gaussian splatting
NCCL Tests
GPU-accelerated decision optimization
Instant neural graphics primitives: lightning-fast NeRF and more
[ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized attention that achieves a 2-5x speedup over FlashAttention without degrading end-to-end metrics across language, image, and video models.
CUDA Kernel Benchmarking Library
DeepEP: an efficient expert-parallel communication library
Molecular dynamics on graphics processing units
cuVS - a library for vector search and clustering on the GPU
[ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl
CUDA Library Samples
Causal depthwise conv1d in CUDA, with a PyTorch interface (see the conv1d sketch after this list)
This package contains the original 2012 AlexNet code.
Fast CUDA matrix multiplication from scratch (a naive starting-point kernel is sketched after this list)
Tile primitives for speedy kernels
LLM training in simple, raw C/CUDA
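The "fine-grained scaling" in the DeepGEMM entry refers to quantizing small tiles of a matrix, each with its own scale factor, rather than using one scale for the whole tensor. Below is a minimal CUDA sketch of that idea, assuming 128-element tiles and the FP8 E4M3 format; the kernel name and data layout are illustrative assumptions, not DeepGEMM's actual code.

```cuda
#include <cstdio>
#include <cuda_fp8.h>
#include <cuda_runtime.h>

// Quantize one 128-element tile per thread block: find the tile's absolute
// maximum, derive scale = amax / 448 (the E4M3 maximum), then store x / scale
// in FP8 together with the per-tile scale. A GEMM consuming these tiles would
// multiply partial products by the matching scales in its epilogue.
__global__ void quantize_fp8_per_tile(const float *x, __nv_fp8_e4m3 *q,
                                      float *scales, int n) {
    __shared__ float smax[128];
    int i = blockIdx.x * 128 + threadIdx.x;
    float v = (i < n) ? x[i] : 0.0f;
    smax[threadIdx.x] = fabsf(v);
    __syncthreads();
    for (int s = 64; s > 0; s >>= 1) {  // tree reduction for the tile amax
        if (threadIdx.x < s)
            smax[threadIdx.x] = fmaxf(smax[threadIdx.x], smax[threadIdx.x + s]);
        __syncthreads();
    }
    float scale = fmaxf(smax[0], 1e-12f) / 448.0f;
    if (threadIdx.x == 0) scales[blockIdx.x] = scale;
    if (i < n) q[i] = __nv_fp8_e4m3(v / scale);
}

int main() {
    const int n = 1024;
    float *x, *scales;
    __nv_fp8_e4m3 *q;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&scales, (n / 128) * sizeof(float));
    cudaMallocManaged(&q, n * sizeof(__nv_fp8_e4m3));
    for (int i = 0; i < n; ++i) x[i] = 0.01f * i;
    quantize_fp8_per_tile<<<n / 128, 128>>>(x, q, scales, n);
    cudaDeviceSynchronize();
    printf("tile 0 scale: %f\n", scales[0]);
    cudaFree(x); cudaFree(scales); cudaFree(q);
    return 0;
}
```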
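For the causal depthwise conv1d entry, "depthwise" means each channel is convolved with its own filter, and "causal" means output sample t depends only on inputs up to t (zero-padded on the left). A minimal CUDA sketch under those definitions; the layout, names, and sizes are my assumptions, not the repository's code.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread per output element of y (shape C x L). Each channel c applies
// its own width-W filter w[c], and only past samples x[t-k] with t-k >= 0
// contribute, which makes the convolution causal.
__global__ void causal_depthwise_conv1d(const float *x, const float *w,
                                        float *y, int C, int L, int W) {
    int c = blockIdx.y;                             // channel index
    int t = blockIdx.x * blockDim.x + threadIdx.x;  // time index
    if (c < C && t < L) {
        float acc = 0.0f;
        for (int k = 0; k < W; ++k) {
            int src = t - k;                        // causal: past samples only
            if (src >= 0) acc += x[c * L + src] * w[c * W + k];
        }
        y[c * L + t] = acc;
    }
}

int main() {
    const int C = 4, L = 1024, W = 4;
    float *x, *w, *y;
    cudaMallocManaged(&x, C * L * sizeof(float));
    cudaMallocManaged(&w, C * W * sizeof(float));
    cudaMallocManaged(&y, C * L * sizeof(float));
    for (int i = 0; i < C * L; ++i) x[i] = 1.0f;
    for (int i = 0; i < C * W; ++i) w[i] = 0.25f;
    dim3 block(256), grid((L + 255) / 256, C);
    causal_depthwise_conv1d<<<grid, block>>>(x, w, y, C, L, W);
    cudaDeviceSynchronize();
    printf("y[0]=%f y[3]=%f\n", y[0], y[3]);  // 0.25 at t=0, 1.0 once warmed up
    cudaFree(x); cudaFree(w); cudaFree(y);
    return 0;
}
```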
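And for "matrix multiplication from scratch", the usual starting point before tiling and shared-memory optimizations is one thread per output element of C = A * B. A sketch of that naive baseline follows; the matrix sizes, kernel name, and launch configuration are illustrative, not the repository's code.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Naive SGEMM: each thread computes one element of C (M x N) as the dot
// product of a row of A (M x K) and a column of B (K x N).
__global__ void sgemm_naive(int M, int N, int K,
                            const float *A, const float *B, float *C) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;  // row of C
    int col = blockIdx.x * blockDim.x + threadIdx.x;  // column of C
    if (row < M && col < N) {
        float acc = 0.0f;
        for (int k = 0; k < K; ++k)
            acc += A[row * K + k] * B[k * N + col];
        C[row * N + col] = acc;
    }
}

int main() {
    const int M = 256, N = 256, K = 256;
    float *A, *B, *C;
    cudaMallocManaged(&A, M * K * sizeof(float));
    cudaMallocManaged(&B, K * N * sizeof(float));
    cudaMallocManaged(&C, M * N * sizeof(float));
    for (int i = 0; i < M * K; ++i) A[i] = 1.0f;
    for (int i = 0; i < K * N; ++i) B[i] = 2.0f;
    dim3 block(16, 16), grid((N + 15) / 16, (M + 15) / 16);
    sgemm_naive<<<grid, block>>>(M, N, K, A, B, C);
    cudaDeviceSynchronize();
    printf("C[0]=%f (expect K * 1 * 2 = 512)\n", C[0]);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```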