Repositories list
639 repositories
- TensorRT LLM provides an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations for efficient inference on NVIDIA GPUs. It also contains components for building Python and C++ runtimes that orchestrate inference execution with high performance. (A minimal usage sketch appears after this list.)
- GPU-accelerated decision optimization
- Ongoing research on training transformer models at scale
- Documentation repository for NVIDIA Cloud Native Technologies
- CUDA Kernel Benchmarking Library
- NVSentinel (Public)
- CUDA Core Compute Libraries
- edk2 (Public)
- C++ and Python support for the CUDA Quantum programming model for heterogeneous quantum-classical workflows
- Open-source deep-learning framework for exploring, building and deploying AI weather/climate workflows.
- A Python framework for accelerated simulation, data generation and spatial computing.
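The TensorRT LLM entry above advertises a Python API for defining and running LLMs. The sketch below is a rough illustration only: it assumes the high-level `LLM` and `SamplingParams` interface exported by the `tensorrt_llm` package, and the model name, prompt, and sampling settings are placeholders rather than values taken from the listing.

```python
# Minimal sketch of defining a model and running batched inference with the
# TensorRT LLM Python API (names and values assumed for illustration).
from tensorrt_llm import LLM, SamplingParams

# Placeholder model id; a Hugging Face id or local checkpoint path would go here.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Illustrative sampling settings.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts and print the first candidate.
outputs = llm.generate(["What makes GPU inference fast?"], params)
for out in outputs:
    print(out.outputs[0].text)
```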