Tensors and Dynamic neural networks in Python with strong GPU acceleration
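This entry matches PyTorch's description. A minimal sketch of device-aware tensor math follows; the matrix size is arbitrary, and the code only uses the GPU when one is actually available.

```python
import torch

# Pick the GPU when one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(1024, 1024, device=device)  # random matrix allocated on the chosen device
y = x @ x.T                                 # matrix multiply runs on that same device
print(y.device, y.shape)
```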
The fastai deep learning library
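A minimal fastai sketch, assuming the Oxford-IIIT Pets dataset shipped via `URLs.PETS` and a torchvision ResNet backbone; it fine-tunes for a single epoch and picks up a GPU automatically if one is present.

```python
from fastai.vision.all import *

# Download the Pets dataset; cat breeds have filenames starting with an uppercase letter.
path = untar_data(URLs.PETS)/"images"

def is_cat(name):
    return name[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)  # one quick epoch of transfer learning
```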
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
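A minimal DeepSpeed sketch, assuming a CUDA-capable machine and a toy linear model; the config dict is purely illustrative, and scripts like this are normally run through the `deepspeed` launcher.

```python
import torch
import deepspeed

# Toy model plus an illustrative config; real configs enable ZeRO stages, fp16, etc.
model = torch.nn.Linear(10, 2)
ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
}

# deepspeed.initialize wraps the model in an engine that owns the optimizer and backward pass.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config)

x = torch.randn(8, 10, device=engine.device)
loss = engine(x).sum()
engine.backward(loss)  # the engine handles gradient scaling/partitioning
engine.step()
```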
Suite of tools for deploying and training deep learning models using the JVM. Highlights include model import for Keras, TensorFlow, and ONNX/PyTorch, a modular and tiny C++ library for running math code, and a Java-based math library on top of the core C++ library. Also includes SameDiff: a PyTorch/TensorFlow-like library for running deep learn...
Open deep learning compiler stack for CPU, GPU, and specialized accelerators
Open3D: A Modern Library for 3D Data Processing
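A minimal Open3D sketch: build a point cloud from random points and voxel-downsample it. The point count and voxel size are arbitrary choices for illustration.

```python
import numpy as np
import open3d as o3d

# Wrap random 3D points in an Open3D point cloud.
points = np.random.rand(10000, 3)
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)

down = pcd.voxel_down_sample(voxel_size=0.05)  # merge points falling into the same voxel
print(down)  # reports how many points remain after downsampling
```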
Productive, portable, and performant GPU programming in Python.
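This tagline matches Taichi. A minimal sketch follows, assuming any available GPU backend (Taichi falls back to the CPU otherwise): a kernel whose outer loop is parallelized automatically.

```python
import taichi as ti

ti.init(arch=ti.gpu)  # picks CUDA/Vulkan/Metal if present, otherwise falls back to CPU

n = 1024
x = ti.field(dtype=ti.f32, shape=n)

@ti.kernel
def fill():
    for i in x:          # the outermost loop in a kernel is parallelized
        x[i] = 2.0 * i

fill()
print(x[0], x[n - 1])
```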
A Python library built to empower developers to build applications and systems with self-contained Computer Vision capabilities
Build and run Docker containers leveraging NVIDIA GPUs
H2O is an Open Source, Distributed, Fast & Scalable Machine Learning Platform: Deep Learning, Gradient Boosting (GBM) & XGBoost, Random Forest, Generalized Linear Modeling (GLM with Elastic Net), K-Means, PCA, Generalized Additive Models (GAM), RuleFit, Support Vector Machine (SVM), Stacked Ensembles, Automatic Machine Learning (AutoML), etc.
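A minimal H2O sketch on a tiny in-memory frame; the column names and toy values are assumptions, and `h2o.init()` starts (or attaches to) a local cluster.

```python
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()  # start or connect to a local H2O cluster

# Hypothetical toy frame: two numeric features and a binary target.
df = h2o.H2OFrame({"x1": [1, 2, 3, 4, 5, 6],
                   "x2": [6, 5, 4, 3, 2, 1],
                   "y":  ["a", "a", "a", "b", "b", "b"]})
df["y"] = df["y"].asfactor()  # treat the target as categorical

gbm = H2OGradientBoostingEstimator(ntrees=10)
gbm.train(x=["x1", "x2"], y="y", training_frame=df)
print(gbm.model_performance(df))
```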
Real-Time and Accurate Full-Body Multi-Person Pose Estimation & Tracking System
An open-source, low-code machine learning library in Python
Play with fluids in your browser (works even on mobile)
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
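A minimal client-side sketch against an already-running Triton server using the `tritonclient` HTTP API; the endpoint `localhost:8000`, the model name `my_model`, and the tensor names `INPUT0`/`OUTPUT0` are all assumptions that depend on the deployed model's config.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server assumed to be listening on the default HTTP port.
client = httpclient.InferenceServerClient(url="localhost:8000")

data = np.random.rand(1, 4).astype(np.float32)
inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

# Model name and tensor names must match the model repository's config.pbtxt.
result = client.infer(model_name="my_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```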
Deep Learning GPU Training System
A flexible framework of neural networks for deep learning
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., a local PC with iGPU and NPU, or a discrete GPU such as Arc, Flex, and Max); seamlessly integrates with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc.
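This description matches ipex-llm. A minimal sketch follows, assuming an Intel GPU ("xpu" device) and a locally available Hugging Face checkpoint; the model id is a placeholder and the 4-bit flag uses the library's low-bit loading path.

```python
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

# Placeholder model id; any local Hugging Face causal-LM checkpoint works.
model_path = "meta-llama/Llama-2-7b-chat-hf"

# load_in_4bit enables low-bit weights; .to("xpu") moves the model to the Intel GPU.
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True).to("xpu")
tokenizer = AutoTokenizer.from_pretrained(model_path)

inputs = tokenizer("What is GPU acceleration?", return_tensors="pt").to("xpu")
with torch.inference_mode():
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```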
A fast, scalable, high performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks for Python, R, Java, C++. Supports computation on CPU and GPU.
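This description matches CatBoost. A minimal sketch follows on a toy dataset; `task_type="GPU"` assumes a CUDA-capable device and can be dropped to train on the CPU instead.

```python
import numpy as np
from catboost import CatBoostClassifier

# Toy dataset: 100 samples, 5 features, binary labels derived from the first two features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# task_type="GPU" requires a CUDA device; omit it (or use "CPU") to stay on the CPU.
model = CatBoostClassifier(iterations=50, task_type="GPU", devices="0", verbose=0)
model.fit(X, y)
print(model.predict(X[:5]))
```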