Simple Python CPU and GPU temperature monitor with a drag function for Windows, using PyQt
Updated Aug 8, 2024 · Python
Tensors and Dynamic neural networks in Python with strong GPU acceleration
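The PyTorch entry above can be illustrated with a minimal device-agnostic sketch. This is not taken from any repository in this list; it only uses the public `torch` API, and falls back to the CPU when no CUDA device is present:

```python
import torch

# Pick the GPU when one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors are placed on the device explicitly at creation time.
x = torch.randn(3, 4, device=device)
w = torch.randn(4, 2, device=device)

# The matrix multiply runs on whichever device holds the tensors.
y = x @ w
print(y.shape)
```

The same pattern (`.to(device)` on tensors and modules) is the idiomatic way to write code that runs unchanged on CPU-only and GPU machines.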
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Collection of best practices, reference architectures, model training examples and utilities to train large models on AWS.
SkyPilot: Run AI and batch jobs on any infra (Kubernetes or 12+ clouds). Get unified execution, cost savings, and high GPU availability via a simple interface.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
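As a rough sketch of how DeepSpeed is configured, training options are supplied through a JSON config file. The field names below come from DeepSpeed's documented config schema; the batch size and accumulation values are placeholders, not recommendations:

```json
{
  "train_batch_size": 32,
  "gradient_accumulation_steps": 1,
  "fp16": { "enabled": true },
  "zero_optimization": { "stage": 2 }
}
```

Such a file is typically passed to the launcher (e.g. `deepspeed train.py --deepspeed_config ds_config.json`) to enable mixed precision and ZeRO-based memory partitioning for distributed training.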
PyTorch domain library for recommendation systems
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
A fast, scalable, high-performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks in Python, R, Java, and C++. Supports computation on CPU and GPU.
JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in the future; PRs welcome).
A Pythonic framework to simplify AI service building
Stretching GPU performance for GEMMs and tensor contractions.
Doing non-Cartesian MR Imaging has never been so easy.
Intel(R) Extension for Scikit-learn is a seamless way to speed up your Scikit-learn application
Horizon chart for CPU/GPU/Neural Engine utilization monitoring on Apple M1/M2, and NVIDIA GPUs on Linux
📊 Simple package for monitoring and controlling your NVIDIA Jetson [Orin, Xavier, Nano, TX] series
A framework for quantum computing