Stars
MiniCPM-o 2.6: A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming on Your Phone
The lean application framework for Python. Build sophisticated user interfaces with a simple Python API. Run your apps in the terminal and a web browser.
[EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA
Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore".
RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding.
TabReD: Analyzing Pitfalls and Filling the Gaps in Tabular Deep Learning Benchmarks
Python & Command-line tool to gather text and metadata on the Web: Crawling, scraping, extraction, output as CSV, JSON, HTML, MD, TXT, XML
An implementation of the Prompt-to-Prompt paper for the SDXL architecture
Open-source low-code data preparation library in Python. Collect, clean, and visualize your data in Python with a few lines of code.
📊 An infographics generator with 30+ plugins and 300+ options to display stats about your GitHub account and render them as SVG, Markdown, PDF or JSON!
A compositional diagramming and animation library as an eDSL in Python
Set of tools to assess and improve LLM security.
Uncertainty quantification with PyTorch
Simple and reliable optimization with local, global, population-based and sequential techniques in numerical discrete search spaces.
PyTorch-Direct code on top of PyTorch-1.8.0nightly (e152ca5) for Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture (accepted by PVLDB)
A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep learning training and inference applications.
MinHash, LSH, LSH Forest, Weighted MinHash, HyperLogLog, HyperLogLog++, LSH Ensemble and HNSW
A fast, clean, responsive Hugo theme.
Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch. It enables using different hardware executors at once, across one or thousands of GPUs.
What would you do with 1000 H100s...