
Develop on Intel, from AI PC to Data Center

Speed up AI development using Intel®-optimized software on the latest Intel® Core™ Ultra processor, Intel® Xeon® processor, Intel® Gaudi® AI Accelerator, and GPU compute. You can get started right away on the Intel® Tiber™ AI Cloud for free.

As a participant in the open source software community since 1989, Intel uses industry collaboration, co-engineering, and open source contributions to deliver a steady stream of code and optimizations across multiple platforms and use cases. We push our contributions upstream so developers always get current, optimized, and secure software.

Check out the following repositories to jumpstart your development work on Intel:

  • OPEA GenAI Examples - Examples such as ChatQnA that illustrate the pipeline capabilities of the Open Platform for Enterprise AI (OPEA) project
  • AI PC Notebooks - A collection of notebooks designed to showcase generative AI workloads on AI PCs
  • Open3D - A modern library for 3D data processing
  • Optimum Intel - Accelerate inference with Intel optimization tools (a hedged usage sketch follows this list)
  • Optimum Habana - Easy and lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU)
  • Intel Neural Compressor - State-of-the-art low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) and sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime (a second hedged sketch follows this list)
  • OpenVINO Notebooks - 📚 Jupyter notebook tutorials for OpenVINO™
  • SetFit - Efficient few-shot learning with Sentence Transformers
  • FastRAG - Efficient retrieval-augmented generation (RAG) framework
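
The Optimum Intel item above is the kind of thing you can try in a few lines. Here is a minimal, hedged sketch, assuming `optimum[openvino]` and `transformers` are installed; the model checkpoint and task are illustrative choices, not recommendations from the repositories listed here.

```python
# Hedged sketch: export a Hugging Face model to OpenVINO with Optimum Intel and run it.
# Assumptions: `pip install optimum[openvino] transformers`; the checkpoint below is an
# illustrative public model, not one prescribed by the repositories above.
from optimum.intel import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Intel-optimized inference is fast."))
```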
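
For the Intel Neural Compressor item, a similar minimal sketch of post-training static quantization, assuming the 2.x `quantization.fit` API together with PyTorch and torchvision; the ResNet-50 model and the random calibration tensors are placeholders for a real model and dataset.

```python
# Hedged sketch: post-training static INT8 quantization with Intel Neural Compressor.
# Assumptions: neural-compressor 2.x, torch, and torchvision installed; the model and
# the random calibration data below stand in for a real workload.
import torch
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet50
from neural_compressor import PostTrainingQuantConfig, quantization

model = resnet50(weights=None).eval()

# Tiny random calibration set; replace with batches drawn from real data.
calib_data = TensorDataset(torch.randn(8, 3, 224, 224), torch.zeros(8, dtype=torch.long))
calib_loader = DataLoader(calib_data, batch_size=1)

q_model = quantization.fit(
    model=model,
    conf=PostTrainingQuantConfig(approach="static"),
    calib_dataloader=calib_loader,
)
q_model.save("./quantized_resnet50")  # writes the quantized model artifacts
```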

DevHub Discord

Join us on the Intel DevHub Discord server to chat with other developers in channels like #dev-projects, #gaudi, and #large-language-models.

Learn more about Intel's open source efforts

Visit open.intel.com to find out more, or follow us on X or LinkedIn!

Pinned repositories

  1. cve-bin-tool Public

    The CVE Binary Tool helps you determine if your system includes known vulnerabilities. You can scan binaries for over 350 common, vulnerable components (openssl, libpng, libxml2, expat and others),…

    Python · 1.4k stars · 540 forks

  2. intel-extension-for-pytorch Public

    A Python package that extends the official PyTorch with optimizations for extra performance on Intel platforms (a hedged usage sketch follows this pinned list)

    Python · 1.9k stars · 282 forks

  3. neural-compressor Public

    SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime

    Python · 2.4k stars · 274 forks

  4. ai Public

    Explore our open source AI portfolio! Develop, train, and deploy your AI solutions with performance- and productivity-optimized tools from Intel.

    44 stars · 7 forks

  5. intel-one-mono Public

    Intel One Mono font repository

    9.6k stars · 314 forks

  6. rohd Public

    ROHD (Rapid Open Hardware Development) is a framework for describing and verifying hardware in the Dart programming language.

    Dart · 423 stars · 76 forks
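
As noted in the intel-extension-for-pytorch entry above, the extension is designed to be applied with minimal code changes. A minimal, hedged inference sketch follows, assuming a CPU build of intel-extension-for-pytorch matched to the installed PyTorch; the ResNet-50 model and random input are illustrative only.

```python
# Hedged sketch: apply Intel Extension for PyTorch optimizations to an eval-mode model.
# Assumptions: intel-extension-for-pytorch installed against a matching PyTorch build;
# the ResNet-50 model and random input are placeholders for a real workload.
import torch
import intel_extension_for_pytorch as ipex
from torchvision.models import resnet50

model = resnet50(weights=None).eval()
model = ipex.optimize(model)  # apply Intel-specific optimizations (e.g., op fusion, weight prepacking)

with torch.no_grad():
    output = model(torch.randn(1, 3, 224, 224))
print(output.shape)
```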

Repositories

A selection of 10 of the organization's 1,277 repositories:
  • llvm Public

    Intel staging area for llvm.org contributions. Home for Intel LLVM-based projects.

    LLVM · 1,343 stars · 770 forks · 628 issues (18 need help) · 242 pull requests · Updated Jun 16, 2025
  • edge-developer-kit-reference-scripts Public

    Reference setup scripts for developer kits across various Intel platforms and GPUs

    Python · 29 stars · Apache-2.0 · 5 forks · 0 issues · 0 pull requests · Updated Jun 16, 2025
  • auto-round Public

    Advanced Quantization Algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA and HPU. Seamlessly integrated with Torchao, Transformers, and vLLM.

    Python · 499 stars · Apache-2.0 · 42 forks · 25 issues · 8 pull requests · Updated Jun 16, 2025
  • neural-compressor Public

    SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime

    Python · 2,428 stars · Apache-2.0 · 274 forks · 43 issues · 6 pull requests · Updated Jun 16, 2025
  • torch-xpu-ops Public
    C++ · 46 stars · Apache-2.0 · 41 forks · 122 issues (1 needs help) · 55 pull requests · Updated Jun 16, 2025
  • onnxruntime Public Forked from microsoft/onnxruntime

    ONNX Runtime: cross-platform, high-performance scoring engine for ML models

    C++ · 65 stars · MIT · 3,339 forks · 6 issues · 23 pull requests · Updated Jun 16, 2025
  • ecfw-zephyr Public
    C · 60 stars · Apache-2.0 · 38 forks · 1 issue · 1 pull request · Updated Jun 16, 2025
  • media-driver Public

    Intel Graphics Media Driver to support hardware decode, encode and video processing.

    C · 1,102 stars · 361 forks · 123 issues · 97 pull requests · Updated Jun 16, 2025