Simply make AI models faster, cheaper, smaller, greener!


Documentation





Introduction

Pruna is a model optimization framework built for developers, enabling you to deliver faster, more efficient models with minimal overhead. It provides a comprehensive suite of compression algorithms including caching, quantization, pruning, distillation and compilation techniques to make your models:

  • Faster: Accelerate inference times through advanced optimization techniques
  • Smaller: Reduce model size while maintaining quality
  • Cheaper: Lower computational costs and resource requirements
  • Greener: Decrease energy consumption and environmental impact

The toolkit is designed with simplicity in mind: optimizing your models takes just a few lines of code. It supports various model types, including LLMs, Diffusion and Flow Matching Models, Vision Transformers, Speech Recognition Models, and more.

Pruna Pro

To move at top speed, we offer Pruna Pro, our enterprise solution that unlocks advanced optimization features, our OptimizationAgent, priority support, and much more.

Installation

Pruna is currently available for installation on Linux, macOS, and Windows. However, some algorithms are restricted to certain operating systems and might not be available on all platforms.

Before installing, ensure you have:

  • Python 3.9 or higher
  • Optional: CUDA toolkit for GPU support

Option 1: Install Pruna using pip

Pruna is available on PyPI, so you can install it using pip:

pip install pruna

Option 2: Install Pruna from source

You can also install Pruna directly from source by cloning the repository and installing the package in editable mode:

git clone https://github.com/pruna-ai/pruna.git
cd pruna
pip install -e .
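
In either case, a quick import check confirms the installation (printing the version assumes the package exposes a standard __version__ attribute):

python -c "import pruna; print(pruna.__version__)"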

Quick Start

Getting started with Pruna is easy-peasy pruna-squeezy!

First, load any pre-trained model. Here's an example using Stable Diffusion:

from diffusers import StableDiffusionPipeline
base_model = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
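
If you have a GPU available, you can additionally load the pipeline in half precision and move it to the device; this is standard diffusers usage and independent of Pruna:

import torch
from diffusers import StableDiffusionPipeline

# Load weights in float16 and run on the GPU for faster inference
base_model = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")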

Then, use Pruna's smash function to optimize your model. Pruna provides a variety of optimization algorithms that can be combined to get the best possible results. You can customize the optimization process using SmashConfig:

from pruna import smash, SmashConfig

# Create and smash your model
smash_config = SmashConfig()
smash_config["cacher"] = "deepcache"
smash_config["compiler"] = "stable_fast"
smashed_model = smash(model=base_model, smash_config=smash_config)
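
Since SmashConfig behaves like a dictionary, combining algorithms is just a matter of setting more keys before calling smash. For example, adding a quantizer on top of the configuration above might look like this ("hqq" is an assumed identifier here; check the documentation for the exact algorithm names):

# Hypothetical: also quantize the model alongside caching and compilation
smash_config["quantizer"] = "hqq"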

Your model is now optimized and you can use it as you would use the original model:

smashed_model("An image of a cute prune.").images[0]

You can then use our evaluation interface to measure the performance of your model:

from pruna.evaluation.task import Task
from pruna.evaluation.evaluation_agent import EvaluationAgent
from pruna.data.pruna_datamodule import PrunaDataModule

task = Task("image_generation_quality", datamodule=PrunaDataModule.from_string("LAION256"))
eval_agent = EvaluationAgent(task)
eval_agent.evaluate(smashed_model)
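
The EvaluationAgent runs the task's metrics over the configured datamodule. As a sketch for inspecting the outcome (that evaluate returns an iterable of per-metric results is an assumption; consult the documentation for the exact interface):

# Hypothetical inspection of the evaluation results
results = eval_agent.evaluate(smashed_model)
for result in results:
    print(result)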

This was the minimal example; if you are looking for the maximal one, check out our documentation for an overview of all supported algorithms, as well as our tutorials for more use cases and examples.

Pruna Pro

Pruna has everything you need to get started on optimizing your own models. To push the efficiency of your models even further, we offer Pruna Pro. To give you a glimpse of what is possible with Pruna Pro, let us consider three of the most widely used diffusers pipelines and see how much smaller and faster we can make them. In addition to popular open-source algorithms, we use our proprietary Auto Caching algorithm. We also compare the fidelity of the compressed models, where fidelity measures the similarity between the images of the compressed model and those of the original model.

Stable Diffusion XL

For Stable Diffusion XL, we compare Auto Caching with DeepCache (available with Pruna). We combine these caching algorithms with torch.compile to get an additional 9% reduction in inference latency, and we use HQQ 8-bit quantization to reduce the size of the model from 8.8GB to 6.7GB.
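
The open-source half of this recipe can be approximated with Pruna itself. A sketch of the corresponding configuration (the "torch_compile" and "hqq_diffusers" identifiers are assumptions; see the algorithm overview below and the documentation for what is actually available):

from diffusers import StableDiffusionXLPipeline
from pruna import smash, SmashConfig

base_model = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0"
)

# Hypothetical recipe mirroring the open-source parts of the benchmark:
# DeepCache for caching, torch.compile for compilation, HQQ for 8-bit weights
smash_config = SmashConfig()
smash_config["cacher"] = "deepcache"
smash_config["compiler"] = "torch_compile"   # assumed identifier
smash_config["quantizer"] = "hqq_diffusers"  # assumed identifier
smashed_model = smash(model=base_model, smash_config=smash_config)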

SDXL Benchmark

FLUX [dev]

For FLUX [dev], we compare Auto Caching with the popular TeaCache algorithm. In this case, we used Stable Fast to reduce the latency of Auto Caching by an additional 13%, and HQQ 8-bit quantization to reduce the size of FLUX from 33GB to 23GB.

FLUX [dev] Benchmark

HunyuanVideo

For HunyuanVideo, we compare Auto Caching with TeaCache. Applying HQQ 8-bit quantization to the model reduced the size from 41GB to 29GB.

HunyuanVideo Benchmark

Algorithm Overview

Since Pruna offers a broad range of optimization algorithms, the following table provides a high-level overview of all methods available in Pruna. For a detailed description of each algorithm, have a look at our documentation.

| Technique | Description |
|-----------|-------------|
| batcher | Groups multiple inputs together to be processed simultaneously, improving computational efficiency and reducing processing time. |
| cacher | Stores intermediate results of computations to speed up subsequent operations. |
| compiler | Optimises the model with instructions for specific hardware. |
| distiller | Trains a smaller, simpler model to mimic a larger, more complex model. |
| quantizer | Reduces the precision of weights and activations, lowering memory requirements. |
| pruner | Removes less important or redundant connections and neurons, resulting in a sparser, more efficient network. |
| recoverer | Restores the performance of a model after compression. |
| factorizer | Batches several small matrix multiplications into one large fused operation. |
| enhancer | Enhances the model output by applying post-processing algorithms such as denoising or upscaling. |

Each technique's effect on speed, memory, and quality, rated as ✅ (improves), ➖ (approx. the same), or ❌ (worsens), is listed per algorithm in the documentation.





FAQ and Troubleshooting

If you cannot find an answer to your question or problem in our documentation, our FAQs, or an existing issue, we are happy to help you! You can get help from the Pruna community on Discord, join our Office Hours, or open an issue on GitHub.

Contributors

The Pruna package was made with 💜 by the Pruna AI team and our amazing contributors. Contribute to the repository to become part of the Pruna family!


Citation

If you use Pruna in your research, feel free to cite the project! 💜

    @misc{pruna,
      title = {Efficient Machine Learning with Pruna},
      year = {2023},
      note = {Software available from pruna.ai},
      url = {https://www.pruna.ai/}
    }
