FHE-AI-Inference is a Python library enabling secure neural network inference with Fully Homomorphic Encryption (FHE) using the OpenFHE CKKS scheme. It allows privacy-preserving AI applications for sensitive data scenarios, such as encrypted medical diagnostics (healthcare) and encrypted fraud detection (finance).
- Easy Setup: Fully automated environment setup with a powerful Makefile.
- Truly Pythonic API: Clean, intuitive, and fully tested Python interfaces.
- Comprehensive Docs & Tutorials: Clear, practical documentation and tutorials.
- Developer-Friendly: Quickly integrate secure inference into your AI workflow.
- Secure Inference: Encrypt data and perform neural network inference securely.
- Neural Network Optimized: Ideal for shallow neural networks (2-5 layers).
- Fully Tested: 100% test coverage ensuring reliability and robustness.
- Open-Source: MIT-licensed and welcoming contributions.
- Pythonic, fully tested API available.
- Automated setup environment using Makefile.
- Initial documentation and beginner-friendly tutorials available.
- Planning further integration with PyTorch, TensorFlow, and ONNX.
Check the Roadmap for upcoming milestones.
One command to install everything (macOS / Linux):
```shell
make install
```

After the installation runs, you should see:
```
✅ OpenFHE context successfully initialized with CKKS.
```
This process installs Homebrew dependencies, builds & installs the OpenFHE C++ core, installs the Python bindings from source, and appends the proper DYLD_LIBRARY_PATH export into your ~/.zshrc.
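The appended export typically looks like the line below; the exact install prefix (`/usr/local/lib` here) is an assumption and may differ on your system:

```shell
# Appended to ~/.zshrc so the dynamic linker can find the OpenFHE shared libraries
export DYLD_LIBRARY_PATH="/usr/local/lib:$DYLD_LIBRARY_PATH"
```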
Reload your shell so the new library path is picked up:
```shell
source ~/.zshrc
```

To rebuild from scratch:

```shell
make clean
make install
```

Quick smoke test:
```shell
python scripts/test_openfhe_init.py
```

You should see:
```
✅ OpenFHE context successfully initialized with CKKS.
```
Full test suite & coverage (via Hatch):
```shell
make test
```

After setup, run your first encrypted neural inference example:

```shell
python tutorials/getting_started_with_openfhe.py
```

You should see a successful demonstration of encryption, homomorphic operations, and decryption.
Instead of binding developers to verbose C++ cryptographic idioms, this library offers a natural, Pythonic interface to FHE operations.
All homomorphic encryption, decryption, and evaluation logic is cleanly exposed:
```python
from fhe_ai_inference.fheai import FHEAI

# Step 1: Create a homomorphic context
fhe = FHEAI(mult_depth=3, scale_mod_size=50)

# Step 2: Encrypt data
enc_x = fhe.encrypt([1.0, 2.0, 3.0])
enc_y = fhe.encrypt([4.0, 5.0, 6.0])

# Step 3: Compute homomorphically
enc_sum = fhe.add(enc_x, enc_y)
enc_prod = fhe.multiply(enc_x, enc_y)

# Step 4: Decrypt the results
print(fhe.decrypt(enc_sum))   # [5.0, 7.0, 9.0]
print(fhe.decrypt(enc_prod))  # [4.0, 10.0, 18.0]
```

Bootstrapping refreshes a ciphertext’s noise budget, enabling deeper encrypted computations.
With fhe-ai-inference, enabling bootstrapping is seamless:
```python
from fhe_ai_inference.fheai import FHEAI

# Initialize with bootstrapping support
fhe = FHEAI(bootstrappable=True)

# Encrypt data at a high level
enc = fhe.encrypt([3.14], level=10)

# Refresh the ciphertext
refreshed = fhe.bootstrap(enc)

# Decrypt and verify accuracy
print(fhe.decrypt(refreshed))  # [3.14...] — ciphertext refreshed
```

All features work out of the box via `make install`.
- Tutorials Index: Practical guides covering key aspects of OpenFHE usage.
- Getting Started with OpenFHE (CKKS): Detailed introduction covering encryption, decryption, and homomorphic operations.
Adaptive Logic: Bootstrapping is applied only when needed (i.e., if the ciphertext’s level exceeds the bootstrapping threshold). This ensures optimal performance during deep computations.
See scripts/bootstrap_demo.py for a working example that computes x^16 homomorphically.
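The adaptive logic can be sketched in plain Python. This is a toy stand-in, not the library's internals: `ToyCiphertext`, `BOOTSTRAP_THRESHOLD`, and the level bookkeeping are all illustrative assumptions that only model when a refresh would fire.

```python
# Toy model of adaptive bootstrapping: a "ciphertext" tracks its level,
# and a refresh is triggered only when the level crosses a threshold.
BOOTSTRAP_THRESHOLD = 3  # illustrative value, not the library's default


class ToyCiphertext:
    def __init__(self, value, level=0):
        self.value = value
        self.level = level


def multiply(a, b):
    # Each homomorphic multiplication consumes one level.
    return ToyCiphertext(a.value * b.value, max(a.level, b.level) + 1)


def bootstrap_if_needed(ct):
    # Refresh only when the level exceeds the threshold.
    if ct.level > BOOTSTRAP_THRESHOLD:
        return ToyCiphertext(ct.value, level=0)  # "refreshed" ciphertext
    return ct


# Compute x^16 by repeated squaring, bootstrapping adaptively along the way.
x = ToyCiphertext(1.1)
for _ in range(4):  # ((x^2)^2)^2)^2 = x^16
    x = multiply(x, x)
    x = bootstrap_if_needed(x)

print(round(x.value, 4))  # 4.595 (≈ 1.1 ** 16)
```

In the real library, the same decision would be made on the ciphertext's CKKS level, so deep circuits pay the bootstrapping cost only when the noise budget actually runs low.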
```shell
python3 -m http.server --directory docs
```

Then open your browser to http://localhost:8000.
The project uses Ruff for linting and keeping the code clean:

```shell
ruff check . --fix
```

You can also run:

```shell
hatch run dev
```

which checks lint, runs the tests, and updates the docs.
Automatically format and lint your commits with Black and Ruff:
```shell
pre-commit install
```

FHE-AI-Inference is evolving rapidly:
- Phase 1 (Complete): Environment automation, initial Pythonic API, testing.
- Phase 2: Integration with popular ML frameworks (PyTorch).
- Phase 3: Advanced features such as bootstrapping and secure deep neural networks.
Contributions are warmly welcomed! Check out CONTRIBUTING.md.
David William Silva 📧 contact@davidwsilva.com
Feel free to reach out with questions, ideas, or to collaborate!