
AICert

Website · Blog

Making AI Traceable and Transparent

Table of Contents
  1. About the project
  2. Getting started
  3. Limitations
  4. Contact

🔒 About The Project

🛠️ AICert aims to make AI traceable and transparent by enabling AI builders to create certificates with cryptographic proofs binding the weights to the training data and code. AI builders can be foundational model providers or companies that finetune the foundational models to their needs.

👩‍💻 End users are the final consumers of the AI builders’ models. They can verify these AI certificates to obtain proof that the model they talk to comes from a specific training set and code, which alleviates copyright, security, and safety concerns.


We leverage Trusted Platform Modules (TPMs) to attest the whole stack used for producing the model, from the UEFI, through the OS, all the way to the code and data.

Measuring the software stack, training code and inputs and binding them to the final weights allows the derivation of certificates that contain irrefutable proof of model provenance.
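
To build intuition for how this binding works, here is a conceptual sketch (not the AICert implementation, and the file names are placeholders): a TPM PCR is extended with the hash of each measured input, chaining everything into a single value that ends up in the certificate.

# Conceptual illustration only: extending a PCR chains hashes together,
# so the final value commits to every measured input in order.
# new_pcr = SHA256(old_pcr || SHA256(input))
pcr=$(printf '%064d' 0)                      # PCRs start at all zeros
for input in training_code.tar dataset.parquet final_weights.bin; do
  h=$(sha256sum "$input" | cut -d' ' -f1)
  pcr=$(printf '%s%s' "$pcr" "$h" | xxd -r -p | sha256sum | cut -d' ' -f1)
done
echo "$pcr"   # changes if any input (code, data, or weights) changes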

✅ Use cases

AICert addresses some of the most urgent concerns related to AI provenance. It allows AI builders to:

  • Prove their AI model was not trained on copyrighted, biased or non-consensual PII data

  • Provide an AI Bill of Materials (AIBOM) covering the data and code used, which makes it harder to poison the model by injecting backdoors into the weights

  • Provide a strong audit trail with irrefutable proof for compliance and transparency

    ⚠️ WARNING: AICert is still under development. Do not use it in production! If you want to contribute to this project, do not hesitate to raise an issue.

🔍 Features

  • AI model traceability: create AI model ID cards that provide cryptographic proof binding model weights to a specific training set and code
  • Non-forgeable proofs: leverage TPMs to ensure non-forgeable AI model ID cards
  • Flexible training: use your preferred tooling for training
  • No slowdown induced during training
  • Azure support

Workflow: [figure: AICert workflow diagram]

Prerequisites

To run this example, you will need access to an Azure subscription with quota for VMs with GPUs. Install Terraform and the Azure CLI. The client code requires Python 3.11 or later.

# qemu-utils to resize the disk to conform to Azure disk specifications, plus tpm2-tools, pesign, and jq
sudo apt-get update && sudo apt-get install qemu-utils tpm2-tools pesign jq
# Azure CLI
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
# azcopy to copy the disk to Azure (the tarball contains a single versioned directory)
wget -O azcopy.tar.gz https://aka.ms/downloadazcopy-v10-linux
tar -xzf azcopy.tar.gz --strip-components=1 && sudo mv azcopy /usr/local/bin/
# Install python 3.11
sudo apt-get install python3.11
# Install terraform
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
# Install earthly
sudo /bin/sh -c 'wget https://github.com/earthly/earthly/releases/latest/download/earthly-linux-amd64 -O /usr/local/bin/earthly && chmod +x /usr/local/bin/earthly && /usr/local/bin/earthly bootstrap --with-autocomplete'
# Install poetry
curl -sSL https://install.python-poetry.org | python3 -
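
Once everything is installed, a quick sanity check confirms the tools are on your PATH:

az version
terraform -version
earthly --version
poetry --version
python3.11 --version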

1 - Configuration

We must first configure the Azure region and resource names before creating the Mithril OS image.

The resource group name, gallery name, and region can be set in upload_config.sh:

AZ_RESOURCE_GROUP="your-resource-group"
AZ_REGION="your-region"
GALLERY_NAME="your-gallery-name"

The size of the Azure VM can be set in variables.tf:

variable "instance_type" {
  type        = string
  default     = "Standard_NC24ads_A100_v4"
  description = "Type/Size of VM to create."
}

The default size is Standard_NC24ads_A100_v4.
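
Before building, you may want to check that the chosen size is available in your region and that you have quota for it (replace your-region with the region set in upload_config.sh):

# List VM sizes available in the region
az vm list-sizes --location your-region --output table | grep Standard_NC24ads_A100_v4
# Check the corresponding GPU-family vCPU quota
az vm list-usage --location your-region --output table | grep -i ncads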

2 - Preparing the image

AICert finetunes models inside Mithril OS enclaves. The OS image packages the server, reverse proxy, and axolotl container images within it.

The OS disk only needs to be built and uploaded to Azure once; subsequent finetunes reuse the same image.

The "create_MithrilOS.sh" script creates the disk, uploads it to Azure, and converts it into an OS image. It also generates the OS measurements.

# log in to Azure CLI
az login

# Create OS image
sudo ./create_MithrilOS.sh
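
Once the script finishes, you can confirm the image landed in the gallery (using the names set in upload_config.sh):

az sig image-definition list \
  --resource-group your-resource-group \
  --gallery-name your-gallery-name \
  --output table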

3 - Install the client

cd client
# If your default Python is older than 3.11
poetry env use python3.11
poetry shell
poetry install
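
A quick way to check that the client is available inside the Poetry environment (assuming the CLI exposes the usual help flag):

aicert --help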

3.1 - Copy measurements

If the client is run on a different machine than the one on which the OS was built, copy the following measurements to the client machine:

  • container_measurements.json
  • measurements_azure.json
  • measurements_qemu.json (only required if the OS is being run locally for testing)

Place these files in the client's security_config folder.
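
For example, with scp (the host name and source path are placeholders):

scp build-machine:~/aicert/container_measurements.json \
    build-machine:~/aicert/measurements_azure.json \
    client/security_config/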

4 - Finetune a model

AICert performs the following functions when the finetune command is run:

  • Creates a VM with the Mithril OS image
  • Connects to the server VM using aTLS
  • Sends the axolotl configuration contained in aicert.yaml
  • Waits for the finetuned model and attestation report to be returned
cd axolotl_yaml
# There is a sample axolotl configuration present in this folder named aicert.yaml
# This specifies the model, dataset, and training parameters
aicert finetune

We recommend placing each axolotl config in a dedicated folder so that the attestation.json (the AIBOM) returned by one finetuning run is not overwritten by the next.
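
For example, one folder per run (folder names are illustrative):

mkdir run2
cp run1/aicert.yaml run2/
cd run2
# edit aicert.yaml for the new run, then:
aicert finetune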

4.1 - Changes to the Axolotl configuration file

When Axolotl runs the fine-tuning, the container has no connection to the outside world and cannot pull models other than those gathered during initialization. Pinning the model and dataset makes the versioning verifiable and tracks exactly which dataset and model were used in the procedure. Some changes must therefore be made when supplying an Axolotl configuration YAML.

  • The model name that is specified must be pinned to a specific version:
    # example:
    base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T@sha1:036fa4651240b9a1487f709833b9e4b96b4c1574
  • The dataset name must also be pinned to a specific version, and the relative path to the data must also be specified in name (this is because Axolotl cannot pull a dataset locally).
    datasets:
    - path: mhenrichsen/alpaca_2k_test@sha1:d05c1cb585e462b16532a44314aa4859cb7450c6
      name: /alpaca_2000.parquet
      type: alpaca

These are the only changes that differ from a normal Axolotl configuration file.
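
Putting both rules together, the pinned entries of an aicert.yaml can be written as follows (all other axolotl fields, such as training parameters, are omitted for brevity):

cat > aicert.yaml <<'EOF'
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T@sha1:036fa4651240b9a1487f709833b9e4b96b4c1574
datasets:
  - path: mhenrichsen/alpaca_2k_test@sha1:d05c1cb585e462b16532a44314aa4859cb7450c6
    name: /alpaca_2000.parquet
    type: alpaca
EOF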

5 - Network policy

While the network policy is part of the OS image, it is worth exploring further, as it is central to security and privacy.

The network policy in use is included in the measurement of the OS. The policy below allows data to be loaded inside the enclave, while nothing leaves it except the output of the AI model, which is sent back to the requester.

The network policy file can be found in the annex.

The measurement file contains the PCR values of the OS. A sample measurement file is as follows:

{
    "measurements": {
        "0": "f3a7e99a5f819a034386bce753a48a73cfdaa0bea0ecfc124bedbf5a8c4799be",
        "1": "3d458cfe55cc03ea1f443f1562beec8df51c75e14a9fcf9a7234a13f198e7969",
        "2": "3d458cfe55cc03ea1f443f1562beec8df51c75e14a9fcf9a7234a13f198e7969",
        "3": "3d458cfe55cc03ea1f443f1562beec8df51c75e14a9fcf9a7234a13f198e7969",
        "4": "dd2ccfebe24db4c43ed6913d3cbd7f700395a88679d3bb3519ab6bace1d064c0",
        "12": "0000000000000000000000000000000000000000000000000000000000000000",
        "13": "0000000000000000000000000000000000000000000000000000000000000000"
    }
}

To understand better what this means: each PCR measures a different part of the stack. PCRs 0, 1, 2, and 3 are firmware-related measurements. PCR 4 measures the UKI (initrd, kernel image, and boot stub). PCRs 12 and 13 measure the kernel command line and system extensions; we do not want either of those to be used, so we ensure they are all zeros.
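
On a machine with a TPM (for instance the running VM), the live PCR values can be read with tpm2-tools (installed in the prerequisites) and compared against the measurement file:

tpm2_pcrread sha256:0,1,2,3,4,12,13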

Annex - Information on network Policy and Isolation

The network policy is implemented individually for the k3s pods as well as for the host. The host network is controlled by iptables rules. The exact rules are:

*filter
# Allow localhost connections to permit communication between k3s components
-A INPUT -p tcp -s localhost -d localhost -j ACCEPT
-A OUTPUT -p tcp -s localhost -d localhost -j ACCEPT
# Allow connection to Azure IMDS to get the VM Instance userdata
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -d 169.254.169.254 --dport 80 -j ACCEPT
-A OUTPUT -p tcp -d 168.63.129.16 --dport 80 -j ACCEPT
# DNS over UDP
-A INPUT -p udp --sport 53 -j ACCEPT
-A INPUT -p udp --dport 53 -j ACCEPT
-A OUTPUT -p udp --sport 53 -j ACCEPT
-A OUTPUT -p udp --dport 53 -j ACCEPT
# DNS over TCP
-A INPUT -p tcp --sport 53 -j ACCEPT
-A INPUT -p tcp --dport 53 -j ACCEPT
-A OUTPUT -p tcp --sport 53 -j ACCEPT
-A OUTPUT -p tcp --dport 53 -j ACCEPT
# Drop all other traffic
-A OUTPUT -j DROP
-A INPUT -j DROP
COMMIT

In the repository, these rules can be found in rules.v4 and rules.v6.

These rules block all incoming and outgoing traffic except for DNS queries and localhost connections. The rules are applied on boot by the iptables-persistent package; you can verify that the package is included by looking at the mkosi.conf file.
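
On a running host, the rules actually loaded can be inspected with:

sudo iptables -S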

When the client tries to connect to the server, it first retrieves the attestation report, which is a quote from the TPM. The client uses the measurements stored in security_config to validate the quote received from the TPM.

Any change to the host networking rules is reflected in the PCR values of the OS measurement (PCR 4), and the connection is terminated.
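
Conceptually, the client's check boils down to comparing the expected value from the shipped measurement file against the quoted one, e.g.:

# Expected PCR 4 value according to the measurement file
jq -r '.measurements["4"]' security_config/measurements_azure.json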

⚠️ Limitations

While we provide traceability and ensure that a given set of weights comes from applying a specific training code on a specific dataset, there are still challenges to solve:

  • The training code and data have to be inspected. AICert does not audit the code or input data for threats, such as backdoors injected into the model by the code or poisonous data; it simply proves model provenance. It is up to the AI community or end users to inspect or prove the trustworthiness of the code and data.
  • AICert itself has to be inspected, all the way from the OS we choose to the HTTP server and the app we provide to run the code on the training data.

We are well aware that AICert is not a silver bullet: a fully trustworthy process requires scrutiny of both our code and the AI builder's code and data.

However, by combining both, we can have a solid foundation for the AI supply chain.

(back to top)

📇 Contact

Contact us · Twitter · LinkedIn

(back to top)