From 3f78296102a8b2cc7c2cbca226997d376a98e7fa Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sa=C5=A1a=20Tomi=C4=87?= Date: Mon, 29 Jan 2024 10:53:31 +0100 Subject: [PATCH] docs: Update the readme and some more docs (#117) * docs: Update the readme and some more docs * Update docs for trustworthy metrics --- README.md | 226 ++---------------- docs/contributing.md | 213 +++++++++++++++++ docs/how-to-update-docs.md | 2 +- docs/nns-proposals.md | 31 +-- docs/trustworthy-metrics/architecture.md | 1 + .../trustworthy-metrics.md | 200 ++++++++++------ 6 files changed, 360 insertions(+), 313 deletions(-) create mode 100644 docs/contributing.md diff --git a/README.md b/README.md index 3b3dd9fd..8fef8466 100644 --- a/README.md +++ b/README.md @@ -1,219 +1,31 @@ -# Documentation in Github Pages +# Decentralized Reliability Engineering (DRE) -Searchable docs are available as GitHub pages at https://dfinity.github.io/dre/ - -# Pre-requisites - -## 1. Install dependencies - -### pipenv / pyenv - -#### On Linux - -Install pyenv to make it easier to manage python versions (Tested on ubuntu -22.04 where the default python version is 3.10). You can use the [pyenv -installer](https://github.com/pyenv/pyenv-installer) to do it easily, or go -as simple as: - -``` bash -curl https://pyenv.run | bash -``` - -Then log off and log back on, in order to ensure that the -`~/.local/bin` directory (used by `pip` and `pipenv`) is -available in your session's `$PATH`, as well as the pyenv -shims directory. - -#### On Mac OS - -On Mac, pipenv can be installed with Brew https://brew.sh/ -```bash -brew install pyenv -``` - -If `pipenv shell` results in an error `configure: error: C compiler cannot create executables`, -you may not have recent development tools. 
Run the following: -```bash -sudo rm -rf /Library/Developer/CommandLineTools -sudo xcode-select --install -``` - -You should verify that a new terminal session has added -the pyenv shims directory to your `$PATH`, then continue -in that new terminal session from now on. - -### 2. Install the Python packages needed by the repo - - -#### Linux dependencies - -pyenv will install a clean Python for you. This installation will -insist on a few important libraries which you should have on your -system before it installs our chosen Python development version. - -```bash -sudo apt install -y libncurses-dev libbz2-dev libreadline-dev \ - libssl-dev make build-essential libssl-dev zlib1g-dev \ - libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm \ - libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev \ - libffi-dev liblzma-dev -``` - -Note: if the list of dependencies above changes, update the -[docker/Dockerfile] file accordingly, so CI stays in sync -with local development environments. - -#### poetry installation - -Run the following from the repo root: - -```bash -# change into the directory of the repo -# cd ~/src/release -pyenv install 3.8.16 # installs Python 3.8.16 via pyenv -pyenv local 3.8.16 # tells pyenv to use 3.8.16 for this repo -pip3 install poetry # installs poetry to your 3.8.16 -poetry env use $(which python) # instructs poetry to use 3.8.16 -poetry install # installs all our dependencies to 3.8.16 -``` - -Follow the instructions onscreen. Once the install is done, -close and open your shell window, or run `bash` again. -When you change into the `release` directory (this repo), -typing `poetry env info` should show that the current -folder is associated with a 3.8-based virtualenv. - -Should problems arise during the install, you'll have to remove -the environment poby running `pipenv --rm`. - -You can see the full path to your virtualenv's Python interpreter -with the command `poetry env info -p`. 
This is the interpreter -you should use in your IDE and in day-to-day commands with regards -to the Python programs in this repo. To activate the use of -this interpreter on the shell: - -```bash -source "$(poetry env info -p)/bin/activate" -``` - -### 3. Install pre-commit +## Documentation in Github Pages -Install and enable pre-commit. - -``` -# cd ~/src/release -# source "$(poetry env info -p)/bin/activate" -pip3 install --user pre-commit -pre-commit install -``` - -More detailed instructions at https://pre-commit.com/#installation . - -### 4. Install cargo - -You need an installation of `rustup` and `cargo`. You can follow the instructions from https://www.rust-lang.org/tools/install -This is typically as simple as running - -```sh -curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -``` -#### On Linux -```sh -command -v apt && sudo apt install -y clang mold protobuf-compiler || true -``` -#### On Mac OS -No need to install Clang for Mac OS user since it comes with Xcode. -```sh -brew install mold protobuff -``` -Make sure you add `$HOME/.cargo/bin` to your PATH, as written in the page above. -> In the Rust development environment, all tools are installed to the ~/.cargo/bin directory, and this is where you will find the Rust toolchain, including rustc, cargo, and rustup. - -### Check the Rust / Cargo installation - -To check if your Rust tooling is set up correctly, you can go to the repo root and then -```sh -cd rs -cargo check -``` - -This should succeed. - -## 5. Install nvm, node, yarn - -### 1. Install nvm - -https://github.com/nvm-sh/nvm#installing-and-updating - -### 2. Install node - -```sh -nvm install 14 -nvm use 14 -``` - -### 3. Install yarn - -```sh -npm install --global yarn -``` - -### "No disk space left" when building with Bazel on Linux? - -``` -sudo sysctl -w fs.inotify.max_user_watches=1048576 -``` - -Bazel eats up a lot of inotify user watches. - -# CI container builds - -This repository creates a container that is used in CI. 
-
-To build this container locally:
-
-```
-# cd ~/src/release
-python3 docker/docker-update-image.py
-# To diagnose *just* the build, run:
-# docker build -f docker/Dockerfile .
-# in the root of the repository.
-```
-
-You can export variables `BUILDER=buildah` and `CREATOR=podman` to use
-Podman and Buildah during builds, instead of Docker. To make Buildah
-use intermediate layers -- speeds up unchanged intermediate steps --
-simply export `BUILDAH_LAYERS=true`.
-
-# IC Network Internal Dashboard
-
-## Pre-requisites
+Searchable docs are available as GitHub pages at https://dfinity.github.io/dre/
 
-### 1. Install cargo-watch
+## Installation
 
-```sh
-cargo install cargo-watch
-```
+Please follow [getting started](docs/getting-started.md).
 
-### 2. Install yarn dependencies
+## Usage
 
-```
-cd dashboard
-yarn install
-```
+In this repo we build:
+* DRE cli tool
+* Internal DRE dashboard, both frontend and backend
+* Service discovery, which creates a list of IC targets for logs and metrics
+* Log fetcher for IC nodes: Host, Guest, Boundary nodes
+* Canister log fetcher
+* Node Provider notifications, to notify node providers if a node becomes unhealthy (unfinished and unmaintained code)
 
-## Running
+The DRE cli tool is built as a release artifact and published on GitHub: https://github.com/dfinity/dre/releases
 
-To start the release dashboard locally, run the following from dashboard folder
+Some examples of DRE cli tool usage are at [NNS proposals](nns-proposals.md), and elsewhere in the documentation. The documentation published on GitHub pages has quite good search, so please use that.
 
-```sh
-yarn dev
-```
+## Contributing
 
-To use the `dre` CLI tool with the local dashboard instance run it with `--dev` flag.
+Please follow [contributing](docs/contributing.md).
 
-E.g.
+## License
 
-```sh
-dre --dev subnet --id replace -o1
-```
+The contents of this repo are licensed under the [Apache 2 license](LICENSE).
diff --git a/docs/contributing.md b/docs/contributing.md
new file mode 100644
index 00000000..0ef5e532
--- /dev/null
+++ b/docs/contributing.md
@@ -0,0 +1,213 @@
+# Pre-requisites
+
+## 1. Install dependencies
+
+### pixi
+
+[Pixi](https://pixi.sh/) is a package management tool for developers. It allows the developer to install libraries and applications in a reproducible way. Pixi is cross-platform and works on Windows, Mac, and Linux.
+
+Installation:
+```
+curl -fsSL https://pixi.sh/install.sh | bash
+```
+
+Then log out and log back in, after which you can install Python with:
+```
+pixi global install python==3.11
+```
+
+### pyenv
+
+pyenv is a more conventional alternative to pixi. It is slower to install, but more widely tested. Use it if pixi doesn't work for you.
+
+#### On Linux
+
+To manage python versions, you can use the [pyenv
+installer](https://github.com/pyenv/pyenv-installer).
+
+Installing pyenv is as simple as:
+
+``` bash
+curl https://pyenv.run | bash
+```
+
+Then log off and log back on, in order to ensure that the
+`~/.local/bin` directory (used by `pip`) is
+available in your session's `$PATH`, as well as the pyenv
+shims directory.
+
+#### On Mac OS
+
+On Mac, pyenv can be installed with Brew https://brew.sh/
+```bash
+brew install pyenv
+```
+
+If you get an error `configure: error: C compiler cannot create executables`,
+you may not have recent development tools. Run the following:
+```bash
+sudo rm -rf /Library/Developer/CommandLineTools
+sudo xcode-select --install
+```
+
+You should verify that a new terminal session has added
+the pyenv shims directory to your `$PATH`, then continue
+in that new terminal session from now on.
+
+### 2. Install the Python packages needed by the repo
+
+#### Linux dependencies
+
+pyenv will install a clean Python for you. This installation
+depends on a few important libraries which you should have on your
+system before it installs our chosen Python development version.
+
+```bash
+sudo apt install -y libncurses-dev libbz2-dev libreadline-dev \
+    libssl-dev make build-essential zlib1g-dev \
+    libsqlite3-dev wget curl llvm \
+    libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev \
+    libffi-dev liblzma-dev
+```
+
+Note: if the list of dependencies above changes, update the
+[docker/Dockerfile] file accordingly, so CI stays in sync
+with local development environments.
+
+#### poetry installation
+
+Run the following from the repo root:
+
+```bash
+# change into the directory of the repo
+# cd ~/src/release
+pyenv install 3.11.6 # installs Python 3.11.6 via pyenv
+pyenv local 3.11.6 # tells pyenv to use 3.11.6 for this repo
+pip3 install poetry # installs poetry to your 3.11.6
+poetry env use $(which python) # instructs poetry to use 3.11.6
+poetry install # installs all our dependencies to 3.11.6
+```
+
+Follow the instructions onscreen. Once the install is done,
+close and open your shell window, or run `bash` again.
+When you change into the `release` directory (this repo),
+typing `poetry env info` should show that the current
+folder is associated with a 3.11-based virtualenv.
+
+You can see the full path to your virtualenv's Python interpreter
+with the command `poetry env info -p`. This is the interpreter
+you should use in your IDE and in day-to-day commands with regard
+to the Python programs in this repo. To activate the use of
+this interpreter on the shell:
+
+```bash
+source "$(poetry env info -p)/bin/activate"
+```
+
+### 3. Install pre-commit
+
+Install and enable pre-commit. It's highly recommended, since it prevents pushing code to GitHub that would fail the CI checks.
+
+```
+# cd ~/src/release
+# source "$(poetry env info -p)/bin/activate"
+pip3 install --user pre-commit
+pre-commit install
+```
+
+More detailed instructions are at https://pre-commit.com/#installation
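As a final sanity check of the Python setup above, a small sketch like the following can confirm that the interpreter your shell picks up meets the 3.11 pin. The `(3, 11)` minimum mirrors the pyenv/poetry instructions above and is an assumption to adjust if the pinned version changes:

```python
import sys

def is_supported(version_info, minimum=(3, 11)):
    """Return True if the given interpreter version meets the pinned minimum."""
    # Compare only (major, minor); the patch level does not matter here.
    return tuple(version_info[:2]) >= tuple(minimum)

if __name__ == "__main__":
    ok = is_supported(sys.version_info)
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}: {'OK' if ok else 'too old'}")
```

Running this after `source "$(poetry env info -p)/bin/activate"` should report the virtualenv's interpreter version.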
+
+### 4.a Install cargo (optional)
+
+If you build with cargo rather than bazel, you need an installation of `rustup` and `cargo`. You can follow the instructions from https://www.rust-lang.org/tools/install
+This is typically as simple as running
+
+```sh
+curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+```
+#### On Linux
+```sh
+command -v apt && sudo apt install -y clang mold protobuf-compiler || true
+```
+#### On Mac OS
+There is no need to install Clang on Mac OS, since it comes with Xcode.
+```sh
+brew install mold protobuf
+```
+Make sure you add `$HOME/.cargo/bin` to your PATH, as written in the page above.
+> In the Rust development environment, all tools are installed to the ~/.cargo/bin directory, and this is where you will find the Rust toolchain, including rustc, cargo, and rustup.
+
+### Check the Rust / Cargo installation
+
+To check if your Rust tooling is set up correctly, you can go to the repo root and then
+```sh
+cd rs
+cargo check
+```
+
+This should succeed.
+
+### 4.b Install bazel
+
+To install bazel, do not use the version provided by your OS package manager. Please make sure you use [bazelisk](https://bazel.build/install/bazelisk).
+
+## 5. Install nvm, node, yarn
+
+### 1. Install nvm
+
+https://github.com/nvm-sh/nvm#installing-and-updating
+
+### 2. Install node
+
+```sh
+nvm install 14
+nvm use 14
+```
+
+### 3. Install yarn
+
+```sh
+npm install --global yarn
+```
+
+### "No disk space left" when building with Bazel on Linux?
+
+```
+sudo sysctl -w fs.inotify.max_user_watches=1048576
+```
+
+Bazel eats up a lot of inotify user watches.
+
+# IC Network Internal Dashboard
+
+## Pre-requisites
+
+### 1. Install cargo-watch
+
+```sh
+cargo install cargo-watch
+```
+
+### 2. Install yarn dependencies
+
+```
+cd dashboard
+yarn install
+```
+
+## Running
+
+To start the release dashboard locally, run the following from the `dashboard` folder
+
+```sh
+yarn dev
+```
+
+To use the `dre` CLI tool with the local dashboard instance, run it with the `--dev` flag.
+
+E.g.
+
+```sh
+dre --dev subnet --id replace -o1
+```
diff --git a/docs/how-to-update-docs.md b/docs/how-to-update-docs.md
index 0afccd5e..b7b81068 100644
--- a/docs/how-to-update-docs.md
+++ b/docs/how-to-update-docs.md
@@ -1,5 +1,5 @@
-# Documentation
+# How to Update Documentation
 
 We use MkDocs to generate, serve, and search the team documentation.
 For full documentation visit [mkdocs.org](https://www.mkdocs.org).
diff --git a/docs/nns-proposals.md b/docs/nns-proposals.md
index ce012547..5c8dbcad 100644
--- a/docs/nns-proposals.md
+++ b/docs/nns-proposals.md
@@ -1,4 +1,4 @@
-# NNS/ic-admin operations
+# Submitting NNS proposals
 
 Most of the commands here can be run in multiple ways. Currently we are putting in the effort to make `dre` as useful as possible. As such it provides support for `dry_run` as default and that can be highly beneficial in most scenarios (for eg. if someone is asking you to submit a proposal for them the best practice way is to run a `dry_run` and ask them to double check the command and the payload that would be submitted) and that is why we recommend using `dre` whenever possible. In some use-cases `dre` cannot help you, and that is when you should use whatever tool/script is at hand.
@@ -29,7 +29,7 @@ dre get firewall-rules replica_nodes | jq ### Get the Node Rewards Table, used for the Node Provider compensation ```bash -ic-admin --nns-url https://ic0.app get-node-rewards-table +dre get node-rewards-table { "table": { "Asia": { @@ -37,41 +37,14 @@ ic-admin --nns-url https://ic0.app get-node-rewards-table } ``` -- Alternative - -```bash -dre get node-rewards-table -``` - ### Update the Node Rewards Table -```bash -ic-admin --nns-url https://ic0.app --use-hsm --pin $(cat ~/.hsm-pin) --key-id 01 --slot 0 propose-to-update-node-rewards-table --proposer $PROPOSER_NEURON_INDEX --summary-file 2022-12-type3.md --updated-node-rewards "$(cat 2022-12-type3-rewards.json)" -``` - -- Alternative - ```bash dre propose update-node-rewards-table --summary-file 2022-12-type3.md --updated-node-rewards "$(cat 2022-12-type3-rewards.json | jq -c)" ``` ### Enable the HTTPs outcalls on a subnet -```bash -cargo run --bin ic-admin -- --nns-url https://ic0.app/ \ - --use-hsm \ - --pin $(cat ~/.hsm-pin) \ - --key-id 01 \ - --slot 0 \ - propose-to-update-subnet \ - --proposer \ - --features "http_requests" \ - --subnet uzr34-akd3s-xrdag-3ql62-ocgoh-ld2ao-tamcv-54e7j-krwgb-2gm4z-oqe \ - --summary "Enable the HTTPS outcalls feature on the non-whitelisted uzr34 subnet so that the exchange rate canister can query exchange rate data." -``` - -- Alternative - ```bash dre propose update-subnet \ --features "http_requests" \ diff --git a/docs/trustworthy-metrics/architecture.md b/docs/trustworthy-metrics/architecture.md index f374b396..a8a23dd0 100644 --- a/docs/trustworthy-metrics/architecture.md +++ b/docs/trustworthy-metrics/architecture.md @@ -4,6 +4,7 @@ This document offers a deeper look at the architectural design of the Trustworthy Node Metrics feature on the Internet Computer (IC). It is tailored for IC stakeholders and technical professionals, providing a detailed understanding of both the functional and structural aspects. 
+For a higher-level document, please take a look [here](./trustworthy-metrics.md).
 
 ## Objectives
diff --git a/docs/trustworthy-metrics/trustworthy-metrics.md b/docs/trustworthy-metrics/trustworthy-metrics.md
index 3db00220..98182a69 100644
--- a/docs/trustworthy-metrics/trustworthy-metrics.md
+++ b/docs/trustworthy-metrics/trustworthy-metrics.md
@@ -1,100 +1,148 @@
 # Get trustworthy metrics from the IC Mainnet
 
-## Introduction and prerequisites
+## Introduction
+
+Trustworthy Node Metrics provide greater visibility into node performance, in a trustworthy manner. Trustworthy here means that the metrics are generated and served by the IC itself, without an intermediary, and without a possibility that any single node can fake its health status.
+
+The medium-term objective is to use these metrics to adjust node rewards based on the contributions of individual nodes to the core IC protocol. For this purpose, we currently expose a metric showing in how many block rounds a particular node was, or was not, contributing to the protocol. In each round, the *block maker* node is [selected based on the random beacon](https://eprint.iacr.org/2022/087.pdf). In order for a node to "make a block", it needs to be up to date, so it must have sufficient network connectivity, and it must be fast enough, so its compute and storage resources must be sufficient. This makes the block maker a good metric for a node's contributions to the protocol.
+
+The information about who was the block maker, and who failed to be the block maker, was already stored in the consensus by all nodes in the subnet. We have now exposed it to the public through the subnet's replicated state and the management canister. Since this information comes from the consensus, it can be considered trustworthy.
+
+Note that the IC is split into subnets, and each subnet has its own consensus, replicated state, and management canister.
We developed and open-sourced tooling that fetches trustworthy metrics from all subnets, joins them together, and provides them to the IC community for analysis and inspection.
+
+This entire process is shown in the following diagram:
+
+```mermaid
+%%{init: {'theme':'forest'}}%%
+graph TD
+    subgraph "Subnet 1"
+        S1["Consensus"] -->|Produces Trustworthy Data| M1["Management Canister 1"]
+    end
+    subgraph "Subnet 2"
+        S2["Consensus"] -->|Produces Trustworthy Data| M2["Management Canister 2"]
+    end
+    subgraph "Subnet 3"
+        S3["Consensus"] -->|Produces Trustworthy Data| M3["Management Canister 3"]
+    end
+    M1 --> DRE["DRE tool (open source)"]
+    M2 --> DRE
+    M3 --> DRE
+    DRE --> User
+    User --> |Analyze & Process Data| F["Trustworthy Node Metrics"]
+
+    style S1 fill:#f9f,stroke:#333,stroke-width:2px
+    style S2 fill:#f9f,stroke:#333,stroke-width:2px
+    style S3 fill:#f9f,stroke:#333,stroke-width:2px
+    style DRE fill:#ff9,stroke:#333,stroke-width:2px
+    style F fill:#9ff,stroke:#333,stroke-width:2px
+```
 
-To be able to fetch trustworthy metrics there are a couple of things needed prior to running this extension:
+## Prerequisites
+
+To be able to fetch trustworthy metrics, a couple of things are currently needed. While we are looking for ways to simplify the process, for security reasons one currently still needs to use a wallet canister and fetch the metrics with update calls. These update calls go through the consensus as well, and need to be paid for; hence the requirement for the wallet canister.
+
+??? tip "Click here to learn how to create a wallet canister, if you don't have one already"
+
+    1. You need a dfx principal.
If needed, you can create a new one with
+
+        ```bash
+        # You can use the one from your HSM but there are some caveats that will be addressed later
+        dfx identity new
+        ```
+
+        or follow instructions from the [IC SDK Docs](https://internetcomputer.org/docs/current/developer-docs/setup/cycles/cycles-wallet/#creating-a-cycles-wallet-on-the-mainnet)
+
+    2. List the available dfx identities with `dfx identity list`, then select the desired identity and get its principal.
+
+        ```bash
+        dfx identity use
+        dfx identity get-principal
+        ```
+
+    3. Check the current balance for the principal
+
+        ```bash
+        dfx ledger --network ic balance
+        ```
+
+        If you have less than 2 Trillion Cycles (TC) worth of ICP, based on the [current ICP value](https://www.coinbase.com/converter/icp/xdr), you can top up the ICP balance by sending funds to the principal, e.g., from [https://ic0.app/wallet/](https://ic0.app/wallet/).
+
+        1 TC corresponds to 1 XDR at the time of conversion. XDR is the currency symbol of the IMF SDR, a basket of five fiat currencies, corresponding to 1.33 U.S. dollars at the time of writing. Canister creation itself will cost 1 TC, and you will need some more cycles to execute commands.
+
+    4. Create the wallet canister. After that, you will get the wallet canister id in the output.
+
+        ```bash
+        dfx ledger --network ic create-canister --amount 0.5
+        ```
+
+        You may need to adjust the amount of ICP, based on the current ICP value. More info can be found in the [IC SDK Docs](https://internetcomputer.org/docs/current/references/cli-reference/dfx-ledger/#options).
+
+    5. Deploy the wallet canister code
+
+        ```bash
+        dfx identity --network ic deploy-wallet
+        ```
 
-1. You need a dfx principal.
If needed you can create a new one with - - ```bash - # You can use the one from your HSM but there are some caveats to that that will be addressed later - dfx identity new - ``` - - or follow instructions from the [IC SDK Docs](https://internetcomputer.org/docs/current/developer-docs/setup/cycles/cycles-wallet/#creating-a-cycles-wallet-on-the-mainnet) - -2. You can list available dfx identities with `dfx identity list` and then need to select that identity and get its principal. - - ```bash - dfx identity use - dfx identity get-principal - ``` - -3. Check the current balance for the principal - - ```bash - dfx ledger --network ic balance - ``` - - If you have less than 2 Trillion Cycles (TC) worth of ICP, based on the [current ICP value](https://www.coinbase.com/converter/icp/xdr), you can top up the ICP balance by sending funds to the principal, e.g., from [https://ic0.app/wallet/](https://ic0.app/wallet/). - - 1 TC corresponds to 1 XDR at the time of conversion. XDR is the currency symbol of the IMF SDR, a basket of five fiat currencies, corresponding to 1.33 U.S. dollar at the time of writing. Canister creation itself will cost 1 TC, and you will need some cycles more to execute commands. - -4. Create the wallet canister, after that you will get the wallet canister id in the output. - - ```bash - dfx ledger --network ic create-canister --amount 0.5 - ``` - - You may need to adjust the amount of ICPs if needed, based on the current ICP value. More info can be found in the [IC SDK Docs](https://internetcomputer.org/docs/current/references/cli-reference/dfx-ledger/#options). - -5. 
Deploy the wallet canister code
-
-   ```bash
-   dfx identity --network ic deploy-wallet
-   ```
-
 ### Using the cli
 
-You can obtain the DRE tool by following [getting started](../getting-started.md)
+You can obtain the DRE tool by following the instructions from [getting started](../getting-started.md)
 
 To test out the command you can run the following command
 
 ```bash
-dre trustworthy-metrics [...]
+dre trustworthy-metrics [...]
 ```
 
-Arguments explanation:
+??? tip "Explanation of the arguments"
 
-1. `wallet-canister-id` - id of the created wallet canister created in the step 4 above, or obtained by
-   ```bash
-   dfx identity --network ic get-wallet
-   ```
-2. `start-at-timestamp` - used for filtering the output. To get all metrics, provide 0
-3. `subnet-id` - subnets to query, if empty will provide metrics for all subnets
-4. `key-params` - depending on which identity you used to deploy the wallet canister you have two options:
+    1. `auth-params` - depending on which identity you used to deploy the wallet canister, you have two options, described in the Authentication section below
+    2. `wallet-canister-id` - id of the wallet canister created in step 4 above, or obtained by
+        ```bash
+        dfx identity --network ic get-wallet
+        ```
+    3. `start-at-timestamp` - used for filtering the output. To get all metrics, provide 0
+    4. `subnet-id` - subnets to query; if empty, metrics for all subnets will be provided
 
-If you used a purely new identity (which is advised since the tool can then parallelise the querying) you have to:
-1. export identity as `.pem` file which you can do as follows:
-
-   ```bash
-   dfx identity export > identity.pem
-   ```
-
-2. replace `` in the command with something like: `--private-key-pem identity.pem`
+#### Authentication
 
-If you used an HSM then replace `` with: `--hsm-slot 0 --hsm-key-id 0 --hsm-pin $(cat )`. Note that the HSM is less parallel than the key file due to hardware limits, so getting metrics with an HSM will be a bit slower.
+Both authentication with a private key and authentication with an HSM are supported.
+Authentication with a private key is recommended, since it allows for more parallelism.
 
+??? tip "Click here to see how to export a private key with `dfx`"
+
+    1. Export the identity as a `.pem` file:
+
+        ```bash
+        dfx identity export > identity.pem
+        ```
+
+    2. Replace `` in the command with something like: `--private-key-pem identity.pem`
+
+??? tip "Click here to see how to authenticate with an HSM"
+
+    Replace `` with: `--hsm-slot 0 --hsm-key-id 0 --hsm-pin `. Note that HSM operations are slower than the key file due to hardware limits, so getting metrics with an HSM will be a bit slower.
+
+??? tip "Click here to see how to add multiple controllers to the wallet canister"
+
+    There are many reasons why this can be useful: for instance, allowing more team members to use the same wallet canister, or adding a private-key-based controller in addition to an HSM.
+
+    1. Get the principal of the new identity
+
+        ```bash
+        dfx identity use && dfx identity get-principal
+        ```
+
+    2. Add the identity as a controller of the canister
+
+        ```bash
+        dfx identity use
+        dfx wallet --network ic add-controller
+        ```
+
+    And that's it! From now on, you can use the second identity while running the tool.
 
 # Example use
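Once the tool has fetched the metrics, they can be analyzed with ordinary tooling. As a minimal sketch, assuming the output is JSON with per-node fields named `node_id`, `num_blocks_proposed_total`, and `num_block_failures_total` (these field names are assumptions about the output shape; adjust them to what the tool actually prints), the block-maker failure rate per node could be computed like this:

```python
import json

def failure_rates(metrics_json):
    """Compute per-node block-maker failure rates from a JSON list of node metrics."""
    rates = {}
    for node in json.loads(metrics_json):
        proposed = node["num_blocks_proposed_total"]
        failed = node["num_block_failures_total"]
        total = proposed + failed
        # A node with no block-maker slots yet gets a 0.0 rate.
        rates[node["node_id"]] = failed / total if total else 0.0
    return rates

# Hypothetical sample in the assumed shape:
sample = json.dumps([
    {"node_id": "node-a", "num_blocks_proposed_total": 95, "num_block_failures_total": 5},
    {"node_id": "node-b", "num_blocks_proposed_total": 0, "num_block_failures_total": 0},
])
print(failure_rates(sample))  # -> {'node-a': 0.05, 'node-b': 0.0}
```

A consistently high failure rate suggests a node with insufficient connectivity or resources, which is exactly the signal the reward adjustment described above is after.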