Merged
8 changes: 3 additions & 5 deletions .github/workflows/test.yml
Original file line number Diff line number Diff line change
@@ -16,12 +16,9 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
# - uses: chartboost/ruff-action@v1
# Until this gets updated we need to use this commit hash (or later)
- uses: chartboost/ruff-action@491342200cdd1cf4d5132a30ddc546b3b5bc531b
- uses: chartboost/ruff-action@v1
with:
args: 'format --check'
changed-files: 'true'
build-image:
needs: [ruff, ruff-format]
runs-on: ubuntu-latest
@@ -52,10 +49,11 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
test: [scenarios_test.py, rpc_test.py, graph_test.py, ln_test.py, dag_connection_test.py]
test: [scenarios_test.py, rpc_test.py, graph_test.py, ln_test.py, dag_connection_test.py, logging_test.py]
steps:
- uses: actions/checkout@v4
- uses: hynek/setup-cached-uv@v1
- uses: azure/setup-helm@v4.2.0
- uses: medyagh/setup-minikube@master
with:
mount-path: ${{ github.workspace }}:/mnt/src
36 changes: 19 additions & 17 deletions docs/graph.md
@@ -50,6 +50,7 @@ lightning network channel (see [lightning.md](lightning.md)).
<key id="bitcoin_config" attr.name="bitcoin_config" attr.type="string" for="node" />
<key id="tc_netem" attr.name="tc_netem" attr.type="string" for="node" />
<key id="exporter" attr.name="exporter" attr.type="boolean" for="node" />
<key id="metrics" attr.name="metrics" attr.type="string" for="node" />
<key id="collect_logs" attr.name="collect_logs" attr.type="boolean" for="node" />
<key id="build_args" attr.name="build_args" attr.type="string" for="node" />
<key id="ln" attr.name="ln" attr.type="string" for="node" />
@@ -66,20 +67,21 @@ lightning network channel (see [lightning.md](lightning.md)).
</graphml>
```

| key | for | type | default | explanation |
|----------------|-------|---------|-----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| services | graph | string | | A space-separated list of extra service containers to deploy in the network. See [docs/services.md](services.md) for complete list of available services |
| version | node | string | | Bitcoin Core version with an available Warnet tank image on Dockerhub. May also be a GitHub repository with format user/repository:branch to build from source code |
| image | node | string | | Bitcoin Core Warnet tank image on Dockerhub with the format repository/image:tag |
| bitcoin_config | node | string | | A string of Bitcoin Core options in command-line format, e.g. '-debug=net -blocksonly' |
| tc_netem | node | string | | A tc-netem command as a string beginning with 'tc qdisc add dev eth0 root netem' |
| exporter | node | boolean | False | Whether to attach a Prometheus data exporter to the tank |
| collect_logs | node | boolean | False | Whether to collect Bitcoin Core debug logs with Promtail |
| build_args | node | string | | A string of configure options used when building Bitcoin Core from source code, e.g. '--without-gui --disable-tests' |
| ln | node | string | | Attach a lightning network node of this implementation (currently only supports 'lnd') |
| ln_image | node | string | | Specify a lightning network node image from Dockerhub with the format repository/image:tag |
| ln_cb_image | node | string | | Specify a lnd Circuit Breaker image from Dockerhub with the format repository/image:tag |
| ln_config | node | string | | A string of arguments for the lightning network node in command-line format, e.g. '--protocol.wumbo-channels --bitcoin.timelockdelta=80' |
| channel_open | edge | string | | Indicate that this edge is a lightning channel with these arguments passed to lnd openchannel |
| source_policy | edge | string | | Update the channel originator policy by passing these arguments passed to lnd updatechanpolicy |
| target_policy | edge | string | | Update the channel partner policy by passing these arguments passed to lnd updatechanpolicy |
| key | for | type | default | explanation |
|----------------|-------|---------|-----------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| services | graph | string | | A space-separated list of extra service containers to deploy in the network. See [docs/services.md](services.md) for complete list of available services |
| version | node | string | | Bitcoin Core version with an available Warnet tank image on Dockerhub. May also be a GitHub repository with format user/repository:branch to build from source code |
| image | node | string | | Bitcoin Core Warnet tank image on Dockerhub with the format repository/image:tag |
| bitcoin_config | node | string | | A string of Bitcoin Core options in command-line format, e.g. '-debug=net -blocksonly' |
| tc_netem | node | string | | A tc-netem command as a string beginning with 'tc qdisc add dev eth0 root netem' |
| exporter | node | boolean | False | Whether to attach a Prometheus data exporter to the tank |
| metrics        | node  | string  | Block count, peers in/out, mempool size | A space-separated string of RPC queries for Prometheus to scrape                                                                                                      |
| collect_logs | node | boolean | False | Whether to collect Bitcoin Core debug logs with Promtail |
| build_args | node | string | | A string of configure options used when building Bitcoin Core from source code, e.g. '--without-gui --disable-tests' |
| ln | node | string | | Attach a lightning network node of this implementation (currently only supports 'lnd' or 'cln') |
| ln_image | node | string | | Specify a lightning network node image from Dockerhub with the format repository/image:tag |
| ln_cb_image | node | string | | Specify a lnd Circuit Breaker image from Dockerhub with the format repository/image:tag |
| ln_config | node | string | | A string of arguments for the lightning network node in command-line format, e.g. '--protocol.wumbo-channels --bitcoin.timelockdelta=80' |
| channel_open | edge | string | | Indicate that this edge is a lightning channel with these arguments passed to lnd openchannel |
| source_policy | edge | string | | Update the channel originator policy by passing these arguments passed to lnd updatechanpolicy |
| target_policy | edge | string | | Update the channel partner policy by passing these arguments passed to lnd updatechanpolicy |
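
For programmatic graph construction, the node attributes in this table map directly onto GraphML `<data>` elements. As an illustrative sketch using only Python's standard library (the `make_node` helper below is hypothetical, not part of Warnet):

```python
import xml.etree.ElementTree as ET

def make_node(node_id: str, attrs: dict) -> ET.Element:
    """Build a GraphML <node> whose <data> children use the keys above."""
    node = ET.Element("node", id=node_id)
    for key, value in attrs.items():
        data = ET.SubElement(node, "data", key=key)
        data.text = str(value)
    return node

# Hypothetical example node using the documented keys
node = make_node("0", {
    "version": "27.0",
    "exporter": "true",
    "metrics": 'blocks=getblockcount() mempool_size=getmempoolinfo()["size"]',
})
print(ET.tostring(node, encoding="unicode"))
```

The same element would then be appended under the `<graph>` element of a full GraphML document such as the one above.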
106 changes: 88 additions & 18 deletions docs/monitoring.md
@@ -1,32 +1,102 @@
# Monitoring

## Monitoring container resource usage
## Prometheus

When run in docker, a few additional containers are started up:
To monitor RPC return values over time, a Prometheus data exporter can be connected
to any Bitcoin Tank and configured to scrape any available RPC results.

* CAdvisor (container Monitoring)
* Prometheus (log scraper)
* Grafana (graphing/dashboard tool)
The `bitcoin-exporter` image is defined in `resources/images/exporter` and
maintained in the BitcoinDevProject dockerhub organization. To add the exporter
to the Tank pod alongside Bitcoin Core, add the `"exporter"` key to the node in the graphml file:

## CAdvisor
```xml
<node id="0">
<data key="version">27.0</data>
<data key="exporter">true</data>
</node>
```

CAdvisor needs no additional setup, and can be accessed from the docker host at
localhost:8080
The default metrics are defined in the `bitcoin-exporter` image:
- Block count
- Number of inbound peers
- Number of outbound peers
- Mempool size (# of TXs)

## Prometheus
Metrics can be configured by adding a `"metrics"` key to the node in the graphml file.
The metrics value is a space-separated list of labels, RPC commands with arguments, and
JSON keys to resolve the desired data:

```
label=method(arguments)[JSON result key][...]
```
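
To make the grammar concrete, a descriptor of this shape can be split with a small regular expression. This is only an illustrative sketch of the format, not the exporter's actual parser:

```python
import re

# label=method(arguments)[JSON result key][...]
METRIC_RE = re.compile(
    r"(?P<label>\w+)="          # metric label
    r"(?P<method>\w+)"          # RPC method name
    r"\((?P<args>[^)]*)\)"      # RPC arguments, possibly empty
    r"(?P<keys>(\[[^\]]+\])*)"  # zero or more JSON result keys
)

def parse_metric(descriptor: str) -> dict:
    """Split one metric descriptor into its label, RPC call, and JSON keys."""
    m = METRIC_RE.fullmatch(descriptor)
    if m is None:
        raise ValueError(f"bad metric descriptor: {descriptor!r}")
    keys = re.findall(r"\[([^\]]+)\]", m.group("keys"))
    return {
        "label": m.group("label"),
        "method": m.group("method"),
        "args": m.group("args"),
        "keys": [k.strip('"') for k in keys],
    }

print(parse_metric('inbounds=getnetworkinfo()["connections_in"]'))
```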

For example, the default metrics listed above are defined as:

```xml
<node id="0">
<data key="version">27.0</data>
<data key="exporter">true</data>
<data key="metrics">blocks=getblockcount() inbounds=getnetworkinfo()["connections_in"] outbounds=getnetworkinfo()["connections_out"] mempool_size=getmempoolinfo()["size"]</data>
</node>
```

Contributor: Let's add a `metrics` key to both the default graph and graphs generated from `warcli graph create`, so that users can easily add this key without getting missing-key (at the graph level) errors.

Contributor:

```
warnet-rpc  | 2024-08-06 09:11:13 | ERROR   | server   | Error bring up warnet: Bad GraphML data: no key metrics
2024-08-06 09:11:13 | ERROR   | warnet.server | jsonrpc error
2024-08-06 09:11:13 | ERROR   | warnet.server | 
Traceback (most recent call last):
warnet-rpc  |   File "/usr/local/lib/python3.12/site-packages/networkx/readwrite/graphml.py", line 966, in decode_data_elements
    data_name = graphml_keys[key]["name"]
warnet-rpc  |                 ~~~~~~~~~~~~^^^^^
KeyError: 'metrics'
```

Contributor Author: cool done

Prometheus should also not need any additional setup, and can be accessed from
the docker host at localhost:9090
The data can be retrieved from the Prometheus exporter on port `9332`, for example:

```
# HELP blocks getblockcount()
# TYPE blocks gauge
blocks 704.0
# HELP inbounds getnetworkinfo()["connections_in"]
# TYPE inbounds gauge
inbounds 0.0
# HELP outbounds getnetworkinfo()["connections_out"]
# TYPE outbounds gauge
outbounds 0.0
# HELP mempool_size getmempoolinfo()["size"]
# TYPE mempool_size gauge
mempool_size 0.0
```
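
This is the Prometheus text exposition format. As a rough sketch of how a script might read the gauge values back (a real client would use a proper parser such as the `prometheus_client` library; this ignores the HELP/TYPE comments and labeled samples):

```python
def parse_gauges(text: str) -> dict:
    """Parse simple `name value` sample lines, skipping # comment lines."""
    gauges = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, value = line.rsplit(maxsplit=1)
        gauges[name] = float(value)
    return gauges

# Trimmed sample of the exporter output shown above
sample = """\
# HELP blocks getblockcount()
# TYPE blocks gauge
blocks 704.0
# HELP mempool_size getmempoolinfo()["size"]
# TYPE mempool_size gauge
mempool_size 0.0
"""
print(parse_gauges(sample))  # {'blocks': 704.0, 'mempool_size': 0.0}
```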

## Grafana

Grafana is provisioned with a single default dashboard, but alternative
dashboards can be added or created.
Data from Prometheus exporters can be collected and fed into Grafana for a
web-based interface.

### Install logging infrastructure

First make sure you have `helm` installed, then run the `install_logging` script:

```bash
resources/scripts/install_logging.sh
```

To forward port `3000` and view the Grafana dashboard run the `connect_logging` script:

```bash
resources/scripts/connect_logging.sh
```

The Grafana dashboard (and API) will be accessible without requiring authentication
at http://localhost:3000

Contributor: I know it's probably because I'm too dumb, but where are the actual logs? When I open Grafana on localhost:3000 I don't see any connected logs coming in?

(screenshot)

This is with a patch to the default graph:

```diff
diff --git a/resources/graphs/default.graphml b/resources/graphs/default.graphml
index 153bd52..8c276a0 100644
--- a/resources/graphs/default.graphml
+++ b/resources/graphs/default.graphml
@@ -6,12 +6,14 @@
   <key attr.name="exporter" attr.type="boolean" for="node" id="exporter"/>
   <key attr.name="collect_logs" attr.type="boolean" for="node" id="collect_logs"/>
   <key attr.name="image" attr.type="string" for="node" id="image"/>
+  <key attr.name="metrics" attr.type="string" for="node" id="metrics"/>
   <graph edgedefault="directed">
     <node id="0">
         <data key="version">27.0</data>
         <data key="bitcoin_config">-uacomment=w0</data>
         <data key="exporter">true</data>
         <data key="collect_logs">true</data>
+        <data key="metrics">blocks=getblockcount() inbounds=getnetworkinfo()["connections_in"] outbounds=getnetworkinfo()["connections_in"] mempool_size=getmempoolinfo()["size"]</data>
     </node>
     <node id="1">
         <data key="version">27.0</data>
```

Contributor: Should I see running logging containers? This is all I see:

(screenshot)

The script appeared to run successfully:

```
will@ubuntu in ~/src/warnet on  rpc-gauge [$!?⇕] is 📦 v0.9.11 : 🐍 (warnet)
₿ just installlogging
resources/scripts/install_logging.sh
"grafana" already exists with the same configuration, skipping
"prometheus-community" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "grafana" chart repository
...Successfully got an update from the "prometheus-community" chart repository
Update Complete. ⎈Happy Helming!⎈
Release "loki" does not exist. Installing it now.
NAME: loki
LAST DEPLOYED: Tue Aug  6 10:11:49 2024
NAMESPACE: warnet-logging
STATUS: deployed
REVISION: 1
NOTES:
***********************************************************************
 Welcome to Grafana Loki
 Chart version: 5.47.2
 Loki version: 2.9.6
***********************************************************************

Installed components:
* gateway
* minio
* read
* write
* backend
Release "promtail" does not exist. Installing it now.
NAME: promtail
LAST DEPLOYED: Tue Aug  6 10:12:38 2024
NAMESPACE: warnet-logging
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
***********************************************************************
 Welcome to Grafana Promtail
 Chart version: 6.16.4
 Promtail version: 3.0.0
***********************************************************************

Verify the application is working by running these commands:
* kubectl --namespace warnet-logging port-forward daemonset/promtail 3101
* curl http://127.0.0.1:3101/metrics
Release "prometheus" does not exist. Installing it now.
NAME: prometheus
LAST DEPLOYED: Tue Aug  6 10:12:40 2024
NAMESPACE: warnet-logging
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
  kubectl --namespace warnet-logging get pods -l "release=prometheus"

Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
Release "loki-grafana" does not exist. Installing it now.
NAME: loki-grafana
LAST DEPLOYED: Tue Aug  6 10:12:54 2024
NAMESPACE: warnet-logging
STATUS: deployed
REVISION: 1
NOTES:
1. Get your 'admin' user password by running:

   kubectl get secret --namespace warnet-logging loki-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo


2. The Grafana server can be accessed via port 80 on the following DNS name from within your cluster:

   loki-grafana.warnet-logging.svc.cluster.local

   Get the Grafana URL to visit by running these commands in the same shell:
     export POD_NAME=$(kubectl get pods --namespace warnet-logging -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=loki-grafana" -o jsonpath="{.items[0].metadata.name}")
     kubectl --namespace warnet-logging port-forward $POD_NAME 3000

3. Login with the password from step 1 and the username: admin
#################################################################################
######   WARNING: Persistence is disabled!!! You will lose your data when   #####
######            the Grafana pod is terminated.                            #####
#################################################################################

will@ubuntu in ~/src/warnet on  rpc-gauge [$!?⇕] is 📦 v0.9.11 : 🐍 (warnet) took 1m8s
₿ just connectlogging
resources/scripts/connect_logging.sh
Go to http://localhost:3000
Grafana pod name: loki-grafana-6c855549d4-wsv88
Attempting to start Grafana port forwarding
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
Handling connection for 3000
```

Contributor: kubectl does show prometheus running:

```
₿ kubectl --namespace warnet-logging get pods -l "release=prometheus"
NAME                                                   READY   STATUS    RESTARTS   AGE
prometheus-kube-prometheus-operator-6c5998f7dc-hjvwx   1/1     Running   0          9m27s
prometheus-kube-state-metrics-688d66b5b8-8srsw         1/1     Running   0          9m27s
prometheus-prometheus-node-exporter-8tt7q              1/1     Running   0          9m27s
```

But I don't see any node exporters?

Contributor: server logs don't appear to contain any errors:

```
warnet-rpc  | 2024-08-06 09:11:34 | DEBUG   | tank     | Parsed graph node: 0 with attributes: ['version=27.0', 'image=None', 'bitcoin_config=-uacomment=w0', 'tc_netem=None', 'exporter=True', 'metrics=blocks=getblockcount() inbounds=getnetworkinfo()["connections_in"] outbounds=getnetworkinfo()["connections_in"] mempool_size=getmempoolinfo()["size"]', 'collect_logs=True', 'build_args=', 'ln=None', 'ln_image=None', 'ln_cb_image=None', 'ln_config=None']
warnet-rpc  | 2024-08-06 09:11:34 | DEBUG   | tank     | Parsed graph node: 1 with attributes: ['version=27.0', 'image=None', 'bitcoin_config=-uacomment=w1', 'tc_netem=None', 'exporter=True', 'metrics=None', 'collect_logs=True', 'build_args=', 'ln=None', 'ln_image=None', 'ln_cb_image=None', 'ln_config=None']
warnet-rpc  | 2024-08-06 09:11:34 | DEBUG   | tank     | Parsed graph node: 2 with attributes: ['version=None', 'image=bitcoindevproject/bitcoin:26.0', 'bitcoin_config=-uacomment=w2 -debug=mempool', 'tc_netem=None', 'exporter=True', 'metrics=None', 'collect_logs=True', 'build_args=', 'ln=None', 'ln_image=None', 'ln_cb_image=None', 'ln_config=None']
warnet-rpc  | 2024-08-06 09:11:34 | DEBUG   | tank     | Parsed graph node: 3 with attributes: ['version=27.0', 'image=None', 'bitcoin_config=-uacomment=w3', 'tc_netem=None', 'exporter=True', 'metrics=None', 'collect_logs=False', 'build_args=', 'ln=None', 'ln_image=None', 'ln_cb_image=None', 'ln_config=None']
warnet-rpc  | 2024-08-06 09:11:34 | DEBUG   | tank     | Parsed graph node: 4 with attributes: ['version=27.0', 'image=None', 'bitcoin_config=-uacomment=w4', 'tc_netem=None', 'exporter=True', 'metrics=None', 'collect_logs=False', 'build_args=', 'ln=None', 'ln_image=None', 'ln_cb_image=None', 'ln_config=None']
warnet-rpc  | 2024-08-06 09:11:34 | DEBUG   | tank     | Parsed graph node: 5 with attributes: ['version=27.0', 'image=None', 'bitcoin_config=-uacomment=w5', 'tc_netem=None', 'exporter=True', 'metrics=None', 'collect_logs=False', 'build_args=', 'ln=None', 'ln_image=None', 'ln_cb_image=None', 'ln_config=None']
warnet-rpc  | 2024-08-06 09:11:34 | DEBUG   | tank     | Parsed graph node: 6 with attributes: ['version=27.0', 'image=None', 'bitcoin_config=-uacomment=w6', 'tc_netem=None', 'exporter=False', 'metrics=None', 'collect_logs=False', 'build_args=', 'ln=None', 'ln_image=None', 'ln_cb_image=None', 'ln_config=None']
warnet-rpc  | 2024-08-06 09:11:34 | DEBUG   | tank     | Parsed graph node: 7 with attributes: ['version=27.0', 'image=None', 'bitcoin_config=-uacomment=w7', 'tc_netem=None', 'exporter=False', 'metrics=None', 'collect_logs=False', 'build_args=', 'ln=None', 'ln_image=None', 'ln_cb_image=None', 'ln_config=None']
warnet-rpc  | 2024-08-06 09:11:34 | DEBUG   | tank     | Parsed graph node: 8 with attributes: ['version=27.0', 'image=None', 'bitcoin_config=-uacomment=w8', 'tc_netem=None', 'exporter=False', 'metrics=None', 'collect_logs=False', 'build_args=', 'ln=None', 'ln_image=None', 'ln_cb_image=None', 'ln_config=None']
warnet-rpc  | 2024-08-06 09:11:34 | DEBUG   | tank     | Parsed graph node: 9 with attributes: ['version=27.0', 'image=None', 'bitcoin_config=-uacomment=w9', 'tc_netem=None', 'exporter=False', 'metrics=None', 'collect_logs=False', 'build_args=', 'ln=None', 'ln_image=None', 'ln_cb_image=None', 'ln_config=None']
warnet-rpc  | 2024-08-06 09:11:34 | DEBUG   | tank     | Parsed graph node: 10 with attributes: ['version=27.0', 'image=None', 'bitcoin_config=-uacomment=w10', 'tc_netem=None', 'exporter=False', 'metrics=None', 'collect_logs=False', 'build_args=', 'ln=None', 'ln_image=None', 'ln_cb_image=None', 'ln_config=None']
warnet-rpc  | 2024-08-06 09:11:34 | DEBUG   | tank     | Parsed graph node: 11 with attributes: ['version=27.0', 'image=None', 'bitcoin_config=-uacomment=w11', 'tc_netem=None', 'exporter=False', 'metrics=None', 'collect_logs=False', 'build_args=', 'ln=None', 'ln_image=None', 'ln_cb_image=None', 'ln_config=None']
2024-08-06 09:11:34 | INFO    | warnet   | Imported 12 tanks from graph
warnet-rpc  | 2024-08-06 09:11:34 | INFO    | warnet   | Created Warnet using directory /root/.warnet/warnet/warnet
2024-08-06 09:11:34 | DEBUG   | k8s      | Deploying pods
warnet-rpc  | 2024-08-06 09:11:34 | DEBUG   | k8s      | Creating bitcoind container for tank 0
```

Contributor: I do see a single running exporter (I think):

```
✗ kubectl --namespace warnet-logging get pods
NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-kube-prometheus-alertmanager-0   2/2     Running   0          14m
loki-backend-0                                           2/2     Running   0          16m
loki-backend-1                                           2/2     Running   0          16m
loki-backend-2                                           2/2     Running   0          16m
loki-canary-z7vgh                                        1/1     Running   0          16m
loki-gateway-6b57fdb5dd-ktspk                            1/1     Running   0          16m
loki-grafana-6c855549d4-wsv88                            1/1     Running   0          15m
loki-grafana-agent-operator-b8f4865b9-lq2fc              1/1     Running   0          16m
loki-minio-0                                             1/1     Running   0          16m
loki-read-5d8755d4cf-74zwb                               1/1     Running   0          16m
loki-read-5d8755d4cf-9wctf                               1/1     Running   0          16m
loki-read-5d8755d4cf-gdb6z                               1/1     Running   0          16m
loki-write-0                                             1/1     Running   0          16m
loki-write-1                                             1/1     Running   0          16m
loki-write-2                                             1/1     Running   0          16m
prometheus-kube-prometheus-operator-6c5998f7dc-hjvwx     1/1     Running   0          15m
prometheus-kube-state-metrics-688d66b5b8-8srsw           1/1     Running   0          15m
prometheus-prometheus-kube-prometheus-prometheus-0       2/2     Running   0          14m
prometheus-prometheus-node-exporter-8tt7q                1/1     Running   0          15m
promtail-7h8jk                                           1/1     Running   0          15m
```

Contributor Author: The exporters are inside the tank pods, next to the bitcoin containers.

(screenshot)

The stuff in warnet-logging is the Grafana API server and the Prometheus scraper that reads from the individual tank exporters. I know there is a "node exporter" pod in warnet-logging as well; I don't know what that is actually for, and on my system it never works anyway:

(screenshot)

As far as seeing something in Grafana right away, you're right, I didn't document that. I will push another commit today that hopefully makes a default dashboard easy to load.

Collaborator: @willcl-ark logs are in Loki, not Prometheus. There are no additional containers for Loki to get its data, as it collects it via k8s directly, similar to how you can do `k logs rpd-0`.

## Dashboards

To view the default metrics in the included default dashboard, upload the dashboard
JSON file to the Grafana server:

```
curl localhost:3000/api/dashboards/db \
-H "Content-Type: application/json" \
--data "{\"dashboard\": $(cat resources/configs/grafana/default_dashboard.json)}"
```

Note the URL in the reply from the server (example):

```
{"folderUid":"","id":2,"slug":"default-warnet-dashboard","status":"success","uid":"fdu0pda1z6a68b","url":"/d/fdu0pda1z6a68b/default-warnet-dashboard","version":1}
```

Grafana can be accessed on the docker host from localhost:3000 using username
`admin` and password `admin` by default.
Open the dashboard in your browser (example):

The default dashboard is called "Docker Container & Host Metrics" and can be
accessed via the "dashboards" tab, or from the bottom right of the home screen.
`http://localhost:3000/d/fdu0pda1z6a68b/default-warnet-dashboard`

Additional dashboards and datasources may be added in the future.
38 changes: 21 additions & 17 deletions docs/warcli.md
Expand Up @@ -26,10 +26,6 @@ options:
|----------|--------|------------|-----------|
| commands | String | | |

### `warcli setup`
Run the Warnet quick start setup script


## Bitcoin

### `warcli bitcoin debug-log`
@@ -73,6 +69,18 @@

## Cluster

### `warcli cluster deploy`
Setup Warnet using the current kubectl-configured cluster


### `warcli cluster minikube-clean`
Reinit minikube images


### `warcli cluster minikube-setup`
Setup minikube for use with Warnet


### `warcli cluster port-start`
Port forward (runs as a detached process)

@@ -81,11 +89,7 @@ Port forward (runs as a detached process)
Stop the port forwarding process


### `warcli cluster start`
Setup and start Warnet with minikube


### `warcli cluster stop`
### `warcli cluster teardown`
Stop the warnet server and tear down the cluster


@@ -129,19 +133,19 @@ options:
## Image

### `warcli image build`
Build bitcoind and bitcoin-cli from \<repo>/\<branch> as \<registry>:\<tag>.
Build bitcoind and bitcoin-cli from \<repo> at \<commit_sha> as \<registry>:\<tag>.
Optionally deploy to remote registry using --action=push, otherwise image is loaded to local registry.

options:
| name | type | required | default |
|------------|--------|------------|-----------|
| repo | String | yes | |
| branch | String | yes | |
| commit_sha | String | yes | |
| registry | String | yes | |
| tag | String | yes | |
| build_args | String | | |
| arches | String | | |
| action | String | | |
| action | String | | "load" |

## Ln

@@ -207,11 +211,11 @@ options:
Start a warnet with topology loaded from a \<graph_file> into [network]

options:
| name | type | required | default |
|------------|--------|------------|-----------------------------------|
| graph_file | Path | | src/warnet/graphs/default.graphml |
| force | Bool | | False |
| network | String | | "warnet" |
| name | type | required | default |
|------------|--------|------------|----------------------------------|
| graph_file | Path | | resources/graphs/default.graphml |
| force | Bool | | False |
| network | String | | "warnet" |

### `warcli network status`
Get status of a warnet named [network]
4 changes: 2 additions & 2 deletions justfile
@@ -61,7 +61,7 @@ stop:
set -euxo pipefail

kubectl delete namespace warnet
kubectl delete namespace warnet-logging
kubectl delete namespace warnet-logging --ignore-not-found
kubectl config set-context --current --namespace=default

minikube image rm warnet/dev
@@ -84,7 +84,7 @@ startd:
stopd:
# Delete all resources
kubectl delete namespace warnet
kubectl delete namespace warnet-logging
kubectl delete namespace warnet-logging --ignore-not-found
kubectl config set-context --current --namespace=default

echo Done...
1 change: 1 addition & 0 deletions pyproject.toml
Expand Up @@ -41,6 +41,7 @@ where = ["src", "resources"]
[tool.ruff]
extend-exclude = [
"src/test_framework/*.py",
"resources/images/exporter/authproxy.py",
]
line-length = 100
indent-width = 4