
Running the IBM Spectrum Scale Performance Monitoring Bridge in a Docker container


IMPORTANT: The IBM Storage Scale system must run version 5.1.1 or above.
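You can check the cluster's committed release level from any node; one way, assuming the IBM Storage Scale mm* commands are in your PATH, is:

# mmlsconfig minReleaseLevel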

On a host running Docker or Podman, perform the following steps:

  1. Clone this repository using Git into a directory of your choice:
# git clone https://github.com/IBM/ibm-spectrum-scale-bridge-for-grafana.git grafana_bridge
  2. Build the bridge container image:
# cd grafana_bridge

# podman build -t bridge_image:latest .
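You can optionally confirm that the image was built:

# podman images bridge_image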
  3. Share the ZIMonSensors.cfg file from the pmcollector node

The ZIMonSensors.cfg file, located in the /opt/IBM/zimon directory on the pmcollector node, must be mounted into the bridge container at startup. Make sure /opt/IBM/zimon/ZIMonSensors.cfg is available on the host on which you want to run the bridge container, for example as shown below.
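One way to do this, assuming passwordless SSH to the pmcollector node (pmcollector01 is a placeholder hostname), is to copy the file to the same path on the container host:

# mkdir -p /opt/IBM/zimon
# scp pmcollector01:/opt/IBM/zimon/ZIMonSensors.cfg /opt/IBM/zimon/ZIMonSensors.cfg

Note that a plain copy must be refreshed whenever the sensor configuration changes; exporting the directory via NFS avoids that.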

  4. Start the bridge in a container, for example, listening on the OpenTSDB HTTP port:
# podman run -dt -p 4242:4242 -e "SERVER=9.XXX.XXX.XXX" -e "PORT=4242" -e "BASICAUTHPASSW=XXXXXXXXXXXXXXXXXXXXXXX" -e "APIKEYVALUE=XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" --mount type=bind,src=/opt/IBM/zimon/ZIMonSensors.cfg,target=/opt/IBM/zimon/ZIMonSensors.cfg,ro=true --pod new:my-pod --name grafana_bridge bridge_image:latest
         
# podman logs grafana_bridge
2024-04-15 13:08 - MainThread                               - INFO     -  *** IBM Storage Scale bridge for Grafana - Version: 8.0.0 ***
2024-04-15 13:08 - MainThread                               - INFO     - Successfully retrieved MetaData
2024-04-15 13:08 - MainThread                               - INFO     - Received sensors:CPU, DiskFree, GPFSBufMgr, GPFSFilesystem, GPFSFilesystemAPI, GPFSNSDDisk, GPFSNSDFS, GPFSNSDPool, GPFSNode, GPFSNodeAPI, GPFSRPCS, GPFSVFSX, GPFSWaiters, IPFIX, Load, Memory, Netstat, Network, TopProc, CTDBDBStats, CTDBStats, SMBGlobalStats, SMBStats, GPFSDiskCap, GPFSFileset, GPFSInodeCap, GPFSPool, GPFSPoolCap
2024-04-15 13:08 - MainThread                               - INFO     - Initial cherryPy server engine start have been invoked. Python version: 3.9.18 (main, Jan  4 2024, 00:00:00)
[GCC 11.4.1 20230605 (Red Hat 11.4.1-2)], cherryPy version: 18.9.0.
2024-04-15 13:08 - MainThread                               - INFO     - Registered applications:
 OpenTSDB Api listening on Grafana queries
2024-04-15 13:08 - MainThread                               - INFO     - server started

Now you can add the host running the bridge container to the Grafana data source list.
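Before adding the data source, you can check that the bridge answers on the published port; for example, assuming the container host is named bridge-host and the bridge serves the OpenTSDB-style /api/suggest endpoint that Grafana queries:

# curl -s 'http://bridge-host:4242/api/suggest?type=metrics&q=cpu'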

Using an HTTPS (SSL) connection for the IBM Storage Scale Performance Monitoring Bridge running in a container

  1. Follow the instructions in Generate SSL certificates to generate a private SSL key and an SSL certificate, for example as sketched below.
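If you only need a self-signed pair for testing, a minimal sketch using openssl is shown here (bridge-host is a placeholder CN; the file names match the TLSKEYFILE and TLSCERTFILE values passed to the container below):

# mkdir -p /etc/bridge_ssl/certs
# openssl req -x509 -newkey rsa:4096 -nodes -days 365 -subj "/CN=bridge-host" \
    -keyout /etc/bridge_ssl/certs/privkey.pem -out /etc/bridge_ssl/certs/cert.pem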

  2. Start the bridge in a container:

# podman run -dt -p 4242:4242 -p 8443:8443 -e "SERVER=9.XXX.XXX.XXX" -e "APIKEYVALUE=XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" -e "PORT=8443" -e "PROTOCOL=https" -e "TLSKEYPATH=/etc/bridge_ssl/certs" -e "TLSKEYFILE=privkey.pem" -e "TLSCERTFILE=cert.pem" -v /tmp:/var/log/ibm_bridge_for_grafana -v /etc/bridge_ssl/certs:/etc/bridge_ssl/certs --mount type=bind,src=/opt/IBM/zimon/ZIMonSensors.cfg,target=/opt/IBM/zimon/ZIMonSensors.cfg,ro=true --pod new:my-bridge-ssl-test-pod --name bridge-ssl-test bridge_image:latest

# podman logs bridge-ssl-test
2021-04-25 16:05 - INFO     -  *** IBM Spectrum Scale bridge for Grafana - Version: 7.0 ***
2021-04-25 16:05 - INFO     - Successfully retrieved MetaData
2021-04-25 16:05 - INFO     - Received sensors:CPU, DiskFree, GPFSFilesystem, GPFSFilesystemAPI, GPFSNSDDisk, GPFSNSDFS, GPFSNSDPool, GPFSNode, GPFSNodeAPI, GPFSRPCS, GPFSVFSX, GPFSWaiters, Load, Memory, Netstat, Network, TopProc, CTDBDBStats, CTDBStats, SMBGlobalStats, SMBStats, GPFSDiskCap, GPFSFileset, GPFSInodeCap, GPFSPool, GPFSPoolCap
2021-04-25 16:05 - INFO     - Initial cherryPy server engine start have been invoked. Python version: 3.6.8 (default, Aug 18 2020, 08:33:21)
[GCC 8.3.1 20191121 (Red Hat 8.3.1-5)], cherryPy version: 18.6.0.
2021-04-25 16:05 - INFO     - server started
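With a self-signed certificate, you can verify the HTTPS endpoint with curl's -k (insecure) flag, again assuming the placeholder host name bridge-host:

# curl -sk 'https://bridge-host:8443/api/suggest?type=metrics&q=cpu'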

Run the IBM Storage Scale Performance Monitoring Bridge as a Prometheus exporter in a container

  1. Follow the instructions in Generate SSL certificates to generate a private SSL key and an SSL certificate.

  2. Start the bridge in a container:

# podman run -dt -p 9250:9250 -e "SERVER=9.XXX.XXX.XXX" -e "APIKEYVALUE=XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" -e "PROMETHEUS=9250" -e "TLSKEYPATH=/etc/bridge_ssl/certs" -e "TLSKEYFILE=privkey.pem" -e "TLSCERTFILE=cert.pem" -v /tmp:/var/log/ibm_bridge_for_grafana -v /etc/bridge_ssl/certs:/etc/bridge_ssl/certs --mount type=bind,src=/home/zimon/ZIMonSensors.cfg,target=/opt/IBM/zimon/ZIMonSensors.cfg,ro=true --pod new:my-bridge --name prometheus-exporter bridge_image:latest

# podman logs prometheus-exporter
2024-03-12 20:47 - MainThread                               - INFO     -  *** IBM Storage Scale bridge for Grafana - Version: 8.0.0 ***
2024-03-12 20:47 - MainThread                               - INFO     - Successfully retrieved MetaData
2024-03-12 20:47 - MainThread                               - INFO     - Received sensors:CPU, DiskFree, GPFSBufMgr, GPFSFilesystem, GPFSFilesystemAPI, GPFSNSDDisk, GPFSNSDFS, GPFSNSDPool, GPFSNode, GPFSNodeAPI, GPFSRPCS, GPFSVFSX, GPFSWaiters, IPFIX, Load, Memory, Netstat, Network, TopProc, CTDBDBStats, CTDBStats, NFSIO, SMBGlobalStats, SMBStats, GPFSDiskCap, GPFSFileset, GPFSFilesetQuota, GPFSInodeCap, GPFSPool, GPFSPoolCap
2024-03-12 20:47 - MainThread                               - INFO     - Initial cherryPy server engine start have been invoked. Python version: 3.9.18 (main, Jan  4 2024, 00:00:00)
[GCC 11.4.1 20230605 (Red Hat 11.4.1-2)], cherryPy version: 18.9.0.
2024-03-12 20:47 - MainThread                               - INFO     - Registered applications:
 Prometheus Exporter Api listening on Prometheus requests
2024-03-12 20:47 - MainThread                               - INFO     - server started
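On the Prometheus side, a scrape job pointing at the exporter might look like the following prometheus.yml snippet. This is a sketch: bridge-host is a placeholder, insecure_skip_verify is only acceptable for self-signed test certificates, and the metrics path served by the bridge may differ from the Prometheus default of /metrics.

scrape_configs:
  - job_name: 'scale-bridge'
    scheme: https
    tls_config:
      insecure_skip_verify: true
    static_configs:
      - targets: ['bridge-host:9250']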
