
Poller output not empty all the time #4687

Closed
TheNetworkIsDown opened this issue Apr 9, 2022 · 26 comments
Labels
question (A question not a bug) · resolved (A fixed issue)
Milestone
v1.2.21

Comments

@TheNetworkIsDown
Contributor

TheNetworkIsDown commented Apr 9, 2022

I'm getting these messages all the time:

2022/04/09 01:57:51 - SYSTEM STATS: Time:169.0462 Method:cmd.php Processes:1 Threads:1 Hosts:177 HostsPerProcess:177 DataSources:5235 RRDsProcessed:3035 
2022/04/09 01:55:02 - POLLER: Poller[Main Poller] PID[23758] WARNING: Poller Output Table not Empty. Issues: 2, DS[SW_F2 - Traffic - 1:49 (Up) [ISC_B-F1_1]] Graphs[SW_F2 - Traffic - 1:49 (Up) [ISC_B-F1_1] ]

The graphs seem to render OK, but I still get an alert email every time.

Looking at the actual poller_output table, it contains this:

+---------------+-------------+---------------------+----------------+
| local_data_id | rrd_name    | time                | output         |
+---------------+-------------+---------------------+----------------+
|          6482 | traffic_in  | 2022-04-09 01:45:42 | 41487596159437 |
|          6482 | traffic_in  | 2022-04-09 01:50:45 | 41488856327543 |
|          6482 | traffic_in  | 2022-04-09 01:55:51 | 41490182632144 |
|          6482 | traffic_out | 2022-04-09 01:47:06 | 33628854252    |
|          6482 | traffic_out | 2022-04-09 01:52:12 | 33629294595    |
|          6482 | traffic_out | 2022-04-09 01:57:18 | 33629750504    |
+---------------+-------------+---------------------+----------------+
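
For reference, a query like the following against the Cacti database produces that dump. This is only a sketch: the database name and user are placeholders, and 6482 is the data source from the warning above.

# placeholders: adjust DB name/user; 6482 is the data source from the warning
mysql -u cacti -p cacti -e "SELECT local_data_id, rrd_name, time, output
  FROM poller_output
  WHERE local_data_id = 6482
  ORDER BY rrd_name, time;"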

local_data_id 6482 is just a random port of some switch.

As a matter of fact, the previous "Poller Output Table not Empty" message was due to id 6481. So I identified that graph and data source, deleted both, and recreated them. The new one got id 8000-something and works fine. But after I had done that, it started complaining about id 6482 (6481 + 1). There seems to be something magic going on at that point.

What does that mean? What could be wrong here?

Cacti 1.2.20
Mainly using "SNMP - Interface Statistics" Data Query

There are no PHP errors that I could find.

PS: I tried to dig into the details of what Cacti does, and there is, for example, an image missing on a documentation page that could be helpful: https://docs.cacti.net/Principles-of-Operation.md -> Principles of Operation. https://github.com/Cacti/documentation/blob/develop/images/principles-of-operation.png seems to be empty.

@TheWitness
Member

Repopulate the poller cache. Verify that traffic_in is also in the poller cache.
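
A minimal sketch of what that could look like from the CLI, assuming the stock cli/rebuild_poller_cache.php script; the install path and DB credentials are placeholders, and 6482 is the data source from the report above:

# rebuild the poller cache (stock Cacti CLI script)
php -q /path/to/cacti/cli/rebuild_poller_cache.php
# verify that traffic_in (and traffic_out) are listed for the affected data source
mysql -u cacti -p cacti -e "SELECT local_data_id, rrd_name
  FROM poller_item
  WHERE local_data_id = 6482;"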

@TheNetworkIsDown
Contributor Author

TheNetworkIsDown commented Apr 10, 2022

I have already re-saved the device and it's in there.

[screenshot: the data source is present in the poller cache]

@TheNetworkIsDown
Contributor Author

I've now had a very interesting phenomenon. Say we are monitoring services on devices D1, D2, D3 & D4.

A datasource on D1 is the one producing the "Poller Output table not Empty" messages.

Today we had a power failure and device D2 (and more) were not available.

During that time the error no longer appeared on D1, but on D3 and D4 (unrelated devices).
D1, D3 and D4 were not affected by the power failure.

After D2 was available again, the error moved back to D1 ....

I am attaching the log. viewlog.txt

@netniV
Member

netniV commented Apr 12, 2022

The order of polling may not be the same as the order of your devices. When your polling cycle completes, do you have the DSSTATS lines that show how much was processed, and what is your polling cycle set to?

@TheNetworkIsDown
Contributor Author

TheNetworkIsDown commented Apr 12, 2022

I am attaching a more detailed logfile below.

(The power failure I was writing about lasted from 2022/04/11 13:30 to about 14:15.)
During that time, other hosts were generating errors, while the one previously generating errors was still reachable and not affected by the power failure!

log2.txt

@netniV
Member

netniV commented Apr 12, 2022

What is your polling cycle?

@TheNetworkIsDown
Contributor Author

Oops, sorry. 5 minutes.

@TheWitness
Member

Can you use spine? What is your SNMP timeout set to? Are you logging poller errors for the device?

@TheWitness
Member

Also, note that the packages included in the 1.2.20 release were damaged. Whoops. If this was an upgrade, no issue, but if it's a new install, I suggest you grab the packages from the latest 1.2.x branch, and hit the reset button.

@TheWitness
Member

Another interesting point is that the poller is supposed to catch and remove these before it ends in 1.2.20. So, my brain is hurting at the moment.

@TheWitness
Member

When using cmd.php, you should be able to do this, replacing 58 with the actual device ID:

php -q cmd.php --first=58 --last=58 --debug --mibs --poller=1

@TheWitness
Member

You should get something like this:

2022-04-12 18:59:03 - POLLER: Poller[1] PID[26778] Device[58] STATUS: Device '192.168.11.200' is UP.
2022-04-12 18:59:03 - POLLER: Poller[1] PID[26778] Device[58] RECACHE: Processing 4 items in the auto reindex cache for '192.168.11.200'.
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DQ[4] RECACHE OID: .1.3.6.1.2.1.1.3.0, (assert:133575600 < output:133575900)
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DQ[7] RECACHE OID: .1.3.6.1.2.1.1.3.0, (assert:133575600 < output:133575900)
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DQ[8] RECACHE OID: .1.3.6.1.2.1.1.3.0, (assert:133575600 < output:133575900)
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DQ[9] RECACHE OID: .1.3.6.1.2.1.1.3.0, (assert:133575600 < output:133575900)
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DS[126] TT[0.55] SERVER: /var/www/html/cacti/scripts/ss_hstats.php ss_hstats '58' polling_time, output: 0.284
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DS[168] TT[0.2] SERVER: /var/www/html/cacti/scripts/ss_hstats.php ss_hstats '58' uptime, output: 133575900
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DS[255] TT[36.49] SNMP: v3: 192.168.11.200, dsname: load_1min, oid: .1.3.6.1.4.1.2021.10.1.3.1, output: 1.76
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DS[274] TT[37.11] SNMP: v3: 192.168.11.200, dsname: load_15min, oid: .1.3.6.1.4.1.2021.10.1.3.3, output: 1.87
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DS[288] TT[37.84] SNMP: v3: 192.168.11.200, dsname: load_5min, oid: .1.3.6.1.4.1.2021.10.1.3.2, output: 1.90
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DS[406] TT[40.59] SNMP: v3: 192.168.11.200, dsname: mem_buffers, oid: .1.3.6.1.4.1.2021.4.14.0, output: 2260
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DS[426] TT[37.77] SNMP: v3: 192.168.11.200, dsname: mem_cache, oid: .1.3.6.1.4.1.2021.4.15.0, output: 4948128
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DS[443] TT[38.65] SNMP: v3: 192.168.11.200, dsname: mem_free, oid: .1.3.6.1.4.1.2021.4.6.0, output: 2059416
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DS[458] TT[37.77] SNMP: v3: 192.168.11.200, dsname: mem_total, oid: .1.3.6.1.4.1.2021.4.5.0, output: 16209248
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DS[556] TT[37.32] SNMP: v3: 192.168.11.200, dsname: users, oid: .1.3.6.1.2.1.25.1.5.0, output: 1
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DS[599] TT[54.78] SNMP: v3: 192.168.11.200, dsname: proc, oid: .1.3.6.1.2.1.25.1.6.0, output: 235
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DS[690] TT[50.98] SNMP: v3: 192.168.11.200, dsname: ssCpuIdle, oid: .1.3.6.1.4.1.2021.11.11.0, output: 72
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DS[702] TT[44.87] SNMP: v3: 192.168.11.200, dsname: ssCpuSystem, oid: .1.3.6.1.4.1.2021.11.10.0, output: 3
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DS[713] TT[40.4] SNMP: v3: 192.168.11.200, dsname: ssCpuUser, oid: .1.3.6.1.4.1.2021.11.9.0, output: 11
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DS[782] TT[37.24] SNMP: v3: 192.168.11.200, dsname: ssSysInterrupts, oid: .1.3.6.1.4.1.2021.11.7.0, output: 4598
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DS[820] TT[37.87] SNMP: v3: 192.168.11.200, dsname: ssSysContext, oid: .1.3.6.1.4.1.2021.11.8.0, output: 5519
2022-04-12 18:59:04 - POLLER: Poller[1] PID[26778] Device[58] DS[858] TT[178.85] SERVER: /var/www/html/cacti/scripts/ss_net_snmp_disk_io.php ss_net_snmp_disk_io '192.168.11.200', output: reads:0 writes:34
2022-04-12 18:59:05 - POLLER: Poller[1] PID[26778] Device[58] DS[888] TT[148.83] SERVER: /var/www/html/cacti/scripts/ss_net_snmp_disk_bytes.php ss_net_snmp_disk_bytes '192.168.11.200', output: bytesread:0 byteswritten:288256
2022-04-12 18:59:05 - POLLER: Poller[1] PID[26778] Device[58] DS[2005] TT[40.69] SNMP: v3: 192.168.11.200, dsname: traffic_in, oid: .1.3.6.1.2.1.31.1.1.1.6.3, output: 48748476664
2022-04-12 18:59:05 - POLLER: Poller[1] PID[26778] Device[58] DS[2005] TT[39.54] SNMP: v3: 192.168.11.200, dsname: traffic_out, oid: .1.3.6.1.2.1.31.1.1.1.10.3, output: 6033456632
2022-04-12 18:59:05 - POLLER: Poller[1] PID[26778] Device[58] DS[2083] TT[37.46] SNMP: v3: 192.168.11.200, dsname: discards_in, oid: .1.3.6.1.2.1.2.2.1.13.3, output: 0
2022-04-12 18:59:05 - POLLER: Poller[1] PID[26778] Device[58] DS[2083] TT[51.21] SNMP: v3: 192.168.11.200, dsname: discards_out, oid: .1.3.6.1.2.1.2.2.1.19.3, output: 0
2022-04-12 18:59:05 - POLLER: Poller[1] PID[26778] Device[58] DS[2083] TT[47.3] SNMP: v3: 192.168.11.200, dsname: errors_in, oid: .1.3.6.1.2.1.2.2.1.14.3, output: 0
2022-04-12 18:59:05 - POLLER: Poller[1] PID[26778] Device[58] DS[2083] TT[47.99] SNMP: v3: 192.168.11.200, dsname: errors_out, oid: .1.3.6.1.2.1.2.2.1.20.3, output: 0
2022-04-12 18:59:05 - POLLER: Poller[1] PID[26778] Device[58] Time[1.70] Items[24] Errors[0]
2022-04-12 18:59:05 - POLLER: Poller[1] PID[26778] Time: 1.8012 s, Poller: 1, Threads: N/A, Devices: 1, Items: 24, Errors: 0

TheWitness added the bug (Undesired behaviour) and unverified (Some days we don't have a clue) labels on Apr 12, 2022
@TheNetworkIsDown
Contributor Author

Can you use spine?

Probably. But it will be something new. Never dug into this.

What is your snmp timeout set to?

500 ms

Are you logging poller errors for the device?

Settings > General > Generic Log Level: LOW - Statistics and Errors

Also, note that the packages included in the 1.2.20 release were damaged.

It's an upgrade, indeed. What do you mean by "packages" and "damaged"? Usually an upgrade means replacing the entire thing according to https://docs.cacti.net/Upgrading-Cacti.md, does it not?

When using cmd.php, you should be able to do this, replacing 58 with the actual device id

# php -q /path/to/cacti/cmd.php --first=215 --last=215 --debug --mibs --poller=1
2022/04/13 04:24:07 - POLLER: Poller[1] PID[28541] Device[215] STATUS: Device '192.168.1.222' is UP.
2022/04/13 04:24:07 - POLLER: Poller[1] PID[28541] Device[215] RECACHE: Processing 1 items in the auto reindex cache for '192.168.1.222'.
2022/04/13 04:24:07 - POLLER: Poller[1] PID[28541] Device[215] DQ[1] RECACHE OID: .1.3.6.1.2.1.1.3.0, (assert:1178212900 < output:1178233700)
2022/04/13 04:24:07 - POLLER: Poller[1] PID[28541] Device[215] DS[6434] TT[3.19] SNMP: v3: 192.168.1.222, dsname: traffic_in, oid: .1.3.6.1.2.1.31.1.1.1.6.1001, output: 3713557929
2022/04/13 04:24:07 - POLLER: Poller[1] PID[28541] Device[215] DS[6434] TT[2.64] SNMP: v3: 192.168.1.222, dsname: traffic_out, oid: .1.3.6.1.2.1.31.1.1.1.10.1001, output: 92566601495
2022/04/13 04:24:07 - POLLER: Poller[1] PID[28541] Device[215] DS[6435] TT[2.43] SNMP: v3: 192.168.1.222, dsname: traffic_in, oid: .1.3.6.1.2.1.31.1.1.1.6.1002, output: 249055296372
2022/04/13 04:24:07 - POLLER: Poller[1] PID[28541] Device[215] DS[6435] TT[2.25] SNMP: v3: 192.168.1.222, dsname: traffic_out, oid: .1.3.6.1.2.1.31.1.1.1.10.1002, output: 398380942770
2022/04/13 04:24:07 - POLLER: Poller[1] PID[28541] Device[215] DS[6436] TT[2.71] SNMP: v3: 192.168.1.222, dsname: traffic_in, oid: .1.3.6.1.2.1.31.1.1.1.6.1003, output: 67800014139
...
2022/04/13 04:24:08 - POLLER: Poller[1] PID[28541] Device[215] DS[6521] TT[2.53] SNMP: v3: 192.168.1.222, dsname: errors_in, oid: .1.3.6.1.2.1.2.2.1.14.1053, output: 0
2022/04/13 04:24:08 - POLLER: Poller[1] PID[28541] Device[215] DS[6521] TT[2.3] SNMP: v3: 192.168.1.222, dsname: errors_out, oid: .1.3.6.1.2.1.2.2.1.20.1053, output: 0
2022/04/13 04:24:08 - POLLER: Poller[1] PID[28541] Device[215] DS[8448] TT[2.48] SNMP: v3: 192.168.1.222, dsname: traffic_in, oid: .1.3.6.1.2.1.31.1.1.1.6.1048, output: 0
2022/04/13 04:24:08 - POLLER: Poller[1] PID[28541] Device[215] DS[8448] TT[2.28] SNMP: v3: 192.168.1.222, dsname: traffic_out, oid: .1.3.6.1.2.1.31.1.1.1.10.1048, output: 0
2022/04/13 04:24:08 - POLLER: Poller[1] PID[28541] Device[215] Time[0.43] Items[128] Errors[0]
2022/04/13 04:24:08 - POLLER: Poller[1] PID[28541] Time: 0.4423 s, Poller: 1, Threads: N/A, Devices: 1, Items: 128, Errors: 0

I've now cleaned up a little: removed down and unknown devices, removed everything generating invalid output, made sure all RRDs are being updated by looking at the filesystem, etc.

During the cleanup it's all over the place. Whenever the device/graph list changes, the problem moves to another device/DS.

On one of the devices/DS/graphs it is currently complaining about rra/172/5633.rrd.

Filesystem today:

# l rra/172
total 1240
drwxr-xr-x   2 wwwrun www   4096 Apr 13 05:58 ./
drwxrwxr-x 175 wwwrun www   4096 Apr 13 05:27 ../
-rw-r--r--   1 wwwrun www  94816 Apr 13 05:55 5625.rrd
-rw-r--r--   1 wwwrun www  94816 Apr 13 05:55 5626.rrd
-rw-r--r--   1 wwwrun www  94816 Apr 13 05:55 5627.rrd
-rw-r--r--   1 wwwrun www  94816 Apr 13 05:55 5628.rrd
-rw-r--r--   1 wwwrun www  94816 Apr 13 05:55 5629.rrd
-rw-r--r--   1 wwwrun www 188464 Apr 13 05:55 5630.rrd
-rw-r--r--   1 wwwrun www 188464 Apr 13 05:55 5631.rrd
-rw-r--r--   1 wwwrun www 188464 Apr 13 05:55 5632.rrd
-rw-r--r--   1 wwwrun www 188464 Apr 13 05:55 5634.rrd

Filesystem yesterday:

04/12/22 21:15                    94816 5625.rrd
04/12/22 21:15                    94816 5626.rrd
04/12/22 21:15                    94816 5627.rrd
04/12/22 21:15                    94816 5628.rrd
04/12/22 21:15                    94816 5629.rrd
04/12/22 21:15                   188464 5630.rrd
04/12/22 21:15                   188464 5631.rrd
04/12/22 21:15                   188464 5632.rrd
04/12/22 21:15                   188464 5633.rrd
04/12/22 21:15                   188464 5634.rrd

Why did it remove 5633.rrd, and why is it not regenerating it now?
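
To cross-check where Cacti expects that file, a query against the stock data_template_data table should show the configured path. A sketch only: DB name/credentials are placeholders, 5633 is the local_data_id of the missing RRD.

# show the RRD path Cacti has stored for local_data_id 5633
mysql -u cacti -p cacti -e "SELECT local_data_id, data_source_path
  FROM data_template_data
  WHERE local_data_id = 5633;"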

Currently the log is clean.

2022/04/13 06:31:02 - SYSTEM DSSTATS STATS: Time:0.08 Type:HOURLY
2022/04/13 06:31:02 - SYSTEM STATS: Time:59.9068 Method:cmd.php Processes:1 Threads:1 Hosts:146 HostsPerProcess:146 DataSources:4849 RRDsProcessed:3007
2022/04/13 06:26:01 - SYSTEM DSSTATS STATS: Time:0.13 Type:HOURLY
2022/04/13 06:26:00 - SYSTEM STATS: Time:58.8637 Method:cmd.php Processes:1 Threads:1 Hosts:146 HostsPerProcess:146 DataSources:4849 RRDsProcessed:3007
2022/04/13 06:20:58 - SYSTEM DSSTATS STATS: Time:0.04 Type:HOURLY
2022/04/13 06:20:58 - SYSTEM STATS: Time:57.0882 Method:cmd.php Processes:1 Threads:1 Hosts:146 HostsPerProcess:146 DataSources:4849 RRDsProcessed:3007
2022/04/13 06:15:57 - SYSTEM DSSTATS STATS: Time:0.02 Type:HOURLY
2022/04/13 06:15:57 - SYSTEM STATS: Time:55.4859 Method:cmd.php Processes:1 Threads:1 Hosts:146 HostsPerProcess:146 DataSources:4849 RRDsProcessed:3007 

But (new) stuff is still piling up in the poller_output table...

local_data_id   rrd_name        time    output
5633    discards_in     2022-04-13 06:20:14     383910
5633    discards_in     2022-04-13 06:25:14     383936
5633    discards_in     2022-04-13 06:30:15     384450
5633    discards_out    2022-04-13 06:20:23     4048
5633    discards_out    2022-04-13 06:25:23     4048
5633    discards_out    2022-04-13 06:30:27     4048
5633    errors_in       2022-04-13 06:20:23     0
5633    errors_in       2022-04-13 06:25:23     0
5633    errors_in       2022-04-13 06:30:27     0
5633    errors_out      2022-04-13 06:20:23     0
5633    errors_out      2022-04-13 06:25:23     0
5633    errors_out      2022-04-13 06:30:27     0
6175    traffic_in      2022-04-13 06:20:23     11706124177352
6175    traffic_in      2022-04-13 06:25:23     11706127865293
6175    traffic_in      2022-04-13 06:30:27     11706130429406
6175    traffic_out     2022-04-13 06:20:36     160124733015
6175    traffic_out     2022-04-13 06:25:38     160127759725
6175    traffic_out     2022-04-13 06:30:39     160129649226
7697    hdd_total       2022-04-13 06:20:36     6442446848
7697    hdd_total       2022-04-13 06:25:38     6442446848
7697    hdd_total       2022-04-13 06:30:39     6442446848
7697    hdd_used        2022-04-13 06:20:57     174587904
7697    hdd_used        2022-04-13 06:25:59     174587904
7697    hdd_used        2022-04-13 06:31:01     174587904
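
A quick way to see which data sources keep leaving rows behind, as a sketch against the stock poller_output table (DB name/credentials are placeholders):

# count leftover rows per data source
mysql -u cacti -p cacti -e "SELECT local_data_id, COUNT(*) AS stuck_rows
  FROM poller_output
  GROUP BY local_data_id
  ORDER BY stuck_rows DESC;"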

@TheNetworkIsDown
Contributor Author

Are there instructions for debugging the logic flow somewhere? I think I once tried with Xdebug and PhpStorm, but there was a problem. I think it was the asynchronous calls.

@netniV
Member

netniV commented Apr 13, 2022

Concurrency makes diagnosing things harder. This is why we suggest testing against individual hosts to see if any of the indicators of problems are there. The real issue can sometimes be with the host that was polled prior to the first problem.

@TheWitness
Member

Spine is going to be way more tolerant of down devices too, and can get the OIDs in bulk. So it should be more reliable at scale.

TheWitness added the question (A question not a bug) label and removed the bug (Undesired behaviour) and unverified (Some days we don't have a clue) labels on Apr 13, 2022
@TheWitness
Member

Switching this from a bug to a question. @NotAProfessionalDeveloper, when you are comfortable, you can close.

@TheNetworkIsDown
Contributor Author

Well, closing is the last thing on my mind right now 😄 There is definitely a problem that needs to be solved, but we don't know how to debug it. That's not good.
As a next step I'll try polling everything manually using cmd.php, with the expected result that individual polling is OK.

So what is the recommended alternative? Add spine to the equation? What is the exact reason for that?

@TheWitness
Member

Well, what I would check is the rrd_num column in the poller_item table. For the traffic_in and traffic_out data sources, both of those rows should have an rrd_num equal to 2. Also, let me know what Max OIDs is set to for the device in question. Maybe that's the issue with cmd.php.
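
As a rough sketch against the stock schema (DB name/credentials are placeholders; 6482 is the traffic data source from the original report and 215 is the device from the cmd.php run above), those two checks could look like:

# rrd_num should be 2 for both the traffic_in and traffic_out rows
mysql -u cacti -p cacti -e "SELECT local_data_id, rrd_name, rrd_num
  FROM poller_item
  WHERE local_data_id = 6482;"
# Max OIDs setting for the device in question
mysql -u cacti -p cacti -e "SELECT id, description, max_oids
  FROM host
  WHERE id = 215;"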

TheWitness added a commit that referenced this issue Apr 14, 2022
Poller output not empty all the time
@TheWitness
Member

Update to the latest 1.2.x branch and let us know if the issue is resolved.

TheWitness added the resolved (A fixed issue) label on Apr 14, 2022
TheWitness added this to the v1.2.21 milestone on Apr 14, 2022
@TheNetworkIsDown
Contributor Author

TheNetworkIsDown commented Apr 14, 2022

Something has changed at least, and there are no more errors that I can see:

2022/04/14 14:06:07 - SYSTEM STATS: Time:65.9426 Method:cmd.php Processes:1 Threads:1 Hosts:146 HostsPerProcess:146 DataSources:4849 RRDsProcessed:3007
2022/04/14 14:06:08 - SYSTEM DSSTATS STATS: Time:0.19 Type:HOURLY
2022/04/14 14:11:04 - SYSTEM STATS: Time:62.7237 Method:cmd.php Processes:1 Threads:1 Hosts:146 HostsPerProcess:146 DataSources:4849 RRDsProcessed:3010
2022/04/14 14:11:05 - SYSTEM DSSTATS STATS: Time:0.16 Type:HOURLY

Note "RRDsProcessed" which has increased from 3007 to 3010.
I'll monitor the behavior.
Thanks for now ;)

@TheWitness
Member

Yeah, I changed the behavior to mimic spine rather than digging in deep. Spine works, so it was the easy choice.

@TheWitness
Member

Anything new yet, @NotAProfessionalDeveloper?

@TheNetworkIsDown
Contributor Author

Nope. LGTM
Thanks 💯

@netniV
Member

netniV commented Apr 21, 2022

If you noticed any missing documentation image, please open an issue on the documentation repo so we can get that resolved :)

@bmfmancini
Member

bmfmancini commented Oct 11, 2022 via email

github-actions bot locked and limited the conversation to collaborators on Jan 10, 2023