
When syncing data collectors, a reindex event may be triggered unnecessarily #3931

Closed
bmfmancini opened this issue Oct 9, 2020 · 46 comments
Labels: bug (Undesired behaviour), resolved (A fixed issue)
Milestone: v1.2.16

Comments

@bmfmancini
Member

Hey All

I am having a weird issue, and I see a discrepancy between spine and net-snmp.
I have some new devices in my lab, around 400 of them. I recently started noticing that a handful of them always seem to be set for recache.

Digging further, I see that the recache has been triggered because of .1.3.6.1.2.1.1.3.0. This is a wireless modem, and for whatever reason the OID reports the uptime of the wireless connection rather than the modem's system uptime.

Here is the log

2020-10-09 15:07:19 - SPINE: Poller[1] PID[4082] Device[609] HT[1] RECACHE: Processing 2 items in the auto reindex cache for 'IP'
2020-10-09 15:07:19 - SPINE: Poller[1] PID[4082] Device[655] HT[1] RECACHE: Processing 2 items in the auto reindex cache for 'IP'
2020-10-09 15:07:19 - SPINE: Poller[1] PID[4082] Device[613] HT[1] DQ[14] RECACHE OID: .1.3.6.1.2.1.1.3.0, (assert: 4129044 < output: 4135832)
2020-10-09 15:07:19 - SPINE: Poller[1] PID[4082] Device[613] HT[1] DQ[4] RECACHE OID: .1.3.6.1.2.1.1.3.0, (assert: 4129044 < output: 4135832)
2020-10-09 15:07:19 - SPINE: Poller[1] PID[4082] Device[613] HT[1] RECACHE: Processing 2 items in the auto reindex cache for 'IP'
2020-10-09 15:07:19 - SPINE: Poller[1] PID[4082] Device[611] HT[1] DQ[14] RECACHE OID: .1.3.6.1.2.1.1.3.0, (assert: 1416513 < output: 1423302)
2020-10-09 15:07:19 - SPINE: Poller[1] PID[4082] Device[611] HT[1] DQ[4] RECACHE OID: .1.3.6.1.2.1.1.3.0, (assert: 1416513 < output: 1423302)
2020-10-09 15:07:19 - SPINE: Poller[1] PID[4082] Device[586] HT[1] DQ[14] RECACHE OID: .1.3.6.1.2.1.1.3.0, (assert: 4112543 < output: 4119352)
2020-10-09 15:07:19 - SPINE: Poller[1] PID[4082] Device[586] HT[1] DQ[4] RECACHE OID: .1.3.6.1.2.1.1.3.0, (assert: 4112543 < output: 4119352)
2020-10-09 15:07:19 - SPINE: Poller[1] PID[4082] Device[611] HT[1] RECACHE: Processing 2 items in the auto reindex cache for 'IP'
2020-10-09 15:07:19 - SPINE: Poller[1] PID[4082] Device[586] HT[1] RECACHE: Processing 2 items in the auto reindex cache for 'IP'

I also started seeing the following in the error log

2020-10-09 15:27:21 - POLLER: Poller[1] ERROR: Process being killed due to timeout! (commands, master, 1, 21242)
2020-10-09 15:27:19 - SPINE: Poller[1] PID[8933] ERROR: Failed to get oid '.1.3.6.1.2.1.1.3.0' for Device[642]
2020-10-09 15:27:19 - SPINE: Poller[1] PID[8933] ERROR: Failed to get oid '.1.3.6.1.2.1.1.3.0' for Device[670]
2020-10-09 15:27:19 - SPINE: Poller[1] PID[8933] ERROR: Failed to get oid '.1.3.6.1.2.1.1.3.0' for Device[694]
2020-10-09 15:27:18 - SPINE: Poller[1] PID[8933] ERROR: Failed to get oid '.1.3.6.1.2.1.1.3.0' for Device[661]
2020-10-09 15:27:18 - SPINE: Poller[1] PID[8933] ERROR: Failed to get oid '.1.3.6.1.2.1.1.3.0' for Device[611]
2020-10-09 15:27:17 - SPINE: Poller[1] PID[8933] ERROR: Failed to get oid '.1.3.6.1.2.1.1.3.0' for Device[664]

When I do a direct poll from spine, I get the following:

./spine -R -H 329 | more
2020-10-09 15:29:33 - SPINE: Poller[1] PID[18512] Device[329] WARNING: snmp_pdu_create(.1.3.6.1.2.1.1.3.0)
2020-10-09 15:29:33 - SPINE: Poller[1] PID[18512] Device[329] WARNING: snmp_pdu_create(.1.3.6.1.2.1.1.3.0) [complete]
2020-10-09 15:29:33 - SPINE: Poller[1] PID[18512] Device[329] WARNING: snmp_parse_oid(.1.3.6.1.2.1.1.3.0)
2020-10-09 15:29:33 - SPINE: Poller[1] PID[18512] Device[329] WARNING: snmp_parse_oid(.1.3.6.1.2.1.1.3.0) [complete]
2020-10-09 15:29:33 - SPINE: Poller[1] PID[18512] Device[329] WARNING: snmp_add_null_var(.1.3.6.1.2.1.1.3.0)
2020-10-09 15:29:33 - SPINE: Poller[1] PID[18512] Device[329] WARNING: snmp_add_null_var(.1.3.6.1.2.1.1.3.0) [complete]
2020-10-09 15:29:33 - SPINE: Poller[1] PID[18512] Device[329] WARNING: snmp_sess_sync_response(.1.3.6.1.2.1.1.3.0)
2020-10-09 15:29:35 - SPINE: Poller[1] PID[18512] Device[329] WARNING: snmp_sess_sync_response(.1.3.6.1.2.1.1.3.0) [complete]
2020-10-09 15:29:35 - SPINE: Poller[1] PID[18512] ERROR: Failed to get oid '.1.3.6.1.2.1.1.3.0' for Device[329]

But if I do an snmpwalk on that OID, it responds fine.

Spine v1.2.12
Cacti V1.2.12
NET-SNMP version: 5.7.2
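
To reproduce this outside of spine, comparing a single snmpget against a walk on the same OID is the closest equivalent to what spine does; a minimal sketch (SNMP version, community and address are placeholders, adjust to the device):

 snmpget  -v2c -c public 192.0.2.10 .1.3.6.1.2.1.1.3.0
 snmpwalk -v2c -c public 192.0.2.10 .1.3.6.1.2.1.1.3.0

spine issues a plain GET, while snmpwalk uses GETNEXT/GETBULK, so if the get fails while the walk succeeds, the device may be mishandling GET requests specifically.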

@bmfmancini
Member Author

I am trying this on cmd.php right now to see if the same behavior happens

@bmfmancini
Member Author

Hmm same thing with cmd.php

@bmfmancini
Member Author

I also found the following

2020-10-09 15:49:25 - PCOMMAND Device[489] WARNING: Recache Event Detected for Device
2020-10-09 15:49:04 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_sess_sync_response(.1.3.6.1.2.1.1.3.0) [complete]
2020-10-09 15:49:04 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_sess_sync_response(.1.3.6.1.2.1.1.3.0)
2020-10-09 15:49:04 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_add_null_var(.1.3.6.1.2.1.1.3.0) [complete]
2020-10-09 15:49:04 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_add_null_var(.1.3.6.1.2.1.1.3.0)
2020-10-09 15:49:04 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_parse_oid(.1.3.6.1.2.1.1.3.0) [complete]
2020-10-09 15:49:04 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_parse_oid(.1.3.6.1.2.1.1.3.0)
2020-10-09 15:49:04 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_pdu_create(.1.3.6.1.2.1.1.3.0) [complete]
2020-10-09 15:49:04 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_pdu_create(.1.3.6.1.2.1.1.3.0)
2020-10-09 15:49:04 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_sess_sync_response(.1.3.6.1.2.1.1.6.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_sess_sync_response(.1.3.6.1.2.1.1.6.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_add_null_var(.1.3.6.1.2.1.1.6.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_add_null_var(.1.3.6.1.2.1.1.6.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_parse_oid(.1.3.6.1.2.1.1.6.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_parse_oid(.1.3.6.1.2.1.1.6.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_pdu_create(.1.3.6.1.2.1.1.6.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_pdu_create(.1.3.6.1.2.1.1.6.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_sess_sync_response(.1.3.6.1.2.1.1.5.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_sess_sync_response(.1.3.6.1.2.1.1.5.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_add_null_var(.1.3.6.1.2.1.1.5.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_add_null_var(.1.3.6.1.2.1.1.5.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_parse_oid(.1.3.6.1.2.1.1.5.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_parse_oid(.1.3.6.1.2.1.1.5.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_pdu_create(.1.3.6.1.2.1.1.5.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_pdu_create(.1.3.6.1.2.1.1.5.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_sess_sync_response(.1.3.6.1.2.1.1.4.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_sess_sync_response(.1.3.6.1.2.1.1.4.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_add_null_var(.1.3.6.1.2.1.1.4.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_add_null_var(.1.3.6.1.2.1.1.4.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_parse_oid(.1.3.6.1.2.1.1.4.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_parse_oid(.1.3.6.1.2.1.1.4.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_pdu_create(.1.3.6.1.2.1.1.4.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_pdu_create(.1.3.6.1.2.1.1.4.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_sess_sync_response(.1.3.6.1.2.1.1.3.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_sess_sync_response(.1.3.6.1.2.1.1.3.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_add_null_var(.1.3.6.1.2.1.1.3.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_add_null_var(.1.3.6.1.2.1.1.3.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_parse_oid(.1.3.6.1.2.1.1.3.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_parse_oid(.1.3.6.1.2.1.1.3.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_pdu_create(.1.3.6.1.2.1.1.3.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_pdu_create(.1.3.6.1.2.1.1.3.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_sess_sync_response(.1.3.6.1.2.1.1.2.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_sess_sync_response(.1.3.6.1.2.1.1.2.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_add_null_var(.1.3.6.1.2.1.1.2.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_add_null_var(.1.3.6.1.2.1.1.2.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_parse_oid(.1.3.6.1.2.1.1.2.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_parse_oid(.1.3.6.1.2.1.1.2.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_pdu_create(.1.3.6.1.2.1.1.2.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_pdu_create(.1.3.6.1.2.1.1.2.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_sess_sync_response(.1.3.6.1.2.1.1.1.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_sess_sync_response(.1.3.6.1.2.1.1.1.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_add_null_var(.1.3.6.1.2.1.1.1.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_add_null_var(.1.3.6.1.2.1.1.1.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_parse_oid(.1.3.6.1.2.1.1.1.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_parse_oid(.1.3.6.1.2.1.1.1.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_pdu_create(.1.3.6.1.2.1.1.1.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_pdu_create(.1.3.6.1.2.1.1.1.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_sess_sync_response(.1.3.6.1.2.1.1.3.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_sess_sync_response(.1.3.6.1.2.1.1.3.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_add_null_var(.1.3.6.1.2.1.1.3.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_add_null_var(.1.3.6.1.2.1.1.3.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_parse_oid(.1.3.6.1.2.1.1.3.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_parse_oid(.1.3.6.1.2.1.1.3.0)
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_pdu_create(.1.3.6.1.2.1.1.3.0) [complete]
2020-10-09 15:49:03 - SPINE: Poller[1] PID[7263] Device[489] WARNING: snmp_pdu_create(.1.3.6.1.2.1.1.3.0)

It seems spine is doing a lot with this OID, almost like a mini loop.

@TheWitness
Member

Disable data collector sync. We've found that this causes the issue.

@TheWitness
Member

Might be something else too.

@bmfmancini
Member Author

bmfmancini commented Oct 10, 2020 via email

@bmfmancini
Member Author

Taking a deeper look, it turns out it's not just a handful of devices, it's all of this specific device type. The error remains the same: unable to fetch '.1.3.6.1.2.1.1.3.0'. snmpwalk still works fine on the same device for that OID.

@TheWitness TheWitness changed the title Spine V1.2.12 Recache due to failed to get OID but SNMPWALK works Recache due to failed to get OID but SNMPWALK works Nov 21, 2020
@TheWitness
Member

Did this happen when we switched from DST to standard time?

@TheWitness
Member

Nope:

2020: Sunday, March 8 and Sunday, November 1

@TheWitness
Member

@bmfmancini, someone is messing with the clocks on those devices. That's the reason.

@bmfmancini
Member Author

bmfmancini commented Nov 21, 2020 via email

@TheWitness
Member

What is the re-index method?

@bmfmancini
Member Author

bmfmancini commented Nov 21, 2020 via email

@TheWitness
Member

Okay, so each polling cycle, Cacti takes the "assert_value" from poller_reindex and compares it, using the stored operator, against an snmpget of the "arg1" OID; if the assertion fails, it causes that error. See the rows from my system below. Tinker with calling snmpget with that OID and compare against what is stored in the table. These are also remote devices, right?

[screenshot: poller_reindex rows from TheWitness's system]
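
A minimal way to check this by hand, assuming local database access and substituting your own host id, community and address (the OID comes from the poller_reindex row's arg1 column):

 mysql -e "SELECT data_query_id, op, assert_value, arg1 FROM poller_reindex WHERE host_id = 13" cacti
 snmpget -v2c -c public 192.0.2.10 .1.3.6.1.2.1.1.3.0

If the live value no longer satisfies the stored op/assert_value pair, a recache event is raised on the next cycle.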

@TheWitness
Member

This could be an artifact from a recent change BTW.

@TheWitness
Member

Okay, there is a problem. Lucky day!

@TheWitness TheWitness reopened this Nov 21, 2020
@TheWitness
Member

Should have the solution shortly.

@TheWitness
Member

Turns out this is a Cacti issue. Committing in a bit.

@TheWitness TheWitness transferred this issue from Cacti/spine Nov 21, 2020
TheWitness added a commit that referenced this issue Nov 21, 2020
- Recache due to failed to get OID but SNMPWALK works
- Certain Device actions cause the removal of poller items from the remote data collector
@TheWitness TheWitness added bug Undesired behaviour resolved A fixed issue labels Nov 21, 2020
@TheWitness TheWitness added this to the v1.2.16 milestone Nov 21, 2020
@TheWitness
Member

Test ASAP. This will force us to move the release ahead.

@bmfmancini
Member Author

bmfmancini commented Nov 21, 2020 via email

@TheWitness
Member

The sooner the better; I don't want this bug hanging out there for too long.

@bmfmancini
Member Author

@TheWitness

I tested this morning. I grabbed the files off the 1.2.x branch; the same behavior is seen.

@bmfmancini
Member Author

Should I try to rebuild the poller cache?

@TheWitness
Member

Yes.
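
For reference, a minimal sketch of doing that from the Cacti CLI, assuming a default install path:

 cd /var/www/html/cacti
 php cli/rebuild_poller_cache.php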

@bmfmancini
Member Author

OK, just rebuilt the cache, will report back soon.

@bmfmancini
Member Author

@TheWitness No change on my side

@TheWitness
Member

Are your PHP files replicating to the remotes?

@bmfmancini
Member Author

bmfmancini commented Nov 25, 2020 via email

@TheWitness
Member

It was buggered up on my system again. Research tonight.

@bmfmancini
Member Author

bmfmancini commented Nov 25, 2020 via email

@TheWitness
Member

Okay, take the latest lib/poller.php and do a full sync to your pollers. This should get it fixed.
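
A sketch of pulling just that file from the 1.2.x branch (the install path is an assumption, adjust to your system), followed by a Full Sync of the data collectors from the console:

 wget -O /var/www/html/cacti/lib/poller.php \
   https://raw.githubusercontent.com/Cacti/cacti/1.2.x/lib/poller.php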

@TheWitness
Member

Related to #3916 and #3915

@bmfmancini
Member Author

Sorry man, no dice, they're still coming in.

@TheWitness
Member

TheWitness commented Nov 26, 2020

So, the re-index warnings still happen after replication then? I want to ensure that we are not mixing two issues: the poller cache evaporating vs. the reindex errors.

@bmfmancini
Member Author

The recache OID warnings still come in, even after replication.

@TheWitness
Member

Yea, just about to make an update.

TheWitness added a commit that referenced this issue Nov 26, 2020
* forgot to handle the poller_reindex cache
@TheWitness
Member

Okay, should be fixed now.

@bmfmancini
Member Author

Ok testing now

@TheWitness
Member

Bump!

@bmfmancini
Member Author

bmfmancini commented Nov 27, 2020 via email

@TheWitness
Member

Okay, take this watch command, rewrite it for your system, and then run it. While it's running, note the "assert" values. Then do a full sync to the remote; the value on the remote should go back.

 watch 'echo "Main Hosts";mysql -e "select * from poller_reindex where host_id=13" cacti;echo "Remote Host";mysql -ucactiuser -pcactiuser -hvmhost1 -e "select * from poller_reindex where host_id=13" cacti';

Output should look something like:

Every 2.0s: echo "Main Hosts";mysql -e "select * from poller_reindex where host_id=13" cacti;echo "Remote Host";mysql -u...  Fri Nov 27 10:30:39 2020

Main Hosts
host_id data_query_id   action  present op      assert_value    arg1
13      4       0       1       =       6       .1.3.6.1.2.1.2.1.0
13      6       0       1       <       590286159       .1.3.6.1.2.1.1.3.0
13      7       0       1       <       633714161       .1.3.6.1.2.1.1.3.0
13      8       0       1       <       633714235       .1.3.6.1.2.1.1.3.0
13      9       0       1       <       633714273       .1.3.6.1.2.1.1.3.0
Remote Host
host_id data_query_id   action  present op      assert_value    arg1
13      4       0       1       =       6       .1.3.6.1.2.1.2.1.0
13      6       0       1       <       639166433       .1.3.6.1.2.1.1.3.0
13      7       0       1       <       639166435       .1.3.6.1.2.1.1.3.0
13      8       0       1       <       639166437       .1.3.6.1.2.1.1.3.0
13      9       0       1       <       639166439       .1.3.6.1.2.1.1.3.0

@TheWitness
Member

You should notice that the main collector's time should be back from before the last sync, and it should not be updated afterwards. The only thing writing that assert_value should be the remote data collector, unless you have another collector pushing data into your production system, which would be a setup error, I think. For that, it might be good to log the connection attempts and where they are coming from.
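
A quick, non-invasive way to see who is currently connected to the main Cacti database (standard information_schema tables, run on the main server):

 mysql -e "SELECT USER, HOST, DB, COMMAND, TIME FROM information_schema.PROCESSLIST WHERE DB = 'cacti'"

Anything connecting from an address that isn't one of your known collectors is worth investigating.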

@TheWitness
Member

If the latter is the case, you might want to consider a stricter ACL on who can connect to which databases, if you don't have that already. I don't want to jump to any conclusions here though.
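
A sketch of such an ACL, assuming the remote collector sits at 10.0.0.20 and the account is cactiuser (address, account name and password are placeholders):

 mysql -e "DROP USER IF EXISTS 'cactiuser'@'%'"
 mysql -e "CREATE USER 'cactiuser'@'10.0.0.20' IDENTIFIED BY 'strong-password-here'"
 mysql -e "GRANT ALL PRIVILEGES ON cacti.* TO 'cactiuser'@'10.0.0.20'"
 mysql -e "FLUSH PRIVILEGES"

That limits the account to a single source address instead of the wildcard host.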

@bmfmancini
Member Author

I am trying this now. The thing is, I have devices doing this that are on the main poller.

@TheWitness
Member

That's very odd then. Might even be some device issue.

@netniV netniV changed the title Recache due to failed to get OID but SNMPWALK works When syncing data collectors, a reindex event may be triggered unnecessarily Nov 30, 2020
@TheWitness
Member

@bmfmancini, I'm marking this one closed, as it was a real problem for remote data collectors, and that issue is resolved. As for devices on the main data collector, I suspect a time zone collision or some hardware issue.

@github-actions github-actions bot locked and limited conversation to collaborators Mar 5, 2021