The original bug reported in LP#1713165 [0] suggested that the following part of the haproxy check was a problem when there were backups of the config file present, as it recursively greps the haproxy config directory.
AUTH=$(grep -r "stats auth" /etc/haproxy | awk 'NR=1{print $4}')
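For illustration, this is roughly what happens once a backup sits next to the config (the file names and credentials below are made up):
$ grep -r "stats auth" /etc/haproxy
/etc/haproxy/haproxy.cfg:    stats auth admin:password
/etc/haproxy/haproxy.cfg.bak:    stats auth admin:oldpassword
# Every matching line reaches awk ('NR=1' is an assignment, not the
# comparison 'NR==1', so the action runs for every record), and AUTH
# picks up both values.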
The suggested fix was -
AUTH=$(egrep "stats auth" /etc/haproxy/haproxy.cfg | awk 'NR=1{print $3}')
The implemented fix was -
AUTH=$(egrep "stats auth" /etc/haproxy/haproxy.cfg | awk 'NR=1{print $4}')
Note that the awk field was not correct: dropping grep -r removes the file name prefix from the matched line and shifts the credentials from the fourth field to the third, as pointed out in comments on the original bug [1] and the GitHub PR [2].
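The off-by-one is easy to demonstrate (the config line below is a made-up example):
# With the grep -r file name prefix, the credentials are the fourth field:
$ echo '/etc/haproxy/haproxy.cfg: stats auth admin:password' | awk '{print $4}'
admin:password
# Without the prefix (egrep on a single file), they are the third:
$ echo 'stats auth admin:password' | awk '{print $3}'
admin:password
# So keeping 'print $4' after dropping -r leaves AUTH empty.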
Despite this, the change has made it into the OpenStack charms via a charmhelpers sync, which means we now have a regression in the charms as well.
In addition to this, the fix also needs to be applied to 'check_haproxy_queue_depth.sh' in the same directory.
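For reference, a sketch of the corrected extraction, assuming check_haproxy_queue_depth.sh uses the same pattern (note the comparison NR==1 rather than the assignment NR=1, so only the first match is used):
AUTH=$(egrep "stats auth" /etc/haproxy/haproxy.cfg | awk 'NR==1{print $3}')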
So far, I have found this has been merged into the following OpenStack charms, but I suspect there are many more charms that have synced the latest charmhelpers -
ceilometer
ceilometer-agent
ceph-radosgw
cinder
cinder-backup
cinder-ceph
glance
heat
neutron-api
neutron-gateway
neutron-openvswitch
nova-cloud-controller
nova-compute
openstack-dashboard
rabbitmq-server
swift-proxy
swift-storage
The impact is that all of these charms install NRPE checks for haproxy which falsely report all backends as down (a critical-level alert), because the checks can no longer extract the stats auth credentials correctly.
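Roughly how that manifests, assuming the checks pass the extracted credentials to the haproxy stats endpoint (the URL and port below are hypothetical stand-ins, not necessarily what the scripts actually use):
curl -s -u "$AUTH" "http://localhost:8888/;csv"
# With AUTH empty the request is effectively unauthenticated, the stats
# fetch fails, and the check reports every backend as down.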
I have also filed this over at [3], but please kill it with fire as appropriate; I know things have moved/are moving to GitHub.
[0] https://bugs.launchpad.net/charm-helpers/+bug/1713165
[1] https://bugs.launchpad.net/charm-helpers/+bug/1713165/comments/3
[2] #6 (comment)
[3] https://bugs.launchpad.net/charm-helpers/+bug/1743287