qa/suites: escape the parenthesis of the whitelist text #16722

Merged: 3 commits, Aug 2, 2017
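For context on why the escaping matters: these log-whitelist entries are matched against cluster log lines as regular expressions, so unescaped parentheses act as grouping metacharacters rather than literal text. A minimal sketch of the difference, using Python's `re` module as a stand-in for the actual matcher (the log line below is illustrative):

```python
import re

# Illustrative log line; health warnings include a parenthesized code.
log_line = "cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)"

# Unescaped, the parentheses form a capture group: this pattern matches
# the bare token "OSD_DOWN", not the literal "(OSD_DOWN)" text.
assert re.search("(OSD_DOWN)", log_line)

# A truncated prefix like "(OSD_" is worse: it is an unterminated group
# and not even a valid regular expression.
try:
    re.compile("(OSD_")
except re.error:
    print("'(OSD_' is an invalid regex (unterminated group)")

# Escaped, both forms match the literal text, which is the intent;
# the prefix form still whitelists any OSD_* health code.
assert re.search(r"\(OSD_DOWN\)", log_line)
assert re.search(r"\(OSD_", log_line)
```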
4 changes: 2 additions & 2 deletions qa/cephfs/overrides/whitelist_wrongly_marked_down.yaml
@@ -2,8 +2,8 @@ overrides:
   ceph:
     log-whitelist:
     - overall HEALTH_
-    - (OSD_DOWN)
-    - (OSD_
+    - \(OSD_DOWN\)
+    - \(OSD_
     - but it is still running
     # MDS daemon 'b' is not responding, replacing it as rank 0 with standby 'a'
     - is not responding
4 changes: 2 additions & 2 deletions qa/suites/ceph-disk/basic/tasks/ceph-disk.yaml
@@ -18,8 +18,8 @@ tasks:
     fs: xfs # this implicitly means /dev/vd? are used instead of directories
     wait-for-scrub: false
     log-whitelist:
-    - (OSD_
-    - (PG_
+    - \(OSD_
+    - \(PG_
     conf:
       global:
         mon pg warn min per osd: 2
10 changes: 5 additions & 5 deletions qa/suites/fs/basic_functional/overrides/whitelist_health.yaml
@@ -2,8 +2,8 @@ overrides:
   ceph:
     log-whitelist:
     - overall HEALTH_
-    - (FS_DEGRADED)
-    - (MDS_FAILED)
-    - (MDS_DEGRADED)
-    - (FS_WITH_FAILED_MDS)
-    - (MDS_DAMAGE)
+    - \(FS_DEGRADED\)
+    - \(MDS_FAILED\)
+    - \(MDS_DEGRADED\)
+    - \(FS_WITH_FAILED_MDS\)
+    - \(MDS_DAMAGE\)
8 changes: 4 additions & 4 deletions qa/suites/fs/thrash/overrides/whitelist_health.yaml
@@ -2,7 +2,7 @@ overrides:
   ceph:
     log-whitelist:
     - overall HEALTH_
-    - (FS_DEGRADED)
-    - (MDS_FAILED)
-    - (MDS_DEGRADED)
-    - (FS_WITH_FAILED_MDS)
+    - \(FS_DEGRADED\)
+    - \(MDS_FAILED\)
+    - \(MDS_DEGRADED\)
+    - \(FS_WITH_FAILED_MDS\)
2 changes: 1 addition & 1 deletion qa/suites/powercycle/osd/tasks/rados_api_tests.yaml
@@ -2,7 +2,7 @@ overrides:
   ceph:
     log-whitelist:
     - reached quota
-    - (POOL_APP_NOT_ENABLED)
+    - \(POOL_APP_NOT_ENABLED\)
 tasks:
 - ceph-fuse:
 - workunit:
2 changes: 1 addition & 1 deletion qa/suites/powercycle/osd/whitelist_health.yaml
@@ -1,4 +1,4 @@
 overrides:
   ceph:
     log-whitelist:
-    - (MDS_TRIM)
+    - \(MDS_TRIM\)
8 changes: 4 additions & 4 deletions qa/suites/rados/basic/tasks/rados_python.yaml
@@ -3,10 +3,10 @@ overrides:
     log-whitelist:
     - but it is still running
     - overall HEALTH_
-    - (OSDMAP_FLAGS)
-    - (PG_
-    - (OSD_
-    - (OBJECT_
+    - \(OSDMAP_FLAGS\)
+    - \(PG_
+    - \(OSD_
+    - \(OBJECT_
 tasks:
 - workunit:
     clients:
4 changes: 2 additions & 2 deletions qa/suites/rados/basic/tasks/rados_stress_watch.yaml
@@ -2,8 +2,8 @@ overrides:
   ceph:
     log-whitelist:
     - overall HEALTH_
-    - (CACHE_POOL_NO_HIT_SET)
-    - (TOO_FEW_PGS)
+    - \(CACHE_POOL_NO_HIT_SET\)
+    - \(TOO_FEW_PGS\)
 tasks:
 - workunit:
     clients:
@@ -3,7 +3,7 @@ overrides:
     log-whitelist:
     - but it is still running
     - overall HEALTH_
-    - (POOL_APP_NOT_ENABLED)
+    - \(POOL_APP_NOT_ENABLED\)
 tasks:
 - workunit:
     clients:
@@ -3,7 +3,7 @@ overrides:
     log-whitelist:
     - but it is still running
     - overall HEALTH_
-    - (POOL_APP_NOT_ENABLED)
+    - \(POOL_APP_NOT_ENABLED\)
 tasks:
 - workunit:
     clients:
@@ -3,7 +3,7 @@ overrides:
     log-whitelist:
     - but it is still running
     - overall HEALTH_
-    - (POOL_APP_NOT_ENABLED)
+    - \(POOL_APP_NOT_ENABLED\)
 tasks:
 - workunit:
     clients:
4 changes: 2 additions & 2 deletions qa/suites/rados/mgr/tasks/failover.yaml
@@ -7,8 +7,8 @@ tasks:
     wait-for-scrub: false
     log-whitelist:
     - overall HEALTH_
-    - (MGR_DOWN)
-    - (PG_
+    - \(MGR_DOWN\)
+    - \(PG_
 - cephfs_test_runner:
     modules:
     - tasks.mgr.test_failover
4 changes: 2 additions & 2 deletions qa/suites/rados/monthrash/thrashers/force-sync-many.yaml
@@ -2,8 +2,8 @@ overrides:
   ceph:
     log-whitelist:
     - overall HEALTH_
-    - (MON_DOWN)
-    - (TOO_FEW_PGS)
+    - \(MON_DOWN\)
+    - \(TOO_FEW_PGS\)
 tasks:
 - mon_thrash:
     revive_delay: 90
2 changes: 1 addition & 1 deletion qa/suites/rados/monthrash/thrashers/many.yaml
@@ -2,7 +2,7 @@ overrides:
   ceph:
     log-whitelist:
     - overall HEALTH_
-    - (MON_DOWN)
+    - \(MON_DOWN\)
     conf:
       osd:
         mon client ping interval: 4
2 changes: 1 addition & 1 deletion qa/suites/rados/monthrash/thrashers/one.yaml
@@ -2,7 +2,7 @@ overrides:
   ceph:
     log-whitelist:
     - overall HEALTH_
-    - (MON_DOWN)
+    - \(MON_DOWN\)
 tasks:
 - mon_thrash:
     revive_delay: 20
2 changes: 1 addition & 1 deletion qa/suites/rados/monthrash/thrashers/sync-many.yaml
@@ -2,7 +2,7 @@ overrides:
   ceph:
     log-whitelist:
     - overall HEALTH_
-    - (MON_DOWN)
+    - \(MON_DOWN\)
     conf:
       mon:
         paxos min: 10
2 changes: 1 addition & 1 deletion qa/suites/rados/monthrash/thrashers/sync.yaml
@@ -2,7 +2,7 @@ overrides:
   ceph:
     log-whitelist:
     - overall HEALTH_
-    - (MON_DOWN)
+    - \(MON_DOWN\)
     conf:
       mon:
         paxos min: 10
@@ -3,7 +3,7 @@ overrides:
     log-whitelist:
     - slow request
     - overall HEALTH_
-    - (POOL_APP_NOT_ENABLED)
+    - \(POOL_APP_NOT_ENABLED\)
 tasks:
 - exec:
     client.0:
2 changes: 1 addition & 1 deletion qa/suites/rados/monthrash/workloads/rados_5925.yaml
@@ -2,7 +2,7 @@ overrides:
   ceph:
     log-whitelist:
     - overall HEALTH_
-    - (POOL_APP_NOT_ENABLED)
+    - \(POOL_APP_NOT_ENABLED\)
 tasks:
 - exec:
     client.0:
12 changes: 6 additions & 6 deletions qa/suites/rados/monthrash/workloads/rados_api_tests.yaml
@@ -3,12 +3,12 @@ overrides:
     log-whitelist:
     - reached quota
    - overall HEALTH_
-    - (CACHE_POOL_NO_HIT_SET)
-    - (POOL_FULL)
-    - (REQUEST_SLOW)
-    - (MON_DOWN)
-    - (PG_
-    - (POOL_APP_NOT_ENABLED)
+    - \(CACHE_POOL_NO_HIT_SET\)
+    - \(POOL_FULL\)
+    - \(REQUEST_SLOW\)
+    - \(MON_DOWN\)
+    - \(PG_
+    - \(POOL_APP_NOT_ENABLED\)
     conf:
       global:
         debug objecter: 20
4 changes: 2 additions & 2 deletions qa/suites/rados/monthrash/workloads/rados_mon_workunits.yaml
@@ -3,8 +3,8 @@ overrides:
     log-whitelist:
     - but it is still running
     - overall HEALTH_
-    - (PG_
-    - (MON_DOWN)
+    - \(PG_
+    - \(MON_DOWN\)
 tasks:
 - workunit:
     clients:
2 changes: 1 addition & 1 deletion qa/suites/rados/multimon/tasks/mon_recovery.yaml
@@ -3,5 +3,5 @@ tasks:
 - ceph:
     log-whitelist:
     - overall HEALTH_
-    - (MON_DOWN)
+    - \(MON_DOWN\)
 - mon_recovery:
9 changes: 5 additions & 4 deletions qa/suites/rados/objectstore/ceph_objectstore_tool.yaml
@@ -14,9 +14,10 @@ tasks:
         osd max object namespace len: 64
     log-whitelist:
     - overall HEALTH_
-    - (OSDMAP_FLAGS)
-    - (OSD_
-    - (PG_
-    - (TOO_FEW_PGS)
+    - \(OSDMAP_FLAGS\)
+    - \(OSD_
+    - \(PG_
+    - \(TOO_FEW_PGS\)
+    - \(POOL_APP_NOT_ENABLED\)
 - ceph_objectstore_tool:
     objects: 20
2 changes: 1 addition & 1 deletion qa/suites/rados/rest/mgr-restful.yaml
@@ -5,7 +5,7 @@ tasks:
 - ceph:
     log-whitelist:
     - overall HEALTH_
-    - (MGR_DOWN)
+    - \(MGR_DOWN\)
 - exec:
     mon.a:
     - ceph restful create-key admin
2 changes: 1 addition & 1 deletion qa/suites/rados/singleton-nomsgr/all/cache-fs-trunc.yaml
@@ -5,7 +5,7 @@ tasks:
 - ceph:
     log-whitelist:
     - overall HEALTH_
-    - (CACHE_POOL_NO_HIT_SET)
+    - \(CACHE_POOL_NO_HIT_SET\)
     conf:
       global:
         osd max object name len: 460
@@ -10,7 +10,7 @@ tasks:
 - ceph:
     log-whitelist:
     - overall HEALTH_
-    - (CACHE_POOL_NO_HIT_SET)
+    - \(CACHE_POOL_NO_HIT_SET\)
     conf:
       global:
         osd max object name len: 460
7 changes: 4 additions & 3 deletions qa/suites/rados/singleton-nomsgr/all/full-tiering.yaml
@@ -6,9 +6,10 @@ overrides:
     log-whitelist:
     - is full
     - overall HEALTH_
-    - (POOL_FULL)
-    - (POOL_NEAR_FULL)
-    - (CACHE_POOL_NO_HIT_SET)
+    - \(POOL_FULL\)
+    - \(POOL_NEAR_FULL\)
+    - \(CACHE_POOL_NO_HIT_SET\)
+    - \(CACHE_POOL_NEAR_FULL\)
 tasks:
 - install:
 - ceph:
6 changes: 3 additions & 3 deletions qa/suites/rados/singleton-nomsgr/all/health-warnings.yaml
@@ -11,9 +11,9 @@ tasks:
     log-whitelist:
     - but it is still running
     - overall HEALTH_
-    - (OSDMAP_FLAGS)
-    - (OSD_
-    - (PG_
+    - \(OSDMAP_FLAGS\)
+    - \(OSD_
+    - \(PG_
 - workunit:
     clients:
       all:
@@ -13,9 +13,9 @@ tasks:
 - ceph:
     log-whitelist:
     - overall HEALTH_
-    - (PG_
-    - (OSD_
-    - (OBJECT_
+    - \(PG_
+    - \(OSD_
+    - \(OBJECT_
     conf:
       osd:
         osd debug reject backfill probability: .3
2 changes: 1 addition & 1 deletion qa/suites/rados/singleton-nomsgr/all/valgrind-leaks.yaml
@@ -9,7 +9,7 @@ overrides:
   ceph:
     log-whitelist:
     - overall HEALTH_
-    - (PG_
+    - \(PG_
     conf:
       global:
         osd heartbeat grace: 40
9 changes: 5 additions & 4 deletions qa/suites/rados/singleton/all/divergent_priors.yaml
@@ -14,10 +14,11 @@ overrides:
   ceph:
     log-whitelist:
     - overall HEALTH_
-    - (OSDMAP_FLAGS)
-    - (OSD_
-    - (PG_
-    - (OBJECT_DEGRADED)
+    - \(OSDMAP_FLAGS\)
+    - \(OSD_
+    - \(PG_
+    - \(OBJECT_DEGRADED\)
+    - \(POOL_APP_NOT_ENABLED\)
     conf:
       osd:
         debug osd: 5
9 changes: 5 additions & 4 deletions qa/suites/rados/singleton/all/divergent_priors2.yaml
@@ -14,10 +14,11 @@ overrides:
   ceph:
     log-whitelist:
     - overall HEALTH_
-    - (OSDMAP_FLAGS)
-    - (OSD_
-    - (PG_
-    - (OBJECT_DEGRADED)
+    - \(OSDMAP_FLAGS\)
+    - \(OSD_
+    - \(PG_
+    - \(OBJECT_DEGRADED\)
+    - \(POOL_APP_NOT_ENABLED\)
     conf:
       osd:
         debug osd: 5
6 changes: 3 additions & 3 deletions qa/suites/rados/singleton/all/dump-stuck.yaml
@@ -13,7 +13,7 @@ tasks:
     log-whitelist:
     - but it is still running
     - overall HEALTH_
-    - (OSDMAP_FLAGS)
-    - (OSD_
-    - (PG_
+    - \(OSDMAP_FLAGS\)
+    - \(OSD_
+    - \(PG_
 - dump_stuck:
8 changes: 4 additions & 4 deletions qa/suites/rados/singleton/all/ec-lost-unfound.yaml
@@ -17,8 +17,8 @@ tasks:
     log-whitelist:
     - objects unfound and apparently lost
     - overall HEALTH_
-    - (OSDMAP_FLAGS)
-    - (OSD_
-    - (PG_
-    - (OBJECT_
+    - \(OSDMAP_FLAGS\)
+    - \(OSD_
+    - \(PG_
+    - \(OBJECT_
 - ec_lost_unfound:
8 changes: 4 additions & 4 deletions qa/suites/rados/singleton/all/lost-unfound-delete.yaml
@@ -16,8 +16,8 @@ tasks:
     log-whitelist:
     - objects unfound and apparently lost
     - overall HEALTH_
-    - (OSDMAP_FLAGS)
-    - (OSD_
-    - (PG_
-    - (OBJECT_
+    - \(OSDMAP_FLAGS\)
+    - \(OSD_
+    - \(PG_
+    - \(OBJECT_
 - rep_lost_unfound_delete:
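One way to catch regressions like this mechanically (a hypothetical helper, not part of this PR) is to try compiling every whitelist entry as a regular expression; unbalanced fragments such as `(OSD_` fail immediately, while escaped literals compile cleanly:

```python
import re

# Hypothetical check, not part of this PR: verify that every
# log-whitelist entry is a valid regular expression.
entries = [
    "overall HEALTH_",
    "but it is still running",
    r"\(OSD_DOWN\)",
    "(OSD_",  # unescaped, unterminated group -- should be flagged
]

bad = []
for entry in entries:
    try:
        re.compile(entry)
    except re.error as exc:
        bad.append((entry, str(exc)))

for entry, msg in bad:
    print(f"invalid whitelist regex {entry!r}: {msg}")
```

In a real tree this list would be collected by walking `qa/` and reading each `log-whitelist` block, but the compile-and-report loop is the essential part.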