
[QoS] Resolve an issue in the sequence where a referenced object removed and then the referencing object deleting and then re-adding #2210

Merged · 9 commits into sonic-net:master · Apr 18, 2022

Conversation

@stephenxs (Collaborator) commented Mar 30, 2022

What I did
Resolve an issue in the following scenario.
Suppose object a in table A references object b in table B. When a user removes and then re-adds items in tables A and B, the notifications can be received in the following order:

  1. The notification of removing object b
  2. Then the notification of removing object a
  3. And then the notification of re-adding object a
  4. The notification of re-adding object b.

Object b cannot be removed in step 1 because it is still referenced by object a. If the system is busy, the notification removing a is still in m_toSync when the notification re-adding it arrives, so both notifications are handled together and the reference to object b is never cleared.
As a result, the notification removing b is never handled and remains in m_toSync forever.

Solution:

  • Introduce a pendingRemove flag indicating that an object is about to be removed but is pending on some reference.
    • pendingRemove is set once a DEL notification is received for an object whose reference count is non-zero.
  • When resolving references in step 3, a pending-remove object is skipped and the notification remains in m_toSync.
  • A SET operation is not carried out if the pendingRemove flag is set on the object to be set; the task is retried instead.

By doing so, when object a is re-added in step 3 it cannot resolve the pending-remove object b, so the reference is not re-established and the removal from step 1 can be handled and drained successfully. The common handler pattern is sketched below.
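
The following is a minimal sketch of that pattern, not the actual orchagent code: `ObjectEntry`, `handleObject`, and the bare "SET"/"DEL" strings are simplified stand-ins; only the pendingRemove flag, the reference-count check, and the `task_need_retry` return mirror the PR.

```cpp
#include <string>

// Simplified stand-ins for orchagent types (illustrative names only).
enum class task_process_status { task_success, task_need_retry };

struct ObjectEntry
{
    unsigned int m_refCount = 0;   // number of objects currently referencing this one
    bool m_pendingRemove = false;  // DEL received, but removal blocked by references
};

task_process_status handleObject(const std::string &op, ObjectEntry &entry)
{
    if (op == "SET")
    {
        if (entry.m_pendingRemove)
        {
            // Step 4 in the scenario: the old object is still awaiting removal,
            // so retry later and let the pending DEL (step 1) drain first.
            return task_process_status::task_need_retry;
        }
        // ... create or update the object via SAI ...
        entry.m_pendingRemove = false;  // a freshly created object has nothing pending
        return task_process_status::task_success;
    }
    // DEL
    if (entry.m_refCount > 0)
    {
        // Still referenced: flag it and leave the DEL notification in m_toSync.
        entry.m_pendingRemove = true;
        return task_process_status::task_need_retry;
    }
    // ... remove the object via SAI ...
    return task_process_status::task_success;
}
```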

Why I did it
Fix bug.

How I verified it
Mock tests and manual testing (e.g. `config qos reload`).
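
A hypothetical skeleton of the remove-then-re-add mock test is shown below; every helper here (`RemoveThenAddTest`, `appendTask`, the task queue) is invented for illustration. The real fixtures live in qosorch_ut.cpp and bufferorch_ut.cpp and drive the actual orchs through mocked SAI.

```cpp
#include <gtest/gtest.h>
#include <deque>
#include <string>
#include <tuple>

// Invented fixture: records (table, key, op) tasks the way a consumer would see them.
class RemoveThenAddTest : public ::testing::Test
{
protected:
    std::deque<std::tuple<std::string, std::string, std::string>> tasks;

    void appendTask(std::string table, std::string key, std::string op)
    {
        tasks.emplace_back(std::move(table), std::move(key), std::move(op));
    }
};

TEST_F(RemoveThenAddTest, ReferencedObjRemoveThenAdd)
{
    // The four notifications from the PR description, in order.
    appendTask("DSCP_TO_TC_MAP", "AZURE", "DEL");    // step 1: blocked by the reference
    appendTask("PORT_QOS_MAP", "Ethernet0", "DEL");  // step 2
    appendTask("PORT_QOS_MAP", "Ethernet0", "SET");  // step 3: handled together with step 2
    appendTask("DSCP_TO_TC_MAP", "AZURE", "SET");    // step 4: retried while pending remove

    // A real test would drain the consumers and assert that m_toSync ends up empty.
    EXPECT_EQ(tasks.size(), 4u);
}
```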

Details if related

  1. Orch::parseReference skips a pending-remove object when resolving a reference (see the sketch after this list).
  2. Common logic for objects that can be referenced by other objects:
    • Return task_process_status::task_need_retry when a pending-remove object is about to be created. This prevents step 4, the re-adding of the pending-remove object, from being executed prematurely.
    • Initialize pendingRemove to false when creating an object.
    • Set pendingRemove to true when an object cannot be removed because its reference count is non-zero.

This common logic applies to the following objects:

`BUFFER_POOL`, handled by `BufferOrch::processBufferPool`,
`BUFFER_PROFILE`, handled by `BufferOrch::processBufferProfile`,
`SCHEDULER`, handled by `QosOrch::handleSchedulerTable`,
and all types of QoS maps, handled by `QosMapHandler::processWorkItem`.
  3. Remove the function QosOrch::ResolveMapAndApplyToPort. I remove it in this PR because:
    • I would otherwise have to adjust the function accordingly.
    • The function has not been called by any other function since 2018, so I cannot cover the change in a unit test; removing it meets the coverage requirement.
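
As a sketch of item 1, reference resolution can skip pending-remove objects along the lines below. This is a simplification, not the real Orch::parseReference signature: `resolveReference`, `TypeMap`, and `ReferencedObject` are hypothetical stand-ins for the orchagent type-map machinery.

```cpp
#include <cstdint>
#include <map>
#include <string>

struct ReferencedObject
{
    bool m_pendingRemove = false;  // set while a blocked DEL is waiting in m_toSync
    uint64_t m_saiObjectId = 0;    // OID handed back to the referencing object
};

using TypeMap = std::map<std::string, ReferencedObject>;

// Returns true and fills `oid` only when the referenced object exists and is not
// flagged pending-remove; otherwise the caller keeps its task in m_toSync to retry.
bool resolveReference(const TypeMap &typeMap, const std::string &name, uint64_t &oid)
{
    auto it = typeMap.find(name);
    if (it == typeMap.end())
        return false;              // referenced object not created yet: retry later
    if (it->second.m_pendingRemove)
        return false;              // step 3: skip an object that is awaiting removal
    oid = it->second.m_saiObjectId;
    return true;
}
```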

keboliu previously approved these changes Mar 30, 2022
@liat-grozovik liat-grozovik changed the title [Bugfix][QoS] Resolve an issue in the sequence where a referenced object removed and then the referencing object deleting and then re-adding [QoS] Resolve an issue in the sequence where a referenced object removed and then the referencing object deleting and then re-adding Mar 30, 2022
@stephenxs (Collaborator, Author):

I don't quite understand the coverage report. It looks like many statements covered by the mock test were eventually identified as uncovered.
@liat-grozovik @prsunny, where can I find documentation related to this coverage report? Thanks.

@prsunny (Collaborator) commented Mar 30, 2022

@theasianpianist, could you please help @stephenxs with the coverage info?

@stephenxs (Collaborator, Author):

> @theasianpianist, could you please help @stephenxs with the coverage info?

The uncovered lines reported for qosorch.cpp are 117-118, 164, 1134, 1136-1137, 1260, 1665, 1668-1669, 1671, 1673-1674, 1684.

  • Lines 164 and 1260 are definitely covered by the mock tests (see the backtraces below); I don't understand why they were identified as uncovered.
  • The rest are indeed not covered.

Coverage of line 164:

Thread 1 "tests" hit Breakpoint 2, QosMapHandler::processWorkItem (this=<optimized out>, consumer=...) at ../../orchagent/qosorch.cpp:164
164                 (*(QosOrch::getTypeMap()[qos_map_type_name]))[qos_object_name].m_pendingRemove = true;
(gdb) bt
#0  QosMapHandler::processWorkItem (this=<optimized out>, consumer=...) at ../../orchagent/qosorch.cpp:164
#1  0x0000555555759d1f in QosOrch::handleDscpToTcTable (this=<optimized out>, consumer=...) at ../../orchagent/qosorch.cpp:335
#2  0x0000555555758870 in QosOrch::doTask (this=0x555555bb2580, consumer=...) at /usr/include/c++/8/bits/basic_string.h:936
#3  0x0000555555758b47 in QosOrch::doTask (this=<optimized out>) at ../../orchagent/qosorch.cpp:1887
#4  0x00005555556678fe in qosorch_test::QosOrchTest_QosOrchTestPortQosMapRemoveOneField_Test::TestBody (this=0x555555c7cac0) at qosorch_ut.cpp:441
```

Coverage of line 1260:

Thread 1 "tests" hit Breakpoint 4, QosOrch::handleSchedulerTable (this=0x555555bb2580, consumer=...) at ../../orchagent/qosorch.cpp:1260
1260                (*(m_qos_maps[qos_map_type_name]))[qos_object_name].m_pendingRemove = true;
(gdb) bt
#0  QosOrch::handleSchedulerTable (this=0x555555bb2580, consumer=...) at ../../orchagent/qosorch.cpp:1260
#1  0x0000555555758870 in QosOrch::doTask (this=0x555555bb2580, consumer=...) at /usr/include/c++/8/bits/basic_string.h:936
#2  0x0000555555758b47 in QosOrch::doTask (this=<optimized out>) at ../../orchagent/qosorch.cpp:1887
#3  0x00005555556697c3 in qosorch_test::QosOrchTest_QosOrchTestQueueRemoveScheduler_Test::TestBody (this=0x555555c36230) at qosorch_ut.cpp:538
```

@theasianpianist (Contributor):

@stephenxs I'm looking into the coverage discrepancies and hope to have them resolved soon. I don't believe it's a problem on your end.

Commits pushed (each signed off by Stephen Sun <stephens@nvidia.com>):

…r to retry
Cover the case where a referenced object is created when it is pending remove
…nce 2018
@stephenxs (Collaborator, Author):

/azpw run

@mssonicbld (Collaborator):

/AzurePipelines run

@azure-pipelines:

Azure Pipelines successfully started running 1 pipeline(s).

@stephenxs (Collaborator, Author):

A mock test has been added for the buffer orch. Now all the changes are covered.

@liat-grozovik (Collaborator):

@neethajohn could you please help to review?

@liat-grozovik (Collaborator):

@theasianpianist, @neethajohn could you please help to signoff?

(Resolved review threads on orchagent/orch.cpp and orchagent/qosorch.cpp.)
@neethajohn (Contributor):

LGTM, but code coverage still fails.

(Commit pushed, signed off by Stephen Sun <stephens@nvidia.com>.)
@stephenxs (Collaborator, Author):

/azpw run

@mssonicbld (Collaborator):

/AzurePipelines run

@azure-pipelines:

Azure Pipelines successfully started running 1 pipeline(s).

@stephenxs (Collaborator, Author):

@theasianpianist
I noticed the coverage was reported as 43%, but all the changes have been covered by the mock tests.
Is there anything I can do to overcome the issue, or can we merge the PR without the coverage check passing?
Thanks.

@stephenxs (Collaborator, Author):

List of the coverage of all the code changes (verified with gdb breakpoints):

Thread 1 "tests" hit Breakpoint 3, BufferOrch::processBufferPool (this=0x555555cb33b0, tuple=std::tuple containing = {...})
    at ../../orchagent/bufferorch.cpp:573
573                 (*(m_buffer_type_maps[map_type_name]))[object_name].m_pendingRemove = false;
(gdb) bt
#0  BufferOrch::processBufferPool (this=0x555555cb33b0, tuple=std::tuple containing = {...}) at ../../orchagent/bufferorch.cpp:573
#1  0x00005555557710ac in BufferOrch::doTask (this=0x555555cb33b0, consumer=...) at /usr/include/c++/8/bits/basic_string.h:936
#2  0x000055555577149d in BufferOrch::doTask (this=0x555555cb33b0) at /usr/include/c++/8/bits/stl_tree.h:285
#3  0x00005555556524ce in portsorch_test::PortsOrchTest_PortReadinessColdBoot_Test::TestBody (this=0x555555ca07f0) at portsorch_ut.cpp:360


Thread 1 "tests" hit Breakpoint 6, BufferOrch::processBufferProfile (this=0x555555cb33b0, tuple=std::tuple containing = {...})
    at ../../orchagent/bufferorch.cpp:802
802                 (*(m_buffer_type_maps[map_type_name]))[object_name].m_pendingRemove = false;
(gdb) bt
#0  BufferOrch::processBufferProfile (this=0x555555cb33b0, tuple=std::tuple containing = {...}) at ../../orchagent/bufferorch.cpp:802
#1  0x00005555557710ac in BufferOrch::doTask (this=0x555555cb33b0, consumer=...) at /usr/include/c++/8/bits/basic_string.h:936
#2  0x00005555557714ad in BufferOrch::doTask (this=0x555555cb33b0) at /usr/include/c++/8/bits/stl_tree.h:285
#3  0x00005555556524ce in portsorch_test::PortsOrchTest_PortReadinessColdBoot_Test::TestBody (this=0x555555ca07f0) at portsorch_ut.cpp:360


Thread 1 "tests" hit Breakpoint 7, BufferOrch::processBufferProfile (this=0x555555c31840, tuple=...) at ../../orchagent/bufferorch.cpp:815
815                 (*(m_buffer_type_maps[map_type_name]))[object_name].m_pendingRemove = true;
(gdb) bt
#0  BufferOrch::processBufferProfile (this=0x555555c31840, tuple=...) at ../../orchagent/bufferorch.cpp:815
#1  0x00005555557710ac in BufferOrch::doTask (this=0x555555c31840, consumer=...) at /usr/include/c++/8/bits/basic_string.h:936
#2  0x00005555557714ad in BufferOrch::doTask (this=0x555555c31840) at /usr/include/c++/8/bits/stl_tree.h:285
#3  0x00005555556801d7 in bufferorch_test::BufferOrchTest_BufferOrchTestBufferPgReferencingObjRemoveThenAdd_Test::TestBody (this=0x555555cdf0c0) at bufferorch_ut.cpp:299


Thread 1 "tests" hit Breakpoint 5, BufferOrch::processBufferProfile (this=0x555555e35610, tuple=std::tuple containing = {...})
    at ../../orchagent/bufferorch.cpp:659
659                 SWSS_LOG_NOTICE("Entry %s %s is pending remove, need retry", map_type_name.c_str(), object_name.c_str());
(gdb) bt
#0  BufferOrch::processBufferProfile (this=0x555555e35610, tuple=std::tuple containing = {...}) at ../../orchagent/bufferorch.cpp:659
#1  0x00005555557710ac in BufferOrch::doTask (this=0x555555e35610, consumer=...) at /usr/include/c++/8/bits/basic_string.h:936
#2  0x00005555557714ad in BufferOrch::doTask (this=0x555555e35610) at /usr/include/c++/8/bits/stl_tree.h:285
#3  0x00005555556814aa in bufferorch_test::BufferOrchTest_BufferOrchTestReferencingObjRemoveThenAdd_Test::TestBody (this=0x555555d00330)
    at bufferorch_ut.cpp:373


Thread 1 "tests" hit Breakpoint 4, BufferOrch::processBufferPool (this=0x555555e35610, tuple=std::tuple containing = {...})
    at ../../orchagent/bufferorch.cpp:588
588                 (*(m_buffer_type_maps[map_type_name]))[object_name].m_pendingRemove = true;
(gdb) bt
#0  BufferOrch::processBufferPool (this=0x555555e35610, tuple=std::tuple containing = {...}) at ../../orchagent/bufferorch.cpp:588
#1  0x00005555557710ac in BufferOrch::doTask (this=0x555555e35610, consumer=...) at /usr/include/c++/8/bits/basic_string.h:936
#2  0x000055555577149d in BufferOrch::doTask (this=0x555555e35610) at /usr/include/c++/8/bits/stl_tree.h:285
#3  0x00005555556819a0 in bufferorch_test::BufferOrchTest_BufferOrchTestReferencingObjRemoveThenAdd_Test::TestBody (this=0x555555d00330) at bufferorch_ut.cpp:396

Thread 1 "tests" hit Breakpoint 2, BufferOrch::processBufferPool (this=0x555555e35610, tuple=std::tuple containing = {...})
    at ../../orchagent/bufferorch.cpp:394
394                 SWSS_LOG_NOTICE("Entry %s %s is pending remove, need retry", map_type_name.c_str(), object_name.c_str());
(gdb) bt
#0  BufferOrch::processBufferPool (this=0x555555e35610, tuple=std::tuple containing = {...}) at ../../orchagent/bufferorch.cpp:394
#1  0x00005555557710ac in BufferOrch::doTask (this=0x555555e35610, consumer=...) at /usr/include/c++/8/bits/basic_string.h:936
#2  0x000055555577149d in BufferOrch::doTask (this=0x555555e35610) at /usr/include/c++/8/bits/stl_tree.h:285
#3  0x00005555556819a0 in bufferorch_test::BufferOrchTest_BufferOrchTestReferencingObjRemoveThenAdd_Test::TestBody (this=0x555555d00330) at bufferorch_ut.cpp:396


Thread 1 "tests" hit Breakpoint 8, Orch::parseReference (this=<optimized out>, type_maps=std::map with 13 elements = {...}, ref_in="AZURE",
    type_name="DSCP_TO_TC_MAP", object_name="") at ../../orchagent/orch.cpp:362
362             SWSS_LOG_NOTICE("map:%s contains a pending removed object %s, skip\n", type_name.c_str(), ref_in.c_str());
(gdb) bt
#0  Orch::parseReference (this=<optimized out>, type_maps=std::map with 13 elements = {...}, ref_in="AZURE", type_name="DSCP_TO_TC_MAP", object_name="")
    at ../../orchagent/orch.cpp:362
#1  0x00005555556b9998 in Orch::resolveFieldRefValue (this=this@entry=0x555555d92150, type_maps=std::map with 13 elements = {...},
    field_name="dscp_to_tc_map", ref_type_name="DSCP_TO_TC_MAP", tuple=std::tuple containing = {...}, sai_object=@0x7fffffffd840: 140737488345168,
    referenced_object_name="") at ../../orchagent/orch.cpp:392
#2  0x000055555576e136 in QosOrch::handlePortQosMapTable (this=0x555555d92150, consumer=..., tuple=std::tuple containing = {...})
    at /usr/include/c++/8/bits/basic_string.h:390
#3  0x0000555555769844 in QosOrch::doTask (this=0x555555d92150, consumer=...) at /usr/include/c++/8/bits/basic_string.h:936
#4  0x0000555555769b5d in QosOrch::doTask (this=<optimized out>) at ../../orchagent/qosorch.cpp:1813
#5  0x0000555555675824 in qosorch_test::QosOrchTest_QosOrchTestPortQosMapReferencingObjRemoveThenAdd_Test::TestBody (this=0x555555c93b60) at qosorch_ut.cpp:801


Thread 1 "tests" hit Breakpoint 1, QosMapHandler::processWorkItem (this=0x7fffffffd128, consumer=..., tuple=...) at ../../orchagent/qosorch.cpp:146
146                 (*(QosOrch::getTypeMap()[qos_map_type_name]))[qos_object_name].m_pendingRemove = false;
(gdb) bt
#0  QosMapHandler::processWorkItem (this=0x7fffffffd128, consumer=..., tuple=...) at ../../orchagent/qosorch.cpp:146
#1  0x000055555576ab17 in QosOrch::handleDscpToTcTable (this=<optimized out>, consumer=..., tuple=std::tuple containing = {...})
    at ../../orchagent/qosorch.cpp:333
#2  0x0000555555769844 in QosOrch::doTask (this=0x555555d92150, consumer=...) at /usr/include/c++/8/bits/basic_string.h:936
#3  0x0000555555769b17 in QosOrch::doTask (this=<optimized out>) at ../../orchagent/qosorch.cpp:1810
#4  0x000055555567d7a2 in qosorch_test::QosOrchTest::SetUp (this=<optimized out>) at qosorch_ut.cpp:370


Thread 1 "tests" hit Breakpoint 2, QosOrch::handleSchedulerTable (this=0x555555d92150, consumer=..., tuple=...) at ../../orchagent/qosorch.cpp:1243
1243                (*(m_qos_maps[qos_map_type_name]))[qos_object_name].m_pendingRemove = false;
(gdb) bt
#0  QosOrch::handleSchedulerTable (this=0x555555d92150, consumer=..., tuple=...) at ../../orchagent/qosorch.cpp:1243
#1  0x0000555555769844 in QosOrch::doTask (this=0x555555d92150, consumer=...) at /usr/include/c++/8/bits/basic_string.h:936
#2  0x0000555555769b17 in QosOrch::doTask (this=<optimized out>) at ../../orchagent/qosorch.cpp:1810
#3  0x000055555567d7a2 in qosorch_test::QosOrchTest::SetUp (this=<optimized out>) at qosorch_ut.cpp:370


Thread 1 "tests" hit Breakpoint 2, QosMapHandler::processWorkItem (this=<optimized out>, consumer=..., tuple=...) at ../../orchagent/qosorch.cpp:162
162                 (*(QosOrch::getTypeMap()[qos_map_type_name]))[qos_object_name].m_pendingRemove = true;
(gdb) bt
#0  QosMapHandler::processWorkItem (this=<optimized out>, consumer=..., tuple=...) at ../../orchagent/qosorch.cpp:162
#1  0x000055555576ab17 in QosOrch::handleDscpToTcTable (this=<optimized out>, consumer=..., tuple=std::tuple containing = {...})
    at ../../orchagent/qosorch.cpp:333
#2  0x0000555555769844 in QosOrch::doTask (this=0x555555d92150, consumer=...) at /usr/include/c++/8/bits/basic_string.h:936
#3  0x0000555555769b17 in QosOrch::doTask (this=<optimized out>) at ../../orchagent/qosorch.cpp:1810
#4  0x000055555566ea8e in qosorch_test::QosOrchTest_QosOrchTestPortQosMapRemoveOneField_Test::TestBody (this=0x555555c963a0) at qosorch_ut.cpp:441


Thread 1 "tests" hit Breakpoint 4, QosOrch::handleSchedulerTable (this=0x555555d92150, consumer=..., tuple=std::tuple containing = {...})
    at ../../orchagent/qosorch.cpp:1257
1257                (*(m_qos_maps[qos_map_type_name]))[qos_object_name].m_pendingRemove = true;
(gdb) bt
#0  QosOrch::handleSchedulerTable (this=0x555555d92150, consumer=..., tuple=std::tuple containing = {...}) at ../../orchagent/qosorch.cpp:1257
#1  0x0000555555769844 in QosOrch::doTask (this=0x555555d92150, consumer=...) at /usr/include/c++/8/bits/basic_string.h:936
#2  0x0000555555769b17 in QosOrch::doTask (this=<optimized out>) at ../../orchagent/qosorch.cpp:1810
#3  0x0000555555670b10 in qosorch_test::QosOrchTest_QosOrchTestQueueRemoveScheduler_Test::TestBody (this=0x555555d04020) at qosorch_ut.cpp:538


Thread 1 "tests" hit Breakpoint 1, QosMapHandler::processWorkItem (this=0x7fffffffdc58, consumer=..., tuple=std::tuple containing = {...})
    at ../../orchagent/qosorch.cpp:115
115                 SWSS_LOG_NOTICE("Entry %s %s is pending remove, need retry", qos_map_type_name.c_str(), qos_object_name.c_str());
(gdb) bt
#0  QosMapHandler::processWorkItem (this=0x7fffffffdc58, consumer=..., tuple=std::tuple containing = {...}) at ../../orchagent/qosorch.cpp:115
#1  0x000055555576ab17 in QosOrch::handleDscpToTcTable (this=<optimized out>, consumer=..., tuple=std::tuple containing = {...})
    at ../../orchagent/qosorch.cpp:333
#2  0x0000555555769844 in QosOrch::doTask (this=0x555555d92150, consumer=...) at /usr/include/c++/8/bits/basic_string.h:936
#3  0x0000555555769b17 in QosOrch::doTask (this=<optimized out>) at ../../orchagent/qosorch.cpp:1810
#4  0x0000555555675dd3 in qosorch_test::QosOrchTest_QosOrchTestPortQosMapReferencingObjRemoveThenAdd_Test::TestBody (this=0x555555c93b60) at qosorch_ut.cpp:836


Thread 1 "tests" hit Breakpoint 3, QosOrch::handleSchedulerTable (this=0x555555d92150, consumer=..., tuple=std::tuple containing = {...})
    at ../../orchagent/qosorch.cpp:1133
1133                SWSS_LOG_NOTICE("Entry %s %s is pending remove, need retry", qos_map_type_name.c_str(), qos_object_name.c_str());
(gdb) bt
#0  QosOrch::handleSchedulerTable (this=0x555555d92150, consumer=..., tuple=std::tuple containing = {...}) at ../../orchagent/qosorch.cpp:1133
#1  0x0000555555769844 in QosOrch::doTask (this=0x555555d92150, consumer=...) at /usr/include/c++/8/bits/basic_string.h:936
#2  0x0000555555769b17 in QosOrch::doTask (this=<optimized out>) at ../../orchagent/qosorch.cpp:1810
#3  0x0000555555677518 in qosorch_test::QosOrchTest_QosOrchTestQueueReferencingObjRemoveThenAdd_Test::TestBody (this=0x555555cc8800) at qosorch_ut.cpp:922
```

@liat-grozovik (Collaborator):

As discussed offline, coverage is OK for the new code, and the error is in a different test, not relevant to these changes.

@liat-grozovik liat-grozovik merged commit d8fadc6 into sonic-net:master Apr 18, 2022
@stephenxs stephenxs deleted the fix-add-del-seq branch April 18, 2022 06:18
judyjoseph pushed a commit that referenced this pull request Apr 25, 2022
…ved and then the referencing object deleting and then re-adding (#2210)

preetham-singh pushed a commit to preetham-singh/sonic-swss that referenced this pull request Aug 6, 2022
…ved and then the referencing object deleting and then re-adding (sonic-net#2210)
