
mgr/dashboard: progress: support rbd_support module async tasks #29424

Open · wants to merge 3 commits into base: master
Conversation

@rjfd (Contributor) commented Jul 31, 2019

This PR adds support for rbd_support module async tasks by showing those tasks' events (from the progress module) as dashboard RBD tasks.

In practice, when the user deletes an RBD image from the CLI using the async task mechanism of the rbd_support module, the dashboard frontend automatically shows that operation and its progress in the RBD list page.

Signed-off-by: Ricardo Dias rdias@suse.com

rjfd added some commits Jul 31, 2019

mgr/dashboard: progress: support rbd_support module async tasks
Signed-off-by: Ricardo Dias <rdias@suse.com>
mgr/dashboard: frontend: fix defaultBuilder call in task-list service
Signed-off-by: Ricardo Dias <rdias@suse.com>
mgr/dashboard: frontend: use default task builder in rbd-list component
Signed-off-by: Ricardo Dias <rdias@suse.com>

@rjfd rjfd requested a review from ricardoasmarques Jul 31, 2019

'flatten': "flatten",
'trash remove': "trash/remove"
}
action = action_map[refs['action']] if refs['action'] in action_map else refs['action']

@dillaman (Contributor) commented Jul 31, 2019
Nit: action = action_map.get(refs['action'], refs['action'])?
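The suggested `dict.get` form is equivalent to the conditional expression, with a single lookup instead of two. A minimal sketch (the `map_action` helper and sample `refs` values are illustrative, not from the PR):

```python
# Illustrative excerpt: mapping rbd_support task actions to dashboard task names.
action_map = {
    'flatten': 'flatten',
    'trash remove': 'trash/remove',
}

def map_action(refs):
    # dict.get(key, default) returns the mapped value when the key exists,
    # otherwise the default -- the same result as
    # action_map[refs['action']] if refs['action'] in action_map else refs['action']
    return action_map.get(refs['action'], refs['action'])

print(map_action({'action': 'trash remove'}))  # trash/remove
print(map_action({'action': 'remove'}))        # remove (no mapping, falls through)
```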

@@ -14,37 +14,67 @@
from .. import mgr, logger


def _progress_event_to_dashboard_task_common(event, task):
    if event['refs'] and isinstance(event['refs'], dict):

@dillaman (Contributor) commented Jul 31, 2019
Nit: I don't think event['refs'] is necessary. If refs isn't in the dict, you'd get a KeyError and if it's not a dict, it is being checked by the second condition.
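A quick sketch of the reasoning behind this nit (the `has_dict_refs` helper is hypothetical, not code from the PR): `isinstance` alone already rejects non-dict values, and the truthiness check never guards against a missing key, since the subscript raises first. One behavioral difference does remain: an empty dict is falsy, so the combined check skips it while `isinstance` alone accepts it.

```python
# Hypothetical stand-in for the second half of the condition.
def has_dict_refs(event):
    return isinstance(event['refs'], dict)

# isinstance alone already rejects non-dict values such as None:
print(has_dict_refs({'refs': None}))                  # False
print(has_dict_refs({'refs': {'action': 'remove'}}))  # True

# A missing 'refs' key raises KeyError with either form, so the
# truthiness check does not guard against it:
try:
    has_dict_refs({})
except KeyError:
    print('KeyError')  # KeyError
```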

@rjfd (Contributor, author) commented Jul 31, 2019

@dillaman I have two questions about the rbd_support tasks:

  • why are the tasks scheduled to run sequentially? i.e., why can't we run a limited number of tasks in parallel?

  • the current implementation checks if a task was already scheduled not only by looking at the ones executing (or waiting to execute) but also by looking at the list of tasks that already finished. This creates a non-intuitive behavior that can be reproduced with the following steps:

    1. create rbd image "disk1"
    2. schedule delete task (rbd task add remove disk1), and wait for it to finish
    3. create rbd image "disk1"
    4. schedule delete task (rbd task add remove disk1), <- does not do anything because it thinks it is replaying step 2.

Can we change this behavior?

@dillaman (Contributor) commented Jul 31, 2019

* why are the tasks scheduled to run sequentially? i.e., why can't we run a limited number of tasks in parallel?

Because there is an immediate need to get this backported to Mimic and Nautilus to support ceph-csi. Parallel tasks could be added in the future, but they weren't needed for initial support and would just delay the work.

* the current implementation checks if a task was already scheduled not only by looking at the ones executing (or waiting to execute) but also by looking at the list of tasks that already finished. This creates a non-intuitive behavior that can be reproduced with the following steps:

It's a requirement for all ceph CLI commands to be idempotent. See the last commit in my PR. It could be tweaked to also store image ids in the refs to help avoid your scenario.
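The tweak suggested above can be sketched abstractly: keying completed tasks on the image id rather than the image name means a recreated image with the same name gets a fresh task. This is a hypothetical illustration of the idea, not the actual rbd_support implementation:

```python
# Hypothetical sketch: idempotency check keyed on (action, image_id) instead of
# (action, image_name), so recreating an image under the same name is not
# mistaken for a replay of the earlier, already-finished task.
completed = set()  # refs of finished tasks; persisted elsewhere in practice

def schedule(action, image_name, image_id):
    key = (action, image_id)
    if key in completed:
        return 'duplicate (replayed)'  # idempotent: same task already ran
    completed.add(key)
    return 'scheduled'

print(schedule('remove', 'disk1', 'id-1'))  # scheduled
print(schedule('remove', 'disk1', 'id-1'))  # duplicate (replayed)
# A new image reusing the name "disk1" has a different id, so it is scheduled:
print(schedule('remove', 'disk1', 'id-2'))  # scheduled
```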

@rjfd (Contributor, author) commented Jul 31, 2019

@dillaman thanks!

@runsisi (Contributor) commented Aug 1, 2019

Hi @dillaman, I am wondering what will happen if the current active mgr fails over to another mgr? Will those async tasks be restarted? Thanks.

@dillaman (Contributor) commented Aug 1, 2019

Hi @dillaman, I am wondering what will happen if the current active mgr fails over to another mgr? Will those async tasks be restarted? Thanks.

Indeed. The tasks are persisted in a new rbd_task object in each pool/namespace. At MGR start-up, these tasks are reloaded from the OSD and rescheduled.
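The failover behavior described can be sketched abstractly. All names here are hypothetical stand-ins: the real module stores tasks in an `rbd_task` object per pool/namespace on the OSDs, while this uses an in-memory blob:

```python
import json

# Hypothetical in-memory stand-in for the per-pool/namespace rbd_task object.
class TaskStore:
    def __init__(self):
        self._blob = '[]'  # stands in for the object stored on the OSDs

    def save(self, tasks):
        self._blob = json.dumps(tasks)

    def load(self):
        return json.loads(self._blob)

store = TaskStore()

# The active mgr persists scheduled tasks alongside running them.
store.save([{'action': 'remove', 'image': 'disk1'}])

# On failover, the newly active mgr reloads pending tasks and reschedules them.
recovered = store.load()
print(recovered)  # [{'action': 'remove', 'image': 'disk1'}]
```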

@runsisi (Contributor) commented Aug 1, 2019

@dillaman That makes sense. Thank you!

@rjfd (Contributor, author) commented Aug 2, 2019

jenkins retest this please
