
Changing refresh target for vm creation #10934

Merged

merged 1 commit into ManageIQ:master from pkliczewski/master on Oct 4, 2016
Conversation

pkliczewski
Contributor

@pkliczewski pkliczewski commented Sep 1, 2016

This PR improves performance of RHEV provider when new vm is created and fixes:

https://bugzilla.redhat.com/1069707

@pkliczewski
Contributor Author

@miq-bot assign @blomquisg

@blomquisg
Member

blomquisg commented Sep 7, 2016

@pkliczewski the problem with this current approach is that the event is never saved to the database, because you're circumventing the entire event handling process.

I've looked over what this PR is doing and did a refresher course on the new event handling via automate, and I think I have a suggestion for you.

Add a new method in ems_event/automate.rb called refresh_new_target. Then, add a new method in the automation engine called miq_event_action_refresh_new_target. Finally, you'd have to add a new event handler fixture similar to the event action refresh handler.

Then, your handlers for user_add_vm and user_add_vm_finished_success could call this new event handler instead of event_action_refresh with the target as the source VM.

This way, you can get rid of the special handling in the event catcher runner that bypasses the standard event handling.

The new method in ems_event/automate.rb called refresh_new_target can do the special handling that's done by refresh_new in this PR. Also, the refresh_new_target method can do a couple things to be cleaner:

  1. Look up manager object from the event, like ems = event.ext_management_system.
  2. Guard against other types of managers accidentally using this with the SupportsFeatureMixin. The end result could look like:
ems = event.ext_management_system
if ems.supports_refresh_new_target?
  vm_class = ems.class::Vm
  target = vm_class.create_from_event(event)
  EmsRefresh.refresh(target)
end

Now, when the event is caught, it's parsed, saved to the DB, and handled through the normal event handling mechanism. And this should remove any (or most) of the RHEV-specific references in the code.

/cc @Fryguy, @durandom thoughts? Is that about right?

@pkliczewski
Contributor Author

@blomquisg Thank you for reviewing. I will follow your suggestions.

@blomquisg
Member

@pkliczewski wait to see if @durandom or @Fryguy have any other ideas ... I've pinged them both to get some other insight.

@durandom
Member

durandom commented Sep 8, 2016

the problem with this current approach is that the event is never saved to the database

Is that really the case? To me it looks like MiqQueue.put_or_update does put the event on the queue and into the db.
Going through Automate, as @blomquisg suggested, looks cleaner and more in line with the current event system. But it still feels like a workaround. I think the Vm should be created through the refresh code that generates those hashes and then calls some save_inventory methods...

It does add those methods to the Automate side of things instead of to the event code :) Does this buy us more customizability for users? Or more verbose logging? I don't know :(

@pkliczewski
Contributor Author

@durandom We are not able to create the Vm through refresh code because it is too late. A limitation of targeted refresh is that we can only target existing Vms. The goal of my work is to target everything, since a full refresh is very costly.

What we agreed on is to create it in the refresh worker so we have the same guarantees of db access serialization as the refresh code has.

@blomquisg
Member

@durandom the trick with MiqQueue.put_or_update is that it creates a message on the queue, but we never get anything saved to the event_streams table, which is where we store all the events (it used to be the ems_events table).

Bypassing the standard event handling means that we don't store the event in event_streams, and we'll lose it on the timeline.

@@ -415,6 +411,20 @@ def self.vm_inv_to_hashes(inv, _storage_inv, storage_uids, cluster_uids, host_ui
return result, result_uids
end

def self.create_vm_hash(data)
Member

Can you pass these in as named parameters? It'll be easier to read than just as an array

Contributor Author

@agrare Will do

@Fryguy
Member

Fryguy commented Sep 8, 2016

@pkliczewski @oourfali In the interest of time, I've transferred this over to @agrare and he is completely up to speed on our discussions.

@@ -9,4 +9,4 @@ object:
description:
fields:
- rel5:
-    value: "/System/event_handlers/event_action_refresh?target=src_ems_cluster"
+    value: "/System/event_handlers/event_action_refresh?target=src_vm"
Member

So instead of calling event_action_refresh here and having to do some special casing in the event catcher, a better approach would be to create a new method like event_action_new_vm.

You'll need a builtin automate method similar to miq_event_action_refresh and an EmsEvent method similar to refresh which puts a new_vm action on the queue for the Refresh Worker. This way, creating the VM record is done by the Refresh Worker.
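A minimal, self-contained sketch of the flow described above, using a plain Ruby Queue as a stand-in for MiqQueue (queue_refresh_new_target and the message shape are illustrative, not the actual ManageIQ API): the event handler only enqueues work, and the single Refresh Worker consumer creates the record, so no two workers race to create it.

```ruby
# Stand-in for MiqQueue: the event handler enqueues, the Refresh Worker consumes.
work_queue = Queue.new # thread-safe FIFO from Ruby's core library

# Event-handler side: instead of creating the VM record inline,
# put a message on the queue (hypothetical message shape).
def queue_refresh_new_target(queue, event)
  queue << { :method_name => "refresh_new_target", :args => [event] }
end

queue_refresh_new_target(work_queue, { :vm_ref => "/api/vms/1" })

# Refresh-Worker side: a single consumer drains the queue, so only one
# worker ever creates the record for a given VM.
message = work_queue.pop
message[:method_name] # => "refresh_new_target"
```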

Contributor Author

Sure, will do

@pkliczewski
Contributor Author

@agrare @blomquisg @Fryguy @durandom I applied your comments, please review

ems = ext_management_system
if ems.supports_refresh_new_target?
vm_class = ems.class::Vm
target = vm_class.create_from_event(self)
Member

Okay, making progress :) So this is still creating the VM record in the Event Handler; the next step is to put an item on the queue for the Refresh Worker so that we don't have any race conditions with two workers creating a record for the same VM.

@@ -55,6 +55,10 @@ def self.queue_refresh(target, id = nil, sync = false)
end
end

def self.queue_event_for_refresh(event, ems)
Member

You'll want to keep the method names consistent: typically the naming pattern for a queued method named refresh_new_target would be queue_refresh_new_target, that way it's more obvious what is going to get queued.


def save_new_target(target_hash)
vm_hash = target_hash[:vm]
existing_vm = VmOrTemplate.find_by(:ems_ref => vm_hash[:ems_ref])
Member

As we get to creating new targets that aren't VMs we'll want to check for target_hash[:vm].nil?
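A tiny runnable sketch of that guard (the method name and target_hash shape follow the PR; the body is reduced to just the lookup key for illustration):

```ruby
# Guard: future non-VM targets (hosts, clusters, ...) won't carry a :vm entry,
# so bail out early instead of dereferencing nil.
def save_new_target(target_hash)
  vm_hash = target_hash[:vm]
  return nil if vm_hash.nil?
  vm_hash[:ems_ref] # stand-in for the real save logic
end

save_new_target(:vm => { :ems_ref => "/api/vms/1" })     # => "/api/vms/1"
save_new_target(:host => { :ems_ref => "/api/hosts/1" }) # => nil
```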

Contributor Author

I added the check

@agrare
Member

agrare commented Sep 30, 2016

Couple of minor things but I think they can be fixed later, @Fryguy can you take a quick look?

@miq-bot
Member

miq-bot commented Sep 30, 2016

This pull request is not mergeable. Please rebase and repush.

def save_new_target(target_hash)
unless target_hash[:vm].nil?
vm_hash = target_hash[:vm]
existing_vm = VmOrTemplate.find_by(:ems_ref => vm_hash[:ems_ref])
Member

This needs to be scoped by the ems_id. The ems_ref could be identical (think vms/1) in multiple EMSes.
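The ambiguity being pointed out here can be shown with plain Ruby hashes standing in for database rows (the data is made up for illustration):

```ruby
# Two VMs in different EMSes can legitimately share the same ems_ref.
vms = [
  { :ems_id => 1, :ems_ref => "/api/vms/1", :name => "vm-on-ems-1" },
  { :ems_id => 2, :ems_ref => "/api/vms/1", :name => "vm-on-ems-2" },
]

# Unscoped lookup (like find_by(:ems_ref => ...)): first match wins,
# which may belong to the wrong EMS.
unscoped = vms.find { |v| v[:ems_ref] == "/api/vms/1" }
unscoped[:name] # => "vm-on-ems-1"

# Scoped lookup (like find_by(:ems_ref => ..., :ems_id => ...)): unambiguous.
scoped = vms.find { |v| v[:ems_ref] == "/api/vms/1" && v[:ems_id] == 2 }
scoped[:name] # => "vm-on-ems-2"
```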

Contributor Author

Done

@@ -149,4 +149,24 @@ def relation_values(association, target)

top_level && (target == true || target.nil? || parent == target) ? :use_association : []
end

def get_cluster(ems, cluster_hash, rp_hash, dc_hash)
cluster = EmsCluster.find_by(:ems_ref => cluster_hash[:ems_ref])
Member

Also needs scoping by ems_id

Contributor Author

Done

cluster.add_resource_pool(rp)
cluster.save!

dc = Datacenter.find_by(:name => dc_hash[:name])
Member

Same on the ems_id. (Also, is there no ems_ref in this table?)

Contributor Author

Done

cluster = EmsCluster.find_by(:ems_ref => cluster_hash[:ems_ref])
if cluster.nil?
rp = ems.resource_pools.build(rp_hash)
rp.save!
Member

If you are going to save it immediately, then instead of build/save! just call create!
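In ActiveRecord, create! on an association is build followed by save! in one call. A toy stand-in class (Collection is hypothetical, not ManageIQ or Rails code) makes the equivalence concrete:

```ruby
# Toy association-like collection illustrating build/save! vs create!.
class Collection
  def initialize
    @records = []
  end
  attr_reader :records

  # build: instantiate a record without persisting it
  def build(attrs)
    attrs.dup
  end

  # save!: persist a previously built record
  def save!(record)
    @records << record
    record
  end

  # create!: build + save! in a single call
  def create!(attrs)
    save!(build(attrs))
  end
end

pools = Collection.new
rp = pools.create!(:name => "Default for Cluster x")
pools.records.size # => 1
```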

Contributor Author

Done


def self.parse_new_cluster(cluster_ref, cluster_id, cluster_name)
{
:ems_ref => cluster_ref,
Member

I'm wondering if the ems_id should be in these other payloads as well for consistency. @agrare Thoughts?

Member

@Fryguy the :ems_id for the new target is given here so that should cover the rest of these (vm, cluster, etc...)

dc_name = nil
ems.with_provider_connection do |rhevm|
dc_name = Ovirt::DataCenter.find_by_href(rhevm, dc_ref).try(:[], :name)
end
Member

I think you should be able to do the following, and not have to play games with the variable manipulation.

dc_name = ems.with_provider_connection do |rhevm|
  Ovirt::DataCenter.find_by_href(rhevm, dc_ref).try(:[], :name)
end
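This works because a Ruby block's last expression is its value, and the suggestion assumes with_provider_connection returns the block's result. A self-contained illustration, with with_connection as a stand-in for with_provider_connection:

```ruby
# Stand-in for with_provider_connection: yields a connection object and
# returns whatever the block evaluates to.
def with_connection
  yield "rhevm-connection"
end

# The block's last expression becomes the method's return value,
# so the assignment captures the name directly, with no outer variable.
dc_name = with_connection { |rhevm| "dc-for-#{rhevm}" }
dc_name # => "dc-for-rhevm-connection"
```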

Contributor Author

Done

cluster.add_resource_pool(rp)
cluster.save!

dc = Datacenter.find_by(:name => dc_hash[:name], :ems_id => ems.id)
Member

@pkliczewski just to confirm, is the name of the datacenter guaranteed unique within an EMS? Also is it not feasible to get the ems_ref for the dc?

Contributor Author

The name guarantees uniqueness of the DC in rhev. Potentially we could use ems_ref here if we wanted to.

Member

@agrare agrare Oct 4, 2016

If possible :ems_ref would be better since it's guaranteed unique within an EMS for all providers and this is common code

Contributor Author

Sure, will change.

Member

Thanks @pkliczewski !

When a vm is created we want to target this specific vm when running
refresh. We used to run a full refresh, which may take a lot of time.

There are two events that we want to handle:
- USER_ADD_VM - when a new vm is created
- USER_ADD_VM_FINISHED_SUCCESS - when a vm is created from a template

Changing the refresh target to the vm for the above events requires creating
the vm before the actual refresh is run. In order to mitigate race conditions
we want to create the new vm just before refresh in EmsRefresh.
@miq-bot
Member

miq-bot commented Oct 4, 2016

Checked commit pkliczewski@7c2295b with ruby 2.2.5, rubocop 0.37.2, and haml-lint 0.16.1
12 files checked, 4 offenses detected

app/models/ems_refresh/save_inventory.rb

  • ❕ - Line 313, Col 3 - Metrics/AbcSize - Assignment Branch Condition size for save_new_target is too high. [29.48/20]

app/models/manageiq/providers/redhat/infra_manager/refresh_parser.rb

app/models/manageiq/providers/redhat/infra_manager/refresher.rb

app/models/mixins/supports_feature_mixin.rb

  • ❗ - Line 79, Col 5 - Style/AlignHash - Align the elements of a hash literal if they span more than one line.

@agrare agrare merged commit 92655f2 into ManageIQ:master Oct 4, 2016
@agrare agrare added this to the Sprint 48 Ending Oct 24, 2016 milestone Oct 4, 2016
@agrare agrare added the euwe/yes label Oct 4, 2016
@agrare
Member

agrare commented Oct 4, 2016

Great job @pkliczewski

@Fryguy
Member

Fryguy commented Oct 4, 2016

Great work @pkliczewski !! 🎉

@pkliczewski
Contributor Author

Cool!!

chessbyte pushed a commit that referenced this pull request Oct 5, 2016
Changing refresh target for vm creation
(cherry picked from commit 92655f2)
@chessbyte
Member

Euwe Backport details:

$ git log
commit f6b13aa59450a2202b7e5bf40628cec657d62346
Author: Adam Grare <agrare@redhat.com>
Date:   Tue Oct 4 12:55:17 2016 -0400

    Merge pull request #10934 from pkliczewski/master

    Changing refresh target for vm creation
    (cherry picked from commit 92655f24f4a8f7f9c4a6d7584583bb03c0ee6be8)

agrare added a commit to agrare/manageiq that referenced this pull request Nov 30, 2016
Changing refresh target for vm creation
(cherry picked from commit 92655f2)
7 participants