Changing refresh target for vm creation #10934
Conversation
@miq-bot assign @blomquisg
(force-pushed from aa592b8 to cfcdff3)
@pkliczewski the problem with this current approach is that the event is never saved to the database, because you're circumventing the entire event handling process. I've looked over what this PR is doing and did a refresher course on the new event handling via automate, and I think I have a suggestion for you. Add a new method in `EmsEvent`. Then, your handlers for these events can call that method, and you can get rid of the special handling in the event catcher. The new method in `EmsEvent` would look something like:
```ruby
ems = event.ext_management_system
if ems.supports_refresh_new_target?
  vm_class = ems.class::Vm
  target = vm_class.create_from_event(event)
  EmsRefresh.refresh(target)
end
```

Now, when the event is caught, it's parsed, saved to the DB, and handled through the normal event handling mechanism. And, this should have removed any (or most) of the RHEV-specific references in the code.
@blomquisg Thank you for reviewing. Will follow your suggestions
@pkliczewski wait to see if @durandom or @Fryguy have any other ideas ... I've pinged them both to get some other insight.
Is that really the case? To me it looks like MiqQueue.put_or_update does put the event on the queue and into the db. It does add those methods to the Automate side of things instead of to the event code :) Does this buy us more customizability by users? Or more verbose logging? I don't know :(
@durandom We are not able to create the Vm through refresh code because it is too late. A limitation of targeted refresh is that we can only target existing Vms. The goal of my work is to be able to target everything, since a full refresh is very costly. What we agreed on is to create it in the refresh worker so we have the same guarantees of db access serialization as the refresh code has.
@durandom the trick with MiqQueue.put_or_update is that it creates a message on the queue, but we never get anything saved to the database. Bypassing the standard event handling means that we don't store the event itself.
```diff
@@ -415,6 +411,20 @@ def self.vm_inv_to_hashes(inv, _storage_inv, storage_uids, cluster_uids, host_ui
     return result, result_uids
   end

+  def self.create_vm_hash(data)
```
Can you pass these in as named parameters? It'll be easier to read than just as an array
@agrare Will do
@pkliczewski @oourfali In the interest of time, I've transferred this over to @agrare and he is completely up to speed on our discussions.
```diff
@@ -9,4 +9,4 @@ object:
   description:
   fields:
   - rel5:
-    value: "/System/event_handlers/event_action_refresh?target=src_ems_cluster"
+    value: "/System/event_handlers/event_action_refresh?target=src_vm"
```
So instead of calling event_action_refresh here and having to do some special casing in the event catcher (here), a better approach would be to create a new method like event_action_new_vm. You'll need a builtin automate method similar to miq_event_action_refresh and an EmsEvent method similar to refresh which puts a new_vm action on the queue for the Refresh Worker. This way creating the VM record is done by the Refresh Worker.
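The queue-then-work split being suggested can be sketched in plain Ruby. This is not ManageIQ code: `QUEUE` is a simple array standing in for MiqQueue, and the method names merely mirror the convention discussed here. The point is that the event handler only enqueues a description of the work, while the Refresh Worker performs it, so record creation happens in exactly one place.

```ruby
# Hypothetical sketch: MiqQueue replaced by a plain array.
QUEUE = []

class RefreshWorker
  # Runs inside the Refresh Worker: the only place records get created.
  def self.refresh_new_target(event_hash)
    "refreshed #{event_hash[:vm_ref]}"
  end

  # Runs in the event handler: only records what should happen later.
  def self.queue_refresh_new_target(event_hash)
    QUEUE << { :class_name => name, :method_name => "refresh_new_target", :args => [event_hash] }
  end
end

# The handler enqueues; the worker later dequeues and executes.
RefreshWorker.queue_refresh_new_target(:vm_ref => "vms/42")
item = QUEUE.shift
result = RefreshWorker.public_send(item[:method_name], *item[:args])
puts result  # => "refreshed vms/42"
```

Because only the worker process dequeues and runs `refresh_new_target`, two events for the same VM cannot race to create duplicate records.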
Sure, will do
(force-pushed from 1eae92f to 8a8eade)
@agrare @blomquisg @Fryguy @durandom I applied your comments, please review
```ruby
ems = ext_management_system
if ems.supports_refresh_new_target?
  vm_class = ems.class::Vm
  target = vm_class.create_from_event(self)
```
Okay, making progress :) So this is still creating the VM record in the Event Handler; the next step is to put an item on the queue for the Refresh Worker, that way we don't have any race conditions with two workers creating a record for the same VM.
(force-pushed from 5d58d45 to 8c41a89)
```diff
@@ -55,6 +55,10 @@ def self.queue_refresh(target, id = nil, sync = false)
     end
   end

+  def self.queue_event_for_refresh(event, ems)
```
You'll want to keep the method names consistent; typically the naming pattern for a queued method named refresh_new_target would be queue_refresh_new_target, that way it's more obvious what is going to get queued.
```ruby
def save_new_target(target_hash)
  vm_hash = target_hash[:vm]
  existing_vm = VmOrTemplate.find_by(:ems_ref => vm_hash[:ems_ref])
```
As we get to creating new targets that aren't VMs we'll want to check for target_hash[:vm].nil?
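The suggested guard can be illustrated with a simplified stand-in. This `save_new_target` is hypothetical, not the real method: it only shows that the VM path runs when the parsed target hash actually carries a `:vm` entry, leaving room for other target types later.

```ruby
# Hypothetical simplified stand-in for the real save_new_target.
def save_new_target(target_hash)
  return if target_hash[:vm].nil?  # other target types handled elsewhere
  "saving vm #{target_hash[:vm][:ems_ref]}"
end

puts save_new_target(:vm => { :ems_ref => "vms/7" })  # => "saving vm vms/7"
puts save_new_target(:cluster => {}).inspect          # => nil
```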
I added the check
Couple of minor things but I think they can be fixed later, @Fryguy can you take a quick look?
This pull request is not mergeable. Please rebase and repush.
```ruby
def save_new_target(target_hash)
  unless target_hash[:vm].nil?
    vm_hash = target_hash[:vm]
    existing_vm = VmOrTemplate.find_by(:ems_ref => vm_hash[:ems_ref])
```
This needs to be scoped by the ems_id. The ems_ref could be identical (think vms/1) in multiple EMSes.
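The scoping problem is easy to demonstrate with plain Ruby: two providers can legitimately hand out the same ems_ref, so a lookup by ems_ref alone may hit a record belonging to the wrong EMS. An array of hashes stands in for the VmOrTemplate table here; the data is made up for illustration.

```ruby
# Two different EMSes, same ems_ref -- perfectly legal.
vms = [
  { :ems_id => 1, :ems_ref => "vms/1", :name => "vm_on_ems1" },
  { :ems_id => 2, :ems_ref => "vms/1", :name => "vm_on_ems2" },
]

# Unscoped: silently returns the first match, possibly from another EMS.
unscoped = vms.find { |v| v[:ems_ref] == "vms/1" }

# Scoped by ems_id: pins the lookup to the provider the event came from.
scoped = vms.find { |v| v[:ems_ref] == "vms/1" && v[:ems_id] == 2 }

puts unscoped[:name]  # => "vm_on_ems1"
puts scoped[:name]    # => "vm_on_ems2"
```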
Done
```diff
@@ -149,4 +149,24 @@ def relation_values(association, target)

     top_level && (target == true || target.nil? || parent == target) ? :use_association : []
   end

+  def get_cluster(ems, cluster_hash, rp_hash, dc_hash)
+    cluster = EmsCluster.find_by(:ems_ref => cluster_hash[:ems_ref])
```
Also needs scoping by ems_id
Done
```ruby
cluster.add_resource_pool(rp)
cluster.save!

dc = Datacenter.find_by(:name => dc_hash[:name])
```
Same on the ems_id. (Also, is there no ems_ref in this table?)
Done
```ruby
cluster = EmsCluster.find_by(:ems_ref => cluster_hash[:ems_ref])
if cluster.nil?
  rp = ems.resource_pools.build(rp_hash)
  rp.save!
```
If you are going to save it immediately, then instead of build/save! just call create!
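The reasoning behind this review comment can be shown with a plain-Ruby stand-in (this is not the ActiveRecord implementation): `create!` is conventionally just "build, then save!" in a single call, so building and immediately saving is redundant.

```ruby
# Hypothetical stand-in for an association collection like ems.resource_pools.
class ResourcePoolCollection
  attr_reader :records

  def initialize
    @records = []
  end

  def build(attrs)   # instantiate without persisting
    attrs.dup
  end

  def save!(record)  # persist an already-built record
    @records << record
    record
  end

  def create!(attrs) # build + save! in one call
    save!(build(attrs))
  end
end

pool = ResourcePoolCollection.new
pool.create!(:name => "rp1")   # replaces: rp = pool.build(...); pool.save!(rp)
puts pool.records.size  # => 1
```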
Done
```ruby
def self.parse_new_cluster(cluster_ref, cluster_id, cluster_name)
  {
    :ems_ref => cluster_ref,
```
I'm wondering if the ems_id should be in these other payloads as well for consistency. @agrare Thoughts?
```ruby
dc_name = nil
ems.with_provider_connection do |rhevm|
  dc_name = Ovirt::DataCenter.find_by_href(rhevm, dc_ref).try(:[], :name)
end
```
I think you should be able to do the following, and not have to play games with the variable manipulation.

```ruby
dc_name = ems.with_provider_connection do |rhevm|
  Ovirt::DataCenter.find_by_href(rhevm, dc_ref).try(:[], :name)
end
```
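The suggestion relies on a Ruby fundamental: a method whose last expression is `yield` returns the block's last expression, so the caller can capture the value directly instead of mutating an outer variable. A self-contained sketch, with `with_fake_connection` as a hypothetical stand-in for `with_provider_connection`:

```ruby
# Hypothetical stand-in: yields a connection-like object and returns
# whatever the block returns, because `yield conn` is the last expression.
def with_fake_connection
  conn = { :datacenter_name => "dc1" }  # stands in for the RHEV connection
  yield conn
end

# No outer variable mutation needed -- the block's value is returned.
dc_name = with_fake_connection { |conn| conn[:datacenter_name] }
puts dc_name  # => "dc1"
```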
Done
(force-pushed from ca19ccd to d8b252d)
```ruby
cluster.add_resource_pool(rp)
cluster.save!

dc = Datacenter.find_by(:name => dc_hash[:name], :ems_id => ems.id)
```
@pkliczewski just to confirm, is the name of the datacenter guaranteed unique within an EMS? Also is it not feasible to get the ems_ref for the dc?
The name guarantees uniqueness of the DC in RHEV. Potentially we could use ems_ref here if we wanted to.
If possible, :ems_ref would be better since it's guaranteed unique within an EMS for all providers and this is common code.
sure will change
Thanks @pkliczewski !
When a vm is created we want to target this specific vm when running refresh. We used to run a full refresh, which may take a lot of time. There are 3 events that we want to handle:

- USER_ADD_VM - when a new vm is created
- USER_ADD_VM_FINISHED_SUCCESS - when a vm is created from a template

Changing the refresh target to the vm for the above events requires us to create the vm before the actual refresh runs. In order to mitigate race conditions we want to create the new vm just before refresh in EmsRefresh.
Checked commit pkliczewski@7c2295b with ruby 2.2.5, rubocop 0.37.2, and haml-lint 0.16.1

- app/models/ems_refresh/save_inventory.rb
- app/models/manageiq/providers/redhat/infra_manager/refresh_parser.rb
- app/models/manageiq/providers/redhat/infra_manager/refresher.rb
- app/models/mixins/supports_feature_mixin.rb
Great job @pkliczewski
Great work @pkliczewski !! 🎉
Cool!! |
Changing refresh target for vm creation (cherry picked from commit 92655f2)
Euwe Backport details:

```
$ git log
commit f6b13aa59450a2202b7e5bf40628cec657d62346
Author: Adam Grare <agrare@redhat.com>
Date:   Tue Oct 4 12:55:17 2016 -0400

    Merge pull request #10934 from pkliczewski/master

    Changing refresh target for vm creation

    (cherry picked from commit 92655f24f4a8f7f9c4a6d7584583bb03c0ee6be8)
```
This PR improves performance of the RHEV provider when a new vm is created and fixes:
https://bugzilla.redhat.com/1069707