Create ArgoCD clients on demand #75
Conversation
This pull request introduces 1 alert when merging 1285ec2 into 4bd78ea - view on LGTM.com
This pull request introduces 1 alert when merging 4c28eb9 into 4bd78ea - view on LGTM.com
This pull request introduces 1 alert when merging 9596107 into 4bd78ea - view on LGTM.com
This PR is stale because it has been open for 14 days with no activity. Remove the stale label or comment, or this will be closed in 10 days.
Unfortunately, there is no answer from the PR author of #62.
If you can wait until next Tuesday, I can have a look at the remaining tasks. Otherwise, feel free to complete the pull request yourself.
Great. I'm also happy to help if necessary 😁
Stale status: waiting on #62
@oboukili is it ok like this?
Yes, we just need to wait for the CI tests to pass; I think I noticed a test case error earlier. Let's see.
Apparently there's a race condition.
I never had the opportunity to debug this so I can't really offer serious guidance, but I remember
I'm guessing this might have to do with multiple resources trying to init the clients at the same time. I could probably add a flag on the clients to mark them as initialized and avoid this behavior.
We should use some kind of locking mechanism such as a sync.RWMutex, or try to use a sync.Map. If you use the RWMutex, I think it should be located outside the clients' objects.
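As a hedged illustration of the sync.Map alternative mentioned above: `LoadOrStore` lets concurrent callers share a single cached client per kind without any explicit locking. The key and value types below are invented for the example, not the provider's real client types.

```go
package main

import (
	"fmt"
	"sync"
)

// clients caches lazily created API clients, keyed by kind
// (e.g. "application", "project"). sync.Map is safe for
// concurrent use without an external mutex.
var clients sync.Map

// getClient returns the cached client for kind, creating it on
// first use. LoadOrStore returns the existing value when another
// goroutine has already stored one for this key.
func getClient(kind string) string {
	v, _ := clients.LoadOrStore(kind, "client-for-"+kind)
	return v.(string)
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			getClient("application")
		}()
	}
	wg.Wait()
	fmt.Println(getClient("application")) // prints "client-for-application"
}
```

One caveat of this sketch: `LoadOrStore` computes its value eagerly, so an expensive client constructor could run more than once under contention (only one result is kept); real code might wrap the construction in a per-key `sync.Once`.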
NOTE: @oboukili I used a
@raphink the tests time out quickly now. I believe it's because your lock is an exclusive one, not an RWMutex one, so even a read acquires the exclusive lock. Please try and use an RWMutex lock, thanks. EDIT: just read your answer after my post. I don't think using an exclusive lock will be a good thing performance-wise in any case; I think the initClients() function should decide whether to take an RLock or a Lock depending on the state of the API client (same thing for unlocking: RUnlock if an RLock was taken earlier, etc.)
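What the reviewer describes (take a cheap read lock to check the state, and only upgrade to the exclusive lock when initialization is actually needed) is the classic double-checked locking pattern. A minimal, self-contained sketch follows; the field names `initialized` and `initCount` are hypothetical stand-ins for the provider's real state, not its actual fields.

```go
package main

import (
	"fmt"
	"sync"
)

// ServerInterface mirrors the shape discussed in this thread;
// fields are illustrative placeholders.
type ServerInterface struct {
	initLock    sync.RWMutex
	initialized bool
	initCount   int // stands in for the expensive client setup
}

// initClients takes the read lock first and only upgrades to the
// exclusive lock when initialization is actually needed.
func (p *ServerInterface) initClients() error {
	p.initLock.RLock()
	done := p.initialized
	p.initLock.RUnlock()
	if done {
		return nil
	}

	p.initLock.Lock()
	defer p.initLock.Unlock()
	// Re-check: another goroutine may have initialized the
	// clients between our RUnlock and Lock.
	if p.initialized {
		return nil
	}
	p.initCount++
	p.initialized = true
	return nil
}

func main() {
	s := &ServerInterface{}
	var wg sync.WaitGroup
	for i := 0; i < 20; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = s.initClients()
		}()
	}
	wg.Wait()
	fmt.Println(s.initCount) // prints 1: setup ran exactly once
}
```

The re-check after acquiring the exclusive lock is the important part: without it, two goroutines that both saw `initialized == false` under the read lock would run the setup twice, one after the other.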
argocd/features.go (outdated):

```diff
@@ -43,7 +43,7 @@ type ServerInterface struct {
 	ServerVersionMessage *version.VersionMessage
 	ProviderData         *schema.ResourceData

-	InitLock sync.Mutex
+	InitLock sync.RWMutex
```
This is not enough; the locking mechanism has to be updated in the function as well, with RLock/Lock and RUnlock/Unlock.
@oboukili I'm probably misunderstanding something I guess. If I use an RLock, then some resources will be able to read the
Extremely naively (sorry I don't have so much time to spare at the moment) I would say: modify the initClients() function to:

```go
func (p *ServerInterface) initClients() error {
	if p.Initialized {
		return nil
	}
	p.InitLock.Lock()
	d := p.ProviderData
	...
```

and take an RLock very early on in every resource CRUD function:

```go
func resourceArgoCDApplicationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
	objectMeta, spec, diags := expandApplication(d)
	if diags != nil {
		return diags
	}
	server := meta.(*ServerInterface)
	server.InitLock.RLock()
	defer server.InitLock.RUnlock()
	...
```

Good luck and thanks!
A sync.Map would be most interesting though! If applicable.
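For completeness: idiomatic Go usually solves "initialize exactly once under concurrency" with sync.Once rather than a hand-rolled flag. A minimal sketch, with `client` and `initErr` as placeholder fields standing in for the provider's real ArgoCD clients and error handling:

```go
package main

import (
	"fmt"
	"sync"
)

// ServerInterface uses sync.Once to guard client creation;
// field names are illustrative, not the provider's real ones.
type ServerInterface struct {
	initOnce sync.Once
	initErr  error
	client   string // placeholder for the real ArgoCD clients
}

// initClients runs the setup at most once; concurrent callers
// block until the first call finishes, then all see the result.
func (p *ServerInterface) initClients() error {
	p.initOnce.Do(func() {
		// The expensive connection/client setup would go here.
		p.client = "argocd-client"
	})
	return p.initErr
}

func main() {
	s := &ServerInterface{}
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = s.initClients()
		}()
	}
	wg.Wait()
	fmt.Println(s.client) // prints "argocd-client"
}
```

One caveat: sync.Once never re-runs its function, so a failed initialization would not be retried on the next call; a mutex-plus-flag approach is preferable if retries matter.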
The problem is not really with a map though (or is it?) but with the various fields of the
Signed-off-by: Raphaël Pinson <raphael.pinson@camptocamp.com>
@oboukili I managed to get it to pass (at least locally) with a simple mutex by just using
I see the acceptance tests are now passing. @oboukili is that good for you?
Oh great! Awesome work @raphink! Let me merge that and make a major release ;)
All other resources managed by this provider have `Upsert: false` when creating. This is because we want an error to occur if a duplicate cluster is created; otherwise we can end up with two different Terraform resources managing the same resource in ArgoCD. Existing clusters must be imported into Terraform state. The change to set `Upsert = true` was introduced in argoproj-labs#75; that PR is in general unrelated to this functionality.
This PR aims to create ArgoCD clients on demand, in order to allow setting the provider dynamically, based on other resource attributes.
The idea is to store the provider's `ResourceData` in the `meta` and use it to initialize the clients when resources need them.
This PR is based on top of #62.
TODO:
- `ResourceData` as pointer to resources
- `initClients()` in all resource types
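The on-demand scheme described above can be sketched end to end without pulling in the Terraform SDK. In this sketch, `ResourceData` and the string `client` are placeholders for the SDK's `*schema.ResourceData` and the real ArgoCD clients, and `providerConfigure` stands in for the provider's configure function; none of these are the provider's actual implementations.

```go
package main

import "fmt"

// ResourceData stands in for *schema.ResourceData: the raw,
// not-yet-resolved provider configuration.
type ResourceData struct{ host string }

// ServerInterface is the meta object handed to every resource;
// it keeps the raw configuration so clients can be built later.
type ServerInterface struct {
	ProviderData *ResourceData
	client       string
	initialized  bool
}

// providerConfigure mirrors the idea in the PR description: it no
// longer dials ArgoCD at provider setup, it only captures the
// configuration for later use.
func providerConfigure(d *ResourceData) *ServerInterface {
	return &ServerInterface{ProviderData: d}
}

// initClients builds the clients on first use from the stored
// configuration (locking and error handling elided here; they are
// discussed at length in the review comments above).
func (p *ServerInterface) initClients() error {
	if p.initialized {
		return nil
	}
	p.client = "client@" + p.ProviderData.host
	p.initialized = true
	return nil
}

func main() {
	meta := providerConfigure(&ResourceData{host: "argocd.example.com"})
	// A resource CRUD function would do this before any API call:
	_ = meta.initClients()
	fmt.Println(meta.client) // prints "client@argocd.example.com"
}
```

Because initialization now happens inside each resource's CRUD path rather than once at provider configuration, concurrent resource operations can race on `initialized`, which is exactly the problem the locking discussion in this thread addresses.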