add Dyn provider #2807
Conversation
solves #459
Thanks for contributing this @dwradcliffe ! I didn't see anything major at first glance, but I'll do a more thorough review once docs are in. Can you also please contribute tests? Let me know if you need some guidance there.
@catsby Docs have been added, code refactored, tests added. The acceptance tests pass for me too. I tried to follow the pattern from the other providers, but if I'm missing something or did something wrong let me know! 😄
I'm very interested in seeing this provider be added to Terraform, as Dyn support is something I've been sorely missing. I've done some testing myself and it satisfies my needs, works well, and the docs are clear. +1
@dwradcliffe Looks like multiple "dyn_record" resources in a single Terraform config can step on each other and cause API 400 errors; at least I assume that's what's happening. I can sometimes run a multi-record config successfully, but not reliably. The config should produce A records redis1.example.com through redis${var.host_count}.example.com. Thanks!
@daveadams are you saying that the Dyn API starts to reject requests if too many happen concurrently? If so, a similar problem exists with the AWS API, but it's abstracted away by the underlying AWS library that Terraform uses, which detects the rate-limiting error codes and automatically retries. It could be reasonable to take a similar approach here, as long as Dyn's API responses provide enough information to distinguish rate-limit enforcement from other kinds of error.
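For reference, a retry wrapper of the kind described in the comment above might look like the following minimal sketch. The sentinel error and the `withRetry` helper are hypothetical placeholders; a real implementation would have to inspect Dyn's actual error responses to recognize throttling, and would sleep with backoff between attempts:

```go
package main

import (
	"errors"
	"fmt"
)

// errRateLimited stands in for whatever error Dyn returns when it
// throttles requests; the real provider would need to inspect the API
// response to distinguish this case from other failures.
var errRateLimited = errors.New("rate limited")

// withRetry retries fn up to maxAttempts times, but only when the
// failure looks like rate limiting; other errors surface immediately.
func withRetry(maxAttempts int, fn func() error) error {
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		err = fn()
		if err == nil || !errors.Is(err, errRateLimited) {
			return err
		}
		// A real implementation would sleep with exponential backoff here.
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
}

func main() {
	calls := 0
	err := withRetry(5, func() error {
		calls++
		if calls < 3 {
			return errRateLimited
		}
		return nil
	})
	fmt.Println(calls, err) // 3 <nil>
}
```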
@daveadams I hadn't tested multiple records at the same time :( Given how the Dyn API works, I can totally see how this might break. Adding Dyn DNS records requires two API calls: one to add the record to the current session, and one to publish that session to actually create the record. I'm pretty sure you can't have multiple sessions going at the same time. I'm not familiar enough with the Terraform internals to know if there's a better way to handle this. @catsby any ideas?
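The two-call workflow described above can be sketched with a toy client. The type and method names here are illustrative stand-ins, not go-dynect's actual API; the point is only that nothing takes effect until the session is published:

```go
package main

import "fmt"

// fakeDynClient loosely imitates the two-step Dyn workflow: changes
// accumulate in a session and only take effect when the zone is
// published. Names are illustrative, not go-dynect's real API.
type fakeDynClient struct {
	pending   []string // staged changes in the current session
	published []string // stands in for the live zone
}

func (c *fakeDynClient) AddRecord(fqdn string) {
	c.pending = append(c.pending, fqdn) // call 1: stage the change
}

func (c *fakeDynClient) PublishZone() {
	c.published = append(c.published, c.pending...) // call 2: commit it
	c.pending = nil
}

func main() {
	c := &fakeDynClient{}
	c.AddRecord("redis1.example.com")
	c.PublishZone()
	fmt.Println(c.published) // [redis1.example.com]
}
```

Two resources running concurrently would interleave their stage/publish pairs against a single shared session, which is the failure mode reported above.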
I'm not much of a Go hacker, so it's taken me a while, but as @dwradcliffe points out, each change actually requires a pair of requests to the Dyn API: one to make the change and one to publish the new zone. Ideally you'd queue up all the changes and run one Publish step at the very end. Since Terraform tries to run multiple changes in parallel, the request pairs overlap (for two adds, both Create Record requests can land before either Publish). The actual order is unpredictable, and for larger numbers of changes it gets more complicated. Worst of all, Terraform thinks record1 failed to create when in reality both record1 and record2 get created, but Terraform only knows about, and can only destroy, one of them (usually record2 in my testing).

So I experimented with a very naive Mutex approach, and that seems to keep things straight. Even when the Dyn API does return an error (which happens from time to time), the state is registered correctly, so a terraform apply will be able to fix it and a terraform destroy does the right thing.

I'm not sure how best to submit a patch to a pull request, but I'll try to paste it in below. The changes are simple: in resource_dyn_record.go, add a global Mutex variable, lock it at the beginning of each of the Create, Update, and Delete functions, and unlock it just before each possible return path.
@daveadams Nice work! I'll try to pull this in and test it on my end too.
One suggestion: remove all of the explicit unlock calls before each return and use defer to unlock the mutex immediately after locking it, so that every return path releases the lock automatically.
Great tip, @thegedge! Glad to say I learned something new and awesome about Go today thanks to you. I did some more testing with bigger numbers of records, and my patch still falls short: it looks like the Read step will need to use the mutex lock as well. This also (best I can tell) prevents using … So here's the latest patch, which I've tested with apply/destroy on 30 records:
What do we have to do to get this pulled into the next release? We're anxious to have this built in.
Any hope to get this merged? @phinze ?
@daveadams sorry for the delay on this! We'll make sure this gets looked at soon.
Looking good - landing this. Thanks for all your work here @dwradcliffe and @daveadams! 🙇
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

This PR adds a Dyn provider for managing Dyn DNS records. At the moment this only supports CNAME and A records although I plan on expanding that shortly. I'm also waiting on a few PRs to land in the upstream go-dynect library so that I can clean up this code a bit. I wanted to get the PR submitted to get some initial feedback now. I've been using this locally and all CRUD operations are working.
Ref: nesv/go-dynect#5
Todo:
@cbednarski @mitchellh @catsby @radeksimko for review
cc: @dalehamel @thegedge @wvanbergen