
[ansible] vagrant openstack error: Multiple possible networks found, use a Network ID to be more specific. #349

Closed
v1k0d3n opened this issue Jan 4, 2016 · 7 comments
Labels
area/ansible, lifecycle/rotten

Comments

@v1k0d3n

v1k0d3n commented Jan 4, 2016

No matter what I do, this error doesn't go away. I've added the variable in the openstack_config.yml file via both NetName and NetID, and I've also added it directly to the Vagrantfile, thinking that would resolve the issue... no luck. Are others hitting this as well?
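For context, this is roughly the Vagrantfile shape I've been trying, going off the vagrant-openstack-provider README for the `os.networks` option; the auth endpoint and network UUID below are placeholders, so treat this as a sketch rather than a confirmed-working config:

```ruby
# Vagrantfile -- sketch of pinning the instance to a single network by ID.
# Endpoint and UUID are placeholders; option names follow the
# vagrant-openstack-provider README.
require 'vagrant-openstack-provider'

Vagrant.configure('2') do |config|
  config.vm.provider :openstack do |os|
    os.openstack_auth_url = 'http://keystone.example.com:5000/v2.0/tokens'  # placeholder endpoint
    os.username           = ENV['OS_USERNAME']
    os.password           = ENV['OS_PASSWORD']
    os.tenant_name        = ENV['OS_TENANT_NAME']
    os.flavor             = 'm1.small'
    os.image              = 'ubuntu-14.04'
    # When the tenant has more than one network, Nova refuses to pick one and
    # raises "Multiple possible networks found, use a Network ID to be more
    # specific" unless an explicit network is requested here.
    os.networks = ['11111111-2222-3333-4444-555555555555']  # placeholder network UUID
  end
end
```

(`vagrant plugin list` will show the installed provider version if that's useful.)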

@jayunit100
Contributor

I haven't run the openstack provider recently. Thanks for noting this. It sounds like an issue with the plugin, so maybe paste the plugin version and OpenStack version here...

@v1k0d3n
Author

v1k0d3n commented Jan 10, 2016

@jayunit100 thanks for looking into it!

you're right, i think the issue is ultimately related to: ggiamarchi/vagrant-openstack-provider#236 (comment)

But I'm wondering how you guys worked around it. I saw that both eparis and you created or commented on issues against the vagrant plugin. It took me a couple of days to trace the issue back to the provider (I commented on the closed ticket), but I'm not sure how much "love" I'm going to get there. How did you work around it?

@jayunit100
Contributor

Thanks for digging!
I use my Mac for vagrant, so maybe I lucked out and never hit it?

Sometimes I just hack on the plugin source under ~/.vagrant.d/ on my local machine to debug stuff; see the sketch below. Usually when I do that, I forget to write down what I did :).
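For anyone wanting to try the same trick, it's roughly this; the path layout is how Vagrant installs plugin gems, but the variable and value below are hypothetical stand-ins, not the plugin's actual internals:

```ruby
# Installed plugin gems live under ~/.vagrant.d/gems (exact layout varies by
# Vagrant version), e.g.:
#   ~/.vagrant.d/gems/gems/vagrant-openstack-provider-<version>/lib/...
#
# The "hack" is usually just a temporary tracing line dropped into whatever
# method builds the server-create request, to see what network list actually
# reaches Nova. Everything below is a hypothetical stand-in:
networks = ['11111111-2222-3333-4444-555555555555']  # stand-in for the resolved list
$stderr.puts "DEBUG networks passed to Nova: #{networks.inspect}"  # temporary tracing
```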

@v1k0d3n
Author

v1k0d3n commented Jan 10, 2016

I used my Mac first, and that caused even bigger problems. I'll have to try again, but if it doesn't work I'll have to think of something else. I'm not very good with Ruby and I'm just starting to learn Ansible, so I'm not sure I'd be much help at this point. At this rate I'm even having a hard time pinning down all of the correct dependencies... this looks like a nested dependency issue.

@fejta-bot

Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Dec 16, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jan 15, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/close
