Addition of Cisco 1000v and 9300v with Containerlab Support #1168
CSR 1000v should be a simple case of interface name mapping. The interfaces have one name within the container and another name within the virtual machine running inside the container (I'm assuming you're using https://containerlab.dev/manual/kinds/vr-csr/). To do that, you have to define a few things under devices.csr.clab. The easiest way to start would be to define them in the topology file:
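A sketch of what such a definition could look like (the image tag, kind, and exact attribute paths are assumptions modeled on other vrnetlab-style devices, not a verified working config):

```yaml
# topology.yml -- hypothetical sketch
defaults.devices.csr.clab:
  image: vrnetlab/vr-csr:16.12.03   # assumed image name/tag from hellt/vrnetlab
  node:
    kind: vr-csr                    # containerlab kind from the vr-csr docs above
  interface:
    name: eth%d                     # container-side name; the VM still sees GigabitEthernet2, 3, ...
```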
I'm guessing the interface.name bit based on the vMX definition. Next, you'll have to write an "are we ready" task list, because the container starts "immediately" while the VM within the container takes "forever" (see https://github.com/ipspace/netlab/blob/dev/netsim/ansible/tasks/readiness-check/vptx.yml for an example). Alternatively, if you could somehow get the two Docker images over to me, I'll try to figure it all out ;)
Apologies for taking so long to come back to this topic. I made a few updates to my topology.yml file.
I then went into netsim/ansible/tasks/readiness-check/ and added an nxos.yml that contained a simple 15-minute wait:
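A task list implementing such a flat wait could be as simple as this (a sketch; the task name is mine, not the author's original):

```yaml
# netsim/ansible/tasks/readiness-check/nxos.yml -- sketch of a fixed 15-minute wait
- name: Wait for the NX-OS VM inside the container to finish booting
  ansible.builtin.pause:
    minutes: 15
```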
This seemed to create the proper naming structure for the clab.yml:
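For illustration, the container-side names in the generated clab.yml would then look roughly like this (node names are hypothetical):

```yaml
# clab.yml fragment -- illustrative only
topology:
  links:
    - endpoints: ["s1:eth1", "s2:eth1"]
```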
I set everything up this way because I started toying around with the 9000v and figured the NXOS model would be the closest match, as it runs NX-OS. I'm running into issues when creating the lab: the generated group_vars/nxos/topology.yml file uses a different password than the 9000v defaults (admin/admin).
I can get past this by manually changing those values and then firing up the lab. However, should I create a different device type for the 9000v that sets the proper default password, or is the better approach to override the default value? (I just haven't figured out the appropriate syntax to override ansible_ssh_pass and ansible_user.)
So glad to hear you got this far, although I'd prefer a more robust readiness check, maybe something along the lines of what @ssasso did for vMX: https://github.com/ipspace/netlab/blob/dev/netsim/ansible/tasks/vmx/initial.yml#L8 (it should be moved into the readiness check, but that's a different story). Ansible variables are easy: just set devices.nxos.clab.group_vars to whatever values you need. See https://github.com/ipspace/netlab/blob/dev/netsim/devices/eos.yml#L85 for an example.
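Following that eos.yml pattern, a sketch of the override (credential values taken from the admin/admin defaults mentioned above):

```yaml
# under the nxos device settings -- sketch
clab:
  group_vars:
    ansible_user: admin
    ansible_ssh_pass: admin
```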
I was able to get the 9000v working with the following updates to topology.yml:
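A sketch of how those pieces might fit together (the image tag, kind, and attribute paths are assumptions, not the author's exact file):

```yaml
# topology.yml -- hypothetical sketch of the working defaults
defaults.devices.nxos.clab:
  image: vrnetlab/vr-n9kv:9.3.8   # assumed hellt/vrnetlab image name/tag
  node:
    kind: cisco_n9kv              # assumed containerlab kind for the Nexus 9000v
  interface:
    name: eth%d
  group_vars:
    ansible_user: admin           # 9000v default credentials
    ansible_ssh_pass: admin
```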
nxos.yml added to the readiness-check tasks:
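A sketch of what that readiness check might contain, pairing an SSH reachability wait with a retry loop on the interface list (the command, search string, and timer values are assumptions):

```yaml
# netsim/ansible/tasks/readiness-check/nxos.yml -- sketch
- name: Wait for SSH through the container's proxy
  ansible.builtin.wait_for:
    host: "{{ ansible_host }}"
    port: 22
    timeout: 600

- name: Wait until Ethernet1/1 appears
  cisco.nxos.nxos_command:
    commands: show interface brief
  register: ifcheck
  until: ifcheck.stdout[0] is search('Eth1/1')
  retries: 20
  delay: 30
```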
I'll look at the 1000v next and see what changes are required to make it come up. Let me know if you have any other ideas on how I should clean this up, or if this is how I should proceed. Would it be possible to integrate the nxos.yml into the main branch? I wondered whether the other NXOS VMs had similar issues to what I encountered, or if it's just how I'm implementing it in containerlab. Thanks!
Gee, if only you were a day faster, they would have been in 1.8.3 ;)
This looks pretty decent to me. Not much I would change. I would probably repackage your code into a more generic "vm-in-container" task list and include it in nxos-clab.yml.
Of course. I'll add it, noting where it came from. Would it be possible for you to test the solution once I do the packaging?
NXOS has a generic problem: it claims it's ready before its interfaces are ready (Junos on vPTX seems to have a similar problem). We're dealing with that in the NXOS config deployment task list, and I've planned to move that into the readiness check for a long time. Now I have a good reason to get it done ;) What you're experiencing, though, is specific to the way the VM is packaged in a container with an SSH proxy sitting in front of it.
So I tried the hellt/vrnetlab project and nxos keeps crashing; I will not waste any more time trying to troubleshoot that.

Anyway, I copied your settings (apart from the image name) into nxos.yml, moved the "Ethernet 1/1" readiness check into an nxos-specific task list, added a generic "test if the VM in a container is ready" test, and an nxos-clab.yml task list that just invokes the other two. The results are in the nxos-clab branch (changes in dev...nxos-clab). There's a pretty high probability that this will work, but I can't be 100% sure ;) Anyway, pull down the latest changes, switch to the nxos-clab branch, and give it a try. Keeping my fingers crossed ;))

As for CSR 1Kv, you'll have to use the same readiness check (see the comments in https://github.com/ipspace/netlab/blob/dev/netsim/ansible/tasks/vmx/initial.yml for details). Copy nxos-clab.yml into csr-clab.yml and remove the "check for Ethernet 1/1" include_tasks.
I'm an idiot. I tried to build an nxos container, not an n9kv one. It all works now. I changed the image name to what hellt/vrnetlab generates and reduced the retries to 20 (my setup worked after three retries; each retry takes 30 seconds to time out).
FWIW, I added the CSR part. It should work once you get the container up and running (it didn't work for me out of the box, and I didn't have time to troubleshoot it).
I'll test it out today, thanks! Interesting how quickly your 9000v spun up. Mine definitely takes the full 12 minutes; I'll have to double-check the specs I gave the VM I've been running everything on. I'll also take a look at which rev I have; maybe that's a contributing factor.
I'm looking for support for the Cisco 9000v (9300v) and 1000v on the containerlab platform. I've tested it out, and it looks like it may be close, apart from the interface names that netlab sets up and passes into the clab.yml file. As an example:
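Roughly, the generated clab.yml carries the VM-side interface names (node names here are hypothetical):

```yaml
# clab.yml fragment -- illustrative
topology:
  links:
    - endpoints: ["s1:Ethernet1/1", "s2:Ethernet1/1"]   # what gets emitted today
```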
If the ifname were changed to "eth1", it would align with the containerlab topology file, and I believe the remaining pieces would not require changes.
I'm happy to help with testing out any implementations to assist in this update.
Thanks!