Added non-vr prefixed kind names (#1710)
* remove nxos

* added non vr kind names

* fix link

* fix link

* use clean kind names in examples
hellt committed Nov 12, 2023
1 parent 3c24d9b commit 7c84208
Showing 55 changed files with 278 additions and 379 deletions.
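
In topology files this means a node can now be declared with the clean `vendor_platform` kind name instead of the `vr-` prefixed one. A minimal sketch for illustration only (the node name and image reference below are placeholders, not taken from this commit):

```yaml
name: kind-rename-demo
topology:
  nodes:
    r1:
      kind: cisco_csr1000v              # previously written as `kind: vr-csr`
      image: vrnetlab/vr-csr:17.03.04   # placeholder image reference
```
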
2 changes: 1 addition & 1 deletion Makefile
@@ -99,7 +99,7 @@ serve-docs:

.PHONY: htmltest
htmltest:
docker run --rm -v $(CURDIR):/docs squidfunk/mkdocs-material:$(MKDOCS_VER) build --clean --strict
docker run --rm -v $(CURDIR):/docs ghcr.io/srl-labs/mkdocs-material-insiders:$(MKDOCS_INS_VER) build --clean --strict
docker run --rm -v $(CURDIR):/test wjdp/htmltest --conf ./site/htmltest-w-github.yml
rm -rf ./site

4 changes: 1 addition & 3 deletions clab/register.go
@@ -25,15 +25,14 @@ import (
vr_csr "github.com/srl-labs/containerlab/nodes/vr_csr"
vr_ftosv "github.com/srl-labs/containerlab/nodes/vr_ftosv"
vr_n9kv "github.com/srl-labs/containerlab/nodes/vr_n9kv"
vr_nxos "github.com/srl-labs/containerlab/nodes/vr_nxos"
vr_pan "github.com/srl-labs/containerlab/nodes/vr_pan"
vr_ros "github.com/srl-labs/containerlab/nodes/vr_ros"
vr_sros "github.com/srl-labs/containerlab/nodes/vr_sros"
vr_veos "github.com/srl-labs/containerlab/nodes/vr_veos"
vr_vjunosswitch "github.com/srl-labs/containerlab/nodes/vr_vjunosswitch"
vr_vmx "github.com/srl-labs/containerlab/nodes/vr_vmx"
vr_vqfx "github.com/srl-labs/containerlab/nodes/vr_vqfx"
vr_vsrx "github.com/srl-labs/containerlab/nodes/vr_vsrx"
vr_vjunosswitch "github.com/srl-labs/containerlab/nodes/vr_vjunosswitch"
vr_xrv "github.com/srl-labs/containerlab/nodes/vr_xrv"
vr_xrv9k "github.com/srl-labs/containerlab/nodes/vr_xrv9k"
xrd "github.com/srl-labs/containerlab/nodes/xrd"
@@ -58,7 +57,6 @@ func (c *CLab) RegisterNodes() {
vr_csr.Register(c.Reg)
vr_ftosv.Register(c.Reg)
vr_n9kv.Register(c.Reg)
vr_nxos.Register(c.Reg)
vr_pan.Register(c.Reg)
vr_ros.Register(c.Reg)
vr_sros.Register(c.Reg)
9 changes: 4 additions & 5 deletions docs/manual/kinds/vr-aoscx.md
@@ -4,7 +4,7 @@ search:
---
# Aruba ArubaOS-CX

ArubaOS-CX virtualized switch is identified with `vr-aoscx` or `vr-aruba_aoscx` kind in the [topology file](../topo-def-file.md). It is built using [vrnetlab](../vrnetlab.md) project and essentially is a Qemu VM packaged in a docker container format.
The ArubaOS-CX virtualized switch is identified with the `aruba_aoscx` kind in the [topology file](../topo-def-file.md). It is built using the [vrnetlab](../vrnetlab.md) project and is essentially a Qemu VM packaged in a docker container format.

## Managing vr-aoscx nodes

@@ -33,15 +33,15 @@ Aruba AOS-CX node launched with containerlab can be managed via the following in
* `eth0` - management interface connected to the containerlab management network
* `eth1+` - first and subsequent data interfaces

When containerlab launches vr-aoscx node, it will assign IPv4 address to the `eth0` interface. These addresses can be used to reach management plane of the router.
When containerlab launches an ArubaOS-CX node, it will assign an IPv4 address to the `eth0` interface. This address can be used to reach the management plane of the node.

Data interfaces `eth1+` need to be configured with IP addressing manually using CLI/management protocols.
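
For reference, a minimal two-node sketch (image references below are placeholders) showing how these interface names are used in a topology file; `eth0` is attached to the management network automatically, while the link uses the first data interface:

```yaml
name: aoscx-pair
topology:
  nodes:
    sw1:
      kind: aruba_aoscx
      image: vrnetlab/vr-aoscx:10.07   # placeholder image reference
    sw2:
      kind: aruba_aoscx
      image: vrnetlab/vr-aoscx:10.07   # placeholder image reference
  links:
    # eth1 is a data interface; its IP addressing is configured via the NOS CLI
    - endpoints: ["sw1:eth1", "sw2:eth1"]
```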

## Features and options

### Node configuration

vr-aoscx nodes come up with a basic configuration where only the control plane and line cards are provisioned, as well as the `admin` user with the provided password.
ArubaOS-CX nodes come up with a basic configuration where only the control plane and line cards are provisioned, as well as the `admin` user with the provided password.

#### Startup configuration

@@ -51,11 +51,10 @@ It is possible to make ArubaOS-CX nodes boot up with a user-defined startup-conf
```yaml
topology:
  nodes:
    node:
      kind: vr-aoscx
      kind: aruba_aoscx
      startup-config: myconfig.txt
```

With this knob, containerlab is instructed to take the file `myconfig.txt` from the directory that hosts the topology file and copy it to the lab directory of that specific node under the name `/config/startup-config.cfg`. The directory that hosts the startup-config is then mounted to the container, which results in this config being applied by the node at startup.

The configuration is applied after the node has started, so it can contain partial configuration snippets that you wish to add on top of the default config the node boots up with.

20 changes: 11 additions & 9 deletions docs/manual/kinds/vr-csr.md
@@ -4,11 +4,11 @@ search:
---
# Cisco CSR1000v

Cisco CSR1000v virtualized router is identified with `vr-csr` or `vr-cisco_csr1000v` kind in the [topology file](../topo-def-file.md). It is built using [vrnetlab](../vrnetlab.md) project and essentially is a Qemu VM packaged in a docker container format.
The Cisco CSR1000v virtualized router is identified with the `cisco_csr1000v` kind in the [topology file](../topo-def-file.md). It is built using the [vrnetlab](../vrnetlab.md) project and is essentially a Qemu VM packaged in a docker container format.

vr-csr nodes launched with containerlab comes up pre-provisioned with SSH, SNMP, NETCONF and gNMI services enabled.
Cisco CSR1000v nodes launched with containerlab come up pre-provisioned with SSH, SNMP, NETCONF and gNMI services enabled.

## Managing vr-csr nodes
## Managing Cisco CSR1000v nodes

!!!note
Containers with CSR1000v inside will take ~6min to fully boot.
@@ -17,7 +17,7 @@ vr-csr nodes launched with containerlab comes up pre-provisioned with SSH, SNMP,
Cisco CSR1000v node launched with containerlab can be managed via the following interfaces:

=== "bash"
to connect to a `bash` shell of a running vr-csr container:
to connect to a `bash` shell of a running Cisco CSR1000v container:
```bash
docker exec -it <container-name/id> bash
```
@@ -36,30 +36,32 @@ Cisco CSR1000v node launched with containerlab can be managed via the following
Default user credentials: `admin:admin`

## Interfaces mapping
vr-csr container can have up to 144 interfaces and uses the following mapping rules:

A Cisco CSR1000v container can have up to 144 interfaces and uses the following mapping rules:

* `eth0` - management interface connected to the containerlab management network
* `eth1` - first data interface, mapped to first data port of CSR1000v line card
* `eth2+` - second and subsequent data interface

When containerlab launches vr-csr node, it will assign IPv4/6 address to the `eth0` interface. These addresses can be used to reach management plane of the router.
When containerlab launches a Cisco CSR1000v node, it will assign IPv4/6 addresses to the `eth0` interface. These addresses can be used to reach the management plane of the router.

Data interfaces `eth1+` need to be configured with IP addressing manually using CLI/management protocols.
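
As a sketch of how this mapping is used in practice (image references are placeholders), the link below attaches `eth1` of the CSR1000v, i.e. its first data port, to an SR Linux node, while `eth0` stays on the management network:

```yaml
name: csr-srl
topology:
  nodes:
    csr1:
      kind: cisco_csr1000v
      image: vrnetlab/vr-csr:17.03.04   # placeholder image reference
    srl1:
      kind: nokia_srlinux
      image: ghcr.io/nokia/srlinux      # placeholder image reference
  links:
    # csr1:eth1 maps to the first data port of the CSR1000v line card
    - endpoints: ["csr1:eth1", "srl1:e1-1"]
```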


## Features and options

### Node configuration
vr-csr nodes come up with a basic configuration where only `admin` user and management interfaces such as NETCONF provisioned.

Cisco CSR1000v nodes come up with a basic configuration where only the `admin` user and management interfaces such as NETCONF are provisioned.

#### Startup configuration

It is possible to make CSR1000v nodes boot up with a user-defined startup-config instead of a built-in one. With the [`startup-config`](../nodes.md#startup-config) property of the node/kind, a user sets the path to the config file that will be mounted to a container and used as a startup-config:

```yaml
topology:
  nodes:
    node:
      kind: vr-csr
      kind: cisco_csr1000v
      startup-config: myconfig.txt
```

21 changes: 12 additions & 9 deletions docs/manual/kinds/vr-ftosv.md
@@ -4,11 +4,11 @@ search:
---
# Dell FTOSv (OS10) / ftosv

Dell FTOSv (OS10) virtualized router/switch is identified with `vr-ftosv` or `vr-dell_ftosv` kind in the [topology file](../topo-def-file.md). It is built using [vrnetlab](../vrnetlab.md) project and essentially is a Qemu VM packaged in a docker container format.
The Dell FTOSv (OS10) virtualized router/switch is identified with the `dell_ftosv` kind in the [topology file](../topo-def-file.md). It is built using the [vrnetlab](../vrnetlab.md) project and is essentially a Qemu VM packaged in a docker container format.

vr-ftosv nodes launched with containerlab comes up pre-provisioned with SSH and SNMP services enabled.
Dell FTOSv nodes launched with containerlab come up pre-provisioned with SSH and SNMP services enabled.

## Managing vr-ftosv nodes
## Managing Dell FTOSv nodes

!!!note
Containers with FTOS10v inside will take ~2-4min to fully boot.
@@ -17,7 +17,7 @@ vr-ftosv nodes launched with containerlab comes up pre-provisioned with SSH and
Dell FTOS10v node launched with containerlab can be managed via the following interfaces:

=== "bash"
to connect to a `bash` shell of a running vr-ftosv container:
to connect to a `bash` shell of a running Dell FTOSv container:
```bash
docker exec -it <container-name/id> bash
```
@@ -31,29 +31,32 @@ Dell FTOS10v node launched with containerlab can be managed via the following in
Default user credentials: `admin:admin`

## Interfaces mapping
vr-ftosv container can have different number of available interfaces which depends on platform used under FTOS10 virtualization .qcow2 disk and container image built using [vrnetlab](../vrnetlab.md) project. Interfaces uses the following mapping rules (in topology file):

A Dell FTOSv container can have a different number of available interfaces, depending on the platform used for the FTOS10 virtualization .qcow2 disk and the container image built using the [vrnetlab](../vrnetlab.md) project. Interfaces use the following mapping rules (in the topology file):

* `eth0` - management interface connected to the containerlab management network
* `eth1` - first data interface, mapped to first data port of FTOS10v line card
* `eth2+` - second and subsequent data interface

When containerlab launches vr-ftosv node, it will assign IPv4/6 address to the `eth0` interface. These addresses can be used to reach management plane of the router.
When containerlab launches a Dell FTOSv node, it will assign IPv4/6 addresses to the `eth0` interface. These addresses can be used to reach the management plane of the router.

Data interfaces `eth1+` need to be configured with IP addressing manually using CLI/management protocols.


## Features and options

### Node configuration
vr-ftosv nodes come up with a basic configuration where only `admin` user and management interfaces such as SSH provisioned.

Dell FTOSv nodes come up with a basic configuration where only the `admin` user and management interfaces such as SSH are provisioned.

#### Startup configuration

It is possible to make Dell FTOSv nodes boot up with a user-defined startup-config instead of a built-in one. With the [`startup-config`](../nodes.md#startup-config) property of the node/kind, a user sets the path to the config file that will be mounted to a container and used as a startup-config:

```yaml
topology:
  nodes:
    node:
      kind: vr-ftosv
      kind: dell_ftosv
      startup-config: myconfig.txt
```

20 changes: 11 additions & 9 deletions docs/manual/kinds/vr-n9kv.md
@@ -4,11 +4,11 @@ search:
---
# Cisco Nexus 9000v

Cisco Nexus9000v virtualized router is identified with `vr-n9kv` or `vr-cisco_n9kv` kind in the [topology file](../topo-def-file.md). It is built using [vrnetlab](../vrnetlab.md) project and essentially is a Qemu VM packaged in a docker container format.
The Cisco Nexus 9000v virtualized router is identified with the `cisco_n9kv` kind in the [topology file](../topo-def-file.md). It is built using the [vrnetlab](../vrnetlab.md) project and is essentially a Qemu VM packaged in a docker container format.

vr-n9kv nodes launched with containerlab comes up pre-provisioned with SSH, SNMP, NETCONF, NXAPI and gRPC services enabled.
Cisco Nexus 9000v nodes launched with containerlab come up pre-provisioned with SSH, SNMP, NETCONF, NXAPI and gRPC services enabled.

## Managing vr-n9kv nodes
## Managing Cisco Nexus 9000v nodes

!!!note
Containers with Nexus 9000v inside will take ~8-10min to fully boot.
@@ -17,7 +17,7 @@ vr-n9kv nodes launched with containerlab comes up pre-provisioned with SSH, SNMP
Cisco Nexus 9000v node launched with containerlab can be managed via the following interfaces:

=== "bash"
to connect to a `bash` shell of a running vr-n9kv container:
to connect to a `bash` shell of a running Cisco Nexus 9000v container:
```bash
docker exec -it <container-name/id> bash
```
@@ -38,30 +38,32 @@ Cisco Nexus 9000v node launched with containerlab can be managed via the followi
Default user credentials: `admin:admin`

## Interfaces mapping
vr-n9kv container can have up to 128 interfaces and uses the following mapping rules:

A Cisco Nexus 9000v container can have up to 128 interfaces and uses the following mapping rules:

* `eth0` - management interface connected to the containerlab management network
* `eth1` - first data interface, mapped to first data port of Nexus 9000v line card
* `eth2+` - second and subsequent data interface

When containerlab launches vr-n9kv node, it will assign IPv4/6 address to the `eth0` interface. These addresses can be used to reach management plane of the router.
When containerlab launches a Cisco Nexus 9000v node, it will assign IPv4/6 addresses to the `eth0` interface. These addresses can be used to reach the management plane of the router.

Data interfaces `eth1+` need to be configured with IP addressing manually using CLI/management protocols.


## Features and options

### Node configuration
vr-n9kv nodes come up with a basic configuration where only `admin` user and management interfaces such as NETCONF, NXAPI and GRPC provisioned.

Cisco Nexus 9000v nodes come up with a basic configuration where only the `admin` user and management interfaces such as NETCONF, NXAPI and gRPC are provisioned.

#### Startup configuration

It is possible to make Nexus 9000v nodes boot up with a user-defined startup-config instead of a built-in one. With the [`startup-config`](../nodes.md#startup-config) property of the node/kind, a user sets the path to the config file that will be mounted to a container and used as a startup-config:

```yaml
topology:
  nodes:
    node:
      kind: vr-n9kv
      kind: cisco_n9kv
      startup-config: myconfig.txt
```
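
When several Nexus 9000v nodes share the same image and startup-config, these settings can also be defined once at the kind level using the new kind name. A sketch, assuming kind-level defaults apply here as for other node properties; the image tag and config file names are placeholders:

```yaml
name: n9kv-defaults
topology:
  kinds:
    cisco_n9kv:
      image: vrnetlab/vr-n9kv:9.3.9   # placeholder image reference
      startup-config: n9kv-base.cfg   # placeholder startup-config file
  nodes:
    leaf1:
      kind: cisco_n9kv
    leaf2:
      kind: cisco_n9kv
  links:
    - endpoints: ["leaf1:eth1", "leaf2:eth1"]
```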

62 changes: 0 additions & 62 deletions docs/manual/kinds/vr-nxos.md

This file was deleted.

18 changes: 9 additions & 9 deletions docs/manual/kinds/vr-pan.md
@@ -4,11 +4,11 @@ search:
---
# Palo Alto PA-VM

Palo Alto PA-VM virtualized firewall is identified with `vr-pan` or `vr-paloalto_panos` kind in the [topology file](../topo-def-file.md). It is built using [boxen](https://github.com/carlmontanari/boxen/) project and essentially is a Qemu VM packaged in a docker container format.
The Palo Alto PA-VM virtualized firewall is identified with the `paloalto_panos` kind in the [topology file](../topo-def-file.md). It is built using the [boxen](https://github.com/carlmontanari/boxen/) project and is essentially a Qemu VM packaged in a docker container format.

vr-pan nodes launched with containerlab come up pre-provisioned with SSH, and HTTPS services enabled.
Palo Alto PA-VM nodes launched with containerlab come up pre-provisioned with SSH and HTTPS services enabled.

## Managing vr-pan nodes
## Managing Palo Alto PA-VM nodes

!!!note
Containers with Palo Alto PA-VM inside will take ~8min to fully boot.
@@ -17,7 +17,7 @@ vr-pan nodes launched with containerlab come up pre-provisioned with SSH, and HT
Palo Alto PA-VM node launched with containerlab can be managed via the following interfaces:

=== "bash"
to connect to a `bash` shell of a running vr-pan container:
to connect to a `bash` shell of a running Palo Alto PA-VM container:
```bash
docker exec -it <container-name/id> bash
```
@@ -34,13 +34,13 @@ Palo Alto PA-VM node launched with containerlab can be managed via the following

## Interfaces mapping

vr-pan container supports up to 24 interfaces (plus mgmt) and uses the following mapping rules:
Palo Alto PA-VM container supports up to 24 interfaces (plus mgmt) and uses the following mapping rules:

* `eth0` - management interface connected to the containerlab management network
* `eth1` - first data interface, mapped to first data port of PAN VM
* `eth2+` - second and subsequent data interface

When containerlab launches vr-pan node, it will assign IPv4/6 address to the `mgmt` interface. These addresses can be used to reach management plane of the router.
When containerlab launches a Palo Alto PA-VM node, it will assign IPv4/6 addresses to the `mgmt` interface. These addresses can be used to reach the management plane of the firewall.

Data interfaces `eth1+` need to be configured with IP addressing manually using CLI/management protocols.

@@ -51,17 +51,17 @@ Data interfaces `eth1+` need to be configured with IP addressing manually using

### Node configuration

vr-pan nodes come up with a basic configuration where only `admin` user and management interface is provisioned.
Palo Alto PA-VM nodes come up with a basic configuration where only the `admin` user and the management interface are provisioned.

### User defined config

It is possible to make `vr-pan` nodes to boot up with a user-defined config instead of a built-in one. With a [`startup-config`](../nodes.md#startup-config) property a user sets the path to the config file that will be mounted to a container and used as a startup config:
It is possible to make Palo Alto PA-VM nodes boot up with a user-defined config instead of a built-in one. With the [`startup-config`](../nodes.md#startup-config) property, a user sets the path to the config file that will be mounted to a container and used as a startup config:

```yaml
name: lab
topology:
  nodes:
    ceos:
      kind: vr-paloalto_panos
      kind: paloalto_panos
      startup-config: myconfig.conf
```