Support for vr-aoscx #1488

Merged (2 commits) on Jul 28, 2023
1 change: 1 addition & 0 deletions README.md
@@ -42,6 +42,7 @@ In addition to native containerized NOSes, containerlab can launch traditional v
* [Palo Alto PAN](https://containerlab.dev/manual/kinds/vr-pan)
* [IPInfusion OcNOS](https://containerlab.dev/manual/kinds/ipinfusion-ocnos)
* [Check Point Cloudguard](https://containerlab.dev/manual/kinds/checkpoint_cloudguard/)
* [Aruba AOS-CX](https://containerlab.dev/manual/kinds/vr-aoscx)

And, of course, containerlab is perfectly capable of wiring up arbitrary linux containers which can host your network applications, virtual functions or simply be a test client. With all that, containerlab provides a single IaaC interface to manage labs which can span all the needed variants of nodes:

2 changes: 2 additions & 0 deletions clab/register.go
@@ -21,6 +21,7 @@ import (
rare "github.com/srl-labs/containerlab/nodes/rare"
sonic "github.com/srl-labs/containerlab/nodes/sonic"
srl "github.com/srl-labs/containerlab/nodes/srl"
vr_aoscx "github.com/srl-labs/containerlab/nodes/vr_aoscx"
vr_csr "github.com/srl-labs/containerlab/nodes/vr_csr"
vr_ftosv "github.com/srl-labs/containerlab/nodes/vr_ftosv"
vr_n9kv "github.com/srl-labs/containerlab/nodes/vr_n9kv"
@@ -52,6 +53,7 @@ func (c *CLab) RegisterNodes() {
ovs.Register(c.Reg)
sonic.Register(c.Reg)
srl.Register(c.Reg)
vr_aoscx.Register(c.Reg)
vr_csr.Register(c.Reg)
vr_ftosv.Register(c.Reg)
vr_n9kv.Register(c.Reg)
1 change: 1 addition & 0 deletions docs/index.md
@@ -44,6 +44,7 @@ In addition to native containerized NOSes, containerlab can launch traditional v
* [Palo Alto PAN](manual/kinds/vr-pan.md)
* [IPInfusion OcNOS](manual/kinds/ipinfusion-ocnos.md)
* [Check Point Cloudguard](manual/kinds/checkpoint_cloudguard.md)
* [Aruba AOS-CX](manual/kinds/vr-aoscx.md)

And, of course, containerlab is perfectly capable of wiring up arbitrary linux containers which can host your network applications, virtual functions or simply be a test client. With all that, containerlab provides a single IaaC interface to manage labs which can span all the needed variants of nodes:

1 change: 1 addition & 0 deletions docs/manual/kinds/index.md
@@ -57,5 +57,6 @@ Within each predefined kind, we store the necessary information that is used to
| **OvS bridge** | [`ovs-bridge`](ovs-bridge.md) | supported | N/A |
| **mysocketio node** | [`mysocketio`](../published-ports.md) | supported | N/A |
| **RARE/freeRtr node** | [`rare`](rare-freertr.md) | supported | container |
| **Aruba ArubaOS-CX** | [`vr-aoscx/vr-aruba_aoscx`](vr-aoscx.md) | supported | VM |

Refer to a specific kind documentation article for kind-specific details.
61 changes: 61 additions & 0 deletions docs/manual/kinds/vr-aoscx.md
@@ -0,0 +1,61 @@
---
search:
boost: 4
---
# Aruba ArubaOS-CX

The ArubaOS-CX virtualized switch is identified with the `vr-aoscx` or `vr-aruba_aoscx` kind in the [topology file](../topo-def-file.md). It is built using the [vrnetlab](../vrnetlab.md) project and is essentially a Qemu VM packaged in a docker container format.

## Managing vr-aoscx nodes

!!!note
Containers with AOS-CX inside will take ~2min to fully boot.
You can monitor the progress with `docker logs -f <container-name>`.

Aruba AOS-CX node launched with containerlab can be managed via the following interfaces:

=== "bash"
to connect to a `bash` shell of a running vr-aoscx container:
```bash
docker exec -it <container-name/id> bash
```
=== "CLI via SSH"
to connect to the AOS-CX CLI (password `admin`):
```bash
ssh admin@<container-name/id>
```

!!!info
Default user credentials: `admin:admin`

## Interfaces mapping

* `eth0` - management interface connected to the containerlab management network
* `eth1+` - second and subsequent data interfaces

When containerlab launches a vr-aoscx node, it assigns an IPv4 address to the `eth0` interface. This address can be used to reach the management plane of the node.

Data interfaces `eth1+` need to be configured with IP addressing manually using the CLI or management protocols.
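
As a minimal sketch of this mapping (node names and the image tag are hypothetical), a two-node topology wiring the first data interfaces of two vr-aoscx nodes back to back could look like:

```yaml
name: aoscx-lab
topology:
  nodes:
    sw1:
      kind: vr-aoscx
      image: vrnetlab/vr-aoscx:latest   # hypothetical image reference
    sw2:
      kind: vr-aoscx
      image: vrnetlab/vr-aoscx:latest
  links:
    # eth1 on each container maps to the first data port of the AOS-CX VM
    - endpoints: ["sw1:eth1", "sw2:eth1"]
```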

## Features and options

### Node configuration

vr-aoscx nodes come up with a basic configuration where only the control plane and line cards are provisioned, as well as the `admin` user with the provided password.

#### Startup configuration

It is possible to make ArubaOS-CX nodes boot up with a user-defined startup-config instead of the built-in one. With the [`startup-config`](../nodes.md#startup-config) property of the node/kind, a user sets the path to the config file that will be mounted to the container and used as a startup-config:

```yaml
topology:
nodes:
node:
kind: vr-aoscx
startup-config: myconfig.txt
```

With this knob containerlab is instructed to take the file `myconfig.txt` from the directory that hosts the topology file and copy it into the lab directory of that specific node under the name `/config/startup-config.cfg`. The directory hosting the startup config is then mounted to the container, which results in this config being applied at startup by the node.

The configuration is applied after the node has started, so it can contain partial configuration snippets that you wish to add on top of the default config the node boots up with.
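
As an illustrative sketch of such a partial snippet (the exact CLI syntax should be verified against your AOS-CX software version), `myconfig.txt` could contain something like:

```
! hypothetical partial AOS-CX config snippet
hostname lab-sw1
interface 1/1/1
    no shutdown
    ip address 192.0.2.1/24
```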

1 change: 1 addition & 0 deletions mkdocs.yml
@@ -24,6 +24,7 @@ nav:
- Cisco Nexus 9000v: manual/kinds/vr-n9kv.md
- Cisco 8000: manual/kinds/c8000.md
- Cumulus VX: manual/kinds/cvx.md
- Aruba AOS-CX: manual/kinds/vr-aoscx.md
- SONiC: manual/kinds/sonic-vs.md
- Dell FTOS10v: manual/kinds/vr-ftosv.md
- MikroTik RouterOS: manual/kinds/vr-ros.md
80 changes: 80 additions & 0 deletions nodes/vr_aoscx/vr-aoscx.go
@@ -0,0 +1,80 @@
package vr_aoscx

import (
"context"
"fmt"
"path"

"github.com/srl-labs/containerlab/nodes"
"github.com/srl-labs/containerlab/types"
"github.com/srl-labs/containerlab/utils"
)

var (
kindnames = []string{"vr-aoscx", "vr-aruba_aoscx"}
defaultCredentials = nodes.NewCredentials("admin", "admin")
)

const (
configDirName = "config"
startupCfgFName = "startup-config.cfg"
)

// Register registers the node in the NodeRegistry.
func Register(r *nodes.NodeRegistry) {
r.Register(kindnames, func() nodes.Node {
return new(vrAosCX)
}, defaultCredentials)
}

type vrAosCX struct {
nodes.DefaultNode
}

func (n *vrAosCX) Init(cfg *types.NodeConfig, opts ...nodes.NodeOption) error {
// Init DefaultNode
n.DefaultNode = *nodes.NewDefaultNode(n)
// set virtualization requirement
n.HostRequirements.VirtRequired = true

n.Cfg = cfg
for _, o := range opts {
o(n)
}
// env vars are used to set launch.py arguments in vrnetlab container
defEnv := map[string]string{
"CONNECTION_MODE": nodes.VrDefConnMode,
"USERNAME": defaultCredentials.GetUsername(),
"PASSWORD": defaultCredentials.GetPassword(),
"DOCKER_NET_V4_ADDR": n.Mgmt.IPv4Subnet,
"DOCKER_NET_V6_ADDR": n.Mgmt.IPv6Subnet,
}
n.Cfg.Env = utils.MergeStringMaps(defEnv, n.Cfg.Env)

// mount config dir to support startup-config functionality
n.Cfg.Binds = append(n.Cfg.Binds, fmt.Sprint(path.Join(n.Cfg.LabDir, configDirName), ":/config"))

if n.Cfg.Env["CONNECTION_MODE"] == "macvtap" {
// mount dev dir to enable macvtap
n.Cfg.Binds = append(n.Cfg.Binds, "/dev:/dev")
}

n.Cfg.Cmd = fmt.Sprintf("--username %s --password %s --hostname %s --connection-mode %s --trace",
defaultCredentials.GetUsername(), defaultCredentials.GetPassword(), n.Cfg.ShortName, n.Cfg.Env["CONNECTION_MODE"])

return nil
}

func (n *vrAosCX) PreDeploy(_ context.Context, params *nodes.PreDeployParams) error {
utils.CreateDirectory(n.Cfg.LabDir, 0777)
_, err := n.LoadOrGenerateCertificate(params.Cert, params.TopologyName)
if err != nil {
// propagate certificate generation errors instead of swallowing them
return err
}
return nodes.LoadStartupConfigFileVr(n, configDirName, startupCfgFName)
}

// CheckInterfaceName checks if the name of an interface referenced in the topology file is correct.
func (n *vrAosCX) CheckInterfaceName() error {
return nodes.GenericVMInterfaceCheck(n.Cfg.ShortName, n.Cfg.Endpoints)
}
10 changes: 9 additions & 1 deletion schemas/clab.schema.json
@@ -73,6 +73,8 @@
"vr-cisco_n9kv",
"vr-ftosv",
"vr-dell_ftosv",
"vr-aoscx",
"vr-aruba_aoscx",
"linux",
"bridge",
"ovs-bridge",
@@ -636,6 +638,12 @@
"vr-vsrx": {
"$ref": "#/definitions/node-config"
},
"vr-aruba_aoscx": {
"$ref": "#/definitions/node-config"
},
"vr-aoscx": {
"$ref": "#/definitions/node-config"
},
"vr-cisco_xrv": {
"$ref": "#/definitions/node-config"
},
@@ -727,4 +735,4 @@
"name",
"topology"
]
}
}