Containerlab Service Deploy Netbox Task¤

task api name: deploy_netbox

The Containerlab service deploy_netbox task is designed to deploy network topologies using device data retrieved from Netbox. The task automates the deployment process by fetching node and link data from Netbox, constructing the topology file, organizing it into a dedicated folder structure, and executing the containerlab deploy command with the appropriate arguments.

Containerlab Deploy Netbox Task Overview¤

The deploy_netbox task provides the following features:

  • Automated Topology Deployment: Deploys a topology by sourcing node and link data from Netbox using one or more of the following:

    • Netbox Devices List - use the provided device names to construct the topology and deploy the lab
    • Netbox Tenant - source all devices for a given tenant and deploy the lab
    • Netbox Device Filters - fetch device data from the Netbox GraphQL API and deploy the lab
  • Topology Links Sourcing - links are formed using Netbox device connections and circuits data.

  • Reconfiguration: Supports reconfiguring an already deployed lab.
  • Node Filtering: Allows deploying specific nodes using a filter.

How It Works¤

The deploy_netbox task uses the Netbox service get_containerlab_inventory task to fetch topology inventory data from Netbox.

Containerlab Deploy Netbox

  1. Client submits request to Containerlab service to deploy a lab

  2. Containerlab worker sends job request to Netbox service to retrieve topology data for requested devices

  3. Netbox service fetches data from Netbox and constructs the Containerlab inventory

  4. Netbox service returns lab inventory data back to the Containerlab worker

  5. Containerlab worker deploys lab using topology data provided by Netbox service
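The same workflow can be previewed end-to-end without deploying anything by setting the documented dry_run argument, which stops after step 4 and returns the inventory fetched from Netbox. Below is a sketch following the complete Python example later on this page; the inventory.yaml path is an assumption:

```python
import pprint

from norfab.core.nfapi import NorFab

if __name__ == "__main__":
    # start NorFab and obtain a client, as in the complete example below
    nf = NorFab(inventory="inventory.yaml")
    nf.start()
    client = nf.make_client()

    # dry_run=True runs steps 1-4 of the workflow only, returning the
    # Containerlab inventory built by the Netbox service without
    # deploying the lab (step 5 is skipped)
    res = client.run_job(
        service="containerlab",
        task="deploy_netbox",
        kwargs={
            "devices": ["fceos4", "fceos5"],
            "lab_name": "foobar",
            "dry_run": True,
        },
    )
    pprint.pprint(res)

    nf.destroy()
```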

Device Config Context Containerlab Parameters¤

Containerlab node details can be defined under the device configuration context norfab.containerlab path, for example:

{
    "norfab": {
        "containerlab": {
            "kind": "ceos",
            "image": "ceos:latest",
            "mgmt-ipv4": "172.100.100.10/24",
            "ports": [
                {"10000": 22},
                {"10001": 830}
            ],

            ... any other containerlab node parameters ...

            "interfaces_rename": [
                {
                    "find": "Ethernet",
                    "replace": "eth",
                    "use_regex": false
                }
            ]
        }
    }
}
  • interfaces_rename - a list of one or more interface renaming instructions; each item must have find and replace defined, and the optional use_regex flag specifies whether to use regex-based pattern substitution
  • kind - uses the Netbox device platform field value by default
  • image - uses the image value if provided, otherwise defaults to {kind}:latest
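For illustration, the renaming instructions above can be applied as shown below. This is a minimal sketch, not the actual NorFab implementation, and the apply_interfaces_rename helper name is hypothetical:

```python
import re


def apply_interfaces_rename(interface: str, rules: list) -> str:
    """Apply find/replace renaming rules to an interface name.

    Each rule is a dict with "find", "replace" and an optional
    "use_regex" flag, mirroring the config context structure above.
    """
    for rule in rules:
        if rule.get("use_regex"):
            # regex-based pattern substitution
            interface = re.sub(rule["find"], rule["replace"], interface)
        else:
            # plain string substitution
            interface = interface.replace(rule["find"], rule["replace"])
    return interface


rules = [{"find": "Ethernet", "replace": "eth", "use_regex": False}]
print(apply_interfaces_rename("Ethernet1", rules))  # eth1
```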

Containerlab Deploy Netbox Task Sample Usage¤

Below is an example of how to use the Containerlab deploy Netbox task to deploy a topology.

Examples

Containerlab Deploy Netbox Demo

nf#containerlab deploy-netbox lab-name foobar devices fceos4 fceos5
--------------------------------------------- Job Events -----------------------------------------------
31-May-2025 13:02:29.525 9e3b29210e1140f8b3a311e8c4669ca4 job started
31-May-2025 13:02:29.533 INFO containerlab-worker-1 running containerlab.deploy_netbox  - Checking existing containers
31-May-2025 13:02:29.573 INFO containerlab-worker-1 running containerlab.deploy_netbox  - Existing containers found, retrieving details
31-May-2025 13:02:29.574 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 172.100.100.0/24 already in use, allocating new subnet
31-May-2025 13:02:29.575 INFO containerlab-worker-1 running containerlab.deploy_netbox  - Collected TCP/UDP ports used by existing containers
31-May-2025 13:02:29.576 INFO containerlab-worker-1 running containerlab.deploy_netbox  - foobar allocated new subnet 172.100.102.0/24
31-May-2025 13:02:29.576 INFO containerlab-worker-1 running containerlab.deploy_netbox  - foobar fetching lab topology data from Netbox
31-May-2025 13:02:31.090 INFO containerlab-worker-1 running containerlab.deploy_netbox  - foobar topology data retrieved from Netbox
31-May-2025 13:02:31.094 INFO containerlab-worker-1 running containerlab.deploy_netbox  - foobar topology data saved to 
'/home/norfab/norfab/tests/nf_containerlab/__norfab__/files/worker/containerlab-worker-1/topologies/foobar/foobar.yaml'
31-May-2025 13:02:31.095 INFO containerlab-worker-1 running containerlab.deploy_netbox  - foobar deploying lab using foobar.yaml topology file
31-May-2025 13:02:31.123 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 13:02:31 INFO Containerlab started version=0.67.0
31-May-2025 13:02:31.134 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 13:02:31 INFO Parsing & checking topology file=foobar.yaml
31-May-2025 13:02:31.145 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 13:02:31 INFO Creating docker network name=br-foobar IPv4 subnet=172.100.102.0/24 IPv6 subnet="" MTU=0
31-May-2025 13:02:31.257 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 13:02:31 INFO Creating lab directory 
path=/home/norfab/norfab/tests/nf_containerlab/__norfab__/files/worker/containerlab-worker-1/topologies/foobar/clab-foobar
31-May-2025 13:02:31.268 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 13:02:31 INFO config file 
'/home/norfab/norfab/tests/nf_containerlab/__norfab__/files/worker/containerlab-worker-1/topologies/foobar/clab-foobar/fceos4/flash/startup-config' for node 'fceos4' already exists and will not be generated/reset
31-May-2025 13:02:31.280 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 13:02:31 INFO Creating container name=fceos4
31-May-2025 13:02:31.291 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 13:02:31 INFO config file 
'/home/norfab/norfab/tests/nf_containerlab/__norfab__/files/worker/containerlab-worker-1/topologies/foobar/clab-foobar/fceos5/flash/startup-config' for node 'fceos5' already exists and will not be generated/reset
31-May-2025 13:02:31.302 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 13:02:31 INFO Creating container name=fceos5
31-May-2025 13:02:32.065 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 13:02:32 INFO Created link: fceos5:eth1 ▪┄┄▪ fceos4:eth1
31-May-2025 13:02:32.220 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 13:02:32 INFO Created link: fceos5:eth2 ▪┄┄▪ fceos4:eth2
31-May-2025 13:02:32.592 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 13:02:32 INFO Created link: fceos5:eth3 ▪┄┄▪ fceos4:eth3
31-May-2025 13:02:32.744 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 13:02:32 INFO Created link: fceos5:eth4 ▪┄┄▪ fceos4:eth4
31-May-2025 13:02:32.844 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 13:02:32 INFO Created link: fceos5:eth6 ▪┄┄▪ fceos4:eth6
31-May-2025 13:02:32.953 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 13:02:32 INFO Created link: fceos5:eth7 ▪┄┄▪ fceos4:eth7
31-May-2025 13:02:32.964 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 13:02:32 INFO Running postdeploy actions for Arista cEOS 'fceos5' node
31-May-2025 13:02:33.005 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 13:02:33 INFO Created link: fceos5:eth8 ▪┄┄▪ fceos4:eth101
31-May-2025 13:02:33.053 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 13:02:33 INFO Created link: fceos5:eth11 ▪┄┄▪ fceos4:eth11
31-May-2025 13:02:33.064 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 13:02:33 INFO Running postdeploy actions for Arista cEOS 'fceos4' node
31-May-2025 13:02:54.730 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 13:02:54 INFO Adding host entries path=/etc/hosts
31-May-2025 13:02:54.742 INFO containerlab-worker-1 running containerlab.deploy_netbox  - 13:02:54 INFO Adding SSH config for nodes path=/etc/ssh/ssh_config.d/clab-foobar.conf
31-May-2025 13:02:54.859 9e3b29210e1140f8b3a311e8c4669ca4 job completed in 25.334 seconds

--------------------------------------------- Job Results --------------------------------------------

containerlab-worker-1:
    ----------
    containers:
        |_
        ----------
        lab_name:
            foobar
        labPath:
            foobar.yaml
        name:
            clab-foobar-fceos4
        container_id:
            c24aa0089eca
        image:
            ceosimage:4.30.0F
        kind:
            ceos
        state:
            running
        ipv4_address:
            172.100.102.2/24
        ipv6_address:
            N/A
        owner:
            norfab
        |_
        lab_name:
            foobar
        labPath:
            foobar.yaml
        name:
            clab-foobar-fceos5
        container_id:
            2098290d7a79
        image:
            ceosimage:4.30.0F
        kind:
            ceos
        state:
            running
        ipv4_address:
            172.100.102.3/24
        ipv6_address:
            N/A
        owner:
            norfab
nf#show containerlab labs
containerlab-worker-1:
    - foobar
nf#

In this example:

  • nfcli command starts the NorFab Interactive Shell
  • containerlab command switches to the Containerlab sub-shell
  • deploy-netbox command instructs the Containerlab service to deploy a topology
  • devices specifies the list of devices to fetch data and links for from Netbox

This code is complete and can run as is.

import pprint

from norfab.core.nfapi import NorFab

if __name__ == '__main__':
    nf = NorFab(inventory="inventory.yaml")
    nf.start()

    client = nf.make_client()

    res = client.run_job(
        service="containerlab",
        task="deploy_netbox",
        kwargs={
            "devices": ["fceos4", "fceos5"],
            "lab_name": "foobar"
        }
    )

    pprint.pprint(res)

    nf.destroy()

Reconfiguring an Existing Lab¤

The deploy_netbox task supports reconfiguring an already deployed lab by using the reconfigure argument. This allows you to update the lab configuration without destroying and redeploying it.
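Using the Python API, a redeploy can be requested by passing reconfigure=True. Below is a minimal sketch that reuses the client object created in the complete example above:

```python
# re-deploy the existing "foobar" lab; reconfigure=True destroys
# the lab and deploys it again with freshly rendered topology data
res = client.run_job(
    service="containerlab",
    task="deploy_netbox",
    kwargs={
        "devices": ["fceos4", "fceos5"],
        "lab_name": "foobar",
        "reconfigure": True,
    },
)
```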

Filtering Nodes for Deployment¤

The deploy_netbox task allows you to deploy specific nodes in a topology using the node_filter argument. This is useful for testing or updating specific parts of a lab without affecting the entire topology.
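For example, a subset of nodes can be deployed by passing node_filter as a comma-separated string. Below is a sketch that reuses the client object from the complete example above:

```python
# deploy only fceos4 out of the two-node lab topology
res = client.run_job(
    service="containerlab",
    task="deploy_netbox",
    kwargs={
        "devices": ["fceos4", "fceos5"],
        "lab_name": "foobar",
        "node_filter": "fceos4",  # comma-separated node names
    },
)
```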

NORFAB Containerlab CLI Shell Reference¤

Below are the commands supported by the deploy_netbox task:

nf#man tree containerlab.deploy-netbox
└── containerlab:    Containerlab service
    └── deploy-netbox:    Spins up a lab using devices data from Netbox
        ├── timeout:    Job timeout
        ├── workers:    Filter worker to target, default 'any'
        ├── verbose-result:    Control output details, default 'False'
        ├── lab-name:    Lab name to generate lab inventory for
        ├── tenant:    Tenant name to generate lab inventory for
        ├── filters:    Netbox device filters to generate lab inventory for
        │   ├── tenant:    Filter devices by tenants
        │   ├── device-name-contains:    Filter devices by name pattern
        │   ├── model:    Filter devices by models
        │   ├── platform:    Filter devices by platforms
        │   ├── region:    Filter devices by regions
        │   ├── role:    Filter devices by roles
        │   ├── site:    Filter devices by sites
        │   ├── status:    Filter devices by statuses
        │   └── tag:    Filter devices by tags
        ├── devices:    List of devices to generate lab inventory for
        ├── progress:    Display progress events, default 'True'
        ├── netbox-instance:    Name of Netbox instance to pull inventory from
        ├── ipv4-subnet:    IPv4 management subnet to use for lab, default '172.100.100.0/24'
        ├── image:    Docker image to use for all nodes
        ├── ports:    Range of TCP/UDP ports to use for nodes, default '[12000, 13000]'
        ├── reconfigure:    Destroy the lab and then re-deploy it, default 'False'
        └── dry-run:    Do not deploy, only fetch inventory from Netbox
nf#

* - mandatory/required command argument

Python API Reference¤

Deploys a containerlab topology using device data from the Netbox database.

This method orchestrates the deployment of a containerlab topology by:

  • Inspecting existing containers to determine subnets and ports in use.
  • Allocating a management IPv4 subnet for the new lab, avoiding conflicts.
  • Downloading inventory data from Netbox for the specified lab and filters.
  • Saving the generated topology file to a dedicated folder.
  • Executing the containerlab deploy command with appropriate arguments.

To retrieve topology data from Netbox, at least one of these arguments must be provided to identify the set of devices to include in the Containerlab topology:

  • tenant - deploys the lab using all devices and links that belong to the given tenant
  • devices - deploys the lab using only the devices in the list
  • filters - list of device filters used to retrieve devices from Netbox

If more than one of the above arguments is provided, the resulting lab topology is the union of all devices matched.
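Because matched devices are combined, the source arguments can be mixed in a single call. The sketch below reuses the client object from the complete example above; the tenant name is hypothetical and the filter key names mirror the CLI reference on this page, though the exact filter structure accepted by your Netbox instance should be verified:

```python
# resulting topology is the union of: all "demo" tenant devices,
# the explicitly listed devices, and devices matched by the filters
res = client.run_job(
    service="containerlab",
    task="deploy_netbox",
    kwargs={
        "lab_name": "foobar",
        "tenant": "demo",  # hypothetical tenant name
        "devices": ["fceos4"],
        "filters": [{"platform": "eos", "status": "active"}],
    },
)
```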

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| lab_name | str | The name to use for the lab to deploy. | None |
| tenant | str | Deploy lab for given tenant; lab name, if not set, becomes equal to tenant name. | None |
| filters | list | List of filters to apply when fetching devices from Netbox. | None |
| devices | list | List of specific devices to include in the topology. | None |
| instance | str | Netbox instance identifier. | None |
| image | str | Container image to use for devices. | None |
| ipv4_subnet | str | Management IPv4 subnet for the lab. | '172.100.100.0/24' |
| ports | tuple | Tuple specifying the range of ports to allocate. | (12000, 15000) |
| progress | bool | If True, emits progress events. | False |
| reconfigure | bool | If True, reconfigures an already deployed lab. | False |
| timeout | int | Timeout in seconds for the deployment process. | 600 |
| node_filter | str | Comma-separated string of nodes to deploy. | None |
| dry_run | bool | If True, only generates and returns the topology inventory without deploying. | False |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| Result | Result | Deployment results with a list of nodes deployed. |

Raises:

| Type | Description |
| --- | --- |
| Exception | If the topology file cannot be fetched. |

Source code in norfab\workers\containerlab_worker.py
def deploy_netbox(
    self,
    lab_name: str = None,
    tenant: str = None,
    filters: list = None,
    devices: list = None,
    instance: str = None,
    image: str = None,
    ipv4_subnet: str = "172.100.100.0/24",
    ports: tuple = (12000, 15000),
    progress: bool = False,
    reconfigure: bool = False,
    timeout: int = 600,
    node_filter: str = None,
    dry_run: bool = False,
) -> Result:
    """
    Deploys a containerlab topology using device data from the Netbox database.

    This method orchestrates the deployment of a containerlab topology by:

    - Inspecting existing containers to determine subnets and ports in use.
    - Allocating a management IPv4 subnet for the new lab, avoiding conflicts.
    - Downloading inventory data from Netbox for the specified lab and filters.
    - Saving the generated topology file to a dedicated folder.
    - Executing the `containerlab deploy` command with appropriate arguments.

    To retrieve topology data from Netbox at least one of these arguments must be provided
    to identify a set of devices to include into Containerlab topology:

    - `tenant` - deploys lab using all devices and links that belong to this tenant
    - `devices` - lab deployed only using devices in the lists
    - `filters` - list of device filters to retrieve from Netbox

    If multiple of above arguments provided, resulting lab topology is a sum of all
    devices matched.

    Args:
        lab_name (str, optional): The name to use for the lab to deploy.
        tenant (str, optional): Deploy lab for given tenant, lab name if not set
            becomes equal to tenant name.
        filters (list, optional): List of filters to apply when fetching devices from Netbox.
        devices (list, optional): List of specific devices to include in the topology.
        instance (str, optional): Netbox instance identifier.
        image (str, optional): Container image to use for devices.
        ipv4_subnet (str, optional): Management IPv4 subnet for the lab.
        ports (tuple, optional): Tuple specifying the range of ports to allocate.
        progress (bool, optional): If True, emits progress events.
        reconfigure (bool, optional): If True, reconfigures an already deployed lab.
        timeout (int, optional): Timeout in seconds for the deployment process.
        node_filter (str, optional): Comma-separated string of nodes to deploy.
        dry_run (bool, optional): If True, only generates and returns the topology
            inventory without deploying.

    Returns:
        Result: deployment results with a list of nodes deployed

    Raises:
        Exception: If the topology file cannot be fetched.
    """
    ret = Result(task=f"{self.name}:deploy_netbox")
    subnets_in_use = set()
    ports_in_use = {}

    # handle lab name and tenant name
    if lab_name is None and tenant:
        lab_name = tenant

    # inspect existing containers
    if progress:
        self.event(f"Checking existing containers")
    get_containers = self.inspect(details=True)
    if get_containers.failed is False:
        if progress:
            self.event(f"Existing containers found, retrieving details")
        for container in get_containers.result:
            clab_name = container["Labels"]["containerlab"]
            clab_topo = container["Labels"]["clab-topo-file"]
            node_name = container["Labels"]["clab-node-name"]
            # collect ports that are in use
            ports_in_use[node_name] = list(
                set(
                    [
                        f"{p['host_port']}:{p['port']}/{p['protocol']}"
                        for p in container["Ports"]
                        if "host_port" in p and "port" in p and "protocol" in p
                    ]
                )
            )
            # check existing subnets
            if (
                container["NetworkSettings"]["IPv4addr"]
                and container["NetworkSettings"]["IPv4pLen"]
            ):
                ip = ipaddress.ip_interface(
                    f"{container['NetworkSettings']['IPv4addr']}/"
                    f"{container['NetworkSettings']['IPv4pLen']}"
                )
                subnet = str(ip.network.with_prefixlen)
            else:
                with open(clab_topo, encoding="utf-8") as f:
                    clab_topo_data = yaml.safe_load(f.read())
                    if clab_topo_data.get("mgmt", {}).get("ipv4-subnet"):
                        subnet = clab_topo_data["mgmt"]["ipv4-subnet"]
                    else:
                        msg = f"{clab_name} lab {node_name} node failed to determine mgmt subnet"
                        log.warning(msg)
                        if progress:
                            self.event(msg, severity="WARNING")
                        continue
            subnets_in_use.add(subnet)
            # re-use existing lab subnet
            if clab_name == lab_name:
                ipv4_subnet = subnet
                if progress:
                    self.event(
                        f"{ipv4_subnet} not in use by existing containers, using it"
                    )
            # allocate new subnet if its in use by other lab
            elif clab_name != lab_name and ipv4_subnet == subnet:
                msg = f"{ipv4_subnet} already in use, allocating new subnet"
                log.info(msg)
                if progress:
                    self.event(msg)
                ipv4_subnet = None
        if progress:
            self.event(f"Collected TCP/UDP ports used by existing containers")

    # allocate new subnet
    if ipv4_subnet is None:
        pool = set(f"172.100.{i}.0/24" for i in range(100, 255))
        ipv4_subnet = list(sorted(pool.difference(subnets_in_use)))[0]
        if progress:
            self.event(f"{lab_name} allocated new subnet {ipv4_subnet}")

    if progress:
        self.event(f"{lab_name} fetching lab topology data from Netbox")

    # download inventory data from Netbox
    netbox_reply = self.client.run_job(
        service="netbox",
        task="get_containerlab_inventory",
        workers="any",
        timeout=timeout,
        retry=3,
        kwargs={
            "lab_name": lab_name,
            "tenant": tenant,
            "filters": filters,
            "devices": devices,
            "instance": instance,
            "image": image,
            "ipv4_subnet": ipv4_subnet,
            "ports": ports,
            "ports_map": ports_in_use,
            "progress": progress,
        },
    )

    # use inventory from first worker that returned hosts data
    for wname, wdata in netbox_reply.items():
        if wdata["failed"] is False and wdata["result"]:
            topology_inventory = wdata["result"]
            break
    else:
        msg = f"{self.name} - Netbox returned no data for '{lab_name}' lab"
        log.error(msg)
        raise RuntimeError(msg)

    if progress:
        self.event(f"{lab_name} topology data retrieved from Netbox")

    if dry_run is True:
        ret.result = topology_inventory
        return ret

    # create folder to store topology
    topology_folder = os.path.join(self.topologies_dir, lab_name)
    os.makedirs(topology_folder, exist_ok=True)

    # create topology file
    topology_file = os.path.join(topology_folder, f"{lab_name}.yaml")
    with open(topology_file, "w", encoding="utf-8") as tf:
        tf.write(yaml.dump(topology_inventory, default_flow_style=False))

    if progress:
        self.event(f"{lab_name} topology data saved to '{topology_file}'")

    # form command arguments
    args = ["containerlab", "deploy", "-f", "json", "-t", topology_file]
    if reconfigure is True:
        args.append("--reconfigure")
        if progress:
            self.event(
                f"{lab_name} re-deploying lab using {os.path.split(topology_file)[-1]} topology file"
            )
    else:
        if progress:
            self.event(
                f"{lab_name} deploying lab using {os.path.split(topology_file)[-1]} topology file"
            )
    if node_filter is not None:
        args.append("--node-filter")
        args.append(node_filter)

    # add needed env variables
    env = dict(os.environ)
    env["CLAB_VERSION_CHECK"] = "disable"

    # run containerlab command
    return self.run_containerlab_command(
        args, cwd=topology_folder, timeout=timeout, ret=ret, env=env
    )