Netbox Get Containerlab Inventory Task

task api name: get_containerlab_inventory

This task is designed to provide Containerlab workers with inventory data sourced from Netbox for deploying lab topologies.

Get Containerlab Inventory Sample Usage

Below is an example of how to fetch Containerlab topology inventory data from Netbox for two devices named fceos4 and fceos5.

nf#netbox get containerlab-inventory devices fceos4 fceos5 lab-name foobar
--------------------------------------------- Job Events -----------------------------------------------
31-May-2025 13:10:14.477 7d434ed4e24c4a69af5d52797d7a187e job started
31-May-2025 13:10:14.525 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Fetching devices data from Netbox
31-May-2025 13:10:14.594 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Node added fceos4
31-May-2025 13:10:14.600 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Node added fceos5
31-May-2025 13:10:14.606 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Fetching connections data from Netbox
31-May-2025 13:10:15.211 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Link added fceos5:eth1 - fceos4:eth1
31-May-2025 13:10:15.217 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Link added fceos5:eth2 - fceos4:eth2
31-May-2025 13:10:15.225 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Link added fceos5:eth3 - fceos4:eth3
31-May-2025 13:10:15.232 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Link added fceos5:eth4 - fceos4:eth4
31-May-2025 13:10:15.238 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Link added fceos5:eth6 - fceos4:eth6
31-May-2025 13:10:15.244 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Link added fceos5:eth7 - fceos4:eth7
31-May-2025 13:10:15.250 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Link added fceos5:eth8 - fceos4:eth101
31-May-2025 13:10:15.257 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Link added fceos5:eth11 - fceos4:eth11
31-May-2025 13:10:15.580 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Renaming fceos4 interfaces
31-May-2025 13:10:15.587 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Renaming fceos5 interfaces
31-May-2025 13:10:15.808 7d434ed4e24c4a69af5d52797d7a187e job completed in 1.331 seconds

--------------------------------------------- Job Results --------------------------------------------

netbox-worker-1.1:
  mgmt:
    ipv4-subnet: 172.100.100.0/24
    network: br-foobar
  name: foobar
  topology:
    links:
    - endpoints:
      - interface: eth1
        node: fceos5
      - interface: eth1
        node: fceos4
      type: veth
    - endpoints:
      - interface: eth2
        node: fceos5
      - interface: eth2
        node: fceos4
      type: veth
    - endpoints:
      - interface: eth3
        node: fceos5
      - interface: eth3
        node: fceos4
      type: veth
    - endpoints:
      - interface: eth4
        node: fceos5
      - interface: eth4
        node: fceos4
      type: veth
    - endpoints:
      - interface: eth6
        node: fceos5
      - interface: eth6
        node: fceos4
      type: veth
    - endpoints:
      - interface: eth7
        node: fceos5
      - interface: eth7
        node: fceos4
      type: veth
    - endpoints:
      - interface: eth8
        node: fceos5
      - interface: eth101
        node: fceos4
      type: veth
    - endpoints:
      - interface: eth11
        node: fceos5
      - interface: eth11
        node: fceos4
      type: veth
    nodes:
      fceos4:
        image: ceosimage:4.30.0F
        kind: ceos
        mgmt-ipv4: 172.100.100.2
        ports:
        - 12000:22/tcp
        - 12001:23/tcp
        - 12002:80/tcp
        - 12003:161/udp
        - 12005:830/tcp
        - 12006:8080/tcp
      fceos5:
        image: ceosimage:4.30.0F
        kind: ceos
        mgmt-ipv4: 172.100.100.3
        ports:
        - 12007:22/tcp
        - 12008:23/tcp
        - 12009:80/tcp
        - 12010:161/udp
        - 12011:443/tcp
        - 12012:830/tcp
        - 12013:8080/tcp

nf#
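
The returned inventory is already in Containerlab topology format, so it can be written to a file and deployed directly. Below is a minimal sketch (not part of the task itself); the file name is arbitrary and only a fragment of the inventory shown above is repeated to keep the example self-contained.

import yaml  # pyyaml

# Fragment of the inventory dictionary shown under Job Results above
inventory = {
    "name": "foobar",
    "mgmt": {"ipv4-subnet": "172.100.100.0/24", "network": "br-foobar"},
    "topology": {
        "nodes": {"fceos4": {"kind": "ceos", "image": "ceosimage:4.30.0F"}},
        "links": [],
    },
}

# Write the inventory as a Containerlab topology file
with open("foobar.clab.yml", "w") as f:
    yaml.safe_dump(inventory, f, sort_keys=False)

# The resulting file can then be deployed with: containerlab deploy -t foobar.clab.yml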

NORFAB Netbox Get Containerlab Inventory Command Shell Reference

The NorFab shell supports these command options for the Netbox get_containerlab_inventory task:

nf#man tree netbox.get.containerlab-inventory
root
└── netbox:    Netbox service
    └── get:    Query data from Netbox
        └── containerlab-inventory:    Query Netbox and construct Containerlab inventory
            ├── timeout:    Job timeout
            ├── workers:    Filter worker to target, default 'any'
            ├── verbose-result:    Control output details, default 'False'
            ├── lab-name:    Lab name to generate lab inventory for
            ├── tenant:    Tenant name to generate lab inventory for
            │   ├── tenant:    Filter devices by tenants
            │   ├── device-name-contains:    Filter devices by name pattern
            │   ├── model:    Filter devices by models
            │   ├── platform:    Filter devices by platforms
            │   ├── region:    Filter devices by regions
            │   ├── role:    Filter devices by roles
            │   ├── site:    Filter devices by sites
            │   ├── status:    Filter devices by statuses
            │   └── tag:    Filter devices by tags
            ├── devices:    List of devices to generate lab inventory for
            ├── progress:    Display progress events, default 'True'
            ├── netbox-instance:    Name of Netbox instance to pull inventory from
            ├── ipv4-subnet:    IPv4 management subnet to use for lab, default '172.100.100.0/24'
            ├── image:    Docker image to use for all nodes
            └── ports:    Range of TCP/UDP ports to use for nodes, default '[12000, 13000]'
nf#

Python API Reference

Retrieve and construct Containerlab inventory from NetBox data.

Containerlab node details must be defined in the device configuration context under the norfab.containerlab path, for example:

{
    "norfab": {
        "containerlab": {
            "kind": "ceos",
            "image": "ceos:latest",
            "mgmt-ipv4": "172.100.100.10/24",
            "ports": [
                {10000: 22},
                {10001: 830}
            ],

            ... any other node parameters ...

            "interfaces_rename": [
                {
                    "find": "eth",
                    "replace": "Eth",
                    "use_regex": false
                }
            ]
        }
    }
}

For a complete list of parameters, refer to the Containerlab nodes definition.

Special handling is given to these parameters:

  • lab_name - if not provided, the tenant argument value is used as the lab name
  • kind - uses the device platform field value by default
  • image - uses the image value if provided, otherwise defaults to {kind}:latest
  • interfaces_rename - a list of one or more interface renaming instructions; each item must have find and replace defined, and the optional use_regex flag specifies whether to use regex-based pattern substitution (see the sketch below)
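
These defaults and the renaming behaviour can be illustrated with a short standalone sketch. It is a minimal approximation of the same logic rather than the worker implementation, and the device record below is hypothetical.

import re

# Hypothetical device record as it might come from Netbox, with the
# Containerlab overrides stored under the norfab.containerlab context path
device = {
    "name": "fceos4",
    "platform": {"name": "ceos"},
    "config_context": {
        "norfab": {
            "containerlab": {
                "interfaces_rename": [
                    {"find": "Ethernet", "replace": "eth", "use_regex": False}
                ]
            }
        }
    },
}

node = device["config_context"]["norfab"]["containerlab"]

# kind: falls back to the device platform name when not set explicitly
node.setdefault("kind", device["platform"]["name"])

# image: falls back to '<kind>:latest' when neither the node nor the task provides one
node.setdefault("image", f"{node['kind']}:latest")

# interfaces_rename: apply each find/replace instruction, using regex
# substitution only when use_regex is true
def rename(interface: str) -> str:
    for rule in node.get("interfaces_rename", []):
        if rule.get("use_regex", False):
            interface = re.sub(rule["find"], rule["replace"], interface)
        else:
            interface = interface.replace(rule["find"], rule["replace"])
    return interface

print(node["kind"], node["image"])   # ceos ceos:latest
print(rename("Ethernet1"))           # eth1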

To retrieve topology data from Netbox, at least one of these arguments must be provided to identify the set of devices to include in the Containerlab topology:

  • tenant - the topology is constructed using all devices and links that belong to this tenant
  • devices - the topology is constructed using only the listed devices
  • filters - a list of device filters used to retrieve devices from Netbox and add them to the topology

If more than one of the above arguments is provided, the resulting lab topology is the union of all matched devices.
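
For the tenant case specifically, the worker source below folds the tenant name into every device filter, using a GraphQL-style exact match for NetBox 4.3 and newer and a plain name for older versions. The following is a minimal sketch of that step; the function name and example values are illustrative only.

def apply_tenant_filter(filters: list, tenant: str, nb_version: tuple) -> list:
    """Fold a tenant name into every device filter dictionary."""
    filters = filters or [{}]  # ensure there is at least one filter to extend
    for flt in filters:
        if nb_version >= (4, 3, 0):
            # NetBox 4.3+ filter syntax with an exact name match
            flt["tenant"] = f'{{name: {{exact: "{tenant}"}}}}'
        else:
            flt["tenant"] = tenant
    return filters

print(apply_tenant_filter([], "ACME", (4, 3, 1)))
# [{'tenant': '{name: {exact: "ACME"}}'}]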

Parameters:

  • job (Job, required) - NorFab Job object containing relevant metadata.
  • lab_name (str, mandatory) - Name of the containerlab lab to construct inventory for. Default: None
  • tenant (str, optional) - Construct topology using the given tenant's devices. Default: None
  • filters (list, optional) - List of filters to apply when retrieving devices from NetBox. Default: None
  • devices (list, optional) - List of specific devices to retrieve from NetBox. Default: None
  • instance (str, optional) - NetBox instance to use. Default: None
  • image (str, optional) - Default containerlab image to use. Default: None
  • ipv4_subnet (str, optional) - Management subnet used to number nodes, starting with the 2nd IP in the subnet, on the assumption that the 1st IP is the default gateway. Default: '172.100.100.0/24'
  • ports (tuple, optional) - Range of ports to use for nodes. Default: (12000, 15000)
  • ports_map (dict, optional) - Dictionary keyed by node name with a list of port mappings to use. Default: None
  • cache (Union[bool, str], optional) - Cache usage options. Default: False
      • True: Use data stored in cache if it is up to date, refresh it otherwise.
      • False: Do not use cache and do not update cache.
      • "refresh": Ignore data in cache and replace it with data fetched from Netbox.
      • "force": Use data in cache without checking if it is up to date.
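
The ipv4_subnet and ports defaults translate into a simple next-available allocation scheme, mirrored from the worker source below: management IPs are handed out starting from the second host address of the subnet (the first is assumed to be the gateway), and host ports are drawn sequentially from the range for a fixed list of services. A minimal standalone sketch:

import ipaddress

ipv4_subnet = "172.100.100.0/24"
ports = (12000, 15000)

# The first host address (.1) is assumed to be the default gateway, so
# node addressing starts from the second host address (.2)
available_ips = list(ipaddress.ip_network(ipv4_subnet).hosts())[1:]
available_ports = list(range(ports[0], ports[1]))

# One host port is drawn from the range for each of these services
services = ["22/tcp", "23/tcp", "80/tcp", "161/udp", "443/tcp", "830/tcp", "8080/tcp"]

for node in ["fceos4", "fceos5"]:
    mgmt_ipv4 = str(available_ips.pop(0))
    port_list = [f"{available_ports.pop(0)}:{svc}" for svc in services]
    print(node, mgmt_ipv4, port_list[:2], "...")
# fceos4 172.100.100.2 ['12000:22/tcp', '12001:23/tcp'] ...
# fceos5 172.100.100.3 ['12007:22/tcp', '12008:23/tcp'] ...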

Returns:

  • dict (Result) - Containerlab inventory dictionary containing lab topology data
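
In the Python API the inventory is wrapped in a NorFab Result object, with the dictionary available under its result attribute and failed, messages, and errors populated on error (for example, when no devices matched). Below is a hedged sketch of consuming such a result; the helper function is illustrative, not part of the task.

def summarize_inventory(ret) -> None:
    """Print a short summary of a get_containerlab_inventory Result object."""
    if ret.failed:
        # errors/messages are populated, e.g. when no devices matched in Netbox
        print("inventory build failed:", ret.errors, ret.messages)
        return
    inventory = ret.result  # the Containerlab inventory dictionary
    print("lab:", inventory["name"])
    for name, node in inventory["topology"]["nodes"].items():
        print(f"  {name}: kind={node['kind']} mgmt-ipv4={node['mgmt-ipv4']}")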

Source code in norfab\workers\netbox_worker.py
@Task(fastapi={"methods": ["GET"]})
def get_containerlab_inventory(
    self,
    job: Job,
    lab_name: str = None,
    tenant: Union[None, str] = None,
    filters: Union[None, list] = None,
    devices: Union[None, list] = None,
    instance: Union[None, str] = None,
    image: Union[None, str] = None,
    ipv4_subnet: str = "172.100.100.0/24",
    ports: tuple = (12000, 15000),
    ports_map: Union[None, dict] = None,
    cache: Union[bool, str] = False,
) -> Result:
    """
    Retrieve and construct Containerlab inventory from NetBox data.

    Containerlab node details must be defined under device configuration
    context `norfab.containerlab` path, for example:

    ```
    {
        "norfab": {
            "containerlab": {
                "kind": "ceos",
                "image": "ceos:latest",
                "mgmt-ipv4": "172.100.100.10/24",
                "ports": [
                    {10000: 22},
                    {10001: 830}
                ],

                ... any other node parameters ...

                "interfaces_rename": [
                    {
                        "find": "eth",
                        "replace": "Eth",
                        "use_regex": false
                    }
                ]
            }
        }
    }
    ```

    For complete list of parameters refer to
    [Containerlab nodes definition](https://containerlab.dev/manual/nodes/).

    Special handling given to these parameters:

    - `lab_name` - if not provided uses `tenant` argument value as a lab name
    - `kind` - uses device platform field value by default
    - `image` - uses `image` value if provided, otherwise uses `{kind}:latest`
    - `interfaces_rename` - a list of one or more interface renaming instructions,
        each item must have `find` and `replace` defined, optional `use_regex`
        flag specifies whether to use regex based pattern substitution.

    To retrieve topology data from Netbox at least one of these arguments must be provided
    to identify a set of devices to include into Containerlab topology:

    - `tenant` - topology constructed using all devices and links that belong to this tenant
    - `devices` - creates topology only using devices in the lists
    - `filters` - list of device filters to retrieve from Netbox and add to topology

    If multiple of above arguments provided, resulting lab topology is a sum of all
    devices matched.

    Args:
        job: NorFab Job object containing relevant metadata
        lab_name (str, Mandatory): Name of containerlab to construct inventory for.
        tenant (str, optional): Construct topology using given tenant's devices
        filters (list, optional): List of filters to apply when retrieving devices from NetBox.
        devices (list, optional): List of specific devices to retrieve from NetBox.
        instance (str, optional): NetBox instance to use.
        image (str, optional): Default containerlab image to use,
        ipv4_subnet (str, Optional): Management subnet to use to IP number nodes
            starting with 2nd IP in the subnet, in assumption that 1st IP is a default gateway.
        ports (tuple, Optional): Ports range to use for nodes.
        ports_map (dict, Optional): dictionary keyed by node name with list of ports maps to use,
        cache (Union[bool, str], optional): Cache usage options:

            - True: Use data stored in cache if it is up to date, refresh it otherwise.
            - False: Do not use cache and do not update cache.
            - "refresh": Ignore data in cache and replace it with data fetched from Netbox.
            - "force": Use data in cache without checking if it is up to date.

    Returns:
        dict: Containerlab inventory dictionary containing lab topology data
    """
    devices = devices or []
    filters = filters or []
    nodes, links = {}, []
    ports_map = ports_map or {}
    endpts_done = []  # to deduplicate links
    instance = instance or self.default_instance
    # handle lab name and tenant name with filters
    if lab_name is None and tenant:
        lab_name = tenant
    # add tenant filters
    if tenant:
        filters = filters or [{}]
        for filter in filters:
            if self.nb_version[instance] >= (4, 3, 0):
                filter["tenant"] = f'{{name: {{exact: "{tenant}"}}}}'
            else:
                filter["tenant"] = tenant

    # construct inventory
    inventory = {
        "name": lab_name,
        "topology": {"nodes": nodes, "links": links},
        "mgmt": {"ipv4-subnet": ipv4_subnet, "network": f"br-{lab_name}"},
    }
    ret = Result(
        task=f"{self.name}:get_containerlab_inventory",
        result=inventory,
        resources=[instance],
    )
    mgmt_net = ipaddress.ip_network(ipv4_subnet)
    available_ips = list(mgmt_net.hosts())[1:]

    # run checks
    if not available_ips:
        raise ValueError(f"Need IPs to allocate, but '{ipv4_subnet}' given")
    if ports:
        available_ports = list(range(ports[0], ports[1]))
    else:
        raise ValueError(f"Need ports to allocate, but '{ports}' given")

    # check Netbox status
    netbox_status = self.get_netbox_status(job=job, instance=instance)
    if netbox_status.result[instance]["status"] is False:
        ret.failed = True
        ret.messages = [f"Netbox status is no good: {netbox_status}"]
        return ret

    # retrieve devices data
    log.debug(
        f"Fetching devices from {instance} Netbox instance, devices '{devices}', filters '{filters}'"
    )
    job.event("Fetching devices data from Netbox")
    nb_devices = self.get_devices(
        job=job,
        filters=filters,
        devices=devices,
        instance=instance,
        cache=cache,
    )

    # form Containerlab nodes inventory
    for device_name, device in nb_devices.result.items():
        node = device["config_context"].get("norfab", {}).get("containerlab", {})
        # populate node parameters
        if not node.get("kind"):
            if device["platform"]:
                node["kind"] = device["platform"]["name"]
            else:
                msg = (
                    f"{device_name} - has no 'kind' or 'platform' defined, skipping"
                )
                log.warning(msg)
                job.event(msg, severity="WARNING")
                continue
        if not node.get("image"):
            if image:
                node["image"] = image
            else:
                node["image"] = f"{node['kind']}:latest"
        if not node.get("mgmt-ipv4"):
            if available_ips:
                node["mgmt-ipv4"] = f"{available_ips.pop(0)}"
            else:
                raise RuntimeError("Run out of IP addresses to allocate")
        if not node.get("ports"):
            node["ports"] = []
            # use ports map
            if ports_map.get(device_name):
                node["ports"] = ports_map[device_name]
            # allocate next-available ports
            else:
                for port in [
                    "22/tcp",
                    "23/tcp",
                    "80/tcp",
                    "161/udp",
                    "443/tcp",
                    "830/tcp",
                    "8080/tcp",
                ]:
                    if available_ports:
                        node["ports"].append(f"{available_ports.pop(0)}:{port}")
                    else:
                        raise RuntimeError(
                            "Run out of TCP / UDP ports to allocate."
                        )

        # save node content
        nodes[device_name] = node
        job.event(f"Node added {device_name}")

    # return if no nodes found for provided parameters
    if not nodes:
        msg = f"{self.name} - no devices found in Netbox"
        log.error(msg)
        ret.failed = True
        ret.messages = [
            f"{self.name} - no devices found in Netbox, "
            f"devices - '{devices}', filters - '{filters}'"
        ]
        ret.errors = [msg]
        return ret

    job.event("Fetching connections data from Netbox")

    # query interface connections data from netbox
    nb_connections = self.get_connections(
        job=job, devices=list(nodes), instance=instance, cache=cache
    )
    # save connections data to links inventory
    while nb_connections.result:
        device, device_connections = nb_connections.result.popitem()
        for interface, connection in device_connections.items():
            # skip non ethernet links
            if connection.get("termination_type") != "interface":
                continue
            # skip orphaned links
            if not connection.get("remote_interface"):
                continue
            # skip connections to devices that are not part of lab
            if connection["remote_device"] not in nodes:
                continue
            endpoints = []
            link = {
                "type": "veth",
                "endpoints": endpoints,
            }
            # add A node
            endpoints.append(
                {
                    "node": device,
                    "interface": interface,
                }
            )
            # add B node
            endpoints.append({"node": connection["remote_device"]})
            if connection.get("breakout") is True:
                endpoints[-1]["interface"] = connection["remote_interface"][0]
            else:
                endpoints[-1]["interface"] = connection["remote_interface"]
            # save the link
            a_end = (
                endpoints[0]["node"],
                endpoints[0]["interface"],
            )
            b_end = (
                endpoints[1]["node"],
                endpoints[1]["interface"],
            )
            if a_end not in endpts_done and b_end not in endpts_done:
                endpts_done.append(a_end)
                endpts_done.append(b_end)
                links.append(link)
                job.event(
                    f"Link added {endpoints[0]['node']}:{endpoints[0]['interface']}"
                    f" - {endpoints[1]['node']}:{endpoints[1]['interface']}"
                )

    # query circuits connections data from netbox
    nb_circuits = self.get_circuits(
        job=job, devices=list(nodes), instance=instance, cache=cache
    )
    # save circuits data to hosts' inventory
    while nb_circuits.result:
        device, device_circuits = nb_circuits.result.popitem()
        for cid, circuit in device_circuits.items():
            # skip circuits not connected to devices
            if not circuit.get("remote_interface"):
                continue
            # skip circuits to devices that are not part of lab
            if circuit["remote_device"] not in nodes:
                continue
            endpoints = []
            link = {
                "type": "veth",
                "endpoints": endpoints,
            }
            # add A node
            endpoints.append(
                {
                    "node": device,
                    "interface": circuit["interface"],
                }
            )
            # add B node
            endpoints.append(
                {
                    "node": circuit["remote_device"],
                    "interface": circuit["remote_interface"],
                }
            )
            # save the link
            a_end = (
                endpoints[0]["node"],
                endpoints[0]["interface"],
            )
            b_end = (
                endpoints[1]["node"],
                endpoints[1]["interface"],
            )
            if a_end not in endpts_done and b_end not in endpts_done:
                endpts_done.append(a_end)
                endpts_done.append(b_end)
                links.append(link)
                job.event(
                    f"Link added {endpoints[0]['node']}:{endpoints[0]['interface']}"
                    f" - {endpoints[1]['node']}:{endpoints[1]['interface']}"
                )

    # rename links' interfaces
    for node_name, node_data in nodes.items():
        interfaces_rename = node_data.pop("interfaces_rename", [])
        if interfaces_rename:
            job.event(f"Renaming {node_name} interfaces")
        for item in interfaces_rename:
            if not item.get("find") or not item.get("replace"):
                log.error(
                    f"{self.name} - interface rename need to have"
                    f" 'find' and 'replace' defined, skipping: {item}"
                )
                continue
            pattern = item["find"]
            replace = item["replace"]
            use_regex = item.get("use_regex", False)
            # go over links one by one and rename interfaces
            for link in links:
                for endpoint in link["endpoints"]:
                    if endpoint["node"] != node_name:
                        continue
                    if use_regex:
                        renamed = re.sub(
                            pattern,
                            replace,
                            endpoint["interface"],
                        )
                    else:
                        renamed = endpoint["interface"].replace(pattern, replace)
                    if endpoint["interface"] != renamed:
                        msg = f"{node_name} interface {endpoint['interface']} renamed to {renamed}"
                        log.debug(msg)
                        job.event(msg)
                        endpoint["interface"] = renamed

    return ret