
Netbox Get Containerlab Inventory Task

Task API name: get_containerlab_inventory

This task is designed to provide Containerlab workers with inventory data sourced from Netbox for deploying lab topologies.

Get Containerlab Inventory Sample Usage

Below is an example of how to fetch Containerlab topology inventory data from Netbox for two devices named fceos4 and fceos5.

nf#netbox get containerlab-inventory devices fceos4 fceos5 lab-name foobar
--------------------------------------------- Job Events -----------------------------------------------
31-May-2025 13:10:14.477 7d434ed4e24c4a69af5d52797d7a187e job started
31-May-2025 13:10:14.525 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Fetching devices data from Netbox
31-May-2025 13:10:14.594 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Node added fceos4
31-May-2025 13:10:14.600 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Node added fceos5
31-May-2025 13:10:14.606 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Fetching connections data from Netbox
31-May-2025 13:10:15.211 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Link added fceos5:eth1 - fceos4:eth1
31-May-2025 13:10:15.217 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Link added fceos5:eth2 - fceos4:eth2
31-May-2025 13:10:15.225 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Link added fceos5:eth3 - fceos4:eth3
31-May-2025 13:10:15.232 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Link added fceos5:eth4 - fceos4:eth4
31-May-2025 13:10:15.238 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Link added fceos5:eth6 - fceos4:eth6
31-May-2025 13:10:15.244 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Link added fceos5:eth7 - fceos4:eth7
31-May-2025 13:10:15.250 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Link added fceos5:eth8 - fceos4:eth101
31-May-2025 13:10:15.257 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Link added fceos5:eth11 - fceos4:eth11
31-May-2025 13:10:15.580 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Renaming fceos4 interfaces
31-May-2025 13:10:15.587 INFO netbox-worker-1.1 running netbox.get_containerlab_inventory  - Renaming fceos5 interfaces
31-May-2025 13:10:15.808 7d434ed4e24c4a69af5d52797d7a187e job completed in 1.331 seconds

--------------------------------------------- Job Results --------------------------------------------

netbox-worker-1.1:
  mgmt:
    ipv4-subnet: 172.100.100.0/24
    network: br-foobar
  name: foobar
  topology:
    links:
    - endpoints:
      - interface: eth1
        node: fceos5
      - interface: eth1
        node: fceos4
      type: veth
    - endpoints:
      - interface: eth2
        node: fceos5
      - interface: eth2
        node: fceos4
      type: veth
    - endpoints:
      - interface: eth3
        node: fceos5
      - interface: eth3
        node: fceos4
      type: veth
    - endpoints:
      - interface: eth4
        node: fceos5
      - interface: eth4
        node: fceos4
      type: veth
    - endpoints:
      - interface: eth6
        node: fceos5
      - interface: eth6
        node: fceos4
      type: veth
    - endpoints:
      - interface: eth7
        node: fceos5
      - interface: eth7
        node: fceos4
      type: veth
    - endpoints:
      - interface: eth8
        node: fceos5
      - interface: eth101
        node: fceos4
      type: veth
    - endpoints:
      - interface: eth11
        node: fceos5
      - interface: eth11
        node: fceos4
      type: veth
    nodes:
      fceos4:
        image: ceosimage:4.30.0F
        kind: ceos
        mgmt-ipv4: 172.100.100.2
        ports:
        - 12000:22/tcp
        - 12001:23/tcp
        - 12002:80/tcp
        - 12003:161/udp
        - 12005:830/tcp
        - 12006:8080/tcp
      fceos5:
        image: ceosimage:4.30.0F
        kind: ceos
        mgmt-ipv4: 172.100.100.3
        ports:
        - 12007:22/tcp
        - 12008:23/tcp
        - 12009:80/tcp
        - 12010:161/udp
        - 12011:443/tcp
        - 12012:830/tcp
        - 12013:8080/tcp

nf#

NORFAB Netbox Get Containerlab Inventory Command Shell Reference

The NorFab shell supports these command options for the Netbox get_containerlab_inventory task:

nf#man tree netbox.get.containerlab-inventory
root
└── netbox:    Netbox service
    └── get:    Query data from Netbox
        └── containerlab-inventory:    Query Netbox and construct Containerlab inventory
            ├── timeout:    Job timeout
            ├── workers:    Filter worker to target, default 'any'
            ├── verbose-result:    Control output details, default 'False'
            ├── lab-name:    Lab name to generate lab inventory for
            ├── tenant:    Tenant name to generate lab inventory for
            │   ├── tenant:    Filter devices by tenants
            │   ├── device-name-contains:    Filter devices by name pattern
            │   ├── model:    Filter devices by models
            │   ├── platform:    Filter devices by platforms
            │   ├── region:    Filter devices by regions
            │   ├── role:    Filter devices by roles
            │   ├── site:    Filter devices by sites
            │   ├── status:    Filter devices by statuses
            │   └── tag:    Filter devices by tags
            ├── devices:    List of devices to generate lab inventory for
            ├── progress:    Display progress events, default 'True'
            ├── netbox-instance:    Name of Netbox instance to pull inventory from
            ├── ipv4-subnet:    IPv4 management subnet to use for lab, default '172.100.100.0/24'
            ├── image:    Docker image to use for all nodes
            └── ports:    Range of TCP/UDP ports to use for nodes, default '[12000, 13000]'
nf#

Python API Reference

Retrieve and construct Containerlab inventory from NetBox data.

Containerlab node details must be defined in the device configuration context under the norfab.containerlab path, for example:

{
    "norfab": {
        "containerlab": {
            "kind": "ceos",
            "image": "ceos:latest",
            "mgmt-ipv4": "172.100.100.10/24",
            "ports": [
                {10000: 22},
                {10001: 830}
            ],

            ... any other node parameters ...

            "interfaces_rename": [
                {
                    "find": "eth",
                    "replace": "Eth",
                    "use_regex": false
                }
            ]
        }
    }
}

For a complete list of parameters refer to the Containerlab nodes definition.

Special handling is given to these parameters:

  • lab_name - if not provided, the tenant argument value is used as the lab name
  • kind - defaults to the device platform field value
  • image - uses the image value if provided, otherwise defaults to {kind}:latest
  • interfaces_rename - a list of one or more interface renaming instructions; each item must have find and replace defined, and the optional use_regex flag specifies whether to use regex-based pattern substitution.
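The defaulting and renaming rules above can be sketched as follows; this is a minimal standalone illustration of the same logic the task applies (the helper names here are illustrative, not part of the API):

```python
import re

def apply_node_defaults(node, platform=None, default_image=None):
    # kind defaults to the device platform field value
    if not node.get("kind") and platform:
        node["kind"] = platform
    # image uses the provided default, otherwise falls back to {kind}:latest
    if not node.get("image"):
        node["image"] = default_image or f"{node['kind']}:latest"
    return node

def rename_interface(name, find, replace, use_regex=False):
    # One interfaces_rename instruction: regex substitution when
    # use_regex is true, plain substring replacement otherwise
    if use_regex:
        return re.sub(find, replace, name)
    return name.replace(find, replace)

print(apply_node_defaults({}, platform="ceos"))
# {'kind': 'ceos', 'image': 'ceos:latest'}
print(rename_interface("eth1", "eth", "Eth"))
# Eth1
print(rename_interface("eth1", r"eth(\d+)", r"Ethernet\1", use_regex=True))
# Ethernet1
```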

To retrieve topology data from Netbox, at least one of these arguments must be provided to identify the set of devices to include in the Containerlab topology:

  • tenant - topology is constructed using all devices and links that belong to this tenant
  • devices - topology is constructed using only the listed devices
  • filters - list of device filters used to retrieve devices from Netbox and add them to the topology

If multiple of the above arguments are provided, the resulting lab topology is the union of all matched devices.
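As the source code below shows, when tenant is given it is folded into every filter set before devices are queried; Netbox 4.3 and later take a GraphQL-style exact-match filter, older versions a plain name. A minimal sketch of that step:

```python
# Sketch of the tenant filter injection (mirrors the worker's logic;
# nb_version stands in for the detected Netbox version tuple)
tenant = "foobar"
filters = [{}]  # no user-supplied filters
nb_version = (4, 3, 2)

for flt in filters:
    if nb_version >= (4, 3, 0):
        # Netbox 4.3+ expects a GraphQL-style exact-match filter
        flt["tenant"] = f'{{name: {{exact: "{tenant}"}}}}'
    else:
        flt["tenant"] = tenant

print(filters)
# [{'tenant': '{name: {exact: "foobar"}}'}]
```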

Parameters:

  • job (Job, required) - NorFab Job object containing relevant metadata
  • lab_name (str, mandatory) - Name of the containerlab lab to construct inventory for. Default: None
  • tenant (str, optional) - Construct topology using the given tenant's devices. Default: None
  • filters (list, optional) - List of filters to apply when retrieving devices from NetBox. Default: None
  • devices (list, optional) - List of specific devices to retrieve from NetBox. Default: None
  • instance (str, optional) - NetBox instance to use. Default: None
  • image (str, optional) - Default containerlab image to use. Default: None
  • ipv4_subnet (str, optional) - Management subnet used to number nodes, starting with the 2nd IP in the subnet, on the assumption that the 1st IP is the default gateway. Default: '172.100.100.0/24'
  • ports (tuple, optional) - Range of ports to use for nodes. Default: (12000, 15000)
  • ports_map (dict, optional) - Dictionary keyed by node name with a list of port mappings to use. Default: None
  • cache (Union[bool, str], optional) - Cache usage options, default False:

      • True: Use data stored in cache if it is up to date, refresh it otherwise.
      • False: Do not use cache and do not update cache.
      • "refresh": Ignore data in cache and replace it with data fetched from Netbox.
      • "force": Use data in cache without checking if it is up to date.
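The ipv4_subnet and ports values drive a simple sequential allocation, visible in the source code below: the first usable host address is reserved for the default gateway, and each node then takes the next free address and the next free host ports. A minimal sketch using only the standard library:

```python
import ipaddress

ipv4_subnet = "172.100.100.0/24"
ports = (12000, 15000)

# Skip the first usable host, assumed to be the default gateway
available_ips = list(ipaddress.ip_network(ipv4_subnet).hosts())[1:]
available_ports = list(range(ports[0], ports[1]))

# The first node gets the next free address and port
node_ip = str(available_ips.pop(0))
ssh_port = f"{available_ports.pop(0)}:22/tcp"
print(node_ip, ssh_port)
# 172.100.100.2 12000:22/tcp
```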

Returns:

  • dict (Result) - Containerlab inventory dictionary containing lab topology data

Source code in norfab\workers\netbox_worker.py
@Task(fastapi={"methods": ["GET"], "schema": NetboxFastApiArgs.model_json_schema()})
def get_containerlab_inventory(
    self,
    job: Job,
    lab_name: str = None,
    tenant: Union[None, str] = None,
    filters: Union[None, list] = None,
    devices: Union[None, list] = None,
    instance: Union[None, str] = None,
    image: Union[None, str] = None,
    ipv4_subnet: str = "172.100.100.0/24",
    ports: tuple = (12000, 15000),
    ports_map: Union[None, dict] = None,
    cache: Union[bool, str] = False,
) -> Result:
    """
    Retrieve and construct Containerlab inventory from NetBox data.

    Containerlab node details must be defined under device configuration
    context `norfab.containerlab` path, for example:

    ```
    {
        "norfab": {
            "containerlab": {
                "kind": "ceos",
                "image": "ceos:latest",
                "mgmt-ipv4": "172.100.100.10/24",
                "ports": [
                    {10000: 22},
                    {10001: 830}
                ],

                ... any other node parameters ...

                "interfaces_rename": [
                    {
                        "find": "eth",
                        "replace": "Eth",
                        "use_regex": false
                    }
                ]
            }
        }
    }
    ```

    For complete list of parameters refer to
    [Containerlab nodes definition](https://containerlab.dev/manual/nodes/).

    Special handling given to these parameters:

    - `lab_name` - if not provided uses `tenant` argument value as a lab name
    - `kind` - uses device platform field value by default
    - `image` - uses `image` value if provided, otherwise uses `{kind}:latest`
    - `interfaces_rename` - a list of one or more interface renaming instructions,
        each item must have `find` and `replace` defined, optional `use_regex`
        flag specifies whether to use regex based pattern substitution.

    To retrieve topology data from Netbox at least one of these arguments must be provided
    to identify a set of devices to include into Containerlab topology:

    - `tenant` - topology constructed using all devices and links that belong to this tenant
    - `devices` - creates topology only using devices in the lists
    - `filters` - list of device filters to retrieve from Netbox and add to topology

    If multiple of above arguments provided, resulting lab topology is a sum of all
    devices matched.

    Args:
        job: NorFab Job object containing relevant metadata
        lab_name (str, Mandatory): Name of containerlab to construct inventory for.
        tenant (str, optional): Construct topology using given tenant's devices
        filters (list, optional): List of filters to apply when retrieving devices from NetBox.
        devices (list, optional): List of specific devices to retrieve from NetBox.
        instance (str, optional): NetBox instance to use.
        image (str, optional): Default containerlab image to use,
        ipv4_subnet (str, Optional): Management subnet to use to IP number nodes
            starting with 2nd IP in the subnet, in assumption that 1st IP is a default gateway.
        ports (tuple, Optional): Ports range to use for nodes.
        ports_map (dict, Optional): dictionary keyed by node name with list of ports maps to use,
        cache (Union[bool, str], optional): Cache usage options:

            - True: Use data stored in cache if it is up to date, refresh it otherwise.
            - False: Do not use cache and do not update cache.
            - "refresh": Ignore data in cache and replace it with data fetched from Netbox.
            - "force": Use data in cache without checking if it is up to date.

    Returns:
        dict: Containerlab inventory dictionary containing lab topology data
    """
    devices = devices or []
    filters = filters or []
    nodes, links = {}, []
    ports_map = ports_map or {}
    endpts_done = []  # to deduplicate links
    instance = instance or self.default_instance
    # handle lab name and tenant name with filters
    if lab_name is None and tenant:
        lab_name = tenant
    # add tenant filters
    if tenant:
        filters = filters or [{}]
        for filter in filters:
            if self.nb_version[instance] >= (4, 3, 0):
                filter["tenant"] = f'{{name: {{exact: "{tenant}"}}}}'
            else:
                filter["tenant"] = tenant

    # construct inventory
    inventory = {
        "name": lab_name,
        "topology": {"nodes": nodes, "links": links},
        "mgmt": {"ipv4-subnet": ipv4_subnet, "network": f"br-{lab_name}"},
    }
    ret = Result(
        task=f"{self.name}:get_containerlab_inventory",
        result=inventory,
        resources=[instance],
    )
    mgmt_net = ipaddress.ip_network(ipv4_subnet)
    available_ips = list(mgmt_net.hosts())[1:]

    # run checks
    if not available_ips:
        raise ValueError(f"Need IPs to allocate, but '{ipv4_subnet}' given")
    if ports:
        available_ports = list(range(ports[0], ports[1]))
    else:
        raise ValueError(f"Need ports to allocate, but '{ports}' given")

    # check Netbox status
    netbox_status = self.get_netbox_status(job=job, instance=instance)
    if netbox_status.result[instance]["status"] is False:
        ret.failed = True
        ret.messages = [f"Netbox status is no good: {netbox_status}"]
        return ret

    # retrieve devices data
    log.debug(
        f"Fetching devices from {instance} Netbox instance, devices '{devices}', filters '{filters}'"
    )
    job.event("Fetching devices data from Netbox")
    nb_devices = self.get_devices(
        job=job,
        filters=filters,
        devices=devices,
        instance=instance,
        cache=cache,
    )

    # form Containerlab nodes inventory
    for device_name, device in nb_devices.result.items():
        node = device["config_context"].get("norfab", {}).get("containerlab", {})
        # populate node parameters
        if not node.get("kind"):
            if device["platform"]:
                node["kind"] = device["platform"]["name"]
            else:
                msg = (
                    f"{device_name} - has no 'kind' or 'platform' defined, skipping"
                )
                log.warning(msg)
                job.event(msg, severity="WARNING")
                continue
        if not node.get("image"):
            if image:
                node["image"] = image
            else:
                node["image"] = f"{node['kind']}:latest"
        if not node.get("mgmt-ipv4"):
            if available_ips:
                node["mgmt-ipv4"] = f"{available_ips.pop(0)}"
            else:
                raise RuntimeError("Run out of IP addresses to allocate")
        if not node.get("ports"):
            node["ports"] = []
            # use ports map
            if ports_map.get(device_name):
                node["ports"] = ports_map[device_name]
            # allocate next-available ports
            else:
                for port in [
                    "22/tcp",
                    "23/tcp",
                    "80/tcp",
                    "161/udp",
                    "443/tcp",
                    "830/tcp",
                    "8080/tcp",
                ]:
                    if available_ports:
                        node["ports"].append(f"{available_ports.pop(0)}:{port}")
                    else:
                        raise RuntimeError(
                            "Run out of TCP / UDP ports to allocate."
                        )

        # save node content
        nodes[device_name] = node
        job.event(f"Node added {device_name}")

    # return if no nodes found for provided parameters
    if not nodes:
        msg = f"{self.name} - no devices found in Netbox"
        log.error(msg)
        ret.failed = True
        ret.messages = [
            f"{self.name} - no devices found in Netbox, "
            f"devices - '{devices}', filters - '{filters}'"
        ]
        ret.errors = [msg]
        return ret

    job.event("Fetching connections data from Netbox")

    # query interface connections data from netbox
    nb_connections = self.get_connections(
        job=job, devices=list(nodes), instance=instance, cache=cache
    )
    # save connections data to links inventory
    while nb_connections.result:
        device, device_connections = nb_connections.result.popitem()
        for interface, connection in device_connections.items():
            # skip non ethernet links
            if connection.get("termination_type") != "interface":
                continue
            # skip orphaned links
            if not connection.get("remote_interface"):
                continue
            # skip connections to devices that are not part of lab
            if connection["remote_device"] not in nodes:
                continue
            endpoints = []
            link = {
                "type": "veth",
                "endpoints": endpoints,
            }
            # add A node
            endpoints.append(
                {
                    "node": device,
                    "interface": interface,
                }
            )
            # add B node
            endpoints.append({"node": connection["remote_device"]})
            if connection.get("breakout") is True:
                endpoints[-1]["interface"] = connection["remote_interface"][0]
            else:
                endpoints[-1]["interface"] = connection["remote_interface"]
            # save the link
            a_end = (
                endpoints[0]["node"],
                endpoints[0]["interface"],
            )
            b_end = (
                endpoints[1]["node"],
                endpoints[1]["interface"],
            )
            if a_end not in endpts_done and b_end not in endpts_done:
                endpts_done.append(a_end)
                endpts_done.append(b_end)
                links.append(link)
                job.event(
                    f"Link added {endpoints[0]['node']}:{endpoints[0]['interface']}"
                    f" - {endpoints[1]['node']}:{endpoints[1]['interface']}"
                )

    # query circuits connections data from netbox
    nb_circuits = self.get_circuits(
        job=job, devices=list(nodes), instance=instance, cache=cache
    )
    # save circuits data to hosts' inventory
    while nb_circuits.result:
        device, device_circuits = nb_circuits.result.popitem()
        for cid, circuit in device_circuits.items():
            # skip circuits not connected to devices
            if not circuit.get("remote_interface"):
                continue
            # skip circuits to devices that are not part of lab
            if circuit["remote_device"] not in nodes:
                continue
            endpoints = []
            link = {
                "type": "veth",
                "endpoints": endpoints,
            }
            # add A node
            endpoints.append(
                {
                    "node": device,
                    "interface": circuit["interface"],
                }
            )
            # add B node
            endpoints.append(
                {
                    "node": circuit["remote_device"],
                    "interface": circuit["remote_interface"],
                }
            )
            # save the link
            a_end = (
                endpoints[0]["node"],
                endpoints[0]["interface"],
            )
            b_end = (
                endpoints[1]["node"],
                endpoints[1]["interface"],
            )
            if a_end not in endpts_done and b_end not in endpts_done:
                endpts_done.append(a_end)
                endpts_done.append(b_end)
                links.append(link)
                job.event(
                    f"Link added {endpoints[0]['node']}:{endpoints[0]['interface']}"
                    f" - {endpoints[1]['node']}:{endpoints[1]['interface']}"
                )

    # rename links' interfaces
    for node_name, node_data in nodes.items():
        interfaces_rename = node_data.pop("interfaces_rename", [])
        if interfaces_rename:
            job.event(f"Renaming {node_name} interfaces")
        for item in interfaces_rename:
            if not item.get("find") or not item.get("replace"):
                log.error(
                    f"{self.name} - interface rename need to have"
                    f" 'find' and 'replace' defined, skipping: {item}"
                )
                continue
            pattern = item["find"]
            replace = item["replace"]
            use_regex = item.get("use_regex", False)
            # go over links one by one and rename interfaces
            for link in links:
                for endpoint in link["endpoints"]:
                    if endpoint["node"] != node_name:
                        continue
                    if use_regex:
                        renamed = re.sub(
                            pattern,
                            replace,
                            endpoint["interface"],
                        )
                    else:
                        renamed = endpoint["interface"].replace(pattern, replace)
                    if endpoint["interface"] != renamed:
                        msg = f"{node_name} interface {endpoint['interface']} renamed to {renamed}"
                        log.debug(msg)
                        job.event(msg)
                        endpoint["interface"] = renamed

    return ret