AVD Quickstart Containerlab

WARNING: This repository is still under construction. It is fully functional, but has a number of limitations. For example:

  • README is still work-in-progress
  • Lab configuration and addresses are hardcoded and have to be redefined in many different files if your setup is different. That will be simplified before the final release.
  • Some workflow and code optimization is still required.

Overview

This repository helps you build your own AVD test lab based on Containerlab in minutes. The main goal is to provide an easy way to build an environment for learning and testing AVD automation. The lab can be used together with a CVP VM, but that is not mandatory.

WARNING: If a CVP VM is part of the lab, make sure that it is reachable and that the credentials configured on CVP match the lab.

Release Notes:

  • 0.1
    • initial release with many shortcuts
  • 0.2
    • Fix bugs.
    • Improve lab topology.
    • Improve lab workflow.
    • Add EVPN AA scenario.

Lab Prerequisites

The lab requires a single Linux host (Ubuntu Server recommended) with Docker and Containerlab installed. It is possible to run Containerlab on macOS, but that was not tested. A dedicated Linux machine is currently the preferred option.
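
If Docker and Containerlab are still missing on the host, they can be installed with the publicly documented installer scripts. The commands below are a hedged sketch based on the upstream Docker and Containerlab installers, not something shipped with this repository:

    # install Docker using the official convenience script
    curl -fsSL https://get.docker.com | sudo sh
    # install the latest Containerlab release using its official install script
    bash -c "$(curl -sL https://get.containerlab.dev)"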

To test AVD with CVP, KVM can be installed on the same host. To install KVM, check this guide or any other resource available on the internet. Once KVM is installed, you can use one of the following repositories to install CVP:

It is also possible to run CVP on a dedicated host with a different hypervisor, as long as it can be reached by the cLab devices.

NOTE: To use the CVP VM with Containerlab it is not required to recompile the Linux kernel. That is only required if you plan to use vEOS on KVM for your lab setup.
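
As a rough sketch, KVM can usually be installed on an Ubuntu host from the standard repositories. The package names below are an assumption for recent Ubuntu releases and are not taken from this repository:

    # install KVM/QEMU and libvirt on Ubuntu
    sudo apt-get update
    sudo apt-get install -y qemu-kvm libvirt-daemon-system libvirt-clients virtinst
    # a non-zero count confirms that the CPU exposes hardware virtualization
    egrep -c '(vmx|svm)' /proc/cpuinfo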

The lab setup diagram is included in the repository (lab diagram image).

How To Use The Lab

  1. Clone this repository to your lab host: git clone https://github.com/arista-netdevops-community/avd-quickstart-containerlab.git
  2. It is recommended to remove the git remote, as changes are not supposed to be pushed to the origin: git remote remove origin
  3. Change to the lab directory: cd avd-quickstart-containerlab
  4. Before running the lab it is recommended to create a dedicated git branch for your lab experiments to keep the original branch clean (a combined example is shown after the make help output below).
  5. Check makefile help for the list of commands available: make help
user@host:~/avd-quickstart-containerlab$ make help
avd_build_cvp                  build configs and configure switches via eAPI
avd_build_eapi                 build configs and configure switches via eAPI
build                          Build docker image
clab_deploy                    Deploy ceos lab
clab_destroy                   Destroy ceos lab
clab_graph                     Build lab graph
help                           Display help message
inventory_evpn_aa              onboard devices to CVP
inventory_evpn_mlag            onboard devices to CVP
onboard                        onboard devices to CVP
rm                             Remove all containerlab directories
run                            run docker image. This requires cLab "custom_mgmt" to be present
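
For steps 1-4 above, the initial repository setup can be combined as follows; the branch name is only an example:

    # clone the lab repository and enter it
    git clone https://github.com/arista-netdevops-community/avd-quickstart-containerlab.git
    cd avd-quickstart-containerlab
    # detach from the origin so experiments are never pushed upstream
    git remote remove origin
    # keep the original branch clean by working on a dedicated branch
    git checkout -b my-lab-experiments
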
  6. If you don't have a cEOS image on your host yet, download it from arista.com and import it. Make sure that the image name matches the parameters defined in CSVs_EVPN_AA/clab.yml or CSVs_EVPN_MLAG/clab.yml (see the sketch after this list).
  7. Use make build to build the avd-quickstart:latest container image. If that was done earlier and the image already exists, you can skip this step.
  8. Run make inventory_evpn_aa or make inventory_evpn_mlag to build the inventory for the EVPN A-A or MLAG scenario. Ideally the AVD inventory should live in a separate repository, but for simplicity the script generates the inventory in the current directory.
  9. Review the inventory generated by avd-quickstart. You can optionally git commit the changes.
  10. Run make clab_deploy to build the containerlab topology. Wait until the deployment finishes.
  11. Execute make run to run the avd-quickstart container.
  12. If a CVP VM is used in the lab, onboard the cLab switches with make onboard. Once the script behind this shortcut finishes, the devices will appear in the CVP inventory.
  13. To execute the Ansible AVD playbook, use the make avd_build_eapi or make avd_build_cvp shortcuts. That will execute playbook/fabric-deploy-eapi.yml or playbook/fabric-deploy-cvp.yml.
  14. Run make avd_validate to execute the AVD state validation playbook playbooks/validate-states.yml.
  15. Run make avd_snapshot if you want to collect a network snapshot with playbooks/snapshot.yml.
  16. Connect to hosts and switches and run some pings, show commands, etc. To connect to a lab device, you can simply type its hostname in the container:

(screenshot: connecting to a device from the container)

NOTE: Device hostnames are currently hardcoded inside the avd-quickstart container. If you have customized the inventory, ssh to the devices manually. That will be improved in the coming versions.
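
As referenced in step 6, a typical end-to-end run for the EVPN MLAG scenario could look like the sketch below. The cEOS archive name and image tag are examples only and must match the image name defined in CSVs_EVPN_MLAG/clab.yml:

    # import the cEOS image downloaded from arista.com (file name and tag are examples)
    docker import cEOS64-lab-4.28.3M.tar.xz ceos:4.28.3M
    # build the avd-quickstart container image and generate the MLAG inventory
    make build
    make inventory_evpn_mlag
    # deploy the containerlab topology and start the avd-quickstart container
    make clab_deploy
    make run
    # with a CVP VM in the lab, onboard the switches first: make onboard
    # build configs, push them via eAPI, then validate and snapshot
    make avd_build_eapi
    make avd_validate
    make avd_snapshot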

You can optionally git commit the changes and start playing with the lab. For example, use the CSVs to add some VLANs, then re-generate the inventory and check how the data in the AVD repository changes.

How To Destroy The Lab

  1. Exit the avd-quickstart container by typing exit.
  2. Execute make clab_destroy to destroy the containerlab topology.
  3. Execute make rm to delete the generated AVD inventory.
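
A minimal teardown sequence combining the steps above:

    # leave the avd-quickstart container shell
    exit
    # destroy the containerlab topology and remove the generated AVD inventory
    make clab_destroy
    make rm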