AVD Quickstart Containerlab

WARNING: This repository is still under construction. It is fully functional, but has a number of limitations. For example:

  • README is still work-in-progress
  • Lab configuration and addresses are hardcoded and have to be redefined in many different files if your setup is different. That will be simplified before the final release.
  • Some workflow and code optimization is still required.

Overview

This repository helps you build your own AVD test lab based on Containerlab in minutes. The main goal is to provide an easy way to build an environment for learning and testing AVD automation. The lab can be used together with a CVP VM, but that is not mandatory.

WARNING: If a CVP VM is part of the lab, make sure that it is reachable and that the credentials configured on CVP match the lab.

Release Notes:

  • 0.1
    • Initial release with many shortcuts.
  • 0.2
    • Fix bugs.
    • Improve lab topology.
    • Improve lab workflow.
    • Add EVPN AA scenario.

Lab Prerequisites

The lab requires a single Linux host (Ubuntu server recommended) with Docker and Containerlab installed. It is possible to run Containerlab on macOS, but that has not been tested. A dedicated Linux machine is currently the preferred option.
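For reference, on a fresh Ubuntu server the installation typically boils down to the commands below. Treat them as a sketch and check the official Docker and Containerlab documentation for the current instructions:

# Docker (convenience script; see docs.docker.com for other install methods)
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Containerlab (official install script)
bash -c "$(curl -sL https://get.containerlab.dev)"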

To test AVD with CVP, KVM can be installed on the same host. To install KVM, check this guide or any other resource available on the internet. Once KVM is installed, you can use one of the following repositories to install CVP:

It is definitely possible to run CVP on a dedicated host and a different hypervisor, as long as it can be reached by the cLab devices.

NOTE: To use a CVP VM with Containerlab it is not required to recompile the Linux kernel. That is only required if you plan to use vEOS on KVM for your lab setup.

The lab setup diagram:

lab diagram

How To Use The Lab

  1. Clone this repository to your lab host: git clone https://github.com/arista-netdevops-community/avd-quickstart-containerlab.git
  2. Change to the lab directory: cd avd-quickstart-containerlab
  3. It is recommended to remove the git remote, as changes are not supposed to be pushed to the origin: git remote remove origin
  4. Before running the lab it is recommended to create a dedicated git branch for your lab experiments to keep the original branch clean (see the example after the make help output below).
  5. Check the makefile help for the list of available commands: make help
[email protected]:~/avd-quickstart-containerlab$ make help
avd_build_cvp                  build configs and configure switches via eAPI
avd_build_eapi                 build configs and configure switches via eAPI
build                          Build docker image
clab_deploy                    Deploy ceos lab
clab_destroy                   Destroy ceos lab
clab_graph                     Build lab graph
help                           Display help message
inventory_evpn_aa              onboard devices to CVP
inventory_evpn_mlag            onboard devices to CVP
onboard                        onboard devices to CVP
rm                             Remove all containerlab directories
run                            run docker image. This requires cLab "custom_mgmt" to be present
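For example, the initial setup from steps 1-5 could look like this; my-lab below is just a placeholder branch name:

git clone https://github.com/arista-netdevops-community/avd-quickstart-containerlab.git
cd avd-quickstart-containerlab
git remote remove origin
git checkout -b my-lab   # placeholder branch name for your lab experiments
make help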
  6. If you don't have a cEOS image on your host yet, download it from arista.com and import it (see the docker import example below). Make sure that the image name matches the parameters defined in CSVs_EVPN_AA/clab.yml or CSVs_EVPN_MLAG/clab.yml.
  7. Use make build to build the avd-quickstart:latest container image. If that was done earlier and the image already exists, you can skip this step.
  8. Run make inventory_evpn_aa or make inventory_evpn_mlag to build the inventory for the EVPN AA or MLAG scenario. Ideally the AVD inventory would live in a separate repository, but for simplicity the script generates it in the current directory.
  9. Review the inventory generated by avd-quickstart. You can optionally git commit the changes.
  10. Run make clab_deploy to build the containerlab topology. Wait until the deployment finishes.
  11. Execute make run to run the avd-quickstart container.
  12. If a CVP VM is used in the lab, onboard the cLab switches with make onboard. Once the script behind this shortcut finishes, the devices will appear in the CVP inventory.
  13. To execute the Ansible AVD playbooks, use the make avd_build_eapi or make avd_build_cvp shortcuts. That will execute playbook/fabric-deploy-eapi.yml or playbook/fabric-deploy-cvp.yml.
  14. Run make avd_validate to execute the AVD state validation playbook playbooks/validate-states.yml.
  15. Run make avd_snapshot if you want to collect a network snapshot with playbooks/snapshot.yml.
  16. Connect to hosts and switches and run some pings, show commands, etc. To connect to a lab device, you can type its hostname in the container:

connect to a device from the container

NOTE: device hostnames are currently hardcoded inside the avd-quickstart container. If you have customized the inventory, ssh to the device manually. That will be improved in the coming versions.
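For the image import mentioned in step 6, a typical sequence is shown below; the file name and tag are only examples and must match the image name defined in CSVs_EVPN_AA/clab.yml or CSVs_EVPN_MLAG/clab.yml:

docker import cEOS64-lab-4.28.0F.tar ceos:4.28.0F   # example file name and tag
docker images | grep ceos                           # verify the image is present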

You can optionally git commit the changes and start playing with the lab. For example, use the CSVs to add some VLANs, then re-generate the inventory and check how the AVD repository data changes.
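For instance, after editing the CSV files for your scenario, a quick review loop could look like this (MLAG scenario shown; use make inventory_evpn_aa for the EVPN AA scenario):

make inventory_evpn_mlag   # re-generate the AVD inventory from the edited CSVs
git diff                   # review how the generated AVD repository data changed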

How To Destroy The Lab

  1. Exit the avd-quickstart container by typing exit
  2. Execute make clab_destroy to destroy the containerlab.
  3. Execute make rm to delete the generated AVD inventory.