File _service:obs_scm:vrnetlab-git1691862071.9187175.obscpio of Package vrnetlab
File vrnetlab-git1691862071.9187175/.github/workflows/test.yml

```yaml
name: "Build and Test virtual routers"

on:
  pull_request:
  push:
    branches:
      - main
      - master

env:
  PNS: ci-${GITHUB_RUN_ID}
  DOCKER_REGISTRY: ghcr.io/vrnetlab/vrnetlab

jobs:
  test-vr-xcon-and-vr-bgp:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v3
      - name: Build vr-xcon
        run: |
          make vr-xcon
      - name: Test vr-xcon
        run: |
          make vr-xcon-test
      - name: Save vr-xcon logs
        if: always()
        run: |
          make -C vr-xcon docker-test-save-logs
      - name: Clean test vr-xcon containers
        if: always()
        run: make -C vr-xcon docker-test-clean
      # vr-bgp depends on the vr-xcon container image, so let's build it right
      # now on the same runner ...
      - name: Build vr-bgp
        run: |
          make vr-bgp
      - name: Test vr-bgp
        run: |
          make vr-bgp-test
      - name: Save vr-bgp logs
        if: always()
        run: |
          make -C vr-bgp docker-test-save-logs
      - name: Clean test vr-bgp containers
        if: always()
        run: make -C vr-bgp docker-test-clean

  test-vr:
    runs-on: ["self-hosted"]
    container:
      image: vrnetlab/ci-builder
      volumes:
        - /data/gh-runner/vrnetlab-images:/vrnetlab-images
    strategy:
      fail-fast: false
      matrix:
        platform: ['csr', 'nxos', 'nxos9kv', 'routeros', 'sros', 'veos', 'vmx', 'vqfx', 'vrp', 'vsr1000', 'xrv', 'xrv9k']
    steps:
      - uses: actions/checkout@v3
      - name: Fixup dubious ownership
        run: git config --global --add safe.directory ${GITHUB_WORKSPACE}
      - name: Use git to check if the source files (platform or shared) have changed
        uses: dorny/paths-filter@v2
        id: source_changes
        with:
          filters: |
            platform:
              - '${{ matrix.platform }}/**/*'
              - 'common/*'
      - name: Compute platform image hashes in bind-mounted volume
        run: |
          shasum -a 512 /vrnetlab-images/${{ matrix.platform }}/* > hash-${{ matrix.platform }}.txt || touch hash-${{ matrix.platform }}.txt
          cat hash-${{ matrix.platform }}.txt
      - name: Cache image hashes
        id: cache-hash
        uses: actions/cache@v3
        with:
          path: cache-hash-${{ matrix.platform }}.txt
          key: hash-${{ matrix.platform }}
      - name: Compare computed image hashes with cached hashes
        id: image_changes
        run: |
          if diff hash-${{ matrix.platform }}.txt cache-hash-${{ matrix.platform }}.txt; then
            echo platform=false >> ${GITHUB_OUTPUT}
            echo "No image changes detected for ${{ matrix.platform }}"
          else
            echo platform=true >> ${GITHUB_OUTPUT}
            echo "Image changes detected for ${{ matrix.platform }}"
          fi
      - name: Build ${{ matrix.platform }}
        if: ${{ steps.source_changes.outputs.platform == 'true' || steps.image_changes.outputs.platform == 'true' }}
        run: |
          cp /vrnetlab-images/${{ matrix.platform }}/* ${{ matrix.platform }} || true
          ls -al ${{ matrix.platform }}
          make ${{ matrix.platform }}
      - name: Test ${{ matrix.platform }}
        if: ${{ steps.source_changes.outputs.platform == 'true' || steps.image_changes.outputs.platform == 'true' }}
        run: |
          make ${{ matrix.platform }}-test
      - name: Save ${{ matrix.platform }} logs
        if: ${{ always() && (steps.source_changes.outputs.platform == 'true' || steps.image_changes.outputs.platform == 'true') }}
        run: |
          make -C ${{ matrix.platform }} docker-test-save-logs
      - uses: actions/upload-artifact@v3
        if: ${{ always() && (steps.source_changes.outputs.platform == 'true' || steps.image_changes.outputs.platform == 'true') }}
        with:
          name: vr-logs
          path: |
            ${{ matrix.platform }}/*.log
      - name: Clean test ${{ matrix.platform }} containers
        if: ${{ always() && (steps.source_changes.outputs.platform == 'true' || steps.image_changes.outputs.platform == 'true') }}
        run: make -C ${{ matrix.platform }} docker-test-clean
      - name: Persist image hashes
        run: mv hash-${{ matrix.platform }}.txt cache-hash-${{ matrix.platform }}.txt
```

File vrnetlab-git1691862071.9187175/.gitlab-ci.yml

```yaml
image: vrnetlab/ci-builder

variables:
  PNS: ci-${CI_PIPELINE_ID}

stages:
  - build

.build: &build-template
  stage: build
  tags:
    - vrnetlab
  script:
    # make sure we pulled LFS files
    - git lfs fetch -I ${CI_JOB_NAME}
    - git lfs checkout ${CI_JOB_NAME}
    - ls -l ${CI_JOB_NAME}
    # We allow the user to control which Docker registry is used through the
    # env var DOCKER_REGISTRY. If it is not set then we assume we should use
    # the GitLab built-in Docker registry so we check if it is enabled.
    # CI_REGISTRY is only set when the GitLab Docker registry is enabled
    - if [ -z "${DOCKER_REGISTRY}" ]; then if [ -n "${CI_REGISTRY}" ]; then export DOCKER_USER=gitlab-ci-token; export DOCKER_PASSWORD=${CI_JOB_TOKEN}; export DOCKER_REGISTRY=${CI_REGISTRY_IMAGE}; fi; fi
    - 'echo "DOCKER_REGISTRY: ${DOCKER_REGISTRY}"'
    # if DOCKER_REGISTRY set, either explicitly by user or implicitly by GitLab
    # (see above) we login to repo, build images and push them
    - if [ -n "${DOCKER_REGISTRY}" ]; then docker login -u ${DOCKER_USER} -p=${DOCKER_PASSWORD} ${DOCKER_REGISTRY}; fi
    - if [ -n "${DOCKER_REGISTRY}" ]; then
    - make ${CI_JOB_NAME}
    - make ${CI_JOB_NAME}-test
    - echo "Pushing images"
    - make ${CI_JOB_NAME}-push
    - fi
  interruptible: true
  after_script:
    # save logs for artifacts
    - make -C ${CI_JOB_NAME} docker-test-save-logs
    # clean up leftover (failed) test containers
    - make -C ${CI_JOB_NAME} docker-test-clean
  artifacts:
    when: always
    paths:
      - ${CI_JOB_NAME}/*.log

vr-xcon:
  <<: *build-template
vr-bgp:
  <<: *build-template
csr:
  <<: *build-template
nxos:
  <<: *build-template
nxos9kv:
  <<: *build-template
routeros:
  <<: *build-template
sros:
  <<: *build-template
veos:
  <<: *build-template
vmx:
  <<: *build-template
vsr1000:
  <<: *build-template
vqfx:
  <<: *build-template
vrp:
  <<: *build-template
xrv:
  <<: *build-template
xrv9k:
  <<: *build-template
```

File vrnetlab-git1691862071.9187175/CODE_OF_CONDUCT.md

# Code of conduct

Please be civilised and nice to each other. We're all humans. Act like one.

File vrnetlab-git1691862071.9187175/CONTRIBUTING.md

# Contributing to vrnetlab

Thank you for (considering) contributing to vrnetlab!

## Bugs?

Please report bugs as issues here on GitLab! Feel free to use issues for asking questions about how to use vrnetlab, too.

## New feature?

Do you want to build something new? Please open an issue describing your idea so that we can align on how it is best implemented. Discussing before you start coding significantly increases the chance that your code will be merged smoothly and without lots of refactoring. If you don't want to code but have an idea for a new feature, please open an issue.

## Code submission

Submit your code as a Pull Request (PR). Make sure it applies cleanly to the master branch!

File vrnetlab-git1691862071.9187175/LICENSE

The MIT License (MIT)

Copyright (c) 2016 Kristian Larsson <kristian@spritelink.net>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
File vrnetlab-git1691862071.9187175/Makefile

```make
IMAGES_DIR=
VRS = vr-xcon vr-bgp csr nxos nxos9kv routeros sros veos vmx vsr1000 vqfx vrp xrv xrv9k
VRS_PUSH = $(VRS:=-push)
VRS_TEST = $(VRS:=-test)

.PHONY: all $(VRS) $(VRS_PUSH) $(VRS_TEST)

all: $(VRS)

$(VRS):
ifneq ($(IMAGES_DIR),)
	cp -av $(IMAGES_DIR)/$@/* $@/
endif
	cd $@; $(MAKE)

docker-push: $(VRS_PUSH)

$(VRS_PUSH):
	cd $(@:-push=); $(MAKE) docker-push

$(VRS_TEST):
	cd $(@:-test=); $(MAKE) docker-test
```

File vrnetlab-git1691862071.9187175/README.md

vrnetlab - VR Network Lab
-------------------------
Run your favourite virtual routers in docker for convenient labbing, development and testing.

vrnetlab is being developed for the TeraStream project at Deutsche Telekom as part of an automated CI environment for testing our network provisioning system.

It supports:

* Arista vEOS
* Cisco CSR1000v
* Cisco Nexus NX-OS (using Titanium emulator)
* Cisco XRv
* Cisco XRv 9000
* Juniper vMX
* Juniper vQFX
* Nokia VSR

I talk a little about it during my presentation about TeraStream testing at the NetNod autumn meeting 2016 - https://youtu.be/R_vCdGkGeSk?t=9m25s

Brian Linkletter has written a good introduction too: https://www.brianlinkletter.com/vrnetlab-emulate-networks-using-kvm-and-docker/

Usage
-----
You have to build the virtual router docker images yourself since the license agreements of commercial virtual routers do not allow me to distribute the images. See the README files of the respective virtual router types for more details.

You need KVM enabled in your kernel for hardware assisted virtualization. While it may be possible to run without it, it has not been tested. Make sure you load the kvm kernel module: `modprobe kvm`.

Let's assume you've built the `xrv` router.
Start two virtual routers:

```
docker run -d --name vr1 --privileged vr-xrv:5.3.3.51U
docker run -d --name vr2 --privileged vr-xrv:5.3.3.51U
```

I'm calling them vr1 and vr2. Note that I'm using XRv 5.3.3.51U - you should fill in your XRv version in the image tag, as the "latest" tag is not added to any images.

It takes a few minutes for XRv to start but once up you should be able to SSH into each virtual router. You can get the IP address using docker inspect:

```
root@host# docker inspect --format '{{.NetworkSettings.IPAddress}}' vr1
172.17.0.98
```

Now SSH to that address and login with the default credentials of vrnetlab/VR-netlab9:

```
root@host# ssh -l vrnetlab $(docker inspect --format '{{.NetworkSettings.IPAddress}}' vr1)
The authenticity of host '172.17.0.98 (172.17.0.98)' can't be established.
RSA key fingerprint is e0:61:28:ba:12:77:59:5e:96:cc:58:e2:36:55:00:fa.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.17.0.98' (RSA) to the list of known hosts.

IMPORTANT: READ CAREFULLY
Welcome to the Demo Version of Cisco IOS XRv (the "Software").
The Software is subject to and governed by the terms and conditions
of the End User License Agreement and the Supplemental End User
License Agreement accompanying the product, made available at the
time of your order, or posted on the Cisco website at
www.cisco.com/go/terms (collectively, the "Agreement").
As set forth more fully in the Agreement, use of the Software is
strictly limited to internal use in a non-production environment
solely for demonstration and evaluation purposes. Downloading,
installing, or using the Software constitutes acceptance of the
Agreement, and you are binding yourself and the business entity
that you represent to the Agreement. If you do not agree to all
of the terms of the Agreement, then Cisco is unwilling to license
the Software to you and (a) you may not download, install or use the
Software, and (b) you may return the Software as more fully set forth
in the Agreement.

Please login with any configured user/password, or cisco/cisco

vrnetlab@172.17.0.98's password:

RP/0/0/CPU0:ios#show version
Mon Jul 18 09:04:45.261 UTC

Cisco IOS XR Software, Version 5.3.3.51U[Default]
...
```

You can also login via NETCONF:

```
root@host# ssh -l vrnetlab $(docker inspect --format '{{.NetworkSettings.IPAddress}}' vr1) -p 830 -s netconf
vrnetlab@172.17.0.98's password:
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
 <capabilities>
  <capability>urn:ietf:params:netconf:base:1.1</capability>
  <capability>urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring</capability>
  <capability>urn:ietf:params:netconf:capability:candidate:1.0</capability>
  <capability>urn:ietf:params:netconf:capability:rollback-on-error:1.0</capability>
  <capability>urn:ietf:params:netconf:capability:validate:1.1</capability>
  <capability>urn:ietf:params:netconf:capability:confirmed-commit:1.1</capability>
  <capability>http://cisco.com/ns/yang/Cisco-IOS-XR-aaa-lib-cfg?module=Cisco-IOS-XR-aaa-lib-cfg&revision=2015-08-27</capability>
  <capability>http://cisco.com/ns/yang/Cisco-IOS-XR-aaa-locald-admin-cfg?module=Cisco-IOS-XR-aaa-locald-admin-cfg&revision=2015-08-27</capability>
  <capability>http://cisco.com/ns/yang/Cisco-IOS-XR-aaa-locald-cfg?module=Cisco-IOS-XR-aaa-locald-cfg&revision=2015-08-27</capability>
  <capability>http://cisco.com/ns/yang/Cisco-IOS-XR-aaa-locald-oper?module=Cisco-IOS-XR-aaa-locald-oper&revision=2015-08-27</capability>
  <capability>http://cisco.com/ns/yang/Cisco-IOS-XR-bundlemgr-cfg?module=Cisco-IOS-XR-bundlemgr-cfg&revision=2015-08-27</capability>
...
```

The serial console of the devices is mapped to port 5000.
Use telnet to connect:

```
root@host# telnet $(docker inspect --format '{{.NetworkSettings.IPAddress}}' vr1) 5000
```

Just like with any serial port, you can only have one connection at a time, and while the router is booting the launch script will connect to the serial port to do the initialization of the router. As soon as it is done the port will be released and made available to the next connection.

To connect two virtual routers with each other we can use the `vr-xcon` container. Let's say we want to connect Gi0/0/0/0 of vr1 and vr2 with each other, we would do:

```
docker run -d --name vr-xcon --link vr1 --link vr2 vr-xcon --p2p vr1/1--vr2/1
```

Configure a link network on vr1 and vr2 and you should be able to ping!

```
RP/0/0/CPU0:ios(config)#int GigabitEthernet 0/0/0/0
RP/0/0/CPU0:ios(config-if)#no shutdown
RP/0/0/CPU0:ios(config-if)#ipv4 address 192.168.1.2/24
RP/0/0/CPU0:ios(config-if)#commit
Mon Jul 18 09:13:24.196 UTC
RP/0/0/CPU0:Jul 18 09:13:24.216 : ifmgr[227]: %PKT_INFRA-LINK-3-UPDOWN : Interface GigabitEthernet0/0/0/0, changed state to Down
RP/0/0/CPU0:ios(config-if)#dRP/0/0/CPU0:Jul 18 09:13:24.256 : ifmgr[227]: %PKT_INFRA-LINK-3-UPDOWN : Interface GigabitEthernet0/0/0/0, changed state to Up
o ping 192.168.1.1
Mon Jul 18 09:13:26.896 UTC
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.1.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
```

(obviously I configured the other end too!)

All of the NICs of the virtual routers are exposed via TCP ports by KVM. TCP port 10001 maps to the first NIC of the virtual router, which in the case of an XR router is GigabitEthernet 0/0/0/0. By simply connecting two of these TCP sockets together we can bridge the traffic between those two NICs, and this is exactly what vr-xcon is for.

Use the `--p2p` argument to specify the links. The format is X/Y--Z/N where X is the name of the first router and Y is the port on that router. Z is the second router and N is the port on the second router. To set up more than one p2p link, simply add more mappings separated by space and don't forget to link the virtual routers:

```
docker run -d --name vr-xcon --link vr1 --link vr2 --link vr3 vr-xcon --p2p vr1/1--vr2/1 vr1/2--vr3/1
```

See topology-machine/README.md for details on the topology machine, which can help you manage more complex topologies.

The containers expose port 22 for SSH, port 161 for SNMP, port 830 for NETCONF, and port 5000 for the virtual serial device (use telnet). All the NICs of the virtual routers are exposed via TCP ports in the range 10001-10099.

Use `docker rm -f vr1` to stop and remove a virtual router.

Handy shell functions
---------------------
There are some handy shell functions in vrnetlab.sh that provide shorthands for connecting to ssh and console.

1. Load the functions into your shell:

   ```
   . vrnetlab.sh
   ```

2. Login via ssh to router vr1; you can optionally specify a username. If no username is provided, the default of vrnetlab will be used. If sshpass is installed, you will not be prompted for a password when you login with the default username.

   ```
   vrssh vr1 myuser
   ```

3. Connect the console to router vr1:

   ```
   vrcons vr1
   ```

4. Create a bridge between two router interfaces; the command below bridges interface 1 of router vr1 with interface 1 of router vr2:

   ```
   vrbridge vr1 1 vr2 1
   ```

To load these aliases on login, copy the file to ~/.vrnetlab_bashrc and add the following to your .bashrc:

```
test -f ~/.vrnetlab_bashrc && . ~/.vrnetlab_bashrc
```

Virtual routers
---------------
There are a number of virtual routers available on the market:

* Cisco XRv
* Juniper VRR
* Juniper vMX
* Nokia VSR

All of the above are released as a qcow2 or vmdk file (which can easily be converted into qcow2), making them easy to spin up on a Linux machine.
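The cross-connect mechanism described in the Usage section (two TCP-exposed NICs bridged by copying bytes between two sockets) can be sketched in a few lines of Python. This is an illustrative sketch, not vr-xcon's actual implementation; the `bridge` helper name is made up, and a local `socketpair()` stands in for the routers' TCP ports 10001+:

```python
# Minimal sketch of the vr-xcon idea: shuttle bytes between two connected
# sockets so that whatever one "NIC" sends, the other receives.
import socket
import threading

def bridge(sock_a: socket.socket, sock_b: socket.socket) -> None:
    """Copy bytes in both directions until either side closes."""
    def pump(src, dst):
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    # one direction in a helper thread, the other in this thread
    threading.Thread(target=pump, args=(sock_a, sock_b), daemon=True).start()
    pump(sock_b, sock_a)

# demo: two socketpairs stand in for the NIC sockets of vr1 and vr2
vr1_end, a = socket.socketpair()
vr2_end, b = socket.socketpair()
threading.Thread(target=bridge, args=(a, b), daemon=True).start()

vr1_end.sendall(b"ping")
print(vr2_end.recv(4096))  # b'ping' -- the frame sent by vr1 arrives at vr2
```

The real vr-xcon does the same relay against `vr1:10001` and `vr2:10001`, which is why a plain TCP client is all it takes to tap or inject traffic on a vrnetlab NIC.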
Once spun up there are a few tasks one normally wants to perform:

* set an IP address on a management interface
* start SSH / NETCONF daemon (and generate crypto keys)
* create an initial user so we can login

There might be more things on the list but this is the bare minimum that makes the router remotely reachable, so we can configure the rest from the normal provisioning system.

vrnetlab aims to make this process as simple and convenient as possible so that it may be used both by humans and automated systems to spin up virtual routers. In addition, there are scripts to help you generate topologies.

The virtual machines are packaged up in docker containers. Since we need to start KVM, the docker containers have to be run with `--privileged`, which effectively defeats the security features of docker. Our use of docker is essentially reduced to being a packaging format, but a rather good one at that. Also note that since we still rely on KVM, the same amount of resources, if not slightly more, will be consumed by vrnetlab. A container is no thinner than a VM if the container contains a VM!

The assignment of a management IP address is handed over to docker, so you can use whatever docker IPAM plugin you want. Overall the network setup of the virtual routers is kind of shoe-horned into the world of docker networking. I'm not sure this is a good idea but it seems to work for now and it was fun putting it together ;)

It's possible to remotely control a docker engine and tell it to start/stop containers. It's not entirely uncommon to run the CI system in a VM, and letting it remotely control another docker engine can give us some flexibility in where the CI runner is executed vs where the virtual routers are running. libvirt can also be remotely controlled so it could potentially be used to the same effect. However, unlike libvirt, docker also has a registry concept which greatly simplifies the distribution of the virtual routers. Everything is neatly packaged up into a container image and we can pull that image through a single command. With libvirt we would need to distribute the VM image and launch scripts as individual files.

The launch script differs from router to router. For example, it's possible to feed a Cisco XR router a bootup config via a virtual CD-ROM drive, so we can use that to enable SSH/NETCONF and create a user. Nokia VSR however does not support this, so we need to tell KVM to emulate a serial device and then have the launch script access that virtual serial port via telnet to do the initial config. The intention is to keep the arguments to each virtual router type as similar as possible so that a test orchestrator or similar needs minimal knowledge about the different router types.

System requirements
-------------------
You need to run these docker images on a machine that has a docker engine and that supports KVM, i.e. you need a Linux kernel. Docker is available for OS X and it works by spinning up a Linux VM on top of the xhyve hypervisor. While this means that we do have a docker engine and a Linux kernel, we are unable to use this for vrnetlab as xhyve does not offer nested virtualization and thus we cannot run KVM in the VM running in xhyve. VirtualBox does not offer nested virtualization either. Parallels and VMware supposedly do, but I don't have access to those and can't test with them.

See the README file of each virtual router type for CPU, RAM and disk requirements.

Low performance / virtual routers not starting properly
-------------------------------------------------------
If you are having problems with performance, like routers not starting or being very slow, there are a few knobs to tweak in order to improve the situation. The basic problem is an unfortunate combination of CPU throttling and process scheduling causing cache thrashing, which in turn leads to terrible performance.
No detailed measurements have been done to confirm this exact behaviour but the recommended remedy has been confirmed working in multiple cases.

vrnetlab runs virtual machines using qemu/KVM, which appear just as normal processes in Linux and are thus subject to the Linux process scheduler. If a process wants to do work it will be scheduled to run on a core. Now, if not all cores are used, APM will throttle down some of the cores such that the workload can run on the remaining, say 3 out of 12, cores. The Linux scheduler will try to schedule processes on the cores with the higher clock speed, but if you have more VMs than cores with high clock speed then it will start moving VMs around. L1/L2 caches are not shared by CPU cores, only L3. Moving a process from one core to another inevitably means that the cache is evicted. When processes are moved around continuously we get cache thrashing, and this appears to lower performance for the VMs significantly. For some virtual routers it is to the point where we hit various watchdog timeouts and the VMs will restart.

The very first step is to make sure you aren't trying to run too many virtual routers on the same physical host. Some virtual routers, like Nokia SROS, have a rather low idle CPU usage, typically a few percent. Others, like Cisco XRv9k and Juniper vMX, have a forwarding plane that is busy-looping over multiple CPU cores, thus consuming those cores entirely. Trying to schedule multiple such virtual machines over the same CPU cores can lead to failure.

To improve performance, we can start by changing the CPU governor in Linux to `performance`, for example using `cpupower frequency-set -g performance`. It likely won't help much but try it first since it's considerably easier than the following steps.

Disable Advanced Power Management (APM) or similar in BIOS. This will completely prevent the CPU cores from throttling down and they will run at their designed maximum clock frequency. This probably means turbo boost (increasing clock frequency on a smaller subset of cores while decreasing the frequency on the remaining cores to stay within the same power and temperature envelope) will be disabled too. Performance across all cores will however be much more deterministic. This alone usually means that the Linux process scheduler will now keep processes on the same cores instead of moving them around. Before, only some of the cores would run at a higher frequency and so would be more attractive to schedule work on. With all cores at the same frequency, there is no reason for the process scheduler to move processes around. This removes the main cause of cache thrashing. At least that's the simplified view of it, but it appears to be working rather well in reality.

If performance is still not adequate, the next step would be to disable hyperthreading. Hyperthreading is a technology that exposes two logical cores executed by the same physical core. It's a strategy to avoid pipeline stalls, essentially where the CPU waits for memory. By having two logical threads, the CPU core can switch to the other thread whenever it needs to wait for memory lookups. It increases total concurrent throughput; however, each logical thread will run slower than if it had run directly on a physical CPU core. You can avoid the effects of hyperthreading by only scheduling your qemu processes on half of the cores. You would need to inspect /proc/cpuinfo to determine the exact logical core layout and make sure you only schedule processes on one logical thread of each physical core. However, since you would then only use half of the threads, it is easier to just disable hyperthreading in BIOS altogether.

Applying the mentioned mitigations has so far resolved performance issues in all cases. Report if it doesn't for you.

Docker healthcheck
------------------
vrnetlab containers use the Docker healthcheck mechanism to report whether they've started up properly or not.
FUAQ - Frequently or Unfrequently Asked Questions
-------------------------------------------------

##### Q: Why don't you ship pre-built docker images?
A: I don't think Cisco, Juniper or Nokia would allow me to distribute their virtual router images, and since one of the main points of vrnetlab is to have a self-contained docker image I don't see any other way than for you to build your own image based on vrnetlab, where you download the router image yourself.

##### Q: Why don't you ship docker images where I can provide the image through a volume?
A: I don't like the concept as it means you have to ship around an extra file. If it's a self-contained image then all you have to do is push it to your docker registry and then ask a box in your swarm cluster to spin it up!

##### Q: Using docker typically means no persistent storage. How is configuration persisted across restarts?
A: It is not persisted. The state of the virtual routers is lost once they are stopped/removed. Restarting vrnetlab is not possible, or at least it's not at all tested and I don't see how it would really work. Since the primary use case is lab / CI, you should embrace the statelessness :)

##### Q: Will this consume less resources than the normal way of running XRv, vMX etc?
A: No. vrnetlab still runs KVM (in docker) to start the virtual router, which means that we will consume just as much CPU and memory, if not slightly more, than running the router in KVM.

##### Q: If it doesn't consume less resources than KVM, why use Docker?
A: It's used primarily as a packaging format. All vrnetlab containers can be run with similar arguments. The differences between platforms are effectively hidden to present a clean uniform interface. That's certainly not true for trying to run XRv or vMX directly with qemu / virsh.

##### Q: Do you plan to support classic IOS?
A: IOS XE is available through the CSR1000v image, which should satisfy all your oldskool needs.

##### Q: How do I connect a vrnetlab router with a normal docker container?
A: I'm not entirely sure. For now you have to live with only communicating between vrnetlab routers. There's https://github.com/TOGoS/TUN2UDP and I suppose the same idea could be used to bridge the TCP-socket NICs used by vrnetlab to a tun device, but whether all this should happen inside a docker container or whether we should rely on setting this up on the docker host (using something similar to pipework) is not entirely clear to me. I'll probably work on it.

##### Q: How does this relate to GNS3, UNetLab and VIRL?
A: It was a long time since I used GNS3, and I have only briefly looked at UNetLab and VIRL, but from what I know or can see, these are all more targeted towards interactive labbing. You get a pretty UI and similar, whereas vrnetlab is controlled in a completely programmatic fashion, which makes them good at different things. vrnetlab is superb for CI and programmatic testing, where the others probably target labs run by humans.

Building with GitLab CI
-----------------------
vrnetlab ships with a .gitlab-ci.yml config file, so if you happen to be using GitLab CI you can use this file to let your CI infrastructure build the docker images and push them to your registry. GitLab features a built-in Docker registry which will be used by default - all you need to do is enable the registry for your vrnetlab project. The necessary information will be exposed as env vars in GitLab CI, which are picked up by the build config.

The CI runner executing the jobs must have the tag 'vrnetlab'. Make sure this runner supports running VMs (has KVM) and allows the execution of sibling docker containers.
If you want, you can use an external docker registry by explicitly configuring the following environment variables:

* DOCKER_USER - the username to authenticate to the docker registry with
* DOCKER_PASSWORD - the password to authenticate to the docker registry with
* DOCKER_REGISTRY - the URL to the docker registry, like reg.example.com:5000

Next you need to add the actual virtual router images to the git repository. You can create a separate branch where you add the images, so as to avoid potential git merge issues. I recommend using LFS:

```
git checkout -b images
git lfs track "*.vmdk"
git add xrv/iosxrv-k9-demo-6.0.0.vmdk .gitattributes
git commit -a -m "Added Cisco XRv 6.0.0 image"
git push your-git-repo images
```

Now CI should build the images and push to wherever $DOCKER_REGISTRY points. If you don't want to use LFS then just skip that command.

When new changes are committed to the upstream repo/master you can just rebase your branch on top of that:

```
git checkout master
git pull origin master
git checkout images
git rebase master
git push --force your-git-repo images
```

Note that you have to force push since you've rewritten git history.

LFS is a way to store large files with git while keeping them out of git. It's great for the virtual router images as they never change (the version is in the filename), so we don't really need git's version tracking for them. LFS is considerably faster than plain git.
For very large files it is possible to run into LFS timeouts; try setting:

```
git config lfs.dialtimeout 60
```

File vrnetlab-git1691862071.9187175/ci-builder-image/Dockerfile

```dockerfile
FROM debian:bullseye

RUN apt-get update \
    && apt-get install -y \
        curl \
        docker.io \
        make \
    && curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash \
    && apt-get install -y git-lfs \
    && rm -rf /var/lib/apt/lists/*
```

File vrnetlab-git1691862071.9187175/common/healthcheck.py

```python
#!/usr/bin/env python3

import sys

try:
    health_file = open("/tmp/health", "r")
    health = health_file.read()
    health_file.close()
except FileNotFoundError:
    print("health status file not found")
    sys.exit(2)

exit_status, message = health.strip().split(" ", 1)

if message != "":
    print(message)

sys.exit(int(exit_status))
```

File vrnetlab-git1691862071.9187175/common/vrnetlab.py (truncated)

```python
#!/usr/bin/env python3

import datetime
import json
import logging
import math
import os
import random
import re
import subprocess
import telnetlib
import time
import sys

from pathlib import Path

MAX_RETRIES = 60


def gen_mac(last_octet=None):
    """Generate a random MAC address that is in recognizable (0C:00) OUI space
    and that has the given last octet.
    """
    return "0C:00:%02x:%02x:%02x:%02x" % (
        random.randint(0x00, 0xFF),
        random.randint(0x00, 0xFF),
        random.randint(0x00, 0xFF),
        last_octet,
    )


# sorting function to naturally sort interfaces by names
def natural_sort_key(s, _nsre=re.compile("([0-9]+)")):
    return [int(text) if text.isdigit() else text.lower() for text in _nsre.split(s)]


def run_command(cmd, cwd=None, background=False, shell=False):
    res = None
    try:
        if background:
            p = subprocess.Popen(cmd, cwd=cwd, shell=shell)
        else:
            p = subprocess.Popen(cmd, stdout=subprocess.PIPE, cwd=cwd, shell=shell)
            res = p.communicate()
    except:
        pass
    return res


# boot_delay delays the VM boot by a number of seconds
# set by the BOOT_DELAY env var
def boot_delay():
    delay = os.getenv("BOOT_DELAY")
    if delay:
        logging.getLogger().info(f"Delaying VM boot by {delay} seconds")
        time.sleep(int(delay))


class VM:
    def __str__(self):
        return self.__class__.__name__

    def _overlay_disk_image_format(self) -> str:
        res = run_command(["qemu-img", "info", "--output", "json", self.image])
        if res is not None:
            image_info = json.loads(res[0])
            if "format" in image_info:
                return image_info["format"]
        raise ValueError(f"Could not read image format for {self.image}")

    def __init__(self, username, password, disk_image=None, num=0, ram=4096):
        self.logger = logging.getLogger()

        # username / password to configure
        self.username = username
        self.password = password

        self.num = num

        disk_image = f"/opt/images/{disk_image}"
        self.image = disk_image

        self.running = False
        self.spins = 0
        self.p = None
        self.tn = None

        # various settings
        self.uuid = None
        self.fake_start_date = None
        self.nic_type = "e1000"
        self.num_nics = 0
        # number of nics that are actually *provisioned* (as in nics that will be added to container)
        self.num_provisioned_nics = int(os.environ.get("CLAB_INTFS", 0))
        # "highest" provisioned nic num -- used for making sure we can allocate nics without needing
        # to have them allocated sequential from eth1
        self.highest_provisioned_nic_num = 0

        self.nics_per_pci_bus = 26  # tested to work with XRv
        self.smbios = []
        self.start_nic_eth_idx = 1

        # wait_pattern is the pattern we wait on the serial connection when pushing config commands
        self.wait_pattern = "#"

        overlay_disk_image = re.sub(r"(\.[^.]+$)", r"-overlay\1", disk_image)
        # append role to overlay name to have different overlay images for control and data plane images
        if hasattr(self, "role"):
            tokens = overlay_disk_image.split(".")
            tokens[0] = tokens[0] + "-" + self.role + str(self.num)
            overlay_disk_image = ".".join(tokens)

        if not os.path.exists(overlay_disk_image):
            self.logger.debug("Creating overlay disk image")
            run_command(
                [
                    "qemu-img",
                    "create",
                    "-f",
                    "qcow2",
                    "-F",
                    self._overlay_disk_image_format(),
                    "-b",
                    disk_image,
                    overlay_disk_image,
                ]
            )

        self.qemu_args = ["qemu-system-x86_64", "-display", "none", "-machine", "pc"]
        self.qemu_args.extend(
            ["-monitor", "tcp:0.0.0.0:40%02d,server,nowait" % self.num]
        )
        self.qemu_args.extend(
            [
                "-m",
                str(ram),
                "-serial",
                "telnet:0.0.0.0:50%02d,server,nowait" % self.num,
                "-drive",
                "if=ide,file=%s" % overlay_disk_image,
            ]
        )
        # enable hardware assist if KVM is available
        if os.path.exists("/dev/kvm"):
            self.qemu_args.insert(1, "-enable-kvm")

    def start(self):
        self.logger.info("Starting %s" % self.__class__.__name__)
        self.start_time = datetime.datetime.now()

        cmd = list(self.qemu_args)

        # uuid
        if self.uuid:
            cmd.extend(["-uuid", self.uuid])

        # do we have a fake start date?
```
if self.fake_start_date: cmd.extend(["-rtc", "base=" + self.fake_start_date]) # smbios # adding quotes to smbios value so it can be processed by bash shell for smbios_line in self.smbios: quoted_smbios = '"' + smbios_line + '"' cmd.extend(["-smbios", quoted_smbios]) # setup PCI buses for i in range(1, math.ceil(self.num_nics / self.nics_per_pci_bus) + 1): cmd.extend(["-device", "pci-bridge,chassis_nr={},id=pci.{}".format(i, i)]) # generate mgmt NICs cmd.extend(self.gen_mgmt()) # generate normal NICs cmd.extend(self.gen_nics()) self.logger.debug("qemu cmd: {}".format(" ".join(cmd))) self.p = subprocess.Popen( " ".join(cmd), stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True, shell=True, executable="/bin/bash", ) try: outs, errs = self.p.communicate(timeout=2) self.logger.info("STDOUT: %s" % outs) self.logger.info("STDERR: %s" % errs) except: pass for i in range(1, MAX_RETRIES + 1): try: self.qm = telnetlib.Telnet("127.0.0.1", 4000 + self.num) break except: self.logger.info( "Unable to connect to qemu monitor (port {}), retrying in a second (attempt {})".format( 4000 + self.num, i ) ) time.sleep(1) if i == MAX_RETRIES: raise QemuBroken( "Unable to connect to qemu monitor on port {}".format( 4000 + self.num ) ) for i in range(1, MAX_RETRIES + 1): try: self.tn = telnetlib.Telnet("127.0.0.1", 5000 + self.num) break except: self.logger.info( "Unable to connect to qemu monitor (port {}), retrying in a second (attempt {})".format( 5000 + self.num, i ) ) time.sleep(1) if i == MAX_RETRIES: raise QemuBroken( "Unable to connect to qemu monitor on port {}".format( 5000 + self.num ) ) try: outs, errs = self.p.communicate(timeout=2) self.logger.info("STDOUT: %s" % outs) self.logger.info("STDERR: %s" % errs) except: pass def create_bridges(self): """Create a linux bridge for every attached eth interface Returns list of bridge names """ # based on https://github.com/plajjan/vrnetlab/pull/188 run_command(["mkdir", "-p", "/etc/qemu"]) # This is to whitlist all 
bridges run_command(["echo 'allow all' > /etc/qemu/bridge.conf"], shell=True) bridges = list() intfs = [x for x in os.listdir("/sys/class/net/") if "eth" in x if x != "eth0"] intfs.sort(key=natural_sort_key) self.logger.info("Creating bridges for interfaces: %s" % intfs) for idx, intf in enumerate(intfs): run_command( ["ip", "link", "add", "name", "br-%s" % idx, "type", "bridge"], background=True, ) run_command(["ip", "link", "set", "br-%s" % idx, "up"]) run_command(["ip", "link", "set", intf, "mtu", "65000"]) run_command(["ip", "link", "set", intf, "master", "br-%s" % idx]) run_command( ["echo 16384 > /sys/class/net/br-%s/bridge/group_fwd_mask" % idx], shell=True, ) bridges.append("br-%s" % idx) return bridges def create_ovs_bridges(self): """Create a OvS bridges for every attached eth interface Returns list of bridge names """ ifup_script = """#!/bin/sh switch="vr-ovs-$1" ip link set $1 up ip link set $1 mtu 65000 ovs-vsctl add-port ${switch} $1""" with open("/etc/vr-ovs-ifup", "w") as f: f.write(ifup_script) os.chmod("/etc/vr-ovs-ifup", 0o777) # start ovs services # system-id doesn't mean anything here run_command( [ "/usr/share/openvswitch/scripts/ovs-ctl", f"--system-id={random.randint(1000,50000)}", "start", ] ) time.sleep(3) bridges = list() intfs = [x for x in os.listdir("/sys/class/net/") if "eth" in x if x != "eth0"] intfs.sort(key=natural_sort_key) self.logger.info("Creating ovs bridges for interfaces: %s" % intfs) for idx, intf in enumerate(intfs): brname = f"vr-ovs-tap{idx+1}" # generate a mac for ovs bridge, since this mac we will need # to create a "drop flow" rule to filter grARP replies we can't have # ref: https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050951.html brmac = gen_mac(0) self.logger.debug(f"Creating bridge {brname} with {brmac} hw address") if self.conn_mode == "ovs": run_command( f"ovs-vsctl add-br {brname} -- set bridge {brname} other-config:hwaddr={brmac}", shell=True, ) if self.conn_mode == "ovs-user": 
run_command( f"ovs-vsctl add-br {brname}", shell=True, ) run_command( f"ovs-vsctl set bridge {brname} datapath_type=netdev", shell=True, ) run_command( f"ovs-vsctl set bridge {brname} other-config:hwaddr={brmac}", shell=True, ) run_command(["ip", "link", "set", "dev", brname, "mtu", "9000"]) run_command( [ "ovs-vsctl", "set", "bridge", brname, "other-config:forward-bpdu=true", ] ) run_command(["ovs-vsctl", "add-port", brname, intf]) run_command(["ip", "link", "set", "dev", brname, "up"]) run_command( [ "ovs-ofctl", "add-flow", brname, f"table=0,arp,dl_src={brmac} actions=drop", ] ) bridges.append(brname) return bridges def create_tc_tap_ifup(self): """Create tap ifup script that is used in tc datapath mode""" ifup_script = """#!/bin/bash TAP_IF=$1 # get interface index number up to 3 digits (everything after first three chars) # tap0 -> 0 # tap123 -> 123 INDEX=${TAP_IF:3:3} ip link set $TAP_IF up ip link set $TAP_IF mtu 65000 # create tc eth<->tap redirect rules tc qdisc add dev eth$INDEX ingress tc filter add dev eth$INDEX parent ffff: protocol all u32 match u8 0 0 action mirred egress redirect dev tap$INDEX tc qdisc add dev $TAP_IF ingress tc filter add dev $TAP_IF parent ffff: protocol all u32 match u8 0 0 action mirred egress redirect dev eth$INDEX """ with open("/etc/tc-tap-ifup", "w") as f: f.write(ifup_script) os.chmod("/etc/tc-tap-ifup", 0o777) def create_macvtaps(self): """ Create Macvtap interfaces for each non dataplane interface """ intfs = [x for x in os.listdir("/sys/class/net/") if "eth" in x if x != "eth0"] self.data_ifaces = intfs intfs.sort(key=natural_sort_key) for idx, intf in enumerate(intfs): self.logger.debug("Creating macvtap interfaces for link: %s" % intf) run_command( [ "ip", "link", "add", "link", intf, "name", "macvtap{}".format(idx + 1), "type", "macvtap", "mode", "passthru", ], ) run_command( [ "ip", "link", "set", "dev", "macvtap{}".format(idx + 1), "up", ], ) def gen_mgmt(self): """Generate qemu args for the mgmt interface(s)""" res 
= [] # mgmt interface is special - we use qemu user mode network res.append("-device") res.append(self.nic_type + f",netdev=p00,mac={gen_mac(0)}") res.append("-netdev") res.append( "user,id=p00,net=10.0.0.0/24," "tftp=/tftpboot," "hostfwd=tcp::2022-10.0.0.15:22," "hostfwd=udp::2161-10.0.0.15:161," "hostfwd=tcp::2830-10.0.0.15:830," "hostfwd=tcp::2080-10.0.0.15:80," "hostfwd=tcp::2443-10.0.0.15:443" ) return res def nic_provision_delay(self) -> None: self.logger.debug( f"number of provisioned data plane interfaces is {self.num_provisioned_nics}" ) if self.num_provisioned_nics == 0: # no nics provisioned and/or not running from containerlab so we can bail return self.logger.debug("waiting for provisioned interfaces to appear...") # start_eth means eth index for VM # particularly for multiple slot LC start_eth = self.start_nic_eth_idx end_eth = self.start_nic_eth_idx + self.num_nics inf_path = Path("/sys/class/net/") while True: provisioned_nics = list(inf_path.glob("eth*")) # if we see num provisioned +1 (for mgmt) we have all nics ready to roll! if len(provisioned_nics) >= self.num_provisioned_nics + 1: nics = [ int(re.search(pattern=r"\d+", string=nic.name).group()) for nic in provisioned_nics ] # Ensure the max eth is in range of allocated eth index of VM LC nics = [nic for nic in nics if nic in range(start_eth, end_eth)] if nics: self.highest_provisioned_nic_num = max(nics) self.logger.debug( f"highest allocated interface id determined to be: {self.highest_provisioned_nic_num}..." 
) self.logger.debug("interfaces provisioned, continuing...") return time.sleep(5) def gen_nics(self): """Generate qemu args for the normal traffic carrying interface(s)""" self.nic_provision_delay() res = [] bridges = [] if self.conn_mode == "tc": self.create_tc_tap_ifup() elif self.conn_mode in ["ovs", "ovs-user"]: bridges = self.create_ovs_bridges() if len(bridges) > self.num_nics: self.logger.error( "Number of dataplane interfaces '{}' exceeds the requested number of links '{}'".format( len(bridges), self.num_nics ) ) sys.exit(1) elif self.conn_mode == "macvtap": self.create_macvtaps() elif self.conn_mode == "bridge": bridges = self.create_bridges() if len(bridges) > self.num_nics: self.logger.error( "Number of dataplane interfaces '{}' exceeds the requested number of links '{}'".format( len(bridges), self.num_nics ) ) sys.exit(1) start_eth = self.start_nic_eth_idx end_eth = self.start_nic_eth_idx + self.num_nics pci_bus_ctr = 0 for i in range(start_eth, end_eth): # PCI bus counter is to ensure pci bus index starts from 1 # and continuing in sequence regardles the eth index pci_bus_ctr += 1 # calc which PCI bus we are on and the local add on that PCI bus x = pci_bus_ctr if "vEOS" in self.image: x = pci_bus_ctr + 1 pci_bus = math.floor(x / self.nics_per_pci_bus) + 1 addr = (x % self.nics_per_pci_bus) + 1 # if the matching container interface ethX doesn't exist, we don't create a nic if not os.path.exists(f"/sys/class/net/eth{i}"): if i >= self.highest_provisioned_nic_num: continue # current intf number is *under* the highest provisioned nic number, so we need # to allocate a "dummy" interface so that when the users data plane interface is # actually provisioned it is provisioned in the appropriate "slot" res.extend( [ "-device", "%(nic_type)s," "netdev=p%(i)02d," "bus=pci.%(pci_bus)s," "addr=0x%(addr)x" % { "nic_type": self.nic_type, "i": i, "pci_bus": pci_bus, "addr": addr, }, "-netdev", "socket,id=p%(i)02d,listen=:%(j)02d" % {"i": i, "j": i + 10000}, ] ) 
continue mac = "" if self.conn_mode == "macvtap": # get macvtap interface mac that will be used in qemu nic config if not os.path.exists("/sys/class/net/macvtap{}/address".format(i)): continue with open("/sys/class/net/macvtap%s/address" % i, "r") as f: mac = f.readline().strip("\n") else: mac = gen_mac(i) res.append("-device") res.append( "%(nic_type)s,netdev=p%(i)02d,mac=%(mac)s,bus=pci.%(pci_bus)s,addr=0x%(addr)x" % { "nic_type": self.nic_type, "i": i, "pci_bus": pci_bus, "addr": addr, "mac": mac, } ) if self.conn_mode == "tc": res.append("-netdev") res.append( f"tap,id=p{i:02d},ifname=tap{i},script=/etc/tc-tap-ifup,downscript=no" ) if self.conn_mode == "macvtap": # if required number of nics exceeds the number of attached interfaces # we skip excessive ones if not os.path.exists("/sys/class/net/macvtap{}/ifindex".format(i)): continue # init value of macvtap ifindex tapidx = 0 with open("/sys/class/net/macvtap%s/ifindex" % i, "r") as f: tapidx = f.readline().strip("\n") fd = 100 + i # fd start number for tap iface vhfd = 400 + i # vhost fd start number res.append("-netdev") res.append( "tap,id=p%(i)02d,fd=%(fd)s,vhost=on,vhostfd=%(vhfd)s %(fd)s<>/dev/tap%(tapidx)s %(vhfd)s<>/dev/vhost-net" % {"i": i, "fd": fd, "vhfd": vhfd, "tapidx": tapidx} ) elif self.conn_mode == "bridge": if i <= len(bridges): bridge = bridges[i - 1] # We're starting from 0 res.append("-netdev") res.append( "bridge,id=p%(i)02d,br=%(bridge)s" % {"i": i, "bridge": bridge} ) else: # We don't create more interfaces than we have bridges del res[-2:] # Removing recently added interface elif self.conn_mode in ["ovs", "ovs-user"]: if i <= len(bridges): res.append("-netdev") res.append( "tap,id=p%(i)02d,ifname=tap%(i)s,script=/etc/vr-ovs-ifup,downscript=no" % {"i": i} ) else: # We don't create more interfaces than we have bridges del res[-2:] # Removing recently added interface elif self.conn_mode == "vrxcon": res.append("-netdev") res.append( "socket,id=p%(i)02d,listen=:%(j)02d" % {"i": i, "j": i + 
10000} ) return res def stop(self): """Stop this VM""" self.running = False try: self.p.terminate() except ProcessLookupError: return try: self.p.communicate(timeout=10) except: try: # this construct is included as an example at # https://docs.python.org/3.6/library/subprocess.html but has # failed on me so wrapping in another try block. It was this # communicate() that failed with: # ValueError: Invalid file object: <_io.TextIOWrapper name=3 encoding='ANSI_X3.4-1968'> self.p.kill() self.p.communicate(timeout=10) except: # just assume it's dead or will die? self.p.wait(timeout=10) def restart(self): """Restart this VM""" self.stop() self.start() def wait_write(self, cmd, wait="__defaultpattern__", con=None): """Wait for something on the serial port and then send command Defaults to using self.tn as connection but this can be overridden by passing a telnetlib.Telnet object in the con argument. """ con_name = "custom con" if con is None: con = self.tn if con == self.tn: con_name = "serial console" if con == self.qm: con_name = "qemu monitor" if wait: # use class default wait pattern if none was explicitly specified if wait == "__defaultpattern__": wait = self.wait_pattern self.logger.trace(f"waiting for '{wait}' on {con_name}") res = con.read_until(wait.encode()) cleaned_buf = ( con.read_very_eager() ) # Clear any remaining characters in buffer self.logger.trace(f"read from {con_name}: '{res.decode()}'") # log the cleaned buffer if it's not empty if cleaned_buf: self.logger.trace(f"cleaned buffer: '{cleaned_buf.decode()}'") self.logger.debug(f"writing to {con_name}: '{cmd}'") con.write("{}\r".format(cmd).encode()) def work(self): self.check_qemu() if not self.running: try: self.bootstrap_spin() except EOFError: self.logger.error("Telnet session was disconnected, restarting") self.restart() def check_qemu(self): """Check health of qemu. This is mostly just seeing if there's error output on STDOUT from qemu which means we restart it. 
""" if self.p is None: self.logger.debug("VM not started; starting!") self.start() # check for output try: outs, errs = self.p.communicate(timeout=1) except subprocess.TimeoutExpired: return self.logger.info("STDOUT: %s" % outs) self.logger.info("STDERR: %s" % errs) if errs != "": self.logger.debug("KVM error, restarting") self.stop() self.start() class VR: def __init__(self, username, password): self.logger = logging.getLogger() try: os.mkdir("/tftpboot") except: pass def update_health(self, exit_status, message): health_file = open("/tmp/health", "w") health_file.write("%d %s" % (exit_status, message)) health_file.close() def start(self, add_fwd_rules=True): """Start the virtual router""" self.logger.debug("Starting vrnetlab %s" % self.__class__.__name__) self.logger.debug("VMs: %s" % self.vms) if add_fwd_rules: run_command( ["socat", "TCP-LISTEN:22,fork", "TCP:127.0.0.1:2022"], background=True ) run_command( ["socat", "UDP-LISTEN:161,fork", "UDP:127.0.0.1:2161"], background=True ) run_command( ["socat", "TCP-LISTEN:830,fork", "TCP:127.0.0.1:2830"], background=True ) run_command( ["socat", "TCP-LISTEN:80,fork", "TCP:127.0.0.1:2080"], background=True ) run_command( ["socat", "TCP-LISTEN:443,fork", "TCP:127.0.0.1:2443"], background=True ) started = False while True: all_running = True for vm in self.vms: vm.work() if vm.running != True: all_running = False if all_running: self.update_health(0, "running") started = True else: if started: self.update_health(1, "VM failed - restarting") else: self.update_health(1, "starting") class QemuBroken(Exception): """Our Qemu instance is somehow broken""" # getMem returns the RAM size (in Mb) for a given VM mode. # RAM can be specified in the variant dict, provided by a user via the custom type definition, # or set via env vars. # If set via env vars, the getMem will return this value as the most specific one. # Otherwise, the ram provided to this function will be converted to Mb and returned. 
def getMem(vmMode: str, ram: int) -> int: if vmMode == "integrated": # Integrated VM can use both MEMORY and CP_MEMORY env vars if "MEMORY" in os.environ: return 1024 * get_digits(os.getenv("MEMORY")) if "CP_MEMORY" in os.environ: return 1024 * get_digits(os.getenv("CP_MEMORY")) if vmMode == "cp": if "CP_MEMORY" in os.environ: return 1024 * get_digits(os.getenv("CP_MEMORY")) if vmMode == "lc": if "LC_MEMORY" in os.environ: return 1024 * get_digits(os.getenv("LC_MEMORY")) return 1024 * int(ram) # getCpu returns the number of cpu cores for a given VM mode. # Cpu can be specified in the variant dict, provided by a user via the custom type definition, # or set via env vars. # If set via env vars, the function will return this value as the most specific one. # Otherwise, the number provided to this function via cpu param returned. def getCpu(vsimMode: str, cpu: int) -> int: if vsimMode == "integrated": # Integrated VM can use both MEMORY and CP_MEMORY env vars if "CPU" in os.environ: return int(os.getenv("CPU")) if "CP_CPU" in os.environ: return int(os.getenv("CP_CPU")) if vsimMode == "cp": if "CP_CPU" in os.environ: return int(os.getenv("CP_CPU")) if vsimMode == "lc": if "LC_CPU" in os.environ: return int(os.getenv("LC_CPU")) return cpu # strip all non-numeric characters from a string def get_digits(input_str: str) -> int: non_string_chars = re.findall(r"\d", input_str) return int("".join(non_string_chars)) 0707010000000E000041ED00000000000000000000000364D7C43700000000000000000000000000000000000000000000003200000000vrnetlab-git1691862071.9187175/config-engine-lite0707010000000F000081A400000000000000000000000164D7C4370000021E000000000000000000000000000000000000003D00000000vrnetlab-git1691862071.9187175/config-engine-lite/DockerfileFROM ubuntu:jammy MAINTAINER Kristian Larsson <kristian@spritelink.net> ENV DEBIAN_FRONTEND=noninteractive RUN apt-get update -qy \ && apt-get upgrade -qy \ && apt-get install -y \ bridge-utils \ iproute2 \ libffi-dev \ libffi-dev \ 
libjpeg8-dev \ libssl-dev \ libxml2-dev \ libxslt1-dev \ libyaml-dev \ python3-dev \ python3-ipy \ python3-lxml \ python3-pip \ zlib1g-dev \ && rm -rf /var/lib/apt/lists/* \ && pip3 install napalm ADD configengine / ENTRYPOINT ["/configengine"] 07070100000010000081A400000000000000000000000164D7C43700000177000000000000000000000000000000000000003B00000000vrnetlab-git1691862071.9187175/config-engine-lite/Makefileifdef DOCKER_REGISTRY ifneq ($(DOCKER_REGISTRY), $(shell echo $(DOCKER_REGISTRY) | sed -ne '/^[A-Za-z0-9.]\+:[0-9]\+$$/p')) $(error Bad docker registry URL. Should follow format registry.example.com:1234) endif REGISTRY=$(DOCKER_REGISTRY)/ else REGISTRY= endif all: docker build -t $(REGISTRY)vr-configengine . docker-push: docker push $(REGISTRY)vr-configengine 07070100000011000081A400000000000000000000000164D7C43700000D86000000000000000000000000000000000000003C00000000vrnetlab-git1691862071.9187175/config-engine-lite/README.mdvrnetlab Config Engine lite
===========================

Config Engine lite is a small provisioning system shipped with vrnetlab, primarily written for three use cases:

* configure routers in a vrnetlab topology such that the functionality of vrnetlab itself can be tested, for example, we want to make sure that interfaces are correctly mapped
* accelerate labbing. If you want to do some specific iBGP testing you might not be all too interested in setting IP addresses on the 7 routers required for your test or configuring an entire IGP - use config engine to quickly provision the basics and do the rest by hand!
* serve as inspiration for how you can write a provisioning system

It's called 'lite' since it doesn't aspire to become a full-blown provisioning system. While it might grow and gain new functionality it will always be targeted at the requirements of the above, in particular the testing of vrnetlab itself.

Usage
-----

After building the docker image, you run it like this.
There are two modes of operation: topology mode and single-router mode.

### Topology mode

Use config-engine-lite and jinja2 templates to configure your topology-machine topology.

```
docker run -v $(pwd)/templates:/templates -v $(pwd)/topology:/topology --link router1 --link router2 vr-configengine --topo /topology/lltopo.json --xr /templates/xr.j2 --junos /templates/junos.j2 --run
```

* -v $(pwd)/templates:/templates - Mount a directory containing your templates inside the container
* -v $(pwd)/topology:/topology - Mount a directory containing your topology files inside the container
* --link router1 --link router2 - Link all routers specified in your topology, enabling config-engine-lite to configure them
* --topo /topology/lltopo.json - The low-level topology built by topology-machine; this references the /topology mountpoint
* --ios /templates/ios.j2 - Configuration template for IOS (CSR 1000v); this references the /templates mountpoint
* --xr /templates/xr.j2 - Configuration template for IOS-XR; this references the /templates mountpoint
* --junos /templates/junos.j2 - Configuration template for JunOS; this references the /templates mountpoint
* --run - Actually deploy the configuration. If this is not specified, the configuration changes will not be committed and a config diff will be printed.

### Single-router mode

Apply a configuration template to a single router, useful for bootstrapping a router for use with vr-bgp for instance.
```
docker run -v $(pwd)/templates:/templates --link router1 vr-configengine --type xrv --router router1 --config /templates/router1.j2 --attr key1=value1 --attr key2=value2
```

* -v $(pwd)/templates:/templates - Mount a directory containing your templates inside the container
* --link router1 - Link the router you want to configure
* --config /templates/router1.j2 - Your router configuration; references the /templates mountpoint
* --type xrv - Type of router to configure (valid values are vmx, xrv and csr)
* --attr "key=value" - A key/value pair available in the template, can be specified multiple times

### Common parameters

These parameters are available in both modes:

* --wait-for-boot - Block until we can connect to the router via SSH. If neither --diff nor --run is used this option will simply block until all your routers are started
* --diff - Print configuration diff and discard the configuration
* --run - Commit the configuration to the router

07070100000012000041ED00000000000000000000000364D7C43700000000000000000000000000000000000000000000004300000000vrnetlab-git1691862071.9187175/config-engine-lite/config_templates07070100000013000081A400000000000000000000000164D7C437000001B1000000000000000000000000000000000000005200000000vrnetlab-git1691862071.9187175/config-engine-lite/config_templates/ios-example.j2hostname {{ hostname }} interface Loopback0 ip address 10.255.0.{{id}} 255.255.255.255 ipv6 address 2001:db8::{{ id }}/128 ipv6 enable {%- for link in links %} interface {{ link.interface }} description {{ link.remote.router }}: {{ link.remote.interface }} ip address 10.1.{{ link.id }}.{{ link.octet }} 255.255.255.252 ipv6 address 2001:db8::1:{{ link.id }}:{{ link.octet }}/126 ipv6 enable no shut {%- endfor %} lldp run 07070100000014000081A400000000000000000000000164D7C437000002B8000000000000000000000000000000000000005400000000vrnetlab-git1691862071.9187175/config-engine-lite/config_templates/junos-example.j2system { host-name {{ hostname }}; } interfaces { lo0 { unit
0 { family inet { address 10.255.0.{{id}}/32; } family inet6 { address 2001:db8::{{id}}/128; } } } {%- for link in links %} {{link.interface}} { description "{{link.remote.router}}: {{link.remote.interface}}"; unit 0 { family inet { address 10.1.{{link.id}}.{{link.octet}}/30; } family inet6 { address 2001:db8::1:{{link.id}}:{{link.octet}}/126; } } } {%- endfor %} } protocols { lldp { interface all; } } 07070100000015000081A400000000000000000000000164D7C43700000166000000000000000000000000000000000000005100000000vrnetlab-git1691862071.9187175/config-engine-lite/config_templates/xr-example.j2hostname {{ hostname }} interface Loopback0 ipv4 address 10.255.0.{{id}}/32 ipv6 address 2001:db8::{{id}}/112 {%- for link in links %} interface {{ link.interface }} description {{link.remote.router}}: {{link.remote.interface}} ipv4 address 10.1.{{link.id}}.{{link.octet}}/30 ipv6 address 2001:db8::1:{{link.id}}:{{link.octet}}/126 {%- endfor %} lldp 07070100000016000081ED00000000000000000000000164D7C4370000260C000000000000000000000000000000000000003F00000000vrnetlab-git1691862071.9187175/config-engine-lite/configengine#!/usr/bin/env python3 import json import sys import jinja2 import pprint import napalm import paramiko.ssh_exception import jnpr.junos.exception import time class RouterList: def __init__(self): self.routers = [] def append(self, router): self.routers.append(router) def get(self, name): for router in self.routers: if router.name == name: return router def list(self): return self.routers class Router: def __init__(self): self.id = 0 self.name = "" self.type = "" self.template = None self.config = None self.links = [] self.attrs = {} self.device = None self.loaded = False def connect(self, wait_for_boot=False): if self.type == "xrv": driver = napalm.get_network_driver("iosxr") elif self.type == "vmx": driver = napalm.get_network_driver("junos") elif self.type == "csr": driver = napalm.get_network_driver("ios") else: raise Exception("Unknown device type: 
{}".format(self.type)) self.device = driver(self.get_address(), "vrnetlab", "VR-netlab9") TIMEOUT = 60*15 TIMER = 0 if wait_for_boot: while True: try: if TIMER >= TIMEOUT: # Timeout if router hasn't started in 15 minutes. raise TimeoutError("Timed out wating for router {} to start".format(self.name)) self.device.open() return self.device except paramiko.ssh_exception.SSHException: # xrv and csr throws this exception when connection # failed, wait a bit and retry. TIMER += 2 time.sleep(2) except jnpr.junos.exception.ConnectError: # vmx throws this exception when connection failed # wait a bit and retry TIMER += 2 time.sleep(2) else: self.device.open() return self.device def load_merge(self): if not self.device: self.connect() self.loaded = True return self.device.load_merge_candidate(config=self.config) def commit_config(self): if not self.loaded: raise Exception("Please run load_merge first") return self.device.commit_config() def discard_config(self): if not self.loaded: raise Exception("Please run load_merge first") return self.device.discard_config() def compare_config(self): if not self.loaded: raise Exception("Please run load_merge first") return self.device.compare_config() def get_address(self): import subprocess import socket try: return socket.gethostbyname(self.name) except socket.gaierror: # Try docker inspect if gethostbyname fails cmd = [ "docker", "inspect", "--format", "'{{.NetworkSettings.IPAddress}}'", self.name ] p = subprocess.Popen(cmd, stdout=subprocess.PIPE, cwd=".") return str(p.communicate()[0].strip().replace("'", "")) def add_attr(self, key, value): self.attrs[key] = value class ConfigBootstrap: def __init__(self, wait_for_boot): self.routers = RouterList() self.wait_for_boot = wait_for_boot def load_router(self, name, model, template, attrs): router = Router() router.id = 1 router.name = name for k in attrs: router.add_attr(k, attrs[k]) router.type = model router.template = template self.routers.append(router) def load_topology(self, 
topology, xr, junos, ios): self.routers = RouterList() i = 1 for name in topology["routers"]: elem = topology["routers"][name] router = Router() if "id" not in elem: router.id = i else: router.id = elem["id"] for key in elem: if key not in ["id", "type"]: router.add_attr(key, elem[key]) router.name = name router.type = elem["type"] if router.type == "xrv": router.template = xr if router.type == "vmx": router.template = junos if router.type == "csr": router.template = ios self.routers.append(router) i = i + 1 i = 1 for link in topology["links"]: left = self.routers.get(link["left"]["router"]) right = self.routers.get(link["right"]["router"]) left.links.append({ "interface": link["left"]["interface"], "numeric": link["left"]["numeric"], "id": max(left.id, right.id) + i, "octet": 1, "remote": { "router": link["right"]["router"], "interface": link["right"]["interface"], "numeric": link["right"]["numeric"] } }) right.links.append({ "interface": link["right"]["interface"], "numeric": link["right"]["numeric"], "id": max(left.id, right.id)+i, "octet": 2, "remote": { "router": link["left"]["router"], "interface": link["left"]["interface"], "numeric": link["left"]["numeric"] } }) i = i + 1 def connect(self): for router in self.routers.list(): router.connect(self.wait_for_boot) def render_config(self): for router in self.routers.list(): config = { "hostname": router.name, "links": router.links, "id": router.id } for key in router.attrs: config[key] = router.attrs[key] env = jinja2.Environment(loader=jinja2.FileSystemLoader(['./'])) template = env.get_template(router.template) router.config = template.render(config) router.load_merge() def apply_config(self): for router in self.routers.list(): router.commit_config() def diff_config(self): for router in self.routers.list(): print(router.compare_config()) router.discard_config() if __name__ == '__main__': import argparse import os parser = argparse.ArgumentParser() parser.add_argument("--topo", help="Low-level topology file from 
topomachine") parser.add_argument("--xr", help="IOS-XR Template") parser.add_argument("--junos", help="JunOS template") parser.add_argument("--ios", help="IOS template") parser.add_argument("--router", help="Name of your virtual router to configure, don't use together with --topo") parser.add_argument("--config", help="Template to apply to your router specified with --router") parser.add_argument("--type", help="Type of router specified with --router") parser.add_argument("--attr", help="Add extra attribute exposed in your config template", action="append") parser.add_argument("--wait-for-boot", help="Retry connection until successful", default=False, action="store_true") parser.add_argument("--run", help="Apply configuration", default=False, action="store_true") parser.add_argument("--diff", help="Display configuration diff but don't commit", default=False, action="store_true") args = parser.parse_args() cb = ConfigBootstrap(args.wait_for_boot) if args.topo: if args.router: print("You should use either --router or --topo") sys.exit(1) if not os.path.isfile(args.topo): print("Topology file doesn't exist") sys.exit(1) if args.xr and not os.path.isfile(args.xr): print("IOS-XR template doesn't exist") sys.exit(1) if args.junos and not os.path.isfile(args.junos): print("JunOS template doesn't exist") sys.exit(1) if args.ios and not os.path.isfile(args.ios): print("IOS template doesn't exist") sys.exit(1) input_file = open(args.topo, "r") topology = json.loads(input_file.read()) input_file.close() cb.load_topology(topology, args.xr, args.junos, args.ios) cb.connect() if args.run or args.diff: cb.render_config() if args.run: cb.apply_config() if args.diff: cb.diff_config() if args.router: if not args.config or not os.path.isfile(args.config): print("Configuration template doesn't exist") sys.exit(1) if args.type not in [ "vmx", "csr", "xrv"]: print("Invalid router type {}".format(args.type)) sys.exit(1) attrs = {} if args.attr: try: for entry in args.attr: pcs = 
entry.split("=") attrs[pcs[0]] = pcs[1] except IndexError: print("Failed to parse extra attributes") sys.exit(1) cb.load_router(args.router, args.type, args.config, attrs) cb.connect() if args.run or args.diff: cb.render_config() if args.run: cb.apply_config() if args.diff: cb.diff_config() 07070100000017000041ED00000000000000000000000364D7C43700000000000000000000000000000000000000000000002300000000vrnetlab-git1691862071.9187175/csr07070100000018000081A400000000000000000000000164D7C4370000017B000000000000000000000000000000000000002C00000000vrnetlab-git1691862071.9187175/csr/MakefileVENDOR=Cisco NAME=CSR1000v IMAGE_FORMAT=qcow2 IMAGE_GLOB=*.qcow2 # match versions like: # csr1000v-universalk9.16.03.01a.qcow2 # csr1000v-universalk9.16.04.01.qcow2 VERSION=$(shell echo $(IMAGE) | sed -e 's/.\+[^0-9]\([0-9]\+\.[0-9]\+\.[0-9]\+[a-z]\?\)\([^0-9].*\|$$\)/\1/') -include ../makefile-sanity.include -include ../makefile.include -include ../makefile-install.include 07070100000019000081A400000000000000000000000164D7C43700000F9A000000000000000000000000000000000000002D00000000vrnetlab-git1691862071.9187175/csr/README.mdvrnetlab / Cisco CSR1000v =========================== This is the vrnetlab docker image for Cisco CSR1000v. On installation of CSR1000v the user is presented with the choice of output, which can be over serial console, a video console or through automatic detection of one or the other. Empirical studies show that the automatic detection is far from infallible and so we force the use of the serial console by feeding the VM an .iso image that contains a small bootstrap configuration that sets the output to serial console. This means we have to boot up the VM once to feed it this configuration and then restart it for the changes to take effect. Naturally we want to do this in the build process so as to avoid having to restart the router once for every time we run the docker image. 
Unfortunately docker doesn't allow us to run docker build with `--privileged` so there is no KVM acceleration, making this process excruciatingly slow were it to be performed in the docker build phase. Instead we build a basic image using docker build, which essentially just assembles the required files, then run it with `--privileged` to start up the VM and feed it the .iso image. After we are done we shut down the VM and commit this new state into the final docker image. This is unorthodox but works and saves us a lot of time. Building the docker image ------------------------- Put the .qcow2 file in this directory and run `make docker-image` and you should be good to go. The resulting image is called `vr-csr`. You can tag it with something else if you want, like `my-repo.example.com/vr-csr` and then push it to your repo. The tag is the same as the version of the CSR image, so if you have csr1000v-universalk9.16.04.01.qcow2 your final docker image will be called vr-csr:16.04.01 Please note that you will always need to specify the version when starting your router, as the "latest" tag is not added to any images since it has no meaning in this context. It's been tested to boot and respond to SSH with: * 16.03.01a (csr1000v-universalk9.16.03.01a.qcow2) * 16.04.01 (csr1000v-universalk9.16.04.01.qcow2) Usage ----- ``` docker run -d --privileged --name my-csr-router vr-csr ``` Interface mapping ----------------- IOS XE 16.03.01 and 16.04.01 support only 10 interfaces; GigabitEthernet1 is always configured as the management interface, so we can only use 9 interfaces for traffic. If you configure vrnetlab to use more than 10, the interfaces will be mapped like the table below. 
The following images have been verified to NOT exhibit this behavior:

- csr1000v-universalk9.03.16.02.S.155-3.S2-ext.qcow2
- csr1000v-universalk9.03.17.02.S.156-1.S2-std.qcow2

| vr-csr | vr-xcon |
| :---: | :---: |
| Gi2 | 10 |
| Gi3 | 1 |
| Gi4 | 2 |
| Gi5 | 3 |
| Gi6 | 4 |
| Gi7 | 5 |
| Gi8 | 6 |
| Gi9 | 7 |
| Gi10 | 8 |
| Gi11 | 9 |

System requirements ------------------- CPU: 1 core RAM: 4GB Disk: <500MB License handling ---------------- You can feed a license file into CSR1000V by putting a text file containing the license in this directory next to your .qcow2 image. Name the license file the same as your .qcow2 file but append ".license", e.g. if you have "csr1000v-universalk9.16.04.01.qcow2" you would name the license file "csr1000v-universalk9.16.04.01.qcow2.license". The license is bound to a specific UDI and usually expires within a given time. To make sure that everything works out smoothly we configure the clock to a specific date during the installation process. This is because the license only has an expiration date, not a start date. The license unlocks features and throughput; the default throughput for CSR is 100Kbit/s, which is totally useless if you want to configure the device with a fairly large configuration. FUAQ - Frequently or Unfrequently Asked Questions ------------------------------------------------- ##### Q: Has this been extensively tested? A: Nope. 
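The `VERSION` variable in the csr Makefile is extracted from the image filename with a sed expression. A rough Python translation of that pattern (the function name is ours, for illustration only) shows what it picks out:

```python
import re

def extract_version(image_name):
    """Mimic the csr Makefile's sed expression: take the right-most
    dotted version string (e.g. 16.03.01a) from a CSR image filename."""
    # greedy '.+' pushes the match to the last version-like substring,
    # just like sed's greedy '.\+' prefix does
    m = re.match(r'.+[^0-9](\d+\.\d+\.\d+[a-z]?)(?:[^0-9].*)?$', image_name)
    return m.group(1) if m else image_name

print(extract_version("csr1000v-universalk9.16.03.01a.qcow2"))  # 16.03.01a
print(extract_version("csr1000v-universalk9.16.04.01.qcow2"))   # 16.04.01
```

This also illustrates why the resulting image tags look like `vr-csr:16.04.01`.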
0707010000001A000041ED00000000000000000000000264D7C43700000000000000000000000000000000000000000000002A00000000vrnetlab-git1691862071.9187175/csr/docker0707010000001B000081A400000000000000000000000164D7C437000001EC000000000000000000000000000000000000003500000000vrnetlab-git1691862071.9187175/csr/docker/DockerfileFROM debian:bullseye MAINTAINER Kristian Larsson <kristian@spritelink.net> ENV DEBIAN_FRONTEND=noninteractive RUN apt-get update -qy \ && apt-get upgrade -qy \ && apt-get install -y \ bridge-utils \ iproute2 \ python3-ipy \ socat \ qemu-kvm \ genisoimage \ && rm -rf /var/lib/apt/lists/* ARG VERSION ENV VERSION=${VERSION} ARG IMAGE COPY $IMAGE* / COPY *.py / EXPOSE 22 161/udp 830 5000 10000-10099 HEALTHCHECK CMD ["/healthcheck.py"] ENTRYPOINT ["/launch.py"] 0707010000001C000081ED00000000000000000000000164D7C43700001B62000000000000000000000000000000000000003400000000vrnetlab-git1691862071.9187175/csr/docker/launch.py#!/usr/bin/env python3 import datetime import logging import os import re import signal import subprocess import sys import telnetlib import time import vrnetlab def handle_SIGCHLD(signal, frame): os.waitpid(-1, os.WNOHANG) def handle_SIGTERM(signal, frame): sys.exit(0) signal.signal(signal.SIGINT, handle_SIGTERM) signal.signal(signal.SIGTERM, handle_SIGTERM) signal.signal(signal.SIGCHLD, handle_SIGCHLD) TRACE_LEVEL_NUM = 9 logging.addLevelName(TRACE_LEVEL_NUM, "TRACE") def trace(self, message, *args, **kws): # Yes, logger takes its '*args' as 'args'. 
if self.isEnabledFor(TRACE_LEVEL_NUM): self._log(TRACE_LEVEL_NUM, message, args, **kws) logging.Logger.trace = trace class CSR_vm(vrnetlab.VM): def __init__(self, username, password, install_mode=False): for e in os.listdir("/"): if re.search(r"\.qcow2$", e): disk_image = "/" + e if re.search(r"\.license$", e): os.rename("/" + e, "/tftpboot/license.lic") self.license = False if os.path.isfile("/tftpboot/license.lic"): logger.info("License found") self.license = True super(CSR_vm, self).__init__(username, password, disk_image=disk_image) self.nic_type = "virtio-net-pci" self.install_mode = install_mode self.num_nics = 9 if self.install_mode: logger.trace("install mode") self.image_name = "config.iso" self.create_boot_image() self.qemu_args.extend(["-cdrom", "/" + self.image_name]) def create_boot_image(self): """ Creates an iso image with a bootstrap configuration """ cfg_file = open('/iosxe_config.txt', 'w') if self.license: cfg_file.write("do clock set 13:33:37 1 Jan 2010\r\n") cfg_file.write("interface GigabitEthernet1\r\n") cfg_file.write("ip address 10.0.0.15 255.255.255.0\r\n") cfg_file.write("no shut\r\n") cfg_file.write("exit\r\n") cfg_file.write("license accept end user agreement\r\n") cfg_file.write("yes\r\n") cfg_file.write("do license install tftp://10.0.0.2/license.lic\r\n\r\n") cfg_file.write("platform console serial\r\n\r\n") cfg_file.write("do wr\r\n") cfg_file.write("do reload\r\n") cfg_file.close() genisoimage_args = ["genisoimage", "-l", "-o", "/" + self.image_name, "/iosxe_config.txt"] subprocess.run(genisoimage_args) def bootstrap_spin(self): """ This function should be called periodically to do work. """ if self.spins > 300: # too many spins with no result -> give up self.stop() self.start() return (ridx, match, res) = self.tn.expect([b"Press RETURN to get started!"], 1) if match: # got a match! 
if ridx == 0: # login if self.install_mode: self.wait_write("", wait=None) self.wait_write("", None) self.wait_write("enable", wait=">") self.wait_write("clear platform software vnic-if nvtable") self.wait_write("") self.running = True return self.logger.debug("matched, Press RETURN to get started.") self.wait_write("", wait=None) # run main config! self.bootstrap_config() # close telnet connection self.tn.close() # startup time? startup_time = datetime.datetime.now() - self.start_time self.logger.info("Startup complete in: %s" % startup_time) # mark as running self.running = True return # no match, if we saw some output from the router it's probably # booting, so let's give it some more time if res != b'': self.logger.trace("OUTPUT: %s" % res.decode()) # reset spins if we saw some output self.spins = 0 self.spins += 1 return def bootstrap_config(self): """ Do the actual bootstrap config """ self.logger.info("applying bootstrap configuration") self.wait_write("", None) self.wait_write("enable", wait=">") self.wait_write("configure terminal", wait=">") self.wait_write("hostname csr1000v") self.wait_write("username %s privilege 15 password %s" % (self.username, self.password)) if int(self.version.split('.')[0]) >= 16: self.wait_write("ip domain name example.com") else: self.wait_write("ip domain-name example.com") self.wait_write("crypto key generate rsa modulus 2048") self.wait_write("interface GigabitEthernet1") self.wait_write("ip address 10.0.0.15 255.255.255.0") self.wait_write("no shut") self.wait_write("exit") self.wait_write("restconf") self.wait_write("netconf-yang") self.wait_write("line vty 0 4") self.wait_write("login local") self.wait_write("transport input all") self.wait_write("end") self.wait_write("copy running-config startup-config") self.wait_write("\r", None) class CSR(vrnetlab.VR): def __init__(self, username, password): super(CSR, self).__init__(username, password) self.vms = [ CSR_vm(username, password) ] class CSR_installer(CSR): """ CSR 
installer Will start the CSR with a mounted iso to make sure that we get console output on serial, not vga. """ def __init__(self, username, password): super(CSR, self).__init__(username, password) self.vms = [ CSR_vm(username, password, install_mode=True) ] def install(self): self.logger.info("Installing CSR") csr = self.vms[0] while not csr.running: csr.work() time.sleep(30) csr.stop() self.logger.info("Installation complete") if __name__ == '__main__': import argparse parser = argparse.ArgumentParser(description='') parser.add_argument('--trace', action='store_true', help='enable trace level logging') parser.add_argument('--username', default='vrnetlab', help='Username') parser.add_argument('--password', default='VR-netlab9', help='Password') parser.add_argument('--install', action='store_true', help='Install CSR') args = parser.parse_args() LOG_FORMAT = "%(asctime)s: %(module)-10s %(levelname)-8s %(message)s" logging.basicConfig(format=LOG_FORMAT) logger = logging.getLogger() logger.setLevel(logging.DEBUG) if args.trace: logger.setLevel(1) if args.install: vr = CSR_installer(args.username, args.password) vr.install() else: vr = CSR(args.username, args.password) vr.start() 0707010000001D000081ED00000000000000000000000164D7C4370000145A000000000000000000000000000000000000002F00000000vrnetlab-git1691862071.9187175/git-lfs-repo.sh#!/bin/bash unknown_os () { echo "Unfortunately, your operating system distribution and version are not supported by this script." echo echo "You can override the OS detection by setting os= and dist= prior to running this script." echo "You can find a list of supported OSes and distributions on our website: https://packagecloud.io/docs#os_distro_version" echo echo "For example, to force Ubuntu Trusty: os=ubuntu dist=trusty ./script.sh" echo echo "Please email support@packagecloud.io and let us know if you run into any issues." exit 1 } curl_check () { echo "Checking for curl..." if command -v curl > /dev/null; then echo "Detected curl..." 
else echo "Installing curl..." apt-get install -q -y curl fi } install_debian_keyring () { if [ "${os}" = "debian" ]; then echo "Installing debian-archive-keyring which is needed for installing " echo "apt-transport-https on many Debian systems." apt-get install -y debian-archive-keyring &> /dev/null fi } detect_os () { if [[ ( -z "${os}" ) && ( -z "${dist}" ) ]]; then # some systems dont have lsb-release yet have the lsb_release binary and # vice-versa if [ -e /etc/lsb-release ]; then . /etc/lsb-release if [ "${ID}" = "raspbian" ]; then os=${ID} dist=`cut --delimiter='.' -f1 /etc/debian_version` else os=${DISTRIB_ID} dist=${DISTRIB_CODENAME} if [ -z "$dist" ]; then dist=${DISTRIB_RELEASE} fi fi elif [ `which lsb_release 2>/dev/null` ]; then dist=`lsb_release -c | cut -f2` os=`lsb_release -i | cut -f2 | awk '{ print tolower($1) }'` elif [ -e /etc/debian_version ]; then # some Debians have jessie/sid in their /etc/debian_version # while others have '6.0.7' os=`cat /etc/issue | head -1 | awk '{ print tolower($1) }'` if grep -q '/' /etc/debian_version; then dist=`cut --delimiter='/' -f1 /etc/debian_version` else dist=`cut --delimiter='.' -f1 /etc/debian_version` fi else unknown_os fi fi if [ -z "$dist" ]; then unknown_os fi # remove whitespace from OS and dist name os="${os// /}" dist="${dist// /}" echo "Detected operating system as $os/$dist." } main () { detect_os curl_check # Need to first run apt-get update so that apt-transport-https can be # installed echo -n "Running apt-get update... " apt-get update &> /dev/null echo "done." # Install the debian-archive-keyring package on debian systems so that # apt-transport-https can be installed next install_debian_keyring echo -n "Installing apt-transport-https... " apt-get install -y apt-transport-https &> /dev/null echo "done." 
gpg_key_url="https://packagecloud.io/github/git-lfs/gpgkey" apt_config_url="https://packagecloud.io/install/repositories/github/git-lfs/config_file.list?os=${os}&dist=${dist}&source=script" apt_source_path="/etc/apt/sources.list.d/github_git-lfs.list" echo -n "Installing $apt_source_path..." # create an apt config file for this repository curl -sSf "${apt_config_url}" > $apt_source_path curl_exit_code=$? if [ "$curl_exit_code" = "22" ]; then echo echo echo -n "Unable to download repo config from: " echo "${apt_config_url}" echo echo "This usually happens if your operating system is not supported by " echo "packagecloud.io, or this script's OS detection failed." echo echo "You can override the OS detection by setting os= and dist= prior to running this script." echo "You can find a list of supported OSes and distributions on our website: https://packagecloud.io/docs#os_distro_version" echo echo "For example, to force Ubuntu Trusty: os=ubuntu dist=trusty ./script.sh" echo echo "If you are running a supported OS, please email support@packagecloud.io and report this." [ -e $apt_source_path ] && rm $apt_source_path exit 1 elif [ "$curl_exit_code" = "35" -o "$curl_exit_code" = "60" ]; then echo "curl is unable to connect to packagecloud.io over TLS when running: " echo " curl ${apt_config_url}" echo "This is usually due to one of two things:" echo echo " 1.) Missing CA root certificates (make sure the ca-certificates package is installed)" echo " 2.) An old version of libssl. Try upgrading libssl on your system to a more recent version" echo echo "Contact support@packagecloud.io with information about your system for help." [ -e $apt_source_path ] && rm $apt_source_path exit 1 elif [ "$curl_exit_code" -gt "0" ]; then echo echo "Unable to run: " echo " curl ${apt_config_url}" echo echo "Double check your curl installation and try again." [ -e $apt_source_path ] && rm $apt_source_path exit 1 else echo "done." fi echo -n "Importing packagecloud gpg key... 
" # import the gpg key curl -L "${gpg_key_url}" 2> /dev/null | apt-key add - &>/dev/null echo "done." echo -n "Running apt-get update... " # update apt on this system apt-get update &> /dev/null echo "done." echo echo "The repository is setup! You can now install packages." } main 0707010000001E000081A400000000000000000000000164D7C43700000C6C000000000000000000000000000000000000003800000000vrnetlab-git1691862071.9187175/makefile-install.include# # This Makefile can be included by images that need to run an install phase, # i.e. in addition to doing the docker build, we also want to run some stuff # inside that image to come up with the final output image. in the case of # JUNOS we want to do this as the first time the vMX RE boots up it detects # that it's in a vMX RE mode and then reboots. By starting it up and letting it # do this first check-and-reboot during the image build time we save ourselves # from doing this on *every* run of the container image later. # # Since we start the virtual router we are actually running a virtual machine # with qemu and for that we want KVM hardware acceleration, which requires # running docker with --privileged. `docker build` doesn't have the # --privileged argument, so instead we first run the build as normal up to the # point where we want to start the virtual router. Then we use `docker run # --privilged ...` do the needful and after commit it using `docker commit ...` # to create the final output image. # # One of the problems with this is that normally the docker build is kind of # idempotent in that it uses a command cache and if there are no changes to the # Dockerfile or input files it will not rerun those commands but use a cached # image. This greatly speeds up the build process. However, when doing this # manual `docker run` dance we miss this opportunity since it will always be # run.... so we worked around it. 
Before doing docker run we check the SHA sum # of the built image and compare this to the ones of the previously built # image. If they are the same it means the docker build was entirely cached and # there's no need to run the image, otherwise if there's a change we do run it. # When comparing the hashes we omit the last layer of the previous build. It # contains the committed changes from the install phase of the previous build. # Include this makefile to have your image automatically do that dance. docker-pre-build: -cat cidfile | xargs --no-run-if-empty docker rm -f -rm cidfile -docker tag $(REGISTRY)vr-$(VR_NAME):$(VERSION) $(REGISTRY)vr-$(VR_NAME):$(VERSION)-previous-build docker-build: docker-build-common -docker inspect --format '{{.RootFS.Layers}}' $(REGISTRY)vr-$(VR_NAME):$(VERSION)-previous-build | tr -d '][' | awk '{ $$(NF)=""; print }' > built-image-sha-previous docker inspect --format '{{.RootFS.Layers}}' $(REGISTRY)vr-$(VR_NAME):$(VERSION) | tr -d '][' > built-image-sha-current if [ "$$(cat built-image-sha-previous | sed -e 's/[[:space:]]*$$//')" = "$$(cat built-image-sha-current)" ]; then echo "Previous image is the same as current, retagging!"; \ docker tag $(REGISTRY)vr-$(VR_NAME):$(VERSION)-previous-build $(REGISTRY)vr-$(VR_NAME):$(VERSION) || true; \ else \ echo "Current build differ from previous, running install!"; \ docker run --cidfile cidfile --privileged $(REGISTRY)vr-$(VR_NAME):$(VERSION) --trace --install $(EXTRA_INSTALL_ARGS); \ docker commit --change='ENTRYPOINT ["/launch.py"]' $$(cat cidfile) $(REGISTRY)vr-$(VR_NAME):$(VERSION); \ docker rm -f $$(cat cidfile); \ fi docker rmi -f $(REGISTRY)vr-$(VR_NAME):$(VERSION)-previous-build || true rm built-image-sha* 0707010000001F000081A400000000000000000000000164D7C43700000172000000000000000000000000000000000000003700000000vrnetlab-git1691862071.9187175/makefile-sanity.includeifdef DOCKER_REGISTRY ifneq ($(DOCKER_REGISTRY), $(shell echo $(DOCKER_REGISTRY) | sed -ne 
'/^[A-Za-z0-9.\/\-]\+\(:[0-9]\+\)\?\([A-Za-z0-9.\/-]\+\)\?$$/p')) $(error Bad docker registry URL. Should follow format registry.example.com/foo, registry.example.com:1234 or registry.example.com:1234/foo) endif REGISTRY=$(DOCKER_REGISTRY)/ else REGISTRY=vrnetlab/ endif 07070100000020000081A400000000000000000000000164D7C437000009E9000000000000000000000000000000000000003000000000vrnetlab-git1691862071.9187175/makefile.includeVR_NAME=$(shell basename $$(pwd)) IMAGES=$(shell ls $(IMAGE_GLOB) 2>/dev/null) NUM_IMAGES=$(shell ls $(IMAGES) | wc -l) ifeq ($(NUM_IMAGES), 0) docker-image: no-image usage else docker-image: for IMAGE in $(IMAGES); do \ echo "Making $$IMAGE"; \ $(MAKE) IMAGE=$$IMAGE docker-build; \ done endif docker-clean-build: -rm -f docker/*.qcow2* docker/*.tgz* docker/*.vmdk* docker/*.iso docker-pre-build: ; docker-build-image-copy: cp $(IMAGE)* docker/ TAG_NAME = $(REGISTRY)vr-$(VR_NAME):$(VERSION) ifeq ($(PNS),) PNS:=$(shell whoami | sed 's/[^[:alnum:]._-]\+/_/g') endif docker-build-common: docker-clean-build docker-pre-build @if [ -z "$$IMAGE" ]; then echo "ERROR: No IMAGE specified"; exit 1; fi @if [ "$(IMAGE)" = "$(VERSION)" ]; then echo "ERROR: Incorrect version string ($(IMAGE)). The regexp for extracting version information is likely incorrect, check the regexp in the Makefile or open an issue at https://github.com/plajjan/vrnetlab/issues/new including the image file name you are using."; exit 1; fi @echo "Building docker image using $(IMAGE) as $(TAG_NAME)" cp ../common/* docker/ $(MAKE) IMAGE=$$IMAGE docker-build-image-copy (cd docker; docker build --build-arg http_proxy=$(http_proxy) --build-arg https_proxy=$(https_proxy) --build-arg IMAGE=$(IMAGE) --build-arg VERSION=$(VERSION) -t $(TAG_NAME) .) 
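The retag-or-install decision described in makefile-install.include boils down to comparing the fresh build's layer list against the previous build's list with its last (committed) layer dropped. A minimal Python sketch of that decision (function and variable names are ours, not from the repo):

```python
def install_needed(previous_layers, current_layers):
    """Decide whether the install phase must run again.

    The previous build's last layer holds the state committed by the
    install phase; if everything beneath it matches the fresh build,
    the docker build was fully cached and we can simply retag."""
    return previous_layers[:-1] != current_layers

# fully cached build: only the commit layer differs -> retag, skip install
print(install_needed(["sha:a", "sha:b", "sha:install"], ["sha:a", "sha:b"]))  # False
# an input file changed -> run the install phase again
print(install_needed(["sha:a", "sha:b", "sha:install"], ["sha:a", "sha:c"]))  # True
```

In the Makefile this same comparison is done with `docker inspect --format '{{.RootFS.Layers}}'`, `awk` (to drop the last layer) and `diff`.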
docker-build: docker-build-common docker-push: for IMAGE in $(IMAGES); do \ $(MAKE) IMAGE=$$IMAGE docker-push-image; \ done docker-push-image: @if [ -z "$$IMAGE" ]; then echo "ERROR: No IMAGE specified"; exit 1; fi @if [ "$(IMAGE)" = "$(VERSION)" ]; then echo "ERROR: Incorrect version string"; exit 1; fi docker push $(TAG_NAME) usage: @echo "Usage: put the $(VENDOR) $(NAME) $(IMAGE_FORMAT) image in this directory and run:" @echo " make" no-image: @echo "ERROR: you have no $(IMAGE_FORMAT) ($(IMAGE_GLOB)) image" version-test: @echo Extracting version from filename $(IMAGE) @echo Version: $(VERSION) docker-test: set -xe; for IMAGE in $(IMAGES); do \ $(MAKE) IMAGE=$$IMAGE docker-test-image; \ done CNT_PREFIX ?= $(PNS)-test-image-$(VR_NAME) TEST_TIMEOUT ?= 2400 docker-test-image: CONTAINER_NAME?=$(CNT_PREFIX)-$(VERSION) docker-test-image: ../test/test-image $(TAG_NAME) $(CONTAINER_NAME) $$TEST_PARAMS docker-test-clean: docker ps -aqf name=$(CNT_PREFIX) | xargs --no-run-if-empty docker rm -f docker-test-save-logs: for cnt in `docker ps -af name=$(CNT_PREFIX) --format '{{.Names}}'`; do \ docker logs $${cnt} > $${cnt}.log 2>&1; \ done all: docker-image 07070100000021000041ED00000000000000000000000364D7C43700000000000000000000000000000000000000000000002400000000vrnetlab-git1691862071.9187175/nxos07070100000022000081A400000000000000000000000164D7C43700000132000000000000000000000000000000000000002D00000000vrnetlab-git1691862071.9187175/nxos/MakefileVENDOR=Cisco NAME=NXOS Titanium IMAGE_FORMAT=qcow2 IMAGE_GLOB=*.qcow2 # match versions like: # TODO: add example file names here VERSION=$(shell echo $(IMAGE) | sed -e 's/.\+[^0-9]\([0-9]\.[0-9]\.[0-9]\.[A-Z][0-9]\.[0-9]\)[^0-9].*$$/\1/') -include ../makefile-sanity.include -include ../makefile.include 07070100000023000081A400000000000000000000000164D7C4370000049D000000000000000000000000000000000000002E00000000vrnetlab-git1691862071.9187175/nxos/README.mdvrnetlab / Cisco Nexus NXOS =========================== This is the vrnetlab 
docker image for Cisco Nexus NXOS Titanium emulator. Building the docker image ------------------------- Titanium doesn't appear to be exactly official but you can get it from the Internet. VIRL is said to include it, so you may have luck in extracting it from there. Anyway, put the .qcow2 file in this directory and run `make docker-image` and you should be good to go. The resulting image is called `vr-nxos`. You can tag it with something else if you want, like `my-repo.example.com/vr-nxos` and then push it to your repo. The tag is the same as the version of the NXOS image, so if you have nxosv-7.2.0.D1.1.qcow2 your final docker image will be called vr-nxos:7.2.0.D1.1 Usage ----- ``` docker run -d --privileged --name my-nxos-router vr-nxos ``` System requirements ------------------- CPU: 1 core RAM: 2GB Disk: <500MB FUAQ - Frequently or Unfrequently Asked Questions ------------------------------------------------- ##### Q: Has this been extensively tested? A: Nope. I don't use Nexus myself (yet) so not much testing at all really. Please do try it out and let me know if it works. 
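The launch.py scripts all register a custom TRACE log level (numeric 9, just below DEBUG) on the `logging` module. The pattern can be exercised on its own; the list-based handler here is only for demonstration and is not part of the repo:

```python
import logging

TRACE_LEVEL_NUM = 9
logging.addLevelName(TRACE_LEVEL_NUM, "TRACE")

def trace(self, message, *args, **kws):
    # logger takes its '*args' as 'args'
    if self.isEnabledFor(TRACE_LEVEL_NUM):
        self._log(TRACE_LEVEL_NUM, message, args, **kws)

logging.Logger.trace = trace

records = []
class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(record)

logger = logging.getLogger("demo")
logger.addHandler(ListHandler())
logger.propagate = False
logger.setLevel(1)  # what launch.py's --trace flag effectively does
logger.trace("booting %s", "vr")
```

With the level set above 9 (e.g. the default DEBUG setup without `--trace`), the `logger.trace()` calls become no-ops.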
07070100000024000041ED00000000000000000000000264D7C43700000000000000000000000000000000000000000000002B00000000vrnetlab-git1691862071.9187175/nxos/docker07070100000025000081A400000000000000000000000164D7C437000001B7000000000000000000000000000000000000003600000000vrnetlab-git1691862071.9187175/nxos/docker/DockerfileFROM debian:bullseye MAINTAINER Kristian Larsson <kristian@spritelink.net> ENV DEBIAN_FRONTEND=noninteractive RUN apt-get update -qy \ && apt-get upgrade -qy \ && apt-get install -y \ bridge-utils \ iproute2 \ python3-ipy \ socat \ qemu-kvm \ && rm -rf /var/lib/apt/lists/* ARG IMAGE COPY $IMAGE* / COPY *.py / EXPOSE 22 161/udp 830 5000 10000-10099 HEALTHCHECK CMD ["/healthcheck.py"] ENTRYPOINT ["/launch.py"] 07070100000026000081ED00000000000000000000000164D7C4370000101C000000000000000000000000000000000000003500000000vrnetlab-git1691862071.9187175/nxos/docker/launch.py#!/usr/bin/env python3 import datetime import logging import os import random import re import signal import sys import telnetlib import time import vrnetlab def handle_SIGCHLD(signal, frame): os.waitpid(-1, os.WNOHANG) def handle_SIGTERM(signal, frame): sys.exit(0) signal.signal(signal.SIGINT, handle_SIGTERM) signal.signal(signal.SIGTERM, handle_SIGTERM) signal.signal(signal.SIGCHLD, handle_SIGCHLD) TRACE_LEVEL_NUM = 9 logging.addLevelName(TRACE_LEVEL_NUM, "TRACE") def trace(self, message, *args, **kws): # Yes, logger takes its '*args' as 'args'. if self.isEnabledFor(TRACE_LEVEL_NUM): self._log(TRACE_LEVEL_NUM, message, args, **kws) logging.Logger.trace = trace class NXOS_vm(vrnetlab.VM): def __init__(self, username, password): for e in os.listdir("/"): if re.search(".qcow2$", e): disk_image = "/" + e super(NXOS_vm, self).__init__(username, password, disk_image=disk_image) self.num_nics = 144 self.credentials = [ ['admin', 'admin'] ] def bootstrap_spin(self): """ This function should be called periodically to do work. 
""" if self.spins > 300: # too many spins with no result -> give up self.stop() self.start() return (ridx, match, res) = self.tn.expect([b"login:"], 1) if match: # got a match! if ridx == 0: # login self.logger.debug("matched login prompt") try: username, password = self.credentials.pop(0) except IndexError as exc: self.logger.error("no more credentials to try") return self.logger.debug("trying to log in with %s / %s" % (username, password)) self.wait_write(username, wait=None) self.wait_write(password, wait="Password:") # run main config! self.bootstrap_config() # close telnet connection self.tn.close() # startup time? startup_time = datetime.datetime.now() - self.start_time self.logger.info("Startup complete in: %s" % startup_time) # mark as running self.running = True return # no match, if we saw some output from the router it's probably # booting, so let's give it some more time if res != b'': self.logger.trace("OUTPUT: %s" % res.decode()) # reset spins if we saw some output self.spins = 0 self.spins += 1 return def bootstrap_config(self): """ Do the actual bootstrap config """ self.logger.info("applying bootstrap configuration") self.wait_write("", None) self.wait_write("configure") self.wait_write("username %s password 0 %s role network-admin" % (self.username, self.password)) # configure mgmt interface self.wait_write("interface mgmt0") self.wait_write("ip address 10.0.0.15/24") self.wait_write("exit") self.wait_write("exit") self.wait_write("copy running-config startup-config") class NXOS(vrnetlab.VR): def __init__(self, username, password): super(NXOS, self).__init__(username, password) self.vms = [ NXOS_vm(username, password) ] if __name__ == '__main__': import argparse parser = argparse.ArgumentParser(description='') parser.add_argument('--trace', action='store_true', help='enable trace level logging') parser.add_argument('--username', default='vrnetlab', help='Username') parser.add_argument('--password', default='VR-netlab9', help='Password') args = 
parser.parse_args() LOG_FORMAT = "%(asctime)s: %(module)-10s %(levelname)-8s %(message)s" logging.basicConfig(format=LOG_FORMAT) logger = logging.getLogger() logger.setLevel(logging.DEBUG) if args.trace: logger.setLevel(1) vr = NXOS(args.username, args.password) vr.start() 07070100000027000041ED00000000000000000000000364D7C43700000000000000000000000000000000000000000000002700000000vrnetlab-git1691862071.9187175/nxos9kv07070100000028000081A400000000000000000000000164D7C43700000152000000000000000000000000000000000000003000000000vrnetlab-git1691862071.9187175/nxos9kv/MakefileVENDOR=Cisco NAME=NX-OS 9000v IMAGE_FORMAT=qcow2 IMAGE_GLOB=*.qcow2 # match versions like: # nexus9300v.9.3.7.qcow2 VERSION=$(shell echo $(IMAGE) | sed -e 's/.\+[^0-9]\([0-9]\.[0-9]\.[0-9].*\)\.qcow2$$/\1/') -include ../makefile-sanity.include -include ../makefile.include docker-pre-build: cp OVMF-pure-efi.fd nxos_config.txt docker 07070100000029000081A400000000000000000000000164D7C437000009A7000000000000000000000000000000000000003100000000vrnetlab-git1691862071.9187175/nxos9kv/README.mdvrnetlab / Cisco NX-OSv 9000 ============================ This is the vrnetlab docker image for Cisco NX-OSv 9000 Virtual Switch. Building the docker image ------------------------- This is the officially supported image and is different from the Titanium emulator. The image can be downloaded directly from Cisco site. Additional files needed ----------------------- You need to download the nexus9x00v image from Cisco. You will also need to download the EFI boot image, such as the one from https://www.kraxel.org/repos/jenkins/edk2. You need to extract the OVMF-pure-efi.fd file from the RPM package, this EFI boot image is used to boot up the nexus9x00v image. Anyway, put the .qcow2 and the fd files in this directory and run `make docker-image` and you should be good to go. The resulting image is called `vr-nxos9k`. 
You can tag it with something else if you want, like
`my-repo.example.com/vr-nxos9k`, and then push it to your repo. The tag is the
same as the version of the NX-OS image, so if you have nexus9300v.9.3.7.qcow2
your final docker image will be called vr-nxos9k:9.3.7.

Usage
-----
```
docker run -d --privileged --name my-nxos-router vr-nxos9k:9.3.7
```

You may specify the number of NICs with `--num-nics`; the default is 24, with
a maximum of 65 for the 9300v.

Initial Configuration
---------------------
The initial configuration file, nxos_config.txt, is required and should
contain some minimal configuration. A sample configuration is included.

System requirements
-------------------
Currently there are two platforms, 9300v and 9500v. If you are using vrnetlab
I assume you want the lightweight 9300v. The resource requirements below are
based on the 9300v.

CPU: 1 core, 2 preferred; the VR is not stable in my tests with 1 core.

RAM: 8GB. Cisco recommends 4GB, but in my case 4GB is not enough.

Disk: 8GB

I have only tested with the 9300v.

Known issues
------------
It is known that a previously booted image may not start properly. To work
around this, once a container stops and then restarts, all of its
configuration (stored in the overlay image) is wiped clean before running.
You will have to configure from scratch for each run.

FUAQ - Frequently or Unfrequently Asked Questions
-------------------------------------------------
##### Q: Has this been extensively tested?
A: Only basic configuration by the author, on an Ubuntu 20.04 server running
Intel Xeon chips; both CLI and NETCONF work. Layer 2 / layer 3 functions are
not tested.
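The Makefile derives the docker tag from the image filename with a sed
expression. A quick way to sanity-check which tag a given file will produce is
to translate that expression to Python; `image_version` below is a hypothetical
standalone helper, not part of the build:

```
import re

def image_version(image):
    """Extract the NX-OS version from an image filename, mirroring the
    VERSION sed expression in nxos9kv/Makefile (a POSIX BRE there)."""
    m = re.search(r'[^0-9]([0-9]\.[0-9]\.[0-9].*)\.qcow2$', image)
    return m.group(1) if m else None
```

For example, `image_version('nexus9300v.9.3.7.qcow2')` returns `'9.3.7'`, so
the resulting image is tagged vr-nxos9k:9.3.7.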
0707010000002A000041ED00000000000000000000000264D7C43700000000000000000000000000000000000000000000002E00000000vrnetlab-git1691862071.9187175/nxos9kv/docker0707010000002B000081A400000000000000000000000164D7C437000001B7000000000000000000000000000000000000003900000000vrnetlab-git1691862071.9187175/nxos9kv/docker/DockerfileFROM debian:bullseye ENV DEBIAN_FRONTEND=noninteractive RUN apt-get update -qy \ && apt-get upgrade -qy \ && apt-get install -y \ bridge-utils \ genisoimage \ iproute2 \ python3-ipy \ socat \ qemu-kvm \ && rm -rf /var/lib/apt/lists/* ARG IMAGE # binary files COPY $IMAGE *.fd / # the rest COPY *.py *.txt / EXPOSE 22 161/udp 830 5000 10000-10099 HEALTHCHECK CMD ["/healthcheck.py"] ENTRYPOINT ["/launch.py"] 0707010000002C000081ED00000000000000000000000164D7C43700001875000000000000000000000000000000000000003800000000vrnetlab-git1691862071.9187175/nxos9kv/docker/launch.py#!/usr/bin/env python3 import argparse import datetime import logging import os import re import signal import sys import vrnetlab def handle_SIGCHLD(signal, frame): os.waitpid(-1, os.WNOHANG) def handle_SIGTERM(signal, frame): sys.exit(0) signal.signal(signal.SIGINT, handle_SIGTERM) signal.signal(signal.SIGTERM, handle_SIGTERM) signal.signal(signal.SIGCHLD, handle_SIGCHLD) TRACE_LEVEL_NUM = 9 logging.addLevelName(TRACE_LEVEL_NUM, "TRACE") def trace(self, message, *args, **kws): # Yes, logger takes its '*args' as 'args'. 
if self.isEnabledFor(TRACE_LEVEL_NUM): self._log(TRACE_LEVEL_NUM, message, args, **kws) logging.Logger.trace = trace class NXOS9K_vm(vrnetlab.VM): def __init__(self, bios, username, password, num_nics): for e in os.listdir("/"): if re.search(".qcow2$", e): disk_image = "/" + e # the parent constructor needs to call create_overlay_image, # so must initialize the other parameters first self.bios = bios self.prompted = False super().__init__(username, password, disk_image=disk_image, ram=8192) self.num_nics = num_nics self.credentials = [ ['admin', 'Cisco1234'] ] def create_overlay_image(self): extended_args = ['-nographic', '-bios', self.bios, '-smp', '2'] # remove previously old overlay image, otherwise boot fails if os.path.exists(self.overlay_disk_image): os.remove(self.overlay_disk_image) # now re-create it! super().create_overlay_image() # use SATA driver for disk and set to drive 0 extended_args.extend(['-device', 'ahci,id=ahci0,bus=pci.0', '-drive', 'if=none,file=%s,id=drive-sata-disk0,format=qcow2' % self.overlay_disk_image, '-device', 'ide-drive,bus=ahci0.0,drive=drive-sata-disk0,id=drive-sata-disk0']) # create initial config and load it vrnetlab.run_command(['genisoimage', '-o', '/cfg.iso', '-l', '--iso-level', '2', 'nxos_config.txt']) extended_args.extend(['-drive', 'file=cfg.iso,media=cdrom']) return extended_args def bootstrap_spin(self): """ This function should be called periodically to do work. """ # press return to get prompt every 10 seconds if not self.prompted and self.spins % 10 == 0: self.wait_write('', wait=None) if self.spins > 300: # too many spins with no result -> give up self.prompted = False self.stop() # re-create overlay image self.create_overlay_image() self.start() return username, password = self.credentials[0] (ridx, match, res) = self.tn.expect([b'login:', b'Enter the password for "admin":', b'Confirm the password for "admin":'], 1) if match: # got a match! 
self.prompted = True self.logger.info("match found: %s", res.decode()) if ridx == 0: # login self.logger.info("matched login prompt") self.logger.info("trying to log in with %s / %s", username, password) self.wait_write(username, wait=None) self.wait_write(password, wait="Password:") # run main config! self.bootstrap_config() # close telnet connection self.tn.close() # startup time? startup_time = datetime.datetime.now() - self.start_time self.logger.info("Startup complete in: %s", startup_time) # mark as running self.running = True else: self.logger.info("Trying to reset admin password to %s", password) self.wait_write(password, wait=None) self.wait_write(password, wait='password') # no match, if we saw some output from the router it's probably # booting, so let's give it some more time elif res != b'': self.logger.trace("OUTPUT: %s", res.decode()) # reset spins if we saw some output self.spins = 0 self.spins += 1 return def bootstrap_config(self): """ Do the actual bootstrap config """ self.logger.info("applying bootstrap configuration") self.wait_write("", None) # figure out the running image self.wait_write("", None) self.wait_write("configure") self.wait_write("username %s password 0 %s role network-admin" % (self.username, self.password)) # configure mgmt interface self.wait_write("interface mgmt0") self.wait_write("ip address 10.0.0.15/24") self.wait_write("exit") # enable netconf with 10 sessions (max allowed) self.wait_write("feature netconf") self.wait_write("netconf sessions 10") self.wait_write("exit") self.wait_write("copy running-config startup-config") class NXOS9K(vrnetlab.VR): def __init__(self, bios, username, password, num_nics): super().__init__(username, password) self.vms = [ NXOS9K_vm(bios, username, password, num_nics) ] def main(): """Main method""" parser = argparse.ArgumentParser() parser.add_argument('--trace', action='store_true', help='enable trace level logging') parser.add_argument('--username', default='vrnetlab', help='Username') 
parser.add_argument('--password', default='VR-netlab9', help='Password') parser.add_argument('--bios', default='OVMF-pure-efi.fd', help='EFI bios image') parser.add_argument('--num-nics', type=int, default=24, help='Number of NICs') args = parser.parse_args() # check if the bios file exists if not os.path.exists(args.bios): print('Bios file %s does not exist' % args.bios) sys.exit(1) LOG_FORMAT = "%(asctime)s: %(module)-10s %(levelname)-8s %(message)s" logging.basicConfig(format=LOG_FORMAT) logger = logging.getLogger() logger.setLevel(logging.DEBUG) if args.trace: logger.setLevel(1) vr = NXOS9K(args.bios, args.username, args.password, args.num_nics) vr.start() if __name__ == '__main__': main() 0707010000002D000081A400000000000000000000000164D7C4370000007A000000000000000000000000000000000000003700000000vrnetlab-git1691862071.9187175/nxos9kv/nxos_config.txthostname nexus-switch username admin password Cisco1234 interface mgmt0 vrf member management ip address 10.0.0.15/24 0707010000002E000041ED00000000000000000000000364D7C43700000000000000000000000000000000000000000000002700000000vrnetlab-git1691862071.9187175/openwrt0707010000002F000081A400000000000000000000000164D7C43700000236000000000000000000000000000000000000003000000000vrnetlab-git1691862071.9187175/openwrt/MakefileVENDOR=OpenWRT NAME=OpenWRT IMAGE_FORMAT=img IMAGE_GLOB=*.img # match versions like: # openwrt-12.09-x86-kvm_guest-combined-ext4.img # openwrt-14.07-x86-kvm_guest-combined-ext4.img # openwrt-15.05.1-x86-kvm_guest-combined-ext4.img # openwrt-15.05-x86-kvm_guest-combined-ext4.img VERSION=$(shell echo $(IMAGE) | sed -e 's/openwrt-\([0-9][0-9]\.[0-9][0-9]\(\.[0-9]\+\)\?\)-.*/\1/') -include ../makefile-sanity.include -include ../makefile.include download: python3 download.py for F in `ls *.img.gz`; do gunzip -f $$F; done build: download $(MAKE) docker-image 
07070100000030000081A400000000000000000000000164D7C43700000715000000000000000000000000000000000000003100000000vrnetlab-git1691862071.9187175/openwrt/README.md
vrnetlab / OpenWRT
==================================

This is the vrnetlab docker image for OpenWRT.

Building the docker image
-------------------------
Run `make build` to automatically download images from the public OpenWRT
image repository and build them into vrnetlab docker images. `build` consists
of the `download` step and the `docker-image` step, which can be run
separately.

Use `make download` to automatically download images from the public OpenWRT
image repository at https://downloads.openwrt.org. The download script will
get everything that has a two-digit major version, e.g. 12.09, 14.07 or 15.05.
You can also download images manually by navigating to
https://downloads.openwrt.org/ and grabbing the file. You have to gunzip it.

Whichever way you get the images, once you have them, run `make docker-image`
to build the docker images. The resulting image is called `vr-openwrt`. You
can tag it with something else if you want, like
`my-repo.example.com/vr-openwrt`, and then push it to your repo. The tag is
the same as the version of the OpenWRT image, so if you have
openwrt-15.05-x86-kvm_guest-combined-ext4.img your final docker image will be
called vr-openwrt:15.05.

As per OpenWRT defaults, `br-lan` (`eth0`) is the LAN interface and `eth1` the
WAN interface.

Tested booting and responding to SSH:

 * openwrt-15.05-x86-kvm_guest-combined-ext4.img MD5:3d9b51a7e0cd728137318989a9fd35fb

Usage
-----
```
docker run -d --privileged --name openwrt1 vr-openwrt:15.05
```

System requirements
-------------------
CPU: 1 core

RAM: 128 MB

Disk: 256 MB

FAQ - Frequently or Unfrequently Asked Questions
-------------------------------------------------
##### Q: Has this been extensively tested?
A: Nope. It starts and you can connect to it.
Take it for a spin and provide some feedback :-) 07070100000031000041ED00000000000000000000000264D7C43700000000000000000000000000000000000000000000002E00000000vrnetlab-git1691862071.9187175/openwrt/docker07070100000032000081A400000000000000000000000164D7C437000001BA000000000000000000000000000000000000003900000000vrnetlab-git1691862071.9187175/openwrt/docker/DockerfileFROM debian:bullseye MAINTAINER Kristian Larsson <kristian@spritelink.net> ENV DEBIAN_FRONTEND=noninteractive RUN apt-get update -qy \ && apt-get upgrade -qy \ && apt-get install -y \ bridge-utils \ iproute2 \ python3-ipy \ socat \ qemu-kvm \ && rm -rf /var/lib/apt/lists/* ARG IMAGE COPY $IMAGE* / COPY *.py / EXPOSE 22 161/udp 80 830 5000 10000-10099 HEALTHCHECK CMD ["/healthcheck.py"] ENTRYPOINT ["/launch.py"] 07070100000033000081ED00000000000000000000000164D7C437000010DC000000000000000000000000000000000000003800000000vrnetlab-git1691862071.9187175/openwrt/docker/launch.py#!/usr/bin/env python3 import datetime import logging import os import re import signal import sys import telnetlib import vrnetlab def handle_SIGCHLD(signal, frame): os.waitpid(-1, os.WNOHANG) def handle_SIGTERM(signal, frame): sys.exit(0) signal.signal(signal.SIGINT, handle_SIGTERM) signal.signal(signal.SIGTERM, handle_SIGTERM) signal.signal(signal.SIGCHLD, handle_SIGCHLD) TRACE_LEVEL_NUM = 9 logging.addLevelName(TRACE_LEVEL_NUM, "TRACE") def trace(self, message, *args, **kws): # Yes, logger takes its '*args' as 'args'. if self.isEnabledFor(TRACE_LEVEL_NUM): self._log(TRACE_LEVEL_NUM, message, args, **kws) logging.Logger.trace = trace class OpenWRT_vm(vrnetlab.VM): def __init__(self, username, password): for e in os.listdir("/"): if re.search(".img$", e): disk_image = "/" + e super(OpenWRT_vm, self).__init__(username, password, disk_image=disk_image, ram=128) self.nic_type = "virtio-net-pci" self.num_nics = 1 def bootstrap_spin(self): """ This function should be called periodically to do work. 
""" if self.spins > 300: # too many spins with no result -> give up self.stop() self.start() return (ridx, match, res) = self.tn.expect([b"br-lan"], 1) if match: # got a match! if ridx == 0: # login self.logger.debug("VM started") # run main config! self.bootstrap_config() # close telnet connection self.tn.close() # startup time? startup_time = datetime.datetime.now() - self.start_time self.logger.info("Startup complete in: %s" % startup_time) # mark as running self.running = True return # no match, if we saw some output from the router it's probably # booting, so let's give it some more time if res != b'': self.logger.trace("OUTPUT: %s" % res.decode()) # reset spins if we saw some output self.spins = 0 self.spins += 1 return def bootstrap_config(self): """ Do the actual bootstrap config """ self.logger.info("applying bootstrap configuration") # Get a prompt self.wait_write("\r", None) # Configure interface self.wait_write("ifconfig br-lan 10.0.0.15 netmask 255.255.255.0", "#") # Set root password (ssh login prerequisite) self.wait_write("passwd", "#") self.wait_write(self.password, "New password:") self.wait_write(self.password, "Retype password:") # Create vrnetlab user self.wait_write("echo '%s:x:501:501:%s:/home/%s:/bin/ash' >> /etc/passwd" %(self.username, self.username, self.username), "#") self.wait_write("passwd %s" %(self.username)) self.wait_write(self.password, "New password:") self.wait_write(self.password, "Retype password:") # Add user to root group self.wait_write("sed -i '1d' /etc/group", "#") self.wait_write("sed -i '1i root:x:0:%s' /etc/group" % (self.username)) # Create home dir self.wait_write("mkdir -p /home/%s" %(self.username)) self.wait_write("chown %s /home/%s" %(self.username, self.username)) self.logger.info("completed bootstrap configuration") class OpenWRT(vrnetlab.VR): def __init__(self, username, password): super(OpenWRT, self).__init__(username, password) self.vms = [ OpenWRT_vm(username, password) ] if __name__ == '__main__': import 
argparse parser = argparse.ArgumentParser(description='') parser.add_argument('--trace', action='store_true', help='enable trace level logging') parser.add_argument('--username', default='vrnetlab', help='Username') parser.add_argument('--password', default='VR-netlab9', help='Password') args = parser.parse_args() LOG_FORMAT = "%(asctime)s: %(module)-10s %(levelname)-8s %(message)s" logging.basicConfig(format=LOG_FORMAT) logger = logging.getLogger() logger.setLevel(logging.DEBUG) if args.trace: logger.setLevel(1) vr = OpenWRT(args.username, args.password) vr.start() 07070100000034000081ED00000000000000000000000164D7C43700000724000000000000000000000000000000000000003300000000vrnetlab-git1691862071.9187175/openwrt/download.py#!/usr/bin/env python3 import os import re import requests from lxml import html def get_hrefs(url): 'Fetch, parse, strip and return [href,href,..]' res = requests.get(url) if not res.status_code == 200: return tree = html.fromstring(res.content) anchors = tree.xpath('//a[@href]') refs = list(map(lambda a: a.get('href').strip('/'), anchors)) return refs def get_file(url, save_dest): 'Fetch, write and return Content-Length' with requests.get(url, stream=True) as src: with open(save_dest, 'wb') as dest: dest.write(src.content) dest.close() src.close() return src.headers.get('Content-Length') def get_latest(releases): 'Find the latest NN out of many NN.nn.nn' release_matrix = {} for rel in releases: if not re.match('^\d{2}\.\d{2}\.\d+', rel): continue major = rel.split('.')[0] release_matrix.setdefault(major, '') release_matrix[major] = max(release_matrix[major], rel) return list(release_matrix.values()) def main(): base_url = "https://downloads.openwrt.org/releases" stable_releases = get_hrefs(base_url) latest_releases = get_latest(stable_releases) for release in latest_releases: base_x86_64= f'{base_url}/{release}/targets/x86/64' for filename in get_hrefs(base_x86_64): # ignore if not ext4 fs if not re.match('^openwrt-.*-combined-ext4.img.gz', 
filename) and \ not re.match('^openwrt-.*-ext4-combined.img.gz', filename): continue remote_file = f'{base_x86_64}/{filename}' local_file = os.path.basename(remote_file) size = get_file(remote_file, local_file) print(f'Downloaded {local_file} ({size} bytes)') main() 07070100000035000041ED00000000000000000000000364D7C43700000000000000000000000000000000000000000000002800000000vrnetlab-git1691862071.9187175/routeros07070100000036000081A400000000000000000000000164D7C43700000101000000000000000000000000000000000000003100000000vrnetlab-git1691862071.9187175/routeros/MakefileVENDOR=Mikrotik NAME=RouterOS IMAGE_FORMAT=vmdk IMAGE_GLOB=*.vmdk # match versions like: # chr-6.39.2.vmdk VERSION=$(shell echo $(IMAGE) | sed -n 's/.*\([0-9]\.[0-9][0-9]\.[0-9]\).*/\1/p') -include ../makefile-sanity.include -include ../makefile.include 07070100000037000081A400000000000000000000000164D7C4370000033F000000000000000000000000000000000000003200000000vrnetlab-git1691862071.9187175/routeros/README.mdvrnetlab / Mikrotik RouterOS (ROS) ================================== This is the vrnetlab docker image for Mikrotik RouterOS (ROS). Building the docker image ------------------------- Download the Cloud Hosted Router VMDK image from https://www.mikrotik.com/download Copy the vmdk image into this folder, then run `make docker-image`. Tested booting and responding to SSH: * chr-6.39.2.vmdk MD5:eb99636e3cdbd1ea79551170c68a9a27 Usage ----- ``` docker run -d --privileged --name my-ros-router vr-ros:6.39.2 ``` System requirements ------------------- CPU: 1 core RAM: <1GB Disk: <1GB FAQ - Frequently or Unfrequently Asked Questions ------------------------------------------------- ##### Q: Has this been extensively tested? A: Nope. It starts and you can connect to it. 
Take it for a spin and provide some feedback :-) 07070100000038000041ED00000000000000000000000264D7C43700000000000000000000000000000000000000000000002F00000000vrnetlab-git1691862071.9187175/routeros/docker07070100000039000081A400000000000000000000000164D7C437000001B7000000000000000000000000000000000000003A00000000vrnetlab-git1691862071.9187175/routeros/docker/DockerfileFROM debian:bullseye MAINTAINER Kristian Larsson <kristian@spritelink.net> ENV DEBIAN_FRONTEND=noninteractive RUN apt-get update -qy \ && apt-get upgrade -qy \ && apt-get install -y \ bridge-utils \ iproute2 \ python3-ipy \ socat \ qemu-kvm \ && rm -rf /var/lib/apt/lists/* ARG IMAGE COPY $IMAGE* / COPY *.py / EXPOSE 22 161/udp 830 5000 10000-10099 HEALTHCHECK CMD ["/healthcheck.py"] ENTRYPOINT ["/launch.py"] 0707010000003A000081ED00000000000000000000000164D7C43700000F92000000000000000000000000000000000000003900000000vrnetlab-git1691862071.9187175/routeros/docker/launch.py#!/usr/bin/env python3 import datetime import logging import os import re import signal import sys import telnetlib import vrnetlab def handle_SIGCHLD(signal, frame): os.waitpid(-1, os.WNOHANG) def handle_SIGTERM(signal, frame): sys.exit(0) signal.signal(signal.SIGINT, handle_SIGTERM) signal.signal(signal.SIGTERM, handle_SIGTERM) signal.signal(signal.SIGCHLD, handle_SIGCHLD) TRACE_LEVEL_NUM = 9 logging.addLevelName(TRACE_LEVEL_NUM, "TRACE") def trace(self, message, *args, **kws): # Yes, logger takes its '*args' as 'args'. if self.isEnabledFor(TRACE_LEVEL_NUM): self._log(TRACE_LEVEL_NUM, message, args, **kws) logging.Logger.trace = trace class ROS_vm(vrnetlab.VM): def __init__(self, username, password): for e in os.listdir("/"): if re.search(".vmdk$", e): disk_image = "/" + e super(ROS_vm, self).__init__(username, password, disk_image=disk_image, ram=256) self.qemu_args.extend(["-boot", "n"]) self.num_nics = 31 def bootstrap_spin(self): """ This function should be called periodically to do work. 
""" if self.spins > 300: # too many spins with no result -> give up self.stop() self.start() return (ridx, match, res) = self.tn.expect([b"MikroTik Login"], 1) if match: # got a match! if ridx == 0: # login self.logger.debug("VM started") # Login self.wait_write("\r", None) # Append +ct to username for the plain-text console version self.wait_write("admin+ct", wait="MikroTik Login: ") self.wait_write("", wait="Password: ") self.wait_write("n", wait="Do you want to see the software license? [Y/n]: ") self.logger.debug("Login completed") # run main config! self.bootstrap_config() # close telnet connection self.tn.close() # startup time? startup_time = datetime.datetime.now() - self.start_time self.logger.info("Startup complete in: %s" % startup_time) # mark as running self.running = True return # no match, if we saw some output from the router it's probably # booting, so let's give it some more time if res != b'': self.logger.trace("OUTPUT: %s" % res.decode()) # reset spins if we saw some output self.spins = 0 self.spins += 1 return def bootstrap_config(self): """ Do the actual bootstrap config """ self.logger.info("applying bootstrap configuration") self.wait_write("/ip address add interface=ether1 address=10.0.0.15 netmask=255.255.255.0", "[admin@MikroTik] > ") self.wait_write("/user add name=%s password=\"%s\" group=full" % (self.username, self.password), "[admin@MikroTik] > ") self.wait_write("\r", "[admin@MikroTik] > ") self.logger.info("completed bootstrap configuration") class ROS(vrnetlab.VR): def __init__(self, username, password): super(ROS, self).__init__(username, password) self.vms = [ ROS_vm(username, password) ] if __name__ == '__main__': import argparse parser = argparse.ArgumentParser(description='') parser.add_argument('--trace', action='store_true', help='enable trace level logging') parser.add_argument('--username', default='vrnetlab', help='Username') parser.add_argument('--password', default='VR-netlab9', help='Password') args = 
parser.parse_args() LOG_FORMAT = "%(asctime)s: %(module)-10s %(levelname)-8s %(message)s" logging.basicConfig(format=LOG_FORMAT) logger = logging.getLogger() logger.setLevel(logging.DEBUG) if args.trace: logger.setLevel(1) vr = ROS(args.username, args.password) vr.start() 0707010000003B000041ED00000000000000000000000364D7C43700000000000000000000000000000000000000000000002400000000vrnetlab-git1691862071.9187175/sros0707010000003C000081A400000000000000000000000164D7C437000001DD000000000000000000000000000000000000002D00000000vrnetlab-git1691862071.9187175/sros/MakefileVENDOR=Nokia NAME=VSR IMAGE_FORMAT=qcow2 IMAGE_GLOB=*.qcow2 # match versions like: # sros-vm-13.0.R7.qcow2 # sros-vm-14.0.R4.qcow2 # sros-vm-14.0.R7.qcow2 # sros-vm-15.0.R1.qcow2 # sros-vm-16.0.B0.qcow2 (pre-GA / beta image for 16, which is named completely differently when coming from Nokia) VERSION=$(shell echo $(IMAGE) | sed -e 's/.\+[^0-9]\([0-9]\+\.[0-9]\+\.[A-Z][0-9]\+\(-[0-9]\+\)\?\)[^0-9].*$$/\1/') -include ../makefile-sanity.include -include ../makefile.include 0707010000003D000081A400000000000000000000000164D7C4370000124E000000000000000000000000000000000000002E00000000vrnetlab-git1691862071.9187175/sros/README.mdvrnetlab / Nokia VSR SROS ========================= This is the vrnetlab docker image for Nokia VSR / SROS. Ask your Nokia representative for the VSR image. Put the sros.qcow2 file in this directory and run `make docker-image` and you should be good to go. The resulting image is called `vr-sros`. You can tag it with something else if you want, like `my-repo.example.com/vr-sros` and then push it to your repo. Please note that you will always need to specify version when starting your router as the "latest" tag is not added to any images since it has no meaning in this context. 
It's been tested to at least boot with:

 * 12.0.R6
 * 13.0.R7
 * 14.0.R4
 * 14.0.R5
 * 16.0.R1
 * 16.0.R2
 * 16.0.R2-1
 * 16.0.R3
 * 16.0.R3-1
 * 16.0.R4
 * 16.0.R4-1

Usage
-----
The container must be run with `--privileged` to start KVM.

```
docker run -d --privileged --name my-sros-router vr-sros:16.0.R4
```

It takes about 90 seconds for the virtual router to start, after which we can
log in over SSH / NETCONF with the specified credentials.

You can specify how many ports the virtual router should have through the
`--num-nics` argument. With 5 or fewer ports the router is started in what is
called "integrated" mode, which means it is a single VM. The router is then
equipped with an m5-1gb-sfp-b MDA. The VSR release notes claim that up to 8
interfaces can be used, but I have never gotten more than 5 to work (even when
using a different MDA).

If more than 5 ports are specified with the `--num-nics` argument, the router
is started in what is known as "distributed" mode, which means multiple VMs
are used. The first VM is the control plane while the remaining ones are
"line cards". Again, the release notes state that 8 ports can be used per VM,
but I have not been able to get link up on more than 6 interfaces per line
card VM. Thus, the number of line card VMs started depends on the number of
ports specified through `--num-nics`: `--num-nics 6` means one line card VM
(and one control plane VM) is started, whereas `--num-nics 15` would yield
three line card VMs (3x6=18 ports). In distributed mode the router simulates
an XRS-20. Each line card is equipped with one cx20-10g-sfp MDA (an XMA,
really). Note that each VM, both control plane and line card, consumes 6GB of
RAM.

The ports follow the pattern X/1/[1..6] where X is the line card slot. For an
integrated VM the slot is always 1, whereas in distributed mode there can be
many line card slots.
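The relationship between `--num-nics`, line card VMs and port names described
above can be sketched in Python. These are hypothetical helpers, not part of
the launch script; they assume the 6-usable-ports-per-line-card limit observed
above:

```
import math

def line_cards_needed(num_nics, ports_per_lc=6):
    """Number of line card VMs needed to expose num_nics ports."""
    return math.ceil(num_nics / ports_per_lc)

def port_names(num_nics, ports_per_lc=6):
    """Port names on the X/1/[1..6] pattern, where X is the line card slot."""
    return ["%d/1/%d" % (i // ports_per_lc + 1, i % ports_per_lc + 1)
            for i in range(num_nics)]
```

For example, `line_cards_needed(15)` gives 3, matching the 3x6=18 ports
example above, and the seventh port lands on slot 2 as `2/1/1`.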
If you want to look at the startup process you can specify `-i -t` to docker
run and you'll get an interactive terminal; do note that docker will terminate
as soon as you close it, though. Use `-d` for long-running routers.

License handling
----------------
You can feed a license file into SROS by putting a text file containing the
license in this directory next to your .qcow2 image. Name the license file the
same as your .qcow2 file but append ".license", e.g. if you have
"sros-14.0.R3.qcow2" you would name the license file
"sros-14.0.R3.qcow2.license".

The license is bound to a specific UUID and usually expires after a given
time. The UUID is the first part of the license file, and the launch script
will automatically extract it and start the VSR with that UUID.

If you have a time-limited license you can put the start time of the license
in the license file simply by appending the date in ISO-8601 format
(YYYY-mm-dd). The license usually has a '# BLA BLA TiMOS-XX.Y.*' comment at
the end to signify what it is for; simply append the date there. The launch
script will extract this date and start the VSR with this date plus one day,
so as to fool the licensing system. I suppose that you shouldn't configure
NTP or similar on your VSR....

FUAQ - Frequently or Unfrequently Asked Questions
-------------------------------------------------
##### Q: I can't run any useful commands, like "configure", what up?
A: Are you perhaps using release 14? Nokia introduced more limitations on the
VSR when run without a license. Apparently it wasn't enough to restart once an
hour and have severe rate-limiting (250pps per interface); they also limited
the commands you can run, including "configure", which makes the VSR with
SROS 14 and later completely useless without a license.

##### Q: How many interfaces are available?
A: Many! You can specify the number of ports you want with the `--num-nics`
argument.
If you specify more than 5 the router will be started in "distributed" mode which means multiple line cards (VMs) are used. ##### Q: Why 6GB of RAM? It says only 4GB is required. A: SROS 16 seems to require 6GB and we don't build with different amount of CPU/RAM per versions so that's why every version gets the same. 0707010000003E000041ED00000000000000000000000264D7C43700000000000000000000000000000000000000000000002B00000000vrnetlab-git1691862071.9187175/sros/docker0707010000003F000081A400000000000000000000000164D7C437000001B7000000000000000000000000000000000000003600000000vrnetlab-git1691862071.9187175/sros/docker/DockerfileFROM debian:bullseye MAINTAINER Kristian Larsson <kristian@spritelink.net> ENV DEBIAN_FRONTEND=noninteractive RUN apt-get update -qy \ && apt-get upgrade -qy \ && apt-get install -y \ bridge-utils \ iproute2 \ python3-ipy \ socat \ qemu-kvm \ && rm -rf /var/lib/apt/lists/* ARG IMAGE COPY $IMAGE* / COPY *.py / EXPOSE 22 161/udp 830 5000 10000-10099 HEALTHCHECK CMD ["/healthcheck.py"] ENTRYPOINT ["/launch.py"] 07070100000040000081ED00000000000000000000000164D7C43700000153000000000000000000000000000000000000003A00000000vrnetlab-git1691862071.9187175/sros/docker/healthcheck.py#!/usr/bin/env python3 import sys try: health_file = open("/health", "r") health = health_file.read() health_file.close() except FileNotFoundError: print("health status file not found") sys.exit(2) exit_status, message = health.strip().split(" ", 1) if message != '': print(message) sys.exit(int(exit_status)) 07070100000041000081ED00000000000000000000000164D7C43700003C4A000000000000000000000000000000000000003500000000vrnetlab-git1691862071.9187175/sros/docker/launch.py#!/usr/bin/env python3 import datetime import logging import math import os import re import signal import sys import vrnetlab def handle_SIGCHLD(signal, frame): os.waitpid(-1, os.WNOHANG) def handle_SIGTERM(signal, frame): sys.exit(0) signal.signal(signal.SIGINT, handle_SIGTERM) signal.signal(signal.SIGTERM, 
handle_SIGTERM) signal.signal(signal.SIGCHLD, handle_SIGCHLD) TRACE_LEVEL_NUM = 9 logging.addLevelName(TRACE_LEVEL_NUM, "TRACE") def trace(self, message, *args, **kws): # Yes, logger takes its '*args' as 'args'. if self.isEnabledFor(TRACE_LEVEL_NUM): self._log(TRACE_LEVEL_NUM, message, args, **kws) logging.Logger.trace = trace def mangle_uuid(uuid): """ Mangle the UUID to fix endianness mismatch on first part """ parts = uuid.split("-") new_parts = [ uuid_rev_part(parts[0]), uuid_rev_part(parts[1]), uuid_rev_part(parts[2]), parts[3], parts[4] ] return '-'.join(new_parts) def uuid_rev_part(part): """ Reverse part of a UUID """ res = "" for i in reversed(range(0, len(part), 2)): res += part[i] res += part[i+1] return res # Add gNMI ports vrnetlab.HOST_FWDS.append(('tcp', 9339, 57400)) vrnetlab.HOST_FWDS.append(('tcp', 57400, 57400)) class SROS_vm(vrnetlab.VM): def __init__(self, username, password, num=0): super(SROS_vm, self).__init__(username, password, disk_image = "/sros.qcow2", num=num, ram=6144) self.uuid = "00000000-0000-0000-0000-000000000000" self.read_license() def bootstrap_spin(self): """ This function should be called periodically to do work. """ if self.spins > 60: # too many spins with no result, probably means SROS hasn't started # successfully, so we restart it self.logger.warning("no output from serial console, restarting VM") self.stop() self.start() self.spins = 0 return (ridx, match, res) = self.tn.expect([b"Login:", b"^[^ ]+#"], 1) if match: # got a match! if ridx == 0: # matched login prompt, so should login self.logger.debug("matched login prompt") self.wait_write("admin", wait=None) self.wait_write("admin", wait="Password:") # run main config! 
self.bootstrap_config() # close telnet connection self.tn.close() # calc startup time startup_time = datetime.datetime.now() - self.start_time self.logger.info("Startup complete in: %s" % startup_time) self.running = True return # no match, if we saw some output from the router it's probably # booting, so let's give it some more time if res != b'': self.logger.trace("OUTPUT: %s" % res.decode()) # reset spins if we saw some output self.spins = 0 self.spins += 1 return def read_license(self): """ Read the license file, if it exists, and extract the UUID and start time of the license """ if not os.path.isfile("/tftpboot/license.txt"): self.logger.info("No license file found") return lic_file = open("/tftpboot/license.txt", "r") license = "" for line in lic_file.readlines(): # ignore comments in license file if line.startswith('#'): continue license += line lic_file.close() try: uuid_input = license.split(" ")[0] self.uuid = mangle_uuid(uuid_input) self.uuid = uuid_input m = re.search("([0-9]{4}-[0-9]{2}-)([0-9]{2})", license) if m: self.fake_start_date = "%s%02d" % (m.group(1), int(m.group(2))+1) except: raise ValueError("Unable to parse license file") self.logger.info("License file found for UUID %s with start date %s" % (self.uuid, self.fake_start_date)) class SROS_integrated(SROS_vm): """ Integrated VSR-SIM """ def __init__(self, username, password, mode): super(SROS_integrated, self).__init__(username, password) self.mode = mode self.num_nics = 5 self.smbios = ["type=1,product=TIMOS:address=10.0.0.15/24@active license-file=tftp://10.0.0.2/license.txt slot=A chassis=SR-c12 card=cfm-xp-b mda/1=m20-1gb-xp-sfp"] def gen_mgmt(self): """ Generate mgmt interface(s) We override the default function since we want a fake NIC in there """ # call parent function to generate first mgmt interface (e1000) res = super(SROS_integrated, self).gen_mgmt() # add virtio NIC for internal control plane interface to vFPC res.append("-device") res.append("e1000,netdev=dummy0,mac=%s" % 
vrnetlab.gen_mac(1)) res.append("-netdev") res.append("tap,ifname=dummy0,id=dummy0,script=no,downscript=no") return res def bootstrap_config(self): """ Do the actual bootstrap config """ if self.username and self.password: self.wait_write("configure system security user \"%s\" password %s" % (self.username, self.password)) self.wait_write("configure system security user \"%s\" access console netconf grpc" % (self.username)) self.wait_write("configure system security user \"%s\" console member \"administrative\" \"default\"" % (self.username)) self.wait_write("configure system netconf no shutdown") self.wait_write("configure system grpc allow-unsecure-connection") self.wait_write("configure system grpc no shutdown") self.wait_write("configure system security profile \"administrative\" netconf base-op-authorization lock") self.wait_write("configure system login-control ssh inbound-max-sessions 30") self.wait_write("configure card 1 mda 1 shutdown") self.wait_write("configure card 1 mda 1 no mda-type") self.wait_write("configure card 1 shutdown") self.wait_write("configure card 1 no card-type") self.wait_write("configure card 1 card-type iom-xp-b") self.wait_write("configure card 1 mcm 1 mcm-type mcm-xp") self.wait_write("configure card 1 mda 1 mda-type m20-1gb-xp-sfp") self.wait_write("configure card 1 no shutdown") if self.mode != 'cli': self.wait_write("configure system management-interface yang-modules no nokia-modules") self.wait_write("configure system management-interface yang-modules nokia-combined-modules") self.wait_write("configure system management-interface yang-modules no base-r13-modules") self.wait_write("configure system management-interface configuration-mode {}".format(self.mode)) self.wait_write("admin save") self.wait_write("logout") class SROS_cp(SROS_vm): """ Control plane for distributed VSR-SIM """ def __init__(self, username, password, mode, major_release, num_lc=1): super(SROS_cp, self).__init__(username, password) self.num_lc = num_lc 
self.mode = mode self.num_nics = 0 if major_release >= 19: self.logger.info("SROS release 19 or higher, use card xcm-x20 instead of cpm-x20") self.smbios = ["type=1,product=TIMOS:address=10.0.0.15/24@active license-file=tftp://10.0.0.2/license.txt chassis=XRS-20 chassis-topology=XRS-40 slot=A sfm=sfm-x20-b card=xcm-x20"] else: self.smbios = ["type=1,product=TIMOS:address=10.0.0.15/24@active license-file=tftp://10.0.0.2/license.txt chassis=XRS-20 chassis-topology=XRS-40 slot=A sfm=sfm-x20-b card=cpm-x20"] def start(self): # use parent class start() function super(SROS_cp, self).start() # add interface to internal control plane bridge vrnetlab.run_command(["brctl", "addif", "int_cp", "vcp-int"]) vrnetlab.run_command(["ip", "link", "set", "vcp-int", "up"]) vrnetlab.run_command(["ip", "link", "set", "dev", "vcp-int", "mtu", "10000"]) def gen_mgmt(self): """ Generate mgmt interface(s) We override the default function since we want a NIC to the vFPC """ # call parent function to generate first mgmt interface (e1000) res = super(SROS_cp, self).gen_mgmt() # add virtio NIC for internal control plane interface to vFPC res.append("-device") res.append("e1000,netdev=vcp-int,mac=%s" % vrnetlab.gen_mac(1)) res.append("-netdev") res.append("tap,ifname=vcp-int,id=vcp-int,script=no,downscript=no") return res def bootstrap_config(self): """ Do the actual bootstrap config """ if self.username and self.password: self.wait_write("configure system security user \"%s\" password %s" % (self.username, self.password)) self.wait_write("configure system security user \"%s\" access console netconf grpc" % (self.username)) self.wait_write("configure system security user \"%s\" console member \"administrative\" \"default\"" % (self.username)) self.wait_write("configure system netconf no shutdown") self.wait_write("configure system grpc allow-unsecure-connection") self.wait_write("configure system grpc no shutdown") self.wait_write("configure system security profile \"administrative\" netconf 
base-op-authorization lock") self.wait_write("configure system login-control ssh inbound-max-sessions 30") # configure SFMs for i in range(1, 17): self.wait_write("configure sfm {} sfm-type sfm-x20-b".format(i)) # configure line card & MDAs for i in range(1, self.num_lc+1): self.wait_write("configure card {} card-type xcm-x20".format(i)) self.wait_write("configure card {} mda 1 mda-type cx20-10g-sfp".format(i)) if self.mode != 'cli': self.wait_write("configure system management-interface yang-modules no nokia-modules") self.wait_write("configure system management-interface yang-modules nokia-combined-modules") self.wait_write("configure system management-interface yang-modules no base-r13-modules") self.wait_write("configure system management-interface configuration-mode {}".format(self.mode)) self.wait_write("admin save") self.wait_write("logout") class SROS_lc(SROS_vm): """ Line card for distributed VSR-SIM """ def __init__(self, slot=1): super(SROS_lc, self).__init__(None, None, num=slot) self.slot = slot self.num_nics = 6 self.smbios = ["type=1,product=TIMOS:chassis=XRS-20 chassis-topology=XRS-40 slot={} sfm=sfm-x20-b card=xcm-x20 mda/1=cx20-10g-sfp".format(slot)] def start(self): # use parent class start() function super(SROS_lc, self).start() # add interface to internal control plane bridge vrnetlab.run_command(["brctl", "addif", "int_cp", "vfpc{}-int".format(self.slot)]) vrnetlab.run_command(["ip", "link", "set", "vfpc{}-int".format(self.slot), "up"]) vrnetlab.run_command(["ip", "link", "set", "dev", "vfpc{}-int".format(self.slot), "mtu", "10000"]) def gen_mgmt(self): """ Generate mgmt interface """ res = [] # mgmt interface res.extend(["-device", "e1000,netdev=mgmt,mac=%s" % vrnetlab.gen_mac(0)]) res.extend(["-netdev", "user,id=mgmt,net=10.0.0.0/24"]) # internal control plane interface to vFPC res.extend(["-device", "e1000,netdev=vfpc-int,mac=%s" % vrnetlab.gen_mac(0)]) res.extend(["-netdev", 
"tap,ifname=vfpc{}-int,id=vfpc-int,script=no,downscript=no".format(self.slot)]) return res def gen_nics(self): """ Generate qemu args for the normal traffic carrying interface(s) """ res = [] # TODO: should this offset business be put in the common vrnetlab? offset = 6 * (self.slot-1) for j in range(0, self.num_nics): i = offset + j + 1 res.append("-device") res.append(self.nic_type + ",netdev=p%(i)02d,mac=%(mac)s" % { 'i': i, 'mac': vrnetlab.gen_mac(i) }) res.append("-netdev") res.append("socket,id=p%(i)02d,listen=:100%(i)02d" % { 'i': i }) return res def bootstrap_spin(self): """ We have nothing to do for VSR-SIM line cards """ self.running = True self.tn.close() return class SROS(vrnetlab.VR): def __init__(self, username, password, num_nics, mode): super(SROS, self).__init__(username, password) major_release = 0 # move files into place for e in os.listdir("/"): match = re.match(r'[^0-9]+([0-9]+)\S+\.qcow2$', e) if match: major_release = int(match.group(1)) if re.search("\.qcow2$", e): os.rename("/" + e, "/sros.qcow2") if re.search("\.license$", e): os.rename("/" + e, "/tftpboot/license.txt") self.license = False if os.path.isfile("/tftpboot/license.txt"): self.logger.info("License found") self.license = True self.logger.info("Number of NICS: " + str(num_nics)) self.logger.info("Mode: " + str(mode)) # if we have more than 5 NICs or version is 19 or higher we use distributed VSR-SIM if num_nics > 5 or major_release >= 19: if not self.license: self.logger.error("More than 5 NICs require distributed VSR which requires a license but no license is found") sys.exit(1) num_lc = math.ceil(num_nics / 6) self.logger.info("Number of linecards: " + str(num_lc)) self.vms = [ SROS_cp(username, password, mode, major_release, num_lc=num_lc) ] for i in range(1, num_lc+1): self.vms.append(SROS_lc(i)) else: # 5 ports or less means integrated VSR-SIM self.vms = [ SROS_integrated(username, password, mode) ] # set up bridge for connecting CP with LCs vrnetlab.run_command(["brctl", 
"addbr", "int_cp"]) vrnetlab.run_command(["ip", "link", "set", "int_cp", "up"]) if __name__ == '__main__': import argparse parser = argparse.ArgumentParser(description='') parser.add_argument('--trace', action='store_true', help='enable trace level logging') parser.add_argument('--username', default='vrnetlab', help='Username') parser.add_argument('--password', default='VR-netlab9', help='Password') parser.add_argument('--num-nics', default=5, help='Number of NICs') parser.add_argument('--mode', choices=['cli', 'mixed', 'model-driven'], help='configuration mode of the system', default='cli') args = parser.parse_args() LOG_FORMAT = "%(asctime)s: %(module)-10s %(levelname)-8s %(message)s" logging.basicConfig(format=LOG_FORMAT) logger = logging.getLogger() logger.setLevel(logging.DEBUG) if args.trace: logger.setLevel(1) ia = SROS(args.username, args.password, num_nics=int(args.num_nics), mode=args.mode) ia.start() 07070100000042000041ED00000000000000000000000264D7C43700000000000000000000000000000000000000000000002400000000vrnetlab-git1691862071.9187175/test07070100000043000081ED00000000000000000000000164D7C43700000557000000000000000000000000000000000000002F00000000vrnetlab-git1691862071.9187175/test/test-image#!/bin/bash timeout=1200 pull=false while getopts "t:hp" opt; do case $opt in t) timeout=$OPTARG ;; p) pull=true ;; h) echo "Usage:" echo "$0 [-t timeout] image-name container-name [options for container]" echo "\t\t timeout: wait for timeout seconds, default 900" exit 0 ;; esac done shift $((OPTIND-1)) image=$1 name=$2 shift 2 #echo $@ if [ "${pull}" == "true" ]; then docker pull $image; fi # clean up any old instances docker rm -f $name > /dev/null 2>&1 set -e docker run -d --privileged --name $name $image --trace $@ SECONDS=0 last_uptime=0 echo "Waiting for $name to become healthy" set +e while [ $SECONDS -lt $timeout -a "$health" != "healthy" -a "$status" != "exited" ] do sleep 2 echo -n "." 
health=$(docker inspect --format '{{.State.Health.Status}}' $name) if [ $? -ne 0 ]; then exit 1; fi if [ $(( SECONDS - last_uptime )) -ge 120 ] then echo "$name is $health after $SECONDS seconds" last_uptime=$SECONDS fi status=$(docker inspect --format '{{.State.Status}}' $name) done echo "" if [ "$health" = "healthy" ] then echo -e "\e[32m$name became healthy in $SECONDS seconds\e[0m" docker stop $name else echo -e "\e[31m$name failed to become healthy after $SECONDS seconds\e[0m" # leave the container running for local troubleshooting false fi 07070100000044000041ED00000000000000000000000264D7C43700000000000000000000000000000000000000000000003000000000vrnetlab-git1691862071.9187175/topology-machine07070100000045000081A400000000000000000000000164D7C437000002B5000000000000000000000000000000000000003B00000000vrnetlab-git1691862071.9187175/topology-machine/DockerfileFROM debian:bullseye MAINTAINER Kristian Larsson <kristian@spritelink.net> ENV DEBIAN_FRONTEND=noninteractive RUN apt-get update -qy \ && apt-get upgrade -qy \ && apt-get install -y \ python3-jinja2 \ python3-yaml \ apt-transport-https \ ca-certificates \ curl \ gnupg2 \ software-properties-common \ && curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add - \ && add-apt-repository \ "deb [arch=amd64] https://download.docker.com/linux/debian \ $(lsb_release -cs) \ stable" \ && apt-get update -qy \ && apt-get -y install docker-ce \ && rm -rf /var/lib/apt/lists/* ADD topomachine / ADD *example* / ENTRYPOINT ["/topomachine"] 07070100000046000081A400000000000000000000000164D7C437000000D2000000000000000000000000000000000000003900000000vrnetlab-git1691862071.9187175/topology-machine/Makefile-include ../makefile-sanity.include all: docker build --build-arg http_proxy=$(http_proxy) --build-arg https_proxy=$(https_proxy) -t $(REGISTRY)topomachine . 
docker-push: docker push $(REGISTRY)topomachine 07070100000047000081A400000000000000000000000164D7C437000023D8000000000000000000000000000000000000003A00000000vrnetlab-git1691862071.9187175/topology-machine/README.mdvrnetlab topology machine ========================= The topology machine will help you manage a topology of virtual routers. In particular, there are two activities related to building a topology that can be really tedious and topology machine is meant to help you with these: * building a full-mesh * assigning interfaces to point-to-point links between routers Full-meshes defined in the configuration file will be expanded into point-to-point links between all the member routers. Each point-to-point link needs to have an interface assigned on the routers on the respective ends of the link. Once you have more than a handful of links it can become really tedious to sort out what goes where, and any additions or deletions to your topology mean an instant headache... which is where topology machine comes in. You write a high level topology definition which topology machine can convert into a low level topology definition through --build: ``` topo --build hltopo.json > lltopo.json ``` The output is printed to stdout so you can view it, pipe or redirect to a file if you want. You only need to build the low level topology from the high level topology once. All subsequent use of --run, --template or similar uses that one resulting low level topology. Naturally, if you update the high level topology you must rerun --build to update the low level topology. topomachine does not currently use any information from the low level topology (produced during a previous --build operation), which means that removing a link will very likely result in changes to the majority of links in the topology as they will be re-assigned to new interfaces. topology machine is able to run the machines for you, i.e. 
execute docker run for the routers defined in the configuration file and start vr-xcon with the relevant arguments to complete the topology: ``` topo --run lltopo.json ``` which will then start the docker containers based on the computed topology. There's a --dry-run option if you just want to see what commands would be executed. If you want to run multiple topologies at the same time you can specify a prefix for the docker container names using `--prefix`, which prevents collisions if you use the same name for the virtual routers in the different topology configurations. Last but not least, there is a template mode which you can use to produce configuration for your management system, which in turn provisions the routers. Since the provisioned configuration of the virtual routers needs to align with the "physical" topology built by vr-xcon it makes sense to let topology machine assist you in producing this service config. Use `--template` to produce output based on the provided topology information and template: ``` topo --template lltopo.json my-template.template ``` Output is printed to stdout and can be redirected to a file. Jinja2 is used as the templating language. See example.template for how config for a network provisioning system can be produced. It has the notion of a "base-config" service, which applies common configuration to a device, and the "backbone-interface" service, which configures an interface on a router for backbone use. Configuration file format ------------------------- Feed it a config file in JSON format. There are three parts of the configuration file: * routers * p2p * fullmeshes All of which are demonstrated in the accompanying example-hltopo.json file. Configuration section "routers" ------------------------------- The routers section is a declaration of the routers in your topology. You need to fill in the type and version, which should match up with the vrnetlab routers you have available, e.g. 
if you have vr-xrv:5.3.3 you fill in type "xrv" and version "5.3.3". It's important for the topology builder to know about the router type as it will later map numeric interface IDs to interface names like GigabitEthernet0/0/0/0 or ge-0/0/0 depending on the router type. ``` { "routers": { "a-pe-router-1": { "type": "xrv", "version": "5.3.3" } } } ``` Any other keys filled in will be transparently passed through topology machine, which can be very useful for adding extra information for use with the `--template` option. Configuration section "p2p" --------------------------- "p2p" is the second section in the config file and you can use this to define point-to-point links. Each entry is keyed by the left side of the link followed by an array of the routers to add a link to. NOTE: The ends of each link are referred to as "left" and "right". There's no real importance in the naming - we just needed to call each end something. For example: ``` { "p2p": { "foo": [ "a", "b", "c" ] } } ``` The above config will generate three links: * foo <-> a * foo <-> b * foo <-> c It's also possible to add multiple links to the same router simply by adding a router twice, which generates two parallel links: ``` { "p2p": { "foo": [ "a", "a" ] } } ``` * foo <-> a * foo <-> a Configuration section "fullmeshes" ---------------------------------- Last but not least we have the fullmeshes section which helps you build one or more full-meshes. Name your full-mesh something and list the members: ``` { "fullmeshes": { "sweden": [ "gothenburg", "stockholm", "malmo" ] } } ``` Note how it's possible to create multiple full-meshes: ``` { "fullmeshes": { "sweden": [ "gothenburg", "stockholm", "malmo" ], "germany": [ "frankfurt", "berlin", "hamburg" ] } } ``` Build as a docker container --------------------------- You can build topomachine as a docker container for easy distribution. 
``` $ cd topology-machine $ make ``` And you should now have a docker container named topomachine Use docker container -------------------- The topomachine docker container can be used to generate the low level topology (--build), to generate the docker run commands (--dry-run), to start the topology (--run), and to generate custom output using a jinja2 template (--template). For example: generate the low level topology: ``` $ docker run -t -v $(pwd):/data topomachine --build /data/example-hltopo.json > example-lltopo.json ``` generate the docker run commands: ``` $ docker run -v $(pwd):/data topomachine --run /data/example-lltopo.json --dry-run The following commands would be executed: docker run --privileged -d --name ams-core-1 vr-xrv:5.1.1.54U docker run --privileged -d --name ams-core-2 vr-xrv:5.1.1.54U docker run --privileged -d --name ams-edge-1 vr-xrv:5.1.1.54U docker run --privileged -d --name fra-core-1 vr-vmx:16.1R1.7 docker run --privileged -d --name fra-core-2 vr-vmx:16.1R1.7 docker run --privileged -d --name fra-edge-1 vr-vmx:16.1R1.7 docker run --privileged -d --name kul-core-1 vr-xrv:5.1.1.54U docker run --privileged -d --name par-core-1 vr-sros:13.0.B1-4281 docker run --privileged -d --name par-core-2 vr-sros:13.0.B1-4281 docker run --privileged -d --name par-edge-1 vr-sros:13.0.B1-4281 docker run --privileged -d --name png-edge-1 vr-xrv:5.1.1.54U docker run --privileged -d --name sgp-core-1 vr-xrv:5.1.1.54U docker run --privileged -d --name vr-xcon --link ams-core-1:ams-core-1 --link ams-core-2:ams-core-2 --link ams-edge-1:ams-edge-1 --link fra-core-1:fra-core-1 --link fra-core-2:fra-core-2 --link fra-edge-1:fra-edge-1 --link kul-core-1:kul-core-1 --link par-core-1:par-core-1 --link par-core-2:par-core-2 --link par-edge-1:par-edge-1 --link png-edge-1:png-edge-1 --link sgp-core-1:sgp-core-1 vr-xcon --p2p ams-edge-1/1--ams-core-1/1 ams-edge-1/2--ams-core-2/1 fra-core-2/1--sgp-core-1/1 fra-core-2/2--kul-core-1/1 fra-edge-1/1--fra-core-1/1 
fra-edge-1/2--fra-core-2/3 par-core-1/1--sgp-core-1/2 par-core-1/2--kul-core-1/2 par-edge-1/1--par-core-1/3 par-edge-1/2--par-core-2/1 png-edge-1/1--sgp-core-1/3 png-edge-1/2--kul-core-1/3 kul-core-1/4--sgp-core-1/4 ams-core-1/2--ams-core-2/2 ams-core-1/3--fra-core-1/2 ams-core-1/4--fra-core-2/4 ams-core-1/5--par-core-1/4 ams-core-1/6--par-core-2/2 ams-core-2/3--fra-core-1/3 ams-core-2/4--fra-core-2/5 ams-core-2/5--par-core-1/5 ams-core-2/6--par-core-2/3 fra-core-1/4--fra-core-2/6 fra-core-1/5--par-core-1/6 fra-core-1/6--par-core-2/4 fra-core-2/7--par-core-1/7 fra-core-2/8--par-core-2/5 par-core-1/8--par-core-2/6 ``` start the topology: ``` $ docker run -v $(pwd):/data -v /var/run/docker.sock:/var/run/docker.sock topomachine --run /data/example-lltopo.json ``` generate custom output using a jinja2 template: ``` $ docker run -t -v $(pwd):/data topomachine --template /data/example-lltopo.json /data/example.template infrastructure { base-config fra-edge-1 { numeric-id 101; ipv4-address 10.0.0.101; ipv6-address 2001:db8::101; } base-config fra-core-2 { numeric-id 4; ipv4-address 10.0.0.4; ipv6-address 2001:db8::4; } ... 
backbone-interface par-core-1 2/1/2 { ipv4-address 10.1.1.1/30; ipv6-address 2001:db8::1:1:1/126; remote { neighbor par-core-2; interface 1/1/6; } } backbone-interface par-core-2 1/1/6 { ipv4-address 10.1.1.2/30; ipv6-address 2001:db8::1:1:2/126; remote { neighbor par-core-1; interface 2/1/2; } } } ``` 07070100000048000081A400000000000000000000000164D7C4370000056E000000000000000000000000000000000000004400000000vrnetlab-git1691862071.9187175/topology-machine/example-hltopo.json{ "routers": { "ams-core-1": { "id": 1, "type": "xrv", "version": "5.1.1.54U" }, "ams-core-2": { "id": 2, "type": "xrv", "version": "5.1.1.54U" }, "fra-core-1": { "id": 3, "type": "vmx", "version": "16.1R1.7" }, "fra-core-2": { "id": 4, "type": "vmx", "version": "16.1R1.7" }, "par-core-1": { "id": 5, "type": "sros", "version": "13.0.B1-4281" }, "par-core-2": { "id": 6, "type": "sros", "version": "13.0.B1-4281" }, "sgp-core-1": { "id": 7, "type": "xrv", "version": "5.1.1.54U" }, "kul-core-1": { "id": 8, "type": "xrv", "version": "5.1.1.54U" }, "ams-edge-1": { "id": 100, "type": "xrv", "version": "5.1.1.54U" }, "fra-edge-1": { "id": 101, "type": "vmx", "version": "16.1R1.7" }, "par-edge-1": { "id": 102, "type": "sros", "version": "13.0.B1-4281" }, "png-edge-1": { "id": 103, "type": "xrv", "version": "5.1.1.54U" } }, "p2p": { "fra-core-2": [ "sgp-core-1", "kul-core-1" ], "par-core-1": [ "sgp-core-1", "kul-core-1" ], "ams-edge-1": [ "ams-core-1", "ams-core-2" ], "fra-edge-1": [ "fra-core-1", "fra-core-2" ], "par-edge-1": [ "par-core-1", "par-core-2" ], "png-edge-1": [ "sgp-core-1", "kul-core-1" ] }, "fullmeshes": { "europe": [ "ams-core-1", "ams-core-2", "fra-core-1", "fra-core-2", "par-core-1", "par-core-2" ], "asia": [ "sgp-core-1", "kul-core-1" ] }, "hubs": { "ams-mgmt": [ "ams-core-1", "ams-core-2", "ams-edge-1" ] } } 
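The full-mesh expansion the topology-machine README describes (each pair of members gets exactly one bidirectional link, A<->B but never also B<->A) can be sketched standalone. This is illustrative code, not topomachine's actual implementation; the function name and dict shape merely mirror the low level topology format shown above:

```python
from itertools import combinations

def expand_fullmesh(routers):
    """Expand a full-mesh member list into unique point-to-point links.

    Links are bidirectional, so each pair appears exactly once:
    A<->B is generated but not also B<->A.
    """
    return [
        {"left": {"router": a}, "right": {"router": b}}
        for a, b in combinations(sorted(routers), 2)
    ]

# a 3-member mesh yields n*(n-1)/2 = 3 links
links = expand_fullmesh(["gothenburg", "stockholm", "malmo"])
for link in links:
    print(link["left"]["router"], "<->", link["right"]["router"])
```

A 6-member mesh like "europe" in example-hltopo.json expands the same way into 15 links, which is why interface assignment quickly becomes tedious to do by hand.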
07070100000049000081A400000000000000000000000164D7C43700000383000000000000000000000000000000000000004100000000vrnetlab-git1691862071.9187175/topology-machine/example.templateinfrastructure { {%- for router, val in config.routers.items() %} base-config {{router}} { numeric-id {{val.id}}; ipv4-address 10.0.0.{{val.id}}; ipv6-address 2001:db8::{{val.id}}; } {%- endfor %} {%- for link in config.links %} backbone-interface {{link.left.router}} {{link.left.interface}} { ipv4-address 10.1.{{link.left.router[-1]}}.1/30; ipv6-address 2001:db8::1:{{link.left.router[-1]}}:1/126; remote { neighbor {{link.right.router}}; interface {{link.right.interface}}; } } backbone-interface {{link.right.router}} {{link.right.interface}} { ipv4-address 10.1.{{link.left.router[-1]}}.2/30; ipv6-address 2001:db8::1:{{link.left.router[-1]}}:2/126; remote { neighbor {{link.left.router}}; interface {{link.left.interface}}; } } {%- endfor %} } 0707010000004A000081ED00000000000000000000000164D7C4370000318D000000000000000000000000000000000000003C00000000vrnetlab-git1691862071.9187175/topology-machine/topomachine#!/usr/bin/env python3 import json import os import sys from collections import OrderedDict import jinja2 class VrTopo: """ vrnetlab topo builder """ def __init__(self, config): self.routers = {} self.links = [] self.fullmeshes = {} self.hubs = {} if 'routers' in config: self.routers = config['routers'] # sanity checking - use a YANG model and pyang to validate input? 
for r, val in self.routers.items(): if 'type' not in val: raise ValueError("'type' is not defined for router %s" % r) if val['type'] not in ('dummy', 'xcon', 'bgp', 'xrv', 'xrv9k', 'vmx', 'sros', 'csr', 'nxos', 'nxos9kv', 'vqfx', 'vrp', 'veos', 'openwrt'): raise ValueError("Unknown type %s for router %s" % (val['type'], r)) # expand p2p links links = [] if 'p2p' in config: for router in sorted(config['p2p']): neighbors = config['p2p'][router] for neighbor in neighbors: links.append({ 'left': { 'router': router }, 'right': { 'router': neighbor }}) # expand fullmesh into links if 'fullmeshes' in config: for name in sorted(config['fullmeshes']): val = config['fullmeshes'][name] fmlinks = self.expand_fullmesh(val) links.extend(fmlinks) self.links = self.assign_interfaces(links) self.links_by_nodes = OrderedDict() for l in self.links: for (link, a, b) in ((l, 'left', 'right'), (l, 'right', 'left')): if link[a]['router'] not in self.links_by_nodes: self.links_by_nodes[link[a]['router']] = OrderedDict() spec = {'our_interface': link[a]['interface'], 'their_interface': link[b]['interface'], 'our_numeric': link[a]['numeric'], 'their_numeric': link[b]['numeric']} if link[b]['router'] not in self.links_by_nodes[link[a]['router']]: self.links_by_nodes[link[a]['router']][link[b]['router']] = [] self.links_by_nodes[link[a]['router']][link[b]['router']].append(spec) if 'hubs' in config: for hub in sorted(config['hubs']): self.hubs[hub] = [] for router in config['hubs'][hub]: ep = { 'router': router, 'numeric': self.get_interface(router) } ep['interface'] = self.intf_num_to_name(router, ep['numeric']) self.hubs[hub].append(ep) for router in sorted(self.routers): val = self.routers[router] if 'interfaces' in val: for num_id in val['interfaces']: val['interfaces'][num_id] = self.intf_num_to_name(router, num_id) def expand_fullmesh(self, routers): """ Flatten a full-mesh into a list of links Links are considered bi-directional, so you will only see a link A->B and not a B->A. 
""" pairs = {} for a in sorted(routers): for b in sorted(routers): left = min(a, b) right = max(a, b) if left == right: # don't create link to ourself continue if left not in pairs: pairs[left] = {} pairs[left][right] = 1 links = [] for a in sorted(pairs): for b in sorted(pairs[a]): links.append({'left': { 'router': a }, 'right': { 'router': b }}) return links def assign_interfaces(self, links): """ Assign numeric interfaces to links """ # assign interfaces to links for link in links: left = link['left'] left['numeric'] = self.get_interface(left['router']) left['interface'] = self.intf_num_to_name(left['router'], left['numeric']) right = link['right'] right['numeric'] = self.get_interface(right['router']) right['interface'] = self.intf_num_to_name(right['router'], right['numeric']) return links def intf_num_to_name(self, router, interface): """ Map numeric ID to interface name """ r = self.routers[router] if r['type'] == 'xrv' or r['type'] == 'xrv9k': return "GigabitEthernet0/0/0/%d" % (interface-1) if r['type'] == 'nxos9kv' or r['type'] == 'nxos': return "Ethernet1/%d" % (interface) elif r['type'] == 'vmx': return "ge-0/0/%d" % (interface-1) elif r['type'] == 'sros': return "{}/1/{}".format(1+int((interface-1)/6), 1+(interface-1)%6) elif r['type'] == 'bgp': return "tap%d" % (interface-1) elif r['type'] == 'csr': return "GigabitEthernet%d" % (interface+1) elif r['type'] == 'vqfx': return "xe-0/0/%d" % (interface-1) elif r['type'] == 'vrp': return "GigabitEthernet4/0/%d" % (interface) elif r['type'] == 'veos': return "Ethernet%d" % (interface) elif r['type'] == 'openwrt': return "eth%d" % (interface) return None def get_interface(self, router): """ Return next available interface """ if router not in self.routers: raise ValueError("Router %s is not defined in config" % router) if 'interfaces' not in self.routers[router]: self.routers[router]['interfaces'] = {} intfs = self.routers[router]['interfaces'] i = 1 for intf in range(len(intfs)): if i not in intfs: break i 
+= 1 intfs[i] = None return i def output(self, output_format='json'): """ Output the resulting topology in given format output_format can only be json for now """ output = { 'routers': self.routers, 'links': self.links, 'links_by_nodes': self.links_by_nodes, 'hubs': self.hubs } if output_format == 'json': return json.dumps(output, sort_keys=True, indent=4) else: raise ValueError("Invalid output format") def run_command(cmd, dry_run=False): if dry_run: print(" ".join(cmd)) return import subprocess p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True) return p.communicate() def run_topology(config, dry_run, with_trace=False): if 'routers' not in config: print("No routers in config") sys.exit(1) trace = '' if with_trace: trace = '--trace' docker_networks = list(set([r['docker_network'] for r in config['routers'].values() if 'docker_network' in r])) if len(docker_networks) > 1: print("At most 1 docker network allowed") sys.exit(1) try: docker_network = docker_networks[0] # shell operators like '||' don't work in an exec-style argument list; # a failure for an already existing network is harmless since # run_command doesn't check the exit code run_command(["docker", "network", "create", docker_network], dry_run) except IndexError: docker_network = None docker_registry = "" if os.getenv("DOCKER_REGISTRY"): docker_registry = os.getenv("DOCKER_REGISTRY") + "/" else: docker_registry = 'vrnetlab/' for router in sorted(config['routers']): val = config['routers'][router] if val["type"] == "dummy": continue name = "%s%s" % (args.prefix, router) cmd = ["docker", "run", "--privileged", "-d", "--name", name ] if 'docker_network' in val: # flag and value must be separate argv elements for subprocess cmd.extend(['--network', val['docker_network']]) cmd.extend(['--network-alias', router]) if 'ip' in val: cmd.extend(['--ip', val['ip']]) cmd.append("%svr-%s:%s" % (docker_registry, val["type"], val["version"])) if trace: cmd.append(trace) if 'run_args' in val: cmd.extend(val["run_args"].split()) output,_ = run_command(["docker", "inspect", "--format", "{{.State.Running}}", name]) if not dry_run and output.strip() == "true": output,_ = 
run_command(["docker", "inspect", "--format", "{{.State.Health.Status}}", name]) print("Container already running. Health: %s" % output.strip()) else: run_command(cmd, dry_run) if 'links' in config: name = "%svr-xcon" % args.prefix cmd = ["docker", "run", "--rm", "--privileged", "-d", "--name", name] if docker_network: cmd.extend(["--network", docker_network]) else: for vr in sorted(config['routers']): cmd.extend(["--link", "%s%s:%s%s" % (args.prefix, vr, args.prefix, vr)]) cmd.append(docker_registry + "vr-xcon") cmd.append("--p2p") cmd.extend(["%s%s/%s--%s%s/%s" % (args.prefix, link["left"]["router"], link["left"]["numeric"], args.prefix, link["right"]["router"], link["right"]["numeric"]) for link in config['links']]) output,_ = run_command(["docker", "inspect", "--format", "{{.State.Running}}", name]) if not dry_run and output.strip() == "true": output,_ = run_command(["docker", "inspect", "--format", "{{.State.Health.Status}}", name]) print("Container already running. Health: %s" % output.strip()) else: run_command(cmd, dry_run) if 'hubs' in config: for hub, eps in config['hubs'].items(): name = "{}vr-xcon-hub-{}".format(args.prefix, hub) cmd = ["docker", "run", "--privileged", "-d", "--name", name] if docker_network: cmd.extend(["--network", docker_network]) else: for vr in sorted(config['routers']): cmd.extend(["--link", "%s%s:%s%s" % (args.prefix, vr, args.prefix, vr)]) cmd.append(docker_registry + "vr-xcon") cmd.append("--hub") cmd.extend(["%s%s/%s" % (args.prefix, ep["router"], ep["numeric"]) for ep in eps]) run_command(cmd, dry_run) if __name__ == '__main__': import argparse parser = argparse.ArgumentParser() parser.add_argument("--build", help="Build topology from config") parser.add_argument("--run", help="Run topology") parser.add_argument("--dry-run", action="store_true", default=False, help="Only print what would be performed during --run") parser.add_argument("--prefix", default='', help="docker container name prefix") 
parser.add_argument("--template", nargs=2, help="produce output based on topology information and a template") parser.add_argument("--variable", action='append', help="store variables") parser.add_argument("--with-trace", action='store_true', help="run virtual routers with --trace") args = parser.parse_args() if args.dry_run and not args.run: print("ERROR: --dry-run is only relevant with --run") sys.exit(1) if args.prefix and not args.run: print("ERROR: --prefix is only relevant with --run") sys.exit(1) if args.build: input_file = open(args.build, "r") config = json.loads(input_file.read(), object_pairs_hook=OrderedDict) input_file.close() try: vt = VrTopo(config) except Exception as exc: print("ERROR:", exc) sys.exit(1) print(vt.output()) if args.run: input_file = open(args.run, "r") config = json.loads(input_file.read(), object_pairs_hook=OrderedDict) input_file.close() if args.dry_run: print("The following commands would be executed:") run_topology(config, args.dry_run, args.with_trace) if args.template: input_file = open(args.template[0], "r") config = json.loads(input_file.read(), object_pairs_hook=OrderedDict) input_file.close() import sys vs = {} if args.variable: for var in args.variable: key,value = var.split("=", 2) vs[key] = value env = jinja2.Environment(loader=jinja2.FileSystemLoader(['./'])) template = env.get_template(args.template[1]) print(template.render(config=config, vars=vs)) 0707010000004B000041ED00000000000000000000000364D7C43700000000000000000000000000000000000000000000002400000000vrnetlab-git1691862071.9187175/veos0707010000004C000081A400000000000000000000000164D7C437000001BE000000000000000000000000000000000000002D00000000vrnetlab-git1691862071.9187175/veos/MakefileVENDOR=Arista NAME=vEOS IMAGE_FORMAT=vmdk IMAGE_GLOB=vEOS-lab*.vmdk # match versions like: # vEOS-lab-4.16.6M.vmdk # vEOS-lab-4.16.14M.vmdk # vEOS-lab-4.17.1.1F.vmdk # vEOS-lab-4.17.1F.vmdk # vEOS-lab-4.20.0-EFT2.vmdk VERSION=$(shell echo $(IMAGE) | sed -e 
's/.*-\([0-9]\.\([0-9]\+\.\)\{1,2\}[0-9]\{1,2\}\([A-Z]\|\-EFT[0-9]\)\)\.vmdk$$/\1/') -include ../makefile-sanity.include -include ../makefile.include docker-pre-build: cp *.iso docker/ 0707010000004D000081A400000000000000000000000164D7C4370000072B000000000000000000000000000000000000002E00000000vrnetlab-git1691862071.9187175/veos/README.mdvrnetlab / Arista vEOS ====================== This is the vrnetlab docker image for Arista vEOS. Building the docker image ------------------------- Download vEOS in vmdk format and the Aboot file from https://www.arista.com/en/support/software-download Make sure you grab the Aboot file with 'serial' in the name, like Aboot-veos-serial-8.0.0.iso. You should get the vmdk file starting with vEOS-lab-... do not use the "-combined" image, as it combines a vmdk with the Aboot without serial support. Place both the Aboot iso and the .vmdk file in this directory and run make. The resulting image is called `vr-veos`. You can tag it with something else if you want, like `my-repo.example.com/vr-veos` and then push it to your repo. The tag is the same as the version of the vEOS image, so if you have vEOS-lab-4.16.6M.vmdk your final docker image will be called vr-veos:4.16.6M. Please note that you will always need to specify version when starting your router as the "latest" tag is not added to any images since it has no meaning in this context. It's been tested to boot, respond to SSH and have correct interface mapping with the following images: * vEOS-lab-4.16.6M.vmdk MD5:b3f7b7cee17f2e66bb38b453a4939fef It defaults to 144 NICs (3x48 port line cards). Usage ----- ``` docker run -d --privileged --name my-veos-router vr-veos ``` vEOS can easily take more than 10 minutes to start; be patient. You can use --trace on the docker image to see boot output.
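Since boot takes this long, waiting for the container's health check beats a fixed sleep. Below is a minimal polling sketch; the probe callable and container name are illustrative, with a real probe shelling out to `docker inspect --format '{{.State.Health.Status}}' my-veos-router`:

```python
import time

def wait_healthy(probe, timeout=1200, interval=10,
                 clock=time.time, sleep=time.sleep):
    """Poll probe() until it reports 'healthy' or the timeout expires.

    probe is any zero-argument callable returning the health status
    string, e.g. the stripped output of:
        docker inspect --format '{{.State.Health.Status}}' my-veos-router
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if probe() == "healthy":
            return True
        sleep(interval)
    return False
```

Injecting `clock` and `sleep` keeps the helper testable without a running container.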
System requirements ------------------- CPU: 1 core RAM: 2GB Disk: <1GB FUAQ - Frequently or Unfrequently Asked Questions ------------------------------------------------- ##### Q: Has this been extensively tested? A: Nope. I don't use Arista gear myself (yet) so not much testing at all really. Please do try it out and let me know if it works. 0707010000004E000041ED00000000000000000000000264D7C43700000000000000000000000000000000000000000000002B00000000vrnetlab-git1691862071.9187175/veos/docker0707010000004F000081A400000000000000000000000164D7C437000001CB000000000000000000000000000000000000003600000000vrnetlab-git1691862071.9187175/veos/docker/DockerfileFROM debian:bullseye MAINTAINER Kristian Larsson <kristian@spritelink.net> ENV DEBIAN_FRONTEND=noninteractive RUN apt-get update -qy \ && apt-get upgrade -qy \ && apt-get install -y \ bridge-utils \ iproute2 \ python3-ipy \ socat \ qemu-kvm \ && rm -rf /var/lib/apt/lists/* ARG IMAGE COPY $IMAGE* / COPY *.iso / COPY *.py / EXPOSE 22 161/udp 80 443 830 5000 10000-10099 HEALTHCHECK CMD ["/healthcheck.py"] ENTRYPOINT ["/launch.py"] 07070100000050000081ED00000000000000000000000164D7C43700001077000000000000000000000000000000000000003500000000vrnetlab-git1691862071.9187175/veos/docker/launch.py#!/usr/bin/env python3 import datetime import logging import os import random import re import signal import sys import telnetlib import time import vrnetlab def handle_SIGCHLD(signal, frame): os.waitpid(-1, os.WNOHANG) def handle_SIGTERM(signal, frame): sys.exit(0) signal.signal(signal.SIGINT, handle_SIGTERM) signal.signal(signal.SIGTERM, handle_SIGTERM) signal.signal(signal.SIGCHLD, handle_SIGCHLD) TRACE_LEVEL_NUM = 9 logging.addLevelName(TRACE_LEVEL_NUM, "TRACE") def trace(self, message, *args, **kws): # Yes, logger takes its '*args' as 'args'. 
if self.isEnabledFor(TRACE_LEVEL_NUM): self._log(TRACE_LEVEL_NUM, message, args, **kws) logging.Logger.trace = trace class VEOS_vm(vrnetlab.VM): def __init__(self, username, password): for e in os.listdir("/"): if re.search(".vmdk$", e): disk_image = "/" + e for e in os.listdir("/"): if re.search(".iso$", e): boot_iso = "/" + e super(VEOS_vm, self).__init__(username, password, disk_image=disk_image, ram=2048) self.num_nics = 20 self.qemu_args.extend(["-cdrom", boot_iso, "-boot", "d"]) def bootstrap_spin(self): """ This function should be called periodically to do work. """ if self.spins > 300: # too many spins with no result -> give up self.logger.info("Too many spins with no result, restarting") self.stop() self.start() return (ridx, match, res) = self.tn.expect([b"login:"], 1) if match: # got a match! if ridx == 0: # login self.logger.debug("matched login prompt") self.logger.debug("trying to log in with 'admin'") self.wait_write("admin", wait=None) # run main config! self.bootstrap_config() # close telnet connection self.tn.close() # startup time?
startup_time = datetime.datetime.now() - self.start_time self.logger.info("Startup complete in: %s" % startup_time) # mark as running self.running = True return # no match, if we saw some output from the router it's probably # booting, so let's give it some more time if res != b'': self.logger.trace("OUTPUT: %s" % res.decode()) # reset spins if we saw some output self.spins = 0 self.spins += 1 return def bootstrap_config(self): """ Do the actual bootstrap config """ self.logger.info("applying bootstrap configuration") self.wait_write("", None) self.wait_write("enable", ">") self.wait_write("configure") self.wait_write("username %s secret 0 %s role network-admin" % (self.username, self.password)) # configure mgmt interface self.wait_write("interface Management 1") self.wait_write("ip address 10.0.0.15/24") self.wait_write("exit") self.wait_write("management api http-commands") self.wait_write("protocol unix-socket") self.wait_write("no shutdown") self.wait_write("exit") self.wait_write("exit") self.wait_write("copy running-config startup-config") class VEOS(vrnetlab.VR): def __init__(self, username, password): super(VEOS, self).__init__(username, password) self.vms = [ VEOS_vm(username, password) ] if __name__ == '__main__': import argparse parser = argparse.ArgumentParser(description='') parser.add_argument('--trace', action='store_true', help='enable trace level logging') parser.add_argument('--username', default='vrnetlab', help='Username') parser.add_argument('--password', default='VR-netlab9', help='Password') args = parser.parse_args() LOG_FORMAT = "%(asctime)s: %(module)-10s %(levelname)-8s %(message)s" logging.basicConfig(format=LOG_FORMAT) logger = logging.getLogger() logger.setLevel(logging.DEBUG) if args.trace: logger.setLevel(1) vr = VEOS(args.username, args.password) vr.start() 
07070100000051000041ED00000000000000000000000364D7C43700000000000000000000000000000000000000000000002300000000vrnetlab-git1691862071.9187175/vmx07070100000052000081A400000000000000000000000164D7C43700000263000000000000000000000000000000000000002C00000000vrnetlab-git1691862071.9187175/vmx/MakefileVENDOR=Juniper NAME=vMX IMAGE_FORMAT=tgz IMAGE_GLOB=*.tgz # match versions like: # vmx-14.1R6.4.tgz # vmx-15.1F4.15.tgz # vmx-bundle-15.1F6.9.tgz # vmx-bundle-16.1R1.7.tgz # vmx-bundle-16.1R2.11.tgz # vmx-bundle-17.1R1.8.tgz # vmx-bundle-16.1R4-S2.2.tgz # vmx-bundle-17.1R1-S1.tgz VERSION=$(shell echo $(IMAGE) | sed -e 's/.\+[^0-9]\([0-9][0-9]\.[0-9][A-Z][0-9]\+\(\.[0-9]\+\|-[SD][0-9]\+\(\.[0-9]\+\)\?\)\)[^0-9].*$$/\1/') EXTRA_INSTALL_ARGS=--dual-re -include ../makefile-sanity.include -include ../makefile.include -include ../makefile-install.include docker-build-image-copy: ./vmx-extract.sh $(IMAGE) 07070100000053000081A400000000000000000000000164D7C4370000196A000000000000000000000000000000000000002D00000000vrnetlab-git1691862071.9187175/vmx/README.mdvrnetlab / Juniper vMX ======================== This is the vrnetlab docker image for Juniper vMX. Building the docker image ------------------------- Download vMX from http://www.juniper.net/support/downloads/?p=vmx#sw Put the .tgz file in this directory and run `make` and you should be good to go. The resulting image is called `vr-vmx`. During the build it is normal to receive some error messages about files that do not exist, like: mv: cannot stat '/tmp/vmx*/images/jinstall64-vmx*img': No such file or directory mv: cannot stat '/tmp/vmx*/images/vPFE-lite-*.img': No such file or directory This is because different versions of JUNOS use different filenames. The build of vMX is excruciatingly slow, often taking 10-20 minutes. This is because the first time the VCP (control plane) starts up, it reads a config file that controls whether it should run as a VRR or VCP in a vMX.
Previously this start was performed during docker run but it meant that the VCP would always restart once before the virtual router became available, thus leading to long bootup times (like 5 minutes). This first start of the VCP is now done during the build of the docker image and as docker build can't be run with --privileged it means that qemu is running without hardware KVM acceleration and thus taking a very long time. You will get a lot of trace output during this process so at least you can see what's going on. I think it's worth the longer build time since we build images a few times but run them many. The router can run in standalone mode (single routing engine) or redundant mode (dual routing engines). This is controlled with the runtime configuration option `--dual-re`. At build time, we build the VCP machines for both modes of operation: a standalone RE (files in `/vmx/re`) and dual RE (files in `/vmx/re{0,1}`). At runtime the VCP(s) are started from the correct directories. The bootstrap configuration is provided to the device via a "config-drive". During the install phase, the file `juniper.conf` is used to populate the metadata-usb image that is attached to the device. If you want, you can tag the resulting docker image with something else, like `my-repo.example.com/vr-vmx` and then push it to your repo. The tag is the same as the version of the JUNOS image, so if you have vmx-15.1F4.15.tgz your final docker image will be called vr-vmx:15.1F4.15. Please note that you will always need to specify version when starting your router as the "latest" tag is not added to any images since it has no meaning in this context.
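The version tag above is derived from the tgz filename by the Makefile's sed expression. The same extraction can be sketched in Python; this is a rough illustrative equivalent, not the build's actual code, which uses sed:

```python
import re

# Rough Python equivalent of the vmx Makefile's VERSION sed expression.
VMX_VERSION_RE = re.compile(
    r'[^0-9]([0-9]{2}\.[0-9][A-Z][0-9]+'  # e.g. 15.1F4
    r'(?:\.[0-9]+'                        # trailing build, e.g. .15
    r'|-[SD][0-9]+(?:\.[0-9]+)?))'        # or service release, e.g. -S2.2
    r'[^0-9]')

def vmx_version(filename):
    """Return the JUNOS version embedded in a vMX tgz filename, or None."""
    m = VMX_VERSION_RE.search(filename)
    return m.group(1) if m else None

print(vmx_version("vmx-15.1F4.15.tgz"))           # 15.1F4.15
print(vmx_version("vmx-bundle-16.1R4-S2.2.tgz"))  # 16.1R4-S2.2
```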
It's been tested to boot, respond to SSH and have correct interface mapping with the following images: * vmx-14.1R6.4.tgz MD5:49d37693fc4c5971fe99703149b39776 * vmx-15.1F4.15.tgz MD5:86c28d89d6db5497521ebbb2c7de4472 * vmx-bundle-15.1F6.9.tgz MD5:eb128cffde6ab29fdb27b2f52301c5f9 * vmx-bundle-16.1R1.7.tgz MD5:d96766848731c12c0492e3ae2349b426 * vmx-bundle-16.1R2.11.tgz MD5:24bc389420bf02fb6ede36afa79a0a19 * vmx-bundle-17.2R1.13.tgz MD5:64569e60a2fd671aad565c7bd3745e88 It is NOT working with the following images: * vmx-15.1F3.11.tgz MD5:978fc8c0db05179564d0680040db8196 Usage ----- The container must be `--privileged` to start KVM. ``` docker run -d --privileged --name my-vmx-router vr-vmx ``` It takes a couple of minutes for the virtual router to start and after this we can login over SSH / NETCONF with the specified credentials. If you want to look at the startup process you can specify `-i -t` to docker run and you'll get an interactive terminal, do note that docker will terminate as soon as you close it though. Use `-d` for long running routers. The vFPC has a serial port that is exposed on TCP port 5002. Normally you don't need to interact with it but I imagine it could be useful for some debugging. You can provide additional configuration, to be merged with running configuration on startup. Pass the complete configuration file in the correct format in the `EXTRA_CONFIG` environment variable to the container. Assuming you have the configuration stored in a file `extra-config.conf`, to read it into an environment variable use this: ``` docker run --privileged -it --name vmx15 --env EXTRA_CONFIG="`cat extra-config.conf`" vrnetlab/vr-vmx:15.1F6.9 --trace ``` By default the virtual router runs in standalone mode - a single routing engine. To change the mode to dual RE, pass `--dual-re` to the launch script. The second RE console is exposed on port 5001. The management ports (NETCONF, SSH, SNMP) are exposed on the container IP, offset by 1000. 
``` docker run --privileged -d --name vmx15-dual-re vrnetlab/vr-vmx:15.1F6.9 --trace --dual-re # connect to re0 ssh vrnetlab@$CONTAINER_IP -p 22 # connect to re1 ssh vrnetlab@$CONTAINER_IP -p 1022 ``` System requirements ------------------- CPU: 4 cores - 3 for the vFPC (virtual FPC - the forwarding plane) and 1 for VCP (the RE / control plane). RAM: 6GB - 2 for VCP and 4 for vFPC Disk: ~5GB for JUNOS 15.1, ~7GB for JUNOS 16 (I know, it's huge!!) FUAQ - Frequently or Unfrequently Asked Questions ------------------------------------------------- ##### Q: Why use vMX and not VRR? A: Juniper does indeed publish a VRR image that only requires a single VM to run, which would decrease the required resources. The vMX VCP (RE / control plane image) can also be run in the same mode but would then lack certain forwarding features, notably multicast (which was a dealbreaker for me). vrnetlab doesn't focus on forwarding performance but the aim is to keep feature parity with real routers and if you can't test that your PIM neighbors come up correctly due to lack of multicast then.. well, that's no good. ##### Q: What about licenses? A: Older vMX in evaluation mode are limited to 30 days and a throughput cap of 1Mbps. You can purchase bandwidth licenses to get rid of the time limit and have a higher throughput cap. vMX 15.1F4 introduced additive bandwidth licenses which means bandwidth licenses are added together, before which only the bandwidth license with the highest capacity would be used. In 16.1 the evaluation period of 30 days was removed in favor of a perpetual evaluation license but still with a global throughput cap of 1Mbps. ##### Q: I'm getting this error: qemu-system-x86_64: /build/qemu-XXUWBP/qemu-2.1+dfsg/hw/usb/dev-storage.c:236: usb_msd_send_status: Assertion `s->csw.sig == cpu_to_le32(0x53425355)' failed. A: Get a newer kernel & qemu. I've seen this on Ubuntu 15.10. Upgrading to 16.04 fixed it.
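The dual-RE port layout used above (re1's management services appear on the container IP at the base port plus 1000) can be sketched as a small mapping; the port numbers mirror the `EXPOSE` lines in the Dockerfile:

```python
RE_OFFSET = 1000  # per the README: re1's management ports are offset by 1000

def mgmt_port(base_port, re_instance):
    """Exposed container port for a management service on a given RE (sketch)."""
    return base_port + re_instance * RE_OFFSET

print(mgmt_port(22, 0))   # ssh to re0
print(mgmt_port(22, 1))   # ssh to re1 -> 1022
print(mgmt_port(830, 1))  # netconf to re1 -> 1830
print(mgmt_port(161, 1))  # snmp to re1 -> 1161
```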
07070100000054000041ED00000000000000000000000264D7C43700000000000000000000000000000000000000000000002A00000000vrnetlab-git1691862071.9187175/vmx/docker07070100000055000081A400000000000000000000000164D7C43700000229000000000000000000000000000000000000003500000000vrnetlab-git1691862071.9187175/vmx/docker/DockerfileFROM debian:bullseye MAINTAINER Kristian Larsson <kristian@spritelink.net> ENV DEBIAN_FRONTEND=noninteractive RUN apt-get update -qy \ && apt-get upgrade -qy \ && apt-get install -y \ bridge-utils \ iproute2 \ python3-ipy \ socat \ qemu-kvm \ && rm -rf /var/lib/apt/lists/* ARG VERSION ENV VERSION=${VERSION} COPY vmx /vmx COPY *.py / COPY juniper.conf / EXPOSE 22 161/udp 830 5000 57400 10000-10099 # mgmt and console ports for re1 EXPOSE 1022 1161/udp 1830 5001 HEALTHCHECK CMD ["/healthcheck.py"] ENTRYPOINT ["/launch.py"] 07070100000056000081A400000000000000000000000164D7C437000006AC000000000000000000000000000000000000003700000000vrnetlab-git1691862071.9187175/vmx/docker/juniper.confgroups { re0 { system { host-name re0; } interfaces { fxp0 { unit 0 { family inet { address 10.0.0.15/24; } } } } } re1 { system { host-name re1; } interfaces { fxp0 { unit 0 { family inet { address 10.0.0.16/24; } } } } } } apply-groups [ re0 re1 ]; system { login { user vrnetlab { uid 2000; class super-user; authentication { encrypted-password "$6$CDmzGe/d$g43HmhI3FA.21JCYppnTg1h4q/JO4DOHSICLhhavqBem5zUTgKEcg5m9tBG1Ik6qmfb7L3v.wgj4/DkfgZejO0"; ## VR-netlab9 } } } root-authentication { encrypted-password "$6$vOte4zs5$j1X3fElYvJSt8VPNXx2KzRNrZIkp9CeRX83/W4wQo5K4Tl/MHZeMcvbymEzm9/2ya3S4hU993YDSLY26ROGnW/"; ## VR-netlab9 } services { ssh; extension-service { request-response { grpc { clear-text { port 57400; } } } } netconf { ssh; } } syslog { user * { any emergency; } file messages { any notice; authorization info; } file interactive-commands { interactive-commands any; } } } chassis { fpc 0 { pic 0 { number-of-ports 96; } } } 
07070100000057000081ED00000000000000000000000164D7C43700003D1F000000000000000000000000000000000000003400000000vrnetlab-git1691862071.9187175/vmx/docker/launch.py#!/usr/bin/env python3 import datetime import logging import os import pathlib import re import select import signal import subprocess import sys import time import vrnetlab def handle_SIGCHLD(signal, frame): os.waitpid(-1, os.WNOHANG) def handle_SIGTERM(signal, frame): sys.exit(0) signal.signal(signal.SIGINT, handle_SIGTERM) signal.signal(signal.SIGTERM, handle_SIGTERM) signal.signal(signal.SIGCHLD, handle_SIGCHLD) TRACE_LEVEL_NUM = 9 logging.addLevelName(TRACE_LEVEL_NUM, "TRACE") def trace(self, message, *args, **kws): # Yes, logger takes its '*args' as 'args'. if self.isEnabledFor(TRACE_LEVEL_NUM): self._log(TRACE_LEVEL_NUM, message, args, **kws) logging.Logger.trace = trace #append port for gRPCs vrnetlab.HOST_FWDS.append(('tcp', 57400, 57400)) class VMX_vcp(vrnetlab.VM): def __init__(self, username, password, dual_re=False, re_instance=0, install_mode=False): self.dual_re = dual_re self.num = re_instance self.install_mode = install_mode self.base_vcp_dir = pathlib.Path("/vmx/re{}".format(self.num if self.dual_re else '')) vcp_image = str(self.base_vcp_dir / sorted(self.base_vcp_dir.glob("junos-vmx-*.qcow2"))[0]) super(VMX_vcp, self).__init__(username, password, disk_image=vcp_image, ram=2048, num=re_instance) self.num_nics = 0 self.qemu_args.extend(["-drive", "if=ide,file=" + str(self.base_vcp_dir / "vmxhdd.img")]) if dual_re: product = "VM-vcp_vmx2-161-dualre-{}".format(re_instance) else: product = "VM-vcp_vmx2-161-re-0" self.smbios = ["type=0,vendor=Juniper", "type=1,manufacturer=Juniper,product=%s,version=0.1.0" % product] # insert bootstrap config file into metadata image if self.install_mode: self.insert_bootstrap_config() else: self.insert_extra_config() # add metadata image if it exists if os.path.exists(self._metadata_usb): self.qemu_args.extend( ["-usb", "-drive", 
"id=my_usb_disk,media=disk,format=raw,file={},if=none".format(self._metadata_usb), "-device", "usb-storage,drive=my_usb_disk"]) @property def _metadata_usb(self): return self.base_vcp_dir / "metadata-usb-re{}.img".format(self.num if self.dual_re else '') @property def _vcp_int(self): return "vcp-int{}".format(self.num if self.dual_re else '') def start(self): # use parent class start() function super(VMX_vcp, self).start() # add interface to internal control plane bridge if not self.install_mode: vrnetlab.run_command(["brctl", "addif", "int_cp", self._vcp_int]) vrnetlab.run_command(["ip", "link", "set", self._vcp_int, "up"]) def gen_mgmt(self): """ Generate mgmt interface(s) We override the default function since we want a virtio NIC to the vFPC """ # call parent function to generate first mgmt interface (e1000) res = super(VMX_vcp, self).gen_mgmt() # install mode doesn't need host port forwarding rules. if running in # dual-re mode, replace host port forwarding rules for the backup # routing engine if self.install_mode: res[-1] = re.sub(r',hostfwd.*', '', res[-1]) elif self.dual_re and self.num == 1: res[-1] = re.sub(r',hostfwd.*', self.gen_host_forwards(mgmt_ip='10.0.0.16', offset=3000), res[-1]) if not self.install_mode: # add virtio NIC for internal control plane interface to vFPC res.append("-device") res.append("virtio-net-pci,netdev=%s,mac=%s" % (self._vcp_int, vrnetlab.gen_mac(1))) res.append("-netdev") res.append("tap,ifname=%(_vcp_int)s,id=%(_vcp_int)s,script=no,downscript=no" % { '_vcp_int': self._vcp_int }) return res def bootstrap_spin(self): """ This function should be called periodically to do work. returns False when it has failed and given up, otherwise True """ if self.spins > 300: # too many spins with no result -> restart self.logger.warning("no output from serial console, restarting VCP") self.stop() self.start() self.spins = 0 return (ridx, match, res) = self.tn.expect([b"(?<!Last )login:", b"root@(%|[^:]*:~ #)"], 1) if match: # got a match! 
if ridx == 0: # matched login prompt, so should login self.logger.info("matched login prompt") self.wait_write("root", wait=None) self.wait_write("VR-netlab9", "Password:") if ridx == 1: if self.install_mode: self.logger.info("requesting power-off") self.wait_write("cli", None) self.wait_write("request system power-off", '>') self.wait_write("yes", 'Power Off the system') self.running = True return # run extra config! self.do_extra_config() self.running = True self.tn.close() # calc startup time startup_time = datetime.datetime.now() - self.start_time self.logger.info("Startup complete in: %s" % startup_time) return else: # no match, if we saw some output from the router it's probably # booting, so let's give it some more time if res != b'': self.logger.trace("OUTPUT VCP[%d]: %s" % (self.num, res.decode())) # reset spins if we saw some output self.spins = 0 self.spins += 1 def do_extra_config(self): """ Do the actual bootstrap config """ self.wait_write("mount_msdosfs /dev/da0 /mnt", None) self.wait_write("cli", None) self.wait_write("configure", '>', 10) self.wait_write("load merge /mnt/extra-config.conf") self.wait_write("commit") self.wait_write("exit", "#") def wait_write(self, cmd, wait='#', timeout=None): """ Wait for something and then send command """ if wait: self.logger.trace("Waiting for %s" % wait) while True: (ridx, match, res) = self.tn.expect([wait.encode(), b"Retry connection attempts"], timeout=timeout) if match: if ridx == 0: break if ridx == 1: self.tn.write("yes\r".encode()) self.logger.trace("Read: %s" % res.decode()) self.logger.debug("writing to serial console: %s" % cmd) self.tn.write("{}\r".format(cmd).encode()) def insert_bootstrap_config(self): vrnetlab.run_command(["mount", "-o", "loop", self._metadata_usb, "/mnt"]) vrnetlab.run_command(["mkdir", "/tmp/vmm-config"]) vrnetlab.run_command(["tar", "-xzvf", "/mnt/vmm-config.tgz", "-C", "/tmp/vmm-config"]) vrnetlab.run_command(["mkdir", "/tmp/vmm-config/config"]) vrnetlab.run_command(["cp", 
"/juniper.conf", "/tmp/vmm-config/config/"]) vrnetlab.run_command(["tar", "zcf", "vmm-config.tgz", "-C", "/tmp/vmm-config", "."]) vrnetlab.run_command(["cp", "vmm-config.tgz", "/mnt/vmm-config.tgz"]) vrnetlab.run_command(["umount", "/mnt"]) def insert_extra_config(self): extra_config = os.getenv('EXTRA_CONFIG') if extra_config: self.logger.debug('extra_config = ' + extra_config) vrnetlab.run_command(["mount", "-o", "loop", self._metadata_usb, "/mnt"]) with open('/mnt/extra-config.conf', 'w') as f: f.write(extra_config) vrnetlab.run_command(["umount", "/mnt"]) class VMX_vfpc(vrnetlab.VM): def __init__(self): # "Hardcode" the num to 3 for this VM. This gives us a static mapping # for the console port (5002) independent of how many VCPs are running super(VMX_vfpc, self).__init__(None, None, disk_image = "/vmx/vfpc.img", num=3) self.num_nics = 96 self.nic_type = "virtio-net-pci" self.qemu_args.extend(["-cpu", "SandyBridge", "-M", "pc", "-smp", "3"]) # add metadata image if it exists if os.path.exists("/vmx/metadata-usb-fpc0.img"): self.qemu_args.extend( ["-usb", "-drive", "id=fpc_usb_disk,media=disk,format=raw,file=/vmx/metadata-usb-fpc0.img,if=none", "-device", "usb-storage,drive=fpc_usb_disk"]) def gen_mgmt(self): res = [] # mgmt interface res.extend(["-device", "virtio-net-pci,netdev=mgmt,mac=%s" % vrnetlab.gen_mac(0)]) res.extend(["-netdev", "user,id=mgmt,net=10.0.0.0/24"]) # internal control plane interface to vFPC res.extend(["-device", "virtio-net-pci,netdev=vfpc-int,mac=%s" % vrnetlab.gen_mac(0)]) res.extend(["-netdev", "tap,ifname=vfpc-int,id=vfpc-int,script=no,downscript=no"]) if self.version not in ("14.1.R6.4",): # dummy interface for some vMX versions - not sure why vFPC wants # it but without it we get a misalignment res.extend(["-device", "virtio-net-pci,netdev=dummy,mac=%s" % vrnetlab.gen_mac(0)]) res.extend(["-netdev", "tap,ifname=vfpc-dummy,id=dummy,script=no,downscript=no"]) return res def start(self): # use parent class start() function 
super(VMX_vfpc, self).start() # add interface to internal control plane bridge vrnetlab.run_command(["brctl", "addif", "int_cp", "vfpc-int"]) vrnetlab.run_command(["ip", "link", "set", "vfpc-int", "up"]) def bootstrap_spin(self): (ridx, match, res) = self.tn.expect([b"localhost login", b"qemux86-64 login", b"mounting /dev/sda2 on /mnt failed"], 1) if match: if ridx in (0, 1): # got login - vFPC start succeeded! self.logger.info("vFPC successfully started") self.running = True self.tn.close() if ridx == 2: # vFPC start failed - restart it self.logger.info("vFPC start failed, restarting") self.stop() self.start() if res != b'': pass #self.logger.trace("OUTPUT VFPC: %s" % res.decode()) return class VMX(vrnetlab.VR): """ Juniper vMX router """ def __init__(self, username, password, dual_re=False): self.dual_re = dual_re super(VMX, self).__init__(username, password) if not dual_re: self.vms = [ VMX_vcp(username, password), VMX_vfpc() ] else: self.vms = [ VMX_vcp(username, password, dual_re=True, re_instance=0), VMX_vcp(username, password, dual_re=True, re_instance=1), VMX_vfpc() ] # set up bridge for connecting VCP with vFPC vrnetlab.run_command(["brctl", "addbr", "int_cp"]) vrnetlab.run_command(["ip", "link", "set", "int_cp", "up"]) def start(self): # Set up socats for re1, with a different offset: $CONTAINER_IP:1022 -> 10.0.0.16:3022 if self.dual_re: self.start_socat(src_offset=1000, dst_offset=3000) super(VMX, self).start() class VMX_installer(VMX): """ VMX installer Will start the VMX VCP and then shut it down. Booting the VCP for the first time requires the VCP itself to load some config and then it will restart. Subsequent boots will not require this restart. By running this "install" when building the docker image we can decrease the normal startup time of the vMX. 
""" def __init__(self, username, password, dual_re=False): super().__init__(username, password, dual_re) if not dual_re: self.vms = [ VMX_vcp(username, password, install_mode=True) ] else: # When installing in dual-RE mode, boot a standalone RE and also 2x # dualre. The final image will end up with 3 VMs, but we choose # which are started with the `--dual-re` option. self.vms = [ VMX_vcp(username, password, dual_re=True, re_instance=0, install_mode=True), VMX_vcp(username, password, dual_re=True, re_instance=1, install_mode=True), VMX_vcp(username, password, dual_re=False, re_instance=2, install_mode=True)] def install(self): self.logger.info("Installing VMX (%d VCP)" % len(self.vms)) while not all(vcp.running for vcp in self.vms): for idx, vcp in enumerate(self.vms): if not vcp.running: self.logger.trace("RE[%d]: working" % idx) vcp.work() self.logger.debug("All %d VCPs running" % len(self.vms)) def waitable_pipes(): return [vcp.p.stdout for vcp in self.vms if vcp.running] + [vcp.p.stderr for vcp in self.vms if vcp.running] # wait for system to shut down cleanly while waitable_pipes(): read_pipes, _, _ = select.select(waitable_pipes(), [], []) for read_pipe in read_pipes: for idx, vcp in enumerate(self.vms): if read_pipe in (vcp.p.stdout, vcp.p.stderr): break try: vcp.p.communicate(timeout=1) except subprocess.TimeoutExpired: pass except Exception as exc: # assume it's dead self.logger.info("RE[%d]: Can't communicate with qemu process, assuming VM has shut down properly.\n%s" % (idx, str(exc))) vcp.stop() try: (ridx, match, res) = vcp.tn.expect([b"Powering system off"], 1) if res != b'': self.logger.trace("RE[%d]: OUTPUT VCP: %s" % (idx, res.decode())) except Exception as exc: # assume it's dead self.logger.info("RE[%d]: Can't communicate with qemu process, assuming VM has shut down properly.\n%s" % (idx, str(exc))) vcp.stop() self.logger.info("Installation complete") if __name__ == '__main__': import argparse parser = argparse.ArgumentParser(description='') 
parser.add_argument('--trace', action='store_true', help='enable trace level logging') parser.add_argument('--username', default='vrnetlab', help='Username') parser.add_argument('--password', default='VR-netlab9', help='Password') parser.add_argument('--install', action='store_true', help='Install vMX') parser.add_argument('--dual-re', action='store_true', help='Boot dual Routing Engines') parser.add_argument('--num-nics', type=int, default=96, help='Number of NICs, this parameter is IGNORED, only added to be compatible with other platforms') args = parser.parse_args() LOG_FORMAT = "%(asctime)s: %(module)-10s %(levelname)-8s %(message)s" logging.basicConfig(format=LOG_FORMAT) logger = logging.getLogger() logger.setLevel(logging.DEBUG) if args.trace: logger.setLevel(1) if args.install: vr = VMX_installer(args.username, args.password, args.dual_re) vr.install() else: vr = VMX(args.username, args.password, args.dual_re) vr.start() 07070100000058000081ED00000000000000000000000164D7C43700000531000000000000000000000000000000000000003200000000vrnetlab-git1691862071.9187175/vmx/vmx-extract.sh#!/bin/sh IMAGE=$1 echo "Extracting Juniper vMX tgz" rm -rf tmp docker/vmx mkdir -p tmp docker/vmx tar -zxvf ${IMAGE} -C tmp/ --wildcards vmx*/images/*img --wildcards vmx*/images/*qcow2 # VCP # The 're' directory contains files for a standalone RE mkdir -p docker/vmx/re mv -v tmp/vmx*/images/vmxhdd.img docker/vmx/re mv -v tmp/vmx*/images/junos-vmx*qcow2 docker/vmx/re # 16.1 and newer mv -v tmp/vmx*/images/jinstall64-vmx*img docker/vmx/re mv -v tmp/vmx*/images/metadata-usb-re*.img docker/vmx/re mv -v tmp/vmx*/images/metadata_usb.img docker/vmx/re/metadata-usb-re.img # old style # The 're0' and 're1' directories contain files for a dual-RE deployment for re in $(seq 0 1); do mkdir -v docker/vmx/re${re} cp -v docker/vmx/re/vmxhdd.img docker/vmx/re${re} cp -v docker/vmx/re/metadata-usb-re${re}.img docker/vmx/re${re} ls docker/vmx/re/junos-vmx*qcow2 && ln docker/vmx/re/junos-vmx*qcow2 
docker/vmx/re${re}/ ls docker/vmx/re/jinstall64-vmx*img && ln docker/vmx/re/jinstall64-vmx*img docker/vmx/re${re}/ done # vFPC / vPFE mv -v tmp/vmx*/images/vPFE-lite-*.img docker/vmx/vfpc.img # 14.1 mv -v tmp/vmx*/images/vFPC*.img docker/vmx/vfpc.img # 15.1 and newer mv -v tmp/vmx*/images/metadata-usb-*.img docker/vmx/ mv -v tmp/vmx*/images/metadata_usb.img docker/vmx/metadata-usb-re.img # old style # clean up rm -rfv tmp 07070100000059000041ED00000000000000000000000364D7C43700000000000000000000000000000000000000000000002400000000vrnetlab-git1691862071.9187175/vqfx0707010000005A000081A400000000000000000000000164D7C4370000029C000000000000000000000000000000000000002D00000000vrnetlab-git1691862071.9187175/vqfx/MakefileVENDOR=Juniper NAME=vQFX IMAGE_FORMAT=qcow IMAGE_GLOB=vqfx*re*.qcow2 #IMAGE=vqfx-20.2R1.10-re-qemu.qcow2 # match versions like: # vqfx10k-re-15.1X53-D60.vmdk #VERSION=$(shell echo $(IMAGE) | sed -e 's/^vqfx-\([0-9]\+.[0-9][A-Z]\?[0-9]\?.\?[0-9]\+\?\).*$/\1/') VERSION=$(shell echo $(IMAGE) | cut -d'-' -f2-3) -include ../makefile-sanity.include -include ../makefile.include # vqfx10k-pfe-20160609-2.vmdk # TODO: we should make sure we only copy one PFE image (the latest?), in case there are many docker-pre-build: cp vqfx*-pfe*.qcow2 docker/ # TODO: upstream the rest of the fixes to make it work docker-test-image: @echo "Skipping test for $(VENDOR) $(NAME)" 0707010000005B000081A400000000000000000000000164D7C437000007DB000000000000000000000000000000000000002E00000000vrnetlab-git1691862071.9187175/vqfx/README.mdvrnetlab / Juniper vQFX ======================= This is the vrnetlab docker image for Juniper vQFX. Building the docker image ------------------------- Download vQFX from http://www.juniper.net/support/downloads/?p=vqfxeval#sw Put the two .vmdk files in this directory and run `make` to produce a docker image named `vr-vqfx`. The version tag of the image will be the same as the JUNOS version, e.g. 
vqfx10k-re-15.1X53-D60.vmdk will produce an image called vr-vqfx:15.1X53-D60. Please note that you will always need to specify the version when starting your router, as Docker defaults to the "latest" tag, which is not added to any images since it has no meaning in this context. Tested with: * vqfx10k-re-15.1X53-D60.vmdk MD5:758669e88213fbd7943f5da7f6d7bd59 Usage ----- The container must be `--privileged` to start KVM. ``` docker run -d --privileged --name my-vqfx-router vr-vqfx ``` It takes a couple of minutes for the virtual router to start and after this we can log in over SSH / NETCONF with the specified credentials (defaults to vrnetlab / VR-netlab9). If you want to look at the startup process you can specify `-i -t` to docker run and you'll get an interactive terminal, do note that docker will terminate as soon as you close it though. Use `-d` for long running routers. The vPFE has a serial port that is exposed on TCP port 5001. Normally you don't need to interact with it but I imagine it could be useful for some debugging. The vPFE of the vQFX doesn't send its output to serial by default so you have to catch it very early in the boot to modify the GRUB parameters (press 'e' to do that) and add console=ttyS0 to the "linux..." line. System requirements ------------------- CPU: 2 cores RAM: 4096MB Disk: 1.5GB FUAQ - Frequently or Unfrequently Asked Questions ------------------------------------------------- ##### Q: Do you have a question?
A: Uhhhh 0707010000005C000041ED00000000000000000000000264D7C43700000000000000000000000000000000000000000000002B00000000vrnetlab-git1691862071.9187175/vqfx/docker0707010000005D000081A400000000000000000000000164D7C43700000116000000000000000000000000000000000000003600000000vrnetlab-git1691862071.9187175/vqfx/docker/DockerfileFROM registry.opensuse.org/isv/suseinfra/containers/containerfile/vrnetlab-base:latest MAINTAINER Georg Pfuetzenreuter <georg.pfuetzenreuter@suse.com> ARG IMAGE COPY $IMAGE /opt/images/ COPY *-pfe-* /opt/images/ COPY launch.py /usr/local/bin/ # :-/ for /dev/net/tun USER root 0707010000005E000081ED00000000000000000000000164D7C437000029C4000000000000000000000000000000000000003500000000vrnetlab-git1691862071.9187175/vqfx/docker/launch.py#!/usr/bin/env python3 import datetime import logging import os import signal import sys import re import vrnetlab STARTUP_CONFIG_FILE = "/config/startup-config.cfg" def handle_SIGCHLD(signal, frame): os.waitpid(-1, os.WNOHANG) def handle_SIGTERM(signal, frame): sys.exit(0) signal.signal(signal.SIGINT, handle_SIGTERM) signal.signal(signal.SIGTERM, handle_SIGTERM) signal.signal(signal.SIGCHLD, handle_SIGCHLD) TRACE_LEVEL_NUM = 9 logging.addLevelName(TRACE_LEVEL_NUM, "TRACE") def trace(self, message, *args, **kws): # Yes, logger takes its '*args' as 'args'. 
if self.isEnabledFor(TRACE_LEVEL_NUM): self._log(TRACE_LEVEL_NUM, message, args, **kws) logging.Logger.trace = trace class VQFX_vcp(vrnetlab.VM): def __init__(self, hostname, username, password, conn_mode, version, disk_image): super(VQFX_vcp, self).__init__( username, password, disk_image=disk_image, ram=2048 ) self.num_nics = 12 self.conn_mode = conn_mode self.hostname = hostname self.version = version def start(self): # use parent class start() function super(VQFX_vcp, self).start() # add interface to internal control plane bridge vrnetlab.run_command(["brctl", "addif", "int_cp", "vcp-int"]) vrnetlab.run_command(["ip", "link", "set", "vcp-int", "up"]) def gen_mgmt(self): """Generate mgmt interface(s) We override the default function since we want a virtio NIC to the vFPC """ # call parent function to generate first mgmt interface (e1000) res = super(VQFX_vcp, self).gen_mgmt() # add virtio NIC for internal control plane interface to vFPC res.append("-device") res.append("e1000,netdev=vcp-int,mac=%s" % vrnetlab.gen_mac(1)) res.append("-netdev") res.append("tap,ifname=vcp-int,id=vcp-int,script=no,downscript=no") # dummy for i in range(1): res.append("-device") res.append("e1000,netdev=dummy%d,mac=%s" % (i, vrnetlab.gen_mac(1))) res.append("-netdev") res.append("tap,ifname=dummy%d,id=dummy%d,script=no,downscript=no" % (i, i)) return res def bootstrap_spin(self): """This function should be called periodically to do work. returns False when it has failed and given up, otherwise True """ if self.spins > 300: # too many spins with no result -> restart self.logger.warning("no output from serial console, restarting VCP") self.stop() self.start() self.spins = 0 return # logged_in_prompt prompt for v20+ versions logged_in_prompt = b"root@:RE:0%" #if self.version["major"] < 20: # logged_in_prompt = b"root@vqfx-re:RE:0%" (ridx, match, res) = self.tn.expect([b"login:", logged_in_prompt], 1) if match: # got a match! 
if ridx == 0: # matched login prompt, so should login self.logger.info("matched login prompt") self.wait_write("root", wait=None) # v19 has Juniper password for root login #if self.version["major"] < 20: self.wait_write("Juniper", wait="Password:") if ridx == 1: # run main config! self.bootstrap_config() self.startup_config() self.running = True self.tn.close() # calc startup time startup_time = datetime.datetime.now() - self.start_time self.logger.info("Startup complete in: %s" % startup_time) return else: # no match, if we saw some output from the router it's probably # booting, so let's give it some more time if res != b"": self.logger.trace("OUTPUT VCP: %s" % res.decode()) # reset spins if we saw some output self.spins = 0 self.spins += 1 def bootstrap_config(self): """Do the actual bootstrap config""" self.wait_write("cli", None) self.wait_write("set cli screen-length 0", ">", 10) self.wait_write("set cli screen-width 511", ">", 10) self.wait_write("set cli complete-on-space off", ">", 10) self.wait_write("configure", ">", 10) self.wait_write("set system services ssh") self.wait_write("set system services netconf ssh") self.wait_write("set system services netconf rfc-compliant") self.wait_write("delete system login user vagrant") self.wait_write( "set system login user %s class super-user authentication plain-text-password" % self.username ) self.wait_write(self.password, "New password:") self.wait_write(self.password, "Retype new password:") self.wait_write("set system root-authentication plain-text-password") self.wait_write(self.password, "New password:") self.wait_write(self.password, "Retype new password:") self.wait_write("delete interfaces") self.wait_write("set interfaces em0 unit 0 family inet address 10.0.0.15/24") self.wait_write("set interfaces em1 unit 0 family inet address 169.254.0.2/24") self.wait_write(f"set system host-name {self.hostname}") self.wait_write("commit") self.wait_write("exit") def startup_config(self): """Load additional config 
provided by user.""" if os.path.exists(STARTUP_CONFIG_FILE): self.logger.trace("Config File %s exists" % STARTUP_CONFIG_FILE) with open(STARTUP_CONFIG_FILE) as file: self.logger.trace("Opening Config File %s" % STARTUP_CONFIG_FILE) config_lines = file.readlines() config_lines = [line.rstrip() for line in config_lines] self.logger.trace("Parsed Config File %s" % STARTUP_CONFIG_FILE) self.logger.info("Writing lines from %s" % STARTUP_CONFIG_FILE) # Enter Config Mode on QFX self.wait_write("cli", None) self.wait_write("configure", ">", 10) # Apply lines from file for line in config_lines: self.wait_write(line) # Commit and GTFO self.wait_write("commit") self.wait_write("exit") self.logger.info("Done loading config file %s" % STARTUP_CONFIG_FILE) def wait_write(self, cmd, wait="#", timeout=None): """Wait for something and then send command""" if wait: self.logger.trace("Waiting for %s" % wait) while True: (ridx, match, res) = self.tn.expect( [wait.encode(), b"Retry connection attempts"], timeout=timeout ) if match: if ridx == 0: break if ridx == 1: self.tn.write("yes\r".encode()) self.logger.trace("Read: %s" % res.decode()) self.logger.debug("writing to serial console: %s" % cmd) self.tn.write("{}\r".format(cmd).encode()) class VQFX_vpfe(vrnetlab.VM): def __init__(self, disk_image): super(VQFX_vpfe, self).__init__( None, None, disk_image=disk_image, num=1, ram=2048 ) self.num_nics = 0 def gen_mgmt(self): res = [] # mgmt interface res.extend(["-device", "e1000,netdev=mgmt,mac=%s" % vrnetlab.gen_mac(0)]) res.extend(["-netdev", "user,id=mgmt,net=10.0.0.0/24"]) # internal control plane interface to vFPC res.extend(["-device", "e1000,netdev=vpfe-int,mac=%s" % vrnetlab.gen_mac(0)]) res.extend( ["-netdev", "tap,ifname=vpfe-int,id=vpfe-int,script=no,downscript=no"] ) return res def start(self): # use parent class start() function super(VQFX_vpfe, self).start() # add interface to internal control plane bridge vrnetlab.run_command(["brctl", "addif", "int_cp", "vpfe-int"])
vrnetlab.run_command(["ip", "link", "set", "vpfe-int", "up"]) def gen_nics(self): """ Override the parent's gen_nic function, since dataplane interfaces are not to be created for VCP """ return [] def bootstrap_spin(self): self.running = True self.tn.close() return class VQFX(vrnetlab.VR): """Juniper vQFX router""" def __init__(self, hostname, username, password, conn_mode): super(VQFX, self).__init__(username, password) self.read_version() self.vms = [ VQFX_vcp( hostname, username, password, conn_mode, self.ver, self.vcp_qcow_name ), VQFX_vpfe(self.pfe_qcow_name), ] # set up bridge for connecting VCP with vFPC vrnetlab.run_command(["brctl", "addbr", "int_cp"]) vrnetlab.run_command(["ip", "link", "set", "int_cp", "up"]) def read_version(self): for e in os.listdir("/opt/images"): vcp_match = re.match(r"vqfx-(\d+)\.(\w+)\.(\w+)\S+re\S+\.qcow2", e) if vcp_match: self.ver = { "major": int(vcp_match.group(1)), "minor": vcp_match.group(2), } self.vcp_qcow_name = vcp_match.group(0) # https://regex101.com/r/4ByEhT/1 pfe_match = re.match(r"vqfx-(\d+)\.(\w+)\S+-pfe.+qcow2?", e) if pfe_match: self.pfe_qcow_name = pfe_match.group(0) if __name__ == "__main__": import argparse parser = argparse.ArgumentParser(description="") parser.add_argument( "--trace", action="store_true", help="enable trace level logging" ) parser.add_argument("--hostname", default="vr-vqfx", help="QFX hostname") parser.add_argument("--username", default="vrnetlab", help="Username") parser.add_argument("--password", default="VR-netlab9", help="Password") parser.add_argument( "--connection-mode", default="tc", help="Connection mode to use in the datapath", ) args = parser.parse_args() LOG_FORMAT = "%(asctime)s: %(module)-10s %(levelname)-8s %(message)s" logging.basicConfig(format=LOG_FORMAT) logger = logging.getLogger() logger.setLevel(logging.DEBUG) if args.trace: logger.setLevel(1) vrnetlab.boot_delay() vr = VQFX( args.hostname, args.username, args.password, conn_mode=args.connection_mode ) vr.start() 
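The filename matching in `read_version` above can be exercised on its own; a minimal sketch of the VCP pattern (the sample filename is illustrative, taken from the version format mentioned in the vQFX Makefile):

```python
import re

# Same pattern launch.py's read_version uses to find the RE (VCP) image.
vcp_re = re.compile(r"vqfx-(\d+)\.(\w+)\.(\w+)\S+re\S+\.qcow2")

m = vcp_re.match("vqfx-20.2R1.10-re-qemu.qcow2")
assert m is not None
version = {"major": int(m.group(1)), "minor": m.group(2)}
# version is {"major": 20, "minor": "2R1"}
```

This is what drives the version-gated logic in `bootstrap_spin` (the commented-out `self.version["major"] < 20` checks).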
0707010000005F000041ED00000000000000000000000364D7C43700000000000000000000000000000000000000000000002600000000vrnetlab-git1691862071.9187175/vr-bgp07070100000060000081A400000000000000000000000164D7C4370000029D000000000000000000000000000000000000003100000000vrnetlab-git1691862071.9187175/vr-bgp/DockerfileARG REGISTRY=vrnetlab/ FROM ${REGISTRY}vr-xcon MAINTAINER Kristian Larsson <kristian@spritelink.net> ENV DEBIAN_FRONTEND=noninteractive RUN apt-get update -qy \ && apt-get upgrade -qy \ && apt-get install -y \ iputils-ping \ iputils-tracepath \ git \ golang \ procps \ python \ python-setuptools \ python3-jinja2 \ python3-flask \ tcpdump \ telnet \ wget \ && rm -rf /var/lib/apt/lists/* \ && wget -O exabgp.tar.gz https://github.com/Exa-Networks/exabgp/archive/3.4.18.tar.gz \ && tar zxvf exabgp.tar.gz \ && cd /exabgp* && python setup.py install \ && cd / && rm -rf exabgp* ADD . / ENTRYPOINT ["/vr-bgp.py"] 07070100000061000081A400000000000000000000000164D7C4370000018F000000000000000000000000000000000000002F00000000vrnetlab-git1691862071.9187175/vr-bgp/Makefile-include ../makefile-sanity.include all: docker build --build-arg http_proxy=$(http_proxy) --build-arg https_proxy=$(https_proxy) --build-arg REGISTRY=$(REGISTRY) -t $(REGISTRY)vr-bgp . docker-push: docker push $(REGISTRY)vr-bgp docker-test: @echo "TODO: implement smoke test" docker-test-clean: @echo "TODO: implement smoke test" docker-test-save-logs: @echo "TODO: implement smoke test" 07070100000062000081A400000000000000000000000164D7C43700001504000000000000000000000000000000000000003000000000vrnetlab-git1691862071.9187175/vr-bgp/README.mdvrnetlab BGP speaker ==================== This is vr-bgp, the vrnetlab BGP speaker. It is specifically written as a test helper for a CI environment so that one can easily test BGP route policies. Under the hood we use ExaBGP together with a few Python helper programs to build a dead simple HTTP API. It uses the vrnetlab xcon program to connect to a virtual router port. 
See vr-xcon for more information on how that works under the hood. Naturally it assumes the router to test is a vrnetlab router of some sort. The idea is that your CI runner spins up one or more virtual routers of your choice, starts one or more vr-bgp instances to simulate different BGP relations, then instructs the vr-bgp instances to announce routes and checks the received routes to verify they comply with the routing policy. For example, a service provider network typically has different classes of BGP neighbors, e.g.: * iBGP full-mesh between core routers * iBGP route reflector sessions from core to edge routers * eBGP to peering partners * eBGP to customers * ... One would set up a vr-bgp instance to simulate each of these classes. Tell the "peering partner class vr-bgp" instance to announce 1.2.3.0/24 and you can then look at the vr-bgp instance simulating the iBGP full-mesh to make sure you properly receive this prefix, that it has the correct communities and local-preference, and that MED is stripped / zeroized (if that is what you want!). Since the testing happens over a standard interface (BGP) it is simple to replace the virtual router with another vendor's and thus verify that the routing policies of all your vendors ultimately do the same thing. Configuring the virtual router is outside the scope of vrnetlab - you are supposed to use your normal provisioning system for this. vr-bgp exposes a super simple HTTP API to announce routes and collect received routes. vr-bgp only supports a single BGP neighbor (well, one per AFI - IPv4 / IPv6) at a time, which might seem tedious at first but it also simplifies things a lot as we don't have to key information on individual neighbors. Next-hops are stored as attributes of a prefix, which isn't entirely correct as the next-hop is really part of the NLRI information in BGP updates and not part of the path attributes. However, putting it as an attribute vastly simplifies things.
The primary drawback is that there is no way to tell two prefixes with different next-hops apart. This normally does not happen for vr-bgp since we only have one BGP neighbor per AFI and that neighbor will only announce one next-hop per prefix, but this might make us incompatible with BGP add-path. API --- The vr-bgp API is a very simple RESTful API running by default on port 5000, exposing three endpoints: * `GET http://docker-ip:5000/neighbors`: lists all configured neighbors and connection states * `GET http://docker-ip:5000/received`: lists all received prefixes by address family and their attributes * `POST http://docker-ip:5000/announce`: announces the prefixes specified in the body of the request, with optional attributes ### `GET /neighbors` ```javascript { "192.168.21.2": { "state": "up", "timestamp": "2017-05-31 07:42:06" }, "2001:db8:5::21:2": { "state": "up", "timestamp": "2017-05-31 07:42:06" } } ``` The example shows a vr-bgp speaker configured with two neighbors. Connections to both neighbors are established. ### `GET /received` ```javascript { "ipv4 unicast": { "22.0.0.0/24": { "as-path": [ 2792, 22 ], "community": [ [ 2792, 10300 ], [ 2792, 11276 ] ], "confederation-path": [], "next-hop": "192.168.22.1", "origin": "igp" } }, "ipv6 unicast": { "2001:11::/64": { "as-path": [ 2792, 11 ], "community": [ [ 11, 1234 ] ], "confederation-path": [], "next-hop": "2001:db8:5::22:1", "origin": "igp" } } } ``` The example shows two received prefixes for the IPv4 and IPv6 address families with all attributes. Note that community string `2792:10300` is broken down into a list of integers `[2792, 10300]`. ### `POST /announce` ```javascript {"routes": [ { "prefix": "21.0.0.0/24" }, { "prefix": "21.1.0.0/24", "community": ["2792:10300"]}, { "prefix": "21.2.0.0/24", "as-path": [21, 65000] }, { "prefix": "21.3.0.0/24", "med": 100 } ] } ``` The example shows announcement configuration for four prefixes. By default, all prefixes originate in the local AS (21 in this example).
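As a sketch of driving the announce endpoint from a test harness (the container IP is hypothetical; only the URL and body shape come from the API description above):

```python
import json
import urllib.request

def build_announce_request(host, routes):
    # Build a POST /announce request with the JSON body shape shown above.
    body = json.dumps({"routes": routes}).encode()
    return urllib.request.Request(
        "http://%s:5000/announce" % host,
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_announce_request(
    "172.17.0.2",  # hypothetical container IP
    [
        {"prefix": "21.1.0.0/24", "community": ["2792:10300"]},
        {"prefix": "21.3.0.0/24", "med": 100},
    ],
)
# urllib.request.urlopen(req) would perform the announcement.
```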
Additional attributes exposed through the API are: * `community`: set any number of communities by providing a list of strings `["x:y", "w:z"]` * `as-path`: override the default as-path (local-as) by providing a list of integers `[21, 65000]` * `med`: set multi-exit discriminator (MED) attribute to an integer value Example ------- See the example directory for a full blown example of vr-bgp in action to verify a network's BGP routing policy. 07070100000063000081ED00000000000000000000000164D7C43700000957000000000000000000000000000000000000003000000000vrnetlab-git1691862071.9187175/vr-bgp/bgpapi.py#!/usr/bin/env python3 from flask import Flask, json, request import sys # keep track of what we announce so we can easily withdraw announced_routes = {} # keep track of received routes received_routes = {} app = Flask(__name__) @app.route('/announce', methods=['POST']) def announce(): global announced_routes if request.headers['Content-Type'] != 'application/json': return "Plxz send JSON" try: routes = request.json['routes'] new_routes = {route['prefix']: route for route in routes} except: return "Incorrectly formed query (probably)" # announce new routes to_announce = set(new_routes) for prefix in to_announce: route = new_routes[prefix] command = "announce route %(prefix)s next-hop self" % route if 'community' in route: command += " community [" + " ".join(route['community']) + "]" if 'med' in route: command += " med " + str(route['med']) if 'as-path' in route: command += " as-path [" + " ".join([str(x) for x in route['as-path']]) + "]" sys.stdout.write('%s\n' % command) sys.stdout.flush() # withdraw old routes to_withdraw = set(announced_routes) - set(new_routes) for prefix in to_withdraw: command = "withdraw route %s" % prefix sys.stdout.write('%s\n' % command) sys.stdout.flush() announced_routes = new_routes return 'announced: %d withdrawn: %d currently announcing: %d\n' % (len(to_announce), len(to_withdraw), len(announced_routes)) @app.route('/received', 
methods=['GET']) def received(): import sqlite3 conn = sqlite3.connect('/tmp/bgp.db') c = conn.cursor() c.execute("SELECT afi, prefix, attributes FROM received_routes") res = {} for row in c.fetchall(): if row[0] not in res: res[row[0]] = {} res[row[0]][row[1]] = json.loads(row[2]) return json.dumps(res) @app.route('/neighbors', methods=['GET']) def get_neighbors(): import sqlite3 conn = sqlite3.connect('/tmp/bgp.db') c = conn.cursor() c.execute("SELECT ip, state, ts FROM neighbors") res = {} for row in c.fetchall(): res[row[0]] = { 'state': row[1], 'timestamp': row[2] } return json.dumps(res) if __name__ == '__main__': app.run(host='0.0.0.0',debug=True) 07070100000064000081ED00000000000000000000000164D7C43700001226000000000000000000000000000000000000003000000000vrnetlab-git1691862071.9187175/vr-bgp/bgprec.py#!/usr/bin/env python3 from datetime import datetime import json import sqlite3 import sys # debug log file f = open("/tmp/bgp.log", "a") def log(msg): f.write(msg) f.write("\n") f.flush() conn = sqlite3.connect('/tmp/bgp.db', detect_types=sqlite3.PARSE_DECLTYPES) c = conn.cursor() try: c.execute("SELECT * FROM received_routes") except sqlite3.OperationalError: # create table to store received routes c.execute("CREATE TABLE received_routes (afi string, prefix string, attributes string)") c.execute("CREATE UNIQUE INDEX received_routes__prefix ON received_routes(afi, prefix)") # create table to store neighbor state c.execute("CREATE TABLE neighbors (ip string, state string, ts timestamp)") c.execute("CREATE UNIQUE INDEX neighbors_ip ON neighbors(ip)") def upsert_neighbor_state(ip, state, timestamp): """ Insert or update the state of a neighbor in the database """ c.execute("SELECT * FROM neighbors WHERE ip=?", [ip]) if c.fetchone() is None: log("INSERTING to db neighbor") c.execute("INSERT INTO neighbors (ip, state, ts) VALUES (?, ?, ?)", [ip, state, timestamp]) else: log("UPDATING db neighbor\n") c.execute("UPDATE neighbors SET state = ?, ts = ? 
WHERE ip = ?", [state, timestamp, ip]) conn.commit() def upsert_prefix(afi, prefix, attributes): """ Insert or update a prefix in the database """ c.execute("SELECT * FROM received_routes WHERE afi=? AND prefix=?", [afi, prefix]) if c.fetchone() is None: log("INSERTING to db prefix") c.execute("INSERT INTO received_routes (afi, prefix, attributes) VALUES (?, ?, ?)", [afi, prefix, json.dumps(attributes)]) else: log("UPDATING db prefix") c.execute("UPDATE received_routes SET attributes = ? WHERE afi = ? AND prefix = ?", [json.dumps(attributes), afi, prefix]) conn.commit() def remove_prefix(afi, prefix): """ Remove a prefix from the database """ c.execute("DELETE FROM received_routes WHERE afi=? AND prefix=?", [afi, prefix]) conn.commit() def parse_message(line): # Parse JSON string to dictionary msg = json.loads(line) timestamp = datetime.fromtimestamp(msg['time']) if msg['type'] == 'state': neighbor_ip = msg['neighbor']['ip'] state = msg['neighbor']['state'] upsert_neighbor_state(neighbor_ip, state, timestamp) if msg['type'] == 'update': if 'update' in msg['neighbor']['message']: update = msg['neighbor']['message']['update'] # handle announce if 'announce' in update: for afi, nexthops in update['announce'].items(): if 'null' in nexthops: log("Received EOR for {}".format(afi)) else: for nexthop, prefixes in nexthops.items(): if nexthop.startswith('fe80:'): # ignore IPv6 link local next-hops. BGP sends # both LL next-hop and GUA so we just ignore LL # and parse GUA continue for prefix in prefixes: log("announce {}".format(prefix)) attributes = update['attribute'] # store next-hop, which is NLRI information as # (path) attribute. this is not according to # RFC but gosh does it simplify things. 
attributes['next-hop'] = nexthop upsert_prefix(afi, prefix, attributes) # handle withdraws if 'withdraw' in update: for afi, prefixes in update['withdraw'].items(): for prefix in prefixes: log("Withdraw {}".format(prefix)) remove_prefix(afi, prefix) elif 'eor' in msg['neighbor']['message']: eor = msg['neighbor']['message']['eor'] log("Received EOR for {} {}".format(eor['afi'], eor['safi'])) else: log("Unknown message") raise Exception("Unknown message") blank = 0 while True: line = sys.stdin.readline().strip() # abort if we just see blank lines - prolly means exa died if line == "": blank += 1 # got 99 blank lines and this is one if blank > 99: break continue blank = 0 f.write(line + "\n") f.flush() parse_message(line) 07070100000065000081A400000000000000000000000164D7C43700000570000000000000000000000000000000000000003600000000vrnetlab-git1691862071.9187175/vr-bgp/exabgp.conf.tplgroup test { process bgpapi { run /bgpapi.py; } process bgprec { encoder json; receive { neighbor-changes; parsed; update; } run /bgprec.py; } router-id {{config.ROUTER_ID}}; local-as {{config.LOCAL_AS}}; {%- if config.IPV4_NEIGHBOR %} neighbor {{config.IPV4_NEIGHBOR}} { peer-as {{config.PEER_AS}}; local-address {{config.IPV4_LOCAL_ADDRESS}}; {%- if not config.ALLOW_MIXED_AFI_TRANSPORT %} family { ipv4 unicast; } {%- endif %} {%- if config.MD5 %} md5 "{{config.MD5}}"; {%- endif %} {%- if config.LISTEN %} listen 179; {%- endif %} {%- if config.TTLSECURITY %} ttl-security; {%- endif %} } {%- endif %} {%- if config.IPV6_NEIGHBOR %} neighbor {{config.IPV6_NEIGHBOR}} { peer-as {{config.PEER_AS}}; local-address {{config.IPV6_LOCAL_ADDRESS}}; {%- if not config.ALLOW_MIXED_AFI_TRANSPORT %} family { ipv6 unicast; } {%- endif %} {%- if config.MD5 %} md5 "{{config.MD5}}"; {%- endif %} {%- if config.LISTEN %} listen 179; {%- endif %} {%- if config.TTLSECURITY %} ttl-security; {%- endif %} } {%- endif %} } 
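The JSON messages that exabgp hands to `bgprec.py` on stdin can be illustrated with a minimal parse of a `state` message (the sample values are made up; the key layout follows the parsing code above):

```python
import json
from datetime import datetime

# A fabricated exabgp 'state' message, shaped like what parse_message() reads.
line = json.dumps({
    "time": 1496216526,
    "type": "state",
    "neighbor": {"ip": "192.168.21.2", "state": "up"},
})

msg = json.loads(line)
timestamp = datetime.fromtimestamp(msg["time"])
if msg["type"] == "state":
    # This is where bgprec.py would upsert the neighbor row in /tmp/bgp.db.
    neighbor_ip = msg["neighbor"]["ip"]
    state = msg["neighbor"]["state"]
```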
07070100000066000041ED00000000000000000000000264D7C43700000000000000000000000000000000000000000000002E00000000vrnetlab-git1691862071.9187175/vr-bgp/example07070100000067000081A400000000000000000000000164D7C4370000198D000000000000000000000000000000000000003800000000vrnetlab-git1691862071.9187175/vr-bgp/example/README.mdvr-bgp example ============== This is an example showing how vr-bgp can be used in your CI environment to verify your BGP routing policy. The example makes use of a Juniper vMX router, so make sure you have built the vr-vmx container (we use version 16.1R1.7 but you should be able to use older ones as well). `start.sh` runs the docker commands to start vr-vmx, which we call j1, then starts up six vr-bgp instances which will simulate two customers (bgp-cust1 & bgp-cust2), two peers (bgp-peer1 & bgp-peer2) and two transits (bgp-transit1 & bgp-transit2). vr-xcon is then used to connect it all together. `test.py` is the actual test script. It is based on the standard unittest library in Python but has a couple of different helper functions to glue it together with the vr-bgp speakers. The overall policy is fairly simple: we should announce customers to peers & transits, while peers, transits and customers are announced to customers. Conversely, peers should not be announced to other peers nor to transits. The tests in the test.py script will test exactly this.
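test.py retries each check with an exponential back-off until BGP has converged; that pattern can be sketched as follows, where `fake_received` is a toy stand-in for a `GET /received` call:

```python
import time

def check_with_retry(fetch, prefix, tries=10, delay=0.01, backoff=2):
    # Poll fetch() until the prefix shows up, doubling the delay on each miss.
    for _ in range(tries - 1):
        if prefix in fetch():
            return True
        time.sleep(delay)
        delay *= backoff
    return prefix in fetch()

calls = {"n": 0}
def fake_received():
    # Pretend BGP converges on the third poll.
    calls["n"] += 1
    return {"12.0.0.0/24": {}} if calls["n"] >= 3 else {}
```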
We use the following router-ids: 10.100.0.0/16 router-IDs 10.100.1.0/24 router-IDs for customers 10.100.1.1 cust1 10.100.1.2 cust2 10.100.2.0/24 router-IDs for peers 10.100.2.1 peer1 10.100.2.2 peer2 10.100.3.0/24 router-IDs for transits 10.100.3.1 transit1 10.100.3.2 transit2 And here are the link networks: 10.101.0.0/16 link networks 10.101.1.0/24 link networks for customers 10.101.1.0/30 DUT <-> cust1 10.101.1.1 cust1 10.101.1.2 DUT 10.101.1.4/30 DUT <-> cust2 10.101.1.5 cust2 10.101.1.6 DUT 10.101.2.0/24 link networks for peers 10.101.2.0/30 DUT <-> peer1 10.101.2.1 peer1 10.101.2.2 DUT 10.101.2.4/30 DUT <-> peer2 10.101.2.5 peer2 10.101.2.6 DUT 10.101.3.0/24 link networks for transits 10.101.3.0/30 DUT <-> transit1 10.101.3.1 transit1 10.101.3.2 DUT 10.101.3.4/30 DUT <-> transit2 10.101.3.5 transit2 10.101.3.6 DUT You need to configure the vMX router yourself. An example configuration is included in the file junos-config.txt Start the whole thing by executing the `start.sh` script. If you are not using vMX 16.1R1.7 you need to first edit the script and change the version of vr-vmx used. Wait for the vMX router to start (check the serial console). Once up, apply the configuration and you should be able to see that all BGP sessions become established; ``` root> show bgp summary Groups: 6 Peers: 12 Down peers: 6 Table Tot Paths Act Paths Suppressed History Damp State Pending inet.0 9 7 0 0 0 0 inet6.0 0 0 0 0 0 0 Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped... 
10.101.1.1 65011 25 27 0 8 9:17 2/3/2/0 0/0/0/0 10.101.1.5 65012 23 29 0 8 9:18 1/2/1/0 0/0/0/0 10.101.2.1 65021 22 25 0 8 9:17 1/1/1/0 0/0/0/0 10.101.2.5 65022 22 25 0 9 9:17 1/1/1/0 0/0/0/0 10.101.3.1 65031 22 24 0 9 9:16 1/1/1/0 0/0/0/0 10.101.3.5 65032 22 24 0 8 9:12 1/1/1/0 0/0/0/0 2001:db8::1:1 65011 24 25 0 7 9:12 Establ inet6.0: 0/0/0/0 2001:db8::1:5 65012 25 25 0 7 9:13 Establ inet6.0: 0/0/0/0 2001:db8::2:1 65021 24 25 0 6 9:14 Establ inet6.0: 0/0/0/0 2001:db8::2:5 65022 23 25 0 7 9:17 Establ inet6.0: 0/0/0/0 2001:db8::3:1 65031 24 25 0 6 9:14 Establ inet6.0: 0/0/0/0 2001:db8::3:5 65032 24 25 0 7 9:19 Establ inet6.0: 0/0/0/0 ``` Now run `test.py` and you should get something like this: ``` kll@htpc:~/vrnetlab/vr-bgp/example$ ./test.py test_bgp101 (__main__.BgpTest) bgp-cust1 should see bgp-cust2, bgp-peer1, bgp-peer2, bgp-transit1 and bgp-transit2 ... '12.0.0.0/24' not found in {}, Retrying in 3 seconds... ok test_bgp102 (__main__.BgpTest) bgp-cust2 should see bgp-cust1, bgp-peer1, bgp-peer2, bgp-transit1 and bgp-transit2 ... ok test_bgp103 (__main__.BgpTest) bgp-peer1 should see bgp-cust1, bgp-cust2 ... ok test_bgp104 (__main__.BgpTest) bgp-peer2 should see bgp-cust1, bgp-cust2 ... ok test_bgp105 (__main__.BgpTest) bgp-transit1 should see bgp-cust1, bgp-cust2 ... ok test_bgp106 (__main__.BgpTest) bgp-transit2 should see bgp-cust1, bgp-cust2 ... ok test_bgp201 (__main__.BgpTest) peer1 should not see peer2, transit1, transit2 ... ok test_bgp202 (__main__.BgpTest) peer2 should not see peer1, transit1, transit2 ... ok test_bgp203 (__main__.BgpTest) transit1 should not see peer1, peer2, transit2 ... ok test_bgp204 (__main__.BgpTest) transit2 should not see peer1, peer2, transit1 ... ok test_bgp205 (__main__.BgpTest) customer bogon filtering, peer1 should not see customer1 bogon ... ok test_bgp206 (__main__.BgpTest) peer1 should not see cust1 prefix with control community ... 
ok ---------------------------------------------------------------------- Ran 12 tests in 5.743s OK kll@htpc:~/vrnetlab/vr-bgp/example$ ``` We can see that the first test fails which can be rather common as BGP has not converged yet. Each test is automatically retried up to 10 times with an exponential back off timer. The tests are ordered such that "positive" tests, that look for the presence of a prefix, come first while "negative" tests that look for lack of prefixes come after. The positive tests will be retried until BGP has converged and we can then be sure about the result of the negative tests too. 07070100000068000081A400000000000000000000000164D7C437000018EC000000000000000000000000000000000000003F00000000vrnetlab-git1691862071.9187175/vr-bgp/example/junos-config.txt## Last commit: 2016-10-18 08:50:29 UTC by root version 16.1R1.7; system { root-authentication { encrypted-password "$5$y1GgXR4r$Z5xvMJ1Qq7ENI3l0i.YPZtvobXNeVp/8Fm5JM/RYN/C"; ## SECRET-DATA } login { user vrnetlab { uid 2000; class super-user; authentication { encrypted-password "$5$gnMDemJ9$9qSEMg/hZIdgIT8LFipKY8nNvhU3402O9UeVBDMNMs8"; ## SECRET-DATA } } } services { ssh; netconf { ssh; rfc-compliant; } } syslog { user * { any emergency; } file messages { any notice; authorization info; } file interactive-commands { interactive-commands any; } } } interfaces { ge-0/0/0 { description bgp-cust1; unit 0 { family inet { address 10.101.1.2/30; } family inet6 { address 2001:db8::1:2/126; } } } ge-0/0/1 { description bgp-cust2; unit 0 { family inet { address 10.101.1.6/30; } family inet6 { address 2001:db8::1:6/126; } } } ge-0/0/2 { description bgp-peer1; unit 0 { family inet { address 10.101.2.2/30; } family inet6 { address 2001:db8::2:2/126; } } } ge-0/0/3 { description bgp-peer2; unit 0 { family inet { address 10.101.2.6/30; } family inet6 { address 2001:db8::2:6/126; } } } ge-0/0/4 { description bgp-transit1; unit 0 { family inet { address 10.101.3.2/30; } family inet6 { address 
2001:db8::3:2/126; } } } ge-0/0/5 { description bgp-transit2; unit 0 { family inet { address 10.101.3.6/30; } family inet6 { address 2001:db8::3:6/126; } } } fxp0 { unit 0 { family inet { address 10.0.0.15/24; } } } } routing-options { router-id 1.2.3.4; autonomous-system 2792; } protocols { bgp { group IPV4-CUSTOMERS { import IPV4-CUSTOMER-IN; family inet { unicast; } export IPV4-CUSTOMER-OUT; neighbor 10.101.1.1 { peer-as 65011; } neighbor 10.101.1.5 { peer-as 65012; } } group IPV4-PEERS { import IPV4-PEER-IN; family inet { unicast; } export IPV4-PEER-OUT; neighbor 10.101.2.1 { peer-as 65021; } neighbor 10.101.2.5 { peer-as 65022; } } group IPV4-TRANSITS { import IPV4-TRANSIT-IN; family inet { unicast; } export IPV4-TRANSIT-OUT; neighbor 10.101.3.1 { peer-as 65031; } neighbor 10.101.3.5 { peer-as 65032; } } group IPV6-CUSTOMERS { family inet6 { unicast; } neighbor 2001:db8::1:1 { peer-as 65011; } neighbor 2001:db8::1:5 { peer-as 65012; } } group IPV6-PEERS { family inet6 { unicast; } neighbor 2001:db8::2:1 { peer-as 65021; } neighbor 2001:db8::2:5 { peer-as 65022; } } group IPV6-TRANSITS { family inet6 { unicast; } neighbor 2001:db8::3:1 { peer-as 65031; } neighbor 2001:db8::3:5 { peer-as 65032; } } } } policy-options { prefix-list IPV4-BOGONS { 10.0.0.0/8; 192.168.0.0/16; } policy-statement IPV4-CUSTOMER-IN { term BOGONS { from { prefix-list-filter IPV4-BOGONS orlonger; } then reject; } term MARK { then { community add FROM-CUSTOMER; } } } policy-statement IPV4-CUSTOMER-OUT { term ANNOUNCE { from community [ FROM-CUSTOMER FROM-TRANSIT FROM-PEER ]; then accept; } term REJECT { then reject; } } policy-statement IPV4-PEER-IN { term CLEAN-COMMUNITY { then { community delete CLEAN-COMMUNITY; next term; } } term MARK { then { local-preference 250; community add FROM-PEER; } } } policy-statement IPV4-PEER-OUT { term DO-NOT-ANNOUNCE { from community DO-NOT-ANNOUNCE; then reject; } term ANNOUNCE { from community FROM-CUSTOMER; then accept; } term REJECT { then reject; } 
} policy-statement IPV4-TRANSIT-IN { term CLEAN-COMMUNITY { then { community delete CLEAN-COMMUNITY; next term; } } term MARK { then { local-preference 150; community add FROM-TRANSIT; } } } policy-statement IPV4-TRANSIT-OUT { term ANNOUNCE { from community FROM-CUSTOMER; then accept; } term REJECT { then reject; } } community CLEAN-COMMUNITY members 2792:*; community DO-NOT-ANNOUNCE members 65000:0; community FROM-CUSTOMER members 2792:10300; community FROM-PEER members 2792:10200; community FROM-TRANSIT members 2792:10201; } 07070100000069000081ED00000000000000000000000164D7C43700000621000000000000000000000000000000000000003700000000vrnetlab-git1691862071.9187175/vr-bgp/example/start.sh# start vMX virtual router docker run --name j1 --privileged -i -t -d vr-vmx:16.1R1.7 # start vr-bgp and vr-xcon to connect it all together docker rm -f bgp-cust1 bgp-cust2 bgp-peer1 bgp-peer2 bgp-transit1 bgp-transit2 bgp-xcon docker run --name bgp-cust1 --privileged -i -t -d vr-bgp --router-id 10.100.1.1 --ipv4-prefix 10.101.1.0/30 --ipv6-prefix 2001:db8::1:0/126 --local-as 65011 --peer-as 2792 docker run --name bgp-cust2 --privileged -i -t -d vr-bgp --router-id 10.100.1.2 --ipv4-prefix 10.101.1.4/30 --ipv6-prefix 2001:db8::1:4/126 --local-as 65012 --peer-as 2792 docker run --name bgp-peer1 --privileged -i -t -d vr-bgp --router-id 10.100.2.1 --ipv4-prefix 10.101.2.0/30 --ipv6-prefix 2001:db8::2:0/126 --local-as 65021 --peer-as 2792 docker run --name bgp-peer2 --privileged -i -t -d vr-bgp --router-id 10.100.2.2 --ipv4-prefix 10.101.2.4/30 --ipv6-prefix 2001:db8::2:4/126 --local-as 65022 --peer-as 2792 docker run --name bgp-transit1 --privileged -i -t -d vr-bgp --router-id 10.100.3.1 --ipv4-prefix 10.101.3.0/30 --ipv6-prefix 2001:db8::3:0/126 --local-as 65031 --peer-as 2792 docker run --name bgp-transit2 --privileged -i -t -d vr-bgp --router-id 10.100.3.2 --ipv4-prefix 10.101.3.4/30 --ipv6-prefix 2001:db8::3:4/126 --local-as 65032 --peer-as 2792 docker run --name bgp-xcon
--privileged -i -t -d --link bgp-cust1 --link bgp-cust2 --link bgp-peer1 --link bgp-peer2 --link bgp-transit1 --link bgp-transit2 --link j1 vr-xcon --p2p j1/1--bgp-cust1/1 j1/2--bgp-cust2/1 j1/3--bgp-peer1/1 j1/4--bgp-peer2/1 j1/5--bgp-transit1/1 j1/6--bgp-transit2/1 --debug 0707010000006A000081ED00000000000000000000000164D7C43700002F28000000000000000000000000000000000000003600000000vrnetlab-git1691862071.9187175/vr-bgp/example/test.py#!/usr/bin/env python3 import datetime import json import logging import sys import time import unittest import urllib.request from functools import wraps all_speakers = [ 'bgp-cust1', 'bgp-cust2', 'bgp-peer1', 'bgp-peer2', 'bgp-transit1', 'bgp-transit2' ] speaker_containers = {} def retry(ExceptionToCheck, tries=4, delay=3, backoff=2, logger=None): """Retry calling the decorated function using an exponential backoff. http://www.saltycrane.com/blog/2009/11/trying-out-retry-decorator-python/ original from: http://wiki.python.org/moin/PythonDecoratorLibrary#Retry :param ExceptionToCheck: the exception to check. may be a tuple of exceptions to check :type ExceptionToCheck: Exception or tuple :param tries: number of times to try (not retry) before giving up :type tries: int :param delay: initial delay between retries in seconds :type delay: int :param backoff: backoff multiplier e.g. value of 2 will double the delay each retry :type backoff: int :param logger: logger to use. If None, print :type logger: logging.Logger instance """ def deco_retry(f): @wraps(f) def f_retry(*args, **kwargs): mtries, mdelay = tries, delay while mtries > 1: try: return f(*args, **kwargs) except ExceptionToCheck as e: msg = "%s, Retrying in %d seconds..." 
% (str(e), mdelay) if logger: logger.warning(msg) else: print(msg) time.sleep(mdelay) mtries -= 1 mdelay *= backoff return f(*args, **kwargs) return f_retry # true decorator return deco_retry def docker_inspect(name): """ Return inspection information about a running docker container """ container_name = speaker_containers[name] if not container_name: raise Exception("Couldn't map %s" % name) import subprocess out = subprocess.check_output(["docker", "inspect", container_name]) return json.loads(out.decode()) def docker_ip(name): """ Return IP address of docker container """ return docker_inspect(name)[0]['NetworkSettings']['IPAddress'] def announce(speaker, routes): ip = docker_ip(speaker) route_data = { 'routes': routes } params = json.dumps(route_data).encode() url = "http://%s:5000/announce" % ip req = urllib.request.Request(url, data=params, headers={'content-type': 'application/json'}) response = urllib.request.urlopen(req) def received(speaker, afi='ipv4 unicast'): ip = docker_ip(speaker) url = "http://%s:5000/received" % ip response = urllib.request.urlopen(url) data = json.loads(response.read().decode()) if afi not in data: return {} afi_data = data[afi] return afi_data def get_neighbors(speaker): ip = docker_ip(speaker) url = "http://%s:5000/neighbors" % ip response = urllib.request.urlopen(url) return json.loads(response.read().decode()) def wait_for_speakers(speakers, timeout=300): """ Wait for BGP speakers to start """ log = logging.getLogger() i = 0 while i < timeout: # assume up until proven otherwise all_up = True for speaker in speakers: try: neighbors = get_neighbors(speaker) except: log.debug("BGP speaker %s not up" % speaker) all_up = False break if all_up: log.debug("All speakers are up!") return time.sleep(1) i += 1 raise Exception("timed out") def wait_for_bgp(speakers, timeout=300): """ Wait for all BGP speakers to establish their BGP sessions """ log = logging.getLogger() i = 0 while i < timeout: # assume up until proven otherwise all_up = 
True for speaker in speakers: try: neighbors = get_neighbors(speaker) except: log.debug("BGP speaker %s not up" % speaker) all_up = False break if len(neighbors) == 0: all_up = False for neighbor, data in neighbors.items(): if data['state'] != 'up': log.debug("BGP speaker %s session not up" % speaker) all_up = False else: # convert timestamp to datetime object ts = datetime.datetime.strptime(data['timestamp'], "%Y-%m-%d %H:%M:%S") # what is delta between now and when peer came up? delta = datetime.datetime.utcnow() - ts # peer must be up for 5 seconds to let it "settle" if delta < datetime.timedelta(seconds=5): log.debug("BGP speaker %s session not up long enough" % speaker) all_up = False if all_up: log.debug("All BGP speaker sessions are up!") return time.sleep(1) i += 1 raise Exception("timed out") class BgpTest(unittest.TestCase): def setUp(self): # wait for bgp sessions to establish wait_for_speakers(all_speakers) # tell vr-bgp speakers to announce routes # customer announcements cust1_announce = [ { 'prefix': '11.0.0.0/24' }, # normal { 'prefix': '11.1.0.0/24', 'community': [ '65000:0' ] }, # do not announce to peers/transit { 'prefix': '10.0.11.0/24' } # 10.0.11.0/24 is BOGON and should be filtered ] announce('bgp-cust1', cust1_announce) cust2_announce = [ { 'prefix': '12.0.0.0/24' }, { 'prefix': '10.0.12.0/24' } ] announce('bgp-cust2', cust2_announce) # peer announcements peer1_announce = [ { 'prefix': '21.0.0.0/24', 'community': [ '2792:10300' ] } # fake we are customer - must be stripped ] announce('bgp-peer1', peer1_announce) peer2_announce = [ { 'prefix': '22.0.0.0/24' } ] announce('bgp-peer2', peer2_announce) # transit announcements tran1_announce = [ { 'prefix': '31.0.0.0/24', 'community': [ '2792:10300'] } # fake we are customer - must be stripped ] announce('bgp-transit1', tran1_announce) tran2_announce = [ { 'prefix': '32.0.0.0/24' } ] announce('bgp-transit2', tran2_announce) wait_for_bgp(all_speakers) # start off with "positive" tests, i.e.
where we check for the presence of # prefixes. see test_bgp2xx for "negative" tests @retry(AssertionError, tries=10) def test_bgp101(self): """ bgp-cust1 should see bgp-cust2, bgp-peer1, bgp-peer2, bgp-transit1 and bgp-transit2 """ rec = received('bgp-cust1') self.assertIn('12.0.0.0/24', rec) self.assertIn('21.0.0.0/24', rec) self.assertIn('22.0.0.0/24', rec) self.assertIn('31.0.0.0/24', rec) self.assertIn('32.0.0.0/24', rec) @retry(AssertionError, tries=10) def test_bgp102(self): """ bgp-cust2 should see bgp-cust1, bgp-peer1, bgp-peer2, bgp-transit1 and bgp-transit2 """ rec = received('bgp-cust2') self.assertIn('11.0.0.0/24', rec) self.assertIn('21.0.0.0/24', rec) self.assertIn('22.0.0.0/24', rec) self.assertIn('31.0.0.0/24', rec) self.assertIn('32.0.0.0/24', rec) @retry(AssertionError, tries=10) def test_bgp103(self): """ bgp-peer1 should see bgp-cust1, bgp-cust2 """ rec = received('bgp-peer1') self.assertIn('11.0.0.0/24', rec) self.assertIn('12.0.0.0/24', rec) @retry(AssertionError, tries=10) def test_bgp104(self): """ bgp-peer2 should see bgp-cust1, bgp-cust2 """ rec = received('bgp-peer2') self.assertIn('11.0.0.0/24', rec) self.assertIn('12.0.0.0/24', rec) @retry(AssertionError, tries=10) def test_bgp105(self): """ bgp-transit1 should see bgp-cust1, bgp-cust2 """ rec = received('bgp-transit1') self.assertIn('11.0.0.0/24', rec) self.assertIn('12.0.0.0/24', rec) @retry(AssertionError, tries=10) def test_bgp106(self): """ bgp-transit2 should see bgp-cust1, bgp-cust2 """ rec = received('bgp-transit2') self.assertIn('11.0.0.0/24', rec) self.assertIn('12.0.0.0/24', rec) # "negative" tests (i.e. 
we don't see a particular prefix) are run after, # to make sure we don't catch the peers in the early phases when they # haven't announced everything @retry(AssertionError, tries=10) def test_bgp201(self): """ peer1 should not see peer2, transit1, transit2 """ rec = received('bgp-peer1') self.assertNotIn('22.0.0.0/24', rec) self.assertNotIn('31.0.0.0/24', rec) self.assertNotIn('32.0.0.0/24', rec) @retry(AssertionError, tries=10) def test_bgp202(self): """ peer2 should not see peer1, transit1, transit2 """ rec = received('bgp-peer2') self.assertNotIn('21.0.0.0/24', rec) self.assertNotIn('31.0.0.0/24', rec) self.assertNotIn('32.0.0.0/24', rec) @retry(AssertionError, tries=10) def test_bgp203(self): """ transit1 should not see peer1, peer2, transit2 """ rec = received('bgp-transit1') self.assertNotIn('21.0.0.0/24', rec) self.assertNotIn('22.0.0.0/24', rec) self.assertNotIn('32.0.0.0/24', rec) @retry(AssertionError, tries=10) def test_bgp204(self): """ transit2 should not see peer1, peer2, transit1 """ rec = received('bgp-transit2') self.assertNotIn('21.0.0.0/24', rec) self.assertNotIn('22.0.0.0/24', rec) self.assertNotIn('31.0.0.0/24', rec) @retry(AssertionError, tries=10) def test_bgp205(self): """ customer bogon filtering, peer1 should not see customer1 bogon """ rec = received('bgp-peer1') self.assertNotIn('10.0.11.0/24', rec) @retry(AssertionError, tries=10) def test_bgp206(self): """ peer1 should not see cust1 prefix with control community """ rec = received('bgp-peer1') self.assertNotIn('11.1.0.0/24', rec) if __name__ == '__main__': import argparse parser = argparse.ArgumentParser(description='') parser.add_argument('--debug', action='store_true') parser.add_argument('--wait-for-speakers', action='store_true') parser.add_argument('--wait-for-bgp-up', action='store_true') parser.add_argument('--bgp-cust1', default="bgp-cust1") parser.add_argument('--bgp-cust2', default="bgp-cust2") parser.add_argument('--bgp-peer1', default="bgp-peer1") 
parser.add_argument('--bgp-peer2', default="bgp-peer2") parser.add_argument('--bgp-transit1', default="bgp-transit1") parser.add_argument('--bgp-transit2', default="bgp-transit2") args, rest = parser.parse_known_args() speaker_containers['bgp-peer1'] = args.bgp_peer1 speaker_containers['bgp-peer2'] = args.bgp_peer2 speaker_containers['bgp-cust1'] = args.bgp_cust1 speaker_containers['bgp-cust2'] = args.bgp_cust2 speaker_containers['bgp-transit1'] = args.bgp_transit1 speaker_containers['bgp-transit2'] = args.bgp_transit2 # set up logging log = logging.getLogger() logging.basicConfig() log.setLevel(logging.INFO) if args.debug: log.setLevel(logging.DEBUG) if args.wait_for_speakers: wait_for_speakers(all_speakers) sys.exit(0) if args.wait_for_bgp_up: wait_for_bgp(all_speakers) sys.exit(0) sys.argv[1:] = rest unittest.main(verbosity=2) 0707010000006B000081A400000000000000000000000164D7C43700000B0D000000000000000000000000000000000000003C00000000vrnetlab-git1691862071.9187175/vr-bgp/example/xr-config.txtinterface GigabitEthernet0/0/0/0 description bgp-cust1 ipv4 address 10.101.1.2 255.255.255.252 ! interface GigabitEthernet0/0/0/1 description bgp-cust2 ipv4 address 10.101.1.6 255.255.255.252 ! interface GigabitEthernet0/0/0/2 description bgp-peer1 ipv4 address 10.101.2.2 255.255.255.252 ! interface GigabitEthernet0/0/0/3 description bgp-peer2 ipv4 address 10.101.2.6 255.255.255.252 ! interface GigabitEthernet0/0/0/4 description bgp-transit1 ipv4 address 10.101.3.2 255.255.255.252 ! interface GigabitEthernet0/0/0/5 description bgp-transit2 ipv4 address 10.101.3.6 255.255.255.252 ! prefix-set IPV4-BOGONS 10.0.0.0/8 le 32 end-set ! ! == INCOMING policies == ! # mark with peer community (2792:10200) route-policy IPV4-PEER-IN set community (2792:10200) set local-preference 250 done end-policy ! # mark with transit community (2792:10201) route-policy IPV4-TRANSIT-IN set community (2792:10201) set local-preference 200 done end-policy ! 
# mark with customer community (2792:10300) route-policy IPV4-CUSTOMER-IN if (destination in IPV4-BOGONS) then drop endif set community (2792:10300) set local-preference 350 done end-policy ! ! ! == OUTGOING policies == ! # announce customers (2792:10300) to peers route-policy IPV4-PEER-OUT if (community matches-any (2792:10300)) then done endif drop end-policy ! # announce customers (2792:10300) to transit route-policy IPV4-TRANSIT-OUT if (community matches-any (2792:10300)) then done endif drop end-policy ! # to customers we announce peers (10200), transits (10201) and other customers (10300) route-policy IPV4-CUSTOMER-OUT if (community matches-any (2792:10200, 2792:10201, 2792:10300)) then done endif drop end-policy ! router bgp 2792 address-family ipv4 unicast ! neighbor 10.101.1.1 remote-as 65011 description bgp-cust1 address-family ipv4 unicast route-policy IPV4-CUSTOMER-IN in route-policy IPV4-CUSTOMER-OUT out ! ! neighbor 10.101.1.5 remote-as 65012 description bgp-cust2 address-family ipv4 unicast route-policy IPV4-CUSTOMER-IN in route-policy IPV4-CUSTOMER-OUT out ! ! neighbor 10.101.2.1 remote-as 65021 description bgp-peer1 address-family ipv4 unicast route-policy IPV4-PEER-IN in route-policy IPV4-PEER-OUT out ! ! neighbor 10.101.2.5 remote-as 65022 description bgp-peer2 address-family ipv4 unicast route-policy IPV4-PEER-IN in route-policy IPV4-PEER-OUT out ! ! neighbor 10.101.3.1 remote-as 65031 description bgp-transit1 address-family ipv4 unicast route-policy IPV4-TRANSIT-IN in route-policy IPV4-TRANSIT-OUT out ! ! neighbor 10.101.3.5 remote-as 65032 description bgp-transit2 address-family ipv4 unicast route-policy IPV4-TRANSIT-IN in route-policy IPV4-TRANSIT-OUT out ! ! ! 
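The Junos and IOS XR example configurations above implement the same community-driven export logic: routes are tagged on import (customer 2792:10300, peer 2792:10200, transit 2792:10201) and the export policies match on those tags. A minimal Python model of the export decisions (function names are illustrative, not part of the repo):

```python
# Communities used by the example configs above.
FROM_CUSTOMER = "2792:10300"   # tagged on customer import
FROM_PEER = "2792:10200"       # tagged on peer import
FROM_TRANSIT = "2792:10201"    # tagged on transit import
DO_NOT_ANNOUNCE = "65000:0"    # customer-set control community

def customer_out(communities):
    # customers receive routes learned from customers, peers and transits
    return bool({FROM_CUSTOMER, FROM_PEER, FROM_TRANSIT} & set(communities))

def peer_out(communities):
    # peers only receive customer routes, unless marked do-not-announce
    if DO_NOT_ANNOUNCE in communities:
        return False
    return FROM_CUSTOMER in communities

def transit_out(communities):
    # transits only receive customer routes
    return FROM_CUSTOMER in communities
```

This is the propagation pattern the test suite (test.py) asserts: customer prefixes are visible everywhere, while peer and transit prefixes are only visible to customers.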
0707010000006C000081ED00000000000000000000000164D7C43700001EC5000000000000000000000000000000000000003000000000vrnetlab-git1691862071.9187175/vr-bgp/vr-bgp.py#!/usr/bin/env python3 import ipaddress import logging import os import signal import subprocess import sys import time import jinja2 def handle_SIGCHLD(signal, frame): os.waitpid(-1, os.WNOHANG) def handle_SIGTERM(signal, frame): sys.exit(0) signal.signal(signal.SIGINT, handle_SIGTERM) signal.signal(signal.SIGTERM, handle_SIGTERM) signal.signal(signal.SIGCHLD, handle_SIGCHLD) def calculate_ip_addressing(input_net, man_address, man_next_hop): """ Calculate IP addressing (address, neighbor, default route) on the specified interface This function is AFI agnostic, just feed it ipaddress objects. :param input_net: the IPv4/IPv6 network to use :param man_address: optional override for the host address :param man_next_hop: optional override for default route :return: tuple of (local_address, neighbor, next_hop, prefixlen) """ net = ipaddress.ip_network(input_net) if net.prefixlen == (net.max_prefixlen-1): address = net[0] neighbor = net[1] next_hop = net[1] else: address = net[1] neighbor = net[2] next_hop = net[2] # override default options if man_address: if ipaddress.ip_address(man_address) not in net: print("local address {} not in network {}".format(man_address, net), file=sys.stderr) sys.exit(1) address = ipaddress.ip_address(man_address) if man_next_hop: if ipaddress.ip_address(man_next_hop) not in net: print("next-hop address {} not in network {}".format(man_next_hop, net), file=sys.stderr) sys.exit(1) next_hop = ipaddress.ip_address(man_next_hop) # sanity checks if next_hop == address: print("default route next-hop address ({}) can not be the same as the local address ({})".format(next_hop, address), file=sys.stderr) sys.exit(1) print("network: {} using address: {}".format(net, address)) return str(address), str(neighbor), str(next_hop), net.prefixlen if __name__ == '__main__': import argparse parser = 
argparse.ArgumentParser(description='') parser.add_argument('--debug', action="store_true", help='enable debug') parser.add_argument('--ipv4-local-address', help='local address or route table will be used') parser.add_argument('--ipv4-neighbor', help='IP address of the neighbor') parser.add_argument('--ipv4-prefix', help='IP prefix to configure on the link') parser.add_argument('--ipv4-next-hop', help='next-hop address for IPv4 default route') parser.add_argument('--ipv6-local-address', help='local address or route table will be used') parser.add_argument('--ipv6-neighbor', help='IP address of the neighbor') parser.add_argument('--ipv6-prefix', help='IP prefix to configure on the link') parser.add_argument('--ipv6-next-hop', help='next-hop address for IPv6 default route') parser.add_argument('--allow-mixed-afi-transport', action='store_true', help='do not limit announced prefixes to neighbor AFI') parser.add_argument('--listen', action="store_true", default=False, help='listen to incoming TCP connections') parser.add_argument('--local-as', required=True, help='local AS') parser.add_argument('--router-id', required=True, help='our router-id') parser.add_argument('--peer-as', required=True, help='peer AS') parser.add_argument('--md5', help='MD5') parser.add_argument('--trace', action='store_true', help='enable trace level logging') parser.add_argument('--vlan', type=int, help='VLAN ID to use') parser.add_argument('--ttl-security', action="store_true", help='Enable TTL security') args = parser.parse_args() LOG_FORMAT = "%(asctime)s: %(module)-10s %(levelname)-8s %(message)s" logging.basicConfig(format=LOG_FORMAT) logger = logging.getLogger() logger.setLevel(logging.INFO) if args.debug: logger.setLevel(logging.DEBUG) config = { 'IPV4_NEIGHBOR': None, 'IPV6_NEIGHBOR': None, 'IPV4_LOCAL_ADDRESS': None, 'IPV6_LOCAL_ADDRESS': None, 'LISTEN': args.listen, 'LOCAL_AS': args.local_as, 'PEER_AS': args.peer_as, 'ROUTER_ID': args.router_id or '192.0.2.255', 'MD5': args.md5, 
'INTERFACE': 'tap0', 'INTERFACE_VLAN': None, 'ALLOW_MIXED_AFI_TRANSPORT': args.allow_mixed_afi_transport, 'TTLSECURITY': args.ttl_security } if args.vlan: vlan_intf = "tap0.{}".format(args.vlan) config['INTERFACE'] = vlan_intf config['INTERFACE_PHY'] = 'tap0' config['INTERFACE_VLAN'] = args.vlan if args.ipv4_prefix: config['IPV4_LOCAL_ADDRESS'], config['IPV4_NEIGHBOR'], config['IPV4_NEXT_HOP'], config['IPV4_PREFIXLEN'] = \ calculate_ip_addressing(args.ipv4_prefix, args.ipv4_local_address, args.ipv4_next_hop) if args.ipv4_neighbor: config['IPV4_NEIGHBOR'] = args.ipv4_neighbor else: if args.ipv4_neighbor: print("--ipv4-neighbor requires --ipv4-prefix to be specified", file=sys.stderr) sys.exit(1) if args.ipv4_next_hop: print("--ipv4-next-hop requires --ipv4-prefix to be specified", file=sys.stderr) sys.exit(1) if args.ipv4_local_address: print("--ipv4-local-address requires --ipv4-prefix to be specified", file=sys.stderr) sys.exit(1) if args.ipv6_prefix: config['IPV6_LOCAL_ADDRESS'], config['IPV6_NEIGHBOR'], config['IPV6_NEXT_HOP'], config['IPV6_PREFIXLEN'] = \ calculate_ip_addressing(args.ipv6_prefix, args.ipv6_local_address, args.ipv6_next_hop) if args.ipv6_neighbor: config['IPV6_NEIGHBOR'] = args.ipv6_neighbor else: if args.ipv6_neighbor: print("--ipv6-neighbor requires --ipv6-prefix to be specified", file=sys.stderr) sys.exit(1) if args.ipv6_next_hop: print("--ipv6-next-hop requires --ipv6-prefix to be specified", file=sys.stderr) sys.exit(1) if args.ipv6_local_address: print("--ipv6-local-address requires --ipv6-prefix to be specified", file=sys.stderr) sys.exit(1) # start vr-xcon & configure ip addressing if not os.path.exists("/dev/net/tun"): print("No TUN device - make sure you run the container with --privileged", file=sys.stderr) sys.exit(1) # start tcp2tap to listen on incoming TCP. 
vr-xcon will then connect us to # the virtual router xcon_params = ["/xcon.py", "--tap-listen", "1"] # if there is an address configured for v4/v6, pass it to xcon for af in (4, 6): if config["IPV{}_LOCAL_ADDRESS".format(af)]: address = "{}/{}".format(config["IPV{}_LOCAL_ADDRESS".format(af)], config["IPV{}_PREFIXLEN".format(af)]) xcon_params.extend(("--ipv{}-address".format(af), address)) xcon_params.extend(("--ipv{}-route".format(af), config["IPV{}_NEXT_HOP".format(af)])) if args.vlan: xcon_params.extend(("--vlan", str(args.vlan))) t2t = subprocess.Popen(xcon_params) # generate exabgp config using Jinja2 template env = jinja2.Environment(loader=jinja2.FileSystemLoader(['/'])) template = env.get_template("/exabgp.conf.tpl") exa_config = open("/exabgp.conf", "w") exa_config.write(template.render(config=config)) exa_config.close() # start exabgp exap = subprocess.Popen(["exabgp", "/exabgp.conf"]) while True: if exap.poll() == 0: print("exabgp stopped, restarting in 2s") time.sleep(2) exap = subprocess.Popen(["exabgp", "/exabgp.conf"]) time.sleep(1) 0707010000006D000041ED00000000000000000000000264D7C43700000000000000000000000000000000000000000000002700000000vrnetlab-git1691862071.9187175/vr-xcon0707010000006E000081A400000000000000000000000164D7C43700000446000000000000000000000000000000000000003200000000vrnetlab-git1691862071.9187175/vr-xcon/DockerfileFROM debian:bullseye AS build MAINTAINER Kristian Larsson <kristian@spritelink.net> ENV DEBIAN_FRONTEND=noninteractive RUN apt-get update -qy \ && apt-get upgrade -qy \ && apt-get install -y \ gnupg \ wget \ && wget -q -O - https://apt.acton-lang.io/acton.gpg | apt-key add - \ && echo "deb [arch=amd64] http://apt.acton-lang.io/ bullseye main" >> /etc/apt/sources.list.d/acton.list \ && apt-get update \ && apt-get install -qy acton \ && rm -rf /var/lib/apt/lists/* COPY xcon.act /xcon.act RUN actonc /xcon.act FROM debian:bullseye MAINTAINER Kristian Larsson <kristian@spritelink.net> ENV DEBIAN_FRONTEND=noninteractive RUN 
apt-get update -qy \ && apt-get upgrade -qy \ && apt-get install -y \ bridge-utils \ iproute2 \ python3-ipy \ tcpdump \ telnet \ && rm -rf /var/lib/apt/lists/* ADD xcon.py / COPY --from=build /xcon /xcon # The first line in the health file is the exit code: 0 or 1. The following lines are the output message HEALTHCHECK --interval=5s --start-period=1s CMD sed 1d /health; exit `head -n1 /health` ENTRYPOINT ["/xcon.py"] 0707010000006F000081A400000000000000000000000164D7C43700000170000000000000000000000000000000000000003000000000vrnetlab-git1691862071.9187175/vr-xcon/Makefile-include ../makefile-sanity.include all: docker build --build-arg http_proxy=$(http_proxy) --build-arg https_proxy=$(https_proxy) -t $(REGISTRY)vr-xcon . docker-push: docker push $(REGISTRY)vr-xcon docker-test: @echo "TODO: implement smoke test" docker-test-clean: @echo "TODO: implement smoke test" docker-test-save-logs: @echo "TODO: implement smoke test" 07070100000070000081A400000000000000000000000164D7C43700001DA5000000000000000000000000000000000000003100000000vrnetlab-git1691862071.9187175/vr-xcon/README.mdvrnetlab xcon ============= This is the vrnetlab docker image of xcon - the cross-connect app. vr-xcon is used to connect two or more vrnetlab containers with each other. Modes of operation ------------------ ### TcpBridge All vrnetlab routers are run by qemu, which exposes the router interfaces via TCP ports, and vr-xcon connects these together. It can be seen as an overlay. The underlying TCP ports exposed by qemu listen on the docker0 interface (by default) of each container, but as long as two vrnetlab containers have connectivity via their default network, vr-xcon should be able to perform its job. ### Tcp2Tap vr-xcon also provides a mode to interconnect the TCP socket exposed by qemu to a local tap interface, which makes it easy to use other apps together with vrnetlab router containers.
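In both modes, the Ethernet frames travel over the qemu TCP sockets with a simple framing: a 4-byte length field in network byte order followed by the raw frame, as implemented in xcon.py. A minimal Python sketch of that framing (helper names are illustrative):

```python
import socket
import struct

def frame(payload: bytes) -> bytes:
    # prepend the 4-byte network-order length header used on the qemu TCP sockets
    return struct.pack("I", socket.htonl(len(payload))) + payload

def unframe(buf: bytes):
    # return (payload, remainder) once a complete packet is buffered, else (None, buf)
    if len(buf) < 4:
        return None, buf
    size = socket.ntohl(struct.unpack("I", buf[:4])[0])
    if len(buf) - 4 < size:
        return None, buf
    return buf[4:4 + size], buf[4 + size:]
```

Because TCP is a byte stream, a receiver may see partial packets; xcon.py buffers incoming bytes and applies the same two-state (read size, then read payload) logic.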
Run vr-xcon with `--tap-listen INTERFACE` to listen to a port - the mapping is the same as for other vrnetlab routers, i.e. INTERFACE=1 means it listens on TCP port 10001, which makes it easy to interconnect using vr-xcon. See vr-bgp for an example of how `--tap-listen` can be used in real life. In this mode, vr-xcon can also be used to configure IP addressing of the local tap interface. This includes setting an IPv4/IPv6 address, default route and VLAN ID. Building the docker image ------------------------- The vr-xcon container image is available from Docker Hub as vrnetlab/vr-xcon, so unless you have done local modifications, there is no real need to build your own container. Nevertheless, run `make` to build your own container image. The resulting image will be called 'vr-xcon'. The environment variable REGISTRY can be set to give the resulting image a prefix, for example by setting REGISTRY to 'registry.example.com:1234' the resulting image will be called 'registry.example.com:1234/vr-xcon' and can then be pushed to the registry through `docker push`. Usage ----- ### TcpBridge mode To connect the first interface of vr1 and vr2 and the second interface of vr1 with the first of vr3, run: ``` docker run -d --privileged --name vr-xcon --link vr1 --link vr2 --link vr3 vr-xcon --p2p vr1/1--vr2/1 vr1/2--vr3/1 ``` Note how --p2p is not repeated and the arguments to it are simply appended. It's possible to use the `--debug` option to have a debug message written out for every packet. ### Tcp2Tap mode For example, say we have a virtual router _r1_, and want to connect an application running in the _app_ docker container to the overlay network. We will need to run vr-xcon in the _app_ container in _Tcp2Tap_ mode, and then connect the two containers with vr-xcon in _TcpBridge_ mode.
The _r1_ interface already has IPv6 address 2003:1c08:161:1ff::1; we want our app to use 2003:1c08:161:1ff::42 and to use _r1_ as the default gateway to access the rest of the overlay networks. First, run vr-xcon in the _app_ container in the background (note this assumes the `app` container runs in _privileged_ mode): ``` docker exec -d app bash -c "/xcon.py --tap-listen 1 --ipv6-address 2003:1c08:161:1ff::42/64 --ipv6-route 2003:1c08:161:1ff::1" ``` Then, connect the _app_ container with _r1_ using vr-xcon in _TcpBridge_ mode: ``` docker run -d --privileged --name vr-xcon --link r1 --link app vr-xcon --p2p r1/1--app/1 ``` Experimental xcon-ng (acton) ---------------------------- An alternative implementation of xcon written in Acton (https://www.acton-lang.org/) is currently in development. It currently only supports TcpBridge mode, but the argument format is equivalent. To use it, just replace the entrypoint with `--entrypoint /xcon`. ``` docker run -d --privileged --name vr-xcon-ng --entrypoint /xcon --link vr1 --link vr2 --link vr3 vr-xcon --p2p vr1/1--vr2/1 vr1/2--vr3/1 ``` FUAQ - Frequently or Unfrequently Asked Questions ------------------------------------------------- ##### Q: Can I use '--' in the names of my vrnetlab containers? A: No. Since -- is used as the separator in the --p2p argument list for separating two vrnetlab instances, you cannot use -- in the name of the container itself. ##### Q: Is this fast? A: I haven't tested, but I would assume it is incredibly slow. ##### Q: What about jitter / PDV (packet delay variation)? A: Hehe, it can be really bad: 64 bytes from 192.168.1.2: icmp_seq=2097 ttl=64 time=4.769 ms 64 bytes from 192.168.1.2: icmp_seq=2098 ttl=64 time=8.317 ms 64 bytes from 192.168.1.2: icmp_seq=2099 ttl=64 time=15.112 ms 64 bytes from 192.168.1.2: icmp_seq=2100 ttl=64 time=38.859 ms 64 bytes from 192.168.1.2: icmp_seq=2101 ttl=64 time=1.940 ms UPDATE: Most packet delay variation seems to stem from the routers themselves.
Different routers induce different amounts of PDV. ##### Q: Why not connect virtual routers via tap interfaces? A: It would require fiddling a lot more with kernel-level networking, which isn't fun in a docker environment. Since the TCP packets encapsulating the inner payload run on top of the docker0 bridge, it's actually possible to run vrnetlab virtual routers on different docker hosts and use docker overlay networking to connect these together (although I've never tested it). That wouldn't work with kernel tap interfaces or similar. ##### Q: Why not use normal docker networks? A: Docker isn't really built for network-centric applications. The default networking provides a single interface to the container and adding more is a somewhat elaborate process. In addition, it appears that the networking needs to be set up before the container starts, which is a no-go for vrnetlab, as one of the design criteria is to be able to set up the topology, or modify it, after the containers have been started. ##### Q: Why not use TCP listen & connect mode directly from qemu? A: While this would get rid of vr-xcon and potentially perform much, much better, it means the topology would have to be known ahead of time and you couldn't make any changes to the topology while the virtual routers are running. vr-xcon defines the topology after the virtual routers have been started, making it possible to change the topology by stopping vr-xcon and starting a new one with a new topology. ##### Q: Starting and stopping vr-xcon to build a new topology would mean packet loss, no? A: Yes indeed, it will very likely induce packet loss. I believe it could be kept relatively short though, and the way qemu works, the virtual routers would never see their interfaces go down, so IS-IS, OSPF or similar should stay up. If I were to take a guess, I think vr-xcon could be stopped and started within a few milliseconds, so even BFD could potentially be run with quite aggressive timers without a problem.
Additionally, one can use one vr-xcon process/container per link such that it is possible to stop a single vr-xcon process without affecting other links. ##### Q: Why not use UDP mode as provided by qemu to directly connect the router? A: UDP could indeed be used and there would be two alternatives to this, one would be to set UDP src/dst pairs so that the qemu processes would talk directly to each other, this however has the inherent problem of knowing the topology ahead of time. See a previous answer on why this is bad. The other option would be to use "generic" UDP src/dst pairs and we could have a udpbridge that receives packets from the virtual routers and virtually cross-connects them to each other. This is virtually the same as the current vr-xcon concept, just using UDP instead of TCP but it would also require one end to be known, which brings us back to the problem of knowing the topology before the containers are started. 07070100000071000081A400000000000000000000000164D7C4370000131B000000000000000000000000000000000000003000000000vrnetlab-git1691862071.9187175/vr-xcon/xcon.actimport net import time import file actor Healthcheck(write_file_auth, expected): # keep track of TcpEndpoint actor states (ids, like "r1/1") var states = {} # Single actor for writing to the file to ensure the updates are flushed in # the same order as they are received by Healthcheck. 
var wf = file.WriteFile(write_file_auth, "health") def update(endpoint, state): states[endpoint] = state _flush() def _flush(): count = 0 for state in states.values(): if state == 3: count += 1 if count == expected: exit_code = 0 message = "All %d sockets connected" % expected else: exit_code = 1 message = "Expected %d sockets but only %d connected" % (expected, count) print("healthcheck: %d (%s)" % (exit_code, message)) health = "%d\n%s" % (exit_code, message) wf.write(health.encode()) _flush() actor TcpEndpoint(connect_auth, dns, name, interface, healthcheck): port = 10000 + interface id = "%s/%d" % (name, interface) var _other = None var _conn: ?net.TCPIPConnection = None var backoff = 0 var state = 0 # 0 = starting / waiting / backoff # 1 = wait for DNS # 2 = wait for connection # 3 = connected def _set_state(s): state = s healthcheck.update(id, state) def _on_tcp_connect(c): _set_state(3) print("TCP Client connection established to %s" % (id)) backoff = 0 def _on_tcp_receive(c, data): if _other is not None: _other.write(data) def _on_tcp_close(c): pass def _on_tcp_error(c, msg): print("Error for %s" % (id)) _reconnect(True) def _on_dns_resolve(resolved_addresses): if state != 1: print("Got unexpected DNS response, discarding...") else: if len(resolved_addresses) > 0: addr = resolved_addresses[0] print("Resolved %s to %s" % (name, addr)) # TODO: could potentially use .reconnect() here unless the # resolved address has changed, instead of closing current # connection and replacing it with a new one. The connection # must currently be explicitly closed to clean up resources in # the I/O subsystem. 
if _conn is not None: _conn.close(_on_tcp_close) _conn = net.TCPIPConnection(connect_auth, addr, port, _on_tcp_connect, _on_tcp_receive, _on_tcp_error) _set_state(2) def _on_dns_error(query, error): print("Error resolving DNS name", query, ":", error) _reconnect(True) def _connect(): if state != 0: print("Unexpected state for _connect:", state) return _set_state(1) dns.lookup_a(name, _on_dns_resolve, _on_dns_error) def _reconnect(error): _set_state(0) if error: backoff = min([backoff + 1.0, 5.0], 1.0) after backoff: _connect() _reconnect(False) def set_other(o): _other = o def write(data): if _conn is not None: _conn.write(data) def parse_side(i): parts = i.split("/", None) if len(parts) != 2: raise ValueError("Bad endpoint definition: %s" % i) return (host=parts[0], interface=int(parts[1])) actor main(env): print("Xcon starting up") connect_auth = net.TCPConnectAuth(net.TCPAuth(net.NetAuth(env.auth))) dns_auth = net.DNSAuth(net.NetAuth(env.auth)) dns = net.DNS(dns_auth) var i = 0 var p2p = [] while i < len(env.argv): arg = env.argv[i] print("arg: %s" % (arg)) # the --p2p argument is followed by one or more link specs (python argparse nargs="+") if arg == "--p2p": i += 1 while i < len(env.argv) and env.argv[i][0] != "-": arg_link = env.argv[i] print("\t%s" % arg_link) parts = arg_link.split("--", None) if len(parts) != 2: print("Bad link", arg_link) link = ( left=parse_side(parts[0]), right=parse_side(parts[1]) ) p2p.append(link) i += 1 i += 1 fa = file.FileAuth(env.auth) wfa = file.WriteFileAuth(fa) hc = Healthcheck(wfa, len(p2p) * 2) var links = [] for link in p2p: left = link.left right = link.right left_ep = TcpEndpoint(connect_auth, dns, right.host, right.interface, hc) right_ep = TcpEndpoint(connect_auth, dns, left.host, left.interface, hc) left_ep.set_other(right_ep) right_ep.set_other(left_ep) links.append((left=left, right=right))
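The `--p2p` link-spec format consumed by xcon.act (and, per the README, equivalently by the Python xcon) is: each side is `host/interface`, two sides are joined with `--`, and interface N maps to TCP port 10000 + N. A minimal Python sketch (function names chosen to mirror xcon.act's `parse_side`):

```python
def parse_side(spec: str):
    # split "host/interface"; interface N is reachable on TCP port 10000 + N
    host, sep, intf = spec.partition("/")
    if not sep or not intf:
        raise ValueError("Bad endpoint definition: %s" % spec)
    interface = int(intf)
    return host, interface, 10000 + interface

def parse_p2p(link: str):
    # split "left--right"; this is why container names may not contain "--"
    left, sep, right = link.partition("--")
    if not sep:
        raise ValueError("Bad link: %s" % link)
    return parse_side(left), parse_side(right)
```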
File: vrnetlab-git1691862071.9187175/vr-xcon/xcon.py

#!/usr/bin/env python3

import fcntl
import ipaddress
import logging
import os
import select
import signal
import socket
import struct
import subprocess
import sys
import time

def handle_SIGCHLD(signal, frame):
    os.waitpid(-1, os.WNOHANG)

def handle_SIGTERM(signal, frame):
    sys.exit(0)

signal.signal(signal.SIGINT, handle_SIGTERM)
signal.signal(signal.SIGTERM, handle_SIGTERM)
signal.signal(signal.SIGCHLD, handle_SIGCHLD)

class Tcp2Raw:
    def __init__(self, raw_intf='eth1', listen_port=10001):
        self.logger = logging.getLogger()
        # setup TCP side
        self.s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        self.s.bind(('::0', listen_port))
        self.s.listen(1)
        self.tcp = None
        # track current state of TCP side tunnel. 0 = reading size, 1 = reading packet
        self.tcp_state = 0
        self.tcp_buf = b''
        self.tcp_remaining = 0
        # setup raw side
        self.raw = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))
        self.raw.bind((raw_intf, 0))
        # don't block
        self.raw.setblocking(0)

    def work(self):
        while True:
            skts = [self.s, self.raw]
            if self.tcp is not None:
                skts.append(self.tcp)
            ir = select.select(skts, [], [])[0][0]
            if ir == self.s:
                self.logger.debug("received incoming TCP connection, setting up!")
                self.tcp, addr = self.s.accept()
            elif ir == self.tcp:
                self.logger.debug("received packet from TCP and sending to raw interface")
                try:
                    buf = ir.recv(2048)
                except (ConnectionResetError, OSError):
                    self.logger.warning("connection dropped")
                    continue
                if len(buf) == 0:
                    self.logger.info("no data from TCP socket, assuming client hung up, closing our socket")
                    ir.close()
                    self.tcp = None
                    self.tcp_state = 0
                    self.tcp_buf = b''
                    self.tcp_remaining = 0
                    continue
                self.tcp_buf += buf
                self.logger.debug("read %d bytes from tcp, tcp_buf length %d" % (len(buf), len(self.tcp_buf)))
                while True:
                    if self.tcp_state == 0:
                        # we want to read the size, which is 4 bytes; if we
                        # don't have enough bytes, wait for the next spin
                        if not len(self.tcp_buf) > 4:
                            self.logger.debug("reading size - less than 4 bytes available in buf; waiting for next spin")
                            break
                        size = socket.ntohl(struct.unpack("I", self.tcp_buf[:4])[0])  # first 4 bytes is size of packet
                        self.tcp_buf = self.tcp_buf[4:]  # remove first 4 bytes of buf
                        self.tcp_remaining = size
                        self.tcp_state = 1
                        self.logger.debug("reading size - pkt size: %d" % self.tcp_remaining)
                    if self.tcp_state == 1:
                        # read packet data
                        # we want to read the whole packet, which is specified
                        # by tcp_remaining; if we don't have enough bytes we
                        # wait for the next spin
                        if len(self.tcp_buf) < self.tcp_remaining:
                            self.logger.debug("reading packet - less than remaining bytes; waiting for next spin")
                            break
                        self.logger.debug("reading packet - reading %d bytes" % self.tcp_remaining)
                        payload = self.tcp_buf[:self.tcp_remaining]
                        self.tcp_buf = self.tcp_buf[self.tcp_remaining:]
                        self.tcp_remaining = 0
                        self.tcp_state = 0
                        self.raw.send(payload)
            else:
                # we always get full packets from the raw interface
                payload = self.raw.recv(2048)
                buf = struct.pack("I", socket.htonl(len(payload))) + payload
                if self.tcp is None:
                    self.logger.warning("received packet from raw interface but TCP not connected, discarding packet")
                else:
                    self.logger.debug("received packet from raw interface and sending to TCP")
                    try:
                        self.tcp.send(buf)
                    except:
                        self.logger.warning("could not send packet to TCP session")

class Tcp2Tap:
    def __init__(self, tap_intf='tap0', listen_port=10001):
        self.logger = logging.getLogger()
        # setup TCP side
        self.s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        self.s.bind(('::0', listen_port))
        self.s.listen(1)
        self.tcp = None
        # track current state of TCP side tunnel. 0 = reading size, 1 = reading packet
        self.tcp_state = 0
        self.tcp_buf = b''
        self.tcp_remaining = 0
        # setup tap side
        TUNSETIFF = 0x400454ca
        IFF_TUN = 0x0001
        IFF_TAP = 0x0002
        IFF_NO_PI = 0x1000
        self.tap = os.open("/dev/net/tun", os.O_RDWR)
        # we want a tap interface, no packet info and it should be called tap0
        # TODO: implement dynamic name using tap%d, right now we assume we are
        # only program in this namespace (docker container) that creates tap0
        ifs = fcntl.ioctl(self.tap, TUNSETIFF, struct.pack("16sH", tap_intf.encode(), IFF_TAP | IFF_NO_PI))
        # ifname - good for when we do dynamic interface name
        ifname = ifs[:16].decode().strip("\x00")

    def work(self):
        while True:
            skts = [self.s, self.tap]
            if self.tcp is not None:
                skts.append(self.tcp)
            ir = select.select(skts, [], [])[0][0]
            if ir == self.s:
                self.logger.debug("received incoming TCP connection, setting up!")
                self.tcp, addr = self.s.accept()
            elif ir == self.tcp:
                self.logger.debug("received packet from TCP and sending to tap interface")
                try:
                    buf = ir.recv(2048)
                except (ConnectionResetError, OSError):
                    self.logger.warning("connection dropped")
                    continue
                if len(buf) == 0:
                    self.logger.info("no data from TCP socket, assuming client hung up, closing our socket")
                    ir.close()
                    self.tcp = None
                    self.tcp_state = 0
                    self.tcp_buf = b''
                    self.tcp_remaining = 0
                    continue
                self.tcp_buf += buf
                self.logger.debug("read %d bytes from tcp, tcp_buf length %d" % (len(buf), len(self.tcp_buf)))
                while True:
                    if self.tcp_state == 0:
                        # we want to read the size, which is 4 bytes; if we
                        # don't have enough bytes, wait for the next spin
                        if not len(self.tcp_buf) > 4:
                            self.logger.debug("reading size - less than 4 bytes available in buf; waiting for next spin")
                            break
                        size = socket.ntohl(struct.unpack("I", self.tcp_buf[:4])[0])  # first 4 bytes is size of packet
                        self.tcp_buf = self.tcp_buf[4:]  # remove first 4 bytes of buf
                        self.tcp_remaining = size
                        self.tcp_state = 1
                        self.logger.debug("reading size - pkt size: %d" % self.tcp_remaining)
                    if self.tcp_state == 1:
                        # read packet data
                        # we want to read the whole packet, which is specified
                        # by tcp_remaining; if we don't have enough bytes we
                        # wait for the next spin
                        if len(self.tcp_buf) < self.tcp_remaining:
                            self.logger.debug("reading packet - less than remaining bytes; waiting for next spin")
                            break
                        self.logger.debug("reading packet - reading %d bytes" % self.tcp_remaining)
                        payload = self.tcp_buf[:self.tcp_remaining]
                        self.tcp_buf = self.tcp_buf[self.tcp_remaining:]
                        self.tcp_remaining = 0
                        self.tcp_state = 0
                        os.write(self.tap, payload)
            else:
                # we always get full packets from the tap interface
                payload = os.read(self.tap, 2048)
                buf = struct.pack("I", socket.htonl(len(payload))) + payload
                if self.tcp is None:
                    self.logger.warning("received packet from tap interface but TCP not connected, discarding packet")
                else:
                    self.logger.debug("received packet from tap interface and sending to TCP")
                    try:
                        self.tcp.send(buf)
                    except:
                        self.logger.warning("could not send packet to TCP session")

class TcpBridge:
    def __init__(self):
        self.logger = logging.getLogger()
        self.sockets = []
        self.socket2remote = {}
        self.socket2hostintf = {}

    def hostintf2addr(self, hostintf):
        hostname, interface = hostintf.split("/")
        try:
            res = socket.getaddrinfo(hostname, "100%02d" % int(interface), socket.AF_INET)
        except socket.gaierror:
            raise NoVR("Unable to resolve %s" % hostname)
        sockaddr = res[0][4]
        return sockaddr

    def add_p2p(self, p2p):
        source, destination = p2p.split("--")
        src_router, src_interface = source.split("/")
        dst_router, dst_interface = destination.split("/")
        src = self.hostintf2addr(source)
        dst = self.hostintf2addr(destination)
        left = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        right = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # dict to map back to hostname & interface
        self.socket2hostintf[left] = "%s/%s" % (src_router, src_interface)
        self.socket2hostintf[right] = "%s/%s" % (dst_router, dst_interface)
        try:
            left.connect(src)
        except:
            self.logger.info("Unable to connect to %s" % self.socket2hostintf[left])
        try:
            right.connect(dst)
        except:
            self.logger.info("Unable to connect to %s" % self.socket2hostintf[right])
        # add to list of sockets
        self.sockets.append(left)
        self.sockets.append(right)
        # dict for looking up remote in pair
        self.socket2remote[left] = right
        self.socket2remote[right] = left

    def work(self):
        while True:
            try:
                ir, _, _ = select.select(self.sockets, [], [])
            except select.error as exc:
                break
            for i in ir:
                remote = self.socket2remote[i]
                try:
                    buf = i.recv(2048)
                except ConnectionResetError as exc:
                    self.logger.warning("connection dropped, reconnecting to source %s" % self.socket2hostintf[i])
                    try:
                        i.connect(self.hostintf2addr(self.socket2hostintf[i]))
                        self.logger.debug("reconnect to %s successful" % self.socket2hostintf[i])
                    except Exception as exc:
                        self.logger.warning("reconnect failed %s" % str(exc))
                    continue
                except OSError as exc:
                    self.logger.warning("endpoint not connected, connecting to source %s" % self.socket2hostintf[i])
                    try:
                        i.connect(self.hostintf2addr(self.socket2hostintf[i]))
                        self.logger.debug("connect to %s successful" % self.socket2hostintf[i])
                    except:
                        self.logger.warning("connect failed %s" % str(exc))
                    continue
                if len(buf) == 0:
                    return
                self.logger.debug("%05d bytes %s -> %s " % (len(buf), self.socket2hostintf[i], self.socket2hostintf[remote]))
                try:
                    remote.send(buf)
                except BrokenPipeError:
                    self.logger.warning("unable to send packet %05d bytes %s -> %s due to remote being down, trying reconnect" % (len(buf), self.socket2hostintf[i], self.socket2hostintf[remote]))
                    try:
                        remote.connect(self.hostintf2addr(self.socket2hostintf[remote]))
                        self.logger.debug("connect to %s successful" % self.socket2hostintf[remote])
                    except Exception as exc:
                        self.logger.warning("connect failed %s" % str(exc))
                    continue

class TcpHub:
    def __init__(self):
        self.logger = logging.getLogger()
        self.sockets = []
        self.socket2hostintf = {}

    def ep2addr(self, hostintf):
        """ Return address based on endpoint """
        hostname, interface = hostintf.split("/")
        try:
            res = socket.getaddrinfo(hostname, "100%02d" % int(interface), socket.AF_INET)
        except socket.gaierror:
            raise NoVR("Unable to resolve %s" % hostname)
        sockaddr = res[0][4]
        return sockaddr

    def add_ep(self, ep):
        host, interface = ep.split("/")
        remote = self.ep2addr(ep)
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # dict to map back to hostname & interface
        self.socket2hostintf[s] = "%s/%s" % (host, interface)
        try:
            s.connect(remote)
        except:
            self.logger.info("Unable to connect to %s" % self.socket2hostintf[s])
        # add to list of sockets
        self.sockets.append(s)

    def work(self):
        while True:
            try:
                ir, _, _ = select.select(self.sockets, [], [])
            except select.error as exc:
                break
            for i in ir:
                try:
                    buf = i.recv(2048)
                except ConnectionResetError as exc:
                    self.logger.warning("connection dropped, reconnecting to source %s" % self.socket2hostintf[i])
                    try:
                        i.connect(self.ep2addr(self.socket2hostintf[i]))
                        self.logger.debug("reconnect to %s successful" % self.socket2hostintf[i])
                    except Exception as exc:
                        self.logger.warning("reconnect failed %s" % str(exc))
                    continue
                except OSError as exc:
                    self.logger.warning("endpoint not connected, connecting to source %s" % self.socket2hostintf[i])
                    try:
                        i.connect(self.ep2addr(self.socket2hostintf[i]))
                        self.logger.debug("connect to %s successful" % self.socket2hostintf[i])
                    except:
                        self.logger.warning("connect failed %s" % str(exc))
                    continue
                if len(buf) == 0:
                    return
                # send to all other sockets
                for remote in self.sockets:
                    self.logger.debug("%05d bytes %s -> %s " % (len(buf), self.socket2hostintf[i], self.socket2hostintf[remote]))
                    # don't need to send to ourselves though
                    if i is remote:
                        continue
                    try:
                        remote.send(buf)
                    except BrokenPipeError:
                        self.logger.warning("unable to send packet %05d bytes %s -> %s due to remote being down, trying reconnect" % (len(buf), self.socket2hostintf[i], self.socket2hostintf[remote]))
                        try:
                            remote.connect(self.ep2addr(self.socket2hostintf[remote]))
                            self.logger.debug("connect to %s successful" % self.socket2hostintf[remote])
                        except Exception as exc:
                            self.logger.warning("connect failed %s" % str(exc))
                        continue

class NoVR(Exception):
    """ No virtual router """

class TapConfigurator(object):
    def __init__(self, logger):
        self.logger = logger

    def _configure_interface_address(self, interface, address, default_route=None):
        next_hop = None
        net = ipaddress.ip_interface(address)
        if default_route:
            try:
                next_hop = ipaddress.ip_address(default_route)
            except ValueError:
                self.logger.error("next-hop address {} could not be parsed".format(default_route))
                sys.exit(1)
        if default_route and next_hop not in net.network:
            self.logger.error("next-hop address {} not in network {}".format(next_hop, net))
            sys.exit(1)
        subprocess.check_call(["ip", "-{}".format(net.version), "address", "add", str(net.ip) + "/" + str(net.network.prefixlen), "dev", interface])
        if next_hop:
            try:
                subprocess.check_call(["ip", "-{}".format(net.version), "route", "del", "default"])
            except:
                pass
            subprocess.check_call(["ip", "-{}".format(net.version), "route", "add", "default", "dev", interface, "via", str(next_hop)])

    def configure_interface(self, interface='tap0', vlan=None, ipv4_address=None, ipv4_route=None, ipv6_address=None, ipv6_route=None):
        # enable the interface
        subprocess.check_call(["ip", "link", "set", interface, "up"])
        interface_sysctl = interface
        if vlan:
            physical_interface = interface
            interface_sysctl = '{}/{}'.format(interface, vlan)
            interface = '{}.{}'.format(interface, vlan)
            subprocess.check_call(["ip", "link", "add", "link", physical_interface, "name", interface, "type", "vlan", "id", str(vlan)])
            subprocess.check_call(["ip", "link", "set", interface, "up"])
        if ipv4_address:
            self._configure_interface_address(interface, ipv4_address, ipv4_route)
        if ipv6_address:
            # stupid hack for docker engine disabling IPv6. It's somewhere around
            # version 17.04 that docker engine started disabling ipv6 on the sysctl
            # net.ipv6.conf.all and net.ipv6.conf.default while eth0 and lo still has
            # it, if docker engine is started with --ipv6. However, with the default at
            # disable we have to specifically enable it for interfaces created after the
            # container started...
            subprocess.check_call(["sysctl", "net.ipv6.conf.{}.disable_ipv6=0".format(interface_sysctl)])
            self._configure_interface_address(interface, ipv6_address, ipv6_route)

if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser(description='')
    parser.add_argument('--debug', action="store_true", default=False, help='enable debug')
    meg = parser.add_mutually_exclusive_group(required=True)
    meg.add_argument('--p2p', nargs='+', help='point-to-point link between virtual routers')
    meg.add_argument('--hub', nargs='+', help='hub between virtual routers, will forward any incoming packets to all outputs, like a hub')
    meg.add_argument('--raw-listen', help='raw to virtual router. Will listen on specified port for incoming connection; 1 for TCP/10001')
    meg.add_argument('--tap-listen', help='tap to virtual router. Will listen on specified port for incoming connection; 1 for TCP/10001')
    raw = parser.add_argument_group('raw')
    raw.add_argument('--raw-if', default="eth1", help='name of raw interface (use with other --raw-* arguments)')
    tap = parser.add_argument_group('tap')
    tap.add_argument('--tap-if', default="tap0", help='name of tap interface (use with other --tap-* arguments)')
    tap.add_argument('--ipv4-address', help='IPv4 address to use on the tap interface')
    tap.add_argument('--ipv4-route', help='default IPv4 route to use on the tap interface')
    tap.add_argument('--ipv6-address', help='IPv6 address to use on the tap interface')
    tap.add_argument('--ipv6-route', help='default IPv6 route to use on the tap interface')
    tap.add_argument('--vlan', type=int, help='VLAN ID to use on the tap interface')
    parser.add_argument('--trace', action="store_true", help="dummy, we don't support tracing but taking the option makes vrnetlab containers uniform")
    args = parser.parse_args()

    LOG_FORMAT = "%(asctime)s: %(module)-10s %(levelname)-8s %(message)s"
    logging.basicConfig(format=LOG_FORMAT)
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    if args.debug:
        logger.setLevel(logging.DEBUG)

    # Fake healthcheck until supported in xcon.py
    with open("health", "w") as hc:
        hc.write("0")

    if args.p2p:
        tt = TcpBridge()
        for p2p in args.p2p:
            try:
                tt.add_p2p(p2p)
            except NoVR as exc:
                print(exc, " Is it started and did you link it?")
                sys.exit(1)
        tt.work()

    if args.hub:
        hub = TcpHub()
        for ep in args.hub:
            try:
                hub.add_ep(ep)
            except NoVR as exc:
                print(exc, " Is it started and did you link it?")
                sys.exit(1)
        hub.work()

    if args.tap_listen:
        # init Tcp2Tap to create interface
        t2t = Tcp2Tap(args.tap_if, 10000 + int(args.tap_listen))
        # now (optionally) configure addressing
        tc = TapConfigurator(logger)
        tc.configure_interface(interface=args.tap_if, vlan=args.vlan, ipv4_address=args.ipv4_address, ipv4_route=args.ipv4_route, ipv6_address=args.ipv6_address, ipv6_route=args.ipv6_route)
        t2t.work()

    if args.raw_listen:
        while True:
            try:
                t2r = Tcp2Raw(args.raw_if, 10000 + int(args.raw_listen))
                t2r.work()
            except Exception as exc:
                print(exc)
                time.sleep(1)

File: vrnetlab-git1691862071.9187175/vrnetlab.sh

#!/bin/sh

vr_mgmt_ip() {
    VROUTER=$1
    VR_ADDRESS=$(docker inspect --format '{{.NetworkSettings.IPAddress}}' $VROUTER)
    echo $VR_ADDRESS
}

vrssh() {
    VROUTER=$1
    USER=$2
    VR_ADDRESS=$(vr_mgmt_ip $VROUTER)
    if [ -z "$USER" ] ; then
        if [ -x $(command -v sshpass) ]; then
            sshpass -p VR-netlab9 ssh -oStrictHostKeyChecking=no $VR_ADDRESS -l vrnetlab
        else
            ssh -oStrictHostKeyChecking=no $VR_ADDRESS -l vrnetlab
        fi
    else
        ssh -oStrictHostKeyChecking=no $VR_ADDRESS -l $USER
    fi
}

vrsftp() {
    VROUTER=$1
    USER=$2
    VR_ADDRESS=$(vr_mgmt_ip $VROUTER)
    if [ -z "$USER" ] ; then
        if [ -x $(command -v sshpass) ]; then
            sshpass -p VR-netlab9 sftp vrnetlab@$VR_ADDRESS
        else
            sftp vrnetlab@$VR_ADDRESS
        fi
    else
        sftp $USER@$VR_ADDRESS
    fi
}

vrcons() {
    VROUTER=$1
    telnet $(vr_mgmt_ip $VROUTER)
5000
}

vrbridge() {
    VR1=$1
    VP1=$2
    VR2=$3
    VP2=$4
    docker run -d --name "bridge-${VR1}-${VP1}-${VR2}-${VP2}" --link $VR1 --link $VR2 vr-xcon --p2p "${VR1}/${VP1}--${VR2}/${VP2}"
}

File: vrnetlab-git1691862071.9187175/vrp/Makefile

VENDOR=Huawei
NAME=VRP
IMAGE_FORMAT=qcow2
IMAGE_GLOB=*.qcow2

# match versions like:
# Simulator_V100R001C00SPC001T.qcow2
VERSION=$(shell echo $(IMAGE) | sed -e 's/.*\(V[0-9][0-9][0-9]R[0-9][0-9][0-9]C[0-9][0-9]SPC[0-9][0-9][0-9]\)T\?\(.*\|$$\)/\1/')

-include ../makefile-sanity.include
-include ../makefile.include

File: vrnetlab-git1691862071.9187175/vrp/README.md

vrnetlab / Huawei VRP
=====================

This is the vrnetlab docker image for the Huawei VRP virtual router simulator.

Building the docker image
-------------------------

You probably can't get the image for this, but if you did, place it in this
directory and run make.

It's been tested to boot and respond to SSH with:

* Simulator_V100R001C00SPC001T.qcow2  MD5:a4243883628c8ed18b7d5efb39dfee6d

FUAQ - Frequently or Unfrequently Asked Questions
-------------------------------------------------

##### Q: My VRP isn't starting

A: That's really not a question, is it? Anyway, I've had it take 15 minutes to
start. Sometimes closer to 30 minutes when my machine was loaded.

##### Q: Looking at the trace log, VRP seems to be restarting

A: VRP seems quite sensitive. Disabling CPU throttling has been known to help,
that is, disabling APM in BIOS. Just changing the CPU power governor in Linux
doesn't yield much of a difference. Disabling hyperthreading also helps.
While hyperthreading yields higher concurrency performance, the performance
per thread is actually lowered.

File: vrnetlab-git1691862071.9187175/vrp/docker/Dockerfile

FROM debian:bullseye
MAINTAINER Kristian Larsson <kristian@spritelink.net>

RUN apt-get update -qy \
 && apt-get upgrade -qy \
 && apt-get install -y \
    qemu-kvm \
    bridge-utils \
    socat \
    iproute2 \
    python3-ipy \
    python3-pexpect \
    ssh \
 && rm -rf /var/lib/apt/lists/*

ARG IMAGE
COPY $IMAGE /
COPY *.py /

EXPOSE 22 830 5000 10000-10099
HEALTHCHECK CMD ["/healthcheck.py"]
ENTRYPOINT ["/launch.py"]

File: vrnetlab-git1691862071.9187175/vrp/docker/launch.py

#!/usr/bin/env python3

import datetime
import logging
import re
import signal
import sys
import time
import os

import vrnetlab

def handle_SIGCHLD(signal, frame):
    os.waitpid(-1, os.WNOHANG)

def handle_SIGTERM(signal, frame):
    sys.exit(0)

signal.signal(signal.SIGINT, handle_SIGTERM)
signal.signal(signal.SIGTERM, handle_SIGTERM)
signal.signal(signal.SIGCHLD, handle_SIGCHLD)

TRACE_LEVEL_NUM = 9
logging.addLevelName(TRACE_LEVEL_NUM, "TRACE")

def trace(self, message, *args, **kws):
    # Yes, logger takes its '*args' as 'args'.
    if self.isEnabledFor(TRACE_LEVEL_NUM):
        self._log(TRACE_LEVEL_NUM, message, args, **kws)
logging.Logger.trace = trace

class simulator_VM(vrnetlab.VM):
    no_paging_command = 'screen-length 0 temporary'

    def __init__(self, username, password):
        for e in os.listdir("/"):
            if re.search(".qcow2$", e):
                disk_image = "/" + e
        self.ram = 16384
        self.vcpu = 6
        self.disk_size = '40G'
        super(simulator_VM, self).__init__(username, password, disk_image=disk_image, ram=self.ram)
        self.num_nics = 14
        self.wait_time = 30
        self.nic_type = 'virtio-net-pci'
        vrnetlab.run_command(["qemu-img", "create", "-f", "qcow2", "DataDisk.qcow2", self.disk_size])
        self.qemu_args.extend(["-smp", str(self.vcpu), "-cpu", "host", "-drive", "if=virtio,format=qcow2,file=DataDisk.qcow2"])
        self.qemu_args.extend(["-D", "/var/log/qemu.log"])

    def bootstrap_spin(self):
        """ This function should be called periodically to do work. """
        if self.spins > 300:
            # too many spins with no result -> give up
            self.stop()
            self.start()
            return

        tn_switcher = {
            0: 'root',        # User Root Login
            1: 'Huawei@123',  # root password
            2: 'Root@123',    # time_client_start enter password
            3: 'Root@123',    # time_client_start enter again
            4: '\n'           # Press Enter to Continue
        }

        (ridx, match, res) = self.tn.expect([b'localhost login: ', b'Password: ', b'Enter Password:', b'Confirm Password:', b'other key continue'], 1)
        if match:  # got a match!
            v = tn_switcher.get(ridx)
            self.wait_write(cmd=v, wait=None)
            # Enter the CLI, then config device
            if ridx == 3:
                # run main config!
                self.bootstrap_config()
                time.sleep(1)
                # send Ctrl + [ to close time_client_start
                # self.wait_write(cmd='\x1D', wait=None)
                # close telnet connection
                self.tn.close()
                # startup time?
                startup_time = datetime.datetime.now() - self.start_time
                self.logger.info("Startup complete in: %s" % startup_time)
                # mark as running
                self.running = True
                return
            time.sleep(5)

        # no match, if we saw some output from the router it's probably
        # booting, so let's give it some more time
        if res != b'':
            self.logger.trace("OUTPUT: %s" % res.decode())
            # reset spins if we saw some output
            self.spins = 0
        self.spins += 1
        return

    def bootstrap_config(self):
        """ Do the actual bootstrap config """
        # Wait for GigabitEthernet4/0/X interfaces to appear in running config
        # NOTE: do not use 'display current-configuration interface GigabitEthernet 4/0/X'
        # to check. The output differs from 'display current-configuration'!
        self.wait_config("display current-configuration", 'interface GigabitEthernet4/0/1')
        self.wait_config("display current-configuration", 'interface GigabitEthernet4/0/4')
        self.wait_config("display current-configuration", 'interface GigabitEthernet4/0/14')
        # The first response might be the log message like
        # 12/active/linkDown/Major/occurredTime:2019-11-11 23:49:03/-/-/alarmID:0x08520003/VS=Admin-VS-CID=0x807a0404:The interface status changes. (ifName=GigabitEthernet4/0/14, AdminStatus=UP, OperStatus=UP, Reason=Interface physical link is up, mainIfname=GigabitEthernet4/0/14
        # So wait three more times to make sure we get the correct response
        self.wait_config("display current-configuration", 'interface GigabitEthernet4/0/14')
        self.wait_config("display current-configuration", 'interface GigabitEthernet4/0/14')
        self.wait_config("display current-configuration", 'interface GigabitEthernet4/0/14')

        self.logger.info("applying bootstrap configuration")
        self.wait_write(cmd="", wait=None)
        self.wait_write(cmd="", wait=None)
        self.wait_write(cmd="system-view", wait=None)
        self.wait_write(cmd="sysname HUAWEI", wait="]")
        self.wait_write(cmd="ssh server key-exchange dh_group14_sha1", wait="]")
        self.wait_write(cmd="interface GigabitEthernet 0/0/0", wait="]")
        self.wait_write(cmd="ip address 10.0.0.15 24", wait="]")
        self.wait_write(cmd="commit", wait="]")
        # when simulator booting, config is not ok
        # Error: The system is busy in building configuration. Please wait for a moment...
        while True:
            (idx, match, res) = self.tn.expect([b'Error:'], 1)
            if match:
                if idx == 0:
                    self.wait_write(cmd="commit", wait=None)
                    time.sleep(5)
            else:
                break
        # add User vrnetlab
        self.wait_write(cmd="aaa", wait=None)
        self.wait_write(cmd="local-user %s password" % self.username, wait="]")
        self.wait_write(cmd="%s" % self.password, wait="Enter Password:")
        self.wait_write(cmd="%s" % self.password, wait="Confirm Password:")
        self.wait_write(cmd="local-user %s service-type ssh" % self.username, wait="]")
        self.wait_write(cmd="local-user %s user-group manage-ug" % self.username, wait="]")
        self.wait_write(cmd="commit", wait="]")

class simulator(vrnetlab.VR):
    def __init__(self, username, password):
        super(simulator, self).__init__(username, password)
        self.vms = [simulator_VM(username, password)]

if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser(description='')
    parser.add_argument('--trace', action='store_true', help='enable trace level logging')
    parser.add_argument('--username', default='vrnetlab', help='Username')
    parser.add_argument('--password', default='VR-netlab9', help='Password')
    parser.add_argument('--num-nics', default=14, type=int, help='Number of NICs, this parameter is IGNORED, only added to be compatible with other platforms')
    args = parser.parse_args()

    LOG_FORMAT = "%(asctime)s: %(module)-10s %(levelname)-8s %(message)s"
    logging.basicConfig(format=LOG_FORMAT)
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    if args.trace:
        logger.setLevel(1)

    vr = simulator(args.username, args.password)
    vr.start()

File: vrnetlab-git1691862071.9187175/vsr1000/Makefile

VENDOR=HP
NAME=VSR1000
IMAGE_FORMAT=qcow2
IMAGE_GLOB=*.qco

# match versions like:
# VSR1000_HPE-CMW710-R0326-X64.qcow
# VSR1000_HPE-CMW710-E0321P01-X64.qco
VERSION=$(shell echo $(IMAGE) | sed -n 's/.*CMW\([0-9]\)\([0-9]\+\)-\([ER][0-9][0-9][0-9][0-9]\).*/\1.\2-\3/p')

-include ../makefile-sanity.include
-include ../makefile.include

File: vrnetlab-git1691862071.9187175/vsr1000/README.md

vrnetlab / HP VSR1000
=====================

This is the vrnetlab docker image for HP VSR1000.

Building the docker image
-------------------------

Download the HPE VSR1001 image from
https://h10145.www1.hpe.com/downloads/SoftwareReleases.aspx?ProductNumber=JG811AAE

Unzip the downloaded zip file, place the .qco image in this directory and run
`make docker-image`.

The tag is the same as the version of the VSR1000 image, so if you have
VSR1000_HPE-CMW710-R0326-X64.qco your docker image will be called
vr-vsr1000:7.10-R0326

Tested booting and responding to SSH:

* VSR1000_HPE-CMW710-R0326-X64.qco  MD5:4153d638bfa72ca72a957ea8682ad0e2

Usage
-----

```
docker run -d --privileged --name my-vsr1000-router vr-vsr1000:7.10-R0326
```

System requirements
-------------------

CPU: 1 core

RAM: 1GB

Disk: <1GB

FUAQ - Frequently or Unfrequently Asked Questions
-------------------------------------------------

##### Q: Has this been extensively tested?

A: Nope. It starts and you can connect to it.
Take it for a spin and provide some feedback :-) 0707010000007D000041ED00000000000000000000000264D7C43700000000000000000000000000000000000000000000002E00000000vrnetlab-git1691862071.9187175/vsr1000/docker0707010000007E000081A400000000000000000000000164D7C437000001BC000000000000000000000000000000000000003900000000vrnetlab-git1691862071.9187175/vsr1000/docker/DockerfileFROM debian:bullseye MAINTAINER Kristian Larsson <kristian@spritelink.net> ENV DEBIAN_FRONTEND=noninteractive RUN apt-get update -qy \ && apt-get upgrade -qy \ && apt-get install -y \ bridge-utils \ iproute2 \ python3-ipy \ socat \ qemu-kvm \ && rm -rf /var/lib/apt/lists/* ARG IMAGE COPY $IMAGE* / COPY *.py / EXPOSE 22 161/udp 830 5000 6000 10000-10099 HEALTHCHECK CMD ["/healthcheck.py"] ENTRYPOINT ["/launch.py"] 0707010000007F000081ED00000000000000000000000164D7C437000015EF000000000000000000000000000000000000003800000000vrnetlab-git1691862071.9187175/vsr1000/docker/launch.py#!/usr/bin/env python3 import datetime import logging import os import re import signal import sys import telnetlib import time import vrnetlab def handle_SIGCHLD(signal, frame): os.waitpid(-1, os.WNOHANG) def handle_SIGTERM(signal, frame): sys.exit(0) signal.signal(signal.SIGINT, handle_SIGTERM) signal.signal(signal.SIGTERM, handle_SIGTERM) signal.signal(signal.SIGCHLD, handle_SIGCHLD) TRACE_LEVEL_NUM = 9 logging.addLevelName(TRACE_LEVEL_NUM, "TRACE") def trace(self, message, *args, **kws): # Yes, logger takes its '*args' as 'args'. if self.isEnabledFor(TRACE_LEVEL_NUM): self._log(TRACE_LEVEL_NUM, message, args, **kws) logging.Logger.trace = trace class VSR_vm(vrnetlab.VM): def __init__(self, username, password): for e in os.listdir("/"): if re.search(".qco$", e): disk_image = "/" + e super(VSR_vm, self).__init__(username, password, disk_image=disk_image, ram=1024) # The VSR supports up to 15 user nics self.num_nics = 7 def bootstrap_spin(self): """ This function should be called periodically to do work. 
""" if self.spins > 300: # too many spins with no result -> give up self.stop() self.start() return (ridx, match, res) = self.tn.expect([b"Performing automatic"], 1) if match: # got a match! if ridx == 0: # login self.logger.debug("VM started") self.wait_write("", wait="(qemu)", con=self.qm) # To allow access to aux0 serial console self.logger.debug("Writing to QEMU Monitor") # Cred to @plajjan for this one commands = """\x04 system-view user-interface aux 0 authentication-mode none user-role network-admin quit """ key_map = { '\x04': 'ctrl-d', ' ': 'spc', '-': 'minus', '\n': 'kp_enter' } qemu_commands = [ "sendkey {}".format(key_map.get(c) or c) for c in commands ] for c in qemu_commands: self.wait_write(c, wait="(qemu)", con=self.qm) # Pace the characters sent via QEMU Monitor time.sleep(0.1) self.logger.debug("Done writing to QEMU Monitor") self.logger.debug("Switching to line aux0") self.tn = telnetlib.Telnet("127.0.0.1", 5000 + self.num) # run main config! self.bootstrap_config() # close telnet connection self.tn.close() # startup time? 
startup_time = datetime.datetime.now() - self.start_time self.logger.info("Startup complete in: %s" % startup_time) # mark as running self.running = True return # no match, if we saw some output from the router it's probably # booting, so let's give it some more time if res != b'': self.logger.trace("OUTPUT: %s" % res.decode()) # reset spins if we saw some output self.spins = 0 self.spins += 1 return def bootstrap_config(self): """ Do the actual bootstrap config """ self.logger.info("applying bootstrap configuration") self.wait_write("\r", None) # Wait for the prompt time.sleep(1) self.wait_write("system-view", "<HPE>") self.wait_write("ssh server enable", "[HPE]") self.wait_write("user-interface class vty", "[HPE]") self.wait_write("authentication-mode scheme", "[HPE-line-class-vty]") self.wait_write("protocol inbound ssh", "[HPE-line-class-vty]") self.wait_write("quit", "[HPE-line-class-vty]") self.wait_write("local-user %s" % (self.username), "[HPE]") self.wait_write("password simple %s" % (self.password), "[HPE-luser-manage-%s]" % (self.username)) self.wait_write("service-type ssh", "[HPE-luser-manage-%s]" % (self.username)) self.wait_write("authorization-attribute user-role network-admin", "[HPE-luser-manage-%s]" % (self.username)) self.wait_write("quit", "[HPE-luser-manage-%s]" % (self.username)) self.wait_write("interface GigabitEthernet%s/0" % (self.num_nics + 1), "[HPE]") self.wait_write("ip address 10.0.0.15 255.255.255.0", "[HPE-GigabitEthernet%s/0]" % (self.num_nics + 1)) self.wait_write("quit", "[HPE-GigabitEthernet%s/0]" % (self.num_nics + 1)) self.wait_write("quit", "[HPE]") self.wait_write("quit", "<HPE>") self.logger.info("completed bootstrap configuration") class VSR(vrnetlab.VR): def __init__(self, username, password): super(VSR, self).__init__(username, password) self.vms = [ VSR_vm(username, password) ] if __name__ == '__main__': import argparse parser = argparse.ArgumentParser(description='') parser.add_argument('--trace', action='store_true', 
            help='enable trace level logging')
    parser.add_argument('--username', default='vrnetlab', help='Username')
    parser.add_argument('--password', default='VR-netlab9', help='Password')
    args = parser.parse_args()

    LOG_FORMAT = "%(asctime)s: %(module)-10s %(levelname)-8s %(message)s"
    logging.basicConfig(format=LOG_FORMAT)
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    if args.trace:
        logger.setLevel(1)

    vr = VSR(args.username, args.password)
    vr.start()

==> vrnetlab-git1691862071.9187175/vsrx/Makefile <==
VENDOR=Juniper
NAME=vSRX
IMAGE_FORMAT=qcow
IMAGE_GLOB=*.qcow2
IMAGE=junos-vsrx3-x86-64-20.2R1.10.qcow2

# match versions like:
# 12.1X47-D15.4
VERSION=$(shell echo $(IMAGE) | cut -d - -f2,3)

-include ../makefile-sanity.include
-include ../makefile.include

==> vrnetlab-git1691862071.9187175/vsrx/README.md <==
vrnetlab / Juniper vSRX
==================================

This is the vrnetlab docker image for Juniper vSRX.

Building the docker image
-------------------------
The image can be downloaded automatically using `./get-vsrx.sh`. The script
will download the official Juniper Vagrant box (216 MB), uncompress it and
convert the vSRX image to QCOW2 format.

Run `make docker-image` to build the docker image.
Tested booting and responding to SSH:
* ffp-12.1X47-D15.4-packetmode.qcow2  MD5:692628eb87e067db33459a0030ec81b0

Usage
-----
```
docker run -d --privileged --name my-vsrx-box vrnetlab/vr-vsrx:12.1X47-D15.4
```

System requirements
-------------------
CPU: 2 core
RAM: 2GB
Disk: <1GB

https://www.juniper.net/documentation/en_US/firefly12.1x46-d10/topics/reference/general/security-virtual-perimeter-system-requirement-with-kvm.html

FAQ - Frequently or Unfrequently Asked Questions
-------------------------------------------------
##### Q: Has this been extensively tested?
A: Nope. It starts and you can connect to it. Take it for a spin and provide
some feedback :-)

==> vrnetlab-git1691862071.9187175/vsrx/docker/Dockerfile <==
FROM registry.opensuse.org/isv/suseinfra/containers/containerfile/vrnetlab-base:latest
MAINTAINER Georg Pfuetzenreuter <georg.pfuetzenreuter@suse.com>

ARG IMAGE
COPY $IMAGE* /opt/images/
COPY launch.py /usr/local/bin/

==> vrnetlab-git1691862071.9187175/vsrx/docker/launch.py <==
#!/usr/bin/env python3

import datetime
import logging
import os
import re
import signal
import sys

import vrnetlab

def handle_SIGCHLD(signal, frame):
    os.waitpid(-1, os.WNOHANG)

def handle_SIGTERM(signal, frame):
    sys.exit(0)

signal.signal(signal.SIGINT, handle_SIGTERM)
signal.signal(signal.SIGTERM, handle_SIGTERM)
signal.signal(signal.SIGCHLD, handle_SIGCHLD)

TRACE_LEVEL_NUM = 9
logging.addLevelName(TRACE_LEVEL_NUM, "TRACE")
def trace(self, message, *args, **kws):
    # Yes, logger takes its '*args' as 'args'.
    if self.isEnabledFor(TRACE_LEVEL_NUM):
        self._log(TRACE_LEVEL_NUM, message, args, **kws)
logging.Logger.trace = trace

# expose REST API
vrnetlab.HOST_FWDS.append(('tcp', 3000, 3000))

class VSRX_vm(vrnetlab.VM):
    def __init__(self, username, password):
        for e in os.listdir("/opt/images"):
            if re.search(".qcow2$", e):
                disk_image = "/opt/images/" + e
        super(VSRX_vm, self).__init__(username, password,
                                      disk_image=disk_image, ram=6144)
        self.qemu_args.extend(["-smp", "2"])
        self.nic_type = "virtio-net-pci"
        self.num_nics = 10

    def bootstrap_spin(self):
        """ This function should be called periodically to do work. """
        if self.spins > 300:
            # too many spins with no result -> give up
            self.stop()
            self.start()
            return

        (ridx, match, res) = self.tn.expect([b"login:"], 1)
        if match: # got a match!
            if ridx == 0: # login
                self.logger.info("VM started")

                # Login
                self.wait_write("\r", None)
                self.wait_write("root", wait="login:")
                self.wait_write("", wait="root@:~ #")
                self.logger.info("Login completed")

                # run main config!
                self.bootstrap_config()
                # close telnet connection
                self.tn.close()
                # startup time?
                startup_time = datetime.datetime.now() - self.start_time
                self.logger.info("Startup complete in: %s" % startup_time)
                # mark as running
                self.running = True
            return

        # no match, if we saw some output from the router it's probably
        # booting, so let's give it some more time
        if res != b'':
            self.logger.trace("OUTPUT:\n%s" % res.decode())
            # reset spins if we saw some output
            self.spins = 0

        self.spins += 1
        return

    def bootstrap_config(self):
        """ Do the actual bootstrap config """
        self.logger.info("applying bootstrap configuration")
        self.wait_write("cli", "%")
        self.wait_write("configure", ">")
        self.wait_write("set system services ssh", "#")
        self.wait_write("set system services netconf ssh", "#")
        self.wait_write("set system login user %s class super-user authentication plain-text-password" % (self.username), "#")
        self.wait_write(self.password, "New password:")
        self.wait_write(self.password, "Retype new password:")
        self.wait_write("set system root-authentication plain-text-password", "#")
        self.wait_write(self.password, "New password:")
        self.wait_write(self.password, "Retype new password:")
        self.wait_write("set interfaces fxp0 unit 0 family inet address 10.0.0.15/24", "#")
        self.wait_write("delete system license", "#")
        self.wait_write("commit", "#")
        self.wait_write("set system services rest http addresses 10.0.0.15", "#")
        self.wait_write(f'set system host-name {os.uname()[1]}')
        self.wait_write("commit", "#")
        self.wait_write("quit", "#")
        self.logger.info("completed bootstrap configuration")


class VSRX(vrnetlab.VR):
    def __init__(self, username, password):
        super(VSRX, self).__init__(username, password)
        self.vms = [ VSRX_vm(username, password) ]


if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser(description='')
    parser.add_argument('--trace', action='store_true',
            help='enable trace level logging')
    parser.add_argument('--username', default='vrnetlab', help='Username')
    parser.add_argument('--password', default='VR-netlab9', help='Password')
    args = parser.parse_args()

    LOG_FORMAT = "%(asctime)s: %(module)-10s %(levelname)-8s %(message)s"
    logging.basicConfig(format=LOG_FORMAT)
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    logEnv = os.environ.get('LOG_LEVEL')
    if args.trace:
        logger.setLevel(1)
    elif logEnv:
        if logEnv.isnumeric():
            logEnv = int(logEnv)
        try:
            logger.setLevel(logEnv)
        except ValueError:
            print(f'Illegal log level "{logEnv}"', file=sys.stderr)
            sys.exit(1)

    vr = VSRX(args.username, args.password)
    vr.start()

==> vrnetlab-git1691862071.9187175/vsrx/get-vsrx.sh <==
#!/bin/bash
IMAGE=$(ls temp/ffp* >/dev/null 2>&1)
if [ ! -f temp/*.qcow2 ]; then
    echo -e "Creating temp directory\n"
    mkdir temp
    echo -e "Downloading file\n"
    wget -nc -O temp/virtualbox.box https://app.vagrantup.com/juniper/boxes/ffp-12.1X47-D15.4-packetmode/versions/0.5.0/providers/virtualbox.box
    echo -e "Extracting Vagrant Box\n"
    tar -xf temp/virtualbox.box -C temp
    echo -e "Converting VMDK to QCOW2\n"
    qemu-img convert -f vmdk -O qcow2 temp/packer-virtualbox-ovf-1427461878-disk1.vmdk temp/ffp-12.1X47-D15.4-packetmode.qcow2
    echo -e "Moving QCOW2 image to main folder\n"
    mv temp/*.qcow2 .
    echo -e "Deleting temp directory\n"
    rm -r temp
else
    echo "Image $IMAGE exists. Exiting."
fi

==> vrnetlab-git1691862071.9187175/xrv/Makefile <==
VENDOR=Cisco
NAME=XRv
IMAGE_FORMAT=vmdk
IMAGE_GLOB=*vmdk*

# match versions like:
# iosxrv-k9-demo-5.3.3.51U.vmdk
# iosxrv-k9-demo-6.1.2.vmdk
# iosxrv-k9-demo-6.2.2.15I.DT_IMAGE.vmdk
# iosxrv-k9-demo-6.2.2.1T-dhcpfix.vmdk
# iosxrv-k9-demo-6.2.2.22I.vmdk
VERSION=$(shell echo $(IMAGE) | sed -e 's/.\+[^0-9][^0-9]\([0-9]\.[0-9]\.[0-9]\(\.[0-9A-Z]\+\)\?\)\([^0-9].*\|$$\)/\1/')

-include ../makefile-sanity.include
-include ../makefile.include

==> vrnetlab-git1691862071.9187175/xrv/README.md <==
vrnetlab / Cisco IOS XRv
========================
This is the vrnetlab docker image for Cisco IOS XRv.

There are two flavours of virtual XR routers, XRv and XRv9000, where the
latter has a much more complete forwarding plane. This is for XRv; if you
have the XRv9k, see the 'xrv9k' directory instead.

It's not recommended to run XRv with less than 4GB of RAM. I have
experienced weird issues when trying to use less RAM.

Building the docker image
-------------------------
Download IOS XRv from
https://upload.cisco.com/cgi-bin/swc/fileexg/main.cgi?CONTYPES=Cisco-IOS-XRv

Put the .vmdk file in this directory and run `make docker-image` and you
should be good to go. The resulting image is called `vr-xrv`. You can tag it
with something else if you want, like `my-repo.example.com/vr-xrv`, and then
push it to your repo.
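The xrv Makefile above derives the image tag from the image filename with a sed expression. As a rough Python equivalent for illustration only (the function name is mine, and edge cases may differ from the actual sed rule), the version match looks like:

```python
import re

def image_version(filename):
    """Extract a d.d.d version, optionally followed by .<digits/uppercase>
    (e.g. 5.3.3.51U), from an IOS XRv image filename."""
    m = re.search(r'[^0-9]([0-9]\.[0-9]\.[0-9](?:\.[0-9A-Z]+)?)(?:[^0-9]|$)',
                  filename)
    return m.group(1) if m else None

# mirrors the "match versions like" examples in the Makefile comment
print(image_version("iosxrv-k9-demo-6.1.2.vmdk"))       # 6.1.2
print(image_version("iosxrv-k9-demo-5.3.3.51U.vmdk"))   # 5.3.3.51U
print(image_version("iosxrv-k9-demo.vmdk-5.3.3"))       # 5.3.3
```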
The tag is the same as the version of the XRv image, so if you have
iosxrv-k9-demo.vmdk-5.3.3 your final docker image will be called
vr-xrv:5.3.3.

Please note that you will always need to specify the version when starting
your router, as the "latest" tag is not added to any images since it has no
meaning in this context.

It's been tested to boot and respond to SSH with:

* 5.1.1.54U (TeraStream build)
* 5.1.3 (iosxrv-k9-demo-5.1.3.vmdk)
* 5.2.2 (iosxrv-k9-demo-5.2.2.vmdk)
* 5.3.0 (iosxrv-k9-demo-5.3.0.vmdk)
* 5.3.2 (iosxrv-k9-demo-5.3.2.vmdk)
* 5.3.3 (iosxrv-k9-demo.vmdk-5.3.3)
* 5.3.3.51U (TeraStream build)
* 6.0.0 (iosxrv-k9-demo-6.0.0.vmdk)
* 6.0.1 (iosxrv-k9-demo.vmdk-6.0.1)

Usage
-----
```
docker run -d --privileged --name my-xrv-router vr-xrv
```

You can run the image with `--privileged` to make use of KVM's hardware
assisted virtualisation, without which CPU emulation will be used instead.
Although I haven't measured, I imagine `--privileged` results in a
considerable performance boost over emulation. Further, emulation mode
hasn't been as thoroughly tested.

It takes about 150 seconds for the virtual router to start, after which we
can log in over SSH / NETCONF with the specified credentials.

If you want to look at the startup process you can specify `-i -t` to
docker run and you'll get an interactive terminal; do note that docker will
terminate as soon as you close it though. Use `-d` for long running routers.

FUAQ - Frequently or Unfrequently Asked Questions
-------------------------------------------------
##### Q: What is the difference between XRv and XRv9000?
A: Cisco is probably better at giving a thorough answer to this question,
but essentially XRv is meant for low-throughput labs while XRv9000 has a
much higher performing forwarding plane that can be used for forwarding of
production traffic.

##### Q: Why not use XRv9000?
A: It seems that all the forwarding plane features I am looking for are
available in XRv, so there is very little benefit to XRv9000. On the
contrary, XRv supports up to 128 interfaces
(http://www.cisco.com/c/en/us/td/docs/ios_xr_sw/ios_xrv/install_config/b-xrv/b-xrv_chapter_01.html)
with a single VM, whereas XRv9000 seems to support up to 11 NICs (see
http://www.cisco.com/c/en/us/td/docs/routers/virtual-routers/configuration/guide/b-xrv9k-cg/b-xrv9k-cg_chapter_0111.html).

##### Q: How many NICs are supported?
A: 128, which is the maximum as specified by Cisco. I use multiple PCI
buses to reach this number, and while the current setting is for 128 I have
successfully started XRv with more, although I have not done any thorough
testing.

##### Q: Is a license required?
A: Yes and no. XRv can run in a demo mode or a production mode, where the
former is free and the latter costs money. The download URL provided
earlier is for the free demo version. In the demo mode there are hard-coded
users (i.e. not very secure for production) and it is rate-limited to a
total throughput of 2Mbps.

##### Q: How come CVAC is not used to feed the initial configuration?
A: CVAC uses a virtual CD-ROM drive to feed an initial configuration into
XR. Unfortunately it doesn't support generating crypto keys, which are
required for SSH, so it cannot fully replace the serial approach, and
therefore I opted to do everything over the serial interface.
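The answer above mentions adding multiple PCI buses to get past the slot limit of a single bus. As an illustrative sketch only (not vrnetlab's actual code; the device names, slot count, and NIC model are assumptions), generating qemu NIC arguments across extra PCI bridges could look like:

```python
def nic_args(num_nics, slots_per_bus=32):
    """Build qemu -device arguments for num_nics NICs, adding one
    pci-bridge per group of slots_per_bus NICs (illustrative sketch)."""
    args = []
    num_buses = (num_nics + slots_per_bus - 1) // slots_per_bus
    # one extra PCI bridge per group of NICs
    for bus in range(1, num_buses + 1):
        args += ["-device", "pci-bridge,chassis_nr=%d,id=pci.%d" % (bus, bus)]
    # place each NIC on its bridge, one slot per NIC
    for i in range(num_nics):
        bus = i // slots_per_bus + 1
        slot = i % slots_per_bus + 1
        args += ["-device",
                 "e1000,netdev=p%02d,bus=pci.%d,addr=0x%x" % (i, bus, slot)]
    return args

args = nic_args(128)   # 4 bridges, 128 NICs
```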
==> vrnetlab-git1691862071.9187175/xrv/docker/Dockerfile <==
FROM debian:bullseye
MAINTAINER Kristian Larsson <kristian@spritelink.net>

ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update -qy \
 && apt-get upgrade -qy \
 && apt-get install -y \
    bridge-utils \
    iproute2 \
    python3-ipy \
    socat \
    qemu-kvm \
 && rm -rf /var/lib/apt/lists/*

ARG IMAGE
COPY $IMAGE* /
COPY *.py /

EXPOSE 22 161/udp 830 5000 10000-10099
HEALTHCHECK CMD ["/healthcheck.py"]
ENTRYPOINT ["/launch.py"]

==> vrnetlab-git1691862071.9187175/xrv/docker/launch.py <==
#!/usr/bin/env python3

import datetime
import logging
import os
import random
import re
import signal
import sys
import telnetlib
import time

import vrnetlab

def handle_SIGCHLD(signal, frame):
    os.waitpid(-1, os.WNOHANG)

def handle_SIGTERM(signal, frame):
    sys.exit(0)

signal.signal(signal.SIGINT, handle_SIGTERM)
signal.signal(signal.SIGTERM, handle_SIGTERM)
signal.signal(signal.SIGCHLD, handle_SIGCHLD)

TRACE_LEVEL_NUM = 9
logging.addLevelName(TRACE_LEVEL_NUM, "TRACE")
def trace(self, message, *args, **kws):
    # Yes, logger takes its '*args' as 'args'.
    if self.isEnabledFor(TRACE_LEVEL_NUM):
        self._log(TRACE_LEVEL_NUM, message, args, **kws)
logging.Logger.trace = trace

class XRV_vm(vrnetlab.VM):
    def __init__(self, username, password):
        for e in os.listdir("/"):
            if re.search(".vmdk", e):
                disk_image = "/" + e
        super(XRV_vm, self).__init__(username, password,
                                     disk_image=disk_image, ram=3072)
        self.num_nics = 128
        self.credentials = [
                ['admin', 'admin']
            ]
        self.xr_ready = False

    def bootstrap_spin(self):
        """ """
        if self.spins > 300:
            # too many spins with no result -> give up
            self.stop()
            self.start()
            return

        (ridx, match, res) = self.tn.expect([b"Press RETURN to get started",
            b"SYSTEM CONFIGURATION COMPLETE",
            b"Enter root-system username",
            b"Username:", b"^[^ ]+#"], 1)
        if match: # got a match!
            if ridx == 0: # press return to get started, so we press return!
                self.logger.debug("got 'press return to get started...'")
                self.wait_write("", wait=None)
            if ridx == 1: # system configuration complete
                self.logger.info("IOS XR system configuration is complete, should be able to proceed with bootstrap configuration")
                self.wait_write("", wait=None)
                self.xr_ready = True
            if ridx == 2: # initial user config
                self.logger.info("Creating initial user")
                self.wait_write(self.username, wait=None)
                self.wait_write(self.password, wait="Enter secret:")
                self.wait_write(self.password, wait="Enter secret again:")
                self.credentials.insert(0, [self.username, self.password])
            if ridx == 3: # matched login prompt, so should login
                self.logger.debug("matched login prompt")
                try:
                    username, password = self.credentials.pop(0)
                except IndexError as exc:
                    self.logger.error("no more credentials to try")
                    return
                self.logger.debug("trying to log in with %s / %s" % (username, password))
                self.wait_write(username, wait=None)
                self.wait_write(password, wait="Password:")
            if self.xr_ready == True and ridx == 4:
                # run main config!
                self.bootstrap_config()
                # close telnet connection
                self.tn.close()
                # startup time?
                startup_time = datetime.datetime.now() - self.start_time
                self.logger.info("Startup complete in: %s" % startup_time)
                # mark as running
                self.running = True
            return

        # no match, if we saw some output from the router it's probably
        # booting, so let's give it some more time
        if res != b'':
            self.logger.trace("OUTPUT: %s" % res.decode())
            # reset spins if we saw some output
            self.spins = 0

        self.spins += 1
        return

    def bootstrap_config(self):
        """ Do the actual bootstrap config """
        self.logger.info("applying bootstrap configuration")
        self.wait_write("", None)
        self.wait_write("terminal length 0")

        self.wait_write("crypto key generate rsa")
        # check if we are prompted to overwrite current keys
        (ridx, match, res) = self.tn.expect([b"How many bits in the modulus",
            b"Do you really want to replace them",
            b"^[^ ]+#"], 10)
        if match: # got a match!
            if ridx == 0:
                self.wait_write("2048", None)
            elif ridx == 1: # press return to get started, so we press return!
                self.wait_write("no", None)

        # make sure we get our prompt back
        self.wait_write("")

        if self.username and self.password:
            self.wait_write("admin")
            self.wait_write("configure")
            self.wait_write("username %s group root-system" % (self.username))
            self.wait_write("username %s group cisco-support" % (self.username))
            self.wait_write("username %s secret %s" % (self.username, self.password))
            self.wait_write("commit")
            self.wait_write("exit")
            self.wait_write("exit")

        self.wait_write("show interface description")
        self.wait_write("configure")
        # configure netconf
        self.wait_write("ssh server v2")
        self.wait_write("ssh server netconf port 830") # for 5.1.1
        self.wait_write("ssh server netconf vrf default") # for 5.3.3
        self.wait_write("netconf agent ssh") # for 5.1.1
        self.wait_write("netconf-yang agent ssh") # for 5.3.3
        # configure xml agent
        self.wait_write("xml agent tty")
        # configure mgmt interface
        self.wait_write("interface MgmtEth 0/0/CPU0/0")
        self.wait_write("no shutdown")
        self.wait_write("ipv4 address 10.0.0.15/24")
        self.wait_write("exit")
        self.wait_write("commit")
        self.wait_write("exit")


class XRV(vrnetlab.VR):
    def __init__(self, username, password):
        super(XRV, self).__init__(username, password)
        self.vms = [ XRV_vm(username, password) ]


if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser(description='')
    parser.add_argument('--trace', action='store_true',
            help='enable trace level logging')
    parser.add_argument('--username', default='vrnetlab', help='Username')
    parser.add_argument('--password', default='VR-netlab9', help='Password')
    args = parser.parse_args()

    LOG_FORMAT = "%(asctime)s: %(module)-10s %(levelname)-8s %(message)s"
    logging.basicConfig(format=LOG_FORMAT)
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    if args.trace:
        logger.setLevel(1)

    vr = XRV(args.username, args.password)
    vr.start()

==> vrnetlab-git1691862071.9187175/xrv9k/LICENSE <==
The MIT License (MIT)

Copyright (c) 2016 Kristian Larsson <kristian@spritelink.net>

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

==> vrnetlab-git1691862071.9187175/xrv9k/Makefile <==
VENDOR=Cisco
NAME=XRv9k
IMAGE_FORMAT=qcow2
IMAGE_GLOB=*qcow2*

# match versions like:
# xrv9k-fullk9-x.vrr-6.1.3.qcow2
# xrv9k-fullk9-x.vrr-6.2.1.qcow2
VERSION=$(shell echo $(IMAGE) | sed -e 's/.\+[^0-9]\([0-9]\.[0-9]\.[0-9]\(\.[0-9A-Z]\+\)\?\)\([^0-9].*\|$$\)/\1/')

-include ../makefile-sanity.include
-include ../makefile.include
-include ../makefile-install.include

==> vrnetlab-git1691862071.9187175/xrv9k/README.md <==
vrnetlab / Cisco IOS XRv9k
==========================
This is the vrnetlab docker image for Cisco IOS XRv9k.

There are two flavours of virtual XR routers, XRv and XRv9k, where the
latter has a much more complete forwarding plane. This is for XRv9k; if you
have the non-9k, see the 'xrv' directory instead.

I've not tested XRv9k with less than 4 cores and 8GB of RAM.

Building the docker image
-------------------------
Obtain the XRv9k release from Cisco. They generally ship an iso for a
custom install as well as a pre-built qcow2 image. For some releases the
pre-built qcow2 is quite large, so making your own from the iso is
recommended. At some point we may support creating the qcow2 from the iso
in vrnetlab, but that is currently not supported.

Put the .qcow2 file in this directory and run `make docker-image` and you
should be good to go. The resulting image is called `vr-xrv9k`. You can tag
it with something else if you want, like `my-repo.example.com/vr-xrv`, and
then push it to your repo.
The tag is the same as the version of the XRv9k image, so if you have
xrv9k-fullk9-x.vrr-6.2.1.qcow2 your final docker image will be called
vr-xrv9k:6.2.1.

Please note that you will always need to specify the version when starting
your router, as the "latest" tag is not added to any images since it has no
meaning in this context.

It's been tested to boot and respond to SSH with:

* 6.1.3 (xrv9k-fullk9-x.vrr-6.1.3.qcow2)
* 6.2.1 (xrv9k-fullk9-x.vrr-6.2.1.qcow2)
* xrv9k-fullk9-x-6.4.2.qcow2  MD5:6958763192c7bb59a1b8049d377de1b4

Usage
-----
```
docker run -d --privileged --name my-xrv-router vr-xrv9k
```

You can run the image with `--privileged` to make use of KVM's hardware
assisted virtualisation, without which CPU emulation will be used instead.
Although I haven't measured, I imagine `--privileged` results in a
considerable performance boost over emulation. Further, emulation mode
hasn't been as thoroughly tested.

It takes about 150 seconds for the virtual router to start, after which we
can log in over SSH / NETCONF with the specified credentials.

If you want to look at the startup process you can specify `-i -t` to
docker run and you'll get an interactive terminal; do note that docker will
terminate as soon as you close it though. Use `-d` for long running routers.

FUAQ - Frequently or Unfrequently Asked Questions
-------------------------------------------------
##### Q: What is the difference between XRv and XRv9k?
A: Cisco is probably better at giving a thorough answer to this question,
but essentially XRv is meant for low-throughput labs while XRv9k has a much
higher performing forwarding plane that can be used for forwarding of
production traffic.

##### Q: How many NICs are supported?
A: Cisco specifies a maximum of 11 NICs, but that seems to be baloney as it
successfully starts with 226 NICs. Be aware though that the startup time
scales linearly with the number of interfaces, so unless you actually need
a lot of interfaces it is better to start it with fewer.
The default is set to 24, which felt like a good compromise and also means
only a single PCI bus is needed, which just felt like a good thing.

##### Q: Is a license required?
A: Yes and no. XRv9k can run in a demo mode or a production mode, where the
former is free and the latter costs money.

##### Q: How come CVAC is not used to feed the initial configuration?
A: CVAC uses a virtual CD-ROM drive to feed an initial configuration into
XR. Unfortunately it doesn't support generating crypto keys, which are
required for SSH, so it cannot fully replace the serial approach, and
therefore I opted to do everything over the serial interface.

==> vrnetlab-git1691862071.9187175/xrv9k/docker/Dockerfile <==
FROM debian:bullseye
MAINTAINER Kristian Larsson <kristian@spritelink.net>

ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update -qy \
 && apt-get upgrade -qy \
 && apt-get install -y \
    bridge-utils \
    iproute2 \
    python3-ipy \
    socat \
    qemu-kvm \
 && rm -rf /var/lib/apt/lists/*

ARG IMAGE
COPY $IMAGE* /
COPY *.py /

EXPOSE 22 161/udp 830 5000-5003 10000-10099
HEALTHCHECK CMD ["/healthcheck.py"]
ENTRYPOINT ["/launch.py"]

==> vrnetlab-git1691862071.9187175/xrv9k/docker/launch.py <==
#!/usr/bin/env python3

import datetime
import logging
import os
import random
import re
import signal
import sys
import telnetlib
import time

import vrnetlab

def handle_SIGCHLD(signal, frame):
    os.waitpid(-1, os.WNOHANG)

def handle_SIGTERM(signal, frame):
    sys.exit(0)

signal.signal(signal.SIGINT, handle_SIGTERM)
signal.signal(signal.SIGTERM, handle_SIGTERM)
signal.signal(signal.SIGCHLD, handle_SIGCHLD)

TRACE_LEVEL_NUM = 9
logging.addLevelName(TRACE_LEVEL_NUM, "TRACE")
def trace(self, message, *args, **kws):
    # Yes, logger takes its '*args' as 'args'.
    if self.isEnabledFor(TRACE_LEVEL_NUM):
        self._log(TRACE_LEVEL_NUM, message, args, **kws)
logging.Logger.trace = trace

class XRV_vm(vrnetlab.VM):
    def __init__(self, username, password, ram, nics, install_mode=False):
        for e in os.listdir("/"):
            if re.search(".qcow2", e):
                disk_image = "/" + e
        super(XRV_vm, self).__init__(username, password,
                                     disk_image=disk_image, ram=ram*1024)
        self.num_nics = nics
        self.install_mode = install_mode
        self.qemu_args.extend(["-cpu", "host",
                               "-smp", "cores=4,threads=1,sockets=1",
                               "-serial", "telnet:0.0.0.0:50%02d,server,nowait" % (self.num + 1),
                               "-serial", "telnet:0.0.0.0:50%02d,server,nowait" % (self.num + 2),
                               "-serial", "telnet:0.0.0.0:50%02d,server,nowait" % (self.num + 3)])
        self.credentials = [
                ['admin', 'admin']
            ]
        self.xr_ready = False

    def gen_mgmt(self):
        """ Generate qemu args for the mgmt interface(s) """
        res = []
        # mgmt interface
        res.extend(["-device", "virtio-net-pci,netdev=mgmt,mac=%s" %
                    vrnetlab.gen_mac(0)])
        res.extend(["-netdev", "user,id=mgmt,net=10.0.0.0/24,tftp=/tftpboot,%s" %
                    self.gen_host_forwards()])
        # dummy interface for xrv9k ctrl interface
        res.extend(["-device", "virtio-net-pci,netdev=ctrl-dummy,id=ctrl-dummy,mac=%s" %
                    vrnetlab.gen_mac(0),
                    "-netdev", "tap,ifname=ctrl-dummy,id=ctrl-dummy,script=no,downscript=no"])
        # dummy interface for xrv9k dev interface
        res.extend(["-device", "virtio-net-pci,netdev=dev-dummy,id=dev-dummy,mac=%s" %
                    vrnetlab.gen_mac(0),
                    "-netdev", "tap,ifname=dev-dummy,id=dev-dummy,script=no,downscript=no"])
        return res

    def bootstrap_spin(self):
        """ """
        if self.spins > 300:
            # too many spins with no result -> give up
            self.stop()
            self.start()
            return

        (ridx, match, res) = self.tn.expect([b"Press RETURN to get started",
            b"Not settable: Success", # no SYSTEM CONFIGURATION COMPLETE in xrv9k?
            b"Enter root-system username",
            b"Username:", b"ios#"], 1)
        if match: # got a match!
            if ridx == 0: # press return to get started, so we press return!
                self.logger.debug("got 'press return to get started...'")
                self.wait_write("", wait=None)
            if ridx == 1: # system configuration complete
                self.logger.info("IOS XR system configuration is complete, should be able to proceed with bootstrap configuration")
                self.wait_write("", wait=None)
                self.xr_ready = True
            if ridx == 2: # initial user config
                if self.install_mode:
                    self.running = True
                    return
                self.logger.info("Creating initial user")
                self.wait_write(self.username, wait=None)
                self.wait_write(self.password, wait="Enter secret:")
                self.wait_write(self.password, wait="Enter secret again:")
                self.credentials.insert(0, [self.username, self.password])
            if ridx == 3: # matched login prompt, so should login
                self.logger.debug("matched login prompt")
                try:
                    username, password = self.credentials.pop(0)
                except IndexError as exc:
                    self.logger.error("no more credentials to try")
                    return
                self.logger.debug("trying to log in with %s / %s" % (username, password))
                self.wait_write(username, wait=None)
                self.wait_write(password, wait="Password:")
                self.logger.debug("logged in with %s / %s" % (username, password))
            if self.xr_ready == True and ridx == 4:
                # run main config!
                if not self.bootstrap_config():
                    # main config failed :/
                    self.logger.debug('bootstrap_config failed, restarting device')
                    self.stop()
                    self.start()
                    return
                # close telnet connection
                self.tn.close()
                # startup time?
                startup_time = datetime.datetime.now() - self.start_time
                self.logger.info("Startup complete in: %s" % startup_time)
                # mark as running
                self.running = True
            return

        # no match, if we saw some output from the router it's probably
        # booting, so let's give it some more time
        if res != b'':
            self.logger.trace("OUTPUT: %s" % res.decode())
            # reset spins if we saw some output
            self.spins = 0

        self.spins += 1
        return

    def bootstrap_config(self):
        """ Do the actual bootstrap config """
        self.logger.info("applying bootstrap configuration")
        self.wait_write("", None)
        self.wait_write("terminal length 0")

        self.wait_write("crypto key generate rsa")
        # check if we are prompted to overwrite current keys
        (ridx, match, res) = self.tn.expect([b"How many bits in the modulus",
            b"Do you really want to replace them",
            b"^[^ ]+#"], 10)
        if match: # got a match!
            if ridx == 0:
                self.wait_write("2048", None)
            elif ridx == 1: # press return to get started, so we press return!
                self.wait_write("no", None)

        # make sure we get our prompt back
        self.wait_write("")

        # wait for Gi0/0/0/0 in config
        if not self.wait_config("show interfaces description", "Gi0/0/0/0"):
            return False
        # wait for call-home in config
        if not self.wait_config("show running-config call-home", "service active"):
            return False

        self.wait_write("configure")
        # configure netconf
        self.wait_write("ssh server v2")
        self.wait_write("ssh server netconf port 830") # for 5.1.1
        self.wait_write("ssh server netconf vrf default") # for 5.3.3
        self.wait_write("netconf agent ssh") # for 5.1.1
        self.wait_write("netconf-yang agent ssh") # for 5.3.3
        # configure xml agent
        self.wait_write("xml agent tty")
        # configure mgmt interface
        self.wait_write("interface MgmtEth 0/RP0/CPU0/0")
        self.wait_write("no shutdown")
        self.wait_write("ipv4 address 10.0.0.15/24")
        self.wait_write("exit")
        self.wait_write("commit")
        self.wait_write("exit")
        return True


class XRV_Installer(vrnetlab.VR_Installer):
    """ XRV installer

        Will start the XRV and then shut it down.
        Booting the XRV for the first time requires the XRV itself to
        install internal packages, after which it will restart. Subsequent
        boots will not require this restart. By running this "install" when
        building the docker image we can decrease the normal startup time
        of the XRV.
    """
    def __init__(self, username, password, ram, nics):
        super().__init__()
        self.vm = XRV_vm(username, password, ram, nics, install_mode=True)


class XRV(vrnetlab.VR):
    def __init__(self, username, password, ram, nics):
        super(XRV, self).__init__(username, password)
        self.vms = [ XRV_vm(username, password, ram, nics) ]


if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser(description='')
    parser.add_argument('--trace', action='store_true',
            help='enable trace level logging')
    parser.add_argument('--username', default='vrnetlab', help='Username')
    parser.add_argument('--password', default='VR-netlab9', help='Password')
    parser.add_argument('--install', action='store_true', help='Initial install')
    parser.add_argument('--num-nics', type=int, default=24, help='Number of NICs')
    parser.add_argument('--ram', type=int, default=16, help='RAM in GB')
    args = parser.parse_args()

    LOG_FORMAT = "%(asctime)s: %(module)-10s %(levelname)-8s %(message)s"
    logging.basicConfig(format=LOG_FORMAT)
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    if args.trace:
        logger.setLevel(1)

    if args.install:
        vr = XRV_Installer(args.username, args.password, args.ram, args.num_nics)
        vr.install()
    else:
        vr = XRV(args.username, args.password, args.ram, args.num_nics)
        vr.start()
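Each launch.py in this archive registers the same custom TRACE level below DEBUG so that raw console output can be logged separately. Isolated into a minimal, runnable sketch (the logger name here is arbitrary):

```python
import logging

TRACE_LEVEL_NUM = 9  # just below DEBUG (10)
logging.addLevelName(TRACE_LEVEL_NUM, "TRACE")

def trace(self, message, *args, **kws):
    # logger takes its '*args' as 'args'
    if self.isEnabledFor(TRACE_LEVEL_NUM):
        self._log(TRACE_LEVEL_NUM, message, args, **kws)

# attach the new level as a method on every Logger
logging.Logger.trace = trace

logging.basicConfig(format="%(levelname)-8s %(message)s")
logger = logging.getLogger("demo")
logger.setLevel(1)  # equivalent of the launch scripts' --trace flag
logger.trace("raw console output: %s", "hello")
```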