Interface Ordering on VMware (vMX / vSRX / vQFX)

Just a quick post on interface ordering when using vmxnet3 on VMware. When you add more than 4 vmxnet3 data interfaces, you may see that from the 5th interface onwards the ordering is no longer sequential. This is not a vMX-specific issue; it is down to the way VMware maps PCI slot numbers into the guest PCI bus topology. More info here.

For vMX, interface mapping would initially look like this:

vnic1 – fxp0 (e1000)
vnic2 – internal link (e1000)
vnic3 – ge-0/0/0 (vmxnet3)
vnic4 – ge-0/0/1 (vmxnet3)
vnic5 – ge-0/0/2 (vmxnet3)
vnic6 – ge-0/0/3 (vmxnet3)

However on the addition of vnic7 – ge-0/0/4, the interface ordering changes to:

vnic1 – fxp0
vnic2 – internal link
vnic3 – ge-0/0/0
vnic4 – ge-0/0/2
vnic5 – ge-0/0/3
vnic6 – ge-0/0/4
vnic7 – ge-0/0/1

As you can see, the interface ordering is no longer sequential.

To work around this issue, perform the following steps:

  1. Power off the VFP VM and open the VMware Edit Settings page.
  2. If you are using the vSphere Web Client, select the VM Options tab, click Advanced in the left frame, and click Edit Configuration at the bottom of the page.
    If you are using the vSphere desktop client, select General under Advanced in the left frame, and click Configuration Parameters at the bottom of the page.
  3. Change the value of the following pciBridge settings from TRUE to FALSE:
    pciBridge5.present
    pciBridge6.present
    pciBridge7.present
    Do not make any changes to pciBridge0.present or pciBridge4.present.
  4. Click OK to update the VM settings, and then add any further vmxnet3 network adapters as may be required.
  5. Now power up the VFP VM and check that the interface ordering is now correct.
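After the change in step 3, the relevant entries in the VM's .vmx file should read as follows (a sketch – the other pciBridge lines in the file are left untouched):

```
pciBridge5.present = FALSE
pciBridge6.present = FALSE
pciBridge7.present = FALSE
```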

Note: if you require more than 7 data interfaces, further changes may be necessary. Repeat the steps above, but this time change the pciBridge present settings back from FALSE to TRUE, and update the parameters below (adding any that are missing):
pciBridge5.virtualDev = pcieRootPort
pciBridge5.functions = 8
pciBridge5.pciSlotNumber = 22
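If you need pciBridge6 and pciBridge7 as well, the equivalent entries would look like this – the pciSlotNumber values of 23 and 24 are VMware's usual defaults rather than something from the vMX documentation, so verify them against your own VM:

```
pciBridge6.virtualDev = pcieRootPort
pciBridge6.functions = 8
pciBridge6.pciSlotNumber = 23
pciBridge7.virtualDev = pcieRootPort
pciBridge7.functions = 8
pciBridge7.pciSlotNumber = 24
```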

Juniper Day One: vMX Up and Running

My Day One book on the vMX is now available – the book gives an introduction to vMX and then walks through a complete build of it on Ubuntu Linux (Juniper's preferred distribution). To get you familiar with vMX and Junos, you will get straight into the lab to build and scale a topology, learning about EVPN and VPLS along the way.


The Day One is for network engineers or architects who are interested in learning more about vMX, and KVM in general. You might be thinking about how to deploy vMX in a production environment, or how to build and scale a lab or simulation, without access to physical routers.

It was a fun project and I’d like to thank Juniper for the opportunity.

Here’s how you can get a copy of the book:

Also thanks to Said van de Klundert for the review of the Day One.

Juniper vMX – Getting Started Guide (VMware)

As of Junos 15.1F4, Juniper are now officially supporting vMX on VMware.

The installation process has quite a few steps to it, so following on from my vMX Getting Started Guide for KVM, here is a quick post showing you how to do it on a home lab running VMware ESXi 6.0.

ESXi Installation

Let’s get started with the installation of ESXi. I’m running ESXi as a nested VM on a MacBook, but the process would be the same on bare metal.

Register with VMware, download the ESXi ISO, and then boot your machine from it. The installation of ESXi is a simple process: go through the installation steps one by one and reboot once the installation has completed.

Following the reboot, ESXi will load up, and if your management LAN is running DHCP the host will have been assigned an IP address for management. To manage the free ESXi you need to download the VMware client: open a web browser, connect to the ESXi IP, download the tools as suggested, and then load up the client.

Once the client is loaded, you should first license the ESXi host. You can get a free license from VMware at the ESXi download page.

In the client, the license is applied under Home – Inventory: click the Configuration tab, then Licensed Features, and click Edit to apply the license.


vMX Installation

If you have a valid login, you can download vMX directly from the vMX download page.

Now load up the client for your ESXi server and login.

There is no OVA build currently, so several steps need to be done manually.

Copy Files to the Datastore

Before progressing any further you will need to extract the vMX package. All of the vmdk files are located in the subdirectory “/vmdk”.

  • Software image for vMX VCP: jinstall64-vmx-15.1F4.15-domestic.vmdk
  • Software image for VCP file storage: vmxhdd.vmdk
  • Software image for VFP: vFPC-20151203.vmdk
  • Virtual hard disk with bootstrapping information, used by the VCP: metadata_usb.vmdk

Click the summary tab, select the datastore under Storage, right click and select Browse Datastore.

Create a folder called “vmx” and then click the upload file button and upload all of the vmdk files listed above to this new folder.


Set Up the vMX Network

If you are not familiar with vMX then at this point it would be a good idea to read over my vMX Getting Started Guide for KVM, so that you understand the architecture of vMX, and how the vMX virtual machines communicate with one another.

The VMware release is no different to the KVM release when it comes to the required default networks. There are a minimum of three networks that will need to be configured:

  • Management network (br-ext)
  • Internal network for VCP and VFP communication (br-int)
  • Data interfaces

To create these networks, go back to the ESXi client, select the ESXi server and click the Configuration tab. Select Networking under Hardware. In the top right corner click Add networking.

Management Network

  1. Select Virtual Machine as the connection type and click next
  2. Select Use vSwitch0 and click next
  3. At port group properties, set network label to br-ext and click next
  4. Now click finish

You will see the new port group “br-ext” has been added to the standard switch vSwitch0.

Internal Network

Again, select Networking under Hardware. In the top right corner click Add networking.

  1. Select Virtual Machine as the connection type and click next
  2. This time select Create a vSphere standard switch and clear all physical NIC check boxes, then click next
  3. For network label, use br-int
  4. You should now have a port group called “br-int”, with no adapters assigned

Data Network

Now add a data network – this process is repeated for each data NIC that you wish to add. Here I will create a single network named p1p1.

Again, select Networking under Hardware. In the top right corner click Add networking.

  1. Select Virtual Machine as the connection type and click next
  2. Select Create a vSphere standard switch and add the physical NIC that you want to use, and click next
  3. Name the connection p1p1, click next, and finish

Repeat this process if you have any more data adapters to add to vMX.

Complete the network configuration

You will now see the 3 networks in the networking summary screen – br-ext, br-int and p1p1.

You must enable promiscuous mode on all vSwitches so that frames with any destination MAC address can reach the vMX – this is needed, for example, for OSPF to work properly.

For each vSwitch, click properties, then select vSwitch and click edit. Select security and change promiscuous mode to accept.
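If you prefer the command line, promiscuous mode can also be set per vSwitch from the ESXi shell. The command below is the ESXi 6.x esxcli syntax as I understand it, so double-check it on your version (shown for vSwitch0; repeat for the internal and data vSwitches):

```
esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=true
```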

Set Up the vMX Virtual Machines

Just like vMX on KVM, there are two VMs that must be created – the virtual control plane (VCP) running the Junos OS, and the virtual forwarding plane (VFP) running an x86-virtualised version of Trio on Wind River Linux.

The process for creating both of the virtual machines is very similar. It’s a simple case of following the VMware wizard and choosing the correct settings for the VM.

VCP

The process below outlines the steps required to create the VCP virtual machine.

  1. Within the VMware client, select the ESXi host, right click, new virtual machine
  2. Select to create a custom virtual machine, and press next
  3. Give the machine a suitable name, e.g. vcp-vmx1
  4. Select the datastore where you would like to store the VM and press next
  5. Set the virtual machine version to 8
  6. For the guest OS type, choose Other, Other (64-bit)
  7. Select one virtual socket, and 1 cpu core per socket, to assign a total of 1 CPU core to the VCP
  8. Provision 2GB of memory
  9. In the network setup, select 2 network adapters.
    Assign br-ext as the 1st adapter and br-int as the 2nd adapter.
    Set both to be e1000.
  10. Select LSI Logic Parallel as the SCSI controller
  11. When prompted to select the disk type, choose use an existing virtual disk, and then on the next screen browse to the correct datastore and select the jinstall64-vmx-15.1F4.15-domestic.vmdk image that you uploaded earlier
  12. At the advanced options page, simply click next
  13. Select to edit the virtual machine settings before completion and click continue
  14. Now you need to add two more hard drives – click Add, and then Hard Disk, this time selecting vmxhdd.vmdk as the second drive
  15. Repeat the add Hard Disk process again this time adding the metadata_usb.vmdk image as the third drive.

NOTE: this 3rd hard drive is important – if you don’t configure it, the first time the VCP boots it will come up as an “olive”, not a vMX!

You can now boot the VCP!

If the boot process appears to wait at “Loading /boot/loader” do not worry, on the VMware release you don’t see the full Junos OS boot process on the console.

VFP

The process below outlines the steps required to create the VFP virtual machine.

  1. Within the VMware client, select the ESXi host, right click, new virtual machine
  2. Select custom and press next
  3. Give the machine a suitable name, e.g. vfp-vmx1
  4. Select the datastore where you would like to store the VM and press next
  5. Set the virtual machine version to 8
  6. For the guest OS, choose Other, Other (64-bit)
  7. When prompted to select the number of CPUs, the minimum you can choose for this build is three virtual sockets, with 1 CPU core per socket, to give a total of 3 CPU cores assigned to the VFP
  8. Provision 8GB of memory
  9. In the network setup, select at least 3 network adapters, assigning br-ext as the 1st adapter and br-int as the 2nd adapter. Set them both to be e1000 adapters. The data adapters can then be set to vmxnet3 or e1000 depending on your preference. For better performance I’d suggest vmxnet3, because it is a paravirtualised adapter.
    NOTE: at the time of writing SR-IOV is not officially supported on VMware, only on KVM.
  10. Select LSI Logic Parallel as the SCSI controller
  11. When prompted to select the disk to use, choose use an existing virtual disk, and then on the next screen browse to the datastore and select the vFPC-20151203.vmdk image that you uploaded earlier (bear in mind the image naming has changed from vPFE* to vFPC* in this latest release of vMX)
    NOTE: on my build the Juniper supplied image needed to be converted to thick provisioned using vmkfstools, otherwise the VM refused to boot (I was getting a VMware error related to free space even though the drives were not full). You may not have to do this! Thanks to @tomverhaeg for working through this strange issue with me!
  12. At the advanced options page, simply click next
  13. At Ready to Complete, you can click finish and boot the VFP!

NOTE: VMware virtual console for the VFP does not show anything beyond “Please Wait: Booting” – if you wish to login to the VFP you will need to configure serial console. The process for setting up serial console is described here.
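For reference, the thick-provisioning conversion mentioned in the note for step 11 can be done from the ESXi shell with vmkfstools, cloning the disk to a new thick-provisioned vmdk – the output file name here is my own choice:

```
vmkfstools -i vFPC-20151203.vmdk vFPC-20151203-thick.vmdk -d zeroedthick
```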

 Verification

At this point if both machines have powered on successfully you should have a running vMX.

Now login to the VCP and run the Junos command “show chassis fpc”. After a few moments you should see the FPC as online and ge-* interfaces will appear.
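Once the FPC shows online, you can also confirm the data interfaces are present – the hostname vmx1 here is just a placeholder:

```
root@vmx1> show interfaces terse | match ge-
```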

Have fun!

For more information, please refer to the Juniper documentation on the VMware release of vMX.

 

Route Leaking with Junos

I’ve been working on a few projects recently that have in one way or another required the leaking of routes between different routing tables / routing instances. When I first started working with Junos I did find RIB groups a bit confusing, so here goes with a post about the feature.

In this post I’m going to show you three ways to leak routes between tables – using RIB groups, Instance Import and Logical Tunnels.

Lab topology

In this post I will re-use the topology I created in my last vMX post.


The topology consists of 2 x vMX. PE1 and PE2 will be the main routers on each vMX, and INTGW and CE2 will be Logical Systems.

The link between PE2 and CE2 will be via a VRF routing-instance “red”. All other routes are in table inet.0.

The object of the lab is to leak routes between inet.0 and red.inet.0 on PE2. We will leak a default route from inet.0 into red.inet.0, and leak CE2’s loopback into inet.0.

First let’s setup the topology…

INTGW

This device is simulating an Internet gateway. I will originate a default route in IS-IS to the rest of the topology.

Note: the Loopback address is not advertised in to IS-IS.

root@PE1> show configuration logical-systems INTGW
interfaces {
    ge-0/0/2 {
        unit 0 {
            family inet {
                address 192.168.12.1/24;
            }
            family iso;
        }
    }
    lo0 {
        unit 1 {
            family inet {
                address 1.1.1.1/32;
            }
            family iso {
                address 49.0001.0001.0001.0001.00;
            }
        }
    }
}
protocols {
    isis {
        export DEFAULT;
        interface ge-0/0/2.0 {
            point-to-point;
            level 1 disable;
        }
    }
}
policy-options {
    policy-statement DEFAULT {
        from {
            protocol aggregate;
            route-filter 0.0.0.0/0 exact;
        }
        then accept;
    }
}
routing-options {
    aggregate {
        route 0.0.0.0/0;
    }
}

PE1

This router really isn’t doing much of interest. It is simply running IS-IS on the interfaces towards INTGW and PE2.

PE2

This router is learning routes from INTGW and PE1 via IS-IS. The interface connecting to CE2 is placed in a routing-instance “red”. PE2 and CE2 are exchanging routes using OSPF.

interfaces {
    ge-0/0/1 {
        unit 0 {
            family inet {
                address 192.168.34.3/24;
            }
        }
    }
}
routing-instances {
    red {
        instance-type virtual-router;
        interface ge-0/0/1.0;
        protocols {
            ospf {
                area 0.0.0.0 {
                    interface ge-0/0/1.0;
                }
            }
        }
    }
}

CE2

Nothing special about CE2.

interfaces {
    ge-0/0/2 {
        unit 0 {
            family inet {
                address 192.168.34.4/24;
            }
        }
    }
    lo0 {
        unit 4 {
            family inet {
                address 4.4.4.4/32;
            }
        }
    }
}
protocols {
    ospf {
        area 0.0.0.0 {
            interface ge-0/0/2.0;
            interface lo0.4 {
                passive;
            }
        }
    }
}

Objective

Our objective is to be able to ping the loopback address on INTGW from CE2. I am not advertising the address into IS-IS, so reachability is achieved via the default route.

root@PE2> show route 0.0.0.0

inet.0: 7 destinations, 7 routes (7 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/0          *[IS-IS/165] 01:31:47, metric 30
                    > to 192.168.23.2 via ge-0/0/3.0

root@PE2> show route 1.1.1.1/32

root@PE2> ping rapid count 1 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
!
--- 1.1.1.1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 3.370/3.370/3.370/nan ms

root@PE2>

If we take a look at INTGW and CE2, neither will currently have reachability to one another.

INTGW:
root@PE1:INTGW> ping rapid count 1 4.4.4.4
PING 4.4.4.4 (4.4.4.4): 56 data bytes
ping: sendto: No route to host
.
--- 4.4.4.4 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss

CE2:
root@PE2:CE2> ping rapid count 1 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
ping: sendto: No route to host
.
--- 1.1.1.1 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss

RIB Groups

First of all I will go through how to accomplish the objective with RIB groups alone.

Essentially, a RIB group allows you to take a route that would normally be destined for one table, e.g. inet.0, and place that route in another table as well, e.g. red.inet.0. This can be done for static, connected, or dynamic routes.

A rib-group is created as below:

routing-options {
    rib-groups {
        INET0_to_RED {
            import-rib [ inet.0 red.inet.0 ];
        }
    }
}

Note: the first entry after import-rib is not where we are pulling the routes from, it is where the route would normally be placed.

This config is simply stating any routes that would normally be placed in inet.0 should also be placed in red.inet.0.

However, creating the rib-group alone will not achieve anything – the rib-group must be applied elsewhere in the configuration. You have several options depending on what you want to do:

  • Interface routes – set routing-options interface-routes rib-group inet <name>
  • Static routes – set routing-options static rib-group <name>
  • Dynamic routes – applied per protocol, e.g. set protocols ospf rib-group <name>
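As a quick sketch, here is how the rib-group from earlier would be attached at each of those points – in practice you would pick only the one matching where your routes come from:

```
set routing-options interface-routes rib-group inet INET0_to_RED
set routing-options static rib-group INET0_to_RED
set protocols ospf rib-group INET0_to_RED
```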

For this lab we’ll be leaking the IS-IS routes, so I apply the rib-group to IS-IS. Remember, as I am leaking routes from inet.0 to red.inet.0, I must apply the rib-group in the master configuration, not under the routing instance.

The rib-group is applied in to the table where the routes would normally be placed.

set protocols isis rib-group inet INET0_to_RED

Now let’s take a look in the red.inet.0 table, do we see the routes?

root@PE2> show route table red.inet.0

red.inet.0: 7 destinations, 7 routes (7 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/0          *[IS-IS/165] 05:05:37, metric 30
                    > to 192.168.23.2 via ge-0/0/3.0
2.2.2.2/32         *[IS-IS/18] 05:05:37, metric 10
                    > to 192.168.23.2 via ge-0/0/3.0
4.4.4.4/32         *[OSPF/10] 06:20:33, metric 1
                    > to 192.168.34.4 via ge-0/0/1.0
192.168.12.0/24    *[IS-IS/18] 05:05:37, metric 20
                    > to 192.168.23.2 via ge-0/0/3.0
192.168.34.0/24    *[Direct/0] 06:20:48
                    > via ge-0/0/1.0
192.168.34.3/32    *[Local/0] 06:20:48
                      Local via ge-0/0/1.0
224.0.0.5/32       *[OSPF/10] 06:20:48, metric 1
                      MultiRecv

Awesome! The IS-IS routes are there – we can see the default route and also the loopback on PE1. But what if we wanted to leak the default only? Junos has that covered with an import policy.

I’ll create a policy to accept the default only and apply that to the rib-group.

Just a quick note on the import policy – as IS-IS has a default import policy of accept, I need to add a final term to reject otherwise I will match everything! See this Juniper doc for a reminder of the default import/export policies for the various routing protocols.

policy-options {
    policy-statement DEFAULT {
        term t1 {
            from {
                route-filter 0.0.0.0/0 exact;
            }
            then accept;
        }
        term t2 {
            then reject;
        }
    }
}
rib-groups {
    INET0_to_RED {
        import-rib [ inet.0 red.inet.0 ];
        import-policy DEFAULT;
    }
}

Now the routing table only has the default route leaked!

root@PE2> show route table red.inet.0

red.inet.0: 5 destinations, 5 routes (5 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/0          *[IS-IS/165] 00:04:57, metric 30
                    > to 192.168.23.2 via ge-0/0/3.0
4.4.4.4/32         *[OSPF/10] 06:46:46, metric 1
                    > to 192.168.34.4 via ge-0/0/1.0
192.168.34.0/24    *[Direct/0] 06:47:01
                    > via ge-0/0/1.0
192.168.34.3/32    *[Local/0] 06:47:01
                      Local via ge-0/0/1.0
224.0.0.5/32       *[OSPF/10] 06:47:01, metric 1
                      MultiRecv

Cool, so at this point the red routing-instance now has a default, but what about CE2, can that see the default?

root@PE2> show route logical-system CE2

inet.0: 4 destinations, 4 routes (4 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

4.4.4.4/32         *[Direct/0] 06:50:16
                    > via lo0.4
192.168.34.0/24    *[Direct/0] 06:53:12
                    > via ge-0/0/2.0
192.168.34.4/32    *[Local/0] 06:53:12
                      Local via ge-0/0/2.0
224.0.0.5/32       *[OSPF/10] 06:51:52, metric 1
                      MultiRecv

No default route there! Well I purposely made my life difficult by running a different protocol between PE2 and CE2. Remember PE1 and PE2 are talking IS-IS, but PE2 and CE2 are talking OSPF. So whilst the IS-IS route is now in the red.inet.0 table, we need to create an export policy to redistribute the IS-IS route over to CE2 via OSPF. The export policy is applied to the routing-instance.

Note this time my policy does not need an explicit reject to be configured as the default export policy for OSPF is reject.

policy-options {
    policy-statement FROM_ISIS {
        from protocol isis;
        then accept;
    }
}

The export policy is applied to the routing-instance.

set routing-instances red protocols ospf export FROM_ISIS

Now we have an OSPF external default route present on CE2.

root@PE2> show route logical-system CE2

inet.0: 5 destinations, 5 routes (5 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/0          *[OSPF/150] 00:00:02, metric 30, tag 0
                    > to 192.168.34.3 via ge-0/0/2.0
4.4.4.4/32         *[Direct/0] 06:55:43
                    > via lo0.4
192.168.34.0/24    *[Direct/0] 06:58:39
                    > via ge-0/0/2.0
192.168.34.4/32    *[Local/0] 06:58:39
                      Local via ge-0/0/2.0
224.0.0.5/32       *[OSPF/10] 06:57:19, metric 1
                      MultiRecv

Great, so CE2 knows how to route to INTGW via the default, but at this point INTGW does not know how to route back. I could do NAT on PE2 to hide the address of CE2, or repeat the rib-group process, this time leaking routes from red.inet.0 to inet.0 on PE2. We’ll do it with a rib-group.

root@PE2# show | compare
[edit routing-options rib-groups]
+   RED_to_INET0 {
+       import-rib [ red.inet.0 inet.0 ];
+   }
[edit routing-instances red protocols ospf]
+     rib-group RED_to_INET0;
[edit policy-options]
+   policy-statement FROM_OSPF {
+       from {
+           protocol ospf;
+           route-filter 4.4.4.4/32 exact;
+       }
+       then accept;
+   }
[edit protocols isis]
+   export [ FROM_OSPF ];

Notice this time that the order of the import-rib has changed. We are copying routes from red.inet.0, as this is where the routes would normally be placed so red.inet.0 is the first entry in the import-rib statement. This also reminds us where to apply the rib-group.

The rib-group is applied to the routing-instance OSPF process, and again we must export the OSPF routes to IS-IS. Note this export is applied to the master IS-IS process, not the routing instance.

root@INTGW> show route table inet.0 0.0.0.0/0 exact

inet.0: 9 destinations, 9 routes (9 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/0          *[IS-IS/165] 09:12:27, metric 20
                    > to 192.168.12.1 via ge-0/0/1.0

At this point we should have reachability between CE2 and INTGW.

root@PE2> ping 1.1.1.1 source 4.4.4.4 rapid logical-system CE2
PING 1.1.1.1 (1.1.1.1): 56 data bytes
!!!!!
--- 1.1.1.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 3.890/4.173/4.364/0.181 ms

Success !

Instance Import

Now to repeat this again, but this time using instance import! I’ll start by clearing out the rib-groups.

delete routing-options rib-groups INET0_to_RED
delete routing-options rib-groups RED_to_INET0
delete protocols isis rib-group inet INET0_to_RED
delete routing-instances red protocols ospf rib-group RED_to_INET0

First of all I’ll create a policy to import routes from inet.0 to red.inet.0

policy-options {
    policy-statement FROM_GLOBAL {
        term t1 {
            from {
                instance master;
                route-filter 0.0.0.0/0 exact;
            }
            then accept;
        }
        term t2 {
            then reject;
        }
    }
}

This policy simply says: for a default route in the master instance’s inet.0 table, accept it for import; deny everything else. We then apply this policy to the red routing-instance.

set routing-instances red routing-options instance-import FROM_GLOBAL

Now the policy has been configured, table red.inet.0 has a default route imported from the master instance. Since I left the IS-IS to OSPF export in place from the previous rib-group exercise, CE2 will also have the default route.

root@PE2> show route table red.inet.0 0.0.0.0

red.inet.0: 5 destinations, 5 routes (5 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/0          *[IS-IS/165] 00:06:22, metric 30
                    > to 192.168.23.2 via ge-0/0/3.0

root@PE2> show route 0.0.0.0 logical-system CE2

inet.0: 5 destinations, 5 routes (5 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/0          *[OSPF/150] 00:06:29, metric 30, tag 0
                    > to 192.168.34.3 via ge-0/0/2.0

I now need to leak the CE2 loopback IP from the red.inet.0 table into the master inet.0 table. Again this is a simple routing policy, this time applied to the master routing-options.

routing-options {
    instance-import FROM_RED;
}
policy-options {
    policy-statement FROM_RED {
        term t1 {
            from {
                instance red;
                route-filter 4.4.4.4/32 exact;
            }
            then accept;
        }
        term t2 {
            then reject;
        }
    }
}

As the OSPF to IS-IS exports were left in place from the rib-group exercise, INTGW should now have the 4.4.4.4/32 route, and it does.

root@PE1> show route 4.4.4.4/32 logical-system INTGW

inet.0: 8 destinations, 8 routes (8 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

4.4.4.4/32         *[IS-IS/165] 00:02:54, metric 21
                    > to 192.168.12.2 via ge-0/0/2.0

Verification – can I ping from CE2 to INTGW, yes!

root@PE2> ping 1.1.1.1 rapid source 4.4.4.4 logical-system CE2
PING 1.1.1.1 (1.1.1.1): 56 data bytes
!!!!!
--- 1.1.1.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 3.879/8.912/24.706/8.031 ms

Using the instance-import feature is perhaps a little more intuitive than rib-groups, although both can achieve the same end result.

Logical Tunnel

A final way of doing the leaking is to use Logical Tunnel interfaces. The configuration is very simple – an LT is created with one end of the tunnel in the master inet.0 table, and the other end added to red routing instance. We then simply run a routing protocol or static routing via the tunnel interface.

First of all we create the tunnel interfaces and assign one side to the correct routing instance. This is a vMX, so I also need to enable tunnel services.

chassis {
    fpc 0 {
        pic 0 {
            tunnel-services;
        }
    }
}
interfaces {
    lt-0/0/0 {
        unit 0 {
            encapsulation ethernet;
            peer-unit 1;
            family inet {
                address 10.0.0.1/24;
            }
        }
        unit 1 {
            encapsulation ethernet;
            peer-unit 0;
            family inet {
                address 10.0.0.2/24;
            }
        }
    }
}
routing-instances {
    red {
        interface lt-0/0/0.1;
    }
}

From here, it’s a simple matter of running a routing protocol via the tunnel. As my red routing-instance is running OSPF with CE2, I configure the LT interfaces in OSPF in both the master config and the routing-instance.

Note, because I’m running IS-IS between PE1 and PE2, on PE2 I’m also redistributing IS-IS routes to OSPF and OSPF routes to IS-IS to provide reachability.

root@PE2# show | compare
[edit protocols]
+   ospf {
+       export FROM_ISIS;
+       area 0.0.0.0 {
+           interface lt-0/0/0.0;
+       }
+   }
+   isis {
+       export FROM_OSPF;
+   }
[edit routing-instances red protocols ospf area 0.0.0.0]
+       interface lt-0/0/0.1;

root@PE2> show route logical-system CE2 0.0.0.0

inet.0: 8 destinations, 8 routes (8 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/0          *[OSPF/150] 00:06:42, metric 30, tag 0
                    > to 192.168.34.3 via ge-0/0/2.0

root@PE1> show route table inet.0 4.4.4.4

inet.0: 9 destinations, 9 routes (9 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

4.4.4.4/32         *[IS-IS/165] 00:07:08, metric 12
                    > to 192.168.23.3 via ge-0/0/3.0

Can I ping from CE2 to INTGW – yes!

root@PE2# run ping 1.1.1.1 source 4.4.4.4 rapid logical-system CE2
PING 1.1.1.1 (1.1.1.1): 56 data bytes
!!!!!
--- 1.1.1.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 3.330/3.609/3.884/0.213 ms

Conclusion

I’ve shown three ways to leak routes between routing tables on Junos. There are of course other ways of doing this – a static route with next-table, or, if I were running MPLS VPNs in this lab, route-targets to play with, or the auto-export feature for leaking prefixes between local VRFs.
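For example, the next-table approach could leak the default into red with a single static route on PE2. This is a sketch only – note that Junos will reject configurations where two next-table statements could form a loop, so the return path would need another method:

```
set routing-instances red routing-options static route 0.0.0.0/0 next-table inet.0
```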

If you would like to see a post about these other methods, please say so in the comments. Thanks for reading this post 🙂

Juniper vMX – Lab Setup (2 vMX, EVPN, Logical Systems)

Following my Juniper vMX getting started guide post, I thought it would be useful to show how vMX could be used to create a lab environment.

This post follows on immediately from where the last one finished. I will create a multi-router topology on a vMX instance using Logical Systems, and then go on to configure EVPN on this topology. As with the previous post, this is all running on my MacBook Pro in a nested Ubuntu VM.

Lab topology

In this post I will create the following simple topology of 4 MX routers. You will be able to extend the principles shown here to expand your own topology to be as large and complex as you like.


The topology will consist of 2 x vMX running on the same Ubuntu host.

I will configure EVPN; however, EVPN is unfortunately not supported within a Logical System, so R2 and R3 will be the main routers on each vMX and will be my EVPN PEs.

R1 and R4 will be created as Logical System routers.

I will connect ge-0/0/1 and ge-0/0/2 on each vMX back to back using a Linux bridge, and these interfaces will then provide the interconnection between the main router and the Logical System using VLANs. I could use LT interfaces, but where is the fun in that?

ge-0/0/3 on vMX1 and vMX2 will be interconnected using a Linux virtio bridge on the host.
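On the host side, the back-to-back links are just standard Linux bridges created with bridge-utils. A rough sketch is below – the tap interface names (vmx1-ge1, vmx2-ge1) are placeholders, as the actual names depend on your vMX configuration, so check brctl show after starting the instances:

```
# bridge joining ge-0/0/1 on vMX1 to ge-0/0/1 on vMX2
brctl addbr link12
brctl addif link12 vmx1-ge1
brctl addif link12 vmx2-ge1
ip link set link12 up
```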

vMX2 instance setup

First things first, let’s get the second instance of vMX running. If you remember from my first vMX post, there is a configuration file for each vMX instance. Running a second vMX instance is no different – it has its own settings file. I will copy vmx1’s config file and use that as the basis for vmx2.

mdinham@ubuntu:~/vmx-14.1R5.4-1$ cd config/
mdinham@ubuntu:~/vmx-14.1R5.4-1/config$ cp vmx.conf vmx2.conf

Now let’s have a look at what settings need to be changed in vmx2.conf

The vMX identifier is changed to vmx2. I am using the same host management interface for both vMX1 and vMX2 and no changes are needed to the images.

HOST:
    identifier                : vmx2   # Maximum 4 characters
    host-management-interface : eth0
    routing-engine-image      : "/home/mdinham/vmx-14.1R5.4-1/images/jinstall64-vmx-14.1R5.4-domestic.img"
    routing-engine-hdd        : "/home/mdinham/vmx-14.1R5.4-1/images/vmxhdd.img"
    forwarding-engine-image   : "/home/mdinham/vmx-14.1R5.4-1/images/vPFE-lite-20150707.img"

The external bridge can be used by both vMX1 and vMX2 so no need to change this setting. This is used to bridge the management interfaces on vMX to the host management interface defined above.

BRIDGES:
    - type  : external
      name  : br-ext                  # Max 10 characters

For the vRE and vPFE I will need to change the console port, management IP address and MAC address. The MAC addresses are taken from the locally administered MAC address ranges, so there is no problem choosing my own, taking care not to overlap with vMX1. Likewise, choose a console port number and management IP address that do not overlap with vMX1.

---
#vRE VM parameters
CONTROL_PLANE:
    vcpus       : 1
    memory-mb   : 2048
    console_port: 8603

    interfaces  :
      - type      : static
        ipaddr    : 192.168.100.52
        macaddr   : "0A:00:DD:C0:DE:0F"

---
#vPFE VM parameters
FORWARDING_PLANE:
    memory-mb   : 6144
    vcpus       : 3
    console_port: 8604
    device-type : virtio

    interfaces  :
      - type      : static
        ipaddr    : 192.168.100.53
        macaddr   : "0A:00:DD:C0:DE:11"

We also need to adjust the MAC addresses on each vMX2 interface.

---
#Interfaces
JUNOS_DEVICES:
   - interface            : ge-0/0/0
     mac-address          : "02:06:0A:0E:FF:F4"
     description          : "ge-0/0/0 interface"

   - interface            : ge-0/0/1
     mac-address          : "02:06:0A:0E:FF:F5"
     description          : "ge-0/0/1 interface"

   - interface            : ge-0/0/2
     mac-address          : "02:06:0A:0E:FF:F6"
     description          : "ge-0/0/2 interface"

   - interface            : ge-0/0/3
     mac-address          : "02:06:0A:0E:FF:F7"
     description          : "ge-0/0/3 interface"
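The locally administered ranges are easy to verify: a MAC is locally administered when bit 1 of its first octet is set, i.e. when the second hex digit is 2, 6, A or E. A quick shell sketch (is_laa is a hypothetical helper, not part of the vMX tooling):

```shell
# Classify a MAC address as locally administered or globally unique.
# Locally administered MACs have a first octet whose second hex digit
# is 2, 6, A or E (bit 1 of the octet set).
is_laa() {
  case "$1" in
    ?[26aeAE]:*) echo "$1 is locally administered" ;;
    *)           echo "$1 is globally unique" ;;
  esac
}

is_laa "02:06:0A:0E:FF:F4"   # vMX2 ge-0/0/0 from the config above
is_laa "00:0c:29:76:a8:15"   # a VMware OUI address for comparison
```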

vMX2 is now ready to be built. The same orchestration script that I used to create vMX1 is again used for vMX2, but this time I need to specify the configuration file.

Note: each time I use “vmx.sh” to perform stop/start operations on vMX2, I must specify the configuration file for vMX2.

The script will create the new vMX instance and automatically start it.

mdinham@ubuntu:~/vmx-14.1R5.4-1$ sudo ./vmx.sh -lv --install --cfg config/vmx2.conf

I’m now ready to connect to the console on vMX2. This is done the same way for vMX1 and vMX2; we simply reference the correct vMX instance when running the script.

mdinham@ubuntu:~/vmx-14.1R5.4-1$ ./vmx.sh --console vcp vmx2
Login Console Port For vcp-vmx2 - 8603
Press Ctrl-] to exit anytime
--
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.


Amnesiac (ttyd0)

login:

If I look at the Linux bridges the script automatically created, you’ll see that another internal bridge is present to enable the RE and PFE communication on vMX2. The external bridge (management bridge) is shared by all vMX management interfaces.

mdinham@ubuntu:~/vmx-14.1R5.4-1/config$ brctl show
bridge name     bridge id               STP enabled     interfaces
br-ext          8000.000c2976a815       yes             br-ext-nic
                                                        eth0
                                                        vcp_ext-vmx1
                                                        vcp_ext-vmx2
                                                        vfp_ext-vmx1
                                                        vfp_ext-vmx2
br-int-vmx1     8000.525400866237       yes             br-int-vmx1-nic
                                                        vcp_int-vmx1
                                                        vfp_int-vmx1
br-int-vmx2     8000.5254006ec6d9       yes             br-int-vmx2-nic
                                                        vcp_int-vmx2
                                                        vfp_int-vmx2

virtio bindings

As I did in my vMX getting started post, for the Ethernet connectivity to the vMX I will be using KVM virtio paravirtualisation.

virtio bindings are flexible and can be used to map multiple vMX instances to a physical host interface, or to connect vMX instances or vMX interfaces together which we will be doing here. Linux bridges are used to stitch everything together.
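For completeness, mapping a vMX interface to a physical host NIC uses a host_dev endpoint in vmx-junosdev.conf. A sketch of the vmx_link binding carried over from the previous post (assuming eth1 as the physical interface):

```
     - link_name  : vmx_link
       endpoint_1 :
         - type        : junos_dev
           vm_name     : vmx1
           dev_name    : ge-0/0/0
       endpoint_2 :
         - type        : host_dev
           dev_name    : eth1
```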

At this point both vMX1 and vMX2 are running, but I need to create the virtio bindings to enable the communication between each MX.

For both vMX1 and vMX2 this is done in the same configuration file – config/vmx-junosdev.conf

I’ll create a link between vMX1 interfaces ge-0/0/1 and ge-0/0/2.

     - link_name  : vmx_link_ls
       endpoint_1 :
         - type        : junos_dev
           vm_name     : vmx1
           dev_name    : ge-0/0/1
       endpoint_2 :
         - type        : junos_dev
           vm_name     : vmx1
           dev_name    : ge-0/0/2

The same is done for vMX2

     - link_name  : vmx2_link_ls
       endpoint_1 :
         - type        : junos_dev
           vm_name     : vmx2
           dev_name    : ge-0/0/1
       endpoint_2 :
         - type        : junos_dev
           vm_name     : vmx2
           dev_name    : ge-0/0/2

Finally I will create a link between ge-0/0/3 on vMX1 and vMX2. I could use the same technique as shown above, but what if I wanted to connect more than 2 vMX together on the same Ethernet segment? It would be done like this, with an additional bridge defined and shared by each vMX.

     - link_name  : bridge_vmx_12
       endpoint_1 :
         - type        : junos_dev
           vm_name     : vmx1
           dev_name    : ge-0/0/3
       endpoint_2 :
         - type        : bridge_dev
           dev_name    : bridge_vmx12

     - link_name  : bridge_vmx_12
       endpoint_1 :
         - type        : junos_dev
           vm_name     : vmx2
           dev_name    : ge-0/0/3
       endpoint_2 :
         - type        : bridge_dev
           dev_name    : bridge_vmx12

Again the orchestration script vmx.sh is used to create the device bindings.

mdinham@ubuntu:~/vmx-14.1R5.4-1$ sudo ./vmx.sh --bind-dev

Now let’s look at what bridges we have!

  • br-ext – the external bridge for management traffic
  • br-int-vmx1 – the internal bridge for vMX1 RE to PFE traffic
  • br-int-vmx2 – the internal bridge for vMX2 RE to PFE traffic
  • bridge_vmx12 – to enable the communication between ge-0/0/3 on vMX1 and vMX2
  • virbr0 – unused as all vMX interfaces are defined
  • vmx2_link_ls – connects ge-0/0/1 and ge-0/0/2 on vMX2
  • vmx_link – connects ge-0/0/0 on vMX1 and vMX2 to eth1 on the host
  • vmx_link_ls – connects ge-0/0/1 and ge-0/0/2 on vMX1

mdinham@ubuntu:~/vmx-14.1R5.4-1$ brctl show
bridge name     bridge id            STP enabled     interfaces
br-ext          8000.000c2976a815    yes             br-ext-nic
                                                     eth0
                                                     vcp_ext-vmx1
                                                     vcp_ext-vmx2
                                                     vfp_ext-vmx1
                                                     vfp_ext-vmx2
br-int-vmx1     8000.525400866237    yes             br-int-vmx1-nic
                                                     vcp_int-vmx1
                                                     vfp_int-vmx1
br-int-vmx2     8000.5254006ec6d9    yes             br-int-vmx2-nic
                                                     vcp_int-vmx2
                                                     vfp_int-vmx2
bridge_vmx12    8000.fe060a0efff3    no              ge-0.0.3-vmx1
                                                     ge-0.0.3-vmx2
virbr0          8000.000000000000    yes
vmx2_link_ls    8000.fe060a0efff5    no              ge-0.0.1-vmx2
                                                     ge-0.0.2-vmx2
vmx_link        8000.000c2976a81f    no              eth1
                                                     ge-0.0.0-vmx1
                                                     ge-0.0.0-vmx2
vmx_link_ls     8000.fe060a0efff1    no              ge-0.0.1-vmx1
                                                     ge-0.0.2-vmx1

At this point vMX1 and vMX2 are ready to be configured.

EVPN Lab

EVPN is defined in RFC 7432. It provides a number of enhancements over VPLS, particularly as MAC address learning now occurs in the control plane and MACs are advertised between PEs using MP-BGP routes. Compared to VPLS, which uses data-plane flooding to learn MAC addresses, this BGP-based approach enables EVPN to limit the flooding of unknown unicast. MAC addresses are now effectively routed, which in multi-homed scenarios enables all active links to be utilised. Neat stuff. Also look up the Juniper Day One book on EVPN.

I’ve already configured a base configuration on R2 and R3. Note I changed the chassis network-services mode to enhanced-ip from the vMX default of enhanced-ethernet.

root@R2# show | compare
[edit]
+  chassis {
+      network-services enhanced-ip;
+  }
+  interfaces {
+      ge-0/0/3 {
+          unit 0 {
+              family inet {
+                  address 192.168.23.2/24;
+              }
+              family mpls;
+          }
+      }
+      lo0 {
+          unit 0 {
+              family inet {
+                  address 2.2.2.2/32;
+              }
+          }
+      }
+  }
+  protocols {
+      mpls {
+          interface ge-0/0/3.0;
+      }
+      ospf {
+          area 0.0.0.0 {
+              interface lo0.0 {
+                  passive;
+              }
+              interface ge-0/0/3.0;
+          }
+      }
+      ldp {
+          interface ge-0/0/3.0;
+      }
+  }

Comms are up between vMX1 and vMX2

root@R2> ping 192.168.23.3 rapid
PING 192.168.23.3 (192.168.23.3): 56 data bytes
!!!!!
--- 192.168.23.3 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 2.031/2.253/2.805/0.281 ms

root@R2> show ospf neighbor
Address          Interface              State     ID               Pri  Dead
192.168.23.3     ge-0/0/3.0             Full      3.3.3.3          128    37

root@R2> show ldp neighbor
Address            Interface          Label space ID         Hold time
192.168.23.3       ge-0/0/3.0         3.3.3.3:0                14

Now that I have reachability between R2 and R3, I can go ahead and add the required base config for EVPN.

Note: EVPN is unfortunately not supported within a Logical System so I am configuring EVPN on the main routers.

From Junos 14.1R4, the chained composite next-hop features for EVPN are configured automatically. Chained composite next hops are required for EVPN and allow the ingress PE to take multiple actions before forwarding a packet.

root@R2> ...configuration routing-options | display inheritance defaults
autonomous-system 65000;
##
## 'forwarding-table' was inherited from group 'junos-defaults'
##
forwarding-table {
    ##
    ## 'evpn-pplb' was inherited from group 'junos-defaults'
    ##
    export evpn-pplb;
    ##
    ## 'chained-composite-next-hop' was inherited from group 'junos-defaults'
    ##
    chained-composite-next-hop {
        ##
        ## 'ingress' was inherited from group 'junos-defaults'
        ##
        ingress {
            ##
            ## 'evpn' was inherited from group 'junos-defaults'
            ##
            evpn;
        }
    }
}

We require the evpn and inet-vpn MP-BGP address families. Here I am configuring an iBGP peering with R3.

root@R2# show | compare
[edit]
+  routing-options {
+      autonomous-system 65000;
+  }
[edit protocols]
+   bgp {
+       group internal {
+           type internal;
+           local-address 2.2.2.2;
+           family inet-vpn {
+               unicast;
+           }
+           family evpn {
+               signaling;
+           }
+           neighbor 3.3.3.3;
+       }
+   }

At this point the core configuration for EVPN is complete.

Logical Systems

My configuration gets a little more complicated here, because I need to create R1 and R4 as Logical Systems on my vMX. I will do this now.

Remember that ge-0/0/1 and ge-0/0/2 have been connected back to back by the virtio bridge. I will use ge-0/0/1 as the interface on R2/R3 and ge-0/0/2 as the interface on the Logical System routers R1/R4.

root@R2# show | compare
[edit]
+ logical-systems {
+     R1 {
+         interfaces {
+             ge-0/0/2 {
+                 unit 100 {
+                     vlan-id 100;
+                     family inet {
+                         address 192.168.14.1/24;
+                     }
+                 }
+             }
+         }
+     }
+ }
[edit interfaces]
+   ge-0/0/2 {
+       vlan-tagging;
+   }
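The mirror image on R3 creates R4. The original config isn’t shown here, but based on the topology it would look something like this (a sketch; the address matches the ping tests later):

```
logical-systems {
    R4 {
        interfaces {
            ge-0/0/2 {
                unit 100 {
                    vlan-id 100;
                    family inet {
                        address 192.168.14.4/24;
                    }
                }
            }
        }
    }
}
interfaces {
    ge-0/0/2 {
        vlan-tagging;
    }
}
```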

Not required for this lab, but if you wanted to create multiple Logical System routers on the same vMX, this can of course be done. In the example below I have created two routers, R5 and R6, linked together via ge-0/0/1 (R5) and ge-0/0/2 (R6), with VLAN 56 used as the VLAN ID for this point-to-point link. You can configure OSPF/BGP/MPLS etc. directly between these routers. The configuration is defined in the appropriate logical-system stanza.

logical-systems {
    R5 {
        interfaces {
            ge-0/0/1 {
                unit 56 {
                    vlan-id 56;
                    family inet {
                        address 192.168.56.5/24;
                    }
                }
            }
            lo0 {
                unit 5 {
                    family inet {
                        address 5.5.5.5/32;
                    }
                }
            }
        }
    }
    R6 {
        interfaces {
            ge-0/0/2 {
                unit 56 {
                    vlan-id 56;
                    family inet {
                        address 192.168.56.6/24;
                    }
                }
            }
            lo0 {
                unit 6 {
                    family inet {
                        address 6.6.6.6/32;
                    }
                }
            }
        }
    }
}

Working with Logical Systems is simple, and commands can be entered in a couple of ways: by setting the CLI to a Logical System, or by specifying the Logical System within the command itself. Configuration can also be entered directly when the CLI is set to a Logical System.

root@R2> set cli logical-system R1
Logical system: R1

root@R2:R1> ping 192.168.14.1 rapid
PING 192.168.14.1 (192.168.14.1): 56 data bytes
!!!!!
--- 192.168.14.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.012/0.091/0.242/0.085 ms

root@R2:R1> clear cli logical-system
Cleared default logical system

root@R2> ping logical-system R1 192.168.14.1 rapid
PING 192.168.14.1 (192.168.14.1): 56 data bytes
!!!!!
--- 192.168.14.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.009/0.013/0.026/0.007 ms

Completing the EVPN configuration

I’m going to be configuring the EVPN VLAN-based service. This requires a separate EVI per VLAN. An EVI is an EVPN instance spanning the PEs participating in a particular EVPN.

There isn’t too much to the configuration. I configure the interface facing R1, and then define the evpn routing-instance.

root@R2# show | compare
[edit interfaces]
+   ge-0/0/1 {
+       flexible-vlan-tagging;
+       encapsulation flexible-ethernet-services;
+       unit 100 {
+           encapsulation vlan-bridge;
+           vlan-id 100;
+       }
+   }
[edit]
+  routing-instances {
+      EVPN100 {
+          instance-type evpn;
+          vlan-id 100;
+          interface ge-0/0/1.100;
+          route-distinguisher 2.2.2.2:1;
+          vrf-target target:1:1;
+          protocols {
+              evpn;
+          }
+      }
+  }
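R3’s configuration is the mirror image, differing only in the route-distinguisher (3.3.3.3:1, as seen in the show evpn instance output later in this post). A sketch:

```
routing-instances {
    EVPN100 {
        instance-type evpn;
        vlan-id 100;
        interface ge-0/0/1.100;
        route-distinguisher 3.3.3.3:1;
        vrf-target target:1:1;
        protocols {
            evpn;
        }
    }
}
```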

Note: If you try to configure an evpn routing-instance on a logical system, you won’t see the option for evpn.

root@R2> set cli logical-system R1
Logical system: R1

root@R2:R1> configure
Entering configuration mode

[edit]
root@R2:R1# set routing-instances evpn instance-type ?
Possible completions:
  forwarding           Forwarding instance
  l2backhaul-vpn       L2Backhaul/L2Wholesale routing instance
  l2vpn                Layer 2 VPN routing instance
  layer2-control       Layer 2 control protocols
  mpls-internet-multicast  Internet Multicast over MPLS routing instance
  no-forwarding        Nonforwarding instance
  virtual-router       Virtual routing instance
  virtual-switch       Virtual switch routing instance
  vpls                 VPLS routing instance
  vrf                  Virtual routing forwarding instance
[edit]

Verification

Let’s see if I can ping across the EVI from R1 to R4.

root@R2> set cli logical-system R1
Logical system: R1

root@R2:R1> ping 192.168.14.4 rapid
PING 192.168.14.4 (192.168.14.4): 56 data bytes
!!!!!
--- 192.168.14.4 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 3.644/26.570/97.943/36.259 ms

root@R2:R1> show arp
MAC Address        Address         Name            Interface       Flags
02:06:0a:0e:ff:f6  192.168.14.4    192.168.14.4    ge-0/0/2.100    none

Excellent!

Now, what does this look like from R2’s perspective? We see 2 BGP paths received.

root@R2> show bgp summary
Groups: 1 Peers: 1 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State    Pending
bgp.evpn.0
                       2          2          0          0          0          0
Peer                     AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
3.3.3.3               65000         93         94       0       0       36:29 Establ
  bgp.evpn.0: 2/2/2/0
  EVPN100.evpn.0: 2/2/2/0
  __default_evpn__.evpn.0: 0/0/0/0

Looking more deeply, we can see MAC addresses in the EVPN100 table: both the directly attached device and the device attached to R3.

root@R2> show route table EVPN100.evpn.0

EVPN100.evpn.0: 4 destinations, 4 routes (4 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2:2.2.2.2:1::100::02:06:0a:0e:ff:f2/304
                   *[EVPN/170] 00:03:27
                      Indirect
2:3.3.3.3:1::100::02:06:0a:0e:ff:f6/304
                   *[BGP/170] 00:03:27, localpref 100, from 3.3.3.3
                      AS path: I, validation-state: unverified
                    > to 192.168.23.3 via ge-0/0/3.0
3:2.2.2.2:1::100::2.2.2.2/304
                   *[EVPN/170] 00:20:27
                      Indirect
3:3.3.3.3:1::100::3.3.3.3/304
                   *[BGP/170] 00:18:38, localpref 100, from 3.3.3.3
                      AS path: I, validation-state: unverified
                    > to 192.168.23.3 via ge-0/0/3.0
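The route keys above pack several fields together: the route type (2 = MAC advertisement, 3 = inclusive multicast), the route distinguisher, the Ethernet tag/VLAN and, for type 2, the MAC address. A quick shell sketch to pull a type-2 key apart (illustrative only, not part of any Junos tooling):

```shell
# Split a Junos EVPN type-2 route key of the form
#   <type>:<route-distinguisher>::<vlan>::<mac>/<prefix-length>
# into its component fields using shell parameter expansion.
route="2:3.3.3.3:1::100::02:06:0a:0e:ff:f6/304"

rtype=${route%%:*}               # route type (2 = MAC advertisement)
rest=${route#*:}
rd=${rest%%::*}                  # route distinguisher
rest=${rest#*::}
vlan=${rest%%::*}                # Ethernet tag / VLAN ID
mac=${rest#*::}; mac=${mac%/*}   # MAC address, prefix length stripped

echo "type=$rtype rd=$rd vlan=$vlan mac=$mac"
```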

Here we can see EVPN database and MAC table information.

root@R2> show evpn database
Instance: EVPN100
VLAN  MAC address        Active source                  Timestamp        IP address
100   02:06:0a:0e:ff:f2  ge-0/0/1.100                   Jul 28 17:11:14
100   02:06:0a:0e:ff:f6  3.3.3.3                        Jul 28 17:11:15

root@R2> show evpn mac-table

MAC flags       (S -static MAC, D -dynamic MAC, L -locally learned, C -Control MAC
    O -OVSDB MAC, SE -Statistics enabled, NM -Non configured MAC, R -Remote PE MAC)

Routing instance : EVPN100
 Bridging domain : __EVPN100__, VLAN : 100
   MAC                 MAC      Logical          NH     RTR
   address             flags    interface        Index  ID
   02:06:0a:0e:ff:f2   D        ge-0/0/1.100
   02:06:0a:0e:ff:f6   DC                        1048575 1048575

Local MAC addresses are being advertised from R2 to R3.

root@R2> show route advertising-protocol bgp 3.3.3.3

EVPN100.evpn.0: 4 destinations, 4 routes (4 active, 0 holddown, 0 hidden)
  Prefix                  Nexthop              MED     Lclpref    AS path
  2:2.2.2.2:1::100::02:06:0a:0e:ff:f2/304
*                         Self                         100        I
  3:2.2.2.2:1::100::2.2.2.2/304
*                         Self                         100        I

Here we can see detailed information about the EVPN routing instance.

root@R3> show evpn instance EVPN100 extensive
Instance: EVPN100
  Route Distinguisher: 3.3.3.3:1
  VLAN ID: 100
  Per-instance MAC route label: 299792
  MAC database status                Local  Remote
    Total MAC addresses:                 1       1
    Default gateway MAC addresses:       0       0
  Number of local interfaces: 1 (1 up)
    Interface name  ESI                            Mode             Status
    ge-0/0/1.100    00:00:00:00:00:00:00:00:00:00  single-homed     Up
  Number of IRB interfaces: 0 (0 up)
  Number of bridge domains: 1
    VLAN ID  Intfs / up    Mode             MAC sync  IM route label
    100          1   1     Extended         Enabled   299872
  Number of neighbors: 1
    2.2.2.2
      Received routes
        MAC address advertisement:              1
        MAC+IP address advertisement:           0
        Inclusive multicast:                    1
        Ethernet auto-discovery:                0
  Number of ethernet segments: 0

Summary

In this post I showed how multiple vMX can be configured and interconnected on the same Linux host. I also built a topology of 4 logical routers on the two vMX and used EVPN to demonstrate the capability of vMX.

I’ve also completed a VPLS lab with 5 x Logical System routers running on a single vMX. If you would like to see a post on this type of configuration, please mention it in the comments or tweet @mattdinham.

Thanks for reading 🙂