MTU settings on Junos & IOS (part 5) with MPLS L2 VPN

This post follows on from part 4, but this time we’ll be configuring a Layer-2 Ethernet-to-Ethernet MPLS VPN between the two CEs.


I’m going to configure a Martini Layer 2 VPN. Martini uses LDP to signal and set up the VPN across the MPLS network.

With MPLS L2 VPNs there will be a minimum of two labels: the top label is the transport label and the bottom label is the VC label. The transport label is swapped hop by hop through the MPLS network, while the VC label is used by the egress PE router to identify the Virtual Circuit that the incoming packet relates to.
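The division of labour between the two labels can be sketched in a few lines of Python. This is purely illustrative (the function names, the swap table, and the out-label 21 are made up; the label values 18 and 299776 come from the captures later in this post), but it shows the key point: P routers only ever touch the top label, and the bottom label survives untouched to the egress PE.

```python
def p_router_forward(stack, swap_table):
    """A P router swaps only the top (transport) label; the VC label is untouched."""
    transport, vc = stack
    return (swap_table[transport], vc)

def egress_pe_forward(stack, vc_table):
    """The egress PE uses the bottom (VC) label to pick the attachment circuit."""
    _transport, vc = stack
    return vc_table[vc]

# Transport label 18 and VC label 299776 as seen in the captures below.
stack = (18, 299776)
stack = p_router_forward(stack, {18: 21})  # 21 is a made-up out-label
circuit = egress_pe_forward(stack, {299776: "ge-0/0/1.0"})
print(circuit)  # the frame is switched out of the attachment circuit
```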

On an Ethernet segment, the frame would look like this for a payload of 1500 bytes.


  • L2 Header – 14 bytes. The Ethernet header for the frame on the provider link; 18 bytes if the interface is an 802.1q trunk.
  • MPLS Label – 4 bytes. The transport label.
  • VC Label – 4 bytes. Identifies the virtual circuit at the egress PE.
  • Control Word – 4 bytes. An optional field, required when transporting FR or ATM, carrying additional L2 protocol information.
  • L2 Data – 1514–1518 bytes. The encapsulated Layer-2 frame being transmitted across the MPLS network: in this case 1500 bytes of data plus the 14-byte L2 header (an additional 4 bytes would be added if the VPN interface is a trunk).

This means we will be putting a minimum of 1540 bytes (14 + 4 + 4 + 4 + 1514) on the wire when there are 1500 bytes of customer data in the encapsulated L2 frame. If both the customer interface and the SP interface are trunks, then we are up to 1548 bytes on the wire across the P network.
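The arithmetic above is easy to sanity-check. A quick back-of-the-envelope calculation in Python, using the field sizes from the list (all values in bytes):

```python
# On-wire frame size across the P network for 1500 bytes of customer payload.
CORE_L2 = 14           # provider-side Ethernet header (18 if the SP link is a trunk)
TRANSPORT_LABEL = 4
VC_LABEL = 4
CONTROL_WORD = 4       # optional; negotiated in this lab
CUSTOMER_FRAME = 1514  # 1500 bytes of data + 14-byte customer L2 header

on_wire = CORE_L2 + TRANSPORT_LABEL + VC_LABEL + CONTROL_WORD + CUSTOMER_FRAME
print(on_wire)          # 1540

# A trunk on the customer side and on the SP side each add a 4-byte 802.1Q tag:
print(on_wire + 4 + 4)  # 1548
```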


For this lab, I’ll be using the topology below. The base configurations are using OSPF as the routing protocol and LDP to exchange transport labels.

Software revisions are as follows:

  • CE1, CE2, P, PE1: IOS (Cisco 7200 12.4(24)T)
  • PE2: Junos (Firefly 12.1X46)

The base configs are similar to part 3, using OSPF as the IGP and LDP to signal transport labels, so I’ll jump straight into the Martini VPN config.


The configuration could not be simpler. The VC ID must match on both sides and is set to 12.

interface FastEthernet1/0
 no ip address
 duplex full
 xconnect 12 encapsulation mpls


Not much to it on Junos either. Notice that I enable LDP on the loopback.

interfaces {
    ge-0/0/1 {
        encapsulation ethernet-ccc;
        unit 0 {
            family ccc;
        }
    }
}
protocols {
    ldp {
        interface lo0.0;
    }
    l2circuit {
        neighbor {
            interface ge-0/0/1.0 {
                virtual-circuit-id 12;
            }
        }
    }
}

Let’s check that the circuit is up.

PE1#sh mpls l2transport vc detail
Local interface: Fa1/0 up, line protocol up, Ethernet up
  Destination address:, VC ID: 12, VC status: up
    Output interface: Gi0/0, imposed label stack {18 299776}
    Preferred path: not configured
    Default path: active
    Next hop:
  Create time: 03:48:16, last status change time: 00:15:19
  Signaling protocol: LDP, peer up
    MPLS VC labels: local 22, remote 299776
    Group ID: local 0, remote 0
    MTU: local 1500, remote 1500
    Remote interface description:
  Sequencing: receive disabled, send disabled
  VC statistics:
    packet totals: receive 3114, send 3124
    byte totals:   receive 319522, send 408559
    packet drops:  receive 0, seq error 0, send 3

user@PE2> show l2circuit connections extensive
Layer-2 Circuit Connections:

Legend for connection status (St)
EI -- encapsulation invalid      NP -- interface h/w not present
MM -- mtu mismatch               Dn -- down
EM -- encapsulation mismatch     VC-Dn -- Virtual circuit Down
CM -- control-word mismatch      Up -- operational
VM -- vlan id mismatch           CF -- Call admission control failure
OL -- no outgoing label          IB -- TDM incompatible bitrate
NC -- intf encaps not CCC/TCC    TM -- TDM misconfiguration
BK -- Backup Connection          ST -- Standby Connection
CB -- rcvd cell-bundle size bad  SP -- Static Pseudowire
LD -- local site signaled down   RS -- remote site standby
RD -- remote site signaled down  XX -- unknown

Legend for interface status
Up -- operational
Dn -- down
    Interface                 Type  St     Time last up          # Up trans
    ge-0/0/1.0(vc 12)         rmt   Up     May  3 21:09:11 2014           1
      Remote PE:, Negotiated control-word: Yes (Null)
      Incoming label: 299776, Outgoing label: 22
      Negotiated PW status TLV: No
      Local interface: ge-0/0/1.0, Status: Up, Encapsulation: ETHERNET

Looks good!

My CE config is very simple: just a point-to-point link between the two CEs, with OSPF running across the L2 circuit. Here is CE1’s config.

interface Loopback0
 ip address
 ip ospf 1 area 0
interface FastEthernet0/1
 ip address
 ip ospf 1 area 0
 duplex full
 speed 100

CE1#sh ip ro
Routing entry for
 Known via "ospf 1", distance 110, metric 2, type intra area
 Last update from on FastEthernet0/1, 03:47:25 ago
 Routing Descriptor Blocks:
 *, from, 03:47:25 ago, via FastEthernet0/1
 Route metric is 2, traffic share count is 1

The obvious difference compared to L3 MPLS VPN is that the provider network has no involvement in customer routing.

What’s the maximum ping we can get from CE1 to CE2? The provider core is set to 1500 bytes on the PE-P-PE interfaces. It should be 1474, right? 1500 - 14 (customer L2 header) - 8 (two labels) - 4 (control word) = 1474. And it is.

CE1#ping repeat 1 size 1474

Type escape sequence to abort.
Sending 1, 1474-byte ICMP Echos to, timeout is 2 seconds:
Success rate is 100 percent (1/1), round-trip min/avg/max = 56/56/56 ms
CE1#ping repeat 1 size 1475

Type escape sequence to abort.
Sending 1, 1475-byte ICMP Echos to, timeout is 2 seconds:
Success rate is 0 percent (0/1)
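Working backwards from the core MTU makes the 1474-byte ceiling obvious. A minimal check in Python (remembering that the IOS ping size is the total IP packet size):

```python
# Largest IP packet that fits through a 1500-byte MPLS core without fragmentation.
CORE_MTU = 1500
LABELS = 2 * 4     # transport label + VC label
CONTROL_WORD = 4
CUSTOMER_L2 = 14   # customer Ethernet header carried inside the pseudowire

max_ping = CORE_MTU - LABELS - CONTROL_WORD - CUSTOMER_L2
print(max_ping)  # 1474
```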

Let’s look at a capture on the PE1-facing interface of the P router.


1514 bytes on the wire as expected: 1474 bytes of IP, plus the 14-byte customer L2 header, 8 bytes of labels, a 4-byte control word, and the 14-byte core Ethernet header.

Now how about making the CE1-CE2 link dot1q? I’ve not changed anything on the PE routers, just enabled an 802.1q tagged interface on each CE. As there will be an extra 4 bytes of overhead, the maximum ping size will drop to 1470.

CE1#ping repeat 1 size 1470

Type escape sequence to abort.
Sending 1, 1470-byte ICMP Echos to, timeout is 2 seconds:
Success rate is 100 percent (1/1), round-trip min/avg/max = 56/56/56 ms
CE1#ping repeat 1 size 1471

Type escape sequence to abort.
Sending 1, 1471-byte ICMP Echos to, timeout is 2 seconds:
Success rate is 0 percent (0/1)
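The same arithmetic as before, with the customer header grown by the 4-byte 802.1Q tag, accounts for the new ceiling:

```python
# Largest IP packet through the same 1500-byte core when the CE frames are tagged.
CORE_MTU = 1500
LABELS = 2 * 4        # transport label + VC label
CONTROL_WORD = 4
CUSTOMER_L2 = 14 + 4  # customer Ethernet header plus the 802.1Q tag

max_ping_tagged = CORE_MTU - LABELS - CONTROL_WORD - CUSTOMER_L2
print(max_ping_tagged)  # 1470
```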

We can see the extra header in the packet capture – exactly where you’d expect it – between the CE-generated Ethernet header and the IP data.


Now that we know how the data on the wire changes when an MPLS L2 VPN is created, it’s easy to provision for the additional overhead across the MPLS core by increasing MTUs accordingly.

Thanks for reading my post. We’ve covered both Cisco and Juniper here, but be sure to check out other posts from the #JuniperFan bloggers here.