Tuesday, September 21, 2010

MPLS MTU for Enterprise MPLS VPN

We are in the midst of building a new 10GE MPLS VPN for our enterprise campus core network. After we set up our first MPLS VPN, we were able to connect two different VRFs (a.k.a. virtual networks) belonging to two different departments. A ping test showed that the RTT was close to 0 msec. However, Active Directory traffic slowed to a crawl.

We investigated and found no fault in the routing and MPLS configuration. Later, I recalled an earlier experience of slow access over a GRE tunnel that came down to MTU sizing. A further search on Cisco.com revealed this:

When configuring the network to use MPLS, set the core-facing interface MTU values greater than the edge-facing interface MTU values, using one of the following methods:

  • Set the interface MTU values on the core-facing interfaces to a higher value than the interface MTU values on the customer-facing interfaces to accommodate any packet labels, such as MPLS labels, that an interface might encounter. Make sure that the interface MTUs on the remote end interfaces have the same interface MTU values. The interface MTU values on both ends of the link must match.
  • Set the interface MTU values on the customer-facing interfaces to a lower value than the interface MTU on the core-facing interfaces to accommodate any packet labels, such as MPLS labels, that an interface might encounter. When you set the interface MTU on the edge interfaces, ensure that the interface MTUs on the remote end interfaces have the same values. The interface MTU values on both ends of the link must match.
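
The reason the core needs headroom is that each MPLS label adds 4 bytes, and an MPLS VPN packet in the core typically carries two labels (the transport label plus the VPN label). A rough worked example for a full-size customer packet:

  1500 bytes (customer IP packet at the edge MTU)
  +  4 bytes (transport/LDP label)
  +  4 bytes (VPN label)
  = 1508 bytes minimum needed on core-facing links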

We adopted the former approach, setting the MTU of all core-facing interfaces to 1520 bytes and leaving all customer-facing interfaces at the default of 1500 bytes. This gives a little extra headroom beyond the two labels a basic MPLS VPN needs.
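
For reference, a minimal IOS-style sketch of the idea (interface names and the VRF name are hypothetical, and depending on the platform you may also want to set mpls mtu explicitly; remember the core-facing MTU must match on both ends of each link):

  ! Core-facing 10GE link towards another P/PE router (hypothetical interface)
  interface TenGigabitEthernet1/1
   mtu 1520
   mpls ip

  ! Customer-facing interface stays at the default 1500-byte MTU
  interface GigabitEthernet2/1
   ip vrf forwarding DEPT-A
   ip address 10.1.1.1 255.255.255.0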