What is DCI (Data Center Interconnect)?
DCI technology is used to connect two or more data centers to achieve a business or IT objective.

Why DCI solutions?

  • Disaster Recovery or Data Center Maintenance
  • VM/Application/Host/Workload Mobility
  • Flexibility and scalability 

Transport Options:

  1. Over dark fiber or DWDM
  2. MPLS Transport 
  3. IP Transport 

We can select DCI technologies based on our transport.

  • Over dark fiber or DWDM: VSS, vPC, FabricPath, TRILL
  • MPLS Transport: EoMPLS , VPLS
  • IP: OTV, VxLAN

Is it possible to use EoMPLS or VPLS when we have IP as transport? Not directly. Whenever you answer such a question, I would request you to think about tunnels: is it possible to tunnel the MPLS service over IP? You will find the answer is yes :). So here we can use EoMPLSoGRE or VPLSoGRE as a solution.
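As an illustration, EoMPLSoGRE works by running MPLS (LDP) over a GRE tunnel between the two PEs and terminating the pseudowire across that tunnel. A minimal IOS-style sketch (all addresses, interface names, and the VC ID below are assumptions, not from this article):

```
! GRE tunnel between the two PEs, built over the plain IP transport
interface Tunnel0
 ip address 10.255.0.1 255.255.255.252
 tunnel source Loopback0
 tunnel destination 198.51.100.2      ! remote PE, reachable over IP only
 mpls ip                              ! run LDP over the GRE tunnel
!
! Attachment circuit: cross-connect the Ethernet port to the remote PE
interface GigabitEthernet0/1
 xconnect 2.2.2.2 100 encapsulation mpls   ! 2.2.2.2 = remote PE loopback, VC ID 100
```

The same pattern gives VPLSoGRE: the VPLS VFI simply uses the LDP session that rides over the GRE tunnel.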


Here are some of the design considerations and common issues when designing a Layer 2 DCI:

  • STP Isolation
  • Controlling unknown unicast flooding – Unknown unicast should not be sent across the DCI. Ideally, there should be no silent hosts; if there are, create exceptions only for these.
  • Policing of BUM traffic at the DC edge – Broadcast, unknown unicast and multicast traffic should be policed at the edge to prevent a Layer 2 storm from taking out both DCs at the same time.
  • Localization of gateways – Routed traffic should not have to traverse the DCI to reach its gateway. This can add a lot of latency to the traffic. (HSRP Localization)
  • Ingress traffic optimization – How does traffic get directed into the primary DC? This normally involves manipulating BGP attributes or advertising a longer prefix over the primary DC than the secondary DC.
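For example, BUM policing on a DCI-facing edge port might be sketched as follows in NX-OS (the interface name and thresholds are assumptions; tune the levels to your environment):

```
interface Ethernet1/10
  description DCI-facing edge port
  storm-control broadcast level 1.00   ! drop broadcast above 1% of link bandwidth
  storm-control multicast level 2.00
  storm-control unicast level 5.00     ! also catches unknown-unicast flooding
```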

Why OTV as a solution?

Traditional Layer 2 extension technologies raised the following concerns:

1. Flooding Behavior
2. Pseudo-wire Maintenance
3. Multi-Homing
4. Transport Dependency.

OTV is a Cisco proprietary protocol that can run over any transport as long as there is IP connectivity, which means there is no transport dependency. Preferably, the transport should support multicast, but OTV can run in a unicast-only mode as well.
OTV helps to control unicast flooding because it works on the assumption that there are no silent hosts, so all hosts should be known. If all hosts are known, then there is no need to flood frames. Flooding can be selectively enabled for silent hosts.

OTV devices also help to reduce broadcasts. They can snoop ARP replies and cache them locally, so that when the next host ARPs for the same address, it is already known by the OTV device, which can answer locally.
OTV does not have built-in policers for BUM traffic at the data center edge, but since it separates STP domains and does not flood unknown unicasts, a Layer 2 storm would not have the same impact as it would with the other technologies.

Let’s start OTV:

OTV is a DCI (Data Center Interconnect) technology that is used to extend Layer 2 between data centers. Basically, OTV does MAC-in-IP routing by encapsulating an Ethernet frame in an IP packet before forwarding it across the transport IP network. OTV supports both multicast and unicast-only transport networks. OTV uses IS-IS as its control plane, meaning IS-IS is used to advertise MAC reachability between data centers.

OTV Terminology:

  1. Edge Device – The edge device performs all OTV functions. It receives Layer 2 traffic for all VLANs that need to be extended to a remote location and dynamically encapsulates the Ethernet frame into an IP packet that is then sent across the transport infrastructure.
  2. OTV internal interface: Internal Interfaces are the layer2 interfaces on the OTV Edge Device configured as a trunk that faces the local site and carries the VLANs extended through OTV. Internal interfaces take part in the STP domain and learn MAC addresses as a normal layer2 interface.
  3. OTV join-interface: Join interface is a Layer3 interface on the OTV Edge device which connects to the IP transport network. This interface is used as the source for OTV encapsulated traffic that is sent to remote OTV Edge Devices. Multiple overlays can share the same join interface.
  4. OTV overlay interface: Overlay interface is logical multi-access and multicast capable (Logical/tunnel interface) interface where all the OTV configuration is placed. It encapsulates the site L2 frames in IP unicast or multicast packets that are sent to other sites.
  5. Extended VLANs– Extended VLANs are VLANs which we are going to bridge over the OTV.
  6. Site VLAN: OTV uses the site VLAN to detect and establish adjacency with the other edge device in the same site. It is an internal VLAN (i.e., it is not spanned over OTV) and is used to elect the AED. Please ensure that the site VLAN is active on at least one port.
  7. Site Identifier: The site identifier is used as a loop-prevention mechanism. The site ID must be unique per DC.

As OTV uses IS-IS as its control plane (IS-IS advertises MAC reachability between DCs),
prior to exchanging or advertising MAC address reachability, all OTV edge devices must discover each other and build a neighbor relationship from an OTV perspective.

There are two ways to deploy OTV :
1. Multicast
2. Unicast


OTV Interaction with STP:

As we know, OTV extends VLANs between data centers, but OTV does not extend STP across DCs.
BPDUs are sent and received only on internal interfaces; the OTV edge device will not originate or forward BPDUs on the overlay interface.
In OTV, MAC reachability information is advertised and learned via the control-plane protocol instead of being learned through typical MAC flooding behaviour.


OTV devices are deployed in pairs to ensure resiliency. OTV provides loop-free multi-homing by electing a designated forwarding device per site for each VLAN which is known as Authoritative Edge Device (AED). An AED is an Edge Device that is responsible for forwarding frames (unicast/multicast/broadcast) into and out of a site and ensures that there are no loops within the OTV network.

Known unicast Layer 2 frames destined for hosts reachable via the overlay are sent directly to the join interface of the AED in the remote site that advertised reachability for the destination MAC address. Different AEDs can be elected for different VLANs to balance the traffic load.


Why Site VLAN and Site Identifier Required?

As we discussed, the site VLAN is used to detect and establish adjacency with the other edge device in the same site, and it is an internal VLAN used to elect the AED. However, relying on the site VLAN alone creates a single point of failure: if the site VLAN adjacency breaks while both edge devices stay up, both may believe they are the AED, which can lead to a loop. This is where the site identifier comes in: edge devices also advertise their site ID over the overlay, so devices in the same site maintain a dual adjacency (site VLAN plus overlay) and can still detect each other if the site VLAN adjacency fails.

OTV interaction with HSRP:

By default, extending a VLAN also extends HSRP hellos across the DCI, so both data centers elect a single active gateway and routed traffic from one site may trombone across the DCI to reach it. To overcome this situation we use HSRP localization, i.e., HSRP hello packet filtering: we block HSRP hellos on the overlay interface so that each site keeps its own local active gateway.
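A commonly used way to filter HSRP hellos is a VLAN access map applied to the extended VLANs. A sketch in NX-OS style (ACL names and the VLAN list are assumptions; HSRPv1 hellos use UDP port 1985 to 224.0.0.2, HSRPv2 to 224.0.0.102):

```
ip access-list HSRP_IP
  10 permit udp any 224.0.0.2/32 eq 1985      ! HSRPv1 hellos
  20 permit udp any 224.0.0.102/32 eq 1985    ! HSRPv2 hellos
ip access-list ALL_IP
  10 permit ip any any
vlan access-map HSRP_LOCALIZATION 10
  match ip address HSRP_IP
  action drop                                  ! drop HSRP hellos
vlan access-map HSRP_LOCALIZATION 20
  match ip address ALL_IP
  action forward                               ! forward everything else
vlan filter HSRP_LOCALIZATION vlan-list 100-110
```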


How to connect OTV VDC:

On the Nexus 7000, the SVI (default gateway) for a VLAN and the OTV edge-device function for that same VLAN cannot coexist in the same VDC, so OTV is typically deployed in a dedicated VDC that connects back to the aggregation VDC through its internal and join interfaces.
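A minimal sketch of creating the OTV VDC and allocating interfaces to it (the VDC name and interface numbers are assumptions):

```
vdc OTV
  allocate interface Ethernet1/1   ! will become the join interface towards the IP core
  allocate interface Ethernet1/2   ! will become the internal interface towards the aggregation VDC
switchto vdc OTV                   ! move into the new VDC to configure OTV
```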

OTV step-by-step configuration (please refer to the topology below):

Let’s assume the core network is already configured, site A and site B have reachability over the IP network, the transport supports multicast, and we are planning for DCI (OTV). Here’s the step-by-step approach and considerations:

 1. Enable OTV feature:
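On NX-OS this is a single global command:

```
feature otv
```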

2. Configure OTV site VLAN and site identifier:
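For example (the VLAN number and site ID are assumptions; remember the site identifier must differ between site A and site B, while the site VLAN should be consistent within a site):

```
vlan 99                      ! create the site VLAN; keep it active on at least one port
otv site-vlan 99             ! internal VLAN used for AED election, not extended over OTV
otv site-identifier 0x1      ! unique per data center
```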

3. Create L2 VLANs that needs to be extended over OTV.
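A sketch (the VLAN range is an assumption):

```
vlan 100-110                 ! VLANs to be extended over OTV
```

These VLANs must also be allowed on the internal (trunk) interface facing the local site.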

4. Configure Join Interface
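A sketch of the join interface (interface name and addressing are assumptions):

```
interface Ethernet1/1
  description OTV join interface towards IP core
  ip address 192.0.2.1/30
  ip igmp version 3          ! required so the join interface can send SSM joins for the data groups
  no shutdown
```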

5. Configure Overlay interface.
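A sketch of the overlay interface for a multicast transport (group addresses and VLAN range are assumptions):

```
interface Overlay1
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1      ! ASM group used for neighbor discovery / control plane
  otv data-group 232.1.1.0/28      ! SSM range used to carry site multicast traffic
  otv extend-vlan 100-110          ! VLANs bridged over the overlay
  no shutdown
```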

The other OTV edge devices are configured in the same way, each with its own addressing and with a different site identifier per data center.

6. Please make sure the multicast transport is configured correctly, keeping the following in mind: PIM must be enabled along the path between the join interfaces, the control group is joined via ASM (so an RP is required for it), and the data groups use SSM, which is why the join interface needs IGMPv3.
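A sketch of the transport-side multicast configuration on the core routers (NX-OS style; the RP address, group ranges, and interface are assumptions):

```
feature pim
ip pim rp-address 10.0.0.100 group-list 239.1.1.0/24   ! RP covering the ASM control group
ip pim ssm range 232.0.0.0/8                           ! SSM range covering the OTV data groups
interface Ethernet1/1
  ip pim sparse-mode                                   ! enable PIM on links along the path
```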

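Some useful OTV verification and troubleshooting commands as a starting point:

```
show otv overlay 1        ! overlay status, join interface, group addresses
show otv adjacency        ! OTV neighbors discovered over the overlay
show otv site             ! site adjacency and AED role of the local device
show otv vlan             ! extended VLANs and which device is AED for each
show otv route            ! MAC routes learned via the IS-IS control plane
show otv arp-nd-cache     ! snooped ARP/ND entries answered locally
```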

How to configure OTV when we don’t have multicast supported transport?
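When the transport cannot deliver multicast, OTV runs in unicast-only mode using an adjacency server: one edge device (ideally two, for redundancy) acts as the adjacency server, and every other edge device registers with it and learns the full list of neighbors from it. A sketch (addresses and VLAN range are assumptions):

```
! On the adjacency-server edge device
interface Overlay1
  otv join-interface Ethernet1/1
  otv adjacency-server unicast-only
  otv extend-vlan 100-110
  no shutdown

! On all other edge devices
interface Overlay1
  otv join-interface Ethernet1/1
  otv use-adjacency-server 192.0.2.1 unicast-only   ! IP of the adjacency server's join interface
  otv extend-vlan 100-110
  no shutdown
```

Note that in unicast-only mode every edge device head-end replicates BUM traffic to each neighbor, so this mode is best suited to deployments with a small number of sites.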


