
What your DWDM vendor did not tell you about latency for your Data Center Interconnect


Are you concerned about latency in your data center links running over an optical network?

Or are you planning Data Center Interconnectivity in the near future and want to know the impact of the delays involved? Then read on for some useful tips to know your options.

Chances are you are already using DWDM to connect data centers, but there are a few things you should be aware of as an operator regarding the sources of delay in your optical links, in order to either mitigate them or at least dimension your network properly for delay. As they say, not all DWDM systems are equal when it comes to latency, so it helps to know what choices are available when comparing DWDM systems.

In this era of cloud networks, it would be naive to ignore the impact of delay in the optical networks that run those clouds. The impact of delay on financial markets that do High Frequency Trading (HFT) can be considerable and far-reaching; a fraction of a millisecond can affect revenue, and as per one estimate, these fractional delays can add up to a difference in revenue of as much as 100 million USD over a year.

Let's see what these factors are and what measures can be taken to deal with them. Perhaps your DWDM vendor has not told you about them yet.


1. Fiber Delay


Delay Impact = 5 microseconds/km

The speed of light in vacuum is 299,792,458 meters/second, which equates to roughly 3.34 microseconds/km. Since optical fiber has a higher refractive index than vacuum (about 1.47), light travels more slowly in it, and the latency comes to about 5 microseconds/km.

So if there are 100 km of fiber between two network elements, the delay would be 500 microseconds (this does not include the delay of the network elements themselves).
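The rule of thumb above can be checked with a few lines of arithmetic. This is a minimal sketch, assuming a typical silica-fiber refractive index of about 1.47; the function name is illustrative, not from any library:

```python
# Fiber propagation delay: light in fiber travels at roughly c / n,
# where n is the fiber's refractive index.
C_VACUUM_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s
FIBER_INDEX = 1.47               # typical refractive index of silica fiber (assumption)

def fiber_delay_us(length_km: float) -> float:
    """One-way propagation delay in microseconds for a fiber of given length."""
    speed_km_per_s = C_VACUUM_KM_PER_S / FIBER_INDEX
    return length_km / speed_km_per_s * 1e6

print(round(fiber_delay_us(100)))  # ~490 us, close to the 500 us rule of thumb
```

The exact figure depends on the fiber type, which is why 5 microseconds/km is used as a convenient planning number.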

Not surprisingly, fiber delay is the biggest culprit and contributes the major share of delay in an optical network, and it has an impact no matter which DWDM system you use. A long fiber link causes more delay than a short one. Little can be done to mitigate the latency owing to fiber, except selecting the shortest route if that is an option. If two diverse routes are available between two stations, the operator is advised to use the shorter route as the primary path and the longer one as the secondary or backup path. If new data center locations are being planned, they can be selected strategically to minimize the latency owing to the fiber itself.

While calculating the latency coming from fiber, optical lengths should be considered, not geographical lengths. Operators should keep good records of fiber and optical distances to dimension the delay properly. A good OTDR trace is very handy for knowing the optical length of a route.


Recommendation :

Wherever possible, use the shorter distance for connectivity. Avoid using longer routes.


2. Transponding/Muxponding including FEC


Delay Impact = 10 microseconds to a few hundred microseconds (depending on make and model)

Also called “color conversion”. Several processes are involved in transponding/muxponding, among them:

  • Data encapsulation
  • Forward Error Correction (FEC)
  • Performance monitoring

Of these, forward error correction is perhaps the biggest culprit. The delay due to it ranges from 15 to 100 microseconds.

A word about FEC:

Low OSNR is directly related to poor BER performance in DWDM networks. FEC, or Forward Error Correction, is a method used to achieve coding gain at higher bit rates. It encodes the optical signal with extra error detection and correction overhead bytes, enabling optical receivers to detect errors and correct them. FEC can thus reduce the BER and effectively increase the distances reachable by high-speed signals without regeneration.

Forward Error Correction is a must-have feature when going longer distances (several hundred kilometers), but it does not bring considerable value at shorter distances (metro links). Some models of transponders/muxponders can switch off FEC. Other models are specially designed without a FEC option and optimized for low latency. Some vendors offer transponders with a “no-FEC” option for shorter distances that can give ultra-low latency of a few nanoseconds. They claim they can carry the signal in its native form without processing, and recommend these for data center connectivity.


Recommendation :

Whenever possible, use a latency-optimized transponder/muxponder from your vendor. Check whether the vendor offers such a model. For greenfield deployments, make this a benchmark for evaluating which vendor to go with, by comparing all the transponder/muxponder variants the vendors have to offer; not all vendors provide these kinds of variants. If possible, go without FEC for shorter distances.
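To see why FEC matters so much on short links, compare a metro span with and without FEC processing. A minimal sketch, using only the illustrative figures from this post (fiber: 5 µs/km; FEC: up to 100 µs), not any vendor's datasheet:

```python
# Rough one-way link-latency comparison for a short metro span.
FIBER_DELAY_US_PER_KM = 5.0  # planning figure from the fiber-delay section

def link_latency_us(length_km: float, fec_delay_us: float = 0.0) -> float:
    """One-way latency: fiber propagation plus transponder FEC processing."""
    return length_km * FIBER_DELAY_US_PER_KM + fec_delay_us

span_km = 40  # hypothetical metro DCI link
print(link_latency_us(span_km))                     # 200.0 us with a no-FEC transponder
print(link_latency_us(span_km, fec_delay_us=100))   # 300.0 us with worst-case FEC delay
```

On a 40 km link, worst-case FEC processing adds 50% to the one-way latency, which is why disabling FEC can pay off at metro distances while remaining essential at long haul.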


3. Native Optical


Always remember a simple rule of thumb: the less an optical signal is processed, the better for latency. Said another way, the lower we are in the OSI layers, the better it is for latency.

So when it comes to latency:

  • Layer 2 switches perform better than routers
  • OTN/SDH performs better than switches
  • DWDM performs better than OTN/SDH (DWDM is also called Layer 0)

Simple: higher layers have to process overhead at their respective layers, and that adds to latency.

This does not mean we should start thinking of removing switches and routers; they are not replaceable. However, if traffic is already processed at Layer 2 or Layer 3 at the point of origin and there is no need to re-process it in the middle of the network, why not pack the traffic into ODUs (OTN) and transport it to the destination? Better still, if one can avoid processing at the OTN layer, use native DWDM altogether (OTN also adds overhead). A few vendors offer latency-optimized short-reach transponders without OTN processing.

Recommendation :

Be as close to L0 as possible


4. Dispersion Compensation


Delay due to a DCF-based Dispersion Compensation Module (DCM) = 5 to 100 microseconds

Delay due to a Fiber Bragg Grating (FBG) based DCM = a few nanoseconds

Are you obliged to use dispersion compensation ?

Perhaps you are running 10G lambdas, so you are already using some sort of dispersion compensation in your network.

Check for dispersion compensation modules using Fiber Bragg Gratings (FBG). They are the latest type used to compensate for dispersion-related issues and are much more compact than the earlier modules using Dispersion Compensating Fiber (DCF), which were long spools of fiber, usually about 20% of the length of the fiber they compensate. The delay due to an FBG is negligible compared to that of a DCF.
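The 20% rule of thumb above makes the DCF penalty easy to estimate: a DCF spool is itself fiber, so it adds propagation delay of its own. A minimal sketch, assuming the figures quoted in this post (5 µs/km of fiber, DCF length ≈ 20% of the compensated span):

```python
# Extra delay contributed by a DCF-based DCM, which is a spool of fiber
# roughly 20% as long as the span it compensates; an FBG-based DCM adds
# only nanoseconds by comparison.
FIBER_DELAY_US_PER_KM = 5.0
DCF_LENGTH_RATIO = 0.20  # DCF length as a fraction of the compensated span (assumption)

def dcf_dcm_delay_us(span_km: float) -> float:
    """Extra delay from a DCF-based DCM compensating a span of given length."""
    return span_km * DCF_LENGTH_RATIO * FIBER_DELAY_US_PER_KM

print(dcf_dcm_delay_us(80))  # 80.0 us extra for an 80 km span; an FBG would add ~ns
```

In other words, a DCF-based DCM can add the equivalent of 20% more fiber to the link latency, which is why FBG-based DCMs are the better choice when low latency matters.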

Recommendation :

Try to remain DCM-free when deploying greenfield DWDM. If DCMs are still needed because of 10G lambdas, opt for Fiber Bragg Grating DCMs to reduce latency.



As can be seen, multiple factors contribute to latency. There may be other components in a DWDM network that cause some latency, such as amplifiers and WSSs, but their impact is not as considerable as that of the factors listed above. A well-engineered network has to take care of all these factors to reduce latency. Here is a summary of recommendations for a low-latency data center interconnect.

a. Optimize fiber routes. Use the shortest path for the main traffic in order to keep latency low.

b. Try to be as close to L0 as possible, and as native as possible. The best is to use DWDM in order to remove the delays caused by the extra overheads at upper layers.

c. Check for latency-optimized transponders/muxponders from vendors, which can bring the delay down to the range of nanoseconds. Disable FEC if it is not needed; FEC causes additional delay.

d. Use Fiber Bragg Grating based DCMs instead of DCF-based ones.
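The recommendations above can be pulled together into a simple one-way latency budget. This is an illustrative sketch only, using the per-component figures from this post; the link parameters are hypothetical, not measurements of any vendor's equipment:

```python
# One-way latency budget for a DCI link: fiber propagation plus
# transponder/FEC processing plus dispersion compensation.
def latency_budget_us(fiber_km: float, fec_us: float, dcm_us: float) -> float:
    """Sum the three main latency contributors discussed in this post."""
    return fiber_km * 5.0 + fec_us + dcm_us  # 5 us/km fiber rule of thumb

# Hypothetical 60 km link, latency-optimized no-FEC transponder, FBG DCM
# (negligible delay):
print(latency_budget_us(60, fec_us=0, dcm_us=0))     # 300.0 us
# Same link with a conventional FEC transponder and a DCF-based DCM:
print(latency_budget_us(60, fec_us=100, dcm_us=60))  # 460.0 us
```

Even on a modest 60 km link, following the recommendations roughly halves the equipment-induced share of the delay; the fiber itself remains the floor that only route selection can lower.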

Finally, I would love to hear what steps you have taken to optimize your data center interconnect.




36 thoughts on “What your DWDM vendor did not tell you about latency for your Data Center Interconnect”

  1. Latency is one of the major factors affecting data traffic. I totally agree with your points on what affects latency directly: DCMs, long optical routes, and the switches/routers themselves.

    Very nice article, with many thanks.

  2. A very good attempt to reveal the underlying inherent reasons for signal delays in multi-layered optical networks. I would like to suggest two more sources of delay: one related to the DSP (Digital Signal Processing) in coherent transmission, and a second due to the encryption and decryption processes at the source and destination ends for information security.

    Thanks for bringing this hidden subject to light anyway.

    1. Thanks Shaheen,

      Agree with you. Coherent cards give more latency than non-coherent ones, and the same goes for encryption. However, some vendors are offering encryption at L0, where the delay is much lower compared to traditional encryption at higher layers.


      1. .. but coherent also has the benefit to help you get rid of the dispersion compensation fiber so, especially for longer links, coherent transmission could help to reduce latency (and help a lot). Compliments for your nice post!


        1. Thanks Giacomo for stopping by and compliments,

          Sure, coherent is the default choice for long distances because of its ability to cope with any type of dispersion. There are no two views on that.


  3. I am currently working for a telecommunications hardware vendor, and having created several extremely low-latency designs, I have learned that this type of network is very different from the traditional designs we create. If “lowest possible latency” is a concern to you in your transport network, I would encourage you to make that clear to your hardware vendor up front. The design considerations that Mr. Khan identifies above are exactly on target, but are also fundamental to a network, which means they are VERY difficult to change late in the design phase or in the deployment phase (without starting from scratch). The nice thing about WDM transport networks is that it is fairly easy to predict the impact to latency, so open the conversation early!

    1. Thanks Greg,

      Good point, that: “open the conversation early” with your vendor. As you said, it is difficult if not impossible to change late in the deployment phase, so it is better to start talking about latency in the early stages.


  4. GREAT article for background of issue of latency but the first (and maybe most prevalent) ‘Recommendation’ of ‘wherever possible use shorter distance for connectivity. Avoid using longer routes’ is kind of like telling a trucking company to not take business to save on burning fuel and wear and tear on the trucks.

    My opinion is to use a technology that *isn’t affected by latency* — and in the case of optical telecommunications that means ‘free-space’. I know that past options of ‘free space optics’ have all had issues with propagation — maintaining signal integrity while dealing with various weather conditions — but the field is progressing.

    As a disclaimer our firm Attochron has developed the first wireless laser telecommunications technology that can maintain link availability in the weather conditions that affect ‘FSO’ — namely fog and clear air turbulence (over the distances that matter in key markets of ‘backhaul, enterprise connectivity, data center links and even ‘Laser SATCOM’). Cheers.

  5. Good tips on reducing latency Faisal. Coherent technology would help minimize latency as it eliminates external DCM for wavelengths at 40 & 100G. However, more popular 10G still remains non-coherent if I’m not mistaken. It’s interesting to note that microwave links actually operate at the ‘speed of light’ in free space unlike light bouncing along through the fibre core. As a result microwave has around 5.4 µs / mile as opposed to 8 µs / mile latency for lit fibre.

    1. Thanks Bynoe Ghazi! Good insights on microwave, and yes, 10G is always non-coherent!
      A word though: coherent eliminates DCMs but involves more processing than non-coherent optics, somewhat offsetting the latency benefits gained.

  6. Sir, very nice article; I came to know many things related to latency. I would like to ask whether fluctuation could occur due to this latency. We are using non-coherent technology with DCMs in a 10G network, and we frequently observe fluctuation on routers where transmission is through DWDM. Kindly shed some light on this.

    1. Most likely it is due to different reasons not related to DWDM. However, I do recommend checking the PMD if your fiber is very old.




  8. A very nice article which provides insight into the latency issues to be taken care of during data centre interconnect design. It will really help not only with data centre design but also with managing terrestrial DWDM network design and implementation.

    If there are two routes in a terrestrial / data centre DWDM network, what latency difference should be maintained between the two routes? Normally in network architecture, IP/MPLS backbone links are redundant to each other, with load balancing between the two links configured in the routers. IP engineering likes to configure redundancy at the IP level and normally does not like transmission-level protection switching.

    Which would be the best approach for IP/MPLS links?
    1. Protection switching in the transmission network, with client-side protection to the routers?
    2. Or providing 2x IP/MPLS links in parallel with load balancing in the router?

    1. Thanks Abdul Rauf. Would you really carry data center traffic over IP/MPLS? Why not carry it over DWDM? If you really would like to carry it over IP/MPLS, then there is not a big difference between keeping protection at the router or at the transmission layer for the application you mentioned. Personally, I would keep it at the transmission layer, as it is more predictable.

      1. Very true. Keeping switching in transmission is the best, while providing client-side protection to the router.

  9. Thanks for the article, Faisal Khan. It's certainly not rocket science, but still we need to be reminded from time to time to do things right.

    Regarding Abdul Rauf mentioning MPLS, I agree that this is a lot worse than using some WDM transport. Still, it might be needed if fibre isn't available or is too expensive. In the MPLS case, though, we need to consider much larger delays: handling packets in a router, selecting the right MPLS VPN to use, and moving packets through the network hop by hop until the end router delivers them. And you can't even be sure the route is delay-optimized; MPLS providers tend to ‘traffic engineer’ according to different rules, like how much you want to pay for the transport.

    I do wonder, Faisal, if you have figures for optical amplifiers as well or if those really should be considered ‘instant’.

    Any way – again thanks for your article

    Jan Ferré

  10. Vaibhav Salgaonkar

    A very informative article. I have taken up a project of reducing the latency of an existing transmission link whose fiber route distance is about 30 km. Is it possible to achieve 1 ms latency on this link? I have already suggested using DWDM transport rather than SDH/OTN.

  11. Dear Faisal,
    We are concerned whether the latency generated by OTN switches (O-E-O) will be an issue for LTE networks and data centre interconnect deployments. Is OTN switch latency so high that it overshadows OTN solutions altogether? Are there any instances of operators or companies using OTN switches for LTE or DCI?

    Looking for reply.

    1. No, an OTN switch does not give very high latency, as it is still Layer 1, and you can use them. It is just that, between DWDM and an OTN switch, DWDM is better.

  12. Hello,
    Thank you very much for this very nice and to the point article!
    Could you please provide a few references for this article, so I can cite them in a paper?
    The other question is: can I assume a total of 10 ms of delay for dropping a lambda + O-E-O + adding a lambda for color conversion purposes?
    Best regards.

    1. Hi Yashar,

      Thanks for reaching out. This is quite an old article; sorry, I don't have the references. 10 ms seems a pretty huge latency for the scenario you described. The latest transponders have latencies in the range of a hundred to a few hundred microseconds.

      1. Agreed. A 10 Gbps O-E-O regen should be possible in ~10-15 µs with low-latency transponders (i.e. without a lot of fancy features, trading off those features for latency).

    1. I don’t think that I can agree with Faisal’s comment. The index of refraction for fiber is about 1.4475 and for air is about 1.0003, meaning that microwaves in air should be much faster than light in fiber. Of course, other factors that would impact the overall system latency include the transmitting/receiving electronics, how many repeater stations (or amplifiers, etc.) would be required for each system, etc. However, most of the research I have done leads me to the conclusion that microwave can be much lower latency than a similar fiber system. The tradeoff is that microwave is a shared media (RF spectrum) and does not have near the throughput that a similar fiber system can have (e.g. uW: 2-5Gbps across 3-5km vs Fiber: 19.2Tbps across 500km).

  13. Hi,

    I want to know: what are the advantages of using DWDM over IP/MPLS in transport networks?
    In what scenarios is DWDM preferred over MPLS?
