I was thinking about latency, and how we often blame our code or database when an app feels slow. But sometimes the real issue isn’t our engineering at all. It’s simply physics.

Imagine a server running in Atlanta, and a user hitting it from Chennai, India. Even if our backend is super optimized, the data still has to travel across the planet.

The Part We Usually Forget

Data travels through fiber at about 200,000 km per second. That is roughly two-thirds the speed of light.

If we convert that:

  • 1 second = 1,000 milliseconds
  • Time per km = 1,000 ms ÷ 200,000 km = 0.005 ms per km

So for every 1 km of distance, we already spend 0.005 ms.

Now multiply that by the Atlanta-to-Chennai distance, roughly 14,594 km: 14,594 km × 0.005 ms/km ≈ 73 ms one way.

And that’s in a perfect, impossible, straight-line world.
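The arithmetic above can be sketched in a few lines of Python (the 14,594 km figure is the distance used in this post; fiber speed is the usual ~200,000 km/s approximation):

```python
# Propagation speed of light in optical fiber (~2/3 of c in vacuum)
FIBER_SPEED_KM_PER_S = 200_000

def one_way_delay_ms(distance_km: float) -> float:
    """Ideal straight-line propagation delay in milliseconds."""
    return distance_km / FIBER_SPEED_KM_PER_S * 1_000

# Atlanta -> Chennai, roughly 14,594 km
print(round(one_way_delay_ms(14_594)))  # ~73 ms
```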

The Reality of Networking

In real networks, the overhead adds up quickly:

  • Physical Routing: Cables run across oceans in curved paths, not straight lines.
  • Hardware Hops: Data must pass through various routers, switches, and firewalls.
  • Congestion: Network traffic adds further delays.

So instead of ~73 ms one way, the actual Round Trip Time (RTT) is often between 180 and 300 ms.
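One way to see how the ideal number grows into that range is a back-of-the-envelope model. The path-inflation factor, hop count, and per-hop cost below are illustrative assumptions, not measured values:

```python
def estimated_rtt_ms(distance_km: float,
                     path_inflation: float = 1.5,  # assumed: cables curve, routes detour
                     hops: int = 20,               # assumed: routers/switches on the path
                     per_hop_ms: float = 0.5) -> float:
    """Rough RTT: doubled one-way propagation over an inflated path,
    plus a small per-hop processing/queuing allowance."""
    one_way_ms = distance_km * path_inflation / 200_000 * 1_000
    return 2 * one_way_ms + hops * per_hop_ms

# Atlanta <-> Chennai with the assumptions above
print(round(estimated_rtt_ms(14_594)))  # lands in the 180-300 ms range
```

Tweaking the inflation factor or hop budget moves the estimate around, but the floor set by 2 × 73 ms of pure propagation never goes away.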

Solving for Distance

The point is simple: No matter how fast we write our code, we cannot change the speed of light. Distance always wins.

What we can do is run our services closer to the people using them:

  • If users are in India, run workloads in India.
  • If they are in North America, keep workloads in the US/Canada.
  • If they are in Europe or APAC, deploy there.
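As a sketch of that placement logic, here is a minimal nearest-region picker using great-circle distance. The region names and coordinates are hypothetical examples, not any provider's actual catalog:

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical region catalog: name -> (lat, lon), coordinates approximate
REGIONS = {
    "us-east (Atlanta)": (33.75, -84.39),
    "eu-west (Frankfurt)": (50.11, 8.68),
    "ap-south (Mumbai)": (19.08, 72.88),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # 6371 km = mean Earth radius

def nearest_region(user_latlon):
    return min(REGIONS, key=lambda r: haversine_km(user_latlon, REGIONS[r]))

# A user in Chennai should be served from the closest region
print(nearest_region((13.08, 80.27)))  # ap-south (Mumbai)
```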

This isn’t about fancy software optimization; it’s about respecting physics and reducing how far data has to travel.

A Note on Trade-offs

It is worth mentioning that multi-region or edge deployment is not something we enable casually. It brings extra cost and significant operational work. Usually, it only makes sense when the latency improvement truly matters to our users or to the business's bottom line.