Latency is the time, measured in milliseconds, that a signal takes to make a round trip across a network. Latency often has little to do with the physical distance a signal travels; it is affected more by the number and type of nodes the signal passes through. A satellite connection, for instance, often adds a great deal of latency because of the time it takes a signal to travel to the satellite and back. The latency between two geographically close locations on the Internet can be very high if one of them uses a satellite link.
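A rough way to observe round-trip latency yourself is to time how long a TCP connection takes to be established, since the handshake requires at least one full round trip. The sketch below does exactly that; the host name and sample count are illustrative, not prescribed by any standard.

```python
import socket
import time

def measure_latency_ms(host: str, port: int, samples: int = 3) -> float:
    """Average TCP connect time in milliseconds (a rough proxy for RTT)."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        # create_connection blocks until the three-way handshake completes,
        # which takes at least one network round trip.
        with socket.create_connection((host, port), timeout=5):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    return sum(timings) / len(timings)
```

Calling `measure_latency_ms("example.com", 443)` against a nearby terrestrial server typically reports tens of milliseconds; against a geostationary satellite link the same call can report 500 ms or more.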

Latency is a major factor in TCP/IP performance. TCP, the transport protocol of the TCP/IP suite, provides end-to-end error checking so that data is exchanged reliably and in sequence.

TCP ensures the ordered delivery of a stream of bytes by controlling segment size, flow, and data exchange rate. If IP packets are lost or delivered out of order, TCP requests retransmission of the lost packets and puts out-of-order packets back in sequence before passing the data to the application layer.
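The reordering step can be sketched in a few lines. This is not TCP's actual implementation, just a minimal illustration of the idea: each segment carries the byte offset of its payload, the receiver buffers whatever arrives, and bytes are released to the application only once every gap before them is filled.

```python
def reassemble(segments):
    """Reassemble (offset, payload) segments delivered out of order.

    Returns the contiguous in-order bytes ready for the application,
    plus the offsets of any buffered segments still blocked by a gap.
    """
    buffered = {seq: data for seq, data in segments}  # receive buffer keyed by byte offset
    stream = bytearray()
    expected = 0  # next byte offset the application is waiting for
    while expected in buffered:
        data = buffered.pop(expected)
        stream += data
        expected += len(data)
    return bytes(stream), sorted(buffered)

# Two segments arrive in the wrong order; reassembly restores the stream.
data, missing = reassemble([(5, b"world"), (0, b"hello")])
```

If the first segment had been lost, `reassemble([(5, b"world")])` would deliver nothing and report offset 5 as buffered but blocked, which is when real TCP asks for a retransmission.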

TCP is a methodical, reliable protocol that continually probes the state of the network and reduces its data transfer rate when it detects congestion. Because TCP is so sensitive to network conditions, latency is a critical factor in how well it performs. As latency increases, TCP dramatically slows its rate of data transfer, and traffic also becomes more "bursty" in nature.
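The latency penalty has a simple arithmetic core: TCP can keep at most one window of unacknowledged data in flight per round trip, so throughput is capped at window size divided by RTT no matter how fast the link is. A quick back-of-the-envelope calculation (the window and RTT values below are illustrative):

```python
def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on TCP throughput in megabits per second.

    At most one window of unacknowledged data can be in flight per
    round trip, so throughput <= window / RTT. Doubling the latency
    halves the ceiling, regardless of link bandwidth.
    """
    bytes_per_second = window_bytes / (rtt_ms / 1000.0)
    return bytes_per_second * 8 / 1_000_000

# A classic 64 KiB window over a 20 ms terrestrial path vs. a 600 ms satellite path:
terrestrial = max_tcp_throughput_mbps(65_535, 20)   # about 26 Mbit/s
satellite = max_tcp_throughput_mbps(65_535, 600)    # under 1 Mbit/s
```

The same connection that moves roughly 26 Mbit/s at 20 ms of latency is capped below 1 Mbit/s at 600 ms, which is why satellite links feel slow even when their raw bandwidth is high.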

Because TCP is so affected by latency, it's important to reduce network latency whenever possible. When latency can't be reduced, its effects on TCP can be largely overcome with a WAN optimizer, which tunes an Internet connection to counteract them.