Abstract
In data center networks, many network-intensive applications rely on large fan-in, many-to-one communication to achieve high performance. However, such traffic patterns, characterized by micro-bursts and high concurrency, readily trigger the TCP Incast problem and severely degrade application performance. To address TCP Incast, we first show, both theoretically and empirically, that alleviating packet burstiness is far more effective at reducing the Incast probability than controlling the congestion window. Motivated by these findings and insights from our experimental observations, we propose a general supporting scheme, Adaptive Pacing (AP), which dynamically adjusts burstiness according to the flow concurrency without requiring any switch modifications. Additionally, we propose a sender-based approach to estimate the flow concurrency. Another feature of AP is its broad applicability: we integrate AP transparently into different TCP protocols (i.e., DCTCP, L2DCT, and D2TCP). Through a series of large-scale NS2 simulations and testbed experiments, we show that AP significantly reduces the Incast probability across these TCP protocols and consistently increases network goodput, by 7x on average under severe congestion.
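The core idea summarized in the abstract, spreading each sender's transmissions more thinly as fan-in grows, can be illustrated with a minimal sketch. All names and constants below are illustrative assumptions for exposition, not the paper's actual algorithm: classic TCP pacing spreads a congestion window evenly over one RTT, and the "adaptive" part stretches that gap with the estimated flow concurrency so the aggregate arrival rate at the bottleneck switch stays roughly constant.

```python
def pacing_interval(rtt_s: float, cwnd_pkts: int, concurrency: int) -> float:
    """Illustrative per-sender inter-packet gap in seconds.

    Baseline pacing spreads cwnd packets evenly across one RTT;
    scaling the gap by the estimated number of concurrent flows keeps
    the combined burst arriving at the shared switch buffer roughly
    independent of the fan-in degree (a hypothetical simplification).
    """
    base_gap = rtt_s / max(cwnd_pkts, 1)   # classic pacing: cwnd spread over RTT
    return base_gap * max(concurrency, 1)  # stretch the gap under high fan-in
```

Under this toy model, doubling the number of concurrent senders doubles each sender's inter-packet gap, so the bottleneck sees the same aggregate packet rate instead of a micro-burst.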
| Original language | English (US) |
| --- | --- |
| Article number | 9216628 |
| Pages (from-to) | 134-147 |
| Number of pages | 14 |
| Journal | IEEE/ACM Transactions on Networking |
| Volume | 29 |
| Issue number | 1 |
| DOIs | |
| State | Published - Feb 2021 |
Bibliographical note
Publisher Copyright: © 1993-2012 IEEE.
Keywords
- Data center
- Incast
- TCP
- congestion control
- pacing