Thursday, May 30, 2013

Tweaking TCP buffer sizes - Improving performance of network-intensive applications over a high-latency network

An application built over a low-latency network may perform very well, yet over a high-latency network the read() or write() system calls can become the bottleneck for that same application.

Consider a network-intensive application running on server A. This server continuously generates logs and sends them over the network to another server B. Server B may be slow (low computational power), or the network may have high error rates or latency, causing A's write buffer to saturate. As a result server A stalls: its write buffer is full, but B is not yet ready to read the data.
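When this happens, the backlog is visible on server A itself. A quick sketch using ss from iproute2 (port 5044 here is purely illustrative, standing in for whatever port the log stream uses):

```shell
# Inspect established TCP sockets; Send-Q is the number of bytes sitting in
# the kernel's write buffer that the peer has not yet acknowledged.
# A Send-Q that stays large means the buffer is saturated and write()
# on server A is about to block. (Port 5044 is an illustrative placeholder.)
ss -tn state established '( dport = :5044 )'
```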

By default, the Linux kernel configures each TCP socket with roughly a 64-80 KB read/write buffer.

These values can be inspected with:

$sudo su
#sysctl -a | grep net | grep mem

net.ipv4.tcp_mem = 93132        124179  186264
net.ipv4.tcp_wmem = 4096        16384   3973728
net.ipv4.tcp_rmem = 4096        87380   3973728

The three numbers represent the minimum, default, and maximum buffer sizes. For tcp_rmem and tcp_wmem they apply per TCP socket and are measured in bytes; tcp_mem, by contrast, is a system-wide limit measured in memory pages.
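As a quick sanity check (a sketch, not a tuning recipe), the per-socket limits can be read straight from /proc and used to estimate the worst-case memory footprint for a given connection count; the 1000-socket figure below is an arbitrary example:

```shell
# Read the min/default/max receive-buffer sizes (in bytes) for TCP sockets.
read RMIN RDEF RMAX < /proc/sys/net/ipv4/tcp_rmem
echo "receive buffer: min=${RMIN} default=${RDEF} max=${RMAX} bytes"

# Rough worst case: 1000 concurrent sockets all grown to the maximum size.
echo "1000 sockets at max: $(( RMAX * 1000 / 1024 / 1024 )) MiB"
```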

To solve our current problem, all we need to do is increase the default buffer size to a larger value. Increasing the memory associated with a socket can be done as follows:

#sysctl -w net.ipv4.tcp_rmem='4096        3000000   3973728'
#sysctl -w net.ipv4.tcp_wmem='4096        3000000   3973728'

NOTE: This change affects every TCP socket created on the server. Hence, one should be careful not to raise the default buffer size to a very large value, as each new connection would then consume more memory.
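Also note that values set with sysctl -w are lost on reboot. To make them persistent, the same keys can be added to /etc/sysctl.conf (the values below simply mirror the illustrative ones used above):

```shell
# /etc/sysctl.conf
net.ipv4.tcp_rmem = 4096 3000000 3973728
net.ipv4.tcp_wmem = 4096 3000000 3973728
```

Running sysctl -p afterwards reloads the file and applies the settings without a reboot.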


