Solaris Network Performance Tuning: Understanding TCP Window Size

Our Unix admin, John from xyz company, recently upgraded one of his server's network interface cards from an old 100 Mbps NIC to a new 10 Gbps NIC, with the aim of speeding up the server's backups. After the NIC upgrade, John found that network performance improved a little, but the server was still unable to utilize all of the available network bandwidth.

After a bit of investigation, John realised that his server's TCP window size had to be tuned to utilize the maximum network bandwidth available to the server. In this post we will discuss how John tuned the TCP window size to increase his server's network performance.

Tuning the TCP window size involves three TCP parameters, as listed below:

tcp_xmit_hiwat   ( TCP transmit window size )

tcp_recv_hiwat   ( TCP receive window size )

tcp_max_buf      ( TCP maximum buffer size )

  

If we assume that a TCP connection on a server is similar to a car parking space in a building, then the parameters tcp_xmit_hiwat (TCP transmit buffer), tcp_recv_hiwat (TCP receive buffer) and tcp_max_buf (TCP maximum buffer) act as the parking exit, the parking entry, and the actual parking space, respectively. We know very well that the time required to park a car always depends on the capacity of the entry and exit gates; it does not matter how much parking space the building has. In the same way, unless we have a proper TCP window size for both the send and receive buffers, we cannot achieve the desired network performance or utilize all of the available network bandwidth.

[Diagram: purpose of the TCP send, receive, and max buffer parameters]
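How large should the window be? A useful rule of thumb is the bandwidth-delay product (BDP): the link speed multiplied by the round-trip time. The figures below are illustrative examples, not measurements from John's servers:

# Estimate the TCP window needed to fill a link: BDP = bandwidth x RTT.
# Assumed figures: a 1 Gbit/sec path with a 2 ms round-trip time (from ping).
# 1,000,000,000 bits/sec x 0.002 sec / 8 = 250,000 bytes, i.e. ~244 KB,
# which is why 256 KB is a sensible window target later in this post.
echo "1000000000 0.002" | awk '{ bdp = $1 * $2 / 8; printf "BDP = %d bytes (~%d KB)\n", bdp, bdp / 1024 }'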


Warning: Tuning the TCP window size to a large value affects every TCP socket on the system and may consume needless memory, especially if the tcp_recv_hiwat value is set to several megabytes.
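Before raising the defaults, you can roughly bound the memory at risk by multiplying the number of established connections by the proposed per-socket buffer sizes. A quick sketch (the 256 KB figures are the values used later in this post):

# Worst-case estimate if every established TCP connection grew to the
# proposed 256 KB send + 256 KB receive buffers (512 KB per socket).
# -P tcp is the Solaris netstat protocol filter.
CONNS=`netstat -an -P tcp | grep -c ESTABLISHED`
echo "$CONNS" | awk '{ printf "%d connections x 512 KB = ~%d MB of socket buffers\n", $1, $1 * 512 / 1024 }'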

In Solaris, the default TCP transmit buffer in bytes for tcp_xmit_hiwat is:

Solaris 2.6    TCP_XMIT_HIWATER     8192
Solaris 7      TCP_XMIT_HIWATER     8192
Solaris 8      TCP_XMIT_HIWATER    16384
Solaris 9      TCP_XMIT_HIWATER    49152
Solaris 10     TCP_XMIT_HIWATER    49152

And the default TCP receive buffer in bytes for tcp_recv_hiwat is:

Solaris 2.6    TCP_RECV_HIWATER     8192
Solaris 7      TCP_RECV_HIWATER     8192
Solaris 8      TCP_RECV_HIWATER    24576
Solaris 9      TCP_RECV_HIWATER    49152
Solaris 10     TCP_RECV_HIWATER    49152

 

The following commands display the current settings on your machine:

# /usr/sbin/ndd -get /dev/tcp tcp_recv_hiwat

# /usr/sbin/ndd -get /dev/tcp tcp_xmit_hiwat

# /usr/sbin/ndd -get /dev/tcp tcp_max_buf
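Or read all three values in one pass:

# Print all three TCP window parameters in one loop.
for p in tcp_xmit_hiwat tcp_recv_hiwat tcp_max_buf
do
    echo "$p = `/usr/sbin/ndd -get /dev/tcp $p`"
done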

 Use ndd to set these parameters:

# ndd -set /dev/tcp tcp_recv_hiwat 262144

# ndd -set /dev/tcp tcp_xmit_hiwat 262144

# ndd -set /dev/tcp tcp_max_buf 3145728

 

NOTE: Adjustments of TCP socket buffers can be made only up to the value of the tcp_max_buf setting, which is 1048576 bytes (1 MB) by default. To set send or receive buffers larger than that, tcp_max_buf itself must first be raised beyond the 1 MB default to allow for the larger buffers; we'll use 3 MB (3145728 bytes) in this case.
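Also keep in mind that ndd changes take effect immediately but are lost at reboot. A common approach is to re-apply them from an rc script at boot; the script name below is a hypothetical choice, not a Solaris standard:

#!/sbin/sh
# /etc/rc2.d/S99tcptune -- hypothetical rc script that re-applies the TCP
# window tuning at every boot, since ndd settings are not persistent.
# tcp_max_buf is raised first so the hiwat values may exceed the 1 MB default.
ndd -set /dev/tcp tcp_max_buf    3145728
ndd -set /dev/tcp tcp_xmit_hiwat 262144
ndd -set /dev/tcp tcp_recv_hiwat 262144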

 Sample Scenario to Verify the Impact of TCP window size tuning:

The following scenario demonstrates TCP window size tuning and verifies the difference in network transmission with the help of the open-source tool iperf ( http://www.sunfreeware.com/indexintel10.html ). iperf runs in client/server mode. The steps below describe the procedure for using iperf to measure network bandwidth, and a combined sketch follows the list:

1. On one server, start iperf in server mode using the command "iperf -s"

2. On the other server, start iperf in client mode using the command "iperf -c <server_ip> -w <window_size> -n <data_size> -m"

3. Check the transmission speed and the time taken to transfer the test data
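Putting the steps together, a complete round trip looks like this (the IP address matches the runs below; -n 1G sends 1 GB of test data and -m reports the MSS):

# On the Solaris server: listen on the default TCP port 5001.
/usr/local/bin/iperf -s

# On the Linux client: send 1 GB with a requested 256 KB window.
iperf -c 192.168.1.101 -w 256k -n 1G -m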

 

In this scenario, we are using three servers: gurkullinux01, gurkulunix1 and gurkulunix2. Server gurkullinux01 (Linux) is used as the source to transmit test data to both gurkulunix1 (Solaris) and gurkulunix2 (Solaris).

 

 

Initially, we will test the network bandwidth by sending 1 GB of test data from gurkullinux01 (Linux) to both gurkulunix1 (Solaris) and gurkulunix2 (Solaris), recording the transfer time and throughput for each.

Step 1: Check the transfer time and data transfer rate with the default TCP window size on both Solaris machines.

Testing data transfer between gurkullinux01 (Linux) <-> gurkulunix1 (Solaris)

a. Start iperf in server mode on gurkulunix1. Note the default TCP window size (48 KB) reported at the top of the output.

gurkulunix1#  /usr/local/bin/iperf -s

————————————————————

Server listening on TCP port 5001

TCP window size: 48.0 KByte (default)

————————————————————

[  4] local 192.168.1.101 port 5001 connected with 192.168.1.50 port 51087

[ ID] Interval       Transfer     Bandwidth

[  4]  0.0-15.7 sec  1.00 GBytes    549 Mbits/sec

 

b. Start iperf in client mode from gurkullinux01, overriding the TCP window size with 256 KB. Note the requested window size in the client banner, and the transfer time and data transfer rate in the results.

 

[root@gurkullinux01]# iperf   -w 256k -n 1G -mc 192.168.1.101

————————————————————

Client connecting to 192.168.1.101, TCP port 5001

TCP window size:   256 KByte (WARNING: requested   256 KByte)

————————————————————

[  3] local 192.168.1.50 port 51087 connected with 192.168.1.101 port 5001

[ ID] Interval       Transfer     Bandwidth

[  3]  0.0-15.7 sec  1.00 GBytes    549 Mbits/sec

[  3] MSS size 1448 bytes (MTU 1500 bytes, ethernet)

[root@gurkullinux01 Downloads]#

 

Testing data transfer between gurkullinux01 (Linux) <-> gurkulunix2 (Solaris)

Following the same steps that we used for gurkulunix1:

 bash-3.2# /usr/local/bin/iperf -s

————————————————————

Server listening on TCP port 5001

TCP window size: 48.0 KByte (default)

————————————————————

[  4] local 192.168.1.102 port 5001 connected with 192.168.1.50 port 43155

[ ID] Interval       Transfer     Bandwidth

[  4]  0.0-16.5 sec  1.00 GBytes    520 Mbits/sec

 

[root@gurkullinux01]# iperf   -w 256k -n 1G -mc 192.168.1.102

————————————————————

Client connecting to 192.168.1.102, TCP port 5001

TCP window size:   256 KByte (WARNING: requested   256 KByte)

————————————————————

[  3] local 192.168.1.50 port 43155 connected with 192.168.1.102 port 5001

[ ID] Interval       Transfer     Bandwidth

[  3]  0.0-16.5 sec  1.00 GBytes    520 Mbits/sec

[  3] MSS size 1448 bytes (MTU 1500 bytes, ethernet)

[root@gurkullinux01 Downloads]#

 

 

Step 2: Increase the TCP send and receive buffer sizes on gurkulunix1 from 48 KB to 256 KB, and increase the TCP maximum buffer from 1 MB to 4 MB. Note that the TCP send and receive buffer values cannot exceed the size of the TCP maximum buffer.

 

gurkulunix1: bash-3.2# ndd -set /dev/tcp tcp_recv_hiwat 262144

gurkulunix1: bash-3.2# ndd -set /dev/tcp tcp_xmit_hiwat 262144

gurkulunix1: bash-3.2# ndd -set /dev/tcp tcp_max_buf    4194304

gurkulunix1: bash-3.2# ndd -get /dev/tcp tcp_recv_hiwat

262144

gurkulunix1: bash-3.2# ndd -get /dev/tcp tcp_xmit_hiwat

262144

gurkulunix1: bash-3.2# ndd -get  /dev/tcp tcp_max_buf

4194304

 

We are making no changes on gurkulunix2, leaving it with the default TCP window size values:

 

bash-3.2# ndd -get /dev/tcp tcp_recv_hiwat

49152

bash-3.2# ndd -get /dev/tcp tcp_xmit_hiwat

49152

bash-3.2# ndd -get  /dev/tcp tcp_max_buf

1048576

bash-3.2#

 

Step 3: Recheck the transfer time and data transmission rate.

Checking gurkullinux01 (Linux) <-> gurkulunix1 (Solaris)

 

gurkulunix1:bash-3.2# /usr/local/bin/iperf -s

————————————————————

Server listening on TCP port 5001

TCP window size:   256 KByte (default)

————————————————————

[  4] local 192.168.1.101 port 5001 connected with 192.168.1.50 port 51090

[ ID] Interval       Transfer     Bandwidth

[  4]  0.0-12.6 sec  1.00 GBytes    684 Mbits/sec

 [root@gurkullinux01 ]# iperf   -w 256k -n 1G -mc 192.168.1.101

————————————————————

Client connecting to 192.168.1.101, TCP port 5001

TCP window size:   256 KByte (WARNING: requested   256 KByte)

————————————————————

[  3] local 192.168.1.50 port 51090 connected with 192.168.1.101 port 5001

[ ID] Interval       Transfer     Bandwidth

[  3]  0.0-12.6 sec  1.00 GBytes    683 Mbits/sec

[  3] MSS size 1448 bytes (MTU 1500 bytes, ethernet)

 

Checking gurkullinux01 (Linux) <-> gurkulunix2 (Solaris)

 

gurkulunix2: bash-3.2#  /usr/local/bin/iperf -s

————————————————————

Server listening on TCP port 5001

TCP window size: 48.0 KByte (default)

————————————————————

[  4] local 192.168.1.102 port 5001 connected with 192.168.1.50 port 43159

[ ID] Interval       Transfer     Bandwidth

[  4]  0.0-16.3 sec  1.00 GBytes    528 Mbits/sec

 

[root@gurkullinux01 ]# iperf   -w 256k -n 1G -mc 192.168.1.102

————————————————————

Client connecting to 192.168.1.102, TCP port 5001

TCP window size:   256 KByte (WARNING: requested   256 KByte)

————————————————————

[  3] local 192.168.1.50 port 43159 connected with 192.168.1.102 port 5001

[ ID] Interval       Transfer     Bandwidth

[  3]  0.0-16.3 sec  1.00 GBytes    528 Mbits/sec

[  3] MSS size 1448 bytes (MTU 1500 bytes, ethernet)

[root@gurkullinux01 Downloads]#

 

From the transfer times and data transfer rates on the two servers, we can see that network performance improved on the tuned gurkulunix1 (from 549 Mbits/sec in 15.7 seconds to 684 Mbits/sec in 12.6 seconds), while the untuned gurkulunix2 stayed essentially unchanged (520 vs 528 Mbits/sec). This confirms the benefit of the TCP window size tuning.

That's it for the day. Please feel free to share your experience with network performance tuning in the comments.

Ramdev

I started unixadminschool.com (aka gurkulindia.com) in 2009 as my own personal reference blog, and later realized that my learnings might be helpful for other unixadmins if I managed my knowledge base in a more user-friendly format. The result is today's unixadminschool.com. You can connect with me at https://www.linkedin.com/in/unixadminschool/

Responses

  1. Tushar says:

    Hey Mate,

    One quick update: it seems you forgot to add the -get switch after ndd to read the parameter values.

    Below command will display the current settings in your machine,
    # /usr/sbin/ndd /dev/tcp tcp_recv_hiwat
    # /usr/sbin/ndd /dev/tcp tcp_xmit_hiwat

  2. Tushar says:

    Thank you chief, 5 star to gurukulindia.com

  3. Ramdev says:

    @Tushar, awesome.  I have fixed those typos.

  4. Thomas says:

    Hi,

    How do I detect if the tcp buffer is overflowing? Is there anything I can check in netstat ? What are the symptoms?

  5. Thomas says:

    Also, is tcp_max_buf = tcp_xmit_hiwat + tcp_recv_hiwat?

    From your analogy of the parking space, it seems you are saying that the data in the receive and send buffer will go into the space available in the max buffer? How is this so? Can you explain clearly the purpose of tcp_xmit_hiwat and tcp_recv_hiwat?

    • Ramdev says:

      @Thomas –

      I understand your question. These are not three different buffers; there is just one buffer, i.e. tcp_max_buf.
      I have changed the description of these parameters a little to avoid the confusion.

      tcp_xmit_hiwat ( TCP transmit window size )
      tcp_recv_hiwat ( TCP receive windows size)
      tcp_max_buf ( TCP maximum Buffer size)

      tcp_xmit_hiwat & tcp_recv_hiwat are the sizes of the transmit and receive windows, which act as the entrance and exit windows for tcp_max_buf. Hope this helps.

  6. pradeep varma.k says:

    Hi, I am transferring data from a Windows machine (VM) to a Solaris 10 machine (VM) on the same network, but the upload speed is very slow. Can you give me any solution for this?

