20/40/100G Host tuning for improved network performance

Even with the best hardware installed, you will need to tune your host so that its different components perform at full capacity, in unison with the other devices and peripherals of the host. Tuning the host is an important step in maximizing the throughput of your network. Below are the basic steps to tune your 20/40/100G host for improved network performance.

As with every aspect of performance improvement, network tuning requires looking at the hardware as well as the software. On the hardware side, make sure you have compatible components such as the CPUs, the NIC and the memory modules, and make sure the BIOS firmware is up to date.

  • BIOS Settings

    Configure the below settings in the BIOS of the server. A quick way to verify some of them from the OS is shown after the list.

  1. Enable Virtualization (Intel VT technology)

  2. Set CPU Power & Performance to Performance mode

  3. Enable Turbo mode

  4. Disable CPU C-State Control

  5. Set fan control to performance mode, if available
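
As a hedged sketch (paths and tool availability vary by vendor, distribution and CPU driver), you can sanity-check some of these settings from Linux after rebooting:

# lscpu | grep Virtualization
# cpupower idle-info
# cat /sys/devices/system/cpu/intel_pstate/no_turbo

lscpu should report VT-x when virtualization is enabled, cpupower idle-info lists the C-states the kernel can still enter, and a no_turbo value of 0 (present only on systems using the intel_pstate driver) means Turbo mode is allowed.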

  • Memory

Ensure that each memory channel has at least one memory DIMM inserted. For best performance, use the highest memory speed with the fewest DIMMs, and populate all memory channels for every CPU installed. You can check the memory configuration using dmidecode as follows:

# dmidecode -t memory

# dmidecode -t 16
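
For example (an illustrative filter only; slot names and speeds vary by platform), you can narrow the output to spot empty channels and mismatched speeds:

# dmidecode -t memory | grep -E 'Size|Speed|Locator'

Slots that report "Size: No Module Installed" are empty, and all populated slots should show the same, highest supported speed.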

  • PCI slot verification

We will need to verify the PCIe slot to which the NIC is connected. Use PCIe Gen3 slots, such as Gen3 x8 or Gen3 x16; PCIe Gen2 slots cannot provide enough bandwidth for 40G hosts and above. You can use lspci to check the speed of a PCI slot:

# lspci -s 04:00.0 -vvv

(assuming the NIC is at bus address 04:00.0)

Pay attention to PCIe attributes such as the link width and speed reported in the LnkCap and LnkSta fields, as well as any vendor-specific capabilities.
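
As an illustrative sketch (the exact values depend on your NIC and slot), a healthy Gen3 x16 link would look something like this:

# lspci -s 04:00.0 -vvv | grep -E 'LnkCap|LnkSta'
LnkCap: Port #0, Speed 8GT/s, Width x16
LnkSta: Speed 8GT/s, Width x16

If LnkSta reports a lower speed or width than LnkCap, the card has negotiated down and should be moved to a faster slot.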

  • CPU Architecture

You will need a CPU clock rate of at least 3GHz to achieve 40Gbps on a single flow. You can check the current status using cpupower (RHEL) or the cpufreq utilities (Debian):

# cpupower frequency-info

Configure the CPU governor to performance mode on Linux; on Windows servers, select the High Performance power plan in Control Panel.
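
A minimal sketch for Linux, assuming the cpupower utility (RHEL) or the cpufrequtils package (Debian) is installed:

# cpupower frequency-set -g performance
# cpufreq-set -g performance

(Use whichever of the two tools your distribution provides.)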

On a Windows Server, perform the following to get the NUMA node closest to the NIC.

Open a PowerShell window and execute:

PS> Get-NetAdapterRss -Name <Connection Name>
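
On Linux, a rough equivalent (a sketch; eth0 and node 0 are placeholders) is to read the NIC's NUMA node from sysfs and bind your network application to it with numactl:

# cat /sys/class/net/eth0/device/numa_node
0
# numactl --cpunodebind=0 --membind=0 <application>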

OS-level system configuration for 20/40/100G servers

New enhancements in Linux kernels have made tuning much easier in general. Most of the drivers now in use come preconfigured for optimal throughput. However, there are still some OS tweaks we can apply to improve network performance.

As a general approach, the first settings to examine and tune are the kernel parameters in the /etc/sysctl.conf file; you can check the values your system is currently using with sysctl -a.

  • sysctl

TCP uses a "congestion window" to determine how many packets can be sent at once; the larger the congestion window, the higher the throughput. In the Linux kernel, 2GB is the maximum allowable limit. The congestion window and other values can be altered in the sysctl.conf file. For better performance, you can try the settings below and compare the throughput; how to apply them is shown after the list.

net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_low_latency = 1
net.core.netdev_max_backlog = 250000
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
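
To apply the settings, add the lines above to /etc/sysctl.conf and reload; you can then spot-check an individual parameter:

# sysctl -p
# sysctl net.ipv4.tcp_rmem
net.ipv4.tcp_rmem = 4096 87380 16777216
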
  • Ethernet Interface

Make sure the interface is running the latest firmware and driver for optimal performance. Recent driver releases ship with the basic settings already tuned for maximum NIC throughput. However, you will still need to change the MTU to 9000 (jumbo frames) on the interface(s), as in the sketch below.
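
A minimal sketch, assuming the interface is named eth0 (substitute your own device name):

# ethtool -i eth0
# ip link set dev eth0 mtu 9000

ethtool -i reports the driver and firmware versions in use, and ip link sets the MTU at runtime; make the change persistent in your distribution's network configuration, and remember that every device and switch port along the path must also support the larger MTU.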

If the Ethernet interfaces are bonded, you will need to check the bond status and the active bonding mode applied on the server.

You can check this as follows:

# cat /proc/net/bonding/<bond interface>
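
Illustrative output for an 802.3ad (LACP) bond (values will differ on your system):

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up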

  • You can leave the settings below at their default values on a 100G host running a recent Linux release (e.g. CentOS 7). Changing them did not have a considerable impact on throughput with a 100G NIC, but results may vary on a 10G host; a quick way to inspect the current defaults is shown after the list.

Interrupt coalescence
Ring buffer size
LRO (off) and GRO (on)
net.core.netdev_max_backlog
txqueuelen (on a 10G host it can improve throughput to a small extent)
tcp_timestamps
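
As a hedged sketch (again assuming an interface named eth0), the current defaults can be inspected with ethtool and ip:

# ethtool -c eth0
# ethtool -g eth0
# ethtool -k eth0 | grep -E 'large-receive-offload|generic-receive-offload'
# ip link show eth0

ethtool -c shows the interrupt coalescing parameters, -g the ring buffer sizes and -k the offload states, while ip link show prints the device's qlen (txqueuelen).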

Testing Tools

You can use tools like iperf3 and nuttcp for the testing, but be advised that testing hosts which are far apart, or whose connection passes through multiple switches, will not produce ideal results. nuttcp and iperf3 have different strengths, and both are available in the perfsonar-tools package.
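
A minimal example of a memory-to-memory throughput test between two hosts (the hostname, stream count and duration are placeholders):

# iperf3 -s
# iperf3 -c <server> -P 4 -t 30

Run the first command on the server and the second on the client; -P 4 uses four parallel streams and -t 30 runs the test for 30 seconds, since on very fast hosts a single TCP stream often cannot saturate the link.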


If you have any queries on 20/40/100G host tuning, share your thoughts and our representative will get back to you.