In a recent blog post we explained how to tweak a simple UDP application to maximize throughput. This time we are going to optimize our UDP application for latency. Fighting with latency is a great excuse to discuss modern features of multiqueue NICs. Some of the techniques covered here are also discussed in the scaling.txt kernel document.

Our experiment will be set up as follows:

We will have two physical Linux hosts: the 'client' and the 'server'. They communicate with a simple UDP echo protocol. The client sends a small UDP frame (32 bytes of payload) and waits for the reply, measuring the round trip time (RTT). The server echoes the packets back immediately after they are received. Both hosts have 2GHz Xeon CPUs, with two sockets of 6 cores and Hyper Threading (HT) enabled - so 24 CPUs per host. The client has a Solarflare 10Gb NIC, the server has an Intel 82599 10Gb NIC. Both cards have fiber connected to a 10Gb switch.

We're going to measure the round trip time. Since the numbers are pretty small, there is a lot of jitter when counting the averages. Instead, it makes more sense to take a stable value - the lowest RTT from many runs done over one second. As usual, the code used here is available on GitHub: udpclient.c, udpserver.c.

First, let's explicitly assign the IP addresses:

    client$ ip addr add 192.168.254.1/24 dev eth2

Make sure iptables and conntrack don't interfere with our traffic:

    client$ iptables -I INPUT 1 -s 192.168.254.0/24 -j ACCEPT
    client$ iptables -t raw -I PREROUTING 1 -s 192.168.254.0/24 -j NOTRACK
    server$ iptables -t raw -I PREROUTING 1 -s 192.168.254.0/24 -j NOTRACK

Finally, ensure the interrupts of multiqueue network cards are evenly distributed between CPUs. The irqbalance service is stopped and the interrupts are manually assigned. For simplicity, let's pin RX queue #0 to CPU #0, RX queue #1 to CPU #1 and so on:

    client$ (let CPU=0
             cd /sys/class/net/eth2/device/msi_irqs/
             for IRQ in *; do
                 echo $CPU > /proc/irq/$IRQ/smp_affinity_list
                 let CPU+=1
             done)
    server$ (let CPU=0
             cd /sys/class/net/eth3/device/msi_irqs/
             for IRQ in *; do
                 echo $CPU > /proc/irq/$IRQ/smp_affinity_list
                 let CPU+=1
             done)

These scripts assign the interrupts fired by each RX queue to a selected CPU.

Finally, some network cards have Ethernet flow control enabled by default.