19-02-2015, 13:15 #1
[EN] Long story short: TCP/IP stack in Linux kernel is slow
A generally accepted rule of thumb is that 1 hertz of CPU processing is required to send or receive 1 bit/s of TCP/IP.
The last time I looked, kernel-based packet-by-packet processing on a single CPU core topped out at roughly 3 Gbps of throughput at a 1500-byte MTU. Disagree? Write a comment!
An obvious way to increase performance is to bypass the Linux kernel as much as possible. NIC-based TCP offload helps regular TCP-based applications. High-performance solutions use a custom TCP/IP stack (examples: Intel DPDK, 6Wind, A10, Linerate Systems – now F5).
TCP offload engine or TOE is a technology used in network interface cards (NIC) to offload processing of the entire TCP/IP stack to the network controller. It is primarily used with high-speed network interfaces, such as gigabit Ethernet and 10 Gigabit Ethernet, where processing overhead of the network stack becomes significant.
Since Ethernet (10GbE in this example) is full duplex, it is possible to send and receive 10 Gbit/s simultaneously (for an aggregate throughput of 20 Gbit/s). Using the 1 Hz/(bit/s) rule, this equates to eight 2.5 GHz cores.
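A quick back-of-the-envelope check of the figures above (the 1 Hz per bit/s figure is the post's rule of thumb, not a measured constant):

```python
# Sizing CPU requirements with the "1 Hz of CPU per 1 bit/s of TCP/IP" rule.

HZ_PER_BPS = 1.0                # rule of thumb quoted in the post
link_bps = 10e9                 # one direction of a 10GbE link
aggregate_bps = 2 * link_bps    # full duplex: send + receive concurrently

core_clock_hz = 2.5e9           # a 2.5 GHz core, as in the post
cores_needed = aggregate_bps * HZ_PER_BPS / core_clock_hz
print(cores_needed)             # 8.0 cores to saturate 10GbE both ways

# Packet rate implied by the ~3 Gbps single-core kernel figure above:
mtu_bits = 1500 * 8
pps = 3e9 / mtu_bits
print(pps)                      # 250000.0 packets/s on one core
```

Note the rule scales linearly: the same arithmetic says a 1GE link needs roughly one 2 GHz core for full-duplex line rate, which matches the cheap-server anecdotes below.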
Unlike other kernels, the Linux kernel does not include support for TOE hardware.
Much of the current work on TOE technology is by manufacturers of 10 Gigabit Ethernet interface cards, such as Broadcom, Chelsio Communications, Emulex, Mellanox Technologies, QLogic.
19-02-2015, 13:34 #2
This TCP/IP overhead brings to mind the old story from Gabriel of IOFlood that I posted on WHT-BR. To recap: FDCServers offered unmetered traffic on a 1GE connection for a cheap server, but imposed a transfer cap and charged a fortune for overage on its normally priced servers. Gabriel discovered that the cheap server's processor didn't have the computing power to use more than 300 Mbps of the available 1GE.
It stands as a warning for anyone interested in servers with cheap network cards, VPSes hosted on such servers, and back-of-the-garage "SANs" running over TCP/IP.
Last edited by 5ms; 19-02-2015 at 13:40.
19-02-2015, 17:06 #3
On Windows, some system calls and libraries don't support TOE. If it is enabled, some firewalling and stateful-filter features can misbehave (so much so that at an email server software company where I worked, which had its own faster TCP/IP libraries instead of using the system calls, the standing order was to disable TOE)...
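For reference, offload features can be toggled from an elevated prompt. These are standard commands, though availability depends on the OS version and NIC driver, and the interface name `eth0` below is just an example:

```shell
# Windows (older versions): disable TCP Chimney Offload, the TOE
# mechanism discussed above
netsh int tcp set global chimney=disabled

# Linux: the kernel has no TOE support, but the stateless offloads
# (TSO/GSO/GRO) can likewise be turned off per interface if they
# interfere with packet filtering or capture
ethtool -K eth0 tso off gso off gro off

# Inspect the current offload settings for an interface
ethtool -k eth0
```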
22-02-2015, 17:41 #4
Piggybacking on this thread ...
What role for FPGA server co-processors in virtual routing?
Part 2: Accelerating virtual routing functions using FPGAs
IP routing specialists have announced the first virtual edge router products that run on servers. These include Alcatel-Lucent with its Virtualized Service Router and Juniper with its vMX. Gazettabyte asked Alcatel-Lucent's Steve Vogelsang about the impact FPGA accelerator cards could have on IP routing.
The co-processor cards in servers could become interesting for software-defined networking (SDN) and network function virtualisation (NFV).
The main challenge is that we require that our virtualised network functions (vNFs) and SDN data plane can run on any cloud infrastructure; we can’t assume that any specific accelerator card is installed. That makes it a challenge.
I can imagine, over time, that DPDK, the set of libraries and drivers for packet processing, and other open source libraries will support co-processors, making it easier to exploit by an SDN data plane or vNF.
For now we’re not too worried about pushing the limits of performance because the advantage of NFV is the operational simplicity. However, when we have vNFs running at significant scale, we will likely evaluate co-processor options to improve performance. This is similar to what Microsoft and others are doing with search algorithms and other applications.
Note that there are alternative co-processors that are more focused on networking acceleration. An example is Netronome, which makes purpose-built network co-processors for the x86 architecture. I'm not sure how it compares to Xilinx for networking functionality, but it may outperform FPGAs and be a better option when networking is the focus.
Some servers are also built to enable workload-specific processing architectures. Some of these are specialised on a single processor architecture while others such as HP's Moonshot allow installation of various processors including FPGAs.
I don’t expect FPGA accelerator cards will have much impact on network processors (NPUs). We or any other vendor could build an NPU using a Xilinx or another FPGA. But we get much more performance by building our own NPU because we control how we use the chip area.
When designing an FPGA, Xilinx and other FPGA vendors have to decide how to allocate chip space to I/O, processing cores, programmable logic, memory, and other functional blocks. The resulting structure can deliver excellent performance for a variety of applications, but we can still deliver considerably more performance by designing our own chips allocating the chip space needed to the required functions.
I have experience from a previous company that built multiple generations of NPUs using FPGAs, but they could not come close to the capabilities of our FP3 chipset.
Last edited by 5ms; 22-02-2015 at 17:47.