IPsec Benchmark

From frogzie

Impact of IPsec on the Network Throughput

1 Purpose

Quantify the impact of Internet Protocol Security (IKEv2/IPsec) in tunnel mode on network throughput

Test tools

  1. iperf3, a widely used network performance measurement tool
  2. cnxbenchmark, a basic TCP connection speed measurement tool written in C
Usage  
cnxbenchmark - TCP client/server measuring connection speed
Usage
- as a server: cnxbenchmark [-p port]
- as a client: cnxbenchmark [-4|-6] [-p port] [-G msgsizeGiB] server_host
Examples
- Server listening to any connection request on port 4996
        cnxbenchmark -p 4996
- IPv4 Client sending 8 GiB of data to server on same host through port 4996
        cnxbenchmark -4 -p 4996 -G 8 localhost

Note: each of these test tools consists of a server and a client.
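cnxbenchmark itself is written in C, but the core of what both tools measure is simple enough to sketch. Below is a minimal, hedged Python version of the idea: push a known number of bytes through a TCP connection and derive the throughput from the elapsed wall-clock time. The host, port, and transfer size are illustrative, and a loopback drain thread stands in for the remote server so the sketch runs standalone.

```python
# Sketch of the measurement principle behind cnxbenchmark/iperf3 (illustrative,
# not the actual tools): time a fixed-size TCP transfer and compute Mbit/s.
import socket
import threading
import time

def drain(server_sock):
    """Accept one connection and discard everything it sends."""
    conn, _ = server_sock.accept()
    with conn:
        while conn.recv(1 << 16):
            pass

def measure_throughput(host, port, total_bytes):
    """Send at least total_bytes over TCP and return the rate in Mbit/s."""
    buf = b"\xa5" * (1 << 16)
    sent = 0
    t0 = time.monotonic()
    with socket.create_connection((host, port)) as s:
        while sent < total_bytes:
            s.sendall(buf)
            sent += len(buf)
    elapsed = time.monotonic() - t0
    return sent * 8 / elapsed / 1e6   # bits per second -> Mbit/s

if __name__ == "__main__":
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))        # ephemeral port; the real tests used 4996
    srv.listen(1)
    port = srv.getsockname()[1]
    threading.Thread(target=drain, args=(srv,), daemon=True).start()
    print(f"{measure_throughput('127.0.0.1', port, 64 << 20):.0f} Mbit/s")
```

Over loopback this reports memory-copy speed rather than network speed; pointing the client at a remote host reproduces the kind of figure the tables below report.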

2 Environment

2 hosts connected through a 1-Gbit/s unmanaged network switch (Netgear ProSafe GS108)

Host     Hardware        OS
cyber7   ThinkPad X230   CentOS 7.8
cyber8   ThinkPad X230   CentOS 8.1

Lenovo ThinkPad X230 (laptop) specifications

  • Intel Core i5-3320M @ 2.60 GHz (dual core, 2 threads/core)
  • 8-GB RAM
  • Intel 82579LM Gigabit Network Connection
    • Interface named enp0s25
    • Maximum Transmission Unit (MTU): 1500 bytes
Settings for enp0s25 (ethtool)  
Supported ports: [ TP ]
Supported link modes:	10baseT/Half 10baseT/Full
			100baseT/Half 100baseT/Full
			1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes:	10baseT/Half 10baseT/Full
			100baseT/Half 100baseT/Full
			1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
MDI-X: on (auto)
Supports Wake-on: pumbg
Wake-on: g
Current message level:	0x00000007 (7)
			drv probe link
Link detected: yes

VPN configuration   (cf. IPsec)

• IPv4  
conn rshometunnel
#	auto=start	# create the tunnel when IPsec is started
	authby=rsasig
	leftid=cyber7@rshome.lan
	left=192.168.0.210
	leftrsasigkey=0sAwEAAb7a...lghDjX7l
	rightid=cyber8@rshome.lan
	right=192.168.0.53
	rightrsasigkey=0sAwEAAbQ...3xU1GcMZ
• IPv6  
conn rshometunnel6
#	auto=start	# create the tunnel when IPsec is started
	authby=rsasig
	leftid=cyber7@rshome.lan
	left=2001:8003:22bc:1700:7787:2b17:cc6f:5b46
	leftrsasigkey=0sAwEAAb7a...lghDjX7l
	rightid=cyber8@rshome.lan
	right=2001:8003:22bc:1700:66be:1375:b866:a57b
	rightrsasigkey=0sAwEAAbQ...3xU1GcMZ

3 Scenarios

Four test scenarios were developed:

  1. Unencrypted data transfer using the IPv4 protocol (no VPN setup)
  2. Unencrypted data transfer using the IPv6 protocol (no VPN setup)
  3. Encrypted transfer with IKEv2/IPsec in tunnel mode using the IPv4 protocol
  4. Encrypted transfer with IKEv2/IPsec in tunnel mode using the IPv6 protocol

Test setup

  • Test tool client sending data from cyber7 through a TCP socket to the server running on cyber8

4 Performance Benchmark

4.1 Firewall

Allow TCP/UDP connections to cyber8 on port 4996 (used by the test tools)

  • firewall-cmd --permanent --zone=public --add-port=4996/tcp
  • firewall-cmd --permanent --zone=public --add-port=4996/udp
  • firewall-cmd --reload

(Without --permanent the rules are runtime-only and the subsequent --reload would discard them.)

4.2 iperf3

iperf3 allows testing both TCP and UDP throughputs.

cyber8

  • iperf3 -s -p 4996   (server listening to any TCP/UDP connection request on port 4996)

cyber7

  • IPv4   (client sending data to cyber8 using IPv4)
    • TCP:   iperf3 -c cyber8 -p 4996 -4
    • UDP:   iperf3 -c cyber8 -p 4996 -4 --udp -b 0
  • IPv6:   (client sending data to cyber8 using IPv6)
    • TCP:   iperf3 -c cyber8 -p 4996 -6
    • UDP:   iperf3 -c cyber8 -p 4996 -6 --udp -b 0
iperf3               TCP                       UDP
                     IPv4        IPv6          IPv4               IPv6
Unencrypted          930 Mbit/s  917 Mbit/s    949 Mbit/s         936 Mbit/s
(No VPN set up)                                no datagrams lost  no datagrams lost
IKEv2/IPsec          890 Mbit/s  850 Mbit/s    902 Mbit/s         867 Mbit/s
AES GCM 256                                    no datagrams lost  ~5% lost

Analysis

  • Encryption throughput penalty was about 4~5% with IPv4 and 7~8% with IPv6;
  • IPv6 was 1~2% slower than IPv4 without encryption and 4~5% slower with it;
  • The IPv6 protocol didn't handle the high-rate UDP transmission well, dropping a significant percentage of datagrams (~5%).
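As a quick sanity check, the percentages above can be recomputed from the measured rates (Mbit/s) in the iperf3 table:

```python
def penalty(plain_mbps, encrypted_mbps):
    """Throughput loss caused by encryption, as a percentage."""
    return (plain_mbps - encrypted_mbps) / plain_mbps * 100

# Measured iperf3 rates from the table above
print(f"TCP IPv4: {penalty(930, 890):.1f}%")  # 4.3%
print(f"TCP IPv6: {penalty(917, 850):.1f}%")  # 7.3%
print(f"UDP IPv4: {penalty(949, 902):.1f}%")  # 5.0%
print(f"UDP IPv6: {penalty(936, 867):.1f}%")  # 7.4%
```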

4.3 cnxbenchmark

Note

  • Only TCP throughput is tested with cnxbenchmark
  • IPsec encryption is performed by the kernel
  • System (sys) CPU usage of the client was provided by the command time

cyber8

  • cnxbenchmark -p 4996   (server listening to any connection request on port TCP:4996)

cyber7

  • IPv4:   time cnxbenchmark -p 4996 -G 8 -4 cyber8   (TCP client sending 8 GiB of data to cyber8 using IPv4)
  • IPv6:   time cnxbenchmark -p 4996 -G 8 -6 cyber8   (TCP client sending 8 GiB of data to cyber8 using IPv6)
cnxbenchmark            IPv4            IPv6
(8 GiB TCP transfer)
Unencrypted             934 Mbit/s      922 Mbit/s
(No VPN set up)         cpu%sys: 2.1    cpu%sys: 2.3
IKEv2/IPsec             896 Mbit/s      869 Mbit/s
AES GCM 256             cpu%sys: 11.8   cpu%sys: 13.2

Analysis

  • Encryption throughput penalty was about 4~5% with IPv4 and 5~6% with IPv6;
  • IPv6 was 1~2% slower than IPv4 without encryption and 3~4% slower with it;
  • Encryption added roughly an extra 10 percentage points of kernel (sys) CPU time.
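The figures above are mutually consistent: at the measured rate, an 8 GiB transfer fixes the wall-clock time, and the cpu%sys fraction of that is the kernel time (mostly encryption). A sketch for the encrypted IPv4 run:

```python
GIB = 1 << 30  # bytes per GiB

def transfer_seconds(gib, mbit_per_s):
    """Wall-clock time to move `gib` GiB at the given line rate."""
    return gib * GIB * 8 / (mbit_per_s * 1e6)

elapsed = transfer_seconds(8, 896)    # encrypted IPv4 run: ~76.7 s
sys_secs = elapsed * 11.8 / 100       # cpu%sys 11.8 -> ~9.1 s in the kernel
print(f"elapsed ~{elapsed:.1f} s, sys ~{sys_secs:.1f} s")
```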

5 Summary

Throughput decrease due to IKEv2/IPsec
    IPv4          4~5%
    IPv6          6~7%
    (plus ~10% sys CPU increase)
Throughput decrease using IPv6 (vs IPv4)
    Unencrypted   1~2%
    Encrypted     ~4%
    (IPv6 slower)

IPsec slowed the network transfer rate by about 4~5% when using the IPv4 protocol (6~7% using IPv6), with the Linux kernel requiring an extra ~10% of system CPU for the encryption.

IPv6 transmission was shown to be slower than IPv4 by 1~2% on unencrypted channels and by 4% on IPsec-encrypted channels.

IPv6 didn't perform well in terms of transmission reliability under high-rate UDP, losing ~5% of datagrams.

Trying an MTU of 3000 bytes instead of the default 1500 had no noticeable effect on the network performance.

Adding IPComp payload compression didn't appear to bring any benefit when transmitting using TCP but added 1~2% penalty when using IPv4 UDP and 6~7% penalty when using IPv6 UDP.

6 See also

7 Annex: Unencrypted VPN with IPComp

Measurement of the network performance through an unencrypted IPv4 VPN with either uncompressed or IPComp-compressed data.

Connection configuration  
conn rshomecnxexp
#	auto=start	# create the tunnel when IPsec is started [default: ignore]
	authby=rsasig
#	type=transport	# [default: tunnel]
#	compress=yes	# IPComp [default: no]
	ikev2=yes
	phase2=esp	# ah | esp [default: esp]
	phase2alg=null-sha1	# no encryption [default: encrypted as per RFC-4106]
	leftid=cyber7@rshome.lan
	left=192.168.0.210
	leftrsasigkey=0sAwEAAb7a...lghDjX7l
	rightid=cyber8@rshome.lan
	right=192.168.0.53
	rightrsasigkey=0sAwEAAbQ...3xU1GcMZ

Setting the parameter compress to yes (above) enables IPComp; compression is disabled by default.

Test description  

1. TCP performance measured using cnxbenchmark

  • cyber8 (server with open firewall port 4996/tcp)
    cnxbenchmark -p 4996
  • cyber7 (client)
    time cnxbenchmark -4 -p 4996 -G 8 cyber8

2. UDP performance measured using iperf3

  • cyber8 (server with open firewall port 4996/udp)
    iperf3 -s -p 4996
  • cyber7 (client)
    iperf3 -c cyber8 -4 -p 4996 --udp -b 0

7.1 Tunnel mode

IPv4 Performance         Direct connection     Unencrypted VPN, tunnel mode
(tunnel)                 No VPN set up         Uncompressed     with IPComp
                         (benchmark ref.)
cnxbenchmark             934 Mbit/s            899 Mbit/s       903 Mbit/s
(8 GiB TCP transfer)     cpu%sys: 2.1          cpu%sys: 15.9    cpu%sys: 13.1
iperf3 (UDP)             949 Mbit/s            914 Mbit/s       918 Mbit/s
                         no datagrams lost     ~0.004% lost     ~0.0035% lost

7.2 Transport mode

IPv4 Performance         Direct connection     Unencrypted VPN, transport mode
(transport)              No VPN set up         Uncompressed     with IPComp
                         (benchmark ref.)
cnxbenchmark             934 Mbit/s            918 Mbit/s       917 Mbit/s
(8 GiB TCP transfer)     cpu%sys: 2.1          cpu%sys: 11.7    cpu%sys: 11.8
iperf3 (UDP)             949 Mbit/s            930 Mbit/s       930 Mbit/s
                         no datagrams lost     ~0.00045% lost   ~0.00075% lost