I’ve known for a while that SSH encryption affects file-transfer speed. So things such as rsync (which runs over SSH by default) or even plain scp can be pretty darn slow, especially with large files and on systems with old/slow CPUs.
I also know about the common recommendation to use a different cipher when transferring files. Some people recommend blowfish, others arcfour. So I thought I’d do a little bit of testing in a controlled environment.
I have two recent-vintage HP servers with the following specs:
- HP ProLiant DL360p Gen8
- Dual quad-core Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz (8 cores, 16 threads total)
- 64G RAM
- 4 x 3TB, mdadm RAID10, formatted as XFS, mounted noatime,logbufs=8
- Tigon Ethernet NIC, connected at GigE, full duplex, to an HP ProCurve 2848 switch (both servers connected to the same switch)
The test file is:
`3921247501 Mar 4 08:22 bigdata.tar.bz2` (3.8GB)
I am using OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010
Kernel is 3.8.1-1.el6.elrepo.x86_64 #1 SMP Thu Feb 28 19:15:22 EST 2013 x86_64 x86_64 x86_64 GNU/Linux
I am going to copy this file from hp1 to hp2 using scp, rsync, and FTP. With scp, I’ll try different ciphers with compression off, to see how the choice of cipher affects the transfer. For comparison purposes, I also timed a plain ole FTP transfer, which means no encryption and very little system processing; the timings bear that out. I also tested the plain rsync protocol (direct to rsyncd).
I ran each test 3 times. Without a cipher specified, ssh/scp uses a default that depends on the version of OpenSSH (for this version, the default is aes128-ctr). NOTE: the file is rm’ed at the destination before each copy.
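The per-cipher timing loop can be sketched roughly like this (hostname hp2 and the file path are from the setup above; it prints the commands rather than running them, so drop the echo for a real run):

```shell
#!/bin/sh
# Sketch of the benchmark: 3 timed scp runs per cipher,
# removing the destination copy before each run.
# 'echo' makes this a dry run; remove it to actually transfer.
for cipher in aes128-ctr arcfour blowfish-cbc aes128-cbc; do
    for run in 1 2 3; do
        echo ssh hp2 rm -f /data/bigdata.tar.bz2
        echo time scp -c "$cipher" -o Compression=no /data/bigdata.tar.bz2 hp2:/data/
    done
done
```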
run | Xfer type | real | user | system |
---|---|---|---|---|
1 | scp -o Compression=no | 0m52.175s | 0m12.709s | 0m6.504s |
2 | scp -o Compression=no | 0m47.872s | 0m12.603s | 0m6.806s |
3 | scp -o Compression=no | 0m49.317s | 0m12.748s | 0m6.710s |
1 | scp -c arcfour -o Compression=no | 0m49.536s | 0m14.161s | 0m6.903s |
2 | scp -c arcfour -o Compression=no | 0m49.088s | 0m14.045s | 0m6.921s |
3 | scp -c arcfour -o Compression=no | 0m50.698s | 0m14.162s | 0m6.728s |
1 | scp -c blowfish-cbc -o Compression=no | 0m58.673s | 0m44.295s | 0m13.495s |
2 | scp -c blowfish-cbc -o Compression=no | 0m56.399s | 0m43.860s | 0m9.036s |
3 | scp -c blowfish-cbc -o Compression=no | 0m54.869s | 0m43.949s | 0m10.673s |
1 | scp -c aes128-cbc -o Compression=no | 0m49.776s | 0m14.641s | 0m7.083s |
2 | scp -c aes128-cbc -o Compression=no | 0m48.527s | 0m15.154s | 0m7.068s |
3 | scp -c aes128-cbc -o Compression=no | 0m50.554s | 0m15.334s | 0m6.983s |
1 | ncftpput -m -u ftptest -p 'XXXXXX' hp2 /data/ /data/bigdata.tar.bz2 | 0m34.306s | 0m0.141s | 0m4.062s |
2 | ncftpput -m -u ftptest -p 'XXXXXX' hp2 /data/ /data/bigdata.tar.bz2 | 0m33.351s | 0m0.160s | 0m3.863s |
3 | ncftpput -m -u ftptest -p 'XXXXXX' hp2 /data/ /data/bigdata.tar.bz2 | 0m33.839s | 0m0.154s | 0m3.732s |
1 | rsync --stats -a /data/bigdata.tar.bz2 hp2::data/bigdata.tar.bz2.1 | 0m33.485s | 0m10.221s | 0m6.692s |
2 | rsync --stats -a /data/bigdata.tar.bz2 hp2::data/bigdata.tar.bz2.2 | 0m33.490s | 0m10.234s | 0m6.703s |
3 | rsync --stats -a /data/bigdata.tar.bz2 hp2::data/bigdata.tar.bz2.3 | 0m33.497s | 0m10.163s | 0m6.545s |
In terms of speed, averaged over 3 runs, we have:

Xfer type | real | user | system |
---|---|---|---|
RSYNC | 33.491 | 10.206 | 6.6467 |
FTP | 33.832 | 0.1517 | 3.8857 |
AES128-CBC | 49.619 | 15.043 | 7.0447 |
ARCFOUR | 49.774 | 14.1226 | 6.8507 |
AES128-CTR | 49.788 | 12.687 | 6.6734 |
BLOWFISH-CBC | 56.647 | 44.0347 | 11.068 |
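The averages are just the mean of the three runs per column; for example, for the default (aes128-ctr) scp runs:

```shell
# Mean of the three 'real' times for the default-cipher scp runs above.
echo "52.175 47.872 49.317" | awk '{ printf "%.3f\n", ($1+$2+$3)/3 }'
# -> 49.788
```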
So it looks like in modern OpenSSH, on a CPU with AES support, it’s pretty much a wash which cipher/encryption method you use (blowfish-cbc being the clear loser).
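If you want to pin a cipher for a particular host rather than passing -c every time, a Host entry in ~/.ssh/config does it (hp2 is the hostname from this test; the cipher names are standard OpenSSH ones, and which order you prefer is up to you):

```
Host hp2
    Ciphers aes128-ctr,aes128-cbc,arcfour
    Compression no
```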
Note that rsync protocol itself is pretty darn efficient, slightly faster than FTP.
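For completeness, the hp2::data destination in the table implies an rsync daemon on hp2 with a module named data. A minimal /etc/rsyncd.conf along these lines would support it (the exact settings here are my assumption, not copied from the test machines):

```
[data]
    path = /data
    read only = no
    uid = root
    gid = root
```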
3/6/13 Update
AES in SSH. I tested again from an old Dell with a Pentium 4 (no AES support in hardware) to the fast HP, and the default aes128-ctr is much slower there. The good news is that aes128-cbc is still faster than blowfish, though slightly slower than arcfour. As for FTP and rsync, they are neck and neck in speed, with no clear winner.
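A quick way to tell which camp a CPU falls into: on Linux, AES-NI shows up as the aes flag in /proc/cpuinfo (the Pentium 4 predates AES-NI, so it reports none):

```shell
# Reports whether the CPU advertises AES-NI (the 'aes' flag on Linux).
if grep -qw aes /proc/cpuinfo 2>/dev/null; then
    aesni=yes
else
    aesni=no
fi
echo "AES-NI: $aesni"
```

Running `openssl speed -evp aes-128-cbc` on each box also gives a rough per-machine throughput comparison.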
So my conclusion is that whether AES runs with hardware support (as in newer Intel CPUs) or in software, the CBC (block mode) variant of AES is usually good enough.