I messed up and missed that we were carrying a diff on x86_64-mont5.pl. This
was accidentally dropped in https://boringssl-review.googlesource.com/6616.
To confirm the merge is good now, check out this revision and run:
git diff e701f16bd69b6f251ed537e40364c281e85a63b2^ crypto/bn/asm/x86_64-mont5.pl > /tmp/A
Then in OpenSSL's repository:
git diff d73cc256c8e256c32ed959456101b73ba9842f72^ d73cc256c8e256c32ed959456101b73ba9842f72 crypto/bn/asm/x86_64-mont5.pl > /tmp/B
And confirm the diffs differ only in metadata:
diff -u /tmp/A /tmp/B
--- /tmp/A 2015-12-03 11:53:23.127034998 -0500
+++ /tmp/B 2015-12-03 11:53:53.099314287 -0500
@@ -1,8 +1,8 @@
diff --git a/crypto/bn/asm/x86_64-mont5.pl b/crypto/bn/asm/x86_64-mont5.pl
-index 38def07..3c5a8fc 100644
+index 388e3c6..64e668f 100755
--- a/crypto/bn/asm/x86_64-mont5.pl
+++ b/crypto/bn/asm/x86_64-mont5.pl
-@@ -1770,6 +1770,15 @@ sqr8x_reduction:
+@@ -1784,6 +1784,15 @@ sqr8x_reduction:
.align 32
.L8x_tail_done:
add (%rdx),%r8 # can this overflow?
@@ -18,7 +18,7 @@
xor %rax,%rax
neg $carry
-@@ -3116,6 +3125,15 @@ sqrx8x_reduction:
+@@ -3130,6 +3139,15 @@ sqrx8x_reduction:
.align 32
.Lsqrx8x_tail_done:
add 24+8(%rsp),%r8 # can this overflow?
@@ -34,7 +34,7 @@
mov $carry,%rax # xor %rax,%rax
sub 16+8(%rsp),$carry # mov 16(%rsp),%cf
-@@ -3159,13 +3177,11 @@ my ($rptr,$nptr)=("%rdx","%rbp");
+@@ -3173,13 +3191,11 @@ my ($rptr,$nptr)=("%rdx","%rbp");
my @ri=map("%r$_",(10..13));
my @ni=map("%r$_",(14..15));
$code.=<<___;
Change-Id: I3fb5253783ed82e4831f5bffde75273bd9609c23
Reviewed-on: https://boringssl-review.googlesource.com/6618
Reviewed-by: Adam Langley <agl@google.com>
Since we pre-generate our perlasm, having the output of these files be
sensitive to the environment they run in is unhelpful. It would be bad to
suddenly change what features we do or don't compile in whenever workstations'
toolchains change, or depending on whether developers have CC variables set.
Previously, all compiler-version-gated features were turned on in
https://boringssl-review.googlesource.com/6260, but this broke the build. I
also wasn't thorough enough in gathering performance numbers. So, flip them all
to off instead. I'll enable them one-by-one as they're tested.
This should result in no change to generated assembly.
Change-Id: Ib4259b3f97adc4939cb0557c5580e8def120d5bc
Reviewed-on: https://boringssl-review.googlesource.com/6383
Reviewed-by: Adam Langley <agl@google.com>
This reverts commit b9c26014de.
The win64 bot seems unhappy. Will sniff at it tomorrow. In
the meantime, get the tree green again.
Change-Id: I058ddb3ec549beee7eabb2f3f72feb0a4a5143b2
Reviewed-on: https://boringssl-review.googlesource.com/6353
Reviewed-by: Adam Langley <alangley@gmail.com>
Since we pre-generate our perlasm, having the output of these files be
sensitive to the environment they run in is unhelpful. It would be bad to
suddenly change what features we do or don't compile in whenever workstations'
toolchains change.
Enable all compiler-version-gated features as they should all be runtime-gated
anyway. This should align with what upstream's files would have produced on
modern toolchains. We should assume our assemblers can take whatever we'd like
to throw at them. (If it turns out some can't, we'd rather find out and
probably switch the problematic instructions to explicit byte sequences.)
This actually results in a fairly significant change to the assembly we
generate. I'm guessing upstream's build system sets the CC environment variable,
while ours doesn't and so the version checks were all coming out conservative.
diffstat of generated files:
linux-x86/crypto/sha/sha1-586.S | 1176 ++++++++++++
linux-x86/crypto/sha/sha256-586.S | 2248 ++++++++++++++++++++++++
linux-x86_64/crypto/bn/rsaz-avx2.S | 1644 +++++++++++++++++
linux-x86_64/crypto/bn/rsaz-x86_64.S | 638 ++++++
linux-x86_64/crypto/bn/x86_64-mont.S | 332 +++
linux-x86_64/crypto/bn/x86_64-mont5.S | 1130 ++++++++++++
linux-x86_64/crypto/modes/aesni-gcm-x86_64.S | 754 ++++++++
linux-x86_64/crypto/modes/ghash-x86_64.S | 475 +++++
linux-x86_64/crypto/sha/sha1-x86_64.S | 1121 ++++++++++++
linux-x86_64/crypto/sha/sha256-x86_64.S | 1062 +++++++++++
linux-x86_64/crypto/sha/sha512-x86_64.S | 2241 ++++++++++++++++++++++++
mac-x86/crypto/sha/sha1-586.S | 1174 ++++++++++++
mac-x86/crypto/sha/sha256-586.S | 2248 ++++++++++++++++++++++++
mac-x86_64/crypto/bn/rsaz-avx2.S | 1637 +++++++++++++++++
mac-x86_64/crypto/bn/rsaz-x86_64.S | 638 ++++++
mac-x86_64/crypto/bn/x86_64-mont.S | 331 +++
mac-x86_64/crypto/bn/x86_64-mont5.S | 1130 ++++++++++++
mac-x86_64/crypto/modes/aesni-gcm-x86_64.S | 750 ++++++++
mac-x86_64/crypto/modes/ghash-x86_64.S | 475 +++++
mac-x86_64/crypto/sha/sha1-x86_64.S | 1121 ++++++++++++
mac-x86_64/crypto/sha/sha256-x86_64.S | 1062 +++++++++++
mac-x86_64/crypto/sha/sha512-x86_64.S | 2241 ++++++++++++++++++++++++
win-x86/crypto/sha/sha1-586.asm | 1173 ++++++++++++
win-x86/crypto/sha/sha256-586.asm | 2248 ++++++++++++++++++++++++
win-x86_64/crypto/bn/rsaz-avx2.asm | 1858 +++++++++++++++++++-
win-x86_64/crypto/bn/rsaz-x86_64.asm | 638 ++++++
win-x86_64/crypto/bn/x86_64-mont.asm | 352 +++
win-x86_64/crypto/bn/x86_64-mont5.asm | 1184 ++++++++++++
win-x86_64/crypto/modes/aesni-gcm-x86_64.asm | 933 ++++++++++
win-x86_64/crypto/modes/ghash-x86_64.asm | 515 +++++
win-x86_64/crypto/sha/sha1-x86_64.asm | 1152 ++++++++++++
win-x86_64/crypto/sha/sha256-x86_64.asm | 1088 +++++++++++
win-x86_64/crypto/sha/sha512-x86_64.asm | 2499 ++++++
SHA* gets faster. RSA and AES-GCM seem to be more of a wash, and are sometimes
even slower! This is a little concerning, though when I repeated the latter
two, the numbers were definitely noisy (RSA in particular), so we may wish to
repeat the measurements in a more
controlled environment. We could also flip some of these toggles to something
other than the highest setting if it seems some of the variants aren't
desirable. We just shouldn't have them enabled or disabled by accident. This
aligns us closer to upstream though.
$ /tmp/bssl.old speed SHA-
Did 5028000 SHA-1 (16 bytes) operations in 1000048us (5027758.7 ops/sec): 80.4 MB/s
Did 1708000 SHA-1 (256 bytes) operations in 1000257us (1707561.2 ops/sec): 437.1 MB/s
Did 73000 SHA-1 (8192 bytes) operations in 1008406us (72391.5 ops/sec): 593.0 MB/s
Did 3041000 SHA-256 (16 bytes) operations in 1000311us (3040054.5 ops/sec): 48.6 MB/s
Did 779000 SHA-256 (256 bytes) operations in 1000820us (778361.7 ops/sec): 199.3 MB/s
Did 26000 SHA-256 (8192 bytes) operations in 1009875us (25745.8 ops/sec): 210.9 MB/s
Did 1837000 SHA-512 (16 bytes) operations in 1000251us (1836539.0 ops/sec): 29.4 MB/s
Did 803000 SHA-512 (256 bytes) operations in 1000969us (802222.6 ops/sec): 205.4 MB/s
Did 41000 SHA-512 (8192 bytes) operations in 1016768us (40323.8 ops/sec): 330.3 MB/s
$ /tmp/bssl.new speed SHA-
Did 5354000 SHA-1 (16 bytes) operations in 1000104us (5353443.2 ops/sec): 85.7 MB/s
Did 1779000 SHA-1 (256 bytes) operations in 1000121us (1778784.8 ops/sec): 455.4 MB/s
Did 87000 SHA-1 (8192 bytes) operations in 1012641us (85914.0 ops/sec): 703.8 MB/s
Did 3517000 SHA-256 (16 bytes) operations in 1000114us (3516599.1 ops/sec): 56.3 MB/s
Did 935000 SHA-256 (256 bytes) operations in 1000096us (934910.2 ops/sec): 239.3 MB/s
Did 38000 SHA-256 (8192 bytes) operations in 1004476us (37830.7 ops/sec): 309.9 MB/s
Did 2930000 SHA-512 (16 bytes) operations in 1000259us (2929241.3 ops/sec): 46.9 MB/s
Did 1008000 SHA-512 (256 bytes) operations in 1000509us (1007487.2 ops/sec): 257.9 MB/s
Did 45000 SHA-512 (8192 bytes) operations in 1000593us (44973.3 ops/sec): 368.4 MB/s
$ /tmp/bssl.old speed RSA
Did 820 RSA 2048 signing operations in 1017008us (806.3 ops/sec)
Did 27000 RSA 2048 verify operations in 1015400us (26590.5 ops/sec)
Did 1292 RSA 2048 (3 prime, e=3) signing operations in 1008185us (1281.5 ops/sec)
Did 65000 RSA 2048 (3 prime, e=3) verify operations in 1011388us (64268.1 ops/sec)
Did 120 RSA 4096 signing operations in 1061027us (113.1 ops/sec)
Did 8208 RSA 4096 verify operations in 1002717us (8185.8 ops/sec)
$ /tmp/bssl.new speed RSA
Did 760 RSA 2048 signing operations in 1003351us (757.5 ops/sec)
Did 25900 RSA 2048 verify operations in 1028931us (25171.8 ops/sec)
Did 1320 RSA 2048 (3 prime, e=3) signing operations in 1040806us (1268.2 ops/sec)
Did 63000 RSA 2048 (3 prime, e=3) verify operations in 1016042us (62005.3 ops/sec)
Did 104 RSA 4096 signing operations in 1008718us (103.1 ops/sec)
Did 6875 RSA 4096 verify operations in 1093441us (6287.5 ops/sec)
$ /tmp/bssl.old speed GCM
Did 5316000 AES-128-GCM (16 bytes) seal operations in 1000082us (5315564.1 ops/sec): 85.0 MB/s
Did 712000 AES-128-GCM (1350 bytes) seal operations in 1000252us (711820.6 ops/sec): 961.0 MB/s
Did 149000 AES-128-GCM (8192 bytes) seal operations in 1003182us (148527.4 ops/sec): 1216.7 MB/s
Did 5919750 AES-256-GCM (16 bytes) seal operations in 1000016us (5919655.3 ops/sec): 94.7 MB/s
Did 800000 AES-256-GCM (1350 bytes) seal operations in 1000951us (799239.9 ops/sec): 1079.0 MB/s
Did 152000 AES-256-GCM (8192 bytes) seal operations in 1000765us (151883.8 ops/sec): 1244.2 MB/s
$ /tmp/bssl.new speed GCM
Did 5315000 AES-128-GCM (16 bytes) seal operations in 1000125us (5314335.7 ops/sec): 85.0 MB/s
Did 755000 AES-128-GCM (1350 bytes) seal operations in 1000878us (754337.7 ops/sec): 1018.4 MB/s
Did 151000 AES-128-GCM (8192 bytes) seal operations in 1005655us (150150.9 ops/sec): 1230.0 MB/s
Did 5913500 AES-256-GCM (16 bytes) seal operations in 1000041us (5913257.6 ops/sec): 94.6 MB/s
Did 782000 AES-256-GCM (1350 bytes) seal operations in 1001484us (780841.2 ops/sec): 1054.1 MB/s
Did 121000 AES-256-GCM (8192 bytes) seal operations in 1006389us (120231.8 ops/sec): 984.9 MB/s
Change-Id: I0efb32f896c597abc7d7e55c31d038528a5c72a1
Reviewed-on: https://boringssl-review.googlesource.com/6260
Reviewed-by: Adam Langley <alangley@gmail.com>
bn_get_bits5 always reads two bytes, even when it doesn't need to. For some
sizes of |p|, this can result in reading just past the edge of the array.
Unroll the first iteration of the loop and avoid reading out of bounds.
Replace bn_get_bits5 altogether in C as it's not doing anything interesting.
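For illustration, a minimal C sketch of the safe extraction (the name,
signature, and 64-bit word size here are assumptions for the example, not the
actual BoringSSL replacement): read the word containing the bit offset, and
touch the next word only when the 5-bit window actually straddles a word
boundary, so nothing past the end of the array is ever read.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper, for illustration only: extract the 5-bit window
 * starting at bit offset |bits| from the little-endian word array |a|
 * of |num| 64-bit words. The following word is read only when the
 * window straddles a word boundary and that word exists. */
static uint64_t get_bits5(const uint64_t *a, size_t num, size_t bits) {
  size_t i = bits / 64, j = bits % 64;
  uint64_t ret = a[i] >> j;
  if (j > 64 - 5 && i + 1 < num) {
    /* The window spans two words; pull the missing high bits in. */
    ret |= a[i + 1] << (64 - j);
  }
  return ret & 0x1f;
}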
Change-Id: Ibcc8cea7d9c644a2639445396455da47fe869a5c
Reviewed-on: https://boringssl-review.googlesource.com/1393
Reviewed-by: Adam Langley <agl@google.com>
(The issue was reported by Shay Gueron.)
The final reduction in Montgomery multiplication computes

    if (X >= m) then X = X - m else X = X
In OpenSSL, this was done by computing T = X - m, doing a constant-time
selection of the *addresses* of X and T, and loading from the resulting
address. But this is not cache-neutral.
This patch changes the behaviour by loading both X and T into registers, and
doing a constant-time selection of the *values*.
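As a rough C sketch of the value-select approach (the actual patch is to the
perlasm; this code and its names are illustrative assumptions only): compute
T = X - m with a running borrow, then use a mask derived from the final borrow
to pick each word's value, so both X and T are loaded unconditionally and the
access pattern is independent of the comparison's outcome.

#include <stddef.h>
#include <stdint.h>

/* Illustration only: conditionally reduce X mod m. Computes T = X - m
 * word by word, then selects each *value* with a mask built from the
 * final borrow, rather than selecting a pointer and loading through it
 * (which leaks the choice via the cache). A production version would
 * use a dedicated constant-time borrow primitive instead of the <
 * operator, which compilers merely tend to lower branchlessly. */
static void ct_final_reduce(uint64_t *x, const uint64_t *m, uint64_t *t,
                            size_t num) {
  uint64_t borrow = 0;
  for (size_t i = 0; i < num; i++) {
    uint64_t d = x[i] - m[i];
    uint64_t b = (x[i] < m[i]) | (d < borrow);  /* borrow out */
    t[i] = d - borrow;
    borrow = b;
  }
  /* borrow == 0 means X >= m: mask is all-ones and T is kept;
   * otherwise mask is zero and X is kept unchanged. */
  uint64_t mask = borrow - 1;
  for (size_t i = 0; i < num; i++) {
    x[i] = (t[i] & mask) | (x[i] & ~mask);
  }
}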
TODO(fork): only some of the fixes from the original patch still apply to
the 1.0.2 code.
Initial fork from f2d678e6e89b6508147086610e985d4e8416e867 (1.0.2 beta).
(This change contains substantial changes from the original and
effectively starts a new history.)