Probably worth having actual test vectors for these, rather than
checking our code against itself. Additionally, small negative numbers
have, in the past, been valuable test vectors (see the long comment in
point_add from OpenSSL's ecp_nistp521.c).
Change-Id: Ia5aa8a80eb5b6d0089c3601c5fec2364e699794d
Reviewed-on: https://boringssl-review.googlesource.com/26848
Reviewed-by: Adam Langley <agl@google.com>
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
p224_felem_neg does not produce an output within the tight bounds
suitable for p224_felem_contract. This was found by inspection of the
code.
This only affects the final y-coordinate output of arbitrary-point
multiplication, so it is a no-op for ECDH and ECDSA.
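For illustration, a regression check for this could go through the public
API and compare both affine coordinates of an arbitrary-point
multiplication against known-good values (a sketch only; names are
illustrative, error handling omitted, and this is not the actual test):

  #include <openssl/bn.h>
  #include <openssl/ec.h>
  #include <openssl/nid.h>

  /* Multiply an arbitrary point on P-224 and read back both affine
   * coordinates, so a bug in the final y-coordinate canonicalization
   * would show up when comparing against known answers. */
  static void p224_arbitrary_mul(const BIGNUM *scalar, const EC_POINT *point,
                                 BIGNUM *x_out, BIGNUM *y_out) {
    EC_GROUP *group = EC_GROUP_new_by_curve_name(NID_secp224r1);
    EC_POINT *result = EC_POINT_new(group);
    /* NULL g_scalar: arbitrary-point multiplication only. */
    EC_POINT_mul(group, result, NULL, point, scalar, NULL);
    EC_POINT_get_affine_coordinates_GFp(group, result, x_out, y_out, NULL);
    EC_POINT_free(result);
    EC_GROUP_free(group);
  }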
Change-Id: I1d929458d1f21d02cd8e745d2f0f7040a6bb0627
Reviewed-on: https://boringssl-review.googlesource.com/26847
Commit-Queue: David Benjamin <davidben@google.com>
Commit-Queue: Adam Langley <agl@google.com>
Reviewed-by: Adam Langley <agl@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Primality testing checks for small words in random places.
Median of 29 RSA keygens: 0m0.811s -> 0m0.521s
(Accuracy beyond 0.1s is questionable, and this "speed up" is certainly
noise.)
Bug: 238
Change-Id: Ie5efab7291302a42ac6e283d25da0c094d8577e7
Reviewed-on: https://boringssl-review.googlesource.com/25885
Reviewed-by: Adam Langley <agl@google.com>
There are a number of random subtractions in RSA key generation. Add a
fixed-width version.
Median of 29 RSA keygens: 0m0.859s -> 0m0.811s
(Accuracy beyond 0.1s is questionable.)
Bug: 238
Change-Id: I9fa0771b95a438fd7d2635fd77a332146ccc96d9
Reviewed-on: https://boringssl-review.googlesource.com/25884
Commit-Queue: Adam Langley <agl@google.com>
Reviewed-by: Adam Langley <agl@google.com>
No semantic change: the table is the same as before, but now with less
magic.
Change-Id: I351c2446e9765f25b7dfb901c9e98f12099a325c
Reviewed-on: https://boringssl-review.googlesource.com/26744
Reviewed-by: Adam Langley <agl@google.com>
Reviewed-by: David Benjamin <davidben@google.com>
Commit-Queue: Adam Langley <agl@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Rather than writing the answer into the output, it wrote it into some
awkwardly-named temporaries. Thanks to Daniel Hirche for reporting this
issue!
Bug: chromium:825273
Change-Id: I5def4be045cd1925453c9873218e5449bf25e3f5
Reviewed-on: https://boringssl-review.googlesource.com/26785
Reviewed-by: Adam Langley <agl@google.com>
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
These are composite numbers whose composite witnesses aren't in the
first however many prime numbers, so deterministically checking small
numbers may not work.
We don't check composite witnesses deterministically but these are
probably decent tests. (Not sure how else to find composites with
scarce witnesses, but these seemed decent candidates.)
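For illustration, such a candidate could be exercised through the public
primality API roughly as follows (a sketch assuming BN_is_prime_ex; the
actual test harness differs):

  #include <openssl/bn.h>

  /* Feed a known composite with scarce witnesses to the randomized
   * primality test; a correct implementation should still report it
   * composite with overwhelming probability. Returns 1 on the expected
   * "composite" answer. */
  static int expect_composite(const char *hex) {
    BIGNUM *n = NULL;
    if (!BN_hex2bn(&n, hex)) {
      return 0;
    }
    int is_prime = BN_is_prime_ex(n, BN_prime_checks, NULL, NULL);
    BN_free(n);
    return is_prime == 0;
  }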
Change-Id: I23dcb7ba603a64c1f7d1e9a16942e7c29c76da51
Reviewed-on: https://boringssl-review.googlesource.com/26645
Commit-Queue: Steven Valdez <svaldez@google.com>
Reviewed-by: Steven Valdez <svaldez@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
These were randomly generated.
Change-Id: I532afdaf469e6c80e518dae3a75547ff7cb0948f
Reviewed-on: https://boringssl-review.googlesource.com/26065
Reviewed-by: Steven Valdez <svaldez@google.com>
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
This clearly was supposed to be a return 1. See
https://github.com/openssl/openssl/issues/5537 for details.
(Additionally, now that our BIGNUMs may be non-minimal, this function
violates the rule that BIGNUM functions should not depend on widths. We
should use w >= bn_minimal_width(a) to retain the original behavior. But
the original behavior is nuts, so let's just fix it.)
Update-Note: BN_mask_bits no longer reports failure in some cases. These
cases were platform-dependent and not useful, and code search confirms
nothing was relying on it.
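For reference, the behavior change amounts to the following (an
illustrative sketch, not code from the tree):

  #include <assert.h>
  #include <openssl/bn.h>

  /* BN_mask_bits truncates |a| to its lowest |n| bits. Previously,
   * asking for more bits than the value's internal width reported
   * failure on some platforms; now it is simply a successful no-op. */
  static void mask_bits_example(void) {
    BIGNUM *a = BN_new();
    BN_set_word(a, 0x1234);
    assert(BN_mask_bits(a, 8));     /* a is now 0x34 */
    assert(BN_mask_bits(a, 1000));  /* no-op, but no longer an error */
    BN_free(a);
  }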
Change-Id: I31b1c2de6c5de9432c17ec3c714a5626594ee03c
Reviewed-on: https://boringssl-review.googlesource.com/26464
Commit-Queue: Steven Valdez <svaldez@google.com>
Reviewed-by: Steven Valdez <svaldez@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
This isn't strictly necessary now that BIGNUMs are safe, but we get to
rely on type-system annotations from EC_SCALAR. Additionally,
EC_POINT_mul depends on BN_div, while the EC_SCALAR version does not.
Change-Id: I75e6967f3d35aef17278b94862f4e506baff5c23
Reviewed-on: https://boringssl-review.googlesource.com/26424
Reviewed-by: Steven Valdez <svaldez@google.com>
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
EC_KEY_copy left unset fields alone, which meant it was possible to
create an EC_KEY with mismatched private key and group. Nothing was
using EC_KEY_copy anyway, and in keeping with our general preference for
fresh objects over object reuse, remove it. EC_KEY_dup itself can also
be made simpler by using the available setters.
Additionally, skip copying the method table. As of
https://boringssl-review.googlesource.com/16344, we no longer copy the
ex_data, so we probably shouldn't copy the method pointers either,
aligning with RSAPrivateKey_dup.
Update-Note: If I missed anything and someone uses EC_KEY_copy, it
should be easy to port them to EC_KEY_dup.
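Porting is mechanical; roughly (an illustrative sketch):

  #include <openssl/ec_key.h>

  /* Before: copy into an existing object.
   *   EC_KEY *dst = EC_KEY_new();
   *   EC_KEY_copy(dst, src);
   * After: make a fresh copy instead. */
  static EC_KEY *copy_key(const EC_KEY *src) {
    return EC_KEY_dup(src);
  }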
Change-Id: Ibbdcea73345d91fa143fbe70a15bb527972693e8
Reviewed-on: https://boringssl-review.googlesource.com/26404
Reviewed-by: Steven Valdez <svaldez@google.com>
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
The probability of stumbling on a non-invertible b->A is negligible;
it's equivalent to accidentally factoring the RSA key. Relatedly,
document the slight caveat in BN_mod_inverse_blinded.
Change-Id: I308d17d12f5d6a12c444dda8c8fcc175ef2f5d45
Reviewed-on: https://boringssl-review.googlesource.com/26344
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Reviewed-by: Adam Langley <agl@google.com>
NIST redid their website and broke all the old links.
Change-Id: I5b7cba878404bb63e49f221f6203c8e1e6545af4
Reviewed-on: https://boringssl-review.googlesource.com/26204
Reviewed-by: Adam Langley <agl@google.com>
Thumb2 addresses are a bit of a mess, depending on whether a label is
interpreted as a function pointer value (for use with BX and BLX) or as
a program counter value (for use with PC-relative addressing). Clang's
integrated assembler mis-assembles this code. See
https://crbug.com/124610#c54 for details.
Instead, use the ADR pseudo-instruction which has clear semantics and
should be supported by every assembler that handles the OpenSSL Thumb2
code. (In other files, the ADR vs SUB conditionals are based on
__thumb2__ already. For some reason, this one is based on __APPLE__, I'm
guessing to deal with an older version of clang assembler.)
It's unclear to me which of clang or binutils is "correct" or if this is
even a well-defined notion beyond "whatever binutils does". But I will
note that https://github.com/openssl/openssl/pull/4669 suggests binutils
has also changed behavior around this before.
See also https://github.com/openssl/openssl/pull/5431 in OpenSSL.
Bug: chromium:124610
Change-Id: I5e7a0c8c0f54a3f65cc324ad599a41883675f368
Reviewed-on: https://boringssl-review.googlesource.com/26164
Commit-Queue: Steven Valdez <svaldez@google.com>
Reviewed-by: Steven Valdez <svaldez@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Right now, |g_wNAF| and |p_wNAF| are the same size.
This change makes GCC's "-Werror=logical-op" happy and adds a compile-time
assertion in case the initial size of either array ever changes.
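The assertion amounts to something like the following sketch (types and
sizes are illustrative; the tree's own assertion macro may differ):

  #include <assert.h>
  #include <stdint.h>

  #define WNAF_LEN 257  /* illustrative only */

  static int8_t g_wNAF[WNAF_LEN];
  static int8_t p_wNAF[WNAF_LEN];

  /* If either array's size is ever changed independently, this fails to
   * compile instead of silently breaking the shared bookkeeping. */
  static_assert(sizeof(g_wNAF) == sizeof(p_wNAF),
                "g_wNAF and p_wNAF are assumed to be the same size");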
Change-Id: I29e39a7a121a0a9d016c53da6b7c25675ddecbdc
Reviewed-on: https://boringssl-review.googlesource.com/26104
Reviewed-by: David Benjamin <davidben@google.com>
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
The point was to remove the silly moduli.
Change-Id: I48c507c9dd1fc46e38e8991ed528b02b8da3dc1d
Reviewed-on: https://boringssl-review.googlesource.com/26044
Commit-Queue: Steven Valdez <svaldez@google.com>
Reviewed-by: Steven Valdez <svaldez@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Better commit such details to comments before I forget them.
Change-Id: Ie36332235c692f4369413b4340a742b5ad895ce1
Reviewed-on: https://boringssl-review.googlesource.com/25984
Commit-Queue: Steven Valdez <svaldez@google.com>
Reviewed-by: Steven Valdez <svaldez@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
It's doable, but a bit of effort due to the different radix.
Change-Id: Ibfa15c31bb37de930f155ee6d19551a2b6437073
Reviewed-on: https://boringssl-review.googlesource.com/25944
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Reviewed-by: Steven Valdez <svaldez@google.com>
This reuses wnaf.c's window scheduling, but has access to the tuned
field arithmetic and pre-computed base point table. Unlike wnaf.c, we
do not make the points affine as it's not worth it for a single table.
(We already precomputed the base point table.)
Annoyingly, 32-bit x86 gets slower by a bit, but the other platforms are
faster. My guess is that the generic code gets to use the
bn_mul_mont assembly and the compiler, faced with the increased 32-bit
register pressure and the extremely register-poor x86, is making
bad decisions on the otherwise P-256-tuned C code. The three platforms
that see much larger gains are significantly more important than 32-bit
x86 at this point, so go with this change.
armv7a (Nexus 5X) before/after [+14.4%]:
Did 2703 ECDSA P-256 verify operations in 5034539us (536.9 ops/sec)
Did 3127 ECDSA P-256 verify operations in 5091379us (614.2 ops/sec)
aarch64 (Nexus 5X) before/after [+9.2%]:
Did 6783 ECDSA P-256 verify operations in 5031324us (1348.2 ops/sec)
Did 7410 ECDSA P-256 verify operations in 5033291us (1472.2 ops/sec)
x86 before/after [-3.7%]:
Did 8961 ECDSA P-256 verify operations in 10075901us (889.3 ops/sec)
Did 8568 ECDSA P-256 verify operations in 10003001us (856.5 ops/sec)
x86_64 before/after [+8.6%]:
Did 29808 ECDSA P-256 verify operations in 10008662us (2978.2 ops/sec)
Did 32528 ECDSA P-256 verify operations in 10057137us (3234.3 ops/sec)
Change-Id: I5fa643149f5bfbbda9533e3008baadfee9979b93
Reviewed-on: https://boringssl-review.googlesource.com/25684
Reviewed-by: Adam Langley <agl@google.com>
Commit-Queue: Adam Langley <agl@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
This was done by OpenSSL with the kind permission of Intel. This change
is imported from upstream's commit
dcf6e50f48e6bab92dcd2dacb27fc17c0de34199.
Change-Id: Ie8d3b700cd527a6e8cf66e0728051b2acd8cc6b9
Reviewed-on: https://boringssl-review.googlesource.com/25588
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Reviewed-by: Adam Langley <agl@google.com>
This syncs up with OpenSSL master as of
50ea9d2b3521467a11559be41dcf05ee05feabd6. The non-license non-spelling
changes are CFI bits, which were added upstream in
b84460ad3a3e4fcb22efaa0a8365b826f4264ecf.
Change-Id: I42280985f834d5b9133eacafc8ff9dbd2f0ea59a
Reviewed-on: https://boringssl-review.googlesource.com/25704
Reviewed-by: Adam Langley <agl@google.com>
These files are otherwise up-to-date with OpenSSL master as of
50ea9d2b3521467a11559be41dcf05ee05feabd6, modulo a couple of spelling
fixes which I've imported.
I've also reverted the same-line label and instruction patch to
x86_64-mont*.pl. The new delocate parser handles that fine.
Change-Id: Ife35c671a8104c3cc2fb6c5a03127376fccc4402
Reviewed-on: https://boringssl-review.googlesource.com/25644
Reviewed-by: Adam Langley <agl@google.com>
This imports 384e6de4c7e35e37fb3d6fbeb32ddcb5eb0d3d3f and
79ca382d4762c58c4b92fceb4e202e90c71292ae from upstream.
Differences from upstream:
- We've removed a number of unused functions.
- We never imported 3ff08e1dde56747011a702a9a5aae06cfa8ae5fc, which was
to give the assembly control over the memory layout in the tables. So
our "gather" is "select" (which is implemented the same because the
memory layout never did change) and our "scatter" is in C.
Change-Id: I90d4a17da9f5f693f4dc4706887dec15f010071b
Reviewed-on: https://boringssl-review.googlesource.com/25586
Reviewed-by: Adam Langley <agl@google.com>
As of upstream's 6aa36e8e5a062e31543e7796f0351ff9628832ce, the
corresponding file in OpenSSL has both Intel and OpenSSL copyright
blocks. To properly sync up with OpenSSL, use the OpenSSL copyright
block and our version of the Intel copyright block.
Change-Id: I4dc072a11390a54d0ce38ec0b8893e48f52638de
Reviewed-on: https://boringssl-review.googlesource.com/25585
Reviewed-by: Adam Langley <agl@google.com>
Change-Id: I5fc029ceddfa60b2ccc97c138b94c1826f6d75fa
Reviewed-on: https://boringssl-review.googlesource.com/25844
Reviewed-by: David Benjamin <davidben@google.com>
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
OpenSSL's RSA API is poorly designed and does not have a single place to
properly initialize the key. See
https://github.com/openssl/openssl/issues/5158.
To work around this flaw, we must lazily instantiate pre-computed
Montgomery bits with locking. This is a ton of complexity. More
importantly, it makes it very difficult to implement RSA without side
channels. The correct in-memory representations of d, dmp1, and dmq1
depend on n, p, and q, respectively. (Those values have private
magnitudes and must be sized relative to the respective moduli.)
08805fe279 attempted to fix up the various
widths under lock, when we set up BN_MONT_CTX. However, this introduces
threading issues because other threads may access those exposed
components (RSA_get0_* also count as exposed for these purposes because
they are get0 functions) while a private key operation is in progress.
Instead, we do the following:
- There is no actual need to minimize n, p, and q, but we have minimized
copies in the BN_MONT_CTXs, so use those.
- Store additional copies of d, dmp1, and dmq1, at the cost of more
memory used. These copies have the correct width and are private,
unlike d, dmp1, and dmq1 which are sadly exposed. Fix private key
operations to use them.
- Move the frozen bit out of rsa->flags, as that too was historically
accessible without locking.
(Serialization still uses the original BIGNUMs, but the RSAPrivateKey
serialization format already inherently leaks the magnitude, so this
doesn't matter.)
Change-Id: Ia3a9b0629f8efef23abb30bfed110d247d1db42f
Reviewed-on: https://boringssl-review.googlesource.com/25824
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Reviewed-by: Adam Langley <agl@google.com>
If a caller is in the process of constructing an arbitrary |EC_GROUP|
and tries to create an invalid |EC_POINT| to set as the generator, we
would previously crash.
Change-Id: Ida91354257a02bd56ac29ba3104c9782b8d70f6b
Reviewed-on: https://boringssl-review.googlesource.com/25764
Commit-Queue: David Benjamin <davidben@google.com>
Reviewed-by: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
This allows a BIGNUM consumer to avoid messing around with bn->d and
bn->top/width.
Bug: 232
Change-Id: I134cf412fef24eb404ff66c84831b4591d921a17
Reviewed-on: https://boringssl-review.googlesource.com/25484
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Reviewed-by: Adam Langley <agl@google.com>
This is a bit easier to read than BN_less_than_consttime when we must do
>= or <=, about as much work to compute, and lots of code calls BN_cmp
on secret data. This also, by extension, makes BN_cmp_word
constant-time.
BN_equal_consttime is probably a little more efficient and is perfectly
readable, so leave that one around.
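For example, a bound check on secret values now reads directly (an
illustrative sketch):

  #include <openssl/bn.h>

  /* With BN_cmp constant-time, a ">=" comparison on secret values can
   * be written as-is instead of bouncing through helper functions. */
  static int at_least(const BIGNUM *a, const BIGNUM *b) {
    return BN_cmp(a, b) >= 0;
  }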
Change-Id: Id2e07fe312f01cb6fd10a1306dcbf6397990cf13
Reviewed-on: https://boringssl-review.googlesource.com/25444
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Reviewed-by: Adam Langley <agl@google.com>
The loop and the outermost special cases are basically the same.
Change-Id: I5e3ca60ad9a04efa66b479eebf8c3637a11cdceb
Reviewed-on: https://boringssl-review.googlesource.com/25406
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Reviewed-by: Adam Langley <agl@google.com>
Same mistake as bn_mul_recursive.
Change-Id: I2374d37e5da61c82ccb1ad79da55597fa3f10640
Reviewed-on: https://boringssl-review.googlesource.com/25405
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Reviewed-by: Adam Langley <agl@google.com>
This follows similar lines as the previous cleanups and fixes the
documentation of the preconditions.
And with that, RSA private key operations, provided p and q have the
same bit length, should be constant time, as far as I know. (Though I'm
sure I've missed something.)
bn_cmp_part_words and bn_cmp_words are no longer used and deleted.
Bug: 234
Change-Id: Iceefa39f57e466c214794c69b335c4d2c81f5577
Reviewed-on: https://boringssl-review.googlesource.com/25404
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Reviewed-by: Adam Langley <agl@google.com>
The power-of-two computations here were extremely confusing, and one of
the comments mixed && and ||. Remove the cached k = j + j value.
Optimizing the j*8, j*8, j*2, and j*4 multiplications is the compiler's
job. If it doesn't manage it, it was only a couple shifts anyway.
With that fixed, it becomes easier to tell that rr was actually
allocated twice as large as necessary. I suspect rr is also
incorrectly allocated in the bn_mul_part_recursive case, but I'll wait
until I've checked that function over first. (The array size
documentation on the other bn_{mul,sqr}_recursive functions has had
mistakes before.)
Change-Id: I298400b988e3bd108d01d6a7c8a5b262ddf81feb
Reviewed-on: https://boringssl-review.googlesource.com/25364
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Reviewed-by: Adam Langley <agl@google.com>
I left the input length as int because the calling convention passes
these messy deltas around. This micro-optimization is almost certainly
pointless, but bn_sub_part_words is written in assembly, so I've left it
alone for now. The documented preconditions were also all completely
wrong, so I've fixed them. We actually only call them for even tighter
bounds (one of dna or dnb is 0 and the other is 0 or -1), at least
outside bn_mul_part_recursive, which I still need to read through.
This leaves bn_mul_part_recursive, which is reachable for RSA keys which
are not a power of two in bit width.
The first iteration of this had an uncaught bug, so I added a few more
aggressive tests generated with:
  A = 0x...
  B = 0x...
  # Chop off 0, 1 and > 1 word for both 32 and 64-bit.
  for i in (0, 1, 2, 4):
    for j in (0, 1, 2, 4):
      a = A >> (32*i)
      b = B >> (32*j)
      p = a * b
      print "Product = %x" % p
      print "A = %x" % a
      print "B = %x" % b
      print
Bug: 234
Change-Id: I72848d992637c0390cdd3c4f81cb919393b59eb8
Reviewed-on: https://boringssl-review.googlesource.com/25344
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Reviewed-by: Adam Langley <agl@google.com>
We still need BN_mul and, in particular, bn_mul_recursive, which will
require either that bn_abs_sub_words be generalized or that we add a
parallel bn_abs_sub_part_words, but start with the easy one.
While I'm here, simplify the i and j mess in here. It's patterned after
the multiplication one, but can be much simpler.
Bug: 234
Change-Id: If936099d53304f2512262a1cbffb6c28ae30ccee
Reviewed-on: https://boringssl-review.googlesource.com/25325
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Reviewed-by: Adam Langley <agl@google.com>
There is no more need for the "constant-time" reading beyond bn->top. We
can write the bytes out naively because RSA computations no longer call
bn_correct_top/bn_set_minimal_width.
Specifically, the final computation is a BN_mod_mul_montgomery to remove
the blinding, and that keeps the sizes correct.
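For context, the fixed-width write-out looks roughly like this (a
sketch; assuming BN_bn2bin_padded is the helper in question):

  #include <stddef.h>
  #include <stdint.h>
  #include <openssl/bn.h>

  /* Serialize the unblinded RSA result into a buffer of exactly the
   * modulus size. The widths are already correct, so no minimization or
   * special handling of words beyond bn->top is needed. */
  static int serialize_result(uint8_t *out, size_t modulus_len,
                              const BIGNUM *result) {
    return BN_bn2bin_padded(out, modulus_len, result);
  }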
Bug: 237
Change-Id: I6e90d81c323b644e179d899f411479ea16deab98
Reviewed-on: https://boringssl-review.googlesource.com/25324
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Reviewed-by: Adam Langley <agl@google.com>
Alas, the existence of RSA keys with q > p is obnoxious, but we can
canonicalize it away. To my knowledge, the remaining leaks in RSA are:
- Key generation. This is kind of hopelessly non-constant-time but
perhaps deserves a more careful ponder. Though hopefully it does not
come in at a measurable point for practical purposes.
- Private key serialization. RSAPrivateKey inherently leaks the
magnitudes of d, dmp1, dmq1, and iqmp. This is unavoidable but
hopefully does not come in at a measurable point for practical
purposes.
- If p and q have different word widths, we currently fall back to the
variable-time BN_mod rather than Montgomery reduction at the start of
CRT. I can think of ways to apply Montgomery reduction, but it's
probably better to deny CRT to such keys, if not reject them outright.
- bn_mul_fixed and bn_sqr_fixed, which affect Montgomery multiplication
in bn_mul_mont-less configurations, as well as the final CRT
multiplication. We should fix this.
Bug: 233
Change-Id: I8c2ecf8f8ec104e9f26299b66ac8cbb0cad04616
Reviewed-on: https://boringssl-review.googlesource.com/25263
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Reviewed-by: Adam Langley <agl@google.com>
This is to be used in constant-time RSA CRT.
Bug: 233
Change-Id: Ibade5792324dc6aba38cab6971d255d41fb5eb91
Reviewed-on: https://boringssl-review.googlesource.com/25286
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Reviewed-by: Adam Langley <agl@google.com>
Use the now constant-time modular arithmetic functions.
Bug: 236
Change-Id: I4567d67bfe62ca82ec295f2233d1a6c9b131e5d2
Reviewed-on: https://boringssl-review.googlesource.com/25285
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Reviewed-by: Adam Langley <agl@google.com>
As the EC code will ultimately want to use these in "words" form by way
of EC_FELEM, and because it's much easier, I've implemented these as
low-level words-based functions that require all inputs to have the same
width. The BIGNUM versions, which RSA and, for now, EC call, are
implemented on top of that.
Unfortunately, doing such things in constant time and accounting for
undersized inputs requires some scratch space, and these functions don't
take BN_CTX. So I've added internal bn_mod_*_quick_ctx functions that
take a BN_CTX and the old functions now allocate a bit unnecessarily.
RSA only needs lshift (for BN_MONT_CTX) and sub (for CRT), but the
generic EC code wants add as well.
The generic EC code isn't even remotely constant-time, and I hope to
ultimately use stack-allocated EC_FELEMs, so I've written the actual
implementations here in terms of "words", which is much simpler anyway
because they don't have to take care of widths.
I've also gone ahead and switched the EC code to these functions,
largely as a test of their performance (an earlier iteration made the EC
code noticeably slower). These operations are otherwise not
performance-critical in RSA.
The conversion from BIGNUM to BIGNUM+BN_CTX should be dropped by the
static linker already, and the unused BIGNUM+BN_CTX functions will fall
off when EC_FELEM happens.
Update-Note: BN_mod_*_quick bounce on malloc a bit now, but they're not
really used externally. The one caller I found was wpa_supplicant
which bounces on malloc already. They appear to be implementing
compressed coordinates by hand? We may be able to convince them to
call EC_POINT_set_compressed_coordinates_GFp.
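For reference, decompressing a point via the public API is just (an
illustrative sketch):

  #include <openssl/bn.h>
  #include <openssl/ec.h>

  /* Decompress (x, y_bit) into |out| using the library routine rather
   * than reimplementing the square-root step by hand. */
  static int decompress(const EC_GROUP *group, EC_POINT *out,
                        const BIGNUM *x, int y_bit) {
    BN_CTX *ctx = BN_CTX_new();
    if (ctx == NULL) {
      return 0;
    }
    int ok = EC_POINT_set_compressed_coordinates_GFp(group, out, x, y_bit, ctx);
    BN_CTX_free(ctx);
    return ok;
  }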
Bug: 233, 236
Change-Id: I2bf361e9c089e0211b97d95523dbc06f1168e12b
Reviewed-on: https://boringssl-review.googlesource.com/25261
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Reviewed-by: Adam Langley <agl@google.com>
d, dmp1, dmq1, and iqmp have private magnitudes. This is awkward because
the RSAPrivateKey serialization leaks the magnitudes. Do the best we can
and fix them up before any RSA operations.
This moves the piecemeal BN_MONT_CTX_set_locked into a common function
where we can do more complex canonicalization on the keys. Ideally this
would be done on key import, but the exposed struct (and OpenSSL 1.1.0's
bad API design) mean there is no single point in time when key import is
finished.
Also document the constraints on RSA_set0_* functions. (These
constraints aren't new. They just were never documented before.)
Update-Note: If someone tried to use an invalid RSA key where d >= n,
dmp1 >= p, dmq1 >= q, or iqmp >= p, this may break. Such keys would not
have passed RSA_check_key, but it's possible to manually assemble
keys that bypass it.
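The documented pattern for assembling a key with the setters is roughly
(an illustrative sketch; error handling abbreviated):

  #include <openssl/rsa.h>

  /* Each RSA_set0_* call takes ownership of its BIGNUMs on success. The
   * constraints being documented: d < n, dmp1 < p, dmq1 < q, iqmp < p. */
  static RSA *assemble_key(BIGNUM *n, BIGNUM *e, BIGNUM *d,
                           BIGNUM *p, BIGNUM *q,
                           BIGNUM *dmp1, BIGNUM *dmq1, BIGNUM *iqmp) {
    RSA *rsa = RSA_new();
    if (rsa == NULL ||
        !RSA_set0_key(rsa, n, e, d) ||
        !RSA_set0_factors(rsa, p, q) ||
        !RSA_set0_crt_params(rsa, dmp1, dmq1, iqmp)) {
      RSA_free(rsa);
      return NULL;
    }
    return rsa;
  }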
Bug: 232
Change-Id: I421f883128952f892ac0cde0d224873a625f37c5
Reviewed-on: https://boringssl-review.googlesource.com/25259
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Reviewed-by: Adam Langley <agl@google.com>
The fallback functions themselves still leak, but I've left TODOs there.
This only affects BN_mod_mul_montgomery on platforms where we don't use
the bn_mul_mont assembly, but BN_mul additionally affects the final
multiplication in RSA CRT.
Bug: 232
Change-Id: Ia1ae16162c38e10c056b76d6b2afbed67f1a5e16
Reviewed-on: https://boringssl-review.googlesource.com/25260
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Reviewed-by: Adam Langley <agl@google.com>
Functions that deserialize from bytes and Montgomery multiplication have
no reason to minimize their inputs.
Bug: 232
Change-Id: I121cc9b388033d684057b9df4ad0c08364849f58
Reviewed-on: https://boringssl-review.googlesource.com/25258
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Reviewed-by: Adam Langley <agl@google.com>