This one is a little thorny. All the various block cipher mode
functions and callbacks take a void *key, which allows them to be used
with multiple kinds of block ciphers.
However, the implementations of those callbacks are the normal typed
functions, like AES_encrypt, which take an AES_KEY *key. At the ABI
level this is perfectly fine, but C considers calling a function
through a pointer of an incompatible type undefined behavior.
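Concretely, the mismatch looks like this (signatures paraphrased, not
copied verbatim from the tree):

  typedef void (*block128_f)(const uint8_t in[16], uint8_t out[16],
                             const void *key);

  void AES_encrypt(const uint8_t *in, uint8_t *out, const AES_KEY *key);

  // The cast compiles, and every ABI we target passes the arguments
  // identically, but calling through the cast pointer is undefined
  // behavior because the parameter types are incompatible.
  block128_f block = (block128_f)AES_encrypt;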
If we wish to preserve this genericity, we could either instantiate
multiple versions of these mode functions or create wrappers around
AES_encrypt, etc., that take a void *key.
The former means more code and is tedious without C++ templates (maybe
someday...). The latter would be easy for a compiler to optimize out,
except that C mistakenly allows comparing function pointers for
equality, which means a compiler cannot replace a pointer to a wrapper
function with the real thing. (That said, the performance-sensitive
bits already act in chunks, e.g. ctr128_f, so the function call
overhead shouldn't matter.)
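For reference, the wrapper option would have looked something like
this (a sketch; aes_block_wrapper is a hypothetical name):

  static void aes_block_wrapper(const uint8_t in[16], uint8_t out[16],
                                const void *key) {
    // Converting the data pointer is fine; only the function-pointer
    // cast was the problem.
    AES_encrypt(in, out, (const AES_KEY *)key);
  }

The compiler can turn the inner call into a direct jump, but a pointer
to aes_block_wrapper must remain distinct from AES_encrypt itself in
case someone compares them.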
But our only 128-bit block cipher is AES anyway, so I just switched
things to use AES_KEY throughout. AES is doing fine, and hopefully we
would have the sense not to pair a hypothetical future block cipher with
so many modes!
Change-Id: Ied3e843f0e3042a439f09e655b29847ade9d4c7d
Reviewed-on: https://boringssl-review.googlesource.com/32107
Reviewed-by: Adam Langley <agl@google.com>
Rather than having plain-C functions, asm functions, and accelerated
functions, just have accelerated and non-accelerated functions, where
the latter may be provided either by assembly or by C code.
Pertinently, this allows AArch64 to use hardware acceleration for the
basic |AES_*| functions.
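The resulting shape is roughly the following sketch; the helper names
(hwaes_capable, aes_hw_encrypt, aes_nohw_encrypt) are illustrative and
may not match the tree exactly:

  void AES_encrypt(const uint8_t *in, uint8_t *out,
                   const AES_KEY *key) {
    if (hwaes_capable()) {
      aes_hw_encrypt(in, out, key);  // AES-NI, ARMv8 crypto, etc.
    } else {
      aes_nohw_encrypt(in, out, key);  // assembly or C fallback
    }
  }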
Change-Id: I0003c0c7a43d85a3eee8c8f37697f61a3070dd40
Reviewed-on: https://boringssl-review.googlesource.com/28385
Reviewed-by: David Benjamin <davidben@google.com>
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
This was reverted a second time because it ended up always setting the
final argument to CRYPTO_gcm128_init to zero, which disabled some
acceleration of GCM on ≥Haswell. With this update, that argument will be
set to 1 if |aes_hw_*| functions are being used.
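Sketched out, the init path becomes something like the following; the
final parameter's name is an assumption based on the description
above:

  int block_is_hwaes = hwaes_capable();
  block128_f block = block_is_hwaes ? aes_hw_encrypt : aes_nohw_encrypt;
  // Previously this last argument was always 0, which kept GCM off
  // the stitched AES-NI path on >=Haswell.
  CRYPTO_gcm128_init(&gcm, &key, block, block_is_hwaes);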
Probably this will need to be reverted too for some reason. I'm hoping
to fill the entire git short description with “Revert”.
Change-Id: Ib4a06f937d35d95affdc0b63f29f01c4a8c47d03
Reviewed-on: https://boringssl-review.googlesource.com/28484
Reviewed-by: Adam Langley <agl@google.com>
Commit-Queue: Adam Langley <agl@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
Broke AArch64 on the main builders (but not the trybots, somehow).
Change-Id: I53eb09c99ef42a59628b0506b5ddb125299b554a
Reviewed-on: https://boringssl-review.googlesource.com/28364
Reviewed-by: Adam Langley <agl@google.com>
This also happens to make the AES_[en|de]crypt functions use AES-NI
(where available) on Intel.
Update-Note: this substantially changes how AES-NI is triggered. Worth
running bssl speed (on both k8 and ppc), before and after, to confirm
that there are no regressions.
Change-Id: I5f22c1975236bbc1633c24ab60d683bca8ddd4c3
Reviewed-on: https://boringssl-review.googlesource.com/28026
Reviewed-by: David Benjamin <davidben@google.com>
crypto/{asn1,x509,x509v3,pem} were skipped as they are still OpenSSL
style.
Change-Id: I3cd9a60e1cb483a981aca325041f3fbce294247c
Reviewed-on: https://boringssl-review.googlesource.com/19504
Reviewed-by: Adam Langley <agl@google.com>
Commit-Queue: David Benjamin <davidben@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
The changes to delocate.go are needed because modes/ does things like
return the address of a module function. Such references need to be
changed from going through the GOT to using local symbols.
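For example, modes/ contains code shaped like this sketch (aes_ctr is
a stand-in name); in position-independent code the compiler
materializes that address via the GOT unless delocate rewrites it:

  static ctr128_f aes_ctr(void) {
    // Taking the address of a module function.
    return aesni_ctr32_encrypt_blocks;
  }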
Rather than testing whether |ghash| is |gcm_ghash_avx|, we can just keep
that information in a flag.
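Roughly, with the field names here being assumptions:

  // Before: dispatch by comparing function pointers at call time.
  if (ctx->ghash == gcm_ghash_avx) {
    // use the stitched AES-NI/AVX code
  }

  // After: record the choice once at init time and test a flag.
  if (ctx->use_avx_gcm) {
    // use the stitched AES-NI/AVX code
  }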
The test for |aesni_ctr32_encrypt_blocks| is more problematic, but I
believe it's superfluous and can be dropped: if you passed in a stream
function that was semantically different from
|aesni_ctr32_encrypt_blocks|, you would already have a bug, because
|CRYPTO_gcm128_[en|de]crypt_ctr32| handle a block at the end
themselves and assume a big-endian, 32-bit counter anyway.
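For reference, here is the contract any ctr128_f must already meet
(the typedef is paraphrased; the key parameter was a void * at the
time):

  // Encrypts |blocks| full 16-byte blocks from |in| to |out|, using
  // the last four bytes of |ivec| as a big-endian, 32-bit counter,
  // incremented once per block. CRYPTO_gcm128_[en|de]crypt_ctr32
  // handle any final partial block themselves.
  typedef void (*ctr128_f)(const uint8_t *in, uint8_t *out,
                           size_t blocks, const void *key,
                           const uint8_t ivec[16]);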
Change-Id: I68a84ebdab6c6006e11e9467e3362d7585461385
Reviewed-on: https://boringssl-review.googlesource.com/15064
Reviewed-by: Adam Langley <agl@google.com>