896332581e
Even without strict-aliasing, C does not allow casting pointers to types that don't match their alignment. After this change, UBSan is happy with our code at default settings but for the negative left shift language bug.

Note: architectures without unaligned loads do not generate the same code for memcpy and pointer casts. But even ARMv6 can perform unaligned loads and stores (ARMv5 couldn't), so we should be okay here.

Before:

Did 11086000 AES-128-GCM (16 bytes) seal operations in 5000391us (2217026.6 ops/sec): 35.5 MB/s
Did 370000 AES-128-GCM (1350 bytes) seal operations in 5005208us (73923.0 ops/sec): 99.8 MB/s
Did 63000 AES-128-GCM (8192 bytes) seal operations in 5029958us (12525.0 ops/sec): 102.6 MB/s
Did 9894000 AES-256-GCM (16 bytes) seal operations in 5000017us (1978793.3 ops/sec): 31.7 MB/s
Did 316000 AES-256-GCM (1350 bytes) seal operations in 5005564us (63129.7 ops/sec): 85.2 MB/s
Did 54000 AES-256-GCM (8192 bytes) seal operations in 5054156us (10684.3 ops/sec): 87.5 MB/s

After:

Did 11026000 AES-128-GCM (16 bytes) seal operations in 5000197us (2205113.1 ops/sec): 35.3 MB/s
Did 370000 AES-128-GCM (1350 bytes) seal operations in 5005781us (73914.5 ops/sec): 99.8 MB/s
Did 63000 AES-128-GCM (8192 bytes) seal operations in 5032695us (12518.1 ops/sec): 102.5 MB/s
Did 9831750 AES-256-GCM (16 bytes) seal operations in 5000010us (1966346.1 ops/sec): 31.5 MB/s
Did 316000 AES-256-GCM (1350 bytes) seal operations in 5005702us (63128.0 ops/sec): 85.2 MB/s
Did 54000 AES-256-GCM (8192 bytes) seal operations in 5053642us (10685.4 ops/sec): 87.5 MB/s

(Tested with the no-asm builds; most of this code isn't reachable otherwise.)

Change-Id: I025c365d26491abed0116b0de3b7612159e52297
Reviewed-on: https://boringssl-review.googlesource.com/22804
Reviewed-by: Adam Langley <agl@google.com>
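As a rough illustration of the pattern this change describes, the sketch below replaces an alignment-violating pointer-cast load with a memcpy-based load. The `load_u32` and `store_u32` helper names are hypothetical, chosen for this example rather than taken from the tree; on targets with unaligned load support, compilers typically lower the memcpy to the same single load instruction.

```c
/* Minimal sketch: memcpy-based loads/stores instead of pointer casts.
 * Helper names are illustrative, not the ones used in BoringSSL. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Undefined behavior when |in| is not suitably aligned:
 *   uint32_t v = *(const uint32_t *)in;
 * The memcpy form is well-defined for any alignment. */
static uint32_t load_u32(const uint8_t *in) {
  uint32_t v;
  memcpy(&v, in, sizeof(v));
  return v;
}

static void store_u32(uint8_t *out, uint32_t v) {
  memcpy(out, &v, sizeof(v));
}

int main(void) {
  uint8_t buf[8] = {0, 1, 2, 3, 4, 5, 6, 7};
  /* buf + 1 is almost certainly misaligned for uint32_t; still fine. */
  uint32_t v = load_u32(buf + 1);
  store_u32(buf + 1, v);
  printf("0x%08x\n", (unsigned)v);
  return 0;
}
```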
Files changed:

asm
cbc.c
cfb.c
ctr.c
gcm_test.cc
gcm_tests.txt
gcm.c
internal.h
ofb.c
polyval.c