Reduce the alignment tag on aead_aes_gcm_siv_asm_ctx.

This tag doesn't actually do anything, except cause UBSan to point out
that malloc doesn't align that tightly. malloc does, however, appear to
align to 16 bytes, which is the actual alignment requirement of that
code. So just replace 64 with 16.

When we're juggling fewer things, it'd be nice to see what toolchain
support for the various aligned allocators looks like. Or maybe someday
we can use C++ new which one hopes is smart enough to deal with all
this.

Change-Id: Idbdde66852d5dad25a044d4c68ffa3b3f213025a
Reviewed-on: https://boringssl-review.googlesource.com/17706
Commit-Queue: David Benjamin <davidben@google.com>
Commit-Queue: Adam Langley <agl@google.com>
Reviewed-by: Adam Langley <agl@google.com>
CQ-Verified: CQ bot account: commit-bot@chromium.org <commit-bot@chromium.org>
David Benjamin 2017-07-10 16:39:36 -04:00 committed by CQ bot account: commit-bot@chromium.org
parent 08fea48a91
commit b0651775c2


@ -13,6 +13,9 @@
* CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */
#include <openssl/aead.h>
#include <assert.h>
#include <openssl/cipher.h>
#include <openssl/cpu.h>
#include <openssl/crypto.h>
@@ -29,7 +32,7 @@
 /* Optimised AES-GCM-SIV */
 struct aead_aes_gcm_siv_asm_ctx {
-  alignas(64) uint8_t key[16*15];
+  alignas(16) uint8_t key[16*15];
   int is_128_bit;
 };
@@ -67,6 +70,9 @@ static int aead_aes_gcm_siv_asm_init(EVP_AEAD_CTX *ctx, const uint8_t *key,
     return 0;
   }
+  /* malloc should return a 16-byte-aligned address. */
+  assert((((uintptr_t)gcm_siv_ctx) & 15) == 0);
   if (key_bits == 128) {
     aes128gcmsiv_aes_ks(key, &gcm_siv_ctx->key[0]);
     gcm_siv_ctx->is_128_bit = 1;