This is an initial cut at aarch64 support. I have only qemu to test it
for now; hopefully hardware will be coming soon.
This also affects 32-bit ARM in that aarch64 chips can run 32-bit code
and we would like to be able to take advantage of the crypto operations
even in 32-bit mode. AES and GHASH should Just Work in this case: the
-armx.pl files can be built for either 32- or 64-bit mode based on the
flavour argument given to the Perl script.
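For example (a sketch; the exact paths, flavour strings, and output file
names are assumed, not taken from this change), the same script emits
32- or 64-bit assembly depending on the flavour argument:
  $ perl crypto/aes/asm/aesv8-armx.pl linux32 aesv8-armx-32.S
  $ perl crypto/aes/asm/aesv8-armx.pl linux64 aesv8-armx-64.S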
SHA-1 and SHA-256 don't work like this, however, because they've never
had support for multiple implementations, so BoringSSL built for 32-bit
won't use the SHA instructions on an aarch64 chip.
No dedicated ChaCha20 or Poly1305 support yet.
Change-Id: Ib275bc4894a365c8ec7c42f4e91af6dba3bd686c
Reviewed-on: https://boringssl-review.googlesource.com/2801
Reviewed-by: Adam Langley <agl@google.com>
Upstream (impressively quickly) fixed the missing intrinsic. Switch Windows
clang back to building the same code as MSVC. Also include the intrin.h header
rather than forward-declare the intrinsic. clang only works if the header is
explicitly included. Chromium forcibly includes it to work around these kinds
of issues, but we shouldn't rely on it.
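As a sketch of the pattern (the wrapper function is illustrative, not
from this change), the point is to include the header rather than
declare the intrinsic by hand:
  #include <stdint.h>
  #include <intrin.h>  /* clang only resolves _umul128 when the header
                          is included; a forward declaration is not enough */

  static uint64_t mul_hi(uint64_t a, uint64_t b) {
    uint64_t hi;
    _umul128(a, b, &hi);  /* returns the low 64 bits, writes the high 64 */
    return hi;
  }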
BUG=crbug.com/438382
Change-Id: I0ff6d48e1a3aa455cff99f8dc4c407e88b84d446
Reviewed-on: https://boringssl-review.googlesource.com/2461
Reviewed-by: Adam Langley <agl@google.com>
Windows clang lacks _umul128, but it has inline assembly, so just use
that.
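A minimal sketch of that approach, assuming x86-64 and GCC-style inline
assembly (which clang accepts); the helper name is illustrative:
  #include <stdint.h>

  static inline uint64_t umul128(uint64_t a, uint64_t b, uint64_t *out_hi) {
    uint64_t lo;
    /* mulq computes rdx:rax = rax * operand */
    __asm__("mulq %3" : "=a"(lo), "=d"(*out_hi) : "a"(a), "rm"(b) : "cc");
    return lo;
  }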
Change-Id: I6ff5d2465edc703a4d47ef0efbcea43d6fcc79fa
Reviewed-on: https://boringssl-review.googlesource.com/2454
Reviewed-by: Adam Langley <agl@google.com>
It's never used, upstream or downstream. The 64-bit value is wrong anyway for
LLP64 platforms.
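(Purely as an illustration of the LLP64 point: on such platforms, e.g.
64-bit Windows, long stays 32 bits, so a 64-bit constant stored in an
unsigned long silently truncates.)
  unsigned long mask = 0xffffffffffffffffUL;  /* only 0xffffffff on LLP64 */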
Change-Id: I56afc51f4c17ed3f1c30959b574034f181b5b0c7
Reviewed-on: https://boringssl-review.googlesource.com/2123
Reviewed-by: Adam Langley <agl@google.com>
Android uses these for some conversions from Java formats. The code is
sufficiently bespoke that putting the conversion functions into
BoringSSL doesn't make a lot of sense, but the alternative is to expose
these functions.
Change-Id: If1362bc4a5c44cba4023c909e2ba6488ae019ddb
Apart from the obvious little issues, this also works around a
(seeming) libtool/linker bug:
a.c defines a symbol:
  int kFoo;
b.c uses it:
  extern int kFoo;
  int f() {
    return kFoo;
  }
compile them:
  $ gcc -c a.c
  $ gcc -c b.c
and create a dummy main in order to run it, main.c:
  int f();
  int main() {
    return f();
  }
this works as expected:
  $ gcc main.c a.o b.o
but, if we make an archive:
  $ ar q lib.a a.o b.o
and use that:
  $ gcc main.c lib.a
  Undefined symbols for architecture x86_64:
    "_kFoo", referenced from:
        _f in lib.a(b.o)
(It doesn't matter what order the .o files are put into the .a)
Linux and Windows don't seem to have this problem.
nm on a.o shows that the symbol is of type "C", which is a "common symbol"[1].
Basically the linker will merge multiple common symbol definitions together.
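For example (the value column is approximate):
  $ nm a.o
  0000000000000004 C _kFoo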
If one makes a.c read:
  int kFoo = 0;
then one gets a type "D" symbol (a "data section symbol") and everything
works just fine.
This might actually be a libtool bug rather than an ld bug: looking at
`xxd lib.a | less`, the __.SYMDEF SORTED index at the beginning of the
archive doesn't contain an entry for kFoo unless it is initialised.
Change-Id: I4cdad9ba46e9919221c3cbd79637508959359427
Initial fork from f2d678e6e89b6508147086610e985d4e8416e867 (1.0.2 beta).
(This change contains substantial changes from the original and
effectively starts a new history.)