BN_bin2bn takes a size_t, as it should, but it passes that into bn_wexpand,
which takes unsigned. Switch bn_wexpand and bn_expand to take size_t, so the
value is still a size_t when they check bounds against INT_MAX.
BIGNUM itself still uses int everywhere, and we may want to audit all the
arithmetic at some point. I suspect, though, that having bn_expand require
that the number of bits fit in an int is sufficient to make everything happy,
unless we're doing interesting arithmetic on the number of bits somewhere.
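As a rough sketch of the shape of that check (hypothetical names, not the
actual BoringSSL code; assumes 64-bit BIGNUM words):

  #include <limits.h>
  #include <stddef.h>

  #define SKETCH_BN_BITS2 64  /* assumption: bits per BIGNUM word */

  int bn_wexpand_sketch(size_t words) {
    /* Reject any request whose bit count would not fit in the int
     * fields that BIGNUM still uses internally. */
    if (words > INT_MAX / SKETCH_BN_BITS2) {
      return 0;
    }
    /* ... reallocate the word array to hold 'words' words ... */
    return 1;
  }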
Change-Id: Id191a4a095adb7c938cde6f5a28bee56644720c6
Reviewed-on: https://boringssl-review.googlesource.com/5680
Reviewed-by: Adam Langley <agl@google.com>
inttypes.h kindly requires a feature macro in C++ on some platforms, due
to a bizarre footnote in C99 (see footnote 191 in section 7.8.1). As
bn.h is a public header, we must leak this wart to the consumer. On
platforms with unfriendly inttypes.h headers, using BN_DEC_FMT1 and
friends now requires that the feature macro be defined externally.
This broke the Chromium Android Clang builder:
http://build.chromium.org/p/chromium.linux/builders/Android%20Clang%20Builder%20%28dbg%29/builds/59288
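For consumers on such platforms, a minimal sketch of the workaround in a C++
translation unit looks like this (the layout is illustrative; BN_ULONG and
BN_DEC_FMT1 come from bn.h, which after this change defines the latter in
terms of the PRI* macros as a complete printf specifier):

  /* Must be defined before the first include of <inttypes.h>, whether
   * direct or via <openssl/bn.h>. */
  #define __STDC_FORMAT_MACROS

  #include <stdio.h>
  #include <openssl/bn.h>

  void print_word(BN_ULONG word) {
    printf(BN_DEC_FMT1 "\n", word);
  }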
Change-Id: I88275a6788c7babd0eae32cae86f115bfa93a591
Reviewed-on: https://boringssl-review.googlesource.com/4688
Reviewed-by: Adam Langley <agl@google.com>
It seems Android's inttypes.h refuses to define those macros in C++ unless
__STDC_FORMAT_MACROS is set. This unbreaks the roll on Android.
Change-Id: Iad6c971b4789f0302534d9e5022534c6124e0ff0
Reviewed-on: https://boringssl-review.googlesource.com/4202
Reviewed-by: Adam Langley <agl@google.com>
Win64 fires significantly more warnings than Win32. Also some recent
changes made it grumpy.
(We might want to reconsider enabling all of MSVC's warnings. Given the sorts
of warnings some of these are, I'm not sure MSVC's version of -Wall -Werror is
actually tenable. Plus, diverging from the Chromium build, especially before
the bots are ready, is going to break pretty readily.)
Change-Id: If3b8feccf910ceab4a233b0731e7624d7da46f87
Reviewed-on: https://boringssl-review.googlesource.com/3420
Reviewed-by: Adam Langley <agl@google.com>
This is an initial cut at aarch64 support. I have only qemu to test it,
however; hopefully hardware will be coming soon.
This also affects 32-bit ARM in that aarch64 chips can run 32-bit code
and we would like to be able to take advantage of the crypto operations
even in 32-bit mode. AES and GHASH should Just Work in this case: the
-armx.pl files can be built for either 32- or 64-bit mode based on the
flavour argument given to the Perl script.
SHA-1 and SHA-256 don't work like this, however, because they've never had
support for multiple implementations, so BoringSSL built for 32-bit won't use
the SHA instructions on an aarch64 chip.
No dedicated ChaCha20 or Poly1305 support yet.
Change-Id: Ib275bc4894a365c8ec7c42f4e91af6dba3bd686c
Reviewed-on: https://boringssl-review.googlesource.com/2801
Reviewed-by: Adam Langley <agl@google.com>
Upstream (impressively quickly) fixed the missing intrinsic. Switch Windows
clang back to building the same code as MSVC. Also include the intrin.h header
rather than forward-declare the intrinsic. clang only works if the header is
explicitly included. Chromium forcibly includes it to work around these kinds
of issues, but we shouldn't rely on it.
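For reference, a hedged sketch of the intended usage (the wrapper name is
illustrative; _umul128 is the MSVC intrinsic declared in <intrin.h>):

  #include <intrin.h>
  #include <stdint.h>

  /* Returns the low 64 bits of a*b; the high 64 bits come back
   * through *hi. */
  static uint64_t mul64x64_sketch(uint64_t a, uint64_t b, uint64_t *hi) {
    return _umul128(a, b, hi);
  }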
BUG=crbug.com/438382
Change-Id: I0ff6d48e1a3aa455cff99f8dc4c407e88b84d446
Reviewed-on: https://boringssl-review.googlesource.com/2461
Reviewed-by: Adam Langley <agl@google.com>
Windows clang lacks _umul128, but it has inline assembly, so just use
that.
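A hedged sketch of that fallback on x86-64 (illustrative names, not
necessarily the exact code):

  #include <stdint.h>

  /* MUL leaves the 128-bit product of RAX and the operand in RDX:RAX. */
  static uint64_t umul128_sketch(uint64_t a, uint64_t b, uint64_t *hi) {
    uint64_t lo, h;
    __asm__("mulq %3" : "=a"(lo), "=d"(h) : "a"(a), "rm"(b) : "cc");
    *hi = h;
    return lo;
  }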
Change-Id: I6ff5d2465edc703a4d47ef0efbcea43d6fcc79fa
Reviewed-on: https://boringssl-review.googlesource.com/2454
Reviewed-by: Adam Langley <agl@google.com>
It's never used, upstream or downstream. The 64-bit value is wrong anyway for
LLP64 platforms.
Change-Id: I56afc51f4c17ed3f1c30959b574034f181b5b0c7
Reviewed-on: https://boringssl-review.googlesource.com/2123
Reviewed-by: Adam Langley <agl@google.com>
Android uses these for some conversions from Java formats. The code is
sufficiently bespoke that putting the conversion functions into
BoringSSL doesn't make a lot of sense, but the alternative is to expose
these ones.
Change-Id: If1362bc4a5c44cba4023c909e2ba6488ae019ddb
Apart from the obvious little issues, this also works around a
(seeming) libtool/linker bug:
a.c defines a symbol:
  int kFoo;
b.c uses it:
  extern int kFoo;
  int f() {
    return kFoo;
  }
compile them:
  $ gcc -c a.c
  $ gcc -c b.c
and create a dummy main in order to run it, main.c:
  int f();
  int main() {
    return f();
  }
this works as expected:
  $ gcc main.c a.o b.o
but, if we make an archive:
  $ ar q lib.a a.o b.o
and use that:
  $ gcc main.c lib.a
  Undefined symbols for architecture x86_64
    "_kFoo", referenced from:
      _f in lib.a(b.o)
(It doesn't matter what order the .o files are put into the .a)
Linux and Windows don't seem to have this problem.
nm on a.o shows that the symbol is of type "C", which is a "common symbol"[1].
Basically the linker will merge multiple common symbol definitions together.
If one makes a.c read:
  int kFoo = 0;
then one gets a type "D" symbol - a "data section symbol" - and everything
works just fine.
This might actually be a libtool bug instead of an ld bug: Looking at `xxd
lib.a | less`, the __.SYMDEF SORTED index at the beginning of the archive
doesn't contain an entry for kFoo unless initialised.
Change-Id: I4cdad9ba46e9919221c3cbd79637508959359427
Initial fork from f2d678e6e89b6508147086610e985d4e8416e867 (1.0.2 beta).
(This change contains substantial changes from the original and
effectively starts a new history.)