/* Copyright (c) 2015, Google Inc.
 *
 * Permission to use, copy, modify, and/or distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
 * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
 * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
 * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */

#include <openssl/ssl.h>

#include <assert.h>
#include <limits.h>
#include <stdlib.h>
#include <string.h>

#include <openssl/bio.h>
#include <openssl/err.h>
#include <openssl/mem.h>

#include "../crypto/internal.h"
#include "internal.h"


namespace bssl {

// BIO uses int instead of size_t. No lengths will exceed uint16_t, so this
// will not overflow.
static_assert(0xffff <= INT_MAX, "uint16_t does not fit in int");

static_assert((SSL3_ALIGN_PAYLOAD & (SSL3_ALIGN_PAYLOAD - 1)) == 0,
              "SSL3_ALIGN_PAYLOAD must be a power of 2");

// ensure_buffer ensures |buf| has capacity at least |cap|, aligned such that
// data written after |header_len| is aligned to a |SSL3_ALIGN_PAYLOAD|-byte
// boundary. It returns one on success and zero on error.
static int ensure_buffer(SSL3_BUFFER *buf, size_t header_len, size_t cap) {
  if (cap > 0xffff) {
    OPENSSL_PUT_ERROR(SSL, ERR_R_INTERNAL_ERROR);
    return 0;
  }

  if (buf->cap >= cap) {
    return 1;
  }

  // Add up to |SSL3_ALIGN_PAYLOAD| - 1 bytes of slack for alignment.
  //
  // Since this buffer gets allocated quite frequently and doesn't contain any
  // sensitive data, we allocate with malloc rather than |OPENSSL_malloc| and
  // avoid zeroing on free.
  uint8_t *new_buf = (uint8_t *)malloc(cap + SSL3_ALIGN_PAYLOAD - 1);
  if (new_buf == NULL) {
    OPENSSL_PUT_ERROR(SSL, ERR_R_MALLOC_FAILURE);
    return 0;
  }

  // Offset the buffer such that the record body is aligned.
  size_t new_offset =
      (0 - header_len - (uintptr_t)new_buf) & (SSL3_ALIGN_PAYLOAD - 1);
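  // A worked example of the arithmetic above, with hypothetical values and
  // assuming |SSL3_ALIGN_PAYLOAD| is 8: if |header_len| is 5 and malloc
  // returns a pointer ending in 0x08, then
  //   new_offset = (0 - 5 - 0x08) & 7 = 3,
  // so the record body at |new_buf| + 3 + 5 ends in 0x10, an 8-byte boundary.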

  if (buf->buf != NULL) {
    OPENSSL_memcpy(new_buf + new_offset, buf->buf + buf->offset, buf->len);
    free(buf->buf);  // Allocated with malloc().
  }

  buf->buf = new_buf;
  buf->offset = new_offset;
  buf->cap = cap;
  return 1;
}

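// A worked example of the bookkeeping below, with hypothetical numbers: for a
// buffer with offset 3, len 100, and cap 100, consume_buffer(buf, 5) leaves
// offset 8, len 95, and cap 95, so the unconsumed data still begins at
// |buf->buf + buf->offset|.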
static void consume_buffer(SSL3_BUFFER *buf, size_t len) {
  if (len > buf->len) {
    abort();
  }
  buf->offset += (uint16_t)len;
  buf->len -= (uint16_t)len;
  buf->cap -= (uint16_t)len;
}

static void clear_buffer(SSL3_BUFFER *buf) {
  free(buf->buf);  // Allocated with malloc().
  OPENSSL_memset(buf, 0, sizeof(SSL3_BUFFER));
}

uint8_t *ssl_read_buffer(SSL *ssl) {
  return ssl->s3->read_buffer.buf + ssl->s3->read_buffer.offset;
}

size_t ssl_read_buffer_len(const SSL *ssl) {
  return ssl->s3->read_buffer.len;
}

static int dtls_read_buffer_next_packet(SSL *ssl) {
  SSL3_BUFFER *buf = &ssl->s3->read_buffer;

  if (buf->len > 0) {
    // It is an error to call |dtls_read_buffer_extend| when the read buffer
    // is not empty.
    OPENSSL_PUT_ERROR(SSL, ERR_R_INTERNAL_ERROR);
    return -1;
  }

  // Read a single packet from |ssl->rbio|. |buf->cap| must fit in an int.
  int ret = BIO_read(ssl->rbio, buf->buf + buf->offset, (int)buf->cap);
  if (ret <= 0) {
    ssl->rwstate = SSL_READING;
    return ret;
  }
  // |BIO_read| was bound by |buf->cap|, so this cannot overflow.
  buf->len = (uint16_t)ret;
  return 1;
}

static int tls_read_buffer_extend_to(SSL *ssl, size_t len) {
  SSL3_BUFFER *buf = &ssl->s3->read_buffer;

  if (len > buf->cap) {
    OPENSSL_PUT_ERROR(SSL, SSL_R_BUFFER_TOO_SMALL);
    return -1;
  }

  // Read until the target length is reached.
  while (buf->len < len) {
    // The amount of data to read is bounded by |buf->cap|, which must fit in
    // an int.
    int ret = BIO_read(ssl->rbio, buf->buf + buf->offset + buf->len,
                       (int)(len - buf->len));
    if (ret <= 0) {
      ssl->rwstate = SSL_READING;
      return ret;
    }
    // |BIO_read| was bound by |buf->cap - buf->len|, so this cannot
    // overflow.
    buf->len += (uint16_t)ret;
  }

  return 1;
}

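// A sketch of how a caller might drive the read buffer (hypothetical code;
// the real record-layer logic lives elsewhere): extend to the record header,
// parse the body length out of |ssl_read_buffer(ssl)|, extend again to cover
// the body, and consume the record once it has been processed.
//
//   if (ssl_read_buffer_extend_to(ssl, SSL3_RT_HEADER_LENGTH) <= 0) {
//     return -1;  // Retry when the transport has more data.
//   }
//   size_t body_len = /* parsed from ssl_read_buffer(ssl) */;
//   size_t total = SSL3_RT_HEADER_LENGTH + body_len;
//   if (ssl_read_buffer_extend_to(ssl, total) <= 0) {
//     return -1;
//   }
//   ssl_read_buffer_consume(ssl, total);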
int ssl_read_buffer_extend_to(SSL *ssl, size_t len) {
  // |ssl_read_buffer_extend_to| implicitly discards any consumed data.
  ssl_read_buffer_discard(ssl);

  if (SSL_is_dtls(ssl)) {
    static_assert(
        DTLS1_RT_HEADER_LENGTH + SSL3_RT_MAX_ENCRYPTED_LENGTH <= 0xffff,
        "DTLS read buffer is too large");

    // The |len| parameter is ignored in DTLS.
    len = DTLS1_RT_HEADER_LENGTH + SSL3_RT_MAX_ENCRYPTED_LENGTH;
  }

  if (!ensure_buffer(&ssl->s3->read_buffer, ssl_record_prefix_len(ssl), len)) {
    return -1;
  }

  if (ssl->rbio == NULL) {
    OPENSSL_PUT_ERROR(SSL, SSL_R_BIO_NOT_SET);
    return -1;
  }

  int ret;
  if (SSL_is_dtls(ssl)) {
    // |len| is ignored for a datagram transport.
    ret = dtls_read_buffer_next_packet(ssl);
  } else {
    ret = tls_read_buffer_extend_to(ssl, len);
  }

  if (ret <= 0) {
    // If the buffer was empty originally and remained empty after attempting
    // to extend it, release the buffer until the next attempt.
    ssl_read_buffer_discard(ssl);
  }
  return ret;
}

void ssl_read_buffer_consume(SSL *ssl, size_t len) {
  SSL3_BUFFER *buf = &ssl->s3->read_buffer;

  consume_buffer(buf, len);

  // The TLS stack never reads beyond the current record, so there will never
  // be unconsumed data. If read-ahead is ever reimplemented,
  // |ssl_read_buffer_discard| will require a |memcpy| to shift the excess
  // back to the front of the buffer, to ensure there is enough space for the
  // next record.
  assert(SSL_is_dtls(ssl) || len == 0 || buf->len == 0);
}

void ssl_read_buffer_discard(SSL *ssl) {
  if (ssl->s3->read_buffer.len == 0) {
    ssl_read_buffer_clear(ssl);
  }
}

void ssl_read_buffer_clear(SSL *ssl) {
  clear_buffer(&ssl->s3->read_buffer);
}


int ssl_write_buffer_is_pending(const SSL *ssl) {
  return ssl->s3->write_buffer.len > 0;
}

static_assert(SSL3_RT_HEADER_LENGTH * 2 +
                      SSL3_RT_SEND_MAX_ENCRYPTED_OVERHEAD * 2 +
                      SSL3_RT_MAX_PLAIN_LENGTH <=
                  0xffff,
              "maximum TLS write buffer is too large");

static_assert(DTLS1_RT_HEADER_LENGTH + SSL3_RT_SEND_MAX_ENCRYPTED_OVERHEAD +
                      SSL3_RT_MAX_PLAIN_LENGTH <=
                  0xffff,
              "maximum DTLS write buffer is too large");

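// A sketch of the expected write sequence, with hypothetical caller code (the
// real sealing logic lives in the record layer):
//
//   uint8_t *out;
//   if (!ssl_write_buffer_init(ssl, &out, max_out)) {
//     return -1;
//   }
//   /* Seal a record into |out|, producing |record_len| bytes. */
//   ssl_write_buffer_set_len(ssl, record_len);
//   if (ssl_write_buffer_flush(ssl) <= 0) {
//     return -1;  // Retry the flush when the transport is writable.
//   }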
int ssl_write_buffer_init(SSL *ssl, uint8_t **out_ptr, size_t max_len) {
  SSL3_BUFFER *buf = &ssl->s3->write_buffer;

  if (buf->buf != NULL) {
    OPENSSL_PUT_ERROR(SSL, ERR_R_INTERNAL_ERROR);
    return 0;
  }

  if (!ensure_buffer(buf, ssl_seal_align_prefix_len(ssl), max_len)) {
    return 0;
  }
  *out_ptr = buf->buf + buf->offset;
  return 1;
}

void ssl_write_buffer_set_len(SSL *ssl, size_t len) {
  SSL3_BUFFER *buf = &ssl->s3->write_buffer;

  if (len > buf->cap) {
    abort();
  }
  buf->len = len;
}

static int tls_write_buffer_flush(SSL *ssl) {
  SSL3_BUFFER *buf = &ssl->s3->write_buffer;

  while (buf->len > 0) {
    int ret = BIO_write(ssl->wbio, buf->buf + buf->offset, buf->len);
    if (ret <= 0) {
      ssl->rwstate = SSL_WRITING;
      return ret;
    }
    consume_buffer(buf, (size_t)ret);
  }
  ssl_write_buffer_clear(ssl);
  return 1;
}

static int dtls_write_buffer_flush(SSL *ssl) {
  SSL3_BUFFER *buf = &ssl->s3->write_buffer;
  if (buf->len == 0) {
    return 1;
  }

  int ret = BIO_write(ssl->wbio, buf->buf + buf->offset, buf->len);
  if (ret <= 0) {
    ssl->rwstate = SSL_WRITING;
    // If the write failed, drop the write buffer anyway. Datagram transports
    // can't write half a packet, so the caller is expected to retry from the
    // top.
    ssl_write_buffer_clear(ssl);
    return ret;
  }
  ssl_write_buffer_clear(ssl);
  return 1;
}

int ssl_write_buffer_flush(SSL *ssl) {
  if (ssl->wbio == NULL) {
    OPENSSL_PUT_ERROR(SSL, SSL_R_BIO_NOT_SET);
    return -1;
  }

  if (SSL_is_dtls(ssl)) {
    return dtls_write_buffer_flush(ssl);
  } else {
    return tls_write_buffer_flush(ssl);
  }
}

void ssl_write_buffer_clear(SSL *ssl) {
  clear_buffer(&ssl->s3->write_buffer);
}
} // namespace bssl